Search Results

Search found 6869 results on 275 pages for 'tek systems'.

Page 32/275 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • Architecting persistence (and other internal systems). Interfaces, composition, pure inheritance or centralization?

    - by Vandell
    Suppose you need to implement persistence. I think you're generally limited to four options (correct me if I'm wrong, please). Each persistent class: (1) implements an interface (IPersistent); (2) contains a 'persist-me' object, a specialized object (or class) made only to be used by the class that contains it; (3) inherits from Persistent (a base class); or (4) you create a gigantic class (or package) called Database and put all your persistence logic there. What are the advantages and problems of each one? In a small (5 kloc) and algorithmically (or organisationally) simple app, which is probably the best option?
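
    The four options are easiest to compare side by side. Below is a minimal Python sketch of each, using made-up names (IPersistent, UserMapper, Database) that mirror the question; it is an illustration of the shapes involved, not a recommendation.

      # Illustrative sketch only: four ways to wire persistence, using invented
      # names that mirror the question. The "SQL" is just printed for clarity.
      from abc import ABC, abstractmethod

      # Option 1: every persistent class implements an interface.
      class IPersistent(ABC):
          @abstractmethod
          def save(self) -> None: ...

      class User(IPersistent):
          def __init__(self, name: str) -> None:
              self.name = name

          def save(self) -> None:
              print(f"INSERT INTO users (name) VALUES ('{self.name}')")

      # Option 2: composition; the class owns a small, dedicated persister.
      class UserMapper:
          def save(self, user: "PlainUser") -> None:
              print(f"INSERT INTO users (name) VALUES ('{user.name}')")

      class PlainUser:
          def __init__(self, name: str) -> None:
              self.name = name
              self._mapper = UserMapper()   # the "persist-me" collaborator

          def save(self) -> None:
              self._mapper.save(self)

      # Option 3: inherit from a Persistent base class (active-record style).
      class Persistent:
          def save(self) -> None:
              print(f"saving {self.__class__.__name__} -> {vars(self)}")

      class Order(Persistent):
          def __init__(self, total: float) -> None:
              self.total = total

      # Option 4: one central Database class holds all persistence logic.
      class Database:
          @staticmethod
          def save_invoice(number: str) -> None:
              print(f"INSERT INTO invoices (number) VALUES ('{number}')")

      if __name__ == "__main__":
          User("ada").save()
          PlainUser("bob").save()
          Order(9.99).save()
          Database.save_invoice("INV-1")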

    Read the article

  • Could it be more efficient for systems in general to do away with stacks and just use the heap for memory management?

    - by Dark Templar
    It seems to me that everything that can be done with a stack can be done with the heap, but not everything that can be done with the heap can be done with the stack. Is that correct? Then, for simplicity's sake, and even if we lose a little performance with certain workloads, couldn't it be better to just go with one standard (i.e., the heap)? Think of the trade-off between modularity and performance. I know that isn't the best way to describe this scenario, but in general it seems that simplicity of understanding and design could be the better option, even if there is potential for better performance.
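
    The performance side of that trade-off comes down to how the two allocators work. The toy Python sketch below (not a real allocator; sizes and policies are invented) contrasts stack-style allocation, which is just a pointer bump with LIFO frees, with a simple first-fit heap free list that has to search and track blocks on every call.

      # Toy model: why stack allocation is cheap and heap allocation is not free.
      class StackAllocator:
          def __init__(self, size: int) -> None:
              self.size = size
              self.top = 0

          def alloc(self, n: int) -> int:
              addr, self.top = self.top, self.top + n      # O(1): bump a pointer
              assert self.top <= self.size, "stack overflow"
              return addr

          def free_to(self, addr: int) -> None:
              self.top = addr                               # O(1): LIFO unwind

      class FreeListHeap:
          def __init__(self, size: int) -> None:
              self.blocks = [(0, size)]                     # free (start, length) pairs

          def alloc(self, n: int) -> int:
              for i, (start, length) in enumerate(self.blocks):   # O(#blocks) search
                  if length >= n:
                      if length == n:
                          self.blocks.pop(i)
                      else:
                          self.blocks[i] = (start + n, length - n)
                      return start
              raise MemoryError("no block large enough (full or fragmented)")

          def free(self, addr: int, n: int) -> None:
              self.blocks.append((addr, n))                 # real heaps also coalesce

      if __name__ == "__main__":
          stack, heap = StackAllocator(1024), FreeListHeap(1024)
          a = stack.alloc(64); stack.free_to(a)             # trivially cheap
          b = heap.alloc(64); heap.free(b, 64)              # bookkeeping on every call

    Real heaps also pay for fragmentation and coalescing, which the stack avoids entirely by construction; that gap is the performance you would be giving up.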

    Read the article

  • Hardware refresh of Solaris 10 systems? Try this!

    - by mgerdts
    I've been seeing quite an uptick in people wanting to install Solaris 11 when they do hardware refreshes.  I applaud that effort - Solaris 11 (and 11.1) brings great improvements that advance the state of the art and make the best use of the latest hardware. Sometimes, however, you really don't want to disturb the OS or upgrade to a later version of an application that is certified with Solaris 11.  That's a great use of Solaris 10 Zones.  If you are already using Solaris Cluster, or would like to have more protection as you put more eggs in an ever-growing basket, check out solaris10 Brand Zone Clusters.

    Read the article

  • How do professional application developers use version control systems, like Git and Subversion?

    - by Wolfi
    I am a beginner developer and have been wondering from the start: how do professionals use tools like Git and Subversion (I don't have a very good understanding of these tools) to fulfill their projects' needs? If they do use them, how would I set up something like that? My applications are not so large and I am not working in a team yet; would they be of much help to me? There are questions on this site about how to use the tools, but I need beginner-level guidance.
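
    A minimal sketch of the kind of solo workflow this usually boils down to: initialise once, commit small changes often, and branch for anything experimental. The git commands below are the standard ones; they are wrapped in Python purely to keep the example self-contained, and the repository path and identity are made up.

      # Minimal solo Git workflow, driven from Python for a self-contained example.
      import subprocess
      from pathlib import Path

      REPO = Path("/tmp/demo-project")          # made-up location

      def git(*args: str) -> None:
          """Run one git command inside the demo repository."""
          subprocess.run(["git", "-C", str(REPO), *args], check=True)

      if __name__ == "__main__":
          REPO.mkdir(parents=True, exist_ok=True)
          subprocess.run(["git", "init", str(REPO)], check=True)   # one-time setup
          git("config", "user.name", "Example Dev")                # local identity for commits
          git("config", "user.email", "dev@example.invalid")

          (REPO / "app.py").write_text("print('hello')\n")
          git("add", "app.py")
          git("commit", "-m", "Initial commit")                    # small, frequent commits

          git("checkout", "-b", "feature/logging")                 # isolate risky work on a branch
          (REPO / "app.py").write_text("print('hello, with logging')\n")
          git("commit", "-am", "Add logging")

          git("checkout", "-")                                     # back to the main branch
          git("merge", "feature/logging")                          # fold the finished work in
          git("log", "--oneline")                                  # the history is the payoff

    Even without a team, the history, the ability to branch, and the cheap undo are usually worth it on a one-person project.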

    Read the article

  • What tools do I have to dissuade users from using the same password with similar systems?

    - by Resorath
    I'm building a web application that connects to other web services (using strictly anonymous binding, so no user passwords are involved). However, the web application maintains its own users and has to ask for certain details, such as e-mail addresses and public linking information for these other web services (for example, a username but not a password). I want to deter or prevent users from reusing, in my application, passwords they have also used in the applications I'm linking to. For example, if I ask for their e-mail and they give me their Gmail address, I don't want them using their Gmail password for my system. Another example would be reusing a password for a linked system for which they also gave me their username. One idea I had was simply to take the information they gave me, along with the password they are trying to store, and log in to these external web applications to test the password, then immediately unbind if I was successful and ask the user to choose a different password. However, I suspect there is a host of moral and legal issues with that. The reason this is a big deal to me is accountability. My application is simply not funded enough to invest properly in security around user passwords; a salted, hashed password in a public SQL-like database is as secure as it gets. So if passwords and linked usernames or e-mails get out, I don't want my user base compromised.
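
    For the storage side the asker already has in mind, a salted, hashed password can be done with the Python standard library alone. The sketch below is illustrative (the iteration count and the salt$hash field layout are arbitrary choices), not a statement about the asker's actual schema.

      # Salted, hashed password storage using only the standard library.
      import hashlib
      import hmac
      import os

      ITERATIONS = 200_000          # tune to your hardware/latency budget

      def hash_password(password: str) -> str:
          """Return 'salt$hash' suitable for storing in a text column."""
          salt = os.urandom(16)
          digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
          return salt.hex() + "$" + digest.hex()

      def verify_password(password: str, stored: str) -> bool:
          salt_hex, digest_hex = stored.split("$", 1)
          candidate = hashlib.pbkdf2_hmac(
              "sha256", password.encode(), bytes.fromhex(salt_hex), ITERATIONS
          )
          return hmac.compare_digest(candidate, bytes.fromhex(digest_hex))

      if __name__ == "__main__":
          record = hash_password("correct horse battery staple")
          print(verify_password("correct horse battery staple", record))  # True
          print(verify_password("guess", record))                         # False

    A slow, salted hash does not stop users from reusing a Gmail password, but it does limit the damage if the stored records ever leak.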

    Read the article

  • Content Management Systems - Why Should I Build My Website With One?

    Why would you want your website built using a Content Management System (CMS)? Well, there are quite a few compelling reasons. A CMS is driven by a database that stores all the content of the website and only delivers pages when called for by the user's browser. The CMS has a "back end" where content is added to the database, and all you need to do that is your browser. This changes everything! It means that, for the first time, a site owner can make changes to their website when they want to and how they want to - read on for more information...
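
    A toy sketch of the idea described above, with the content in a database and the page assembled only when it is requested; the schema and markup are invented for the example.

      # Toy CMS: content lives in a database, pages are built on demand.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE pages (slug TEXT PRIMARY KEY, title TEXT, body TEXT)")

      # "Back end": the site owner edits content, not HTML files.
      db.execute("INSERT INTO pages VALUES (?, ?, ?)",
                 ("about", "About us", "We have been trading since 1999."))

      def render(slug: str) -> str:
          """Front end: build the page on demand from whatever is in the database."""
          row = db.execute("SELECT title, body FROM pages WHERE slug = ?", (slug,)).fetchone()
          if row is None:
              return "<h1>404 Not Found</h1>"
          title, body = row
          return f"<html><head><title>{title}</title></head><body><p>{body}</p></body></html>"

      print(render("about"))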

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of MDM architecture. Gartner and other analysts describe several Hub deployment styles, which must be supported by a best-of-breed MDM solution in order to guarantee the success of the deployment project.

    Registry Style: In a Registry Style MDM Hub, the various source systems publish their data and a subscribing Hub stores only the source system IDs, the foreign keys (record IDs on the source systems) and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems.

    Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems, and the master data can be updated based on events, but it is not guaranteed to be up to date. The master data in this case is usually not used for transactions but rather supports reporting; however, it can also be used operationally for reference.

    Coexistence Style: The Coexistence Style MDM Hub involves master data that is authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the Consolidation Style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.

    Transaction Style: In this architecture, the Hub stores, enhances and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching and enriching algorithms have done their work. Upstream transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data-attribute level also need to be supported by the Transaction Style Hub.

    Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the earlier approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third-party and internal sources and for serving unified master entity views to operational applications, analytical systems or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking. On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus or served through a real-time services layer. New data sources can readily be added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records and rules to the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep the data current with business rules and practices while maintaining the reliability of best-of-breed master records.

    Confederation Style: In this architectural style, several Hubs are maintained at the departmental, agency and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each domain-level Hub can be implemented using any of the previously described styles, but normally the central Super-Hub is a Registry Style one. This is particularly important for public-sector organizations, where most of the time it is practically or legally impossible to store all the relevant constituent information from all departments in a single central hub.

    Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities, which are key to enabling the use of the Confederated Hub as a possible architectural approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.
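
    Of the styles above, the Registry Style is the easiest to reduce to a few lines. The sketch below is purely illustrative (the source systems, the matching rule and the records are all invented): the hub keeps only source-system IDs and a match key, assigns a global identifier, and federates the full golden view from the sources at read time.

      # Registry Style sketch: the hub stores links and match keys, not the data.
      import itertools

      # Two invented "source systems", keyed by their own record IDs.
      CRM = {"c1": {"name": "ACME Corp", "email": "info@acme.example", "phone": "555-0100"}}
      ERP = {"e7": {"name": "Acme Corporation", "email": "info@acme.example", "tax_id": "DE123"}}
      SOURCES = {"CRM": CRM, "ERP": ERP}

      class RegistryHub:
          def __init__(self) -> None:
              self._ids = itertools.count(1)
              self.registry = {}            # global id -> [(system, record id), ...]

          def register(self, system: str, record_id: str) -> int:
              """Match on a cleansed key (here: e-mail) and link, or mint a new global ID."""
              email = SOURCES[system][record_id]["email"].lower()
              for gid, links in self.registry.items():
                  if any(SOURCES[s][r]["email"].lower() == email for s, r in links):
                      links.append((system, record_id))
                      return gid
              gid = next(self._ids)
              self.registry[gid] = [(system, record_id)]
              return gid

          def golden_view(self, gid: int) -> dict:
              """Federate the 'virtual' golden record from the sources at read time."""
              merged = {}
              for system, record_id in self.registry[gid]:
                  merged.update(SOURCES[system][record_id])   # last-writer-wins stands in for survivorship rules
              return merged

      if __name__ == "__main__":
          hub = RegistryHub()
          gid = hub.register("CRM", "c1")
          assert hub.register("ERP", "e7") == gid       # matched on the key value
          print(hub.golden_view(gid))                   # assembled on demand, nothing copied back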

    Read the article

  • About Solaris 11 and UltraSPARC II/III/IV/IV+

    - by nospam(at)example.com (Joerg Moellenkamp)
    I know that I will get the usual amount of comments like "Oh, Jörg, you can't be negative about Oracle" for this article. However, as usual, I want to explain the logic behind my reasoning. Yes, I know that there is a lot of UltraSPARC III, IV and IV+ gear out there. But there are some very basic questions: Does the application you are currently running on this gear stop running just because you can't run Solaris 11 on it? What is the need to upgrade a system already in production to Solaris 11? I have the impression that some people think the systems become useless the moment Oracle releases Solaris 11. I know that Sun sold UltraSPARC IV+ systems until 2009. The SF490, introduced in 2004, for example, was an SF480 with UltraSPARC IV and later with UltraSPARC IV+. And yes, Sun made some speed bumps. At that time, the systems of the UltraSPARC III to IV+ generations were supported on Solaris 8, Solaris 9 and Solaris 10. However, from my perspective we sold them to customers who weren't able to migrate to Solaris 10 because they used applications not supported on Solaris 9, or who just didn't want to migrate to Solaris 10. Believe it or not, I personally know two customers that migrated core systems to Solaris 10 in, well, 2008/9. This was especially true when the M3000 was announced in 2008, closing the darned single-socket gap. It may be different at your site; however, that's what I remember about that time from talking with customers. First: just because there is no Solaris 11 for UltraSPARC III, IV and IV+, it doesn't mean that Solaris 10 will go away anytime soon. I just want to point you to "Expect Lifetime Support - Hardware and Operating Systems". It states about Premier Support: "Maintenance and software upgrades are included for Oracle operating systems and Oracle VM for a minimum of eight years from the general availability date." GA for Solaris 10 was in 2005. Plus 8 years: 2013, at minimum. Then you can still opt for 3 years of "Extended Support": 2016, at minimum. In 2016, systems purchased in 2009 are 7 years old, even systems purchased at the very end of that system generation's lifetime. Those are the rules as written in the linked document, and I said minimum. The actual dates are even further in the future: Premier Support for Solaris 10 ends in 2015, Extended Support ends in 2018, and Sustaining Support is indefinite. You will find this in the document "Oracle Lifetime Support Policy: Oracle Hardware and Operating Systems". So I don't understand it when some people write that Oracle is less protective of hardware investments than Sun. And for hardware it's the same as with Sun: service 5 years after EOL as part of Premier Support. I would like to offer a different perspective as well. I have to be a little cautious here, because this is going into roadmap territory, so I will only mention the public sources: John Fowler said last year that we should expect at least 3x the single-thread performance of T3 for T4. We have 8 cores in T4, as stated by Rick Hetherington. Let's assume for a moment that a T4 core will have the performance of an UltraSPARC core (just to simplify the math and not to disclose anything about the performance; all existing SPARC cores are considered equal here). Given these pieces of information, you could consolidate 8 V215, 4 or 8 V245, 2 full-blown V445, 2 full-blown V490, or 2 full-blown M3000 onto a single T4 SPARC processor. The Fowler roadmap presentation talked about 4-socket systems with T4.
    So that's 32 V215, 16 or 32 V245, 8 full-blown V445, 8 full-blown V490, or 8 full-blown M3000 in a single system image. I think you get the idea. That said, most of the systems we are talking about have already been amortized, and perhaps it's simply time to invest in new systems to gain other advantages: reduced space consumption, reduced power consumption, some of the neat features sun4v gives you, and yes, a reduced number of processor licenses for Oracle and less money for Oracle HW/SW support. As much as I dislike it myself that my own UltraSPARC III and UltraSPARC II based systems won't run Solaris 11 (and I have quite a few of them in my personal lab), I really think that the impact on production environments will be much smaller than most people think now. By the way: the reason for this move is a quite significant new feature. I will tell you that it was this feature when it's out. I assume that saying even a word more would give me a lot more time to blog.

    Read the article

  • What are some good hosted issue tracking systems for non-developers?

    - by Knox
    This question is similar to Good Open Source issue tracking systems for non-developers, but in our case we would prefer a hosted solution. The users are end users on the low end of computer literacy, not developers. We have around 5 support people and 50 or so end users. We are interested in simple issue tracking, such as a printer being down, as opposed to project or bug tracking. Email integration is desired.

    Read the article

  • There are lots of "Core i" CPUs, but Dell only offers a few -- who builds systems with the others?

    - by Jesse
    Passmark shows many varieties of Core i3, i5, and i7 CPUs. Some of them, even at similar prices, are much faster than others. But Dell only offers a few options, and they're not the fast ones. For example, Dell offers the Core i5 650 (benchmark), which costs $220 and doesn't come close to the performance of the Core i3-2100 (benchmark), which costs $120. Does anyone sell systems with the faster, cheaper chips?

    Read the article

  • How exactly are Distributed File Systems used in cloud environment?

    - by vaab
    How exactly are Distributed File Systems used in a cloud environment? More precisely: Are live VM images (or their filesystems) usually located on the DFS? Are VMs usually used to run the backbone (the actual code) of the DFS itself? Precise examples citing a DFS (Ceph, Gluster, GFS, GPFS, Lustre) or a cloud environment (OpenStack, CloudStack, ...) would be appreciated, even if I'm more interested in Ceph on OpenStack for now.

    Read the article

  • Can I copy a cross compiler tool chain between systems (I did before)?

    - by Jamie
    I tested fairly extensively with Ubuntu 10.04 Beta 2 Server in a VM, and was able to simply copy (read: tar x) a cross-compiled toolchain from an Ubuntu 8.10 VM. I created the tar myself; it is essentially a lot of stuff in /usr/local. Now that I've got a bare-metal installation of Ubuntu 10.04 proper, the copy isn't working. In particular, I'm getting this error:

      $ arm-linux-gcc
      -bash: /usr/local/bin/arm-linux-gcc: No such file or directory

    I've got the systems side by side in SSH windows ... any suggestions?
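
    One common cause of exactly this symptom, offered as a guess rather than a diagnosis: the copied toolchain is a 32-bit binary, and on a 64-bit host without 32-bit runtime support the kernel reports "No such file or directory" for the missing ELF loader even though the file is clearly there. The small Python check below distinguishes the cases; it assumes the path from the error message.

      # Check whether the binary exists and whether it is a 32- or 64-bit ELF.
      import os

      PATH = "/usr/local/bin/arm-linux-gcc"

      if not os.path.exists(PATH):
          print("The file genuinely is not there; re-check the extracted tarball.")
      else:
          with open(PATH, "rb") as f:
              header = f.read(5)
          if len(header) < 5 or header[:4] != b"\x7fELF":
              print("Not an ELF binary (perhaps a wrapper script with a bad interpreter line).")
          elif header[4] == 1:    # EI_CLASS: 1 = 32-bit, 2 = 64-bit
              print("32-bit ELF: install 32-bit runtime support (e.g. the ia32 libraries) "
                    "or rebuild the toolchain for the 64-bit host.")
          else:
              print("64-bit ELF: the loader is not the problem; check its dynamic libraries.")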

    Read the article

  • How to have 3 operating systems on a RAID 1 mirror.

    - by Chris_45
    How does one proceed if I want to have three operating systems, Windows 7, Ubuntu and Debian, plus a swap partition, i.e. four partitions in all? Let's say I have two disks of 640 GB each and make room as follows: 300 GB for Windows 7, 160 GB for Ubuntu, 160 GB for Debian, and the remaining 20 GB for swap. Where do I make these partitions? Do I first create one big RAID 1 array in the BIOS and then partition it once Windows 7 is installed, or do I already create these four partitions in the BIOS?

    Read the article

  • Quantify value for management

    - by nivlam
    We have two different legacy systems (Windows services in this case) that do exactly the same thing. Both of these systems have small differences for the different applications they serve, and both systems' core functionality lies within a shared library. Most of the time the updates occur in the shared library, and we simply deploy the updated library to both systems; the systems themselves rarely change. Since both of these systems do essentially the same thing, our development team would like to consolidate them into a single service. What can I do to convince management to allocate time for such a task? Some of the points I've noted are: easier maintenance and decreased testing/QA time. Unfortunately, this isn't enough. They would like us to provide them with hard numbers on the number of hours this will save in the future and on how it will speed up future development. Since most of the work is done in the shared library and the systems themselves never change, it's hard for us to quantify how many hours this will save. What kind of arguments can I make to justify the extra work to consolidate these systems?

    Read the article

  • Does the Win XP/7 dual boot "missing restore points" problem apply to systems with separate hard disks for each O/S?

    - by Robert Oschler
    I'm in the process of installing Windows 7/64 on a system with Windows XP/32 on it. During my research, I read about a problem that occurs in the dual-boot scenario where Windows XP deletes Windows 7's restore points when it accesses the Windows 7 volume: http://support.microsoft.com/kb/926185 I found a workaround, but it seems pretty painful, since it appears to involve using the registry to make the Windows 7 volume appear invisible or "offline" to Windows XP. That makes sharing disk data between the two OSes annoying, since you have to use something like an external storage device to get it done: http://www.vistax64.com/tutorials/127417-system-restore-points-stop-xp-dual-boot-delete.html I was wondering whether this problem only occurs on systems that have both OSes installed on the same physical hard drive (in different partitions)? In my case, each OS will be on a completely separate physical hard drive. Any other tips would be appreciated. -- roschler

    Read the article
