Search Results

Search found 1397 results on 56 pages for 'transactional replication'.


  • Leveraging ERP Investments with EPM and BI Solutions

    - by john.orourke(at)oracle.com
    Now that many organizations have implemented ERP systems to automate and integrate their operational processes, IT investments are beginning to shift to the management systems, i.e. the EPM and BI tools and applications that integrate data from multiple transactional systems. These solutions automate and integrate the management processes and enable organizations to achieve "management excellence," becoming smarter, more agile and more aligned than their competitors. In fact, the results of a recent IDC survey indicate that "Organizations that have implemented performance management more broadly are nearly four times more likely to be among the most competitive organizations in their industry." One example of an organization that is leveraging its ERP investments with Oracle EPM and BI solutions is General Dynamics. The Business Intelligence Collaborative (BIC) group within General Dynamics' IT organization assists various business units with the implementation, application support, and application hosting for their Business Intelligence and Enterprise Performance Management applications. Attend the Oracle Virtual Trade Show "Spotlight on Customer Success" on February 3rd to hear how General Dynamics is using Oracle Essbase, Hyperion Planning, and Oracle BI to improve its planning, reporting and analysis processes and leverage its investments in Oracle E-Business Suite and other operational systems. During the event, you can also hear about the latest developments and plans for Oracle Applications products, as well as what's coming with Oracle Fusion Applications. Here's a link to the Virtual Trade Show event overview and registration page; the event runs from 8 AM - 1 PM PST / 11 AM - 4 PM EST, and the EPM session is 10:30 - 11 AM PST / 1:30 - 2 PM EST: http://event.on24.com/event/26/79/15/rt/opFb.html?partnerref=internal I hope you'll join us on February 3rd!

    Read the article

  • Top Fusion Apps User Experience Guidelines & Patterns That Every Apps Developer Should Know About

    - by ultan o'broin
    We've announced the availability of the Oracle Fusion Applications user experience design patterns. Developers can get going on these using the Design Filter Tool (or DeFT) to select the best pattern for their context of use. As you drill into the patterns you will discover more guidelines from the Applications User Experience team, and some from the Rich Client User Interface team that are also leveraged in Fusion Apps. All are based on the Oracle Application Development Framework components. To accelerate your Fusion apps development and tailoring, here's some insider insight into the really important patterns and guidelines that every apps developer needs to know about. They start at a broad Fusion Apps information architecture level and then become more granular at the page and task level. Information Architecture: These guidelines explain how the UI of an Oracle Fusion application is constructed, so you can understand where the changes you want to make fit into the overall application's information architecture. Begin with the UI Shell and Navigation guidelines, and then move on to page-level design using the Work Areas and Dashboards guidelines: UI Shell Guideline; Navigation Guideline; Introduction to Work Areas Guideline; Dashboards Guideline. Page Content: These patterns and guidelines cover the most common interactions used to complete tasks productively, beginning with the core interactions common across all pages and then moving on to task-specific ones. Core across all pages: Icons Guideline; Page Actions Guideline; Save Model Guideline; Messages Pattern Set; Embedded Help Pattern Set. Task dependent: Add Existing Object Pattern Set; Browse Pattern Set; Create Pattern Set; Detail on Demand Pattern Set; Editing Objects Pattern Set; Guided Processes Pattern Set; Hierarchies Pattern Set; Information Entry Forms Pattern Set; Record Navigation Pattern Set; Transactional Search and Results Pattern Group. Now, armed with all this great insider information, get developing some great-looking, highly usable apps! Let me know in the comments how things go!

    Read the article

  • Is there a way to create a consistent snapshot/SnapMirror across multiple volumes?

    - by Tomer Gabel
    We use a NetApp FAS 6-series filer with an application that spans multiple volumes. For backup purposes I would like to create a consistent snapshot that spans these volumes at the same point in time (or at least with an extremely low delta); additionally, we'd like to use SnapMirror to replicate the production environment to test volumes. The problem is in creating a consistent snapshot/SnapMirror, since these commands are not transactional and do not take multiple parameters. I tried scripting consecutive "snap create" or "snapmirror resync" commands via SSH, but there's always a 0.5-2 second difference between each snapshot. It's currently "good enough", but I'm seriously concerned about the consistency impact with increased load (we're currently in pre-production). Has anyone managed to create a consistent snapshot that spans several volumes? How did you pull it off?
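
    For illustration only (not from the thread): one way to shrink the skew is to fire all the "snap create" commands concurrently instead of consecutively, so each volume's snapshot is requested at nearly the same instant. A minimal Java sketch of that idea follows; the filer hostname, volume names and plain ssh invocation are assumptions, and this only narrows the window between snapshots rather than making them atomic (true multi-volume consistency needs something like ONTAP's consistency-group snapshot calls, where supported).

        import java.util.Arrays;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ParallelSnapshots {
            public static void main(String[] args) throws InterruptedException {
                String filer = "filer1";                                       // hypothetical filer hostname
                String snapName = "consistent_" + System.currentTimeMillis();
                List<String> volumes = Arrays.asList("vol1", "vol2", "vol3");  // hypothetical volumes

                // One thread per volume, so the commands are issued almost simultaneously
                ExecutorService pool = Executors.newFixedThreadPool(volumes.size());
                for (String vol : volumes) {
                    pool.submit(() -> {
                        try {
                            // Equivalent of: ssh filer1 snap create vol1 consistent_<timestamp>
                            new ProcessBuilder("ssh", filer, "snap", "create", vol, snapName)
                                    .inheritIO().start().waitFor();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(5, TimeUnit.MINUTES);
            }
        }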

    Read the article

  • High Performance SQL Views Using WITH(NOLOCK)

    - by gt0084e1
    Every now and then you find a simple way to make everything much faster. We often find customers creating data warehouses or OLAP cubes even though they have a relatively small amount of data (a few gigs) compared to their server memory. If you have more server memory than the size of your database or working set, nearly any aggregate query should run in a second or less. In some situations there may be high traffic from the transactional application, and SQL Server may wait for several other queries to run before giving you your results. The purpose of this is to make sure you don't get two versions of the truth. In an ATM system, you want to give the bank balance after the withdrawal, not before, or you may get a very unhappy customer. So by default databases are rightly very conservative about this kind of thing. Unfortunately this split-second precision comes at a cost. The performance of the query may not be acceptable by today's standards because the database has to maintain locks on the server. Fortunately, SQL Server gives you a simple way to ask for the current version of the data without the pending transactions. To better facilitate reporting, you can create a view that includes these directives.

        CREATE VIEW CategoriesAndProducts AS
        SELECT *
        FROM dbo.Categories WITH(NOLOCK)
        INNER JOIN dbo.Products WITH(NOLOCK)
            ON dbo.Categories.CategoryID = dbo.Products.CategoryID

    In some cases queries that were taking minutes end up taking seconds. Much easier than moving the data to a separate database, and it's still pretty much real time, give or take a few milliseconds. You've been warned not to use this for bank balances, though. More from Data Stream
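
    A side note for comparison (not from the article): WITH(NOLOCK) is per-table shorthand for the READ UNCOMMITTED isolation level, so reporting code can request the same dirty-read behavior for a whole connection instead. A minimal JDBC sketch, with the connection URL assumed and the view name taken from the example above:

        import java.sql.*;

        public class DirtyReadReport {
            public static void main(String[] args) throws SQLException {
                // Hypothetical connection string; any SQL Server JDBC URL works here
                String url = "jdbc:sqlserver://localhost;databaseName=Northwind;integratedSecurity=true";
                try (Connection con = DriverManager.getConnection(url)) {
                    // READ UNCOMMITTED: don't block on writers, accept dirty reads
                    con.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
                    try (Statement st = con.createStatement();
                         ResultSet rs = st.executeQuery("SELECT * FROM CategoriesAndProducts")) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1));
                        }
                    }
                }
            }
        }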

    Read the article

  • "Siebel2FusionCRM Integration" solution by ec4u (D)

    - by Richard Lefebvre
    ec4u, a CRM system integration leader based in Germany and Switzerland and a long-standing Oracle/Siebel partner, offers a complete "Siebel2FusionCRM Integration" solution based on tools, methodology and services. The solution's main objectives are: integration between Siebel (on-premise) and Fusion CRM / Marketing ("in the cloud"); Accounts, Contacts and Addresses maintained by Sales in Siebel CRM and synchronized in real time into Fusion CRM / Marketing; CDM processing to ensure clean data for marketing campaigns (validation and deduplication); and creation of e-mail marketing campaigns and newsletters in Fusion. The solution features: upsert processes that figure out what information needs to be updated, inserted or terminated (deleted) - however, as Siebel is the data master, it is still a one-way synchronization; handling of deleted or nullified information by terminating it in Fusion CRM (setting start and end dates to define the validity period); the same processes for initial load and real-time synchronization; repeatable invocations/operations, since Fusion web services provide no transactional support; tagging of sub-entries in case of 1-to-N mapping (example: a telephone number is one simple field in Siebel, but in Fusion you can have multiple telephone numbers in a sub-table); e-mail notification in case of any error (containing the error message, instance number and detailed payload); and Schematron validation. Interested? Looking for more details or a partnership with ec4u for a "Siebel2FusionCRM Integration" project? Contact: Gregor Bublitz, Director Expert Services ([email protected])

    Read the article

  • SQL Server cluster performance baseline

    - by Dwight T
    Currently I'm tasked with getting a good performance baseline on a SQL 2005 cluster. The main db on the server is for SharePoint, but I would like to add other dbs to the cluster. I do have access to Quest's Performance Analysis tool to help. What are the key factors to look at to see if the cluster can handle additional dbs? Do you look at different performance indicators for a cluster vs a standalone SQL Server? One db will be a low-usage transactional db, and another a read-only db that is used for sales data. Thanks Dwight
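
    Not part of the original question, but as an illustration: whichever indicators you settle on, a baseline is just periodic samples of the same counters over time. A minimal JDBC sketch that samples a few common counters from SQL Server's sys.dm_os_performance_counters DMV (available in SQL 2005); the connection URL is an assumption:

        import java.sql.*;

        public class BaselineSampler {
            public static void main(String[] args) throws SQLException {
                // Hypothetical URL pointing at the cluster's virtual server name
                String url = "jdbc:sqlserver://sqlcluster;databaseName=master;integratedSecurity=true";
                String sql = "SELECT object_name, counter_name, cntr_value "
                           + "FROM sys.dm_os_performance_counters "
                           + "WHERE counter_name IN ('Batch Requests/sec', "
                           + "'Buffer cache hit ratio', 'Page life expectancy')";
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        // Store samples with a timestamp; per-second counters are
                        // cumulative, so diff two samples to get an actual rate
                        System.out.printf("%s | %s | %d%n",
                                rs.getString("object_name").trim(),
                                rs.getString("counter_name").trim(),
                                rs.getLong("cntr_value"));
                    }
                }
            }
        }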

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive, complex, and hinders IT responsiveness. The high costs from frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from a mainframe environment. Further, data centers are planning for cloud enablement pinned on principles of operating at significantly lower cost, very low upfront investment, operating on commodity hardware and open, standards-based systems, and decoupling of hardware, infrastructure software, and business applications. These operating principles are in direct contrast with the principles of operating businesses on mainframes. By utilizing technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses are able to quickly and safely migrate away from their IBM mainframe environments. Further, by running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, they can start their journey of cloud-enabling their mainframe applications. Oracle Tuxedo re-hosting tools and techniques can provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost. Oracle GoldenGate can migrate data from mainframe systems to open systems, eliminating risks associated with the data migration. Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability. Running Oracle software on top of Oracle Exalogic empowers customers to start their journey of cloud-enabling their mainframe applications. Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • Do ORMs enable the creation of rich domain models?

    - by Augusto
    After using Hibernate on most of my projects for about 8 years, I've landed at a company that discourages its use and wants applications to interact with the DB only through stored procedures. After doing this for a couple of weeks, I haven't been able to create a rich domain model of the application I'm starting to build, and the application just looks like a (horrible) transactional script. Some of the issues I've found are: I cannot navigate the object graph, as the stored procedures just load the minimum amount of data, which means that sometimes we have similar objects with different fields. One example: we have a stored procedure to retrieve all the data for a customer, and another to retrieve account information plus a few fields from the customer. Lots of the logic ends up in helper classes, so the code becomes more "structured" (with entities used as old C structs). More boring scaffolding code, as there's no framework that extracts result sets from a stored procedure and puts them into an entity. My questions are: Has anyone been in a similar situation and didn't agree with the stored procedure approach? What did you do? Is there an actual benefit to using stored procedures, apart from the silly point of "no one can issue a drop table"? Is there a way to create a rich domain model using stored procedures? I know there's the possibility of using AOP to inject DAOs/Repositories into entities to be able to navigate the object graph. I don't like this option, as it's very close to voodoo.
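
    To make the "boring scaffolding" point concrete, here is a minimal sketch of the kind of hand-written mapping the stored-procedure style forces on every query (illustrative only: the procedure name, column names and Customer entity are hypothetical):

        import java.sql.*;
        import java.util.*;

        // Hypothetical entity; in the stored-procedure style it is just a data holder
        class Customer {
            long id;
            String name;
            String email;
        }

        class CustomerDao {
            private final Connection con;
            CustomerDao(Connection con) { this.con = con; }

            // Hand-written mapping from a stored procedure's result set into entities:
            // the scaffolding repeated for every procedure and every shape of result
            List<Customer> findAll() throws SQLException {
                List<Customer> result = new ArrayList<>();
                try (CallableStatement cs = con.prepareCall("{call usp_GetCustomers}"); // hypothetical proc
                     ResultSet rs = cs.executeQuery()) {
                    while (rs.next()) {
                        Customer c = new Customer();
                        c.id = rs.getLong("customer_id");
                        c.name = rs.getString("name");
                        c.email = rs.getString("email");
                        result.add(c);
                    }
                }
                return result;
            }
        }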

    Read the article

  • IIM Calcutta – EPBM14 – Campus Visit – Day 3 – Kali Temple, IS & Strategy

    - by Ram Shankar Yadav
    Hey folks~ Today was one of the most happening days for us. We got up @ 5 AM, did our daily morning chores and left the hostel room by 5:35 AM. So why did we get up so early??? Okay, we planned a visit to the Kali Temple at Kalighat! So, till everyone assembled, we took a few pics on the Lake Deck. We left the Joka campus at 6 AM and reached the Kali temple at 6:50 AM. The Kali temple is very famous in Kolkata, but frankly speaking it's one of the most unorganised ones for sure~! After darshan, we got back into our taxis and headed for breakfast at Gupta Brothers Restaurant. We enjoyed "doodh-jalebi", "kachori-sabzi" and rasogullas there~! While coming back, one of our taxis' axles broke, and we waited for the first taxi to come back and take us to campus. After coming back to campus, we attended our sessions on IS & Strategy, covering ITC eChoupal and Transaction Cost and Vertical Integration (TCVI). After our classes we had a "Dinner Party" organised by IIMC; we enjoyed it thoroughly, and many of us took it as an opportunity to share our biz cards and talk to the professors. Overall it was a rocking day!! Stay tuned for more… Cheers, ram :)   Photo Album :

    Read the article

  • links for 2011-02-07

    - by Bob Rhubart
    Creating JAXWS Service in WebLogic Workshop (Middleware Magic) Jay SenSharma shares "a simple demo which explains how we can create a Complex JAXWS WebService using WebLogic Workshop." (tags: WebLogic jaxws middleware) Wentari: Re-Learning PeopleSoft "If I truly want to be an enterprise architect, what better way than to have hands-on knowledge about all the Oracle offerings outside of my specialization in Siebel and OBIEE." -- Peter Yeung (tags: oracle otn businessintelligence obiee siebel) Andrejus Baranovskis's Blog: CreateWithParams Operation for Oracle ADF BC 11g Oracle ACE Director Andrejus Baranovski illustrates how you can apply a CreateWithParams operation in two easy steps. (tags: oracle otn oracleace soa) APEX plugins contributed to the APEX community by AMIS developers (AMIS Technology Blog) The APEX 4.0 plugin framework "allows for more organized, better structured development with lots more reuse potential," according to Oracle ACE Director Lucas Jellema. (tags: oracle otn oracleace apex) Oracle BI EE 11g and Oracle ADF - Part 2 - Real Time reporting using View Objects Venkatakrishnan J looks into "another reporting innovation (by use of the common ADF Framework), i.e. real-time reporting using BI EE by directly reversing metadata from a transactional application." (tags: oracle otn oracleace businessintelligence obiee adf) On-demand Webcast: Java in the Smart Grid (The Java Source) Learn more about the Smart Grid and the role that Java is poised to play in this important initiative. (tags: oracle otn java smartgrid)

    Read the article

  • Tips about how to spread Object Oriented practices

    - by Augusto
    I work for a medium company that has around 250 developers. Unfortunately, lots of them are stuck in a procedural way of thinking, and some teams constantly deliver big Transactional Script applications when in fact the application contains rich logic. They also fail to manage the design dependencies, and end up with services which depend on a large number of other services (a clean example of a Big Ball of Mud). My question is: can you suggest how to spread this type of knowledge? I know that the surface of the problem is that these applications have a poor architecture and design. Another issue is that there are some developers who are against writing any kind of test. A few things I'm doing to change this (but I'm either failing or the change is too small): Running presentations about design principles (SOLID, clean code, etc). Workshops about TDD and BDD. Coaching teams (this includes using Sonar, FindBugs, JDepend and other tools). IDE & refactoring talks. A few things I'm thinking of doing in the future (but I'm concerned that they might not be good): Form a team of OO evangelists who disseminate an OO way of thinking in different teams (these people would need to change teams every few months). Run design review sessions to criticise the design and suggest improvements (even if the improvements are not made because of time constraints, I think this might be useful). Something I've found with the teams I coach is that as soon as I leave them, they revert back to the old practices. I know I don't spend a lot of time with them, usually just one month, so whatever I'm doing, it doesn't stick. I'm sorry this question is spattered with frustration, but the alternative to writing this was to hit my head on the wall until I pass out.

    Read the article

  • Uses of persistent data structures in non-functional languages

    - by Ray Toal
    Languages that are purely functional or near-purely functional benefit from persistent data structures because they are immutable and fit well with the stateless style of functional programming. But from time to time we see libraries of persistent data structures for (state-based, OOP) languages like Java. A claim often heard in favor of persistent data structures is that because they are immutable, they are thread-safe. However, the reason that persistent data structures are thread-safe is that if one thread were to "add" an element to a persistent collection, the operation returns a new collection like the original but with the element added. Other threads therefore see the original collection. The two collections share a lot of internal state, of course -- that's why these persistent structures are efficient. But since different threads see different states of data, it would seem that persistent data structures are not in themselves sufficient to handle scenarios where one thread must make a change that is visible to other threads. For this, it seems we must use devices such as atoms, references, software transactional memory, or even classic locks and synchronization mechanisms. Why, then, is the immutability of PDSs touted as something beneficial for "thread safety"? Are there any real examples where PDSs help in synchronization, or solving concurrency problems? Or are PDSs simply a way to provide a stateless interface to an object in support of a functional programming style?
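
    An illustrative sketch (not from the question): a persistent structure on its own gives every thread its own snapshot, so publishing a change means pairing it with a single mutable reference, for example an AtomicReference updated via compare-and-set. The tiny cons list below stands in for a real persistent collection:

        import java.util.concurrent.atomic.AtomicReference;

        public class PdsWithAtom {
            // Minimal persistent (immutable) cons list; a "push" builds a new list
            // that structurally shares its entire tail with the old one
            static final class Stack {
                final int head;
                final Stack tail;
                Stack(int head, Stack tail) { this.head = head; this.tail = tail; }
            }

            // The one mutable cell the threads share; the lists themselves never change
            static final AtomicReference<Stack> shared = new AtomicReference<>(null);

            static void concurrentPush(int value) {
                Stack current, updated;
                do {
                    current = shared.get();              // this thread's snapshot
                    updated = new Stack(value, current); // cheap: shares all of current
                } while (!shared.compareAndSet(current, updated)); // retry if another thread won
            }

            public static void main(String[] args) throws InterruptedException {
                Thread t1 = new Thread(() -> concurrentPush(1));
                Thread t2 = new Thread(() -> concurrentPush(2));
                t1.start(); t2.start();
                t1.join(); t2.join();
                for (Stack s = shared.get(); s != null; s = s.tail)
                    System.out.println(s.head); // both pushes visible, in some order
            }
        }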

    Read the article

  • Inserting static current time in Excel

    - by Mike Cole
    I have a time log spreadsheet. I have a new sheet for each day. In each sheet, I have a transactional record of how my time was spent. When I start or end a task, I usually type in the time ("11:00 AM" for example). Is there a shortcut to inserting the current time into a field? I'm sure it can be done with a macro, but I'm not very knowledgeable about macros. I'd like to simply highlight a field and hit some sort of shortcut key to insert a static value of the current time. Thanks for any help!

    Read the article

  • What relational database system should I learn? [closed]

    - by acidzombie24
    At the moment I know SQLite (my favorite) and MySQL (it's OK, I get annoyed), and I do not want to learn MS/T-SQL (it only allows one null row if the column is unique). I am thinking about learning a new database system. My requirements for it are: Must allow multiple connections at once (read and write). All data, or the data I choose, must be ACID compliant. Performance should be good. I have a 17gb table in one project; it should perform well on reads and transactional writes. With MySQL it took hours to restore it, and there were no foreign keys on that specific table; it only finished within a workday because I found a suggestion to adjust a setting, which I think was key-buffer - and it still took hours. Unique columns that allow more than one row to be null. I shouldn't have to say it, but dammit MS. Allows one to make ongoing backups, something like 'binary logs': relatively small amounts of data I can grab and apply to my local db to keep it in sync with the one on the server. Table joins; I'd rather not write a bunch of queries to simulate a join. What I would like but is not required: Foreign keys (this may be a requirement later). Open source. Fair tool support, so I can measure queries, easily backup/restore, etc. .NET and C (or C++) interface (I've seen one that uses raw TCP with JSON, which was OK-ish). Good subquery support. Once I was working with an older version of MySQL (I believe <5.1, but it could have been 5.1) and I had to write many queries to do one query because it couldn't do subqueries - or maybe it couldn't do them efficiently and died because of memory limitations with a huge dataset. What db system should I learn?

    Read the article

  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012 if that makes a difference). We store our database schema in a Visual Studio database project (VS 2010). When code is released to live, we currently use the Visual Studio database project to build a script for all our schema changes. The problem we have been having is needing to alter or add to that script to add/fix data for the deployment. For example, if we add a new non-nullable column to an existing table, we need to populate that column with data during the insert. Other times we may want to create new records in transactional tables (e.g. assign specific users to a new security access). Do Visual Studio database projects have a way to store these scripts that only need to be run once, and somehow include them in the build? Does the project know which scripts need to be run (for example, if we are inserting default data we don't want to do that a second time)? Or is there a better way to manage these scripts?

    Read the article

  • What applications is NTFS preferable for? [closed]

    - by javano
    When building a new server I prefer to deploy Linux as my OS of choice. This gives me the luxury of being able to choose from various file systems (amongst other aspects), and I will choose a different FS for different servers, depending on what they will be used for. With Windows OS variants you can only use NTFS. Have any benchmarks or tests been performed that show NTFS to be a preferable choice for a given scenario or application (apart from just "running Windows" because it has to be on NTFS)? To clarify what I mean: I might use filesystem X for large transactional storage volumes, but filesystem Y for front-end web app servers. If I had a multi-platform application to deploy that (let's pretend) was available on Mac/Win/Lin, is there any type of application or scenario that would benefit from being on NTFS?

    Read the article

  • Using Spring as a JPA Container

    - by sdoca
    Hi, I found this article which talks about using Spring as a JPA container: http://java.sys-con.com/node/366275 I have never used Spring before this and am trying to make this work, and I hope someone can help me. The article states that you need to annotate a Spring bean with @Transactional and methods/fields with @PersistenceContext in order to provide transaction support and to inject an entity manager. Is there something that defines a bean as a "Spring bean"? I have a bean class which implements CRUD operations on entities using generics:

        @Transactional
        public class GenericCrudServiceBean implements GenericCrudService {

            @PersistenceContext(unitName="MyData")
            private EntityManager em;

            @Override
            @PersistenceContext
            public <T> T create(T t) {
                em.persist(t);
                return t;
            }

            @Override
            @PersistenceContext
            public <T> void delete(T t) {
                t = em.merge(t);
                em.remove(t);
            }

            ...

            @Override
            @PersistenceContext
            public List<?> findWithNamedQuery(String queryName) {
                return em.createNamedQuery(queryName).getResultList();
            }
        }

    Originally I only had this persistence context annotation:

        @PersistenceContext(unitName="MyData")
        private EntityManager em;

    but had a null em when findWithNamedQuery was invoked. Then I annotated the methods as well, but em is still null (no injection?). I was wondering if this had something to do with my bean not being recognized as "Spring". I have done the configuration as best I could, following the directions in the article, including setting the following in my context.xml file:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns:tx="http://www.springframework.org/schema/tx"
               tx:schemaLocation="http://www.springframework.org/schema/tx
                                  http://www.springframework.org/schema/tx/spring-tx-2.0.xsd">

            <bean id="entityManagerFactory"
                  class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
                <property name="persistenceUnitName" value="MyData" />
                <property name="dataSource" ref="dataSource" />
                <property name="loadTimeWeaver"
                          class="org.springframework.classloading.ReflectiveLoadTimeWeaver" />
                <property name="jpaVendorAdapter" ref="jpaAdapter" />
            </bean>

            <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
                  destroy-method="close">
                <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
                <property name="url" value="jdbc:oracle:thin:@localhost:1521:MySID" />
                <property name="username" value="user" />
                <property name="password" value="password" />
                <property name="initialSize" value="3" />
                <property name="maxActive" value="10" />
            </bean>

            <bean id="jpaAdapter"
                  class="org.springframework.orm.jpa.vendor.EclipseLinkJpaVendorAdapter">
                <property name="databasePlatform"
                          value="org.eclipse.persistence.platform.database.OraclePlatform" />
                <property name="showSql" value="true" />
            </bean>

            <bean class="org.springframework.ormmjpa.support.PersistenceAnnotationBeanPostProcessor" />

            <tx:annotation-driven />
        </beans>

    I guessed that these belonged in the context.xml file because the article never specifically said which file is the "application context" file. If this is wrong, please let me know.

    Read the article

  • Can't access dfs namespace over vpn

    - by cpf
    I've recently configured 2 servers in AD on the same domain level. They are physically separated and permanently connected through a site-to-site VPN for DFS replication. All well, but when users connect to either site through VPN (from home, e.g.) they can't use the domain-level path \\domain.com\data. Internally this works perfectly; resolving domain.com when connected through the VPN gets the correct IP. I've tried Google to figure things out. What I was able to find was that more people have this issue, but no real solution. Can anyone explain why this is happening? A solution would be especially helpful! Thanks in advance.

    Read the article

  • Windows Server 2003 can't see Vista machine

    - by Django Reinhardt
    Hi there, I've got a real PITA problem that I'm sure has a really simple solution. I have a Windows Server 2003 machine that needs to be able to see the network name of a Vista box - but refuses to. It can see the Vista box (and even access its shared folder) if I enter the Vista box's IP address. Problem is: SQL Server refuses to do replication with anything other than the "actual server name". That means the 2003 machine needs to be able to connect through the Vista machine's network name, not just its IP address. I'm guessing it's a simple incompatibility between OSes, but I'm sure there's got to be a simple way of fixing it. Note: Yes, the Vista machine can connect to the 2003 machine, no problem. And other machines in the office can connect to both the Vista machine and the 2003 machine (they have more recent OSes). Thanks for any help!

    Read the article

  • Exchange 2007 CCR: Logs not replicating to passive node partition

    - by yum_tacos4u
    In my environment I have set up Exchange 2007 in a CCR cluster, mirroring our main servers to a set of servers in passive mode. One of the partitions on the passive node that I set up for the Exchange 2007 logs has faulted, leaving the partition unreadable. I have replaced the partition on the passive node and set up the drive to mirror the one in active mode, but the logs have not been replicating since the change. Is there any way to force replication of the logs to the new partition? Any idea why the logs are not replicating? Any help or comments are appreciated, and thanks in advance.

    Read the article

  • F5/BigIP rule to redirect affinity-bound users from INACTIVE pool node to other ACTIVE node

    - by j pimmel
    We have several server nodes set up for the end users of our system, and because we don't use any kind of session replication in the app servers, F5 maintains affinity between users and the ACTIVE node the client was first bound to. At times when we want to re-deploy the app, we change the F5 config and take a node out of the ACTIVE pool. Gradually the users filter off and we can deploy, but the process is a bit slow. We can't just dump all the users onto a different node because - given the update-heavy nature of the user activities - we could cause them to lose changes. That said, there is one URL/endpoint - call it http://site/product/list - where we know that when the client hits it, we could shove them off the INACTIVE node they had affinity with and onto a different ACTIVE node. We have had a few tries at writing an F5 rule along these lines but haven't had much success, so I thought I might ask here, assuming it's possible - and I have no reason to think it's not, based on what we have found so far.

    Read the article

  • Load-balancer options

    - by toolkit
    I am looking at a number of possible options for load-balancing. So far, I am constrained to the following options: DNS server load-balancer, balancing to a cluster of Tomcat servers, with Terracotta for session replication. Pros - don't have to buy new kit. Cons - DNS lb can keep directing to a broken server. Hardware load-balancer, direct to a cluster of Tomcat servers. Pros - could have a second box for failover lb. Cons - expense. Apache server load-balancer. Pros - Apache's lb polls for broken servers. Cons - the Apache server is a single point of failure, plus we'd need to buy another server. Are there any other options I should consider? Thanks. Update: Thanks for all the answers so far, +1's all round. Not accepting an answer yet, to keep more ideas coming.

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (~2-3k requests per second, running on Gigenet cloud); the database is ~10 GB (~9 million posts, ~60 queries per second). Lately MySQL has been grinding the disk like there's no tomorrow according to iotop, and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated. Specs:

        Debian Lenny 64bit, ~12Ghz (6 cores) CPU, 7520gb RAM, 160gb disk
        Kernel: 2.6.32-4-amd64
        mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian))

    Other software:

        vBulletin 3.8.4
        memcached 1.2.2
        PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27)
        lighttpd/1.4.28 (ssl) - a light and fast webserver

    PHP and vBulletin are configured to use memcached. MySQL settings:

        [mysqld]
        key_buffer = 128M
        max_allowed_packet = 16M
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 1024
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        key_buffer_size = 128M
        join_buffer_size = 8M
        tmp_table_size = 16M
        max_heap_table_size = 16M
        table_cache = 96

    Read the article

  • Implementing a Linux-HA based clustering setup on Windows

    - by Alex
    I have a (tried and tested) setup involving: 2x load-balancing nodes on a floating IP via Heartbeat, load-balancing 2 Tomcat servers; 2x Tomcat servers; 2x Galera Cluster MySQL servers replicating synchronously (+1 arbitrator node). All are evenly spread across 2 physical nodes. Now I have to somehow get the same functionality on Windows Server (2008, I think) nodes... running under Xen virtualization. There is no possibility of using Linux for any of the nodes. I count two main problems: no Linux-HA Heartbeat daemon for the load balancing, and no Galera synchronous replication for MySQL. I freely admit to having nearly no Windows knowledge when it comes to clustering. Is there a way to closely mimic the setup I have described, or is it a total write-off?

    Read the article

  • Terse, documented, correct way to create Kerberos-backed user shares in Greyhole

    - by MrGomez
    As a migration strategy away from Windows Home Server (which is currently out of support and intractable for our needs, for a variety of reasons), our little cloister of nerds has targeted Greyhole for our shared use at home. Despite the documentation's terseness, getting the system set up for simple, single-user operation isn't especially difficult, but this scenario fails to serve our needs. Among other highlights of the system, we're attempting to emulate Integrated Windows Authentication (with Kerberos) and single-user shares to keep the Windows users in the house happy and well-supported. I'm aware of the underlying systems that go into Greyhole and understand how to set up per-user shares in Samba, but the documentation doesn't seem to cover having Greyhole sop up these directories as separate landing zones for replication. Enter my question: are both of these cases (IWA user authentication and user-partitioned personal shares) supported by Greyhole? If so, please cite or link the supporting documentation if it exists.

    Read the article
