Search Results

Search found 12634 results on 506 pages for 'transactional memory'.

Page 245/506 | < Previous Page | 241 242 243 244 245 246 247 248 249 250 251 252  | Next Page >

  • Introducing Oracle Multitenant

    - by OracleMultitenant
    The First Database Designed for the Cloud

    Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this – a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high-level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I'll briefly describe our new multitenant architecture and explain its key benefits. Finally, I'll mention some of the major use cases we see for Oracle Multitenant.

    Industry Trends

    We always start by talking to our customers about the pressures and challenges they're facing and what trends they're seeing in the industry. Some things don't change. They face the same pressures and the same requirements as ever: pressure to do more with less; to be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale. Now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. Requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There's no time to stop and retool with new applications. What's new are the trends. These are the techniques to use to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers – even engineered systems such as Exadata – our customers want to consolidate many applications into fewer, larger servers. There's a move to standardized services – even self-service.

    Consolidation

    Consolidation is not new; companies have tried various different approaches to consolidation of databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM – that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there's a shared OS overhead, but RDBMS process and memory overhead are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database, a development or a test database, what do we do?
    We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every one of the databases that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources, and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier, the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required. There is no tenant isolation. One bad apple can spoil the whole batch.

    New Architecture & Benefits

    In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term "pluggable database": "pluggable", which is new, and "database", which is familiar. Before we get to the exciting new stuff, let's discuss what hasn't changed. A pluggable database is a fully functional Oracle database. It's not watered down in any way. From the perspective of an application or an end user it hasn't changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level – "managing many as one" – we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the "pluggability" capability gives us portability, and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.

    Use Cases

    We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones.
    - Development / Testing, where individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
    - Consolidation of disparate applications using fewer, more powerful servers
    - Software as a Service: deploying separate copies of identical applications to individual tenants
    - Database as a Service: typically self-service provisioning of databases on the private cloud
    - Application Distribution from ISV / Installation by Customer: eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) - just plug in a PDB!
    - High volume data distribution: literally via disk drives in envelopes distributed by truck! - distribution of things like GIS or MDM master databases
    - …various others!

    Benefits

    Previous approaches to consolidation have involved a trade-off between reductions in Capital Expenses (CapEx) and Operating Expenses (OpEx), and they've usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:

    - Minimize CapEx: more applications per server
    - Minimize OpEx: manage many as one; standardized procedures and services; rapid provisioning
    - Maximize Agility: cloning for development and testing; portability through pluggability; scalability with RAC
    - Ease of Adoption: applications run unchanged; it's a pure deployment choice, and neither the database backend nor the application needs to be changed

    In future postings I'll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed White Paper about Oracle Multitenant, which is available here.

    Follow me: I tweet @OraclePDB #OracleMultitenant

    Read the article

  • Are concurrency issues possible when using the WCF Service Behavior attribute set to ConcurrencyMode

    - by Brandon Linton
    We have a WCF service that makes a good deal of transactional NHibernate calls. Occasionally we were seeing SQL timeouts, even though the calls were updating different rows and the tables were set to row level locking. After digging into the logs, it looks like different threads were entering the same point in the code (our transaction using block), and an update was hanging on commit. It didn't make sense, though, because we believed that the following service class attribute was forcing a unique execution thread per service call: [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.PerCall)] We recently changed the concurrency mode to ConcurrencyMode.Single and haven't yet run into any issues, but the bug was very difficult to reproduce (if anyone has any thoughts on flushing a bug like that out, let me know!). Anyway, that all brings me to my question: shouldn't an InstanceContextMode of PerCall enforce thread-safety within the service, even if the ConcurrencyMode is set to multiple? How would it be possible for two calls to be serviced by the same service instance? Thanks!

    Read the article

  • "conveyor belt" cache architecture

    - by Andrew Matthews
    I'm producing an application with a few peculiar internal communication characteristics that make the usual suspects for data storage and transport (queues and RDBMSs) ill-fitted. I'm wondering whether there is a product out there that matches the following characteristics:

    - all data put into it is persistent
    - all reads are delivered out of memory
    - data is universally available
    - data lives where it is most needed
    - data is versioned (nice to have)
    - updates are transactional (I'd like ACID characteristics)
    - data is potentially replicated, but always in sync
    - works on Windows
    - is based on or has bindings for .NET
    - is really fast
    - is really robust
    - is redundant
    - is scalable

    I'm looking at things like Microsoft codename "Velocity", but I am not sure whether it fits all of the above characteristics. Likewise, Memcached is not a perfect fit either. The current version of this app opts for an RDBMS with a signaling system for inter-system sync, but latency is too high and versioning of the DB is a pain. I need all the robustness, but with none of the trade-offs.

    Read the article

  • innodb lock wait timeout

    - by shantanuo
    As per the documentation link given below: When a lock wait timeout occurs, the current statement is not executed. The current transaction is not rolled back. (Until MySQL 5.0.13, InnoDB rolled back the entire transaction if a lock wait timeout happened. You can restore this behavior by starting the server with the --innodb_rollback_on_timeout option, available as of MySQL 5.0.32.) http://dev.mysql.com/doc/refman/5.0/en/innodb-parameters.html#sysvar_innodb_lock_wait_timeout Does it mean that when a lock wait timeout occurs, it compromises transactional integrity? "Rollback on timeout" was the default behaviour till 5.0.13 and I guess that was the correct way to handle such situations. Does anyone think that this should be the default behaviour and the user should not be asked to add a parameter for functionality that is taken for granted?

    Read the article

  • Google App Engine - DELETE JPQL Query and Cascading

    - by Taylor Leese
    I noticed that the children of PersistentUser are not deleted when using the JPQL query below. However, the children are deleted if I perform an entityManager.remove(object). Is this expected? Why doesn't the JPQL query below also perform a cascaded delete?

        @OneToMany(mappedBy = "persistentUser", cascade = CascadeType.ALL)
        private Collection<PersistentLogin> persistentLogins;

        ...

        @Override
        @Transactional
        public final void removeUserTokens(final String username) {
            final Query query = entityManager.createQuery(
                "DELETE FROM PersistentUser p WHERE username = :username");
            query.setParameter("username", username);
            query.executeUpdate();
        }
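    A note for context: a bulk JPQL DELETE bypasses the persistence context, so mapping-level cascade settings such as CascadeType.ALL are not applied; only entityManager.remove() on managed instances cascades to children. Below is a minimal sketch of the remove()-based alternative; the PersistentUser/username names simply mirror the snippet above, and the repository wiring is an assumption, not the asker's actual class.

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import javax.persistence.Query;
        import org.springframework.transaction.annotation.Transactional;

        // Sketch only: load the matching entities so they are managed, then remove them
        // one by one; remove() honours CascadeType.ALL on the persistentLogins collection,
        // which a bulk JPQL DELETE does not.
        public class PersistentUserRepository {

            @PersistenceContext
            private EntityManager entityManager;

            @Transactional
            public void removeUserTokens(final String username) {
                final Query query = entityManager.createQuery(
                        "SELECT p FROM PersistentUser p WHERE p.username = :username");
                query.setParameter("username", username);
                final List<?> users = query.getResultList();
                for (final Object user : users) {
                    entityManager.remove(user);
                }
            }
        }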

    Read the article

  • Efficient persistent storage for simple id to table of values map for java

    - by wds
    I need to store some data that follows the simple pattern of mapping an "id" to a full table (with multiple rows) of several columns (i.e. some integer values [u, v, w]). The size of one of these tables would be a couple of KB. Basically what I need is to store a persistent cache of some intermediary results. This could quite easily be implemented as simple SQL, but there are a couple of problems: I need to compress the size of this structure on disk as much as possible (because of the amount of values I'm storing). Also, it's not transactional; I just need to write once and simply read the contents of the entire table, so a relational DB isn't actually a very good fit. I was wondering if anyone had any good suggestions? For some reason I can't seem to come up with something decent at the moment. Something with an API in Java would be especially nice.
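    One possibility, sketched below purely as an illustration: since the data is write-once/read-everything and only a couple of KB per table, each id can be stored as a gzipped binary file of int triples using nothing but the JDK. The class name and the one-file-per-id layout are assumptions, not a recommendation of any specific library.

        import java.io.*;
        import java.util.zip.GZIPInputStream;
        import java.util.zip.GZIPOutputStream;

        // Sketch only: one gzipped file per id under a cache directory, each holding
        // rows of three ints [u, v, w]. Write once, read the whole table back later.
        public final class TripleTableCache {

            private final File dir;

            public TripleTableCache(final File dir) {
                this.dir = dir;
                dir.mkdirs();
            }

            public void write(final String id, final int[][] rows) throws IOException {
                try (DataOutputStream out = new DataOutputStream(
                        new GZIPOutputStream(new FileOutputStream(new File(dir, id + ".gz"))))) {
                    out.writeInt(rows.length);
                    for (final int[] row : rows) {
                        out.writeInt(row[0]);
                        out.writeInt(row[1]);
                        out.writeInt(row[2]);
                    }
                }
            }

            public int[][] read(final String id) throws IOException {
                try (DataInputStream in = new DataInputStream(
                        new GZIPInputStream(new FileInputStream(new File(dir, id + ".gz"))))) {
                    final int[][] rows = new int[in.readInt()][3];
                    for (final int[] row : rows) {
                        row[0] = in.readInt();
                        row[1] = in.readInt();
                        row[2] = in.readInt();
                    }
                    return rows;
                }
            }
        }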

    Read the article

  • Customised email alerts through MailChimp API

    - by user1293351
    I am building a site that runs an automated process every 30 minutes to match up new flights with their respective user. Once this process is completed I want to email the flight details out to the respective user. However, the flight info will be different for every single user, with there being 0-300+ potential emails. Is this something that the MailChimp API will allow or do? I found this page http://apidocs.mailchimp.com/api/how-to/transactional-campaigns.php but I am not sure whether it affects me. Is the STS more suited to this? http://apidocs.mailchimp.com/sts/1.0/ Thanks Alex

    Read the article

  • Ref to map vs. map to refs vs. multiple refs

    - by mikera
    I'm working on a GUI application in Swing+Clojure that requires various mutable pieces of data (e.g. scroll position, user data, filename, selected tool options etc.). I can see at least three different ways of handling this set of data:

    1. Create a ref to a map of all the data:

        (def data (ref { :filename "filename.xml" :scroll [0 0] }))

    2. Create a map of refs to the individual data elements:

        (def datamap { :filename (ref "filename.xml") :scroll (ref [0 0]) })

    3. Create a separate ref for each in the namespace:

        (def scroll (ref [0 0]))
        (def filename (ref "filename.xml"))

    Note: This data will be accessed concurrently, e.g. by background processing threads or the Swing event handling thread. However there probably isn't a need for consistent transactional updates of multiple elements. What would be your recommended approach and why?

    Read the article

  • Is it a good idea to use a computed column as part of a primary key?

    - by Brann
    I've got a table defined as:

        OrderID bigint NOT NULL,
        IDA varchar(50) NULL,
        IDB bigint NULL,
        [ ... 50 other non relevant columns ...]

    The natural primary key for this table would be (OrderID, IDA, IDB), but this is not possible because IDA and IDB can be null (they can both be null, but they are never both defined at the same time). Right now I've got a unique constraint on those 3 columns. Now, the thing is I need a primary key to enable transactional replication, and I'm faced with two choices:

    1. Create an identity column and use it as a primary key.
    2. Create a non-null computed column C containing either IDA or IDB or '' if both columns were null, and use (OrderID, C) as my primary key.

    The second alternative seems cleaner as my PK would be meaningful, and is feasible (see msdn link), but since I've never seen this done anywhere, I was wondering if there were some cons to this approach.

    Read the article

  • Temporary intermediate table

    - by user289429
    In our project to generate massive reports in Oracle we use some permanent tables to hold intermediate results. For example, to generate one report we run a few queries and populate the table; at the final step we join the intermediate table with huge application tables. These intermediate tables are cleared for the next report run. We have a few concerns in performance areas: these intermediate tables are transactional and don't have statistics. Is it a good idea to join these with application tables which are partitioned and have up-to-date statistics? We need the results stored in the intermediate tables to be available across requests from the UI, hence we are not in a position to use Oracle-provided temporary tables. Any thoughts on what could be done would be appreciated.

    Read the article

  • Does it make sense to use BOTH mongodb and mysql in the same rails application?

    - by Brian Armstrong
    I have a good reason to use mongodb for part of my app. But people generally describe it as not a good fit for "transactional" applications like a bank where transactions have to be exact/consistent, etc. Does it make sense to split the models up in Rails and have some of them use MySql and others mongo? Or will this generally cause more problems than it's worth? I'm not building a banking app or anything, but was thinking it might make sense for my users table or transactions table (recording revenue) to do that part in MySql.

    Read the article

  • One account, multiple users, multiple shopping cart in a web application

    - by lemotdit
    I received a somewhat unusual request (imo) for a transactional web site. I have to implement the possibility of having multiple shopping carts for the same user. Those really are shopping carts, not order templates, i.e. a store with several departments ordering under the same account, but with a different person placing orders for a specific department only. Having more than one user per account is not an option since it would involve 'too much' management from the store's owner and the admins. Anyone had to deal with this before? The option so far is to have names for the shopping carts, and a dropdown list or something similar after login to choose the cart, with some kind of 'busy flag' to lock the cart if it's in use in another session.

    Read the article

  • Hibernate / MySQL Bulk insert problem

    - by Marty Pitt
    I'm having trouble getting Hibernate to perform a bulk insert on MySQL. I'm using Hibernate 3.3 and MySQL 5.1. At a high level, this is what's happening:

        @Transactional
        public Set<Long> doUpdate(Project project, IRepository externalSource) {
            List<IEntity> entities = externalSource.loadEntites();
            buildEntities(entities, project);
            persistEntities(project);
        }

        public void persistEntities(Project project) {
            projectDAO.update(project);
        }

    This results in n log entries (1 for every row) as follows:

        Hibernate: insert into ProjectEntity (name, parent_id, path, project_id, state, type) values (?, ?, ?, ?, ?, ?)

    I'd like to see this get batched, so the update is more performant. It's possible that this routine could result in tens of thousands of rows generated, and a DB trip per row is a killer. Why isn't this getting batched? (It's my understanding that batch inserts are supposed to be the default where appropriate in Hibernate.)
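    A likely explanation, offered as a hedged sketch rather than a definitive answer: Hibernate only batches inserts when hibernate.jdbc.batch_size is set, and it silently disables insert batching for entities whose identifier uses IDENTITY generation (the usual choice on MySQL), because the generated key must be returned for every row. The chunked flush/clear pattern below assumes direct Session access; the batch size of 50 and the entity handling are illustrative.

        // Relevant Hibernate configuration (hibernate.cfg.xml or equivalent):
        //   hibernate.jdbc.batch_size = 50
        //   hibernate.order_inserts   = true

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;

        // Sketch only: save in chunks, flushing and clearing so the JDBC batch is sent
        // and the first-level cache stays small.
        public final class BatchInsertExample {

            public static void saveAll(final SessionFactory sessionFactory,
                                       final Iterable<?> entities) {
                final Session session = sessionFactory.openSession();
                final Transaction tx = session.beginTransaction();
                int i = 0;
                for (final Object entity : entities) {
                    session.save(entity);
                    if (++i % 50 == 0) {      // same value as hibernate.jdbc.batch_size
                        session.flush();      // push the current batch to the driver
                        session.clear();      // detach processed entities
                    }
                }
                tx.commit();
                session.close();
            }
        }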

    Read the article

  • What is an Enterprise Java Bean really?

    - by HDave
    On the Tomcat FAQ it says: "Tomcat is not an EJB server. Tomcat is not a full J2EE server." But if I:

    - use Spring to supply an application context
    - annotate my entities with JPA annotations (and use Hibernate as a JPA provider)
    - configure C3P0 as a connection pooling data source
    - annotate my service methods with @Transactional (and use Atomikos as JTA provider)
    - use JAXB for marshalling and unmarshalling
    - and possibly add my own JNDI capability

    then don't I effectively have a JEE application server? And then aren't my beans EJBs? Or is there some other defining characteristic? What is it that a JEE compliant app server gives you that you can't easily/readily get from Tomcat with some 3rd party subsystems?

    Read the article

  • Hibernate inserting into join table

    - by Karl
    I've got several entities. Two of them have a many-to-many relation. When I do a bigger operation on these entities it fails with this exception:

        org.hibernate.exception.ConstraintViolationException: could not insert collection rows:

    I execute the operation in a @Transactional context. I don't do any explicit flushing in my DAOs. The flush is triggered by a query. In the queue are 15 elements (all of the same structure). One of them always fails (but it's always a different one (I checked) and always at a different position). Does anybody have a hint for me for what I might do wrong? My mapping:

        @ManyToMany(targetEntity = CategoryImpl.class)
        protected Set<Category> categories = new HashSet<Category>();

    Read the article

  • Clojure for a lisp illiterate

    - by dbyrne
    I am a lifelong object-oriented programmer. My job is primarily java development, but I have experience in a number of languages. Ruby gave me my first real taste of functional programming. I loved the features Ruby borrowed from the functional paradigm such as closures and continuations. Eventually, I graduated to Scala. This has been a great way to gradually learn to approach non-trivial problems in a functional manner. Now I am interested in Clojure. I know all the sexy features that make it enticing (software transactional memory, macros, etc.), but I just can't get used to "thinking in lisp". I've seen Rich Hickey's screencasts aimed at java programmers, but they are geared towards explaining language features and not approaching real world problems. I am looking for any advice or resources which have made this transition easier for others.

    Read the article

  • What is Java Message Service (JMS) for?

    - by Daniel
    I am currently evaluating JMS and I don't get what I could use it for. Currently, I believe this would be a use case: I want to create a SalesInvoice PDF and print it when a SalesOrder leaves the Warehouse, so during the Delivery transaction I could send a transactional print request which just begins when the SalesOrder transaction completes successfully. Now I found out most JMS products are standalone servers. Why would I need a standalone server for message processing, vs. e.g. some simple in-process processing with the Quartz scheduler? How does it interact with my application? Isn't it much too slow? What are use cases you have already implemented successfully?
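    To make the print-request use case concrete, here is a hedged sketch of sending a message from a transacted JMS session: the broker stores the message durably and a consumer prints it asynchronously, which a purely in-process scheduler typically cannot guarantee if the JVM dies mid-way. The ActiveMQ broker URL, queue name and payload are assumptions for illustration only.

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.MessageProducer;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.jms.TextMessage;
        import org.apache.activemq.ActiveMQConnectionFactory;

        // Sketch only: the message becomes visible to consumers only when the transacted
        // session commits, and the broker keeps it until the print consumer has handled it.
        public final class PrintRequestSender {

            public static void main(final String[] args) throws Exception {
                final ConnectionFactory factory =
                        new ActiveMQConnectionFactory("tcp://localhost:61616");
                final Connection connection = factory.createConnection();
                connection.start();
                final Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                final Queue queue = session.createQueue("salesinvoice.print");
                final MessageProducer producer = session.createProducer(queue);
                final TextMessage message = session.createTextMessage("salesOrderId=4711");
                producer.send(message);
                session.commit();   // roll back instead if the surrounding business step fails
                connection.close();
            }
        }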

    Read the article

  • Grails Services / Transactions / RuntimeException / Testing

    - by Rob
    I'm testing some code in a service with transactional set to true, which talks to a customer-supplied web service. The main part of it looks like:

        class BarcodeService {
            ..
            /// some stuff ...
            try {
                cancelBarCodeResponse = cancelBarCode(cancelBarcodeRequest)
            }
            catch(myCommsException e) {
                throw new RuntimeException(e)
            }
            ...

    where myCommsException extends Exception. I have a test which looks like:

        // As no connection from my machine, it should fail ..
        shouldFailWithCause(RuntimeException) {
            barcodeServices.cancelBarcodeDetails()
        }

    The test fails because it's catching a myCommsException rather than the RuntimeException I thought I'd converted it to. Anyone care to point out what I'm doing wrong? Also, will the fact that it's not a RuntimeException mean any transaction-related work done before my try/catch will actually be written out rather than thrown away? Thanks

    Read the article

  • Set Hibernate session's flush mode in Spring

    - by glaz666
    I am writing integration tests and in one test method I'd like to write some data to the DB and then read it.

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = {"classpath:applicationContext.xml"})
        @TransactionConfiguration()
        @Transactional
        public class SimpleIntegrationTest {

            @Resource
            private DummyDAO dummyDAO;

            /**
             * Tries to store {@link com.example.server.entity.DummyEntity}.
             */
            @Test
            public void testPersistTestEntity() {
                int countBefore = dummyDAO.findAll().size();
                DummyEntity dummyEntity = new DummyEntity();
                dummyDAO.makePersistent(dummyEntity);
                //HERE SHOULD COME SESSION.FLUSH()
                int countAfter = dummyDAO.findAll().size();
                assertEquals(countBefore + 1, countAfter);
            }
        }

    As you can see, between storing and reading the data the session should be flushed, because the default FlushMode is AUTO and thus no data is actually stored in the DB. Question: Can I somehow set the FlushMode to ALWAYS in the session factory or somewhere else, to avoid repeating the session.flush() call? All DB calls in the DAO go through a HibernateTemplate instance. Thanks in advance.
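    One option, sketched under the assumption that the DAO can configure its own HibernateTemplate: the template exposes a flush mode, and FLUSH_EAGER flushes pending changes before query execution in addition to the flush at commit, so findAll() would see the entity persisted earlier in the same test transaction. The wiring below is illustrative, not the actual DummyDAO.

        import org.hibernate.SessionFactory;
        import org.springframework.orm.hibernate3.HibernateTemplate;

        // Sketch only: give the DAO's HibernateTemplate an eager flush mode so every
        // query first flushes pending changes, mimicking an explicit session.flush().
        public class DummyDAO {

            private final HibernateTemplate hibernateTemplate;

            public DummyDAO(final SessionFactory sessionFactory) {
                this.hibernateTemplate = new HibernateTemplate(sessionFactory);
                this.hibernateTemplate.setFlushMode(HibernateTemplate.FLUSH_EAGER);
            }
        }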

    Read the article

  • How to synchronize two (or n) replication processes for MS SQL databases?

    - by Yauheni Sivukha
    There are two master databases and two read-only copies updated by standard transactional replication. I need to map some entity from both read-only databases; let's say that database A contains orders and database B contains lines. The problem is that replication to one database can lag behind replication to the second database, and at the moment of mapping the read-only databases will have inconsistent data. For example: we stored 2 orders with lines at 19:00 and 19:03. The mapping process started at 19:05, but by the moment of mapping, replication for database A had processed all changes up to 19:03, while replication for database B had processed only changes up to 19:00. After mapping we will have an order entity with the order as of 19:03 and lines as of 19:00. The troubles are guaranteed:) In my particular case both databases have a temporal model, so it is possible to fetch data for every time slice, but the problem is to identify the time of the latest replication. Question: How to synchronize replication processes for several databases to avoid the situation described above?

    Read the article

  • [PersistenceException: org.hibernate.exception.SQLGrammarException: could not execute query]

    - by doniyor
    I need help. I am trying to select from the database through a SQL statement in the Play framework, but it gives me an error and I cannot figure out where the problem is. Here is the code:

        @Transactional
        public static Users findByUsernameAndPassword(String username, String password){
            String hash = DigestUtils.md5Hex(password);
            Query q = JPA.em().createNativeQuery("select * from USERS where" +
                    "USERNAME=? and PASSWORD=?").setParameter(1, username).setParameter(2, password);
            List<Users> users = q.getResultList();
            if(users.isEmpty()){
                return null;
            } else{
                return users.get(0);

    Here is the error message:

        [PersistenceException: org.hibernate.exception.SQLGrammarException: could not execute query]

    Can someone help me please! Any help I would appreciate! Thanks
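    For what it's worth, the string concatenation above produces "...from USERS whereUSERNAME=? ..." with no space before USERNAME, which by itself would explain the SQLGrammarException. A sketch of the corrected query construction follows, meant to replace the query-building line inside the method above (table and column names are taken from the excerpt; whether PASSWORD stores the md5 hash or the plain password is an assumption to verify):

        // Sketch only: note the space after "where"; bind the md5 hash if that is what
        // the PASSWORD column actually stores (the excerpt computes it but never uses it).
        Query q = JPA.em()
                .createNativeQuery("select * from USERS where USERNAME = ? and PASSWORD = ?")
                .setParameter(1, username)
                .setParameter(2, hash);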

    Read the article

  • GWT : NULL Session

    - by jidma
    I'm using spring4gwt in my project. I have the following login service implementation:

        @Service("loginService")
        public class LoginServiceImpl extends RemoteServiceServlet implements LoginService {

            @Override
            @Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
            public UserBean checkUser(String userName, String password) throws Exception {
                HttpSession httpSession = getThreadLocalRequest().getSession();
            }
        }

    When I call loginService.checkUser("test", "test") (in hosted mode), I get a null pointer exception, as getThreadLocalRequest() returns NULL instead of the actual session. I haven't tried it in web mode yet. Why would I get a null session? Does it have something to do with spring4gwt? Thank you

    Read the article

  • Reset JPA generated value between tests

    - by Rythmic
    I'm running Spring + Hibernate + JUnit with SpringJUnit4ClassRunner and transactional set to default rollback. I'm using an in-memory Derby DB as the database. Hibernate is used as the JPA provider and I am successfully testing CRUD kinds of stuff. However, I have a problem with JPA and the behaviour of @GeneratedValue. If I run one of my tests in isolation, two entities are persisted with ids 1 and 2. If I run the whole test suite the ids are instead 6 and 7. Spring does rollbacks just fine, so there are only these two entities in the database after addition and of course zero before. But the behaviour of @GeneratedValue doesn't allow me to reliably findById unless I return the id from the dao.add(Entity e) method. I don't feel like doing that just for the sake of testing, or is it good practise to return the entity that was persisted, so I should be doing it anyway?
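    On the closing question: returning the persisted instance from the DAO is a common, low-cost pattern, because after persist() the managed entity carries its generated id, so the test can look it up without assuming which values the generator produced. A minimal sketch follows; the generic DAO shape and the test usage are assumptions that mirror, but are not, the excerpt's actual code.

        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        // Sketch only: a generic save method that returns the managed instance, so the
        // caller can read the generated id instead of guessing what the sequence produced.
        public class GenericDAO {

            @PersistenceContext
            private EntityManager entityManager;

            public <T> T makePersistent(final T entity) {
                entityManager.persist(entity);   // id is typically populated here (or at flush, depending on the generator)
                return entity;
            }
        }

        // Illustrative test usage:
        //   DummyEntity saved = dummyDAO.makePersistent(new DummyEntity());
        //   assertNotNull(dummyDAO.findById(saved.getId()));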

    Read the article

  • The time-to-reach-queue has elapsed

    - by nieve
    I'm attempting to send a message to a remote private queue from an error queue with PowerShell. The code I use looks like this:

        $msg = $src_q.Peek()
        $msg.Label = GetLabelWithoutFailedQueue($msg)
        $msg.UseDeadLetterQueue = $true
        $msg.UseTracing = $true
        $msg.AcknowledgeType = [System.Messaging.AcknowledgeTypes]::NegativeReceive
        $msg.TimeToBeReceived = [System.TimeSpan]::FromSeconds(10)
        $msg.TimeToReachQueue = [System.TimeSpan]::FromSeconds(10)
        $tx = new-object System.Messaging.MessageQueueTransaction
        $tx.Begin()
        $dest_q.Send($msg, $tx)
        $tx.Commit()

    The message keeps on appearing on the transactional dead letter queue with the class: "The time-to-reach-queue has elapsed." Anyone's got any idea what could trigger such an error? The queue definitely exists; I do manage to peek it. Also, the reason I get the message from the error queue by peeking is just for testing purposes; I have tried doing the same thing with Receive and the result is the same.

    Read the article

  • SQL Server - how to determine if indexes aren't being used?

    - by rwmnau
    I have a high-demand transactional database that I think is over-indexed. Originally, it didn't have any indexes at all, so adding some for common processes made a huge difference. However, over time, we've created indexes to speed up individual queries, and some of the most popular tables have 10-15 different indexes on them, and in some cases, the indexes are only slightly different from each other, or are the same columns in a different order. Is there a straightforward way to watch database activity and tell if any indexes are not hit anymore, or what their usage percentage is? I'm concerned that indexes were created to speed up either a single daily/weekly query, or even a query that's not being run anymore, but the index still has to be kept up to date every time the data changes. In the case of the high-traffic tables, that's a dozen times/second, and I want to eliminate indexes that are weighing down data updates while providing only marginal improvement.

    Read the article
