Search Results

Search found 17627 results on 706 pages for 'hierarchical query'.


  • .NET XPath Returns No Results

    - by Stacy Vicknair
    When using XPath in .NET, one of the gotchas to be aware of is that all namespaces must be named, otherwise you’ll end up with no results. Default namespaces that are specified with xmlns alone still need to be recognized in the XPath query! Say I had a bit of XML like what is returned from the QueryService web service in SharePoint:

        <?xml version="1.0" encoding="UTF-8"?>
        <ResponsePacket xmlns="urn:Microsoft.Search.Response">
          <Response>
            <Range>
              ...
              <Results>
                <Document xmlns="urn:Microsoft.Search.Response.Document" relevance="849">
                ...

    When consuming and navigating this response with XPath it is necessary to name all namespaces. Those named namespaces must then be used to qualify each element being requested (i.e. doc:Document). In VB:

        Dim xdoc = New XPathDocument(reader)
        Dim nav = xdoc.CreateNavigator()
        Dim nsMgr = New XmlNamespaceManager(nav.NameTable)
        nsMgr.AddNamespace("resp", "urn:Microsoft.Search.Response")
        nsMgr.AddNamespace("doc", "urn:Microsoft.Search.Response.Document")

        Dim results = nav.Select("//doc:Document", nsMgr)

    In C#:

        var xdoc = new XPathDocument(reader);
        var nav = xdoc.CreateNavigator();
        var nsMgr = new XmlNamespaceManager(nav.NameTable);

        nsMgr.AddNamespace("resp", "urn:Microsoft.Search.Response");
        nsMgr.AddNamespace("doc", "urn:Microsoft.Search.Response.Document");

        var results = nav.Select("//doc:Document", nsMgr);

    Read the article

  • MongoDB: Replicate data in documents vs. “join”

    - by JavierCane
    Disclaimer: This is a question derived from this one. What do you think about the following use case? I have a collection containing orders. These orders have a lot of related information needed by my current queries (think of the products; the buyer information; the region, country and state of the sale point; and so on). In order to think with a de-normalized approach, I shouldn't put identifiers of these related items in my main orders collection. Instead, I have to repeat all the information for each order (i.e. I will repeat the buyer's name, surname, etc. for each of their orders). Assuming the previous premise, I'm committing to maintaining all the data related to an order without a lot of updates (because if I modify the buyer's name, I'll have to iterate through all orders updating the ones made by the same buyer, and as MongoDB locks at the document level on updates, I would be blocking each order document at the moment of the update). Will I have to replicate all the products' related data? (i.e. category, maker and optional attributes like color, size…) What if a new feature is requested and I have to make a lot of queries with the products "as the entry point of the query"? (i.e. reports showing the products' sales performance grouped by region, country, or whatever.) Is it fair enough to apply the $unwind operation to my original orders collection? (What about the performance?) Should I create another collection with these queries in mind and replicate again all the products' information (and their orders)? Wouldn't it be better to store a product_id in the original orders collection in order to be more tolerant of requirement changes? (What about emulating JOINs?) Would the optimal approach be a mixed solution with an RDBMS like MySQL in order to retrieve the complete data? I mean: store product, user and location identifiers in the orders collection and have queries in MySQL like getAllUsersDataByIds in which I would perform a SELECT * FROM users WHERE user_id IN ( :identifiers_retrieved_from_the_mongodb_query )
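
    As a minimal sketch of the product_id approach mentioned in the question, the JOIN can be emulated on the application side with one extra query. Python/pymongo is used here purely for illustration; the database, collection and field names are assumptions:

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        db = client.shop  # hypothetical database name

        # Orders store only a reference to the product, not a full copy of its data.
        db.orders.insert_one({"buyer_id": 42, "product_id": 7, "quantity": 2})

        # "JOIN" emulation: fetch the orders first, then fetch the referenced
        # products in one extra query with $in, and stitch them together in memory.
        orders = list(db.orders.find({"buyer_id": 42}))
        product_ids = list({o["product_id"] for o in orders})
        products = {p["_id"]: p for p in db.products.find({"_id": {"$in": product_ids}})}

        for o in orders:
            o["product"] = products.get(o["product_id"])

    The trade-off is one extra round trip per screen instead of repeating (and later mass-updating) the product data inside every order document.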

    Read the article

  • Procurement and E-Business Suite Product Analyzers .. Can you use this tool to resolve your SR?

    - by LindaJ-Oracle
    Procurement and E-Business Suite Product Analyzers (Doc ID 1545562.1). Analyzers are query/read-only tools with easy-to-read HTML output. The tools are delivered by EBS Support via My Oracle Support document IDs for ease of use. The Analyzer scripts are meant to be part of your production maintenance program, run by your sysadmin or by designated end users. The result is an easy-to-read HTML report that provides recommendations, solutions and early warnings about items that should be reviewed and corrected. Each analyzer can be run on demand or scheduled for repeatability and emailed to critical reviewers. There are several Analyzers available for E-Business Suite Applications Technology Group, Financials, and Manufacturing, including some of the following topics. Review them all at (Doc ID 1545562.1).
        Workflow
        Concurrent Processing
        Clone Log Parser Utility (Rapid Clone)
        Invoices, Payments, Accounting, Suppliers and EBTax
        Validate Data before Period Close
        EBTax Setup
        Payables Trial Balance
        Internet Expenses
        AutoInvoice Post-Process
        ASCP Performance
        PO Approval
        iProcurement Items
    For the Procurement-specific Analyzers, access them directly at:
        R12 IP Item Analyzer Diagnostic Script (Doc ID 1586248.1)
        R12: PO Approval Analyzer Diagnostic Script (Doc ID 1525670.1)

    Read the article

  • OS X: What does the '@' attribute on a file mean?

    - by claytontstanley
    On a Snow Leopard machine, at the Terminal:

        la ~/src/rmcl/ | grep RMCL
        -rw-r--r--@ 1 claytonstanley staff 6766167 Nov 13 2009 RMCL

    What is that '@' attribute? This file is part of an older OS X program that runs under Rosetta. I'm having issues where some older programs running under Rosetta require the @ attribute when opening files. But I'm not sure what that attribute is, so I have no way to know how to add/remove it. I did try a thorough Google search on this, but I wasn't able to find the answer. I would have thought this would be an easy one to find. Maybe the Google query isn't behaving properly because of the single '@' special character. Any info is much appreciated. Thanks!

    Read the article

  • Setting up a DNS server for a 3rd level domain

    - by user45339
    If I would like a client to use his own DNS servers for a third-level domain name (i.e. test.domain.com), how would I be able to do that? So I have name servers ns1.domain.com + ns2.domain.com for domain.com, but now I want ns1.rabbit.com and ns2.rabbit.com to serve test.domain.com. How can this be done? I know it's possible because I saw it at some providers. Then, as a second (and related) part: how do I set up a WHOIS for that test.domain.com? So that, if you query my server for the information about test.domain.com, it'll be different from the info for domain.com? Thanks

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone that has an interesting situation. He has 15,000 customers, and he asks if he should have a separate database for each customer's data. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and – very important – what are the security requirements? From the answers to these types of questions, you have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
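
    As a minimal sketch of that weighted-scoring spreadsheet in code (Python purely for illustration; the requirement names, weights and scores are made-up assumptions, not recommendations):

        # Score each design choice against each weighted requirement; the highest total wins.
        requirements = {"security isolation": 3, "admin overhead": 2, "cross-customer queries": 2}

        choices = {
            "one database, many schemas": {"security isolation": 2, "admin overhead": 1, "cross-customer queries": 3},
            "database per customer":      {"security isolation": 3, "admin overhead": 1, "cross-customer queries": 1},
            "database per region":        {"security isolation": 2, "admin overhead": 2, "cross-customer queries": 2},
        }

        totals = {
            name: sum(score * requirements[req] for req, score in scores.items())
            for name, scores in choices.items()
        }

        for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
            print(f"{total:3d}  {name}")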

    Read the article

  • What is a Relational Database Management System (RDBMS)?

    A Relational Database Management System (RDBMS) can also be called a traditional database: one that uses a Structured Query Language (SQL) to provide access to stored data while ensuring the integrity of the data. The data is stored in a collection of tables defined by relationships between data items. In addition, data is permitted to be joined in new relationships. Traditional databases primarily process data through transactions, called transaction processing. Transaction processing is the methodology of grouping related business operations based on predefined business events. An example of this can be seen when a person attempts to purchase an item from an online e-tailer. The business must execute specific operations for the related business event. In this case, the business must store the following information: Customer Info, Order Info, Order Item Info, Customer Payment Data, Payment Results, and Current Order Status.
    Example: pseudo-SQL operations needed for processing an online e-tailer sale.
        Insert Customer into Customers
        Insert New Order into Orders
        Insert Each New Order Item into OrderItems
        Insert Customer Payment Info into PaymentInfo
        Insert Payment Processing Result into PaymentDetails
        Update Customer for Current Order Status
    Common Relational Database Management Systems
        Microsoft SQL Server
        Microsoft Access
        Oracle
        MySQL
        DB2
    It is important to note that no current RDBMS has fully implemented all of the relational principles.
    Common RDBMS Traits
        Volatile Data
        Supports Transaction Processing
        Optimized for Updates and Simple Queries
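
    As a minimal sketch of those pseudo-SQL operations grouped into a single transaction (Python with the standard-library SQLite driver purely for illustration; the table and column names are assumptions):

        import sqlite3

        conn = sqlite3.connect("shop.db")  # hypothetical database file with the tables already created
        cur = conn.cursor()

        try:
            # Group the related operations so the business event is all-or-nothing.
            cur.execute("INSERT INTO Customers (name) VALUES (?)", ("Alice",))
            customer_id = cur.lastrowid
            cur.execute("INSERT INTO Orders (customer_id, status) VALUES (?, ?)", (customer_id, "PENDING"))
            order_id = cur.lastrowid
            for item_id, qty in [(101, 2), (205, 1)]:
                cur.execute("INSERT INTO OrderItems (order_id, item_id, quantity) VALUES (?, ?, ?)",
                            (order_id, item_id, qty))
            cur.execute("INSERT INTO PaymentInfo (customer_id, method) VALUES (?, ?)", (customer_id, "card"))
            cur.execute("INSERT INTO PaymentDetails (order_id, result) VALUES (?, ?)", (order_id, "APPROVED"))
            cur.execute("UPDATE Orders SET status = ? WHERE id = ?", ("PAID", order_id))
            conn.commit()
        except Exception:
            conn.rollback()  # any failure undoes the whole business event
            raise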

    Read the article

  • Search multiple tables

    - by gilden
    I have developed a web application that is used mainly for archiving all sorts of textual material (documents, references to articles, books, magazines etc.). There can be any given number of archive tables in my system, each with its own schema. The schema can be changed by a moderator through the application (imagine something similar to a really dumbed-down version of phpMyAdmin). Users can search for anything across all of the tables. By using FULLTEXT indexes together with substring searching (for fields which do not support FULLTEXT indexing), the script inserts the results of a search into a single table, and by ordering these results by the similarity measure I can fairly easily return the paginated results. However, this approach has a few problems:
        - substring searching can only match exact results
        - the 50% rule applies to each table separately, and thus MySQL may not return important matches or may too naively discard common words
        - it is quite expensive in terms of query count and execution time (not an issue right now, as there's not a lot of data in the tables yet)
        - normalized data is not even searched (I have different tables for categories, languages and file attachments)
    My planned solution: create a single table with columns similar to id, table_id, row_id, data. Every time a row is created/modified/deleted in any of the data tables, this central table also gets updated, with the data column containing a concatenation of all the fields in the row. I could then create a single index for Sphinx and use it for searches instead. Are there any more efficient solutions or best practices for approaching this? Thanks.
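
    As a minimal sketch of the planned central-table update (Python with a DB-API/MySQL-style connection purely for illustration; the search_index table, its unique key on table_name/row_id, and the helper itself are assumptions):

        def sync_search_row(conn, table_name, row_id):
            """Concatenate all fields of one archive row and upsert it into the central search table."""
            with conn.cursor() as cur:
                cur.execute(f"SELECT * FROM {table_name} WHERE id = %s", (row_id,))
                row = cur.fetchone()
                if row is None:
                    # The source row was deleted, so drop it from the search table too.
                    cur.execute("DELETE FROM search_index WHERE table_name = %s AND row_id = %s",
                                (table_name, row_id))
                else:
                    data = " ".join(str(value) for value in row if value is not None)
                    cur.execute(
                        "REPLACE INTO search_index (table_name, row_id, data) VALUES (%s, %s, %s)",
                        (table_name, row_id, data),
                    )
            conn.commit()

    Calling this from every create/update/delete code path (or from triggers) keeps the single Sphinx source table current.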

    Read the article

  • Software solution from the 2000's, should I attempt to patch or remake the whole thing?

    - by ShadowScripter
    I was sent out to discuss a system that a certain company is currently using and what should be done with it. The company manufactures various carton displays. This system was developed to keep track of clients, orders and prices. A lot has happened since the system was created, and the system is now, as the manager described it, "locked up" and "problematic", which I translate as "not dynamic" and "unstable". Some info about the system:
        - It was developed around the year 2000
        - Fairly small system: 2-5 users, 6 forms, ~8 tables with average quantities of data
        - Built on early Visual Basic, forms created with drag-and-drop design. The interface is basically just a window with a menu and some forms
        - Uses an MSSQL database (SQL 2005 server) to store data and an ODBC driver to query it; data was migrated from Excel before this system, and before Excel it was handled, calculated and written by hand on paper
        - Users work in a Microsoft XP environment (and up)
    Their main problem is that they can't correctly adjust and calculate prices or add new carton types etc. anymore, because they can't (or rather, they don't know how to) touch the data on the server. I suggested 3 possible solutions:
        1. Attempt to patch the current system
        2. Create a fresh new interface (preferably in a similar environment, VB.net or VB based)
        3. Bring it back to an Excel solution, considering it is such a small system
    There might be more options, but these are the ones I could think of. My questions are: What should I recommend and why? What are, or could be, the pros and cons of these alternatives? Are there other (possibly better) alternatives?

    Read the article

  • just another apache to nginx rewrite question

    - by Brandon
    I have the following Apache rewrite directives:

        RewriteCond %{REQUEST_URI} ^/proxy(/|$) [NC]
        RewriteCond %{QUERY_STRING} (^|&)uri=(.*?)(&|$) [NC]
        RewriteRule .* /api/vs1.0/%2 [NC,L]

    And I'm trying out nginx, so trying to move the rewrites over. I came up with...

        rewrite ^/proxy(/|$) /api/vs1.0/$2 last;
        rewrite (^|&)uri=(.*?)(&|$) /api/vs1.0/$2 last;

    ...which is probably grossly incorrect. I'm just a mere web developer, so I was wondering if anyone could lend a hand here. I would be much obliged. I see that I am ignoring the query string specification, but I'm thinking that it shouldn't matter. I only have a vague idea of what the original rewrite is accomplishing, so I haven't much hope of coming up with something decent here, despite reading the relevant documentation for both servers.

    Read the article

  • Are there any Powershell modules specifically for IIS6 administration?

    - by program247365
    I'm looking to manage/administer many IIS6 servers remotely via PowerShell (query sites, IIS settings, etc.). Is this possible? Is there a Microsoft-supported module out there? Or do I have to use WMI-Object/WebAdministration? If so, could someone give me some quick instructions on doing some simple "get info" commands in PowerShell against a remote IIS6 machine? (It's frustrating that there is a nice Microsoft-supported IIS7 module out there: http://learn.iis.net/page.aspx/428/getting-started-with-the-iis-70-powershell-snap-in/ But not an easily found IIS6 one.)

    Read the article

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a Game Engine
    I am planning a computer game and its engine. There will be a 3-dimensional world with a first-person view, and it will be single player for now. The programming language is C++ and it uses OpenGL.
    Data-Centered Design Decision
    My design decision is to use a data-centered architecture where there is a global event manager and a global data manager. There are many components like physics, input, sound, renderer, AI, ... Each component can trigger and listen to events. Moreover, each component can read, edit, create and remove data. The question is about the data manager.
    Whether to Use a Relational Database
    Should I use a SQL database, e.g. SQLite or MySQL, to store the game data? This would contain virtually all game content like items, characters, inventories, ... except for meshes and textures, which are even more performance-related, so I will keep them in memory. Is a SQL database fast enough for realtime reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?
    Advantages Would Be
    The advantages of using a relational database like MySQL would be the data-oriented structure, which allows fast computation. I would not need objects for representing entities. I could easily query data of objects near the player needed for rendering, and I wouldn't have to take care of data of objects far away. Moreover, there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy, because there already is a place where the whole game state is stored.
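
    As a minimal sketch of the realtime read/write pattern being asked about (Python with the standard-library SQLite driver, purely to illustrate the access pattern; the entity table and the radius query are assumptions):

        import sqlite3

        # An in-memory SQLite database keeps the relational model without touching disk every frame.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE entities (id INTEGER PRIMARY KEY, kind TEXT, x REAL, y REAL, z REAL)")
        conn.executemany("INSERT INTO entities (kind, x, y, z) VALUES (?, ?, ?, ?)",
                         [("npc", 10.0, 0.0, 12.0), ("item", 250.0, 0.0, 80.0), ("npc", 11.5, 0.0, 9.0)])

        # Realtime write: update one character's position each tick.
        conn.execute("UPDATE entities SET x = ?, y = ?, z = ? WHERE id = ?", (10.2, 0.0, 12.4, 1))

        # Query only the objects near the player that are needed for rendering.
        px, pz, radius = 10.0, 10.0, 50.0
        near = conn.execute(
            "SELECT id, kind, x, z FROM entities WHERE x BETWEEN ? AND ? AND z BETWEEN ? AND ?",
            (px - radius, px + radius, pz - radius, pz + radius),
        ).fetchall()
        print(near)

    Timing a loop of such updates and queries against the engine's frame budget would answer the "fast enough?" question more reliably than any general rule.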

    Read the article

  • How to force ADF to speak your language (or any common language)

    - by Blueberry Coder
    When I started working for Oracle, one of the first tasks I was given was to contribute some content to a great ADF course Frank and Chris are building. Among other things, they asked me to work on a module about Internationalization. While doing research work, I unearthed a little gem I had overlooked all those years. JDeveloper, as you may know, speaks your language - as long as your language is English, that is. Oracle ADF, on the other hand, is a citizen of the world. It is available in more than 25 different languages. But while this is a wonderful feature for end users, it is rather cumbersome for developers. Why is that? Have you ever tried to search the OTN forums for a solution with a non-English error message as your query? I have, once. But how can you force ADF to use English for its logging operations? Playing with your system settings will not help, unfortunately. By default, ADF will output its error messages in the selected locale for the operating system account the application server runs on. The only way to change this behavior is to pass initialization parameters to the JVM used by the application server. It is even possible to specify the language and country/region separately. In the example below, we choose English and the United States respectively.

        -Duser.language=en -Duser.country=US

    In the case of WebLogic Server, it is possible to add such parameters in setDomainEnv.sh (or .cmd) to apply the settings to all the managed servers present on a node. In the coming weeks, I will write a few posts about other internationalization issues. Is there anything you would like me to cover? Let me know in the comments.

    Read the article

  • What is the recommended MongoDB schema for this quiz-engine scenario?

    - by hughesdan
    I'm working on a quiz engine for learning a foreign language. The engine shows users four images simultaneously and then plays an audio file. The user has to match the audio to the correct image. Below is my MongoDB document structure. Each document consists of an image file reference and an array of references to audio files that match that image. To generate a quiz instance I select four documents at random, show the images and then play one audio file from the four documents at random. The next step in my application development is to decide on the best document schema for storing user guesses. There are several requirements to consider:
        - I need to be able to report statistics at a user level (for example, total correct answers, total guesses, mean accuracy, etc.)
        - I need to be able to query images based on the user's learning progress (for example, select 4 documents where guess count is 10 and accuracy is <= 0.50)
        - The schema needs to be optimized for fast quiz generation
        - The schema must not cause future scaling issues vis-à-vis document size (assume 1mm users who make an average of 1000 guesses)
    Given all of this as background information, what would be the recommended schema? For example, would you store each guess in the Image document, or perhaps in a User document (not shown), or in a new document collection created for logging guesses? Would you recommend logging the raw guess data, or would you pre-compute statistics by incrementing counters within the relevant document?
    Schema for Image Collection:
        _id "505bcc7a45c978be24000005"
        date 2012-09-21 02:10:02 UTC
        imageFileName "BD3E134A-C7B3-4405-9004-ED573DF477FE-29879-0000395CF1091601"
        random 0.26997075392864645
        user "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
        audioFiles [
            0 {
                audioFileName "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2"
                user "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
                audioLanguage "English"
                date 2012-09-22 01:15:04 UTC
            }
            1 {
                audioFileName "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2"
                user "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
                audioLanguage "Spanish"
                date 2012-09-22 01:17:04 UTC
            }
        ]
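
    One possible shape for the guess data, combining a raw log collection with pre-computed counters, as a minimal sketch (Python/pymongo purely for illustration; the collection names, counter fields and a separately maintained accuracy field are assumptions):

        from pymongo import MongoClient

        db = MongoClient("mongodb://localhost:27017").quiz  # hypothetical database name

        def record_guess(user_id, image_id, correct):
            # Raw log: one small document per guess in its own collection, so the
            # Image documents never grow without bound as guesses accumulate.
            db.guesses.insert_one({"user": user_id, "image": image_id, "correct": correct})

            # Pre-computed counters for fast reporting and quiz generation.
            inc = {"stats.guesses": 1, "stats.correct": 1 if correct else 0}
            db.images.update_one({"_id": image_id}, {"$inc": inc})
            db.users.update_one({"_id": user_id}, {"$inc": inc})

        # Example progress query: images guessed at least 10 times with accuracy at or below 50%
        # (this assumes an accuracy field recomputed from the counters whenever they change).
        hard_images = db.images.find(
            {"stats.guesses": {"$gte": 10}, "stats.accuracy": {"$lte": 0.5}}
        ).limit(4)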

    Read the article

  • .htaccess URL rewrite to multi-parameter item

    - by MrCS
    I just spent the last 10 hours of my life on this and am running in circles, so I was hoping someone might be able to help me. I want a specific URL to load like this:

        http://example.com/f/2011/image.png?attribute=small

    When a URL in a format such as this hits, I'd like to rewrite it so that it hits the server as:

        http://example.com/generate.php?f=2011/image.png&attribute=small

    Based on the above, my question is two-fold: How can I rewrite the URL in .htaccess to meet my requirements above? And if the original URL didn't have the attribute query string parameter, how can I ensure attribute will be false/blank/etc. when it hits the server via .htaccess?

    Read the article

  • What is a "good" tool to password-protect .pdf files?

    - by Marius Hofert
    What is a "good" tool to encrypt (password protect) .pdf files? (without being required to buy additional software; the protection can be created under linux but the password query should work on Windows, too) I know that zip can do it: zip zipfile_name_without_ending -e files_to_encrypt.foo What I don't like about this is that for a single file, you have to use Winzip to open the zip and then click the file again. I rather would like to be prompted for a password when opening the .pdf (single file case). I know that pdftk can do this: pdftk foo.pdf output foo_protected.pdf user_pw mypassword. The problem here is that the password is displayed in the terminal -- even if you use ... user_pw PROMPT. But in the end you get a password-protected .pdf and you are prompted for the password when opening the file.

    Read the article

  • Copying files between servers by creation time

    - by driftux
    My bash scripting knowledge is very weak, that's why I'm asking for help here. What is the most efficient bash script, performance-wise, to find and copy files from one Linux server to another using the specifications described below? I need a bash script which finds only new files created on server A, in directories named "Z", within the interval from 0 to 10 minutes ago, and then transfers them to server B. I think it can be done by building a command and executing it for each new file found:

        scp /X/Y.../Z/file root@hostname:/X/Y.../Z/

    If the script finds no such remote path on server B, it should skip that file and continue with the next file whose directory does exist. Files should be copied with permissions, group, owner and creation time preserved. X/Y... are various directory paths. I want to set up a cron job to execute this script every 10 minutes, so performance is very important in this case. Thank you.
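
    A minimal sketch of the find-and-copy step (Python purely for illustration; the source root, remote host and the fixed directory name "Z" are placeholder assumptions):

        import os
        import subprocess
        import time

        SOURCE_ROOT = "/X"              # hypothetical root containing the /X/Y.../Z/ trees
        REMOTE = "root@hostname"
        WINDOW_SECONDS = 10 * 60

        cutoff = time.time() - WINDOW_SECONDS
        for dirpath, dirnames, filenames in os.walk(SOURCE_ROOT):
            if os.path.basename(dirpath) != "Z":
                continue
            for name in filenames:
                path = os.path.join(dirpath, name)
                # st_mtime is used here; true creation time is not portably available on Linux.
                if os.stat(path).st_mtime >= cutoff:
                    # -p preserves modification times and modes; preserving ownership
                    # additionally requires root on the receiving side.
                    subprocess.run(["scp", "-p", path, f"{REMOTE}:{dirpath}/"], check=False)

    The equivalent idea in bash would hang off find's -mmin -10 test, but the skeleton above shows the pieces the script needs: select the Z directories, filter by age, then scp with attributes preserved.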

    Read the article

  • Error connecting to the application.

    - by ahmed
    Hi guys, let me explain the current scenario. We have an ASP.NET application on Framework 2 running on an intranet (Windows 2003 Server and SQL Server 2000). Now we have an XP machine where we installed and configured IIS with a virtual directory pointing to the local XP machine, and this machine is connected to our intranet. We have copied the same application files from the server to this XP machine, but the connection string/database of the application is still pointing at the intranet server. The problem is that when we try to run the application on the XP machine we get this error: An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) Is this query related to, or appropriate for, this site or Stack Overflow?

    Read the article

  • How to Read XML and Generate SQL Insert

    - by hackerkatt
    I am trying to write a VBScript to read an XML file (downloaded daily) and insert the information into an MSSQL DB. The content of the XML is a list of CDRs (Call Data Records). I need to parse the file and insert the CDRs into a table. I'm a Ruby, Perl, PHP, JavaScript, SQL, ... programmer, but I've really never written any VBScript. I've done some googling and found a number of examples of how to generate XML from a SQL query, but not the reverse. Any help/suggestions would be greatly appreciated. Thank you!
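
    While the question asks for VBScript, the overall parse-and-insert flow can be sketched in a few lines; the Python version below (with xml.etree and pyodbc) is offered purely to illustrate the approach, and the XML element names, table name and connection string are assumptions about the CDR file:

        import xml.etree.ElementTree as ET

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbserver;DATABASE=Calls;Trusted_Connection=yes"
        )
        cur = conn.cursor()

        tree = ET.parse("cdrs_daily.xml")
        for cdr in tree.getroot().iter("cdr"):
            # Parameterized inserts avoid building SQL strings by hand.
            cur.execute(
                "INSERT INTO cdr_records (caller, callee, start_time, duration) VALUES (?, ?, ?, ?)",
                cdr.findtext("caller"),
                cdr.findtext("callee"),
                cdr.findtext("start_time"),
                cdr.findtext("duration"),
            )

        conn.commit()
        conn.close()

    In VBScript the same shape would use MSXML2.DOMDocument to walk the nodes and an ADODB command with parameters for the inserts.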

    Read the article

  • Storing revisions of a document

    - by dev.e.loper
    This is a follow-up question to my original question. I'm thinking of going with generating diffs and storing those diffs in the database 'History' table. I'm using the diff-match-patch library to generate what is called a 'patch'. On every save, I compare the previous and new version and generate this patch. The patch could be used to generate a document at a specific point in time. My dilemma is how to store this data. Should I:
        a. Insert a new database record for every patch?
        b. Store these patches in a JavaScript array and store that array in the History table, so there is only one History record per document, with an array of all the patches?
    Concerns with:
        a. Too many db records generated; it will be slow and CPU-intensive to query.
        b. Only one record; if that record is somehow corrupted/deleted, the entire revision history is gone.
    I'm looking for suggestions and concerns with either approach.
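
    A minimal sketch of option (a), one History row per save, using the Python port of diff-match-patch (the save_history_row helper and the History table layout are hypothetical assumptions):

        from diff_match_patch import diff_match_patch

        dmp = diff_match_patch()

        def save_revision(doc_id, previous_text, new_text):
            patches = dmp.patch_make(previous_text, new_text)
            patch_text = dmp.patch_toText(patches)   # compact textual patch format, easy to store
            save_history_row(doc_id, patch_text)     # hypothetical: INSERT INTO History (doc_id, patch) ...

        def rebuild(initial_text, patch_texts):
            """Replay stored patches in order to reconstruct the document at a point in time."""
            text = initial_text
            for patch_text in patch_texts:
                patches = dmp.patch_fromText(patch_text)
                text, _applied = dmp.patch_apply(patches, text)
            return text

    With one row per patch, a corrupted patch only breaks reconstruction from that revision onward, and periodic full snapshots would limit even that damage.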

    Read the article

  • Does SQL Server Management Studio 2008 Activity Monitor work with SQL Server 2000?

    - by Andrew Janke
    I am trying to use SQL Server Management Studio 2008's Activity Monitor with a SQL Server 2000 instance to diagnose some query performance issues. I can connect SSMS 2008 to the db fine, and use it to browse objects and run queries. But when I press the Activity Monitor button, it pops up an error message saying: Microsoft SQL Server Management Studio: This operation does not support connections to Microsoft SQL Server Personal Edition version 8.00.818. This MSDN article implies that Activity Monitor works with SQL Server 2000. Is it the fact that it's Personal Edition that's preventing it from working? The error message doesn't make clear whether it's the edition or the version that's the problem.

    Read the article

  • Configuring memcached for a particular scenario

    - by pradeepchhetri
    I have a web application which queries an OpenTSDB server (which uses an HBase cluster in the backend) for the datapoints of different metrics, and I am plotting those metrics using the dygraphs JavaScript graphing library. Since getting all the datapoints of the past day from OpenTSDB for a particular metric itself takes nearly 2 seconds, my application, which plots nearly 25 metrics, is becoming very slow. In order to reduce this latency, I am thinking of using the memcached module of PHP 5 to cache all the queries. But I have a few questions regarding memcached:
        - Is there any way I can configure memcached to keep updating its cache in the background, by running some command-line queries at a particular interval of time?
        - Is there any way I can configure memcached to always answer a query from the cache instead of first updating the cache? My application just plots datapoints for the past day, and missing some datapoints is not that critical.
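
    Memcached itself cannot refresh entries or run queries on its own; the usual workaround is a small warmer script run from cron that re-queries OpenTSDB and overwrites the cached values, while the web application only ever reads from the cache. A minimal sketch (Python purely for illustration; the metric names, OpenTSDB URL and key scheme are assumptions):

        import json

        import requests
        from pymemcache.client.base import Client

        METRICS = ["sys.cpu.user", "proc.loadavg.1min"]  # hypothetical metric names
        cache = Client(("localhost", 11211))

        for metric in METRICS:
            resp = requests.get(
                "http://opentsdb:4242/api/query",
                params={"start": "1d-ago", "m": f"sum:{metric}"},
                timeout=30,
            )
            # expire=0 keeps the entry until the next cron run overwrites it, so page
            # requests never wait on a 2-second OpenTSDB query.
            cache.set(f"tsd:{metric}:1d", json.dumps(resp.json()), expire=0)

    The PHP side would then read the tsd:<metric>:1d keys unconditionally, matching the "always answer from cache" requirement at the cost of data being up to one cron interval old.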

    Read the article

  • Redirecting to a URL that has a ? in it

    - by dkmojo
    I have a somewhat strange problem. A client has moved their site to WordPress - cool, no problem. They use a service for link exchanges that has a WordPress plugin. The issue is that the new Links pages use a query string to display the correct content, and I cannot figure out how to redirect the old URLs correctly. Old URLs look like this: domain.com/link/category-name.html. The plugin makes them look like this in WP: domain.com/links/?page=category-name.html. How in the world can I get the redirect to work properly? Here's what I have tried:

        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/?page=actors.html
        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/%3Fpage=actors.html
        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/\?page=actors.html

    But none of those have worked. Any help is greatly appreciated!

    Read the article

  • Helping to Reduce Page Compression Failures Rate

    - by Vasil Dimov
    When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. That said, compression failures are bad for performance and should be minimized. Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. And so it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even on a per-index basis, wouldn't it? This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:

        +-----------------+--------------+------+
        | Field           | Type         | Null |
        +-----------------+--------------+------+
        | database_name   | varchar(192) | NO   |
        | table_name      | varchar(192) | NO   |
        | index_name      | varchar(192) | NO   |
        | compress_ops    | int(11)      | NO   |
        | compress_ops_ok | int(11)      | NO   |
        | compress_time   | int(11)      | NO   |
        | uncompress_ops  | int(11)      | NO   |
        | uncompress_time | int(11)      | NO   |
        +-----------------+--------------+------+

    similarly to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size". So a query like

        SELECT database_name, table_name, index_name,
               compress_ops - compress_ops_ok AS failures
        FROM information_schema.innodb_cmp_per_index
        ORDER BY failures DESC;

    would reveal the most problematic tables and indexes that have the highest compression failure rate. From there on, the way to improve performance would be to try to increase the compressed page size or change the structure of the table/indexes or the data being stored, and see if that has a positive impact on performance.

    Read the article

  • Solutions for "Maintenance Mode"

    - by Ka Lyse
    Given a web application running across 10+ servers, what techniques have you put in place for doing things like altering the state of your website so that you can implement certain features? For instance, you might want to:
        - Restrict logins / disable certain features
        - Turn the site to "Read Only"
        - Turn the site to a single "Maintenance Mode" page
    Doing any of the above is pretty trivial. You can throw a particular "flag" in an .ini file, or add a row/value to a site_options table in your database, and just read that value and do the appropriate thing. But these solutions have their problems. Disadvantages/Problems: For instance, if you use a file for your application, and you want to switch off a certain feature temporarily, then you need to update this file on all servers. So then you might want to look at running something like ZooKeeper, but you are probably overcomplicating things. So then, you might decide that you want to store these "feature" flags in a database. But then you are obviously adding unnecessary queries to each page request. So you think to yourself that you will throw memcached into the mix and just cache the query. Then you just retrieve all of your "Features" from memcached and add ~2ms of latency to your application on every page. So to get around this, you decide to use a two-tier cache system, whereby you use an in-memory cache on each machine (like APC, Redis, etc.). This would work, but then it gets complicated, because you would have to set the key/hash lifetime to perhaps 60 seconds, so that when you purge/invalidate the memcached object storing your "Features" result, your on-machine cache picks up the new state promptly. What suggestions might you have? Keep in mind that optimization/efficiency is the priority here.
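
    A minimal sketch of that two-tier lookup (Python purely for illustration; the memcached key name, the flag names and the 60-second local TTL are assumptions):

        import json
        import time

        from pymemcache.client.base import Client

        memcached = Client(("localhost", 11211))
        _local = {"flags": None, "expires_at": 0.0}   # tier 1: per-process cache
        LOCAL_TTL = 60                                # seconds before re-checking memcached

        DEFAULTS = {"maintenance_mode": False, "read_only": False, "logins_enabled": True}

        def get_flags():
            now = time.time()
            if _local["flags"] is not None and now < _local["expires_at"]:
                return _local["flags"]                # no network hop on most requests
            raw = memcached.get("site:flags")         # tier 2: shared across all servers
            flags = json.loads(raw) if raw else dict(DEFAULTS)
            _local["flags"], _local["expires_at"] = flags, now + LOCAL_TTL
            return flags

        def set_flag(name, value):
            flags = {**get_flags(), name: value}
            memcached.set("site:flags", json.dumps(flags))   # other servers see it within LOCAL_TTL
            _local["flags"], _local["expires_at"] = flags, time.time() + LOCAL_TTL

    Every page pays at most one memcached round trip per LOCAL_TTL window, and flipping a flag propagates to all servers within that window without touching the database.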

    Read the article
