Search Results


  • Alternatives to SQL-like databases

    - by user613326
    Well, I was wondering: these days computers usually have 2 GB or 4 GB of memory. I'd like to use a secure client-server model, and an SQL database is the likely candidate. On the other hand, I only have about 8000 records, which will not be read or written frequently; in total they would consume less than 16 megabytes. That made me wonder what the good, secure options would be in a Windows environment to store the data and work with it in a multi-client, single-server model, without using SQL Server or MySQL. For such a small amount of data, would other approaches be better? I'd like to keep maintenance as simple as possible (no administrator should need to know SQL maintenance, as they don't know databases in my target environment). Maybe storing in XML files, or something else? I just wonder how others would go about this if ease of administration is the main goal. Oh, and it should be secure too; the client-server data must be a bit secure (maybe NTLM, file shares, HTTPS, etc.).

    Read the article

  • How do I recover a site after WordPress' Automatic Update has failed?

    - by Metacom
    I manage several WordPress websites, and recently used the Automatic Upgrade feature to successfully bring them up to 3.1. However, on one of the sites, I received a 503 (I believe it was a 503). After that I was presented with "The service is unavailable." whenever I tried to access any page, including the index page, wp-admin, etc. I had similar problems before when a WordPress site got stuck in maintenance mode, and all I needed to do was log in via FTP and rename or delete the .maintenance file. I tried that in this case, but it didn't do the trick. I am now presented with "Fatal error: Call to undefined function require_wp_db() in **\wp-settings.php on line 71". I can't figure out how to fix this problem, and I was wondering if anyone else had any ideas. Any suggestions are appreciated! Thank you for reading.

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When Not to Use a Stored Procedure – the Most Common Performance Mistake SQL Server Developers Make

    - by sqlworkshops
    SQL Server estimates the memory requirement at compile time. When a stored procedure or another plan-caching mechanism such as sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common cause of spills to tempdb and hence of poor performance. Common memory-allocating queries are those that perform a Sort or a Hash Match operation such as a Hash Join, Hash Aggregation, or Hash Union. This article covers Hash Match operations with examples; it is recommended to read Plan Caching and Query Memory Part I first, which covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill to tempdb, unless the memory requirement for a query does not change significantly based on predicates.

    This article covers underestimation and overestimation of memory for the Hash Match operation; Part I covers underestimation and overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory reduces the memory available to other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

    To read additional articles I wrote, click here. The best way to learn is to practice. To create the tables below and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warnings, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.

    --Example provided by www.sqlworkshops.com
    drop table CustomersState
    go
    create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
    go
    insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
    update CustomersState set State = 'NY' where CustomerID % 100 != 1
    update CustomersState set State = 'WA' where CustomerID % 100 = 1
    go
    update statistics CustomersState with fullscan
    go

    Let's create a stored procedure that joins Customers with the CustomersState table with a predicate on State.

    --Example provided by www.sqlworkshops.com
    create proc CustomersByState @State char(2) as
    begin
    declare @CustomerID int
    select @CustomerID = e.CustomerID
    from Customers e
    inner join CustomersState es on (e.CustomerID = es.CustomerID)
    where es.State = @State
    option (maxdop 1)
    end
    go

    Let's execute the stored procedure first with parameter value 'WA', which selects 1% of the data.

    set statistics time on
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'WA'
    go

    The stored procedure took 294 ms to complete. It was granted 6704 KB based on an estimate of 8000 rows. The estimated number of rows, 8000, is similar to the actual number of rows, 8000, so the memory estimate is fine. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'NY', which selects 99% of the data.

    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'NY'
    go

    The stored procedure took 2922 ms to complete. It was granted 6704 KB based on an estimate of 8000 rows. The estimated number of rows, 8000, is way off from the actual number of rows, 792000, because the estimate is based on the first set of parameter values supplied to the stored procedure, which was 'WA' in our case. This underestimation leads to a spill to tempdb, resulting in poor performance. There was a Hash Warning (Recursion) in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), due to the locking issues involved with sp_recompile. Refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

    exec sp_recompile CustomersByState
    go
    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'NY'
    go

    Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory, and the estimated number of rows, 792000, matches the actual number of rows, 792000. The better performance of this execution is due to the better memory estimate, which avoids the spill to tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA'.

    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'WA'
    go

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. This execution was granted more memory (146752 KB) than necessary (6704 KB), because the estimate (792000 rows) was based on parameter value 'NY' instead of parameter value 'WA' (8000 rows); again, the estimate is based on the first set of parameter values supplied to the stored procedure, which was 'NY' in this case. This overestimation degrades the performance of the Hash Match operation, and it might also affect the performance of other concurrently executing queries requiring memory, so overestimation is not recommended. The estimated number of rows, 792000, is much higher than the actual number of rows, 8000.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint.

    --Example provided by www.sqlworkshops.com
    drop proc CustomersByState
    go
    create proc CustomersByState @State char(2) as
    begin
    declare @CustomerID int
    select @CustomerID = e.CustomerID
    from Customers e
    inner join CustomersState es on (e.CustomerID = es.CustomerID)
    where es.State = @State
    option (maxdop 1, recompile)
    end
    go

    Let's execute the stored procedure first with parameter value 'WA' and then with parameter value 'NY'.

    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'WA'
    go
    exec CustomersByState 'NY'
    go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has a good estimate like before: the estimated number of rows, 8000, is similar to the actual number of rows, 8000.

    The execution with parameter value 'NY' also has a good estimate and memory grant like before, because the stored procedure was recompiled with the current set of parameter values: the estimated number of rows, 792000, is similar to the actual number of rows, 792000. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY'.

    --Example provided by www.sqlworkshops.com
    drop proc CustomersByState
    go
    create proc CustomersByState @State char(2) as
    begin
    declare @CustomerID int
    select @CustomerID = e.CustomerID
    from Customers e
    inner join CustomersState es on (e.CustomerID = es.CustomerID)
    where es.State = @State
    option (maxdop 1, optimize for (@State = 'NY'))
    end
    go

    Let's execute the stored procedure first with parameter value 'WA' and then with parameter value 'NY'.

    --Example provided by www.sqlworkshops.com
    exec CustomersByState 'WA'
    go
    exec CustomersByState 'NY'
    go

    The stored procedure took 353 ms with parameter value 'WA'; this is much slower than the optimal execution time of 294 ms we observed previously, because of the overestimation of memory. The execution with parameter value 'NY' has the optimal execution time like before. With parameter value 'WA', the rows are overestimated because of the optimize for hint value of 'NY', so, unlike before, more memory than necessary was granted based on the hint. With parameter value 'NY', the estimate is good because of the optimize for hint value of 'NY': the estimated number of rows, 792000, is similar to the actual number of rows, 792000, and the optimal amount of memory was granted. There was no Hash Warning in SQL Profiler.

    Summary: a cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that introduces compilation overhead; however, in most cases it is better to pay for compilation than to spill a sort to tempdb, which can be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if a query processes more data than the hint implies, it will still spill; on the other side there is also the possibility of overestimation, causing unnecessary memory pressure for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might itself lead to poor performance.
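
    As a side note not found in the original article, the memory grants quoted throughout (6704 KB, 146752 KB) can be watched live while the queries execute via the sys.dm_exec_query_memory_grants DMV; a minimal sketch:

    --Sketch (not from the article): inspect memory grants of currently executing queries
    select session_id, requested_memory_kb, granted_memory_kb, max_used_memory_kb
    from sys.dm_exec_query_memory_grants;

    Comparing granted_memory_kb with max_used_memory_kb for the two parameter values makes the under- and overestimation visible.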

    One more caveat on the optimize for hint: when the values used in the hint have been archived from the database, the estimate will be wrong, leading to the worst performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3-day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning hands-on workshop in London, United Kingdom, March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here, as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you'll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder.

    Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB—Calvin Sun, Oracle
    InnoDB is the default storage engine for Oracle's MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability.

    Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning—Dimitri Kravtchuk, Oracle
    This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no "silver bullet." But understanding the configuration settings' impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others—the discussion is open, just moderated by a speaker.

    Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations—Calvin Sun, Oracle
    Many top Web properties rely on Oracle's MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database—that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column.

    Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster—Andrew Morgan and John Duncan, Oracle
    Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures.
    It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js.

    Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning—Inaam Rana, Oracle
    The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance.

    Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP—Nizameddin Ordulu, Facebook and Inaam Rana, Oracle
    Data compression is an important capability of the InnoDB storage engine for Oracle's MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level.

    Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!

    Read the article

  • How to Secure a Data Role by Multiple Business Units

    - by Elie Wazen
    In this post we will see how a role can be data-secured by multiple Business Units (BUs). Separate data roles are generally created for each BU if a corresponding data template generates roles on the basis of the BU dimension. The advantage of creating a policy with a rule that includes multiple BUs is that when mapping these roles in HCM Role Provisioning Rules, fewer entries need to be made. This can ease maintenance for enterprises with a large number of Business Units. Note: the example below applies equally if the securing entity is Inventory Organization.

    Let us take for example the case of a user provisioned with the "Accounts Payable Manager - Vision Operations" data role in Fusion Applications. This user will be able to access Invoices in Vision Operations but will not be able to see Invoices in Vision Germany.

    Figure 1. A user with a data role restricting them to data from BU: Vision Operations

    With the role granted above, this is what the user will see when they attempt to select Business Units while searching for AP Invoices.

    Figure 2. The list of values of Business Units is limited to a single one. This is the effect of the data role granted to that user, as can be seen in Figure 1.

    In order to create a data role that secures by multiple BUs, we need to start by creating a condition that groups those Business Units we want to include in that data role. This is accomplished by creating a new condition against the BU view. That condition will later be used to create a data policy for our newly created role. The BU view is a database resource and is accessed from APM, as seen in the search below.

    Figure 3. Viewing a database resource in APM

    The next step is to create a new condition, in which we define a SQL predicate that includes the two BUs (the ids below refer to Vision Operations and Vision Germany). At this point we have simply created a standalone condition; we have not used it yet, and security is therefore not affected.

    Figure 4. Custom Role that inherits the Purchase Order Overview Duty

    We are now ready to create our data policy. In APM, we search for our newly created role and navigate to "Find Global Policies". We query the role we want to secure and navigate to view its global policies.

    Figure 5. The job role we plan on securing

    We can see that the role was not defined with a data policy, so we will create one that uses the condition we created earlier.

    Figure 6. Creating a new data policy

    In the General Information tab, we have to specify the DB resource that the security policy applies to: in our case this is the BU view.

    Figure 7. Data policy definition - selection of the DB resource we will secure by

    In the Rules tab, we make the rule applicable to multiple values of the DB resource we selected in the previous tab. This is where we associate the condition we created against the BU view with this data policy, by entering the condition name in the Condition field.

    Figure 8. Data policy rule

    The last step of defining the data policy consists of explicitly selecting the actions that are governed by it; in this case, for example, we select the actions displayed below in the right pane. Once the record is saved, we are ready to use our newly secured data role.

    Figure 9. Data policy actions

    We can now see a new data policy associated with our role.

    Figure 10. Role is now secured by a data policy

    We now assign that new role to the user.

    Of course this does not have to be done in OIM; it can be done using a provisioning rule in HCM.

    Figure 11. Role assigned to the user who previously was granted the Vision Ops secured role

    Once that user accesses the Invoices work area, this is what they see: in the image below, the LOV of Business Unit returns the two values defined in our data policy, namely Vision Operations and Vision Germany.

    Figure 12. The list of values of Business Units now includes the two we included in our data policy. This is the effect of the data role granted to that user, as can be seen in Figure 11.
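
    As an aside, the post shows the condition of Figure 4 only as a screenshot; a hypothetical SQL predicate for a condition grouping the two Business Units against the BU view could look like the following, where the ids are placeholders for the actual identifiers of Vision Operations and Vision Germany:

    -- Hypothetical condition predicate (BU ids are placeholders)
    BU_ID IN (204, 458)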

    Read the article

  • How to get html/css/jpg pages served by both apache & tomcat with mod_jk

    - by user53864
    I have apache2 and tomcat6 both running on port 80 with mod_jk set up, on Ubuntu servers. I had to set up a 503 error document (ErrorDocument 503 /maintenance.html) in the Apache configuration, and somehow I managed to get it to work: the error page is served by Apache when Tomcat is stopped. Developers created a good-looking error page (an HTML page which pulls in CSS and JPG files), and I'm asked to get this page served by Apache when Tomcat is down. When I tried JkUnMount /*.css in the virtual host, the actual Tomcat JSP pages didn't render properly (they lost their formatting), as the Tomcat applications use JSP, CSS, JS, JPG and so on. I'm trying to see if it is possible to get .css and .jpg served by both Apache and Tomcat, so that when Tomcat is down the CSS and JPG are served by Apache and the proper error document is displayed. Does anyone have a technique for this? Here is my apache2 configuration (vim /etc/apache2/apache2.conf):

    Alias / /var/www/
    ErrorDocument 503 /maintenance.html
    ErrorDocument 404 /maintenance.html
    JkMount / myworker
    JkMount /* myworker
    JkMount /*.jsp myworker
    JkUnMount /*.html myworker

    <VirtualHost *:80>
    ServerName station1.mydomain.com
    DocumentRoot /usr/share/tomcat/webapps/myapps1
    JkMount /* myworker
    JkUnMount /*.html myworker
    </VirtualHost>

    <VirtualHost *:80>
    ServerName station2.mydomain.com
    DocumentRoot /usr/share/tomcat/webapps/myapps2
    JkMount /* myworker
    JkMount /*.html myworker
    </VirtualHost>

    Read the article

  • How can I use Varnish to generate a robots.txt file, even for a subdomain of the same site?

    - by Sam
    I want to generate a robots.txt file using Varnish 2.1, so that domain.com/robots.txt is served by Varnish and subdomain.domain.com/robots.txt is also served by Varnish. The robots.txt must be hardcoded into the default.vcl file. Is that possible? I know Varnish can generate a maintenance page on error; I'm trying to make it generate a robots.txt file. Can anyone help?

    sub vcl_error {
      set obj.http.Content-Type = "text/html; charset=utf-8";
      synthetic {"
    <?xml version="1.0" encoding="utf-8"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html>
      <head>
        <title>Maintenance in progress</title>
      </head>
      <body>
        <h1>Maintenance in progress</h1>
      </body>
    </html>
    "};
      return (deliver);
    }
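
    A minimal sketch of one possible approach, assuming Varnish 2.x VCL syntax: trip a custom error status for the URL in vcl_recv, then synthesize the body in vcl_error (the 750 status code and the robots body are placeholders). Because the test keys only on req.url, it fires for any host, which covers the subdomain case:

    sub vcl_recv {
      # Intercept robots.txt for every host served by this Varnish
      if (req.url ~ "^/robots.txt$") {
        error 750 "robots";
      }
    }

    sub vcl_error {
      if (obj.status == 750) {
        set obj.status = 200;
        set obj.http.Content-Type = "text/plain; charset=utf-8";
        synthetic {"User-agent: *
    Disallow:
    "};
        return (deliver);
      }
    }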

    Read the article

  • My .htaccess file redirect problems?

    - by Glenn Curtis
    I am hoping you can help me! Below is the .htaccess file for my Apache server running on top of Ubuntu Server. This is my local server, which I installed so I can develop my site on it instead of using my live site. However, although I now have all my files and the database on localhost, each time I access my server, vaio-server (it's a Sony laptop), it just takes me to my live site! Everything is in the root of Apache, /var/www; it's the only site I will develop on this system, so I don't need to configure it to look at more than this one site. Other than that, all the Apache files (sites-available/default etc.) are as standard. Please help! Many thanks, Glenn Curtis.

    DirectoryIndex index.php index.html

    # Upload sizes
    php_value upload_max_filesize 25M
    php_value post_max_size 25M

    # Avoid folder listings
    Options -Indexes

    <IfModule mod_rewrite.c>
      Options +FollowSymlinks
      RewriteEngine on
      RewriteBase /

      # Maintenance
      #RewriteCond %{REQUEST_URI} !/maintenance.html$
      #RewriteRule $ /maintenance.html [R=302,L]

      # Redirects to www
      #RewriteCond %{HTTP_HOST} !^vaio-server [NC]
      #RewriteCond %{HTTPS}s ^on(s)|off
      #RewriteRule ^(.*)$ glenns-showcase.net/$1 [R=301,QSA,L]

      # Empty string
      RewriteRule ^$ app/webroot/ [L]
      RewriteRule (.*) app/webroot/$1 [L]
    </IfModule>
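
    As an aside, the commented-out maintenance block above is the usual switch for the kind of maintenance mode discussed elsewhere on this page; uncommented, and with .* as the match-all pattern (a sketch, redirect target unchanged), it would read:

    # Send every request except maintenance.html itself to the maintenance page
    RewriteCond %{REQUEST_URI} !/maintenance.html$
    RewriteRule .* /maintenance.html [R=302,L]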

    Read the article

  • Can an installation of SSRS be used for other reports if the SCOM Reporting Role is installed?

    - by Pete Davis
    I'm currently in the process of planning a SCOM 2007 R2 deployment, and I would like to deploy the OperationsManagerDW and Reporting Server to a shared SQL 2008 cluster which is used for reporting across multiple solutions. However, in the deployment guide for SCOM 2007 R2 it says: "Due to changes that the Operations Manager 2007 Reporting component makes to SQL Server Reporting Services security, no other applications that make use of SQL Server Reporting Services can be installed on this server." This concerns me, because it may interfere with existing or future (non-SCOM) reports in some way, even if deployed as a separate SSRS instance. Later in the same guide it states: "Installing Operations Manager 2007 Reporting Services integrates the security of the instance of SQL Reporting Services with the Operations Manager role-based security. Do not install any other Reporting Services applications in this same instance of SQL Server." Does this mean that I can install a new SSRS instance and use it on the shared cluster for SCOM reporting? Or would I need to create a whole new SQL Server instance as well as an SSRS instance, or a whole separate server for the SCOM OperationsManagerDW and Reporting Server?

    Read the article

  • CUPS basic auth error through web interface

    - by Inaimathi
    I'm trying to configure CUPS to allow remote administration through the web interface. There's enough documentation out there that I can figure out what to change in my cupsd.conf (changing Listen localhost:631 to Port 631, and adding Allow @LOCAL to the /, /admin and /admin/conf sections). I'm now at the point where I can see the CUPS interface from another machine on the same network. The trouble is, when I try to Add Printer, I'm asked for a username and password, but my response is rejected even when I know I've gotten it right. (I assume it's asking for the username and password of someone in the lpadmin group on the server machine; I've sshed in with the credentials it's rejecting, and the user I'm using has been added to the lpadmin group.) If I disable auth outright, by changing DefaultAuthType Basic to DefaultAuthType None, I get an "Unauthorized" error instead of a password request when I try to Add Printer. What am I doing wrong? Is there a way of letting users from the local network administer the print server through the CUPS web interface?

    EDIT: By request, my complete cupsd.conf (spoiler: a minimally edited version of the default config file that comes with the edition of CUPS from the Debian wheezy repos):

    LogLevel warn
    MaxLogSize 0
    SystemGroup lpadmin
    Port 631
    # Listen localhost:631
    Listen /var/run/cups/cups.sock
    Browsing On
    BrowseOrder allow,deny
    BrowseAllow all
    BrowseLocalProtocols CUPS dnssd
    # DefaultAuthType Basic
    DefaultAuthType None
    WebInterface Yes

    <Location />
      Order allow,deny
      Allow @LOCAL
    </Location>

    <Location /admin>
      Order allow,deny
      Allow @LOCAL
    </Location>

    <Location /admin/conf>
      AuthType Default
      Require user @SYSTEM
      Order allow,deny
      Allow @LOCAL
    </Location>

    # Set the default printer/job policies...
    <Policy default>
      # Job/subscription privacy...
      JobPrivateAccess default
      JobPrivateValues default
      SubscriptionPrivateAccess default
      SubscriptionPrivateValues default

      # Job-related operations must be done by the owner or an administrator...
      <Limit Create-Job Print-Job Print-URI Validate-Job>
        Order deny,allow
      </Limit>

      <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
        Require user @OWNER @SYSTEM
        Order deny,allow
      </Limit>

      # All administration operations require an administrator to authenticate...
      <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
        AuthType Default
        Require user @SYSTEM
        Order deny,allow
      </Limit>

      # All printer operations require a printer operator to authenticate...
      <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
        AuthType Default
        Require user @SYSTEM
        Order deny,allow
      </Limit>

      # Only the owner or an administrator can cancel or authenticate a job...
      <Limit Cancel-Job CUPS-Authenticate-Job>
        Require user @OWNER @SYSTEM
        Order deny,allow
      </Limit>

      <Limit All>
        Order deny,allow
      </Limit>
    </Policy>

    # Set the authenticated printer/job policies...
    <Policy authenticated>
      # Job/subscription privacy...
      JobPrivateAccess default
      JobPrivateValues default
      SubscriptionPrivateAccess default
      SubscriptionPrivateValues default

      # Job-related operations must be done by the owner or an administrator...
      <Limit Create-Job Print-Job Print-URI Validate-Job>
        AuthType Default
        Order deny,allow
      </Limit>

      <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow
      </Limit>

      # All administration operations require an administrator to authenticate...
      <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
        AuthType Default
        Require user @SYSTEM
        Order deny,allow
      </Limit>

      # All printer operations require a printer operator to authenticate...
      <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
        AuthType Default
        Require user @SYSTEM
        Order deny,allow
      </Limit>

      # Only the owner or an administrator can cancel or authenticate a job...
      <Limit Cancel-Job CUPS-Authenticate-Job>
        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow
      </Limit>

      <Limit All>
        Order deny,allow
      </Limit>
    </Policy>

    Read the article

  • Advantages of multiple SQL Server files with a single RAID array

    - by Dr Giles M
    Originally posted on Stack Overflow, but re-worded. Imagine the scenario: for a database I have RAID arrays R: (MDF) and T: (transaction log), and of course shared, transparent usage of X: (tempdb). I've been reading around and get the impression that if you are using RAID, then adding multiple SQL Server NDF files sitting on R: within a filegroup won't yield any further improvement. Of course, adding another RAID array S: and putting an NDF file on that would. However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller MDFs sitting on one RAID array, SQL Server will perform growth and locking operations (for writes) on the MDF, so adding NDFs to the filegroup, even if they sat on R:, would distribute the locking and growth operations, allowing more throughput? Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefits of reduced locking? I'm also aware that the behaviour and benefits may differ for tables, indexes, and the log. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?
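
    For context, a minimal sketch of the layout being asked about (the database name, logical and physical file names, and size are placeholders):

    -- Add a second data file (NDF) on the same R: array to the PRIMARY filegroup
    ALTER DATABASE MyDb
    ADD FILE (NAME = 'MyDb_Data2', FILENAME = 'R:\MyDb_Data2.ndf', SIZE = 1024MB)
    TO FILEGROUP [PRIMARY];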

    Read the article

  • Holiday Book Recommendation

    - by dalton
    I'm after a book to read whilst on holiday. Some criteria: the book has to be relatively short (< 500 pages), and I'd prefer a book that changes your thinking, rather than reams of syntax to look at. These have been my books for the last two years: last year, The Craftsman by Richard Sennett (changed how I viewed career development and quality); the year before, Zen and the Art of Motorcycle Maintenance (makes you think about quality and maintenance).

    Read the article

  • Adding an operation in the middle of a complex sequence diagram in Visio 2003

    - by James
    I am using Microsoft Visio 2003 to define static classes with operations/methods, plus sequence diagrams referring to these classes. One sequence diagram is almost done, but I realized that I missed one operation in the middle of the diagram. When I try to move the rest of the sequence down by selecting it as a block, all the operations in the block lose their link with the static diagrams. (Methods which referred to the static classes as fun() become fun, which means that they no longer refer to the static diagrams, and any future changes would not be reflected in the dynamic sequence diagrams automatically.) The sequence diagrams have grown to A3 paper size, and I have many such diagrams needing correction. Manually moving the operations one by one would involve a lot of effort. Could someone kindly suggest a way to overcome this problem?

    Read the article

  • Use a web interface to modify an .htaccess file

    - by justinl
    The solution to a previous question (How to implement "Maintenance Mode" on an already established website) was to use a modified .htaccess to deny IP addresses. What is the best way to use a web interface to modify an .htaccess file? What I want is a way for an admin to log in to the admin area and switch Maintenance Mode on and off using a basic HTML form. I'm using PHP, and I'm already using an .htaccess file for a handful of rewriting.
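
    A minimal sketch of one common approach, with the file path, marker comments and form field all assumptions rather than anything from the question: keep the maintenance rules between marker comments in .htaccess and have the admin form rewrite just that block.

    <?php
    // Hypothetical maintenance-mode toggle: rewrites a marked block in .htaccess
    $htaccess = __DIR__ . '/.htaccess';     // assumed location of the file
    $on = isset($_POST['maintenance']);     // assumed checkbox in the admin form

    $block = $on
        ? "# BEGIN maintenance\nRewriteCond %{REQUEST_URI} !/maintenance.html$\nRewriteRule .* /maintenance.html [R=302,L]\n# END maintenance"
        : "# BEGIN maintenance\n# END maintenance";

    $contents = file_get_contents($htaccess);
    // Replace whatever currently sits between the markers with the new block
    $contents = preg_replace('/# BEGIN maintenance.*# END maintenance/s', $block, $contents);
    file_put_contents($htaccess, $contents);

    The same idea works for the IP-deny variant: generate the Deny lines from the form input and write them between the markers, leaving the existing rewrite rules untouched.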

    Read the article

  • Why would an ext3 filesystem be rolled back on a Debian VM running in VirtualBox after loss of power to the host

    - by Sevas
    A Debian virtual machine runs as a guest VirtualBox VM; its filesystem is ext3. The host system loses power, and after booting up the host system and guest VM, I find that the VM's filesystem has been rolled back to a previous state, losing changes made to the filesystem some time before the power loss. The operations that were rolled back had been fully completed before the loss of power (files fully copied, file handles closed, etc.), but it's possible and even likely that other write operations were occurring on the VM at the point of the crash. So I am trying to figure out: is it the filesystem recovery process that rolls back filesystem operations after encountering corruption post power loss, or is it possibly related to VirtualBox and the way it ignores flush requests for performance gains by default (discussed here)? Are there any other factors that would result in the filesystem being rolled back after losing power?

    Read the article

  • Token based Authentication for WCF HTTP/REST Services: Authorization

    - by Your DisplayName here!
    In the previous post I showed how token-based authentication can be implemented for WCF HTTP-based services. Authentication is the process of finding out who the user is – this includes anonymous users. Then it is up to the service to decide under which circumstances the client has access to the service as a whole or to individual operations. This is called authorization. By default, my framework does not allow anonymous users and will deny access right in the service authorization manager. You can however turn anonymous access on – which means, technically, that instead of denying access, an anonymous principal is placed on Thread.CurrentPrincipal. You can flip that switch in the configuration class that you can pass into the service host/factory:

    var configuration = new WebTokenWebServiceHostConfiguration
    {
        AllowAnonymousAccess = true
    };

    But this is not enough; in addition you also need to decorate the individual operations to allow anonymous access as well, e.g.:

    [AllowAnonymousAccess]
    public string GetInfo()
    {
        ...
    }

    Inside these operations you might have an authenticated or an anonymous principal on Thread.CurrentPrincipal, and it is up to your code to decide what to do. Side note: being a security guy, I like this opt-in approach to anonymous access much better than all those opt-out approaches out there (like the Authorize attribute – or this).

    Claims-based Authorization

    Since there is a ClaimsPrincipal available, you can use the standard WIF claims authorization manager infrastructure – either declaratively via ClaimsPrincipalPermission or programmatically (see also here).

    [ClaimsPrincipalPermission(SecurityAction.Demand,
        Resource = "Claims",
        Operation = "View")]
    public ViewClaims GetClientIdentity()
    {
        return new ServiceLogic().GetClaims();
    }

    In addition you can also turn off per-request authorization (see here for background) via the config and just use the "domain specific" instrumentation. While the code is not 100% done, you can download the current solution here. HTH (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
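
    For reference, the programmatic counterpart of the declarative attribute above would look roughly like this (a sketch against the WIF ClaimsPrincipalPermission API; the resource and operation strings mirror the attribute example):

    // Programmatic claims-based authorization check (WIF; sketch)
    var permission = new ClaimsPrincipalPermission("Claims", "View");
    permission.Demand(); // throws a SecurityException when the authorization manager denies access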

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I did write about transaction logs and recovery on my blogs, and the concept of simplifying the basics is a challenge. In the real world we always see checks and queues for a process – say railway reservations, banks, customer support etc. – there is a system of lines and queues to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using checks like locking and blocking mechanisms, and implements the standards in a way that facilitates higher concurrency. In this post, let us talk about the topic of concurrency and the various aspects that one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time. The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system. Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data. Concurrency is also affected when multiple processes attempt to change the same data simultaneously. There are two approaches to managing concurrent data access: the optimistic and the pessimistic concurrency model.

    Concurrency Models

    Pessimistic Concurrency
    - Default behavior: acquire locks to block access to data that another process is using.
    - Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (assumes conflicts will occur).
    - Avoids conflicts by acquiring a lock on data being read, so no other processes can modify that data. Also acquires locks on data being modified, so no other processes can access the data for either reading or modifying.
    - Readers block writers; writers block readers and writers.

    Optimistic Concurrency
    - Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying.
    - The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs.
    - Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to that data.
    - Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows.
    - Readers do not block writers and writers do not block readers; but writers can and will block writers.

    Transaction Processing

    A transaction is the basic unit of work in SQL Server.

    A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: marked with a BEGIN TRAN, with the end marked by a COMMIT TRAN or ROLLBACK TRAN). Transactions must exhibit all the ACID properties of a transaction.

    ACID Properties

    Transaction processing must guarantee the consistency and recoverability of SQL Server databases, and ensures all transactions are performed as a single unit of work regardless of hardware or system failure: A – Atomicity, C – Consistency, I – Isolation, D – Durability.
    - Atomicity: each transaction is treated as all or nothing – it either commits or aborts.
    - Consistency: ensures that a transaction won't allow the system to arrive at an incorrect logical state – the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: after a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on the data.

    Transaction Dependencies

    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors (known as dependency problems or consistency problems).
    - Lost updates: occur when two processes read the same data, both manipulate the data changing its value, and both then try to update the original data to the new value. The second process might overwrite the first update completely.
    - Dirty reads: occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable reads: a read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing.
    - Phantoms: occur when membership in a set changes. A phantom occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels

    SQL Server supports 5 isolation levels that control the behavior of read operations.

    Read Uncommitted
    - All behaviors except lost updates are possible.
    - Implemented by allowing read operations to take no locks; because of this, a read won't be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed.
    - When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you have read a row before it is updated and an update then moves the row to a higher page number than your scan encounters later.
    - Performing an allocation order scan under read uncommitted can also cause you to miss a row completely; this can happen when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.

    Read Committed
    - Comes in two varieties: optimistic and pessimistic (default).
    - Ensures that a read never reads data that another application hasn't committed.
    - If another transaction is updating data and holds exclusive locks on it, your transaction will have to wait for the locks to be released. Your transaction must put share locks on data that is visited, which means that data might be unavailable for others to use. A share lock doesn't prevent others from reading, but prevents them from updating.
    - Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values; the data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.

    Repeatable Read
    - This is a pessimistic isolation level.
    - Ensures that if a transaction revisits data or a query is reissued, the data doesn't change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no changes can be made by other transactions. However, it does allow phantom rows to appear.
    - Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot
    - Snapshot isolation (SI) is an optimistic isolation level.
    - Allows processes to read older versions of committed data if the current version is locked.
    - The difference between snapshot and read committed (snapshot) has to do with how old the older versions have to be.
    - It's possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable
    - This is the strongest of the pessimistic isolation levels.
    - Adds to the repeatable read isolation level by ensuring that if a query is reissued, rows were not added in the interim; i.e., phantoms do not appear.
    - Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read: all shared locks in a transaction must be held until the transaction completes. In addition, the serializable isolation level requires that you lock not only data that has been read, but also data that doesn't exist. For example, if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock.
    - Gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking and deadlocks, because they are the fundamental blocks that make concurrency possible. Now if you are with me – let us continue learning SQL Server Locking Basics.
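
    As an illustration not included in the original post, isolation levels are chosen per session; a short sketch (dbo.Accounts and the predicate are placeholders) of how repeatable read pins the values a transaction reads:

    -- Session 1 (sketch)
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN TRAN;
    SELECT Balance FROM dbo.Accounts WHERE AccountID = 42;
    -- A concurrent UPDATE of this row in another session now blocks until COMMIT,
    -- so the second read returns the same value (no non-repeatable read)...
    SELECT Balance FROM dbo.Accounts WHERE AccountID = 42;
    COMMIT TRAN;
    -- ...but an INSERT matching the predicate could still appear: a phantom.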

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Reminder: True WCF Asynchronous Operation

    - by Sean Feldman
    A truly asynchronous service operation is not one that returns void, but one that is marked as IsOneWay=true and implemented using BeginX/EndX asynchronous operations (thanks Krzysztof). To support this sort of fire-and-forget invocation, Windows Communication Foundation offers one-way operations. After the client issues the call, Windows Communication Foundation generates a request message, but no correlated reply message will ever return to the client. As a result, one-way operations can't return values, and any exception thrown on the service side will not make its way to the client. One-way calls do not equate to asynchronous calls. When one-way calls reach the service, they may not be dispatched all at once; they may be queued up on the service side and dispatched one at a time, according to the service's configured concurrency mode behavior and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, the client will block, even when issuing a one-way call. However, once the call is queued, the client is unblocked and can continue executing while the service processes the operation in the background. This usually gives the appearance of asynchronous calls.
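
    A minimal sketch of such a one-way contract (the service and operation names are placeholders, not from the post):

    using System.ServiceModel;

    [ServiceContract]
    public interface INotificationService
    {
        // Fire-and-forget: no reply message, no return value, faults never reach the caller
        [OperationContract(IsOneWay = true)]
        void Notify(string message);
    }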

    Read the article

  • Flexible cloud file storage for a web.py app?

    - by benwad
    I'm creating a web app using web.py (although I may later rewrite it in Tornado) which involves a lot of file manipulation. As one example, the app will have a git-style 'commit' operation, in which some files are sent to the server and placed in a new folder along with the unchanged files from the last version. This involves copying the old folder to the new folder, replacing/adding/deleting the committed files in the new folder, then deleting all unchanged files in the old folder (as they are now in the new folder). I've decided on Heroku for the app hosting environment, and I am currently looking at cloud storage options that are built with these kinds of operations in mind. I was thinking of Amazon S3; however, I'm not sure if it lets you carry out these kinds of file operations in place. I was thinking I may have to load these files into the server's RAM and then re-insert them into the bucket, costing me a fortune. I was also thinking of Progstr Filer (http://filer.progstr.com/index.html), but that seems to only integrate with Rails apps. Can anyone help with this? Basically I want file operations to be as cheap as possible.
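
    For what it's worth, S3 can copy objects server-side, so the bytes need not pass through the app server at all; a minimal sketch with the classic boto library (bucket and key names are placeholders):

    import boto

    # Server-side copy: S3 duplicates the object internally, no download/upload
    conn = boto.connect_s3()                  # credentials come from the environment
    bucket = conn.get_bucket('my-app-files')  # placeholder bucket name
    bucket.copy_key('v2/report.txt',          # destination key
                    'my-app-files',           # source bucket
                    'v1/report.txt')          # source key
    bucket.delete_key('v1/report.txt')        # remove the old version if desired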

    Read the article

  • Flash Technology Can Revolutionize your IT Infrastructure

    - by kimberly.billings
    A recent article in the Data Center Journal, written by Mark Teter, outlines how flash is becoming a disruptive technology in the data center and how it will soon replace HDDs in the storage hierarchy. As Teter explains, the drivers behind this trend are lower cost/performance and power savings: flash is over 100x faster than the fastest HDD for reads, and while it is expensive, it can produce dramatic reductions in the cost of performance as measured in input/output operations per second (IOPS). What's more, flash consumes 1/5th the power of HDD, so it's faster AND greener. Teter writes, "when appropriately used, flash turns the current economics of IT performance on its head. That's disruptive." Exadata Smart Flash Cache in the Sun Oracle Database Machine makes intelligent use of flash storage to deliver extreme performance for OLTP and mixed workloads. It intelligently caches data from the Oracle Database, replacing slow mechanical I/O operations to disk with very rapid flash memory operations. Exadata Smart Flash Cache is the fundamental technology of the Sun Oracle Database Machine that enables the processing of up to 1 million random I/O operations per second (IOPS), and the scanning of data within Exadata storage at up to 50 GB/second. Are you incorporating flash into your storage strategy? Let us know!

    Read more: "Flash technology can revolutionize your IT infrastructure", The Data Center Journal, March 30, 2010; and the Exadata Smart Flash Cache and the Sun Oracle Database Machine white paper.

    Read the article

  • Database Web Service using Toplink DB Provider

    - by Vishal Jain
    With JDeveloper 11gR2 you can now create database-based web services using a JAX-WS Provider. The key differences between this and the already existing PL/SQL Web Services support are that it is based on a JAX-WS Provider, supports SQL queries for creating web services, and supports table CRUD operations. This is present as a new option in the New Gallery under 'Web Services'. When you invoke the New Gallery option, it presents you with three options to choose from. In this entry I will explain the options of creating a service based on SQL queries and on table CRUD operations.

    SQL Query based Service

    When you select this option, the 'Next' page asks you for the DB connection details. You can also choose whether you want SOAP 1.1 or 1.2 format; for this example, I will proceed with SOAP 1.1, the default option. On the next page, you can give the SQL query. The wizard supports bind variables, so you can parametrize your queries: give "?" as an input parameter you want to supply at runtime, and the "Bind Variables" button will be enabled. Here you can specify the name and type of the variable. Finish the wizard. Now you can test your service in the Analyzer, and see that the bind variable you specified appears as an input parameter in the Analyzer input form.

    CRUD Operations

    For this, at step 2 of the wizard, select the radio button "Generate Table CRUD Service Provider". At the next step, select the DB connection and the table for which you want to generate the default set of operations. Finish the wizard. Now, run the service in the Analyzer for a quick check, and see that all the basic operations are exposed.

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mappings, 2) a background thread to auto-commit long-running transactions, and 3) enhanced binlog performance. Let's go over each of these features one by one; in the last section, we will go over a couple of internally performed performance tests.

    Support multiple table mapping
    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Being able to access different tables at run time, with different mappings for different connections, is therefore very desirable, and in this GA release users can do both. We will discuss the key concepts and the key steps in using this feature.

    1) "mapping name" in the "get" and "set" commands
    To allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, users include the "registration name" in their get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" signals a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter, "." by default. So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first maps to the InnoDB table "test/demo_test" with mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues:

        get @@setup_3.key_a

    (this also outputs the value corresponding to the key "key_a"), or simply:

        get @@setup_3

    Once this is done, the connection stays switched to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue issuing regular commands such as:

        get key_b
        set key_c 0 0 7

    These DMLs are all directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter
    For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table, named "table_map_delimiter":

        INSERT INTO config_options VALUES("table_map_delimiter", ".");

    So if users want a different delimiter, they can change it in the config_options table.

    3) Default mapping
    Once we have multiple table mappings, there must always be a "default" mapping. We decided that if a mapping named "default" exists, it is chosen as the default; otherwise, the first row of the containers table is chosen. Please note that user tables can be repeated in the "containers" table (for example, when users want to access different columns of the same table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) bind command
    In addition, we also extended the protocol and added a bind command; its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This switches the connection's InnoDB table to "my_database/my_demo".

    In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.
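    To make the flow concrete, here is a minimal telnet-style session sketch against the plugin's default Memcached port (11211); the mapping names come from the setup above, while the key names and values are made up for illustration:

        telnet 127.0.0.1 11211
        get @@setup_3.key_a
        set key_c 0 0 5
        hello
        bind setup_2
        get key_c

    The first get switches the connection to "my_database/my_demo" and fetches "key_a"; the set that follows (flags 0, no expiry, 5 bytes of data) writes "key_c" into that same table; the bind command then rebinds the connection to "test/new_demo", so the final get is resolved against that table instead.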
    Background thread to auto-commit long running transactions
    This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations and commit them only after a certain number of calls. The batch sizes are controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size", and batching can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition, certain row locks are held for extended periods of time, which can reduce concurrency.

    To deal with this, we introduced a background thread that "auto-commits" transactions if they have been idle for a certain amount of time (5 seconds by default). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If a transaction has been idle longer than the limit and is not being used, it is committed. The limit is configurable by changing "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds.

    With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when you set daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large values. It also reduces the number of locks held by long-running transactions, further increasing concurrency.
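    As a minimal sketch of how these knobs fit together (the batch sizes below are illustrative placeholders, not recommendations):

        -- In my.cnf / my.ini; the daemon_memcached batch sizes are
        -- startup options and cannot be changed at runtime:
        --   [mysqld]
        --   daemon_memcached_w_batch_size = 100
        --   daemon_memcached_r_batch_size = 100

        -- At runtime, widen the idle auto-commit window from 5 to 10 seconds:
        SET GLOBAL innodb_api_bk_commit_interval = 10;

    With these settings, up to 100 write (or read) operations are grouped into one InnoDB transaction, and any transaction left idle for more than 10 seconds is committed by the background thread.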
    Enhancement in binlog performance
    As you may know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to always set the "batch" size to 1 when binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and the overhead of creating and destroying such constructs bogged performance down.

    With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (with innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as when binlog is turned off, it is much better than the previous release, and we continue to optimize this area to improve performance as much as possible.

    Performance study
    Amerandra of our System QA team conducted some performance studies comparing queries through the InnoDB Memcached connection and through the plain SQL end, with some interesting results. The tests were conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, an Intel Xeon 2.0 GHz X86_64 CPU (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). The results are described in the following tables.

    Table 1: Performance comparison on Set operations

        Connections | Memcached plugin Set (QPS), memcached-threads=8*** | 5.6.7-RC* Set (QPS)** | X faster
        8           | 30,000  | 5,600   | 5.36
        32          | 59,000  | 13,000  | 4.54
        128         | 68,000  | 8,000   | 8.50
        512         | 63,000  | 6,800   | 9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, as implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does, the "value" field of the matching row (by key) is updated. For the SQL-side comparison above, this logic is a precompiled stored procedure, and the query simply executes that procedure.
    *** Added "-daemon_memcached_option=-t8" (the default is 4 threads).

    So with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations

        Connections | Memcached plugin Get (QPS), memcached-threads=8 | 5.6.7-RC* Get (QPS) | X faster
        8           | 42,000  | 27,000  | 1.56
        32          | 101,000 | 55,000  | 1.83
        128         | 117,000 | 52,000  | 2.25
        512         | 109,000 | 52,000  | 2.10

    With the "get" query (the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary
    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread that commits long-running idle transactions, so users can configure large batch writes/reads without worrying about large numbers of rows being locked or (uncommitted) data not being visible. We also greatly enhanced performance when binlog is enabled. We will continue our efforts in both performance and functionality to make InnoDB Memcached a good demonstration of our InnoDB APIs.

    Jimmy Yang, September 29, 2012

    Read the article
