Search Results

Search found 2528 results on 102 pages for 'cheungcc 2000'.


  • Correlated SQL Join Query from multiple tables

    - by SooDesuNe
    I have two tables like the ones below. I need to find which exchangeRate was in effect at the dateOfPurchase. I've tried some correlated subqueries, but I'm having difficulty getting the correlated record to be used in the subqueries. I expect a solution will need to follow this basic outline:

    1. SELECT only the exchangeRates for the applicable countryCode
    2. From 1, SELECT the newest exchangeRate less than the dateOfPurchase
    3. Fill in the query table with all the fields from 2 and the purchasesTable

    My tables:

    purchasesTable:
    > dateOfPurchase | costOfPurchase | countryOfPurchase
    > 29-March-2010 | 20.00 | EUR
    > 29-March-2010 | 3000 | JPN
    > 30-March-2010 | 50.00 | EUR
    > 30-March-2010 | 3000 | JPN
    > 30-March-2010 | 2000 | JPN
    > 31-March-2010 | 100.00 | EUR
    > 31-March-2010 | 125.00 | EUR
    > 31-March-2010 | 2000 | JPN
    > 31-March-2010 | 2400 | JPN

    costOfPurchase is in whatever the local currency is for a given countryCode.

    exchangeRateTable:
    > effectiveDate | countryCode | exchangeRate
    > 29-March-2010 | JPN | 90
    > 29-March-2010 | EUR | 1.75
    > 30-March-2010 | JPN | 92
    > 31-March-2010 | JPN | 91

    The results of the query that I'm looking for:
    > dateOfPurchase | costOfPurchase | countryOfPurchase | exchangeRate
    > 29-March-2010 | 20.00 | EUR | 1.75
    > 29-March-2010 | 3000 | JPN | 90
    > 30-March-2010 | 50.00 | EUR | 1.75
    > 30-March-2010 | 3000 | JPN | 92
    > 30-March-2010 | 2000 | JPN | 92
    > 31-March-2010 | 100.00 | EUR | 1.75
    > 31-March-2010 | 125.00 | EUR | 1.75
    > 31-March-2010 | 2000 | JPN | 91
    > 31-March-2010 | 2400 | JPN | 91

    So, for example, in the results the exchange rate in effect for EUR on 31-March was 1.75. I'm using Access, but a MySQL answer would be fine too.

    UPDATE: Modification to Allan's answer (I had to make a couple of small changes):

        SELECT dateOfPurchase, costOfPurchase, countryOfPurchase, exchangeRate
        FROM purchasesTable p
        LEFT OUTER JOIN
            (SELECT e1.exchangeRate, e1.countryCode, e1.effectiveDate, MIN(e2.effectiveDate) AS enddate
             FROM exchangeRateTable e1
             LEFT OUTER JOIN exchangeRateTable e2
               ON e1.effectiveDate < e2.effectiveDate
              AND e1.countryCode = e2.countryCode
             GROUP BY e1.exchangeRate, e1.countryCode, e1.effectiveDate) e
          ON p.dateOfPurchase >= e.effectiveDate
         AND (p.dateOfPurchase < e.enddate OR e.enddate IS NULL)
         AND p.countryOfPurchase = e.countryCode
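    For comparison, here is a correlated-subquery sketch of the same "newest rate on or before the purchase date" lookup. It is only an illustration, untested and written with MySQL in mind, using the table and column names from the question:

        SELECT p.dateOfPurchase, p.costOfPurchase, p.countryOfPurchase,
               (SELECT e.exchangeRate
                  FROM exchangeRateTable e
                 WHERE e.countryCode = p.countryOfPurchase
                   AND e.effectiveDate =
                       (SELECT MAX(e2.effectiveDate)
                          FROM exchangeRateTable e2
                         WHERE e2.countryCode = p.countryOfPurchase
                           AND e2.effectiveDate <= p.dateOfPurchase)) AS exchangeRate
        FROM purchasesTable p;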

    Read the article

  • I want Oracle Query for Formatted Output

    - by reply2viveksshah
    The following is my table design:

    Ecode   Degree  YOQ
    7654321 SSC     2000
    7654321 HSC     2002
    7654321 Bcom    2006
    7654321 MCA     2010
    7654322 SSC     2002
    7654322 HSC     2004
    7654322 Bcom    2007
    7654323 SSC     2002
    7654323 HSC     2004
    7654323 BE      2008

    I want the following formatted output using an Oracle query:

    Ecode   Degree            year
    7654321 SSC,HSC,Bcom,MCA  2000,2002,2006,2010
    7654322 SSC,HSC,Bcom      2002,2004,2007
    7654323 SSC,HSC,BE        2002,2004,2008

    How can this be done?
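    On Oracle 11g Release 2 and later, LISTAGG produces exactly this kind of comma-separated rollup. A sketch, assuming the table is called qualifications (the question does not give its name); older releases would need SYS_CONNECT_BY_PATH, XMLAGG or the undocumented wm_concat instead:

        SELECT Ecode,
               LISTAGG(Degree, ',') WITHIN GROUP (ORDER BY YOQ) AS Degree,
               LISTAGG(YOQ, ',')    WITHIN GROUP (ORDER BY YOQ) AS year
          FROM qualifications   -- placeholder table name
         GROUP BY Ecode;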

    Read the article

  • Raphaeljs animation kills my browser

    - by user1688606
    I have this code where I have made a character using 20 paths and put them into a set. When I animate the set, the first transformation runs smoothly, the second animation stutters, the third doesn't happen as it should, and the 4th animation kills my PC: the browser hangs and in the task manager I can see it consuming up to 70% of the CPU. How can I avoid this and free the resources so all the animations run smoothly? I have to execute 10 simple y-axis transformation animations on that character.

    JS Fiddle:

        window.onload = function () {
            var paper = Raphael(0, 0, 400, 400);
            var character = paper.set();
            paper.setStart();
            var attr = { fill: 'red', stroke: 'none' };
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var shape = paper.rect(100, 100, 10, 20).attr(attr);
            var character = paper.setFinish();
            character.transform("t0,200");

            // 1st animation..
            var chartrnsfrm = Raphael.animation({ transform: '...t0,-48' }, 1000, "easeout", function () {
                character.animate(chartrnsfrm1.delay(2000));
            });
            character.animate(chartrnsfrm.delay(2000));

            // 2nd animation..
            var chartrnsfrm1 = Raphael.animation({ transform: '...t0,-48' }, 1000, "easeout", function () {
                character.animate(chartrnsfrm2.delay(2000));
            });

            // 3rd animation..
            var chartrnsfrm2 = Raphael.animation({ transform: '...t0,-48' }, 1000, "easeout", function () {
                character.animate(chartrnsfrm3.delay(2000));
            });

            // 4th animation..
            var chartrnsfrm3 = Raphael.animation({ transform: '...t0,-48' }, 1000, "easeout");
        }
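    One pattern that keeps this manageable is a sketch, not taken from the fiddle: it builds the rects in a loop and generates the chained Raphael.animation objects recursively, so the ten y-axis steps don't need ten hand-written variables. Whether this alone cures the CPU spike is not guaranteed, since the relative '...t0,-48' transforms still accumulate on the set, but it removes the duplicated animation objects and callbacks:

        window.onload = function () {
            var paper = Raphael(0, 0, 400, 400);
            var attr = { fill: 'red', stroke: 'none' };

            paper.setStart();
            for (var i = 0; i < 20; i++) {        // the 20 shapes from the question
                paper.rect(100, 100, 10, 20).attr(attr);
            }
            var character = paper.setFinish();
            character.transform("t0,200");

            // build each step's animation only when it is needed
            function makeStep(remaining) {
                return Raphael.animation({ transform: '...t0,-48' }, 1000, "easeout", function () {
                    if (remaining > 1) {
                        character.animate(makeStep(remaining - 1).delay(2000));
                    }
                });
            }
            character.animate(makeStep(10).delay(2000));   // ten y-axis moves in total
        };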

    Read the article

  • MySQL query problem?

    - by deep
    ID  NAME   AMT
    1   Name1  1000
    2   Name2  500
    3   Name3  3000
    4   Name1  5000
    5   Name2  2000
    6   Name1  3000

    Consider the above table as a sample. I'm having a problem with my SQL query. I'm using this:

        Select name, amt from sample where amt between 1000 and 5000

    It returns all the rows in the table between 1000 and 5000; instead, I want to get the maximum-amount record for each name, i.e.:

    3  Name3  3000
    4  Name1  5000
    5  Name2  2000
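    The usual pattern for "maximum per group" (a sketch assuming the table is named sample, as in the query above) is to join back to a grouped subquery; note that if a name has two rows tied at its maximum amount, both rows are returned:

        SELECT s.id, s.name, s.amt
          FROM sample s
          JOIN (SELECT name, MAX(amt) AS max_amt
                  FROM sample
                 GROUP BY name) m
            ON m.name = s.name
           AND m.max_amt = s.amt;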

    Read the article

  • jQuery, fadeIn before unload

    - by Starboy
    I'm trying to accomplish a transition effect, if you will. On document ready a div fades out; the problem I'm having is that when a visitor navigates away from the page (or on unload) I want the div to fade back in.

        $(document).ready(function () {
            $('#overlay').fadeOut(2000, 'easeOutQuad');
        });

        $(window).beforeunload(function () {
            $('#overlay').fadeIn(2000, 'easeOutQuad');
        });
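    As an aside, jQuery has no .beforeunload() shortcut method, so the handler normally has to be bound explicitly; a sketch (note that browsers start tearing the page down immediately, so a 2000 ms fade is unlikely to finish or even be visible, and 'easeOutQuad' needs jQuery UI or the easing plugin):

        $(window).bind('beforeunload', function () {
            $('#overlay').fadeIn(2000, 'easeOutQuad');
        });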

    Read the article

  • Access element in json having numerical index

    - by hunt
    I have JSON in the following format, in which I want to access the values 0.4, Kem, 2 and 2000, but it doesn't seem to have named indexes, so how can I access them in jQuery? When I paste the following code into a JSON viewer I get numerical indexes for 0.4, Kem and 2:

        "td": [
            {
                "@attributes": { "class": "odd" },
                "span": [ "3", "7" ]
            },
            "0.4",
            "Kem",
            "24\/04\/2010",
            "2000",
            "2",
            "14000",
            "Good",
            "Buckley",
            "56.0",
            "2:05.32",
            "36.65",
            "54.5"
        ]
        }
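    Because the td array mixes an object (index 0) with plain strings, the remaining values can only be addressed by position. A sketch, assuming the parsed response is held in a variable called data (a hypothetical name) and the positions match the fragment above:

        var td = data.td;
        var rate     = td[1];   // "0.4"
        var name     = td[2];   // "Kem"
        var distance = td[4];   // "2000"
        var number   = td[5];   // "2"

        // or walk everything, e.g. with jQuery:
        $.each(td, function (i, value) {
            console.log(i, value);
        });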

    Read the article

  • How to handle choice field with JPA 2, Hibernate 3.5

    - by phmr
    I have an entity with Integer attributes that looks like this in proto code:

        class MyEntity:
            String name
            Integer frequency
            Integer type
            def getFrequency()
            def getType()

    The get* accessors return strings according to these tables:

    value(type)  HumanReadableString(type)
    1            BSD
    2            Apache
    3            GPL

    min frequency  max frequency  HumanReadableString(frequency)
    0              1000           rare
    1000           2000           frequent
    2001           3000           sexy

    It should be possible to get all possible values that an attribute can take, example:
    getChoices(MyEntity, "type") returns ("rare", "frequent", "sexy")

    It should be possible to get the bound value from the string:
    getValue(MyEntity, "frequency", "sexy") returns (2000,3000)
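    One way to model this (a sketch only; the question leaves the persistence details open, and these enum names are invented for illustration) is to keep the Integer columns and translate them through enums that also expose the full set of choices and the range bounds:

        public enum LicenseType {
            BSD(1), APACHE(2), GPL(3);

            private final int code;
            LicenseType(int code) { this.code = code; }
            public int getCode() { return code; }

            public static LicenseType fromCode(int code) {
                for (LicenseType t : values()) {
                    if (t.code == code) return t;
                }
                throw new IllegalArgumentException("unknown type code: " + code);
            }
        }

        public enum FrequencyBand {
            RARE(0, 1000), FREQUENT(1000, 2000), SEXY(2001, 3000);

            private final int min, max;
            FrequencyBand(int min, int max) { this.min = min; this.max = max; }
            public int getMin() { return min; }
            public int getMax() { return max; }

            public static FrequencyBand forValue(int value) {
                for (FrequencyBand b : values()) {
                    if (value >= b.min && value <= b.max) return b;
                }
                throw new IllegalArgumentException("frequency out of range: " + value);
            }
        }

    The entity's getFrequency()/getType() can then translate the stored integers through FrequencyBand.forValue(frequency) and LicenseType.fromCode(type), while the Integer columns stay as plain JPA basic attributes; the enum values() arrays give the "all possible choices" lookup.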

    Read the article

  • Issue building C++ DLL with Visual Studio 2010

    - by Jon Tackabury
    I have a very simple DLL written in unmanaged C++ that I access from my application. I recently switched to Visual Studio 2010, and the DLL went from 55k down to 35k with no code changes, and now it will no longer load on Windows 2000. I didn't change any code or compiler settings. I have my defines set up for 0x0500, which should include Windows 2000 support. Has anyone else run into this, or have any ideas of what I can do?

    Read the article

  • Does running a SQL Server 2005 database in compatibility level 80 have a negative impact on performance?

    - by Templar
    Our software must be able to run on SQL Server 2000 and 2005. To simplify development, we're running our SQL Server 2005 databases in compatibility level 80. However, database performance seems slower on SQL 2005 than on SQL 2000 in some cases (we have not confirmed this using benchmarks yet). Would upgrading the compatibility level to 90 improve performance on the SQL 2005 servers?
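    For reference, on SQL Server 2005 the compatibility level is read and changed with sp_dbcmptlevel, so comparing the two levels on a test copy is cheap to set up (a sketch; 'YourDatabase' is a placeholder):

        EXEC sp_dbcmptlevel 'YourDatabase';        -- report the current compatibility level
        EXEC sp_dbcmptlevel 'YourDatabase', 90;    -- run at the native SQL Server 2005 level
        EXEC sp_dbcmptlevel 'YourDatabase', 80;    -- revert to SQL Server 2000 behaviour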

    Read the article

  • Does anybody have any suggestions on which of these two approaches is better for large delete?

    - by RPS
    Approach #1:

        DECLARE @count int
        SET @count = 2000

        DECLARE @rowcount int
        SET @rowcount = @count

        WHILE @rowcount = @count
        BEGIN
            DELETE TOP (@count) FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND bCopied = 1
              AND FileNameCRC = @localNameCrc

            SELECT @rowcount = @@ROWCOUNT

            WAITFOR DELAY '000:00:00.400'
        END

    Approach #2:

        DECLARE @count int
        SET @count = 2000

        DECLARE @rowcount int
        SET @rowcount = @count

        WHILE @rowcount = @count
        BEGIN
            DELETE FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND FileNameCRC IN
              (
                  SELECT TOP(@count) FileNameCRC
                  FROM ProductOrderInfo WITH (NOLOCK)
                  WHERE bCopied = 1
                    AND FileNameCRC = @localNameCrc
              )

            SELECT @rowcount = @@ROWCOUNT

            WAITFOR DELAY '000:00:00.400'
        END

    Read the article

  • Redirect Large Number of URLs (HTML Files) to Wordpress

    - by Chetan
    Hi, I have over 2000 HTML files that are now in a WordPress blog. I have the URL map of old_file.html to new WordPress URL. I want a 301 redirect but don't want to add 2000 lines to .htaccess. Can you please suggest how to accomplish this using PHP, so that when there is a request for an old URL, the PHP script looks it up in the database and redirects (301) to the new URL? Thanks.
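    A common shape for this (a sketch only; the rewrite rule, script name, table and column names below are all assumptions, not taken from the question) is one catch-all rule that hands every .html request to a small PHP script, which looks the old file up in a mapping table and issues the 301:

        # .htaccess
        RewriteEngine On
        RewriteRule ^(.+\.html)$ /redirect.php?old=$1 [L]

        <?php
        // redirect.php: map an old static file name to its new WordPress URL
        $old = isset($_GET['old']) ? $_GET['old'] : '';
        $db  = new mysqli('localhost', 'user', 'pass', 'blog');   // placeholder credentials

        $stmt = $db->prepare('SELECT new_url FROM url_map WHERE old_file = ? LIMIT 1');
        $stmt->bind_param('s', $old);
        $stmt->execute();
        $stmt->bind_result($new_url);

        if ($stmt->fetch()) {
            header('Location: ' . $new_url, true, 301);   // permanent redirect
        } else {
            header('HTTP/1.0 404 Not Found');
        }
        exit;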

    Read the article

  • Converting a string into a list in Python

    - by Sam
    I have a text document that contains a list of numbers and I want to convert it to a list. Right now I can only get the entire list in the 0th entry of the list, but I want each number to be an element of the list. Does anyone know of an easy way to do this in Python?

    1000 2000 3000 4000

    to

    ['1000', '2000', '3000', '4000']
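    Reading the whole file and letting split() break it on whitespace handles both one-line and one-number-per-line input; a sketch ('numbers.txt' is a placeholder file name):

        with open('numbers.txt') as f:
            numbers = f.read().split()   # splits on any whitespace, including newlines

        print(numbers)   # ['1000', '2000', '3000', '4000']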

    Read the article

  • Uncheck the "Store only latest version" property in VSS for a project?

    - by SCM
    Hi, I have an issue in VSS 2005. There is a project in VSS with multiple folders and files in it, and most of the files (around 2000) have their "Store only latest version" property checked. I don't know how it was set. I want to change the "Store only latest version" property on those roughly 2000 files to unchecked, so that VSS retains all of their previous versions. Can this be done with a single command that unchecks the option for all the files in the project recursively? Thanks.

    Read the article

  • I need to programmatically remove a batch of unique constraints that I don't know the names of.

    - by Bill
    I maintain a product that is installed at multiple locations which has been haphazardly upgraded. Unique constraints were added to a number of tables, but I have no idea what their names are at any particular instance. What I do know is the table/column pair that has the unique constraint, and I would like to write a script to delete any unique constraint on these column/table combinations. This is MSSQL 2000 and later. Something that works on 2000/2005/2008 would be best!
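    A sketch of a script that works across 2000/2005/2008 (the table and column names are placeholders): the INFORMATION_SCHEMA views expose the constraint names, so the DROP statements can be generated and then executed. Unique indexes created without a constraint will not appear here:

        SELECT 'ALTER TABLE [' + tc.TABLE_NAME + '] DROP CONSTRAINT [' + tc.CONSTRAINT_NAME + ']'
        FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu
          ON ccu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
         AND ccu.TABLE_NAME      = tc.TABLE_NAME
        WHERE tc.CONSTRAINT_TYPE = 'UNIQUE'
          AND tc.TABLE_NAME   = 'YourTable'     -- placeholder
          AND ccu.COLUMN_NAME = 'YourColumn'    -- placeholder

        -- copy and run the generated statements, or loop over them with a cursor and EXEC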

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don't hear about so many working on their log files.

    How can a log file get fragmented? I'm glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a 'wrap-around' method (more on that later).

    When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

    Let's get to see how many VLFs we have in a brand new database.

        USE master
        GO
        CREATE DATABASE VLF_Test
        ON
        ( NAME = VLF_Test,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
          SIZE = 100,
          MAXSIZE = 500,
          FILEGROWTH = 50 )
        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5MB,
          MAXSIZE = 250MB,
          FILEGROWTH = 5MB );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    The results of this are, firstly, a new database created with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB.

    Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 1GB,
          MAXSIZE = 25GB,
          FILEGROWTH = 1GB );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5GB,
          MAXSIZE = 250GB,
          FILEGROWTH = 5GB );

    This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic.
    - If the file growth is lower than 64MB then 4 VLFs are created
    - If the growth is between 64MB and 1GB then 8 VLFs are created
    - If the growth is greater than 1GB then 16 VLFs are created

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let's add some data and see what happens.

        USE vlf_test
        GO
        -- we need somewhere to put the data so, a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
            DROP TABLE A_Table
        GO
        CREATE TABLE A_Table
        (
            Col_A int IDENTITY,
            Col_B CHAR(8000)
        )
        GO
        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it's well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Creating the database with larger files, and also with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don't affect system users. The steps are:

    1. BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
    2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
    3. Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *

    * If you don't know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while.

    This is already detailed far better than I can explain it by Kimberley Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above.

    Knowing when VLFs are being created

    By complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live) Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it as this is going to catch any sneaky auto grows that take place and let you know about them right away.
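    If you want to check the VLF count from a script rather than eyeballing DBCC LOGINFO output, a small sketch like the one below does the counting for the current database. The column list matches SQL Server 2008 (as used in this post); SQL Server 2012 and later add a RecoveryUnitId column to the DBCC LOGINFO result set, so the temp table would need adjusting there:

        -- count the VLFs in the current database's transaction log
        CREATE TABLE #loginfo
        (
            FileId      int,
            FileSize    bigint,
            StartOffset bigint,
            FSeqNo      int,
            Status      int,
            Parity      int,
            CreateLSN   numeric(25,0)
        );

        INSERT #loginfo
        EXECUTE ('DBCC LOGINFO');

        SELECT COUNT(*) AS vlf_count FROM #loginfo;

        DROP TABLE #loginfo;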
    Hassle free monitoring of VLFs

    If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources:
    - MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    - Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    - Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • 65536% Autogrowth!

    - by Tara Kizer
    Twice a year, we move our production systems to our disaster recovery site. Last Saturday night was one of those days. There are about 50 SQL Server databases to be moved to the DR site, which is done via database mirroring. It takes only a few seconds to fail over, but some databases have a bit more involved work such as setting up replication. Everything went relatively smoothly, but we encountered a weird bug on our most mission-critical system.

    After everything was successfully failed over to the DR site, it was noticed that mirroring was in a suspended state on one of the databases. We thought we had run into a SQL Server 2005 bug that we had been encountering and were working with Microsoft on a fix. Microsoft did fix it in both SQL Server 2005 Service Pack 3 Cumulative Update 13 and Service Pack 4 Cumulative Update 2; however, SP3 CU13 and SP4 both recently failed on this system, so we were not patched yet with the bug fix. As the suspended state was causing us issues with replication, we dropped mirroring.

    We then noticed we had 10MB of free disk space on the mount point where the principal's data files are stored. I knew something was amiss, as this system should have at least 150GB free on that mount point. I immediately checked the main database's data file and was shocked to see an autogrowth size of 65536%. The data file autogrew right before mirroring went into the suspended state. 65536%!

    I didn't have a lot of time to research whether this autogrowth problem was a known SQL Server bug, so I deferred that research to today. A quick Google search yielded no results, but emphasis on "quick". I checked our performance system, which was recently restored with a copy of the affected production database, and found the autogrowth setting to be 512MB. So this autogrowth bug was encountered sometime in the last two weeks. On February 26th, we had attempted to install SQL 2005 SP4 on production, however it had failed (PSS case open with Microsoft). I suspected that the SP4 failure was somehow related to this autogrowth bug, although that turned out not to be the case.

    I then tweeted (@TaraKizer) about this problem to see if the SQL Server community (#sqlhelp) had any insights. It seems several people have either heard of this bug or encountered it. Aaron Bertrand (blog|twitter) referred me to this Connect item.

    Our affected database originated on SQL Server 2000 and was upgraded to SQL Server 2005 in 2007. Back on SQL Server 2000, we were using the default file growth setting, which was a percentage. Sometime after the 2005 upgrade is when we changed it to 512MB. Our situation seemed to fit the bug Aaron referred to me, so now the question was whether Microsoft had fixed it yet.

    I received a reply to my tweet from Amit Banerjee (twitter) that it had been fixed in SP3 CU1 (KB958004). My affected system is SP3 CU8, so I was initially confused why we had encountered the bug. Because I don't read things fully, I had missed that there are additional steps you have to follow after applying the bug fix. Amit set me straight. Although you can read this information in the KB article, I will also copy it here in case you are as lazy as me and miss the most important section of it (although if you are as lazy as me, you won't have read this far down my blog post):

    This hotfix will prevent only future occurrences of this problem.
    For example, if you restore a database from SQL Server 2000 to a SQL Server 2005 instance that contains this hotfix, this problem will not occur. However, if you already have a database that is affected by this problem, you must follow these steps to resolve this problem manually:

    1. Apply this hotfix.
    2. Set the file growth settings for the affected files to percentage settings, and then set the settings back to megabyte settings.
    3. Take the database offline, and then bring it back online.
    4. Verify that the values of the is_percent_growth column are correct in the sys.database_files system table and in the sys.master_files system table.
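    Translated into T-SQL, steps 2 to 4 look roughly like this sketch (the database name, logical file name and the 512MB growth value are placeholders for your own settings):

        -- 2. toggle the growth setting to a percentage and back to megabytes
        ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_Data, FILEGROWTH = 10%);
        ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_Data, FILEGROWTH = 512MB);

        -- 3. cycle the database offline and back online
        ALTER DATABASE YourDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE YourDb SET ONLINE;

        -- 4. confirm the growth flag is no longer marked as a percentage
        USE YourDb;
        SELECT name, is_percent_growth, growth FROM sys.database_files;
        SELECT name, is_percent_growth, growth FROM sys.master_files WHERE database_id = DB_ID('YourDb');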

    Read the article

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing (setting: old default -> new default, with notes):

    - back_log: 50 -> 50 + (max_connections / 5), capped at 900
    - binlog_checksum: off -> CRC32 (new variable in 5.6)
    - binlog_row_event_max_size: 1k -> 8k
    - flush_time: 1800 -> 0 on Windows (was already 0 on other platforms)
    - host_cache_size: 128 -> 128 + 1 for each of the first 500 max_connections + 1 for every 20 max_connections over 500, capped at 2000 (new variable in 5.6)
    - innodb_autoextend_increment: 8 -> 64 (now affects *.ibd files; 64 means 64 megabytes)
    - innodb_buffer_pool_instances: 0 -> 8; on 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M
    - innodb_concurrency_tickets: 500 -> 5000
    - innodb_file_per_table: off -> on
    - innodb_log_file_size: 5M -> 48M (InnoDB will always change the size to match the my.cnf value; also see innodb_log_compressed_pages and binlog_row_image)
    - innodb_old_blocks_time: 0 -> 1000 (1 second)
    - innodb_open_files: 300 -> 300; if innodb_file_per_table is ON, the higher of table_open_cache or 300
    - innodb_purge_batch_size: 20 -> 300
    - innodb_purge_threads: 0 -> 1
    - innodb_stats_on_metadata: on -> off
    - join_buffer_size: 128k -> 256k
    - max_allowed_packet: 1M -> 4M
    - max_connect_errors: 10 -> 100
    - open_files_limit: 0 -> 5000 (see note 1)
    - query_cache_size: 0 -> 1M
    - query_cache_type: on/1 -> off/0
    - sort_buffer_size: 2M -> 256k
    - sql_mode: none -> NO_ENGINE_SUBSTITUTION (see later post about the default my.cnf for STRICT_TRANS_TABLES)
    - sync_master_info: 0 -> 10000 (recommend: master_info_repository=table)
    - sync_relay_log: 0 -> 10000
    - sync_relay_log_info: 0 -> 10000 (recommend: relay_log_info_repository=table; also see Replication Relay and Status Logs)
    - table_definition_cache: 400 -> 400 + table_open_cache / 2, capped at 2000
    - table_open_cache: 400 -> 2000 (also see table_open_cache_instances)
    - thread_cache_size: 0 -> 8 + max_connections/100, capped at 100

    Note 1: In 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. Now it uses the higher of that and (5000 or what you specify).

    We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete.

    Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change. As part of this work I reviewed every old server setting plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team and the many excellent blog entries and comments from others over the years, including from many MySQL Gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan.

    With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The Gurus don't need this but for many newcomers the defaults will be very useful. Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances for 32-bit Windows.
    This is because we've found that DLLs with specified load addresses often fragment the limited four gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed.

    If you change the value of innodb_log_file_size in my.cnf you will see a message like this in the error log file at the next restart, instead of the old error message:

    [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153

    One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point of sale terminals and routers through shared hosting or end user systems and on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were for the smaller systems and these change that to larger shared hosting or shared end user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks.

    We're very interested in your feedback, comments and suggestions.

    Read the article

  • zxing project on android

    - by Aisthesis Cronos
    Hello everybody. Many weeks ago I started a mini project on Android that requires ZXing. I followed several tutorials on this site and elsewhere (for example tuto1, and many tags and tutorials here: tuto2, tuto3 ...) but I failed each time. I can't import the ZXing Android project into the Eclipse IDE so that I can compile it together with my own code, rather than going through an Intent to the ZXing Barcode Scanner APK, which is what my program does at the moment:

        private Button.OnClickListener btScanListener = new Button.OnClickListener() {
            public void onClick(View v) {
                Intent intent = new Intent("com.google.zxing.client.android.SCAN");
                intent.putExtra("SCAN_MODE", "QR_CODE_MODE");
                try {
                    startActivityForResult(intent, REQUEST_SCAN);
                } catch (ActivityNotFoundException e) {
                    Toast.makeText(Main.this, "Barcode Scanner not installed", 2000).show();
                }
            }
        };

        public void onActivityResult(int reqCode, int resCode, Intent intent) {
            if (REQUEST_SCAN == reqCode) {
                if (RESULT_OK == resCode) {
                    String contents = intent.getStringExtra("SCAN_RESULT");
                    Toast.makeText(this, "Succès : " + contents, 2000).show();
                } else if (RESULT_CANCELED == resCode) {
                    Toast.makeText(this, "Scan annulé", 2000).show();
                }
            }
        }

    I feel disappointed, frustrated and sad; I still have errors after importing the project. I tried both versions 1.5 and 1.6 of ZXing. I tried to import the project from c:\ZXing-1.6\android, and another new project from c:\ZXing-1.6\zxing-1.6\android. I also checked out SVN (http://zxing.googlecode.com/svn/trunk/ as zxing-read-only) with TortoiseSVN and reproduced the same steps, but unfortunately without results! I'm really fed up with myself... Please help me solve this problem: how can I import the project and compile it correctly in my own project?

    1 - I use Windows 7 64-bit Home Premium
    2 - Eclipse IDE for Java EE Web Developers. Version: Helios Service Release 2, Build id: 20110218-0911

    What is the effective and sure method to get this running? Otherwise, if there is a video or a detailed guide, or someone who has already done it previously, I would really appreciate it if someone would help me out.

    Read the article

  • Number of args for stored procedure PLS-00306

    - by Peter Kaleta
    Hi, I have a problem with calling my procedure. Oracle throws the PLS-00306 error: wrong number or types of arguments in call to procedure. With my type declaration, the procedure has exactly the declaration given in the header below. If I run it as a separate procedure it works; when it runs in the ODCI interface for extensible index creation, it throws PLS-00306.

        MEMBER PROCEDURE FILL_TREE_LVL(target_column VARCHAR2, cur_lvl NUMBER, max_lvl NUMBER, parent_rect NUMBER, start_x NUMBER, start_y NUMBER, end_x NUMBER, end_y NUMBER)
        IS
            stmt VARCHAR2(2000);
            rect_id NUMBER;
            diff_x NUMBER;
            diff_y NUMBER;
            new_start_x NUMBER;
            new_end_x NUMBER;
            i NUMBER;
            j NUMBER;
        BEGIN
            {...}
        END FILL_TREE_LVL;

        STATIC FUNCTION ODCIINDEXCREATE (ia SYS.ODCIINDEXINFO, parms VARCHAR2, env SYS.ODCIEnv) RETURN NUMBER
        IS
            stmt VARCHAR2(2000);
            stmt2 VARCHAR2(2000);
            min_x NUMBER;
            max_x NUMBER;
            min_y NUMBER;
            max_y NUMBER;
            lvl NUMBER;
            rect_id NUMBER;
            pt_tab VARCHAR2(50);
            rect_tab VARCHAR2(50);
            cnum NUMBER;
            TYPE point_rect IS RECORD(
                point_id NUMBER,
                rect_id NUMBER
            );
            TYPE point_rect_tab IS TABLE OF point_rect;
            pr_table point_rect_tab;
        BEGIN
            {...}
            FILL_TREE_LVL('any string', 0, lvl, min_x, min_y, max_x, max_y);
            {...}
        END;

    Read the article

  • NHibernate with .NET Framework 2

    - by Daniel Dolz
    Hi. I cannot make NHibernate 2.1 work on machines without Framework 3.X (basically Windows 2000 SP4, although it happens with XP too). The NHibernate docs don't mention this. Maybe you can help? I NEED to make NHibernate 2.1 work on Windows 2000 PCs, so do you think this can be done? Thanks!

    PD: The database is SQL 2000/2005. The error is (localized messages translated from Spanish):

    NHibernate.MappingException: Could not compile the mapping document: Datos.NH_VEN_ComprobanteBF.hbm.xml
    ---> NHibernate.HibernateException: Could not instantiate dialect class NHibernate.Dialect.MsSql2000Dialect
    ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
    ---> System.TypeInitializationException: The type initializer for 'NHibernate.NHibernateUtil' threw an exception.
    ---> System.TypeLoadException: Could not load type 'System.DateTimeOffset' from assembly 'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
        at NHibernate.Type.DateTimeOffsetType.get_ReturnedClass()
        at NHibernate.NHibernateUtil..cctor()
    --- End of inner exception stack trace ---
        at NHibernate.Dialect.Dialect..ctor()
        at NHibernate.Dialect.MsSql2000Dialect..ctor()
    --- End of inner exception stack trace ---
        at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck)
        at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache)
        at System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache)
        at System.Activator.CreateInstance(Type type, Boolean nonPublic)
        at NHibernate.Bytecode.ActivatorObjectsFactory.CreateInstance(Type type)
        at NHibernate.Dialect.Dialect.InstantiateDialect(String dialectName)
    --- End of inner exception stack trace ---
        at NHibernate.Dialect.Dialect.InstantiateDialect(String dialectName)
        at NHibernate.Dialect.Dialect.GetDialect(IDictionary`2 props)
        at NHibernate.Cfg.Configuration.AddValidatedDocument(NamedXmlDocument doc)
    --- End of inner exception stack trace ---
        at NHibernate.Cfg.Configuration.LogAndThrow(Exception exception)
        at NHibernate.Cfg.Configuration.AddValidatedDocument(NamedXmlDocument doc)
        at NHibernate.Cfg.Configuration.ProcessMappingsQueue()

    ... and it continues.

    Read the article

  • Number of args for stored procedure PLS-00306

    - by Peter Kaleta
    Hi, I have a problem with calling my procedure. Oracle throws the PLS-00306 error: wrong number or types of arguments in call to procedure. With my type declaration, the procedure has exactly the declaration given in the header below. If I run it as a separate procedure it works; when I work in the ODCI interface for extensible index creation, it throws PLS-00306.

        MEMBER PROCEDURE FILL_TREE_LVL (target_column VARCHAR2, cur_lvl NUMBER, max_lvl NUMBER, parent_rect NUMBER, start_x NUMBER, start_y NUMBER, end_x NUMBER, end_y NUMBER)
        IS
            stmt VARCHAR2(2000);
            rect_id NUMBER;
            diff_x NUMBER;
            diff_y NUMBER;
            new_start_x NUMBER;
            new_end_x NUMBER;
            i NUMBER;
            j NUMBER;
        BEGIN
            {...}
        END FILL_TREE_LVL;

        STATIC FUNCTION ODCIINDEXCREATE (ia SYS.ODCIINDEXINFO, parms VARCHAR2, env SYS.ODCIEnv) RETURN NUMBER
        IS
            stmt VARCHAR2(2000);
            stmt2 VARCHAR2(2000);
            min_x NUMBER;
            max_x NUMBER;
            min_y NUMBER;
            max_y NUMBER;
            lvl NUMBER;
            rect_id NUMBER;
            pt_tab VARCHAR2(50);
            rect_tab VARCHAR2(50);
            cnum NUMBER;
            TYPE point_rect IS RECORD(
                point_id NUMBER,
                rect_id NUMBER
            );
            TYPE point_rect_tab IS TABLE OF point_rect;
            pr_table point_rect_tab;
        BEGIN
            {...}
            FILL_TREE_LVL('any string', 0, lvl, 0, min_x, min_y, max_x, max_y);
            {...}
        END;

    Read the article

  • Learning Hibernate: too many connections

    - by stivlo
    I'm trying to learn Hibernate, so I wrote the simplest possible Person entity and tried to insert 2000 of them. I know I'm using deprecated methods; I will try to figure out the new ones later. First, here is the Person class:

        @Entity
        public class Person {
            private int id;
            private String name;

            @Id
            @GeneratedValue(strategy = GenerationType.TABLE, generator = "person")
            @TableGenerator(name = "person", table = "sequences", allocationSize = 1)
            public int getId() { return id; }
            public void setId(int id) { this.id = id; }

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

    Then I wrote a small App class that inserts 2000 entities in a loop:

        public class App {
            private static AnnotationConfiguration config;

            public static void insertPerson() {
                SessionFactory factory = config.buildSessionFactory();
                Session session = factory.getCurrentSession();
                session.beginTransaction();
                Person aPerson = new Person();
                aPerson.setName("John");
                session.save(aPerson);
                session.getTransaction().commit();
            }

            public static void main(String[] args) {
                config = new AnnotationConfiguration();
                config.addAnnotatedClass(Person.class);
                config.configure("hibernate.cfg.xml"); // is the default already
                new SchemaExport(config).create(true, true); // print and execute
                for (int i = 0; i < 2000; i++) {
                    insertPerson();
                }
            }
        }

    What I get after a while is:

        Exception in thread "main" org.hibernate.exception.JDBCConnectionException: Cannot open connection
        Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections

    Now I know that it would probably work if I put the transaction outside the loop, but mine was a test to see what happens when executing multiple transactions. And since only one is open at a time, it should work. I tried to add session.close() after the commit, but I got:

        Exception in thread "main" org.hibernate.SessionException: Session was already closed

    So how do I solve the problem?
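    The connections run out because each call to insertPerson() builds a brand-new SessionFactory, and every factory opens its own connection pool that is never closed. A sketch of the usual structure, with a single factory reused for all 2000 inserts (it assumes current_session_context_class is set to "thread", which is what getCurrentSession() with this style of code normally relies on):

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.AnnotationConfiguration;
        import org.hibernate.tool.hbm2ddl.SchemaExport;

        public class App {
            public static void main(String[] args) {
                AnnotationConfiguration config = new AnnotationConfiguration();
                config.addAnnotatedClass(Person.class);
                config.configure("hibernate.cfg.xml");
                new SchemaExport(config).create(true, true);

                // one factory (and one connection pool) for the whole run
                SessionFactory factory = config.buildSessionFactory();

                for (int i = 0; i < 2000; i++) {
                    Session session = factory.getCurrentSession();
                    session.beginTransaction();
                    Person aPerson = new Person();
                    aPerson.setName("John");
                    session.save(aPerson);
                    // the commit also closes the thread-bound current session,
                    // which is why an explicit session.close() afterwards fails
                    session.getTransaction().commit();
                }
                factory.close();
            }
        }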

    Read the article
