Search Results

Search found 8997 results on 360 pages for 'apt cache'.


  • Can't connect to STunnel when it's running as a service

    - by John Francis
    I've got STunnel configured to proxy non-SSL POP3 requests to GMail on port 111. This works fine when STunnel is running as a desktop app, but when I run the STunnel service, I can't connect to port 111 on the machine (using Outlook Express, for example). The STunnel log file shows the port binding succeeding, but it never sees a connection. Is there something preventing connections to that port when STunnel runs as a service? Here's stunnel.conf:
      cert = stunnel.pem
      ; Some performance tunings
      socket = l:TCP_NODELAY=1
      socket = r:TCP_NODELAY=1
      ; Some debugging stuff useful for troubleshooting
      debug = 7
      output = stunnel.log
      ; Use it for client mode
      client = yes
      ; Service-level configuration
      [gmail]
      accept = 127.0.0.1:111
      connect = pop.gmail.com:995
    stunnel.log from the service:
      2010.10.07 12:14:22 LOG5[80444:72984]: Reading configuration from file stunnel.conf
      2010.10.07 12:14:22 LOG7[80444:72984]: Snagged 64 random bytes from C:/.rnd
      2010.10.07 12:14:23 LOG7[80444:72984]: Wrote 1024 new random bytes to C:/.rnd
      2010.10.07 12:14:23 LOG7[80444:72984]: PRNG seeded successfully
      2010.10.07 12:14:23 LOG7[80444:72984]: Certificate: stunnel.pem
      2010.10.07 12:14:23 LOG7[80444:72984]: Certificate loaded
      2010.10.07 12:14:23 LOG7[80444:72984]: Key file: stunnel.pem
      2010.10.07 12:14:23 LOG7[80444:72984]: Private key loaded
      2010.10.07 12:14:23 LOG7[80444:72984]: SSL context initialized for service gmail
      2010.10.07 12:14:23 LOG5[80444:72984]: Configuration successful
      2010.10.07 12:14:23 LOG5[80444:72984]: No limit detected for the number of clients
      2010.10.07 12:14:23 LOG7[80444:72984]: FD=156 in non-blocking mode
      2010.10.07 12:14:23 LOG7[80444:72984]: Option SO_REUSEADDR set on accept socket
      2010.10.07 12:14:23 LOG7[80444:72984]: Service gmail bound to 0.0.0.0:111
      2010.10.07 12:14:23 LOG7[80444:72984]: Service gmail opened FD=156
      2010.10.07 12:14:23 LOG5[80444:72984]: stunnel 4.34 on x86-pc-mingw32-gnu with OpenSSL 1.0.0a 1 Jun 2010
      2010.10.07 12:14:23 LOG5[80444:72984]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
    stunnel.log from the desktop (working) process:
      2010.10.07 12:10:31 LOG5[80824:81200]: Reading configuration from file stunnel.conf
      2010.10.07 12:10:31 LOG7[80824:81200]: Snagged 64 random bytes from C:/.rnd
      2010.10.07 12:10:32 LOG7[80824:81200]: Wrote 1024 new random bytes to C:/.rnd
      2010.10.07 12:10:32 LOG7[80824:81200]: PRNG seeded successfully
      2010.10.07 12:10:32 LOG7[80824:81200]: Certificate: stunnel.pem
      2010.10.07 12:10:32 LOG7[80824:81200]: Certificate loaded
      2010.10.07 12:10:32 LOG7[80824:81200]: Key file: stunnel.pem
      2010.10.07 12:10:32 LOG7[80824:81200]: Private key loaded
      2010.10.07 12:10:32 LOG7[80824:81200]: SSL context initialized for service gmail
      2010.10.07 12:10:32 LOG5[80824:81200]: Configuration successful
      2010.10.07 12:10:32 LOG5[80824:81200]: No limit detected for the number of clients
      2010.10.07 12:10:32 LOG7[80824:81200]: FD=156 in non-blocking mode
      2010.10.07 12:10:32 LOG7[80824:81200]: Option SO_REUSEADDR set on accept socket
      2010.10.07 12:10:32 LOG7[80824:81200]: Service gmail bound to 0.0.0.0:111
      2010.10.07 12:10:32 LOG7[80824:81200]: Service gmail opened FD=156
      2010.10.07 12:10:33 LOG5[80824:81200]: stunnel 4.34 on x86-pc-mingw32-gnu with OpenSSL 1.0.0a 1 Jun 2010
      2010.10.07 12:10:33 LOG5[80824:81200]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
      2010.10.07 12:10:33 LOG7[80824:81844]: Service gmail accepted FD=188 from 127.0.0.1:24813
      2010.10.07 12:10:33 LOG7[80824:81844]: Creating a new thread
      2010.10.07 12:10:33 LOG7[80824:81844]: New thread created
      2010.10.07 12:10:33 LOG7[80824:25144]: Service gmail started
      2010.10.07 12:10:33 LOG7[80824:25144]: FD=188 in non-blocking mode
      2010.10.07 12:10:33 LOG7[80824:25144]: Option TCP_NODELAY set on local socket
      2010.10.07 12:10:33 LOG5[80824:25144]: Service gmail accepted connection from 127.0.0.1:24813
      2010.10.07 12:10:33 LOG7[80824:25144]: FD=212 in non-blocking mode
      2010.10.07 12:10:33 LOG6[80824:25144]: connect_blocking: connecting 209.85.227.109:995
      2010.10.07 12:10:33 LOG7[80824:25144]: connect_blocking: s_poll_wait 209.85.227.109:995: waiting 10 seconds
      2010.10.07 12:10:33 LOG5[80824:25144]: connect_blocking: connected 209.85.227.109:995
      2010.10.07 12:10:33 LOG5[80824:25144]: Service gmail connected remote server from 192.168.1.9:24814
      2010.10.07 12:10:33 LOG7[80824:25144]: Remote FD=212 initialized
      2010.10.07 12:10:33 LOG7[80824:25144]: Option TCP_NODELAY set on remote socket
      2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): before/connect initialization
      2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write client hello A
      2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server hello A
      2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server certificate A
      2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server done A
      2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write client key exchange A
      2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write change cipher spec A
      2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write finished A
      2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 flush data
      2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read finished A
      2010.10.07 12:10:33 LOG7[80824:25144]: 1 items in the session cache
      2010.10.07 12:10:33 LOG7[80824:25144]: 1 client connects (SSL_connect())
      2010.10.07 12:10:33 LOG7[80824:25144]: 1 client connects that finished
      2010.10.07 12:10:33 LOG7[80824:25144]: 0 client renegotiations requested
      2010.10.07 12:10:33 LOG7[80824:25144]: 0 server connects (SSL_accept())
      2010.10.07 12:10:33 LOG7[80824:25144]: 0 server connects that finished
      2010.10.07 12:10:33 LOG7[80824:25144]: 0 server renegotiations requested
      2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache hits
      2010.10.07 12:10:33 LOG7[80824:25144]: 0 external session cache hits
      2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache misses
      2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache timeouts
      2010.10.07 12:10:33 LOG6[80824:25144]: SSL connected: new session negotiated
      2010.10.07 12:10:33 LOG6[80824:25144]: Negotiated ciphers: RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5
      2010.10.07 12:10:34 LOG7[80824:25144]: SSL socket closed on SSL_read
      2010.10.07 12:10:34 LOG7[80824:25144]: Sending socket write shutdown
      2010.10.07 12:10:34 LOG5[80824:25144]: Connection closed: 53 bytes sent to SSL, 118 bytes sent to socket
      2010.10.07 12:10:34 LOG7[80824:25144]: Service gmail finished (0 left)
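    Since the bind succeeds but no connection ever arrives, a quick sanity check is to confirm what the service process is actually listening on and whether a local client can reach it at all. A minimal check from a command prompt might look like the sketch below; the service name "stunnel" is an assumption, so substitute whatever name the STunnel installer registered:
      :: Is anything listening on port 111, and under which PID?
      netstat -ano | findstr :111
      :: Which account does the service run under? (Look at SERVICE_START_NAME.)
      sc qc stunnel
      :: Can a local client reach the port at all?
      telnet 127.0.0.1 111
    If netstat shows the listener but telnet still fails, a per-process firewall rule or a restriction on the service account would be the likely candidates.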

    Read the article

  • 401 Using Multiple Authentication methods IE 10 only

    - by jon3laze
    I am not sure if this is more of a coding issue or a server setup issue, so I've posted it on Stack Overflow and here... On our production site we've run into an issue that is specific to Internet Explorer 10. I am using jQuery to do an ajax POST to a web service on the same domain, and in IE10 I am getting a 401 response; IE9 works perfectly fine. I should mention that we have mirrored code in another area of our site and it works perfectly fine in IE10. The only difference between the two areas is that one is under a subdomain and the other is at the root level: www.my1stdomain.com vs. portal.my2nddomain.com. The directory structures on the server for these are:
      \my1stdomain\webservice\name\service.aspx
      \portal\webservice\name\service.aspx
    Inside of the \portal\ and \my1stdomain\ folders I have a page that does an ajax call; both pages are identical.
      $.ajax({
          type: 'POST',
          url: '/webservice/name/service.aspx/function',
          cache: false,
          contentType: 'application/json; charset=utf-8',
          dataType: 'json',
          data: '{ "json": "data" }',
          success: function() { },
          error: function() { }
      });
    I've verified that permissions are the same on both folders on the server side. I've applied a workaround of placing <meta http-equiv="X-UA-Compatible" content="IE=9"> to force compatibility view (putting IE into compatibility mode fixes the issue). This seems to be working in IE10 on Windows 7; however, IE10 on Windows 8 still sees the same issue. These pages are classic ASP with the headers that are being included, and there are no other meta tags being used. The doctype is specified as <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> on the portal page and <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> on the main domain.
    UPDATE 1
    I used Microsoft Network Monitor 3.4 on the server to capture the request, with the following filter to capture the 401:
      Property.HttpStatusCode.StringToNumber == 401
    This was the response:
      - Http: Response, HTTP/1.1, Status: Unauthorized, URL: /webservice/name/service.aspx/function Using Multiple Authentication Methods, see frame details
          ProtocolVersion: HTTP/1.1
          StatusCode: 401, Unauthorized
          Reason: Unauthorized
        - ContentType: application/json; charset=utf-8
          - MediaType: application/json; charset=utf-8
              MainType: application/json
              charset: utf-8
          Server: Microsoft-IIS/7.0
          jsonerror: true
        - WWWAuthenticate: Negotiate
          - Authenticate: Negotiate
              AuthenticateData: Negotiate
        - WWWAuthenticate: NTLM
          - Authenticate: NTLM
              AuthenticateData: NTLM
          XPoweredBy: ASP.NET
          Date: Mon, 04 Mar 2013 21:13:39 GMT
          ContentLength: 105
          HeaderEnd: CRLF
        - payload: HttpContentType = application/json; charset=utf-8
            HTTPPayloadLine: {"Message":"Authentication failed.","StackTrace":null,"ExceptionType":"System.InvalidOperationException"}
    The thing here that really stands out is: Unauthorized, URL: /webservice/name/service.aspx/function Using Multiple Authentication Methods. With this I'm still confused as to why this only happens in IE10 if it's a permission/authentication issue. What was added in IE10, and where should I be looking for the root cause of this?
    UPDATE 2
    Here are the headers from the client machine from Fiddler (server information removed):
    Main:
      SESSION STATE: Done.
      Request Entity Size: 64 bytes. Response Entity Size: 9 bytes.
      == FLAGS ==================
      BitFlags: [ServerPipeReused] 0x10
      X-EGRESSPORT: 44537
      X-RESPONSEBODYTRANSFERLENGTH: 9
      X-CLIENTPORT: 44770
      UI-COLOR: Green
      X-CLIENTIP: 127.0.0.1
      UI-OLDCOLOR: WindowText
      UI-BOLD: user-marked
      X-SERVERSOCKET: REUSE ServerPipe#46
      X-HOSTIP: ***.***.***.***
      X-PROCESSINFO: iexplore:2644
      == TIMING INFO ============
      ClientConnected: 14:43:08.488
      ClientBeginRequest: 14:43:08.488
      GotRequestHeaders: 14:43:08.488
      ClientDoneRequest: 14:43:08.488
      Determine Gateway: 0ms
      DNS Lookup: 0ms
      TCP/IP Connect: 0ms
      HTTPS Handshake: 0ms
      ServerConnected: 14:40:28.943
      FiddlerBeginRequest: 14:43:08.488
      ServerGotRequest: 14:43:08.488
      ServerBeginResponse: 14:43:08.592
      GotResponseHeaders: 14:43:08.592
      ServerDoneResponse: 14:43:08.592
      ClientBeginResponse: 14:43:08.592
      ClientDoneResponse: 14:43:08.592
      Overall Elapsed: 0:00:00.104
      The response was buffered before delivery to the client.
      == WININET CACHE INFO ============
      This URL is not present in the WinINET cache. [Code: 2]
    Portal:
      SESSION STATE: Done.
      Request Entity Size: 64 bytes. Response Entity Size: 105 bytes.
      == FLAGS ==================
      BitFlags: [ClientPipeReused, ServerPipeReused] 0x18
      X-EGRESSPORT: 44444
      X-RESPONSEBODYTRANSFERLENGTH: 105
      X-CLIENTPORT: 44439
      X-CLIENTIP: 127.0.0.1
      X-SERVERSOCKET: REUSE ServerPipe#7
      X-HOSTIP: ***.***.***.***
      X-PROCESSINFO: iexplore:7132
      == TIMING INFO ============
      ClientConnected: 14:37:59.651
      ClientBeginRequest: 14:38:01.397
      GotRequestHeaders: 14:38:01.397
      ClientDoneRequest: 14:38:01.397
      Determine Gateway: 0ms
      DNS Lookup: 0ms
      TCP/IP Connect: 0ms
      HTTPS Handshake: 0ms
      ServerConnected: 14:37:57.880
      FiddlerBeginRequest: 14:38:01.397
      ServerGotRequest: 14:38:01.397
      ServerBeginResponse: 14:38:01.464
      GotResponseHeaders: 14:38:01.464
      ServerDoneResponse: 14:38:01.464
      ClientBeginResponse: 14:38:01.464
      ClientDoneResponse: 14:38:01.464
      Overall Elapsed: 0:00:00.067
      The response was buffered before delivery to the client.
      == WININET CACHE INFO ============
      This URL is not present in the WinINET cache. [Code: 2]
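    Since the capture shows IIS offering both Negotiate and NTLM ("Using Multiple Authentication Methods"), one experiment worth trying is pinning Windows Authentication to a single provider for the portal application and seeing whether IE10's behavior changes. Below is a sketch of the relevant IIS 7 web.config section; this is a diagnostic idea, not a confirmed fix, and the <providers> element may be locked at the applicationHost.config level, in which case the same change has to be made there:
      <configuration>
        <system.webServer>
          <security>
            <authentication>
              <!-- Offer only NTLM so the client cannot pick a failing Negotiate path -->
              <windowsAuthentication enabled="true">
                <providers>
                  <clear />
                  <add value="NTLM" />
                </providers>
              </windowsAuthentication>
            </authentication>
          </security>
        </system.webServer>
      </configuration>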

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited to pure single-threaded workloads. The new SPARC T4 processor fills that gap by offering 5x better single-thread performance than previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor stands up to its promises. Several tests were performed, mainly focused on:
    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads
    The tests consisted of reading large amounts of data (tens of gigabytes), processing it, and writing it back to a file or an Oracle 11gR2 database table. They are CPU-, memory- and IO-bound tests. Given the main focus of this project (CPU performance), bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.
    Multi-thread: testing throughput on the Oracle T4-1
    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks, and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:
      customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
      1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
      2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
      3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
      4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
      […]
    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. the L2ARC): once the cache is hot, there is no performance difference between the underlying storage devices. From 16 threads upwards, however, it is clear that IO becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, performance is almost constant, with just a slow decline. For the database load tests, only the best IO configuration (using external storage devices) was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.
    Single-thread: testing single-thread performance
    Seven different tests were performed on both servers. Given that only one thread, and thus one file, was read, no IO bottleneck was involved; all data was served from the ZFS cache.
    - Read File → Filter → Write File: read a file, filter the data, and write the filtered data to a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB.
    - Read File → Load Database Table: read a file and insert into a single Oracle table.
    - Average: read a file, compute the average of a numeric column, and write the result to a new file.
    - Division & Square Root: read a file, perform a division and square root on a numeric column, and write the result data to a new file.
    - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: read a file, transform it, and write the result data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: read a file, sort a numeric and an alphanumeric column, and write the result data to a new file.
    The following table and graph present the final results of the tests. Throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and T4-1.
      Test                     T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
      Read/Filter/Write              125              806             645%             100                  16
      Read/Load Database             195             1111             570%              64                  11
      Average                         96              557             580%             130                  22
      Division & Square Root         161             1054             655%              78                  12
      Oracle DB Dump                 164              945             576%              76                  13
      Transform                      159             1124             707%              79                  11
      Sort                           251             1336             532%              50                   9
    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!
    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster, with more than 650,000 rows per second, without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
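    To make the shape of these single-thread tests concrete, here is a minimal sketch (in Python, not Talend's actual generated code) of the "Read File → Filter → Write File" scenario, assuming the semicolon-separated layout shown above; the file names are hypothetical:
      import csv

      def filter_active(src_path, dst_path):
          # Keep only lines whose Cust_Status column (8th field) is "A",
          # mirroring the benchmark's filter on the "Status" column.
          with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
              reader = csv.reader(src, delimiter=";")
              writer = csv.writer(dst, delimiter=";")
              for row in reader:
                  if row[7] == "A":
                      writer.writerow(row)

      filter_active("customers.csv", "customers_active.csv")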

    Read the article

  • Tuning Red Gate: #4 of Some

    - by Grant Fritchey
    First time connecting to these servers directly (keys to the kingdom, bwa-ha-ha-ha... oh, excuse me), so I'm going to take a look at the server properties, just to see if there are any issues there. Max memory is set, cool, first possible silly mistake cleared. In fact, these look to be nicely set up. Oh, I'd like to see the ANSI standards set by default, but it's not a big deal. The default location for database data is the F:\ drive, where I saw all the activity last time. Cool, the people maintaining the servers in our company listen: the parallelism threshold is set to 35 and optimize for ad hoc is enabled. No shocks, no surprises. The basic setup is appropriate. On to the problem database. Nothing wrong in the properties. The database is in SIMPLE recovery, but I think it's a reporting system, so no worries there. Again, I'd prefer to see the ANSI settings for connections, but that's the worst thing I can see. Time to look at the queries, tables, indexes and statistics, because all the information I've collected over the last several days suggests that we're not looking at a systemic problem (except possibly not enough memory), but at the traditional tuning issues. I just want to note that I started looking at the system, not the queries. So should you when tuning your environment. I know, from the data collected through SQL Monitor, what my top poor-performing queries are, the most frequently called, etc. I'm starting with the most frequently called. I'm going to get the execution plan for this thing out of the cache (although, with the cache dumping constantly, I might not get it). And it's not there. Called 1.3 million times over the last 3 days, but it's not in cache. Wow. OK. I'll see what's in cache for this database:
      SELECT  deqs.creation_time,
              deqs.execution_count,
              deqs.max_logical_reads,
              deqs.max_elapsed_time,
              deqs.total_logical_reads,
              deqs.total_elapsed_time,
              deqp.query_plan,
              SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,
                        (deqs.statement_end_offset - deqs.statement_start_offset) / 2
                        + 1) AS QueryStatement
      FROM    sys.dm_exec_query_stats AS deqs
              CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
              CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
      WHERE   dest.dbid = DB_ID('Warehouse')
              AND deqs.statement_end_offset > 0
              AND deqs.statement_start_offset > 0
      ORDER BY deqs.max_logical_reads DESC;
    And looking at the most expensive operation, we have our first bad boy: multiple table scans against very large sets of data, and a sort operation. A sort operation? It's an insert. Oh, I see: the table is a heap, so it's doing an insert, then sorting the data, and then inserting into the primary key. First question: why isn't this a clustered index? Let's look at some more of the queries. The next one is deceiving. Here's the query plan: You're thinking to yourself, what's the big deal? Well, what if I told you that this thing had 8,036,318 reads? I know, you're looking at skinny little pipes. Know why? Table variable. Estimated number of rows = 1. Actual number of rows? Well, I'm betting several more than one, considering it's read 8 MILLION pages off the disk in a single execution. We have a serious and real tuning candidate. Oh, and I missed this: it's loading the table variable from a user-defined function. Let me check, let me check... YES! A multi-statement table-valued user-defined function. And another tuning opportunity.
    This one's a beauty, seriously. Did I also mention that they're doing a hash against all the columns in the physical table? I'm sure that won't lead to scans of a 500,000-row table. No, not at all. OK, I lied. Of course it does. At least it's on the top part of the Loop, which means the scan is only executed once. I just did a cursory check on the next several poor performers: all calling the UDF. I think I found a big tuning opportunity. At this point, I'm typing up internal emails for the company. Someone just had their baby called ugly. In addition to a series of suggested changes that we need to implement, I'm also apologizing for being such an unkind monster as to question whether that third eye & those flippers belong on such an otherwise lovely child.
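    For readers who want the general shape of the fix hinted at above, here is a hedged sketch (hypothetical object names, not the actual schema under test) of rewriting a multi-statement table-valued UDF as an inline one, so the optimizer can expand it into the calling plan and use real row estimates instead of the fixed single-row guess a table variable gets:
      -- Multi-statement version: the table variable is a black box,
      -- estimated at 1 row regardless of the real data volume.
      CREATE FUNCTION dbo.GetCustomerOrders (@CustomerID INT)
      RETURNS @Result TABLE (OrderID INT, OrderTotal MONEY)
      AS
      BEGIN
          INSERT INTO @Result (OrderID, OrderTotal)
          SELECT OrderID, OrderTotal
          FROM   dbo.Orders
          WHERE  CustomerID = @CustomerID;
          RETURN;
      END;
      GO
      -- Inline version: expanded into the outer query like a view,
      -- so real statistics drive the join and scan choices.
      CREATE FUNCTION dbo.GetCustomerOrders_Inline (@CustomerID INT)
      RETURNS TABLE
      AS
      RETURN
      (
          SELECT OrderID, OrderTotal
          FROM   dbo.Orders
          WHERE  CustomerID = @CustomerID
      );
      GO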

    Read the article

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You’ll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you’ll still find websites that recommend you do these things.
    Erase Cache Files Regularly to Speed Things Up
    You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser’s cache won’t actually speed things up. It will slow down your web browsing, as your web browser is forced to redownload the files all over again and reconstruct the cache you regularly delete. If you’ve installed CCleaner or a similar program and run it every day with the default settings, you’re actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.
    Enable ReadyBoost to Speed Up Modern PCs
    Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless — ReadyBoost won’t actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM — think 512 MB — ReadyBoost may help a bit. Otherwise, don’t bother.
    Open the Disk Defragmenter and Manually Defragment
    On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you’re still opening the Disk Defragmenter every week and clicking the Defragment button, you don’t need to do this — Windows is doing it for you unless you’ve told it not to run on a schedule. Modern computers with solid-state drives don’t have to be defragmented at all.
    Disable Your Pagefile to Increase Performance
    When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn’t have much memory and it’s running slow, it’s probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can’t be trusted to manage a pagefile and won’t use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it’s true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit.
    Enable CPU Cores in MSConfig
    Some websites claim that Windows may not be using all of your CPU cores, or that you can speed up your boot time by increasing the number of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the number of cores used. In reality, Windows always uses the maximum number of processor cores your CPU has. (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It’s just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to only use a single core on a multi-core system — but all it can do is restrict the number of cores used.
    Clean Your Prefetch to Increase Startup Speed
    Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache — when you open an application, Windows checks the Prefetch folder, looks at the application’s .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue you’ll build up useless files as you uninstall programs and .pf files will be left over. In reality, Windows only loads the data in these .pf files when you launch the associated application, and only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won’t be preloaded, but Windows would also have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there’s no reason to do that. Let Windows manage Prefetch on its own.
    Disable QoS to Increase Network Bandwidth
    Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation would work smoothly, even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.
    Set DisablePagingExecutive to Make Windows Faster
    The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code will be forced to stay resident in memory. Once again, some people believe that Windows isn’t smart enough to manage the pagefile on its own and believe that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won’t really do anything. If you have little memory, changing this setting may force Windows to push programs you’re using to the page file rather than push unused system files there — this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance.
    Process Idle Tasks to Free Memory
    Windows does things, such as creating scheduled system restore points, when you step away from your computer. It waits until your computer is “idle” so it won’t slow your computer and waste your time while you’re using it. Running the “Rundll32.exe advapi32.dll,ProcessIdleTasks” command forces Windows to perform all of these tasks while you’re using the computer. This is completely pointless and won’t help free memory or anything like that — all you’re doing is forcing Windows to slow your computer down while you’re using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don’t start running and interfere with the benchmark.
    Delay or Disable Windows Services
    There’s no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory — think Windows Vista and those “Vista Capable” PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista and computers have more than enough memory, so you won’t see any improvements from disabling system services included with Windows. Some people argue for not disabling services, however — they recommend setting services from “Automatic” to “Automatic (Delayed Start)”. By default, the Delayed Start option just starts services two minutes after the last “Automatic” service starts. Setting services to Delayed Start won’t really speed up your boot time, as the services will still need to start — in fact, it may lengthen the time it takes to get a usable desktop, as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible will result in a better experience. The “Delayed Start” feature is primarily useful for system administrators who need to ensure a specific service starts later than another service.
    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look — the change is probably useless. Want to actually speed up your PC? Try disabling useless startup programs that run on boot, increasing your boot time and consuming memory in the background. This is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.
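    If you want to see what actually runs at startup before you start disabling things, a quick inventory is easy to get from PowerShell. This is a read-only sketch (it changes nothing) using the standard Win32_StartupCommand WMI class:
      # List startup entries along with where they are registered (Run key, Startup folder, etc.)
      Get-WmiObject Win32_StartupCommand |
          Select-Object Name, Command, Location |
          Sort-Object Location |
          Format-Table -AutoSize
    From there, msconfig (or Task Manager's Startup tab on Windows 8) is the usual place to disable the entries you don't need.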

    Read the article

  • Server Memory with Magento

    - by Mohamed Elgharabawy
    I have a cloud server with the following specifications:
      2 vCPUs
      4 GB RAM
      160 GB disk space
      Network: 400 Mb/s
      System image: Ubuntu 12.04 LTS
    I am only running Magento CE 1.7.0.2 on this server, nothing else. Usually the server has a loading time of 4-5 seconds. Recently this has climbed to over 30 seconds, and sometimes the server just goes away and I get HTTP error reports to my email stating that HTTP requests took more than 20000ms. Running the top command and sorting the processes returns the following:
      top - 15:29:07 up 3:40, 1 user, load average: 28.59, 25.95, 22.91
      Tasks: 112 total, 30 running, 82 sleeping, 0 stopped, 0 zombie
      Cpu(s): 90.2%us, 9.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.2%st
        PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
      31901 www-data 20  0  360m  71m 5840 R  7   1.8  1:39.51 apache2
      32084 www-data 20  0  362m  72m 5548 R  7   1.8  1:31.56 apache2
      32089 www-data 20  0  348m  59m 5660 R  7   1.5  1:41.74 apache2
      32295 www-data 20  0  343m  54m 5532 R  7   1.4  2:00.78 apache2
      32303 www-data 20  0  354m  65m 5260 R  7   1.6  1:38.76 apache2
      32304 www-data 20  0  346m  56m 5544 R  7   1.4  1:41.26 apache2
      32305 www-data 20  0  348m  59m 5640 R  7   1.5  1:50.11 apache2
      32291 www-data 20  0  358m  69m 5256 R  6   1.7  1:44.26 apache2
      32517 www-data 20  0  345m  56m 5532 R  6   1.4  1:45.56 apache2
      30473 www-data 20  0  355m  66m 5680 R  6   1.7  2:00.05 apache2
      32093 www-data 20  0  352m  63m 5848 R  6   1.6  1:53.23 apache2
      32302 www-data 20  0  345m  56m 5512 R  6   1.4  1:55.87 apache2
      32433 www-data 20  0  346m  57m 5500 S  6   1.4  1:31.58 apache2
      32638 www-data 20  0  354m  65m 5508 R  6   1.6  1:36.59 apache2
      32230 www-data 20  0  347m  57m 5524 R  6   1.4  1:33.96 apache2
      32231 www-data 20  0  355m  66m 5512 R  6   1.7  1:37.47 apache2
      32233 www-data 20  0  354m  64m 6032 R  6   1.6  1:59.74 apache2
      32300 www-data 20  0  355m  66m 5672 R  6   1.7  1:43.76 apache2
      32510 www-data 20  0  347m  58m 5512 R  6   1.5  1:42.54 apache2
      32521 www-data 20  0  348m  59m 5508 R  6   1.5  1:47.99 apache2
      32639 www-data 20  0  344m  55m 5512 R  6   1.4  1:34.25 apache2
      32083 www-data 20  0  345m  56m 5696 R  5   1.4  1:59.42 apache2
      32085 www-data 20  0  347m  58m 5692 R  5   1.5  1:42.29 apache2
      32293 www-data 20  0  353m  64m 5676 R  5   1.6  1:52.73 apache2
      32301 www-data 20  0  348m  59m 5564 R  5   1.5  1:49.63 apache2
      32528 www-data 20  0  351m  62m 5520 R  5   1.6  1:36.11 apache2
      31523 mysql    20  0 3460m 576m 8288 S  5  14.4  2:06.91 mysqld
      32002 www-data 20  0  345m  55m 5512 R  5   1.4  2:01.88 apache2
      32080 www-data 20  0  357m  68m 5512 S  5   1.7  1:31.30 apache2
      32163 www-data 20  0  347m  58m 5512 S  5   1.5  1:58.68 apache2
      32509 www-data 20  0  345m  56m 5504 R  5   1.4  1:49.54 apache2
      32306 www-data 20  0  358m  68m 5504 S  4   1.7  1:53.29 apache2
      32165 www-data 20  0  344m  55m 5524 S  4   1.4  1:40.71 apache2
      32640 www-data 20  0  345m  56m 5528 R  4   1.4  1:36.49 apache2
      31888 www-data 20  0  359m  70m 5664 R  4   1.8  1:57.07 apache2
      32511 www-data 20  0  357m  67m 5512 S  3   1.7  1:47.00 apache2
      32054 www-data 20  0  357m  68m 5660 S  2   1.7  1:53.10 apache2
          1 root     20  0 24452 2276 1232 S  0   0.1  0:01.58 init
    Moreover, running free -m returns the following:
                   total       used       free     shared    buffers     cached
      Mem:          4003       3919         83          0        118        901
      -/+ buffers/cache:       2899       1103
      Swap:            0          0          0
    To investigate this further, I installed Apache Buddy; it recommended that I reduce the MaxClients setting, which I did. I also installed MySQLTuner, and it suggests that I set innodb_buffer_pool_size to >= 3.0G. However, I cannot do that, since the whole machine only has 4 GB of memory.
    Here is the output from Apache Buddy:
      ### GENERAL REPORT ###
      Settings considered for this report:
      Your server's physical RAM: 4003MB
      Apache's MaxClients directive: 40
      Apache MPM Model: prefork
      Largest Apache process (by memory): 73.77MB
      [ OK ] Your MaxClients setting is within an acceptable range.
      Max potential memory usage: 2950.8 MB
      Percentage of RAM allocated to Apache: 73.72 %
    And this is the output of MySQLTuner:
      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 47m 22s (675K q [237.552 qps], 12K conn, TX: 1B, RX: 300M)
      [--] Reads / Writes: 45% / 55%
      [--] Total buffers: 2.1G global + 2.7M per thread (151 max threads)
      [OK] Maximum possible memory usage: 2.5G (64% of installed RAM)
      [OK] Slow queries: 0% (0/675K)
      [OK] Highest usage of available connections: 26% (40/151)
      [OK] Key buffer size / total MyISAM indexes: 36.0M/18.7M
      [OK] Key buffer hit rate: 100.0% (245K cached / 105 reads)
      [OK] Query cache efficiency: 92.5% (500K cached / 541K selects)
      [!!] Query cache prunes per day: 302886
      [OK] Sorts requiring temporary tables: 0% (1 temp sorts / 15K sorts)
      [!!] Joins performed without indexes: 12135
      [OK] Temporary tables created on disk: 25% (8K on disk / 32K total)
      [OK] Thread cache hit rate: 90% (1K created / 12K connections)
      [!!] Table cache hit rate: 17% (400 open / 2K opened)
      [OK] Open file limit used: 12% (123/1K)
      [OK] Table locks acquired immediately: 100% (196K immediate / 196K locks)
      [!!] InnoDB buffer pool / data size: 2.0G/3.5G
      [OK] InnoDB log waits: 0
      -------- Recommendations -----------------------------------------------------
      General recommendations:
          Run OPTIMIZE TABLE to defragment tables for better performance
          MySQL started within last 24 hours - recommendations may be inaccurate
          Enable the slow query log to troubleshoot bad queries
          Adjust your join queries to always utilize indexes
          Increase table_cache gradually to avoid file descriptor limits
          Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
      Variables to adjust:
          query_cache_size (> 64M)
          join_buffer_size (> 128.0K, or always use indexes with joins)
          table_cache (> 400)
          innodb_buffer_pool_size (>= 3G)
    Last but not least, the server still has more than 60% of its disk space free. Now, based on the above, I have a few questions:
    - Are these numbers normal? Do they make sense?
    - Do I need to upgrade the server?
    - If I don't need to upgrade and my configuration is not correct, how do I optimize it?
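    For what it's worth, one way to reason about this is to keep Apache's worst-case footprint plus MySQL's buffers below physical RAM; with no swap configured there is no safety net. The sketch below is illustrative only; the numbers are assumptions derived from the reports above (roughly 74 MB per Apache child, 4 GB total), not tested values:
      # /etc/apache2/apache2.conf (prefork MPM): cap worst-case Apache memory.
      # 20 children x ~74 MB is about 1.5 GB, leaving ~2.5 GB for MySQL and the OS.
      <IfModule mpm_prefork_module>
          MaxClients          20
          MaxRequestsPerChild 1000
      </IfModule>

      # /etc/mysql/my.cnf: keep InnoDB's pool inside the remaining budget and
      # follow MySQLTuner's table cache / query cache hints.
      [mysqld]
      innodb_buffer_pool_size = 2G
      query_cache_size        = 96M
      table_open_cache        = 800   # reported as "table_cache" by MySQLTuner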

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let’s take a simple query: “What is the most expensive air-mail order we have received to date?”
      SELECT Max(lo_ordtotalprice) most_expensive_order
      FROM   lineorder
      WHERE  lo_shipmode = 5;
    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, “IN MEMORY”, in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by “may use”? There are a small number of cases where we won’t use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why, for analytical queries, it’s faster to access the data in the new column format than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.
    1. Access only the column data needed
    The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.
    2. Scan and filter data in its compressed format
    When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.
    3. Prune out any unnecessary data within each column
    The fastest read you can execute is the read you don’t do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.
    4. Use SIMD to apply filter predicates
    For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache.
    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:
      column display_name format a30
      SELECT display_name
      FROM   v$statname
      WHERE  display_name LIKE 'IM%';
    If we check the session statistics after we execute our query, the results are as follows:
      SELECT Max(lo_ordtotalprice) most_expensive_order
      FROM   lineorder
      WHERE  lo_shipmode = 5;

      SELECT display_name
      FROM   v$statname
      WHERE  display_name IN ('IM scan CUs columns accessed',
                              'IM scan segments minmax eligible',
                              'IM scan CUs pruned');
    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
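    Note that v$statname on its own only lists the statistic names; to see the actual per-session values you need to join it to v$mystat on statistic#. A minimal sketch, using only the standard dynamic performance views named above:
      column display_name format a40
      SELECT n.display_name, s.value
      FROM   v$mystat s
             JOIN v$statname n ON n.statistic# = s.statistic#
      WHERE  n.display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');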

    Read the article

  • ddclient - won't update: Invalid Value for keyword 'ip' = ''

    - by stueng
    /etc/ddclient.conf:
      use=web, web=checkip.dyndns.org/, web-skip='IP Address'
      protocol=easydns
      ssl=yes
      server=members.easydns.com
      login=stueng
      password='****'
      home.***.**
    /var/log/syslog:
      Jun 4 13:02:34 XBMCuntu ddclient[10554]: WARNING: file /var/cache/ddclient/ddclient.cache, line 3: Invalid Value for keyword 'ip' = ''
      Jun 4 13:02:34 XBMCuntu ddclient[10554]: WARNING: skipping update of home.***.** from <nothing> to 90.193.*.*.
      Jun 4 13:02:34 XBMCuntu ddclient[10554]: WARNING: last updated <never> but last attempt on Mon Jun 4 13:01:57 2012 failed.
      Jun 4 13:02:34 XBMCuntu ddclient[10554]: WARNING: Wait at least 5 minutes between update attempts.
    Help?
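    Since the warning points at a stale entry in /var/cache/ddclient/ddclient.cache, a common first step (offered here as a suggestion, not a guaranteed fix) is to clear the cache file so ddclient rebuilds it on its next run:
      sudo service ddclient stop
      sudo rm /var/cache/ddclient/ddclient.cache
      sudo service ddclient start
      # then watch the log for the next update attempt
      tail -f /var/log/syslog | grep ddclient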

    Read the article

  • Chromium/Chrome tabs crash on Ubuntu/Edubuntu 12.04.3

    - by Joakim Waern
    For some reason the tabs of Chromium and Chrome have started to crash ("He's dead, Jim") when running on Ubuntu or Edubuntu 12.04.3. It seems to happen only when I log in. I tried the browsers on Ubuntu 13.10 and everything worked fine. Any suggestions? P.S. The browser doesn't crash, just the tabs, and I can open new ones (but they also crash after a while). Here's the output of the free command in the terminal after a tab crash:
                   total       used       free     shared    buffers     cached
      Mem:       2041116    1722740     318376          0       2868     267256
      -/+ buffers/cache:    1452616     588500
      Swap:      2085884        104    2085780
    Compared to the output before the crash:
                   total       used       free     shared    buffers     cached
      Mem:       2041116    1967108      74008          0      14632     253036
      -/+ buffers/cache:    1699440     341676
      Swap:      2085884         40    2085844
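    Given that the "before" snapshot shows the machine down to about 74 MB free, one thing worth checking (a diagnostic suggestion, not a confirmed cause) is whether the kernel's out-of-memory killer is reaping the renderer processes; each tab runs as its own process, which would match the "tabs die but the browser survives" symptom:
      # Look for OOM-killer activity around the time a tab dies
      dmesg | grep -i -E 'oom|killed process'
    If OOM entries show up, trimming what starts at login would be the next experiment.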

    Read the article

  • CodePlex Daily Summary for Wednesday, February 29, 2012

    Popular Releases:
    - ZXing.Net: ZXing.Net 0.4.0.0: sync with rev. 2196 of the java version; important fix for RGBLuminanceSource generating barcode bitmaps; Windows Phone demo client (only tested with emulator, because I don't have a Windows Phone); barcode generation support for Windows Forms demo client; webcam support for Windows Forms demo client
    - Orchard Project: Orchard 1.4: Please read our release notes for Orchard 1.4: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-Notes
    - FluentData - Micro ORM with a fluent API that makes it simple to query a database: FluentData version 1.2: New features: QueryValues method; added support for automapping to enumerations (both int and string are supported). Fixed 2 reported issues.
    - NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.15: 3.6.0.15 28-Feb-2012. Fix: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state. Work Item 10435: http://netsqlazman.codeplex.com/workitem/10435. Fix: Made StorageCache thread safe. Thanks to tangrl. Fix: Members property of SqlAzManApplicationGroup is not functioning. Thanks to tangrl. Work Item 10267: http://netsqlazman.codeplex.com/workitem/10267. Fix: Indexer are making database calls. Thanks to t...
    - SCCM Client Actions Tool: Client Actions Tool v1.1: SCCM Client Actions Tool v1.1 is the latest version. It comes with the following changes since the last version: added stop button to stop the ongoing process; added action "Query update status"; added option "saveOnlineComputers" in config.ini to enable saving the list of online computers from the last session; default value for "LatestClientVersion" set to SP2 R3 (4.00.6487.2157); wuauserv service manual startup mode is considered healthy on Windows 7; errors are now suppressed in checkReleases...
    - Document.Editor: 2012.1: What's new for Document.Editor 2012.1: improved Recent Documents list; improved Insert Shape; improved dialogs; minor bug fixes, improvements and speed-ups
    - Kinect PowerPoint Control: Kinect PowerPoint Control v1.1: Updated for Kinect SDK 1.0.
    - SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.8: API updates: SOLID Extract Method for Archives (7Zip and RAR); ExtractAllEntries method on Archive classes will extract archives as a streaming file, which can offer better 7Zip extraction performance if any of the entries are solid; the IsSolid method on 7Zip archives will return true if any are solid; IExtractionListener was removed in favor of events; unit tests show an example. Bug fixes: PPMd passes tests plus other fixes (thanks Pavel); Zip used to always write a Post Descri...
    - Social Network Importer for NodeXL: SocialNetImporter (v.1.3): This new version includes: download new networks for Facebook fan pages; new options for downloading more posts; bug fixes. To use the new graph data provider, do the following: unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e. "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns"), open the NodeXL template, and you can access the new importer from the "Import" menu
    - ASP.NET REST Services Framework: Release 1.1 - Standard version: Beginning from v1.1 the REST-services Framework is compatible with the ASP.NET Routing model as well as the CRUD (Create, Read, Update, and Delete) principle. These two are often important when building REST API functionality within your application. It also includes the ability to apply Filters to a class to target all WebRest methods, as well as some performance enhancements. The new version includes a Metadata Explorer providing the ability to explore the existing services, which becomes essential as the number ...
    - SQL Live Monitor: SQL Live Monitor 1.31: A quick fix to make this version work with SQL 2012. Version 2 already has 2012 working, but I am still developing the UI in version 2, so this is just an interim fix to allow users to monitor SQL 2012.
    - DotNet.Highcharts: DotNet.Highcharts 1.1 with Examples: Fixed small bug in JsonSerializer about numbers represented as strings. Fixed Issue 310: decimal values don't work. Fixed Issue 345: Disable Animation. Refactored Highcharts class. Implemented Issue 341: More charts on one page. Added new class Container which can combine and display multiple charts. Usage: new Container(new[] { chart1, chart2, chart3, chart4 }). Implemented Feature 302: Inside an UpdatePanel - added method (InFunction) which creates the Highchart inside JavaScript f...
    - Content Slider Module for DotNetNuke: 01.02.00: This release has the following updates and new features. Feature: one-click enabling of Pager setting. Feature: cache sliders for performance. Feature: configurable cache setting. Enhancement: transitions can be selected. Bug: secure folder images not viewable. Bug: sliders disappear on postback. Bug: remote images cause error. Bug: deleted images cause error. System requirements: DotNetNuke v06.00.00 or newer; .Net Framework v3.5 SP1 or newer; SQL Server 2005 or newer
    - Image Resizer for Windows: Image Resizer 3 Preview 3: Here is yet another iteration toward what will eventually become Image Resizer 3. This release is stable. However, I'm calling it a preview since there are still many features I'd like to add before calling it complete. Updated on February 28 to fix an issue with installing on multi-user machines. As usual, here is my progress report. Done: Preview 3 Fix: 3206 3076 3077 5688 Fix: 7420 Fix: 7527 Fix: 7576 7612; Preview 2 6308 6309 Fix: 7339 Fix: 7357; Preview 1 UI...
    - Finestra Virtual Desktops: 2.5.4500: This is a bug fix release for version 2.5. It fixes several things and adds a couple of minor features. See the 2.5 release notes for more information on the major new features in that version. Important: if Finestra crashes on startup for you, you must install the Visual C++ 2010 runtime from http://www.microsoft.com/download/en/details.aspx?id=5555. Fixes a bug with window animations not refreshing the screen on XP and with DWM off. Fixes a bug with crashing on XP due to a bug in t...
    - AutoExpandOver - Show popup on mouseover, focus, click in Silverlight: AutoExpandOver 2.1: Many fixes. All leaks are gone. Features added.
    - FileSquirrel: FileSquirrel Alpha 1.2.1 32bit and x64: FileSquirrel Alpha 1.2.1
    - Publishing SCSM Work Item to a Sharepoint Calendar: PublishWI command line tool (Updated): Updated for SCSM 2012 (RC). Usage: PublishWI.exe WIID URL CalendarName. WIID: ID of the Work Item to publish (for example, CR3333). URL: URL of the SharePoint site, such as http://www.sharepoint.com. CalendarName: the name of the SharePoint calendar to publish the WI to.
    - HttpRider - Tool for Web Site Performance and Stress Tests: HttpRider 1.0: Please let me know any issues you face. Thanks
    - Media Companion: MC 3.432b Release: General: now remembers window location; catching a few more exceptions when an image is blank. TV: a couple of UI tweaks. Movies: fixed the actor name displaying HTML; fixed crash when using Save files as "movie.nfo", "movie.tbn", & "fanart.jpg"; new CSV template for HTML output function; added <createdate> tag for HTML output; a couple of UI tweaks. Known issues: multi-episodes are not handled correctly in MC. The created nfo is valid, but they are not displayed in MC correctly & saving the...
    New Projects:
    - .NET Implementation of Extensions for Financial Services: NXFS is planned to be a compliant .Net implementation of CEN/XFS (currently version 3.20) to overcome some deficits of native implementations, such as not supporting highly productive programming languages (like C#) and supporting only Windows XP and the x86 platform.
    - BigCoder WebMaster Tools: BigCoder WebMaster Tools
    - BigCoder Whois Program: BigCoder Whois Program
    - biscuitCMS: biscuitCMS makes it easier for <target user group> to <activity>. You'll no longer have to <activity>. It's developed in <programming language>.
    - BridgeSeismic: BridgeSeismic
    - Data Normilizer: Decomposer for SQL fact tables, for BI projects
    - Download Indexed Cache: "Download Indexed Cache" implements the Bing API Version 2 to retrieve content indexed within the Bing Cache, to support the "Search Engine Reconnaissance" section of the OWASP Testing Guide v3.
    - FITSExplorer: FITS Explorer is designed to allow astronomers to quickly and easily browse and preview the image and metadata stored in FITS files. The application was developed using primarily WPF and C#, with some low-level data manipulation routines written in C++ for high performance.
    - ICalSync: icalsync
    - League Manager: Our championship manager
    - Lemcube SISTRI: interfacing to the SIS Interoperability services.
    - LittlePluginLib: A small plugin system based on the .NET Framework 3.5
    - Log reading: Reading and parsing huge log files, downloading from an FTP server and uploading to an msql database.
    - Mail Aggregator Service: To address the problem of being 'spammed' by your own alert emails, this service was created to group/batch mail messages sent to the same 'to' address. It can integrate into QuickMon or even be used as a stand-alone tool to send out alert notifications.
    - MangoTicTacToe: Demo application of a simple Silverlight game for Windows Phone 7. This tic-tac-toe game features some of the most basic requirements for making a Silverlight game for Windows Phone 7.
    - MonkeyButt: Web alternatives to commercial software like Facebook and Google.
    - PizzaSoft: PizzaSoft
    - project: project demo
    - SharePoint 2010 Automatically Generated Solution Demo: In this demo project I show how to generate Sandboxed Solutions by code at runtime.
    - SoftService: Software Service
    - SPAdmin: SharePoint WarmUp Tool
    - SSASNet: SSASNet
    - SSIS Data Masker: An SSIS Data Flow transformation component to provide basic data masking capabilities.
    - UpiGppf: This is a high school project
    - XMPP/Media Library for .NET and Windows Phone 7.5: .NET libraries for XMPP, TLS, RTP, STUN, SOCKS and more for Windows and Windows Phone.

    Read the article

  • Internet stops working after heavy downloading, video/audio streaming etc

    - by Kuba Szwed
    As mentioned in the title, the Internet stops working on my PC after heavy downloading, video/audio streaming, etc. There are no errors and no disconnections; simply, after some time (a certain amount of data downloaded), I can't download anything more. If I try using ping afterwards, nothing happens. If ping is running simultaneously with streaming/downloading, I get some correct responses and then it keeps showing an error. What helps is re-plugging my Pentagram USB wifi card, but I hope there is a better solution. Edit: One more thing: my friend who works in IT suggested that it might have something to do with a cache (the DNS cache? I don't remember him specifying) getting filled when it should be emptied automatically.
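    A quick way to test the cache theory the next time the connection dies - assuming a Windows PC (the post doesn't say, so this is an assumption) - is to flush the DNS resolver cache and cycle the adapter from an elevated command prompt instead of re-plugging the card:

        :: The adapter name below is a placeholder - take yours from "netsh interface show interface"
        ipconfig /flushdns
        netsh interface set interface "Wireless Network Connection" admin=disabled
        netsh interface set interface "Wireless Network Connection" admin=enabled

    If flushing alone restores traffic, the DNS-cache explanation gains weight; if only the disable/enable cycle helps, the card or its driver is the likelier culprit.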

    Read the article

  • Why don't we use dynamic (server-side generated) CSS?

    - by ern0
    While server-side generated HTML is trivial (and it was the only way to make dynamic webpages before AJAX), server-side generated CSS is not. Actually, I've never seen it. There are CSS compilers, but they generate CSS files which are then used as static assets. Technically, it requires no special libraries: the HTML link tag should reference the PHP(/ASP/whatever) templater script instead of the static CSS file, and the script should send out a CSS content-type header - that's all. Does it have cache problems? I don't think so; the script can send out no-cache etc. headers. Is it a problem for designers? No, they would edit the CSS template (just as they edit the HTML template). Why don't we use dynamic CSS generators? Or if there are any, please let me know.
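    For what it's worth, a minimal sketch of the idea - a hypothetical css.php templater script standing in for a static stylesheet (the file name and theme parameter are made up for illustration):

        <?php
        // css.php - referenced from HTML as: <link rel="stylesheet" href="css.php?theme=dark">
        header('Content-Type: text/css');   // the CSS content-type header mentioned above
        header('Cache-Control: no-cache');  // opt out of caching, as suggested above

        $theme = (isset($_GET['theme']) && $_GET['theme'] === 'dark') ? 'dark' : 'light';
        $bg = ($theme === 'dark') ? '#222222' : '#ffffff';
        $fg = ($theme === 'dark') ? '#eeeeee' : '#111111';

        echo "body { background: $bg; color: $fg; }\n";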

    Read the article

  • How to do a cacheable redirection?

    - by John Doe
    When users enter my website example.com, their "preferred" language is detected and they are redirected (using a 301 Moved Permanently redirection) to example.com/en/ (for English), example.com/it/ (for Italian), etc. It works perfectly, but when I analyzed my website with the Google Page Speed tool it gave me the following advice. "Many pages, especially mobile pages, redirect users to a different URL, for instance from www.example.com to m.example.com. Making this redirect cacheable by the user's browser can speed up page load times for repeat visitors to a site." And later it says: "We recommend using a 302 redirect with a cache lifetime of one day. The redirect should include a Vary: User-Agent header as well as a Cache-Control: private header." So my questions are: how can I do a "cacheable" redirection in PHP? Would the following be enough?

        header("HTTP/1.0 302 Moved Temporarily");
        header("Location: example.com/whatever");
        exit;
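    Taken literally, Page Speed's advice maps onto something like the following - a hedged sketch rather than a verified recipe (note that Location should carry an absolute URL, and the one-day max-age follows the quoted recommendation):

        <?php
        header("HTTP/1.1 302 Found");                    // 302 rather than 301, so it can expire
        header("Cache-Control: private, max-age=86400"); // cache lifetime of one day, per user
        header("Vary: User-Agent");                      // as recommended above
        header("Location: http://example.com/en/");      // absolute URL
        exit;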

    Read the article

  • Berkeley DB Java Edition 4.0.103 Available

    - by charles.lamb
    We'd like to let you know that JE 4.0.103 is now available at http://www.oracle.com/technology/software/products/berkeley-db/je/index.html. The patch release contains both small features and bug fixes, many of which were prompted by feedback on this forum. Some items to note:

    - New CacheMode values for more control over cache policies, and new statistics to enable better interpretation of caching behavior. These are just one initial part of our continuing work in progress to make JE caching more efficient.
    - Fixes for proper cache utilization calculations when using the -XX:+UseCompressedOops JVM option.
    - A variety of other bug fixes.

    There are no file format or API changes. As always, we encourage users to move promptly to this new release.

    Read the article

  • Why does flush dns often fail to work?

    - by Sharen Eayrs
    C:\Windows\system32>ipconfig /flushdns
    Windows IP Configuration
    Successfully flushed the DNS Resolver Cache.

    C:\Windows\system32>ping beautyadmired.com
    Pinging beautyadmired.com [xxx.45.62.2] with 32 bytes of data:
    Reply from xxx.45.62.2: bytes=32 time=253ms TTL=49
    Reply from xxx.45.62.2: bytes=32 time=249ms TTL=49
    Reply from xxx.45.62.2: bytes=32 time=242ms TTL=49
    Reply from xxx.45.62.2: bytes=32 time=258ms TTL=49

    Ping statistics for xxx.45.62.2:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 242ms, Maximum = 258ms, Average = 250ms

    My site should point to xx.73.42.27. I changed the name servers three hours ago, and it still points to xxx.45.62.2. What actually happens after we change a name server - what are we waiting for? I already flushed the DNS cache, so why does it still point to the wrong IP? Also, most other people, who never had it in their DNS cache, still go to the wrong IP.
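    One way to separate local caching from propagation is to query a specific resolver, or the new authoritative nameserver itself, directly; nslookup takes the server to query as an optional second argument (the nameserver hostname below is a placeholder for whatever the registrar lists):

        nslookup beautyadmired.com 8.8.8.8
        nslookup beautyadmired.com ns1.new-dns-provider.example

    If the second query already returns xx.73.42.27 while the first does not, the change has been published, and the old xxx.45.62.2 answer is simply a cached record waiting for its TTL to expire.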

    Read the article

  • Install Ubuntu on Mac OS X Mavericks, MacBook Air

    - by Unknown
    I was wondering if it's okay to install Ubuntu on my MacBook Air, and if it is, please let me know the procedure. I would prefer to do it WITHOUT using rEFInd (not sure what the name is). The following is my system specification.

    Hardware Overview:
      Model Name: MacBook Air
      Model Identifier: MacBookAir6,2
      Processor Name: Intel Core i5
      Processor Speed: 1.3 GHz
      Number of Processors: 1
      Total Number of Cores: 2
      L2 Cache (per Core): 256 KB
      L3 Cache: 3 MB
      Memory: 8 GB

    System Software Overview:
      System Version: OS X 10.9.2 (13C1021)
      Kernel Version: Darwin 13.1.0
      Boot Volume: Macintosh HD
      Boot Mode: Normal

    MacBook Air (13-inch, Mid 2013), OS X Mavericks (10.9.2)

    Read the article

  • alternative to environment variables

    - by tonyl7126
    The number of servers and the complexity of our application are growing, and we now have servers in different regions (hosted on AWS). Certain database operations require low latency, so we have put a database in each region (which is basically a user cache) to keep network latency low. The way the application server currently knows which user cache/database to call depends on an environment variable set on it. This has been working fine, but it seems hacky and not optimal. Is there any way for this to be done automatically? I was considering using a package like fping, pinging each database when the app server reloads (or caching the result the first time), and using the corresponding latencies to decide which database has the lowest latency for each app server. Not sure if this is the best idea, though.
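    A rough sketch of that probe in PHP - hostnames and port are placeholders, and it times a TCP connect rather than an ICMP ping (an assumption; it avoids needing fping or raw-socket privileges):

        <?php
        // Hypothetical per-region user-cache databases
        $candidates = array(
            'db-us-east.internal' => 3306,
            'db-eu-west.internal' => 3306,
        );

        $best = null;
        $bestMs = INF;
        foreach ($candidates as $host => $port) {
            $start = microtime(true);
            $conn = @fsockopen($host, $port, $errno, $errstr, 2); // 2-second timeout
            $elapsedMs = (microtime(true) - $start) * 1000;
            if ($conn !== false) {
                fclose($conn);
                if ($elapsedMs < $bestMs) {
                    $bestMs = $elapsedMs;
                    $best = $host;
                }
            }
        }
        // $best now names the lowest-latency reachable database;
        // cache it (file, APC, etc.) so the probe runs once per app-server reload.

    Running this at app-server start and caching the winner reproduces the behaviour of the environment variable without hand-setting it per region.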

    Read the article

  • Lesser Known NHibernate Session Methods

    - by Ricardo Peres
    The NHibernate ISession, the core of NHibernate usage, has some methods which are quite misunderstood and underused - to name a few, Merge, Persist, Replicate and SaveOrUpdateCopy. Their purpose is:

    - Merge: copies properties from a transient entity to an eventually loaded entity with the same id in the first level cache; if there is no loaded entity with the same id, one will be loaded and placed in the first level cache first; if using version, the transient entity must have the same version as in the database.
    - Persist: similar to Save or SaveOrUpdate; attaches a maybe-new entity to the session, but does not generate an INSERT or UPDATE immediately, and thus the entity does not get a database-generated id - it will only get it at flush time.
    - Replicate: copies an instance from one session to another session, perhaps from a different session factory.
    - SaveOrUpdateCopy: attaches a transient entity to the session and tries to save it.

    Here are some samples of their use.

        ISession session = ...;
        AuthorDetails existingDetails = session.Get<AuthorDetails>(1); //loads an entity and places it in the first level cache
        AuthorDetails detachedDetails = new AuthorDetails { ID = existingDetails.ID, Name = "Changed Name" }; //a detached entity with the same ID as the existing one
        Object mergedDetails = session.Merge(detachedDetails); //merges the Name property from the detached entity into the existing one; the detached entity does not get attached
        session.Flush(); //saves the existingDetails entity, since it is now dirty, due to the change in the Name property

        AuthorDetails details = ...;
        ISession session = ...;
        session.Persist(details); //details.ID is still 0
        session.Flush(); //saves the details entity now and fetches its id

        ISessionFactory factory1 = ...;
        ISessionFactory factory2 = ...;
        ISession session1 = factory1.OpenSession();
        ISession session2 = factory2.OpenSession();
        AuthorDetails existingDetails = session1.Get<AuthorDetails>(1); //loads an entity
        session2.Replicate(existingDetails, ReplicationMode.Overwrite); //saves it into another session, overwriting any possibly existing one with the same id; other options are Ignore, where any existing record with the same id is left untouched, Exception, where an exception is thrown if there is a record with the same id, and LatestVersion, where the latest version wins

    Read the article

  • More on PHP and Oracle 11gR2 Improvements to Client Result Caching

    - by christopher.jones
    Oracle 11.2 brought several improvements to Client Result Caching. CRC is a way for the results of queries to be cached in the database client process for reuse. In an Oracle OpenWorld presentation, "Best Practices for Developing Performant Applications", my colleague Luxi Chidambaran had a (non-PHP generated) graph for the Niles benchmark that shows a DB CPU reduction of up to 600% and response times up to 22% faster when using CRC.

    Sometimes CRC is called the "Consistent Client Cache" because Oracle automatically invalidates the cache if table data is changed. This makes it easy to use without needing application logic rewrites. There are a few simple database settings to turn on and tune CRC, so management is also easy.

    PHP OCI8, as a "client" of the database, can use CRC. The cache is per-process, so plan carefully before caching large data sets. Tables that are candidates for caching are look-up tables where the network transfer cost dominates.

    CRC is really easy in 11.2 - I'll get to that in a moment. It was also pretty easy in Oracle 11.1, but it needed some tiny application changes. In PHP it was used like:

        $s = oci_parse($c, "select /*+ result_cache */ * from employees");
        oci_execute($s, OCI_NO_AUTO_COMMIT); // Use OCI_DEFAULT in OCI8 <= 1.3
        oci_fetch_all($s, $res);

    I blogged about this in the past. The query had to include a specific hint that you wanted the results cached, and you needed to turn off auto-committing during execution, either with the OCI_DEFAULT flag or its new, better-named alias OCI_NO_AUTO_COMMIT. The no-commit flag rule didn't seem reasonable to me, because most people wouldn't be specific about the commit state for a query.

    Now in Oracle 11.2, DBAs can nominate tables for caching, either with CREATE TABLE or ALTER TABLE. That means you don't need the query hint anymore. As well, the no-commit flag requirement has been lifted. Your code can now look like:

        $s = oci_parse($c, "select * from employees");
        oci_execute($s);
        oci_fetch_all($s, $res);

    Since your code probably already looks like this, your DBA can find the top queries in the database and simply tune the system by turning on CRC in the database and issuing an ALTER TABLE statement for candidate tables. Voila.

    Another CRC improvement in Oracle 11.2 is that it works with DRCP connection pooling. There is some fine print about what is and isn't cached; check the Oracle manuals for details. If you're using 11.1 or non-DRCP "dedicated servers", then make sure you use oci_pconnect() persistent connections. Also, in PHP don't bind strings in the query, although binding as SQLT_INT is OK.
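    For reference, the 11.2 table nomination mentioned above looks something like this - shown against the employees table from the snippets, so treat it as a sketch and check the Oracle documentation for the full options:

        ALTER TABLE employees RESULT_CACHE (MODE FORCE);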

    Read the article

  • Problem in Loading certain Websites in Ubuntu 11.10

    - by Cody
    I have a DSL connection in Ubuntu 11.04 and am having problems loading content from certain websites, mostly ones providing CDN services. I have tried every suggestion from "problem loading internet pages", as I have the same problem. I have also tried clearing the DNS cache and browser cache, and tried using the Google DNS servers 8.8.8.8 and 8.8.4.4, but nothing seems to work. When I browse the same websites in Windows XP, there is no problem. This problem came up only a few days ago and affects every browser (Chrome, Firefox and Opera) in Ubuntu 11.04. Thanks in advance.

    Read the article

  • Timeout when trying to install ubuntu 12.04

    - by Oskars
    When I click on "Try Ubuntu" after booting it from a USB, the screen goes black and a lot of text comes out. But after a while it says "timeout: killing /blablabalba" or something like that, and it just keeps on saying that. What have I done wrong? <.< My computer is an Acer ASPIRE M5201, if that is of any importance. Edit: I noticed that before it says "timeout" it says something about not supporting "DNS" or "Asn" or something like that. I'm not sure if it was exactly DNS and Asn, but does this mean that I can't run Ubuntu on this computer? Edit 2: The exact strings are:

        [sdg] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
        timeout: killing '/sbin/modprobe -bv pci:v00001002d00' (and more numbers which I didn't catch)

    Read the article

  • segmented controls mangled during initial transition animation

    - by dLux
    Greetings and salutations, folks. I'm relatively new to Objective-C and iPhone programming, so bear with me if I've overlooked something obvious. I created a simple app to play with the different transition animations, setting up a couple of segmented controls and a slider: (Flip/Curl), (left/right) | (up/down), (EaseInOut/EaseIn/EaseOut/Linear). I created a view controller class, and the super view controller switches between 2 instances of the subclass. As you can see from the following image, the first time switching to the 2nd instance, the segmented controls are mangled while the animation is occurring; I'd guess they haven't had enough time to draw themselves completely: http://img689.imageshack.us/img689/2320/mangledbuttonsduringtra.png

    They're fine once the animation is done, and on any subsequent switches. If I specify cache:NO in the setAnimationTransition it helps, but there still seems to be some sort of progressive reveal of the text in the segmented controls; they still don't seem to be pre-rendered or initialized properly (and surely there's a way to do this while caching the view being transitioned to, since in this case the view isn't changing and should be cacheable). I'm building my code based on a couple of tutorials from a book, so I updated didReceiveMemoryWarning to set the instanced view controllers to nil; when I invoke a memory warning in the simulator, I assume it's purging the other view, and it acts like a first transition after loading - the view being transitioned to appears just like the image above.

    I guess it can't hurt to include the code (sorry if it's considered spamming). This is basically half of it, with a similar chunk following in an else statement for the case where the 2nd side is present, switching back to the 1st:

        - (IBAction)switchViews:(id)sender
        {
            [UIView beginAnimations:@"Transition Animation" context:nil];
            if (self.sideBViewController.view.superview == nil) // sideA is active, sideB is coming
            {
                if (self.sideBViewController == nil)
                {
                    SideAViewController *sBController = [[SideAViewController alloc] initWithNibName:@"SideAViewController" bundle:nil];
                    self.sideBViewController = sBController;
                    [sBController release];
                }
                [UIView setAnimationDuration:sideAViewController.transitionDurationSlider.value];
                if ([sideAViewController.transitionAnimation selectedSegmentIndex] == 0)
                {
                    // flip: 0 == left, 1 == right
                    if ([sideAViewController.flipDirection selectedSegmentIndex] == 0)
                        [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft forView:self.view cache:YES];
                    else
                        [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromRight forView:self.view cache:YES];
                }
                else
                {
                    // curl: 0 == up, 1 == down
                    if ([sideAViewController.curlDirection selectedSegmentIndex] == 0)
                        [UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:self.view cache:YES];
                    else
                        [UIView setAnimationTransition:UIViewAnimationTransitionCurlDown forView:self.view cache:YES];
                }
                if ([sideAViewController.animationCurve selectedSegmentIndex] == 0)
                    [UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
                else if ([sideAViewController.animationCurve selectedSegmentIndex] == 1)
                    [UIView setAnimationCurve:UIViewAnimationCurveEaseIn];
                else if ([sideAViewController.animationCurve selectedSegmentIndex] == 2)
                    [UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
                else if ([sideAViewController.animationCurve selectedSegmentIndex] == 3)
                    [UIView setAnimationCurve:UIViewAnimationCurveLinear];
                [sideBViewController viewWillAppear:YES];
                [sideAViewController viewWillDisappear:YES];
                [sideAViewController.view removeFromSuperview];
                [self.view insertSubview:sideBViewController.view atIndex:0];
                [sideBViewController viewDidAppear:YES];
                [sideAViewController viewDidDisappear:YES];
            }

    Any other tips or pointers about writing good clean code are also appreciated; I realize I still have a lot to learn. Thank you for your time. -- d

    Read the article

< Previous Page | 132 133 134 135 136 137 138 139 140 141 142 143  | Next Page >