Search Results

Search found 19555 results on 783 pages for 'job performance'.

Page 105 of 783

  • Will disabling hyperthreading improve performance on our SQL Server install

    - by Sam Saffron
    Related to: Current wisdom on SQL Server and Hyperthreading. Recently we upgraded our Windows 2008 R2 database server from an X5470 to an X5560. The theory is that both CPUs have very similar performance; if anything, the X5560 is slightly faster. However, SQL Server 2008 R2 performance has been pretty bad over the last day or so and CPU usage has been pretty high. Page life expectancy is massive and we are getting almost a 100% cache hit ratio for pages, so memory is not a problem. When I ran:

        SELECT * FROM sys.dm_os_wait_stats order by signal_wait_time_ms desc

    I got:

        wait_type                    waiting_tasks_count  wait_time_ms  max_wait_time_ms  signal_wait_time_ms
        ---------------------------  -------------------  ------------  ----------------  -------------------
        XE_TIMER_EVENT               115166               2799125790    30165             2799125065
        REQUEST_FOR_DEADLOCK_SEARCH  559393               2799053973    5180              2799053973
        SOS_SCHEDULER_YIELD          152289883            189948844     960               189756877
        CXPACKET                     234638389            2383701040    141334            118796827
        SLEEP_TASK                   170743505            1525669557    1406              76485386
        LATCH_EX                     97301008             810738519     1107              55093884
        LOGMGR_QUEUE                 16525384             2798527632    20751319          4083713
        WRITELOG                     16850119             18328365      1193              2367880
        PAGELATCH_EX                 13254618             8524515       11263             1670113
        ASYNC_NETWORK_IO             23954146             6981220       7110              1475699

        (10 row(s) affected)

    I also ran:

        -- Isolate top waits for server instance since last restart or statistics clear
        WITH Waits AS (
            SELECT wait_type,
                   wait_time_ms / 1000. AS [wait_time_s],
                   100. * wait_time_ms / SUM(wait_time_ms) OVER() AS [pct],
                   ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS [rn]
            FROM sys.dm_os_wait_stats
            WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE',
                'SLEEP_TASK','SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR','LOGMGR_QUEUE',
                'CHECKPOINT_QUEUE','REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH',
                'BROKER_TASK_STOP','CLR_MANUAL_EVENT','CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE',
                'FT_IFTS_SCHEDULER_IDLE_WAIT','XE_DISPATCHER_WAIT','XE_DISPATCHER_JOIN'))
        SELECT W1.wait_type,
               CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
               CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
               CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
        FROM Waits AS W1
        INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
        GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
        HAVING SUM(W2.pct) - W1.pct < 95; -- percentage threshold

    and got:

        wait_type            wait_time_s  pct    running_pct
        CXPACKET             554821.66    65.82  65.82
        LATCH_EX             184123.16    21.84  87.66
        SOS_SCHEDULER_YIELD  37541.17     4.45   92.11
        PAGEIOLATCH_SH       19018.53     2.26   94.37
        FT_IFTSHC_MUTEX      14306.05     1.70   96.07

    That shows huge amounts of time spent synchronizing queries involving parallelism (high CXPACKET). Additionally, anecdotally, many of these problem queries are being executed on multiple cores (we have no MAXDOP hints anywhere in our code). The server has not been under load for more than a day or so. We are experiencing a large amount of variance in query execution times; many queries appear to be slower than they were on our previous DB server, and CPU is really high. Will disabling hyperthreading help reduce our CPU usage and increase throughput?

    Read the article

  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance...

        -- [Data] has a predictable structure and a simple clustered index of the primary key:
        ALTER TABLE [dbo].[Data] ADD PRIMARY KEY CLUSTERED ( [ID] )

        -- My query joins on itself looking for a certain kind of "overlapping" records
        SELECT DISTINCT [Data].ID AS [ID]
        FROM dbo.[Data] AS [Data]
        JOIN dbo.[Data] AS [Compared]
          ON [Data].[A] = [Compared].[A]
         AND [Data].[B] = [Compared].[B]
         AND [Data].[C] = [Compared].[C]
         AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E])
         AND [Data].[F] <> [Compared].[F]
        WHERE 1=1
          AND [Data].[A] = @A
          AND @CS <= [Data].[C] AND [Data].[C] < @CE -- Between a range

    [Data] has about a quarter-million records so far; 10% to 50% of the data satisfies the WHERE clause depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan indicates two parallel Clustered Index Scans, but the clustered index is only on the ID, which isn't part of the conditions of the query, only the output. If I add this hand-crafted [IDX_A_B_C_D_E_F] index, which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% and 4 minutes for 50%). The estimated/actual execution plans show an Index Seek, which seems like the right thing to be doing, but why so slow?

        CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F] ON [dbo].[Data]
            ([A], [B], [C], [D], [E], [F])
            INCLUDE ([ID], [X], [Y], [Z]);

    The Database Engine Tuning wizard suggests a similar index with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the WHERE clause makes no difference in performance. I need these and other indexes for other queries. I'm sure I could hint that the query should use the clustered index, since that's currently winning, but we all know it is not as optimized as it could be, and without a proper index I can expect performance to get much worse with additional data. What gives?

    Read the article

  • Career as a Software Tester

    - by mgj
    Respected all, I am a fresher who is interested in a job as a software tester, and I have a few general questions about the prospects of this kind of job in a software company. What kinds of challenges does a tester face in real-life situations that make the job interesting and self-motivating? What are the growth opportunities for someone in a software company who wants to pursue a career as a software tester? Are software developers and software testers treated alike in terms of growth opportunities? If not, why not, and how does one (the tester or anyone else) deal with such a situation so that it is a win-win for both the company and the tester? I am really looking forward to answers drawn from your personal experiences and insights. Thank you.

    Read the article

  • Performance Test and TCP tuning

    - by Mithir
    We are in the process of performance testing an application which receives TCP requests and converts them to SOAP requests (WCF httpBinding) that other services work on. The server is Windows Server 2008 R2. The TCP requests are received by a TcpListener instance (.NET C#), and there are 3 HTTP-bound WCF services running on the same server. We have built a performance test client whose goal is to simulate multiple concurrent requests (each request has to be different and recognizable by the application). We built a test running 150 requests at the same time (on 150 different threads), and we noticed straight away that some requests get the TCP connection slowly, but once they get it, they act fast. A single request writes twice on the same connection: the request and an application ack. Although a single request+ack can take about 150ms, the 150-request test takes about 7 seconds.

    The problem: when we try to run this test from 2 different computers we lose requests. Some client requests fail with "no connection was made because the target machine actively refused it". So I got here and became convinced it was because of the backlog. I changed the TcpListener parameters and made the registry AFD backlog changes written here, but it still didn't work, so I applied all of the TCP tuning suggested plus some netsh commands which were recommended, but still no change; we still get that error. Is there anything else I need to know? Are there any other solutions?
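    The usual cause of "actively refused" under a connection burst is an accept queue (listen backlog) that is too small or drained too slowly. The asker's code is a C# TcpListener, but the idea is the same in any socket API; here is a conceptual sketch in Python (illustrative only, not the asker's code), where the backlog value and handler are assumptions:

        import socket
        import threading

        def handle(conn):
            with conn:
                conn.recv(4096)          # read the request
                conn.sendall(b"ACK")     # application-level ack, as in the question

        def serve(host="0.0.0.0", port=9000, backlog=200):
            # The backlog sizes the pending-connection queue. If clients connect
            # faster than accept() drains the queue and the queue fills up, new
            # connections are refused or dropped (exact behavior is OS-dependent).
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen(backlog)          # explicit backlog instead of a small default
            while True:
                conn, _addr = srv.accept()
                # Hand each connection to a worker so accept() keeps draining the queue.
                threading.Thread(target=handle, args=(conn,), daemon=True).start()

        if __name__ == "__main__":
            serve()

    In .NET, the analogous knob is the backlog overload of TcpListener.Start; keeping the accept loop fast (never doing per-request work before handing the socket off) matters as much as the backlog size itself.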

    Read the article

  • ASA Slow IPSec Performance with Inconsistent Window Size

    - by Brent
    I have an IPSec link between two sites over ASA 5520s running 8.4(3) and I am getting extremely poor performance when traffic passes over the IPSec VPN. CPU on the devices is ~13%, memory is at 408 MB, and there are 2 active VPN sessions, so the load on both of the devices is particularly low. Latency between the two sites is ~40ms.

    Screenshot of a Wireshark capture of a file transfer between the two hosts over the firewall's IPSec VPN, performing at 10MBPS; note the changing window size: http://imgur.com/wGTB8Cr
    Screenshot of a Wireshark capture of a file transfer between the two hosts not going over IPSec, performing at 55MBPS with a constant window size: http://imgur.com/EU23W1e

    I'm seeing an inconsistent window size when transferring over the IPSec VPN, ranging from 46,796 to 65,535. When performing at 55+MBPS, the window size is consistently 65,535. Does this point to a problem in my configuration of the IPSec VPN on the ASA, or to a layer 1/2 issue? Using ping xxxxxx -f -l I finally get a non-fragmented packet at 1418 bytes, so 1418 + 28 for IP/ICMP headers = 1446. I know that I have 1500 set on the ASA and Ethernet. I do have "Force Maximum segment size for TCP proxy connection to be" set to 1380 bytes under Configuration > Advanced > TCP Options on the ASA. Using iperf, I am getting a "TCP Window Full" every few seconds and ~3 MBPS performance: http://imgur.com/elRlMpY

    Show run on the ASA: http://pastebin.com/uKM4Jh76
    Show cry accelerator stats: http://pastebin.com/xQahnqK3

    Read the article

  • Regex - Ignore lines with matching text

    - by codem
    I need the regex to get all the lines which DO NOT have a job name containing "filewatch". Any help will be greatly appreciated! Thanks.

        STATUS: FAILURE JOB: i3-imds-dcp-pd-bo1-05-ftpfilewatcher
        STATUS: FAILURE JOB: i3-cur-atmrec-pd-TD_FTP_Forecast_File_Del_M_Su
        ALARM: JOBFAILURE JOB: i3-cur-atmrec-pd-TD_FTP_Forecast_File_Del_M_Su
        STATUS: FAILURE JOB: i3-sss-system-heartbeat
        ALARM: JOBFAILURE JOB: i3-sss-system-heartbeat
        STATUS: FAILURE JOB: i3-chq-cspo-pd-batch-daily-renametable-fileok
        STATUS: FAILURE JOB: i3-chq-cspo-pd-batch-daily-renametable-file
        ALARM: JOBFAILURE JOB: i3-chq-cspo-pd-batch-daily-renametable-fileok
        ALARM: JOBFAILURE JOB: i3-chq-cspo-pd-batch-daily-renametable-file
        STATUS: FAILURE JOB: i3-imds-dcp-pd-bo1-35-filewatcher
        STATUS: FAILURE JOB: i3-imds-dcp-pd-bo1-05-ftpfilewatcher
        STATUS: FAILURE JOB: i3-imds-dcp-pd-bo1-35-filewatcher
        STATUS: FAILURE JOB: i3-imds-dcp-pd-bo1-05-ftpfilewatcher
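    A negative lookahead is the usual way to express "line does not contain X". The question names no language or tool, so the choice of Python here is an assumption; the pattern itself is portable to most PCRE-style engines (for example, grep -P '^(?!.*filewatch).*$' file):

        import re

        lines = [
            "STATUS: FAILURE JOB: i3-imds-dcp-pd-bo1-05-ftpfilewatcher",
            "STATUS: FAILURE JOB: i3-sss-system-heartbeat",
            "ALARM: JOBFAILURE JOB: i3-sss-system-heartbeat",
            "STATUS: FAILURE JOB: i3-imds-dcp-pd-bo1-35-filewatcher",
        ]

        # ^(?!.*filewatch).*$ matches a line only if "filewatch" appears nowhere in it.
        keep = re.compile(r"^(?!.*filewatch).*$")

        for line in lines:
            if keep.match(line):
                print(line)
        # Prints only the two heartbeat lines, which do not mention filewatch.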

    Read the article

  • Fastest PNG decoder for .NET

    - by sboisse
    Our web server needs to process many compositions of large images together before sending the results to web clients. This process is performance critical because the server can receive several thousand requests per hour. Right now our solution loads PNG files (around 1MB each) from the HD and sends them to the video card so the composition is done on the GPU. We first tried loading our images using the PNG decoder exposed by the XNA API and saw that the performance was not too good. To understand whether the problem was loading from the HD or decoding the PNG, we modified that by loading the file into a memory stream and then sending that memory stream to the .NET PNG decoder. The difference in performance between XNA and the System.Windows.Media.Imaging.PngBitmapDecoder class is not significant; we roughly get the same levels of performance. Our benchmarks show the following performance results:

        Load images from disk:                        37.76ms     1%
        Decode PNGs:                                2816.97ms    77%
        Load images on video hardware:               196.67ms     5%
        Composition:                                  87.80ms     2%
        Get composition result from video hardware:  166.21ms     5%
        Encode to PNG:                               318.13ms     9%
        Store to disk:                                 3.96ms     0%
        Clean up:                                     53.00ms     1%
        Total:                                      3680.50ms   100%

    From these results we see that the slowest part is decoding the PNGs. So we are wondering if there is a PNG decoder we could use that would reduce the PNG decoding time. We also considered keeping the images uncompressed on the hard disk, but then each image would be 10MB in size instead of 1MB, and since there are several tens of thousands of these images stored on the hard disk, it is not possible to store them all without compression.

    Read the article

  • Help on Website response time KPI parameters

    - by geeth
    I am working on improving website performance. Here is the list of key performance indicators I am looking at for each page:

    - Total bytes downloaded
    - Number of requests
    - DNS lookup time
    - First byte
    - Download time
    - DOM content load time
    - Total load time

    Is there an optimum value for each KPI that indicates good website performance? Please help me in this regard.
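    Browser-side KPIs such as DOM content load time need a real browser (the Navigation Timing API or a tool like WebPageTest), but the network-level ones can be sampled from a script. A rough sketch in Python using the requests library, which is an assumption since the question names no tooling; the timings lump DNS and connection setup into the first-byte figure:

        import time
        import requests

        def sample_page(url):
            start = time.perf_counter()
            resp = requests.get(url, stream=True)   # returns once headers arrive
            ttfb = time.perf_counter() - start      # approximate time to first byte
            body = resp.content                     # drain the response body
            total = time.perf_counter() - start     # approximate total load time
            return {
                "status": resp.status_code,
                "bytes_downloaded": len(body),
                "time_to_first_byte_s": round(ttfb, 3),
                "total_load_time_s": round(total, 3),
            }

        print(sample_page("https://example.com/"))

    Numbers like these are most useful as trends against your own baseline rather than against a universal "optimum" value.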

    Read the article

  • FreeBSD ZFS RAID-Z2 performance issues

    - by Axel Gneiting
    I'm trying to build my own network attached storage based on FreeBSD + ZFS + standard components, but there are strange performance issues. The hardware specs are:

    - AMD Athlon II X2 240e processor
    - ASUS M4A78LT-M LE mainboard
    - 2GiB Kingston ECC DDR3 (two sticks)
    - Intel Pro/1000 CT PCIe network adapter
    - 5x Western Digital Caviar Green 1.5TB

    I created a RAID-Z2 zpool from all disks and installed FreeBSD 8.1 on that zpool following the tutorial. The SATA controllers are running in AHCI mode. Output of zpool status:

          pool: zroot
         state: ONLINE
         scrub: none requested
        config:

        NAME                                            STATE  READ WRITE CKSUM
        zroot                                           ONLINE    0     0     0
          raidz2                                        ONLINE    0     0     0
            gptid/7ef815fc-eab6-11df-8ea4-001b2163266d  ONLINE    0     0     0
            gptid/80344432-eab6-11df-8ea4-001b2163266d  ONLINE    0     0     0
            gptid/81741ad9-eab6-11df-8ea4-001b2163266d  ONLINE    0     0     0
            gptid/824af5cb-eab6-11df-8ea4-001b2163266d  ONLINE    0     0     0
            gptid/82f98a65-eab6-11df-8ea4-001b2163266d  ONLINE    0     0     0

    The problem is that write performance on the pool is very, very bad (<10 MB/s) and every application that is accessing the disk is unresponsive every few seconds when writing. It seems like writing is fine until the ZFS ARC cache is full, and then ZFS stalls the entire system's I/O until it has finished writing that data. I'm also getting "kmem_malloc too small" kernel panics. I've already tried putting

        vm.kmem_size="1500M"
        vm.kmem_size_max="1500M"

    into /boot/loader.conf, but it doesn't help. Does anyone know what's going on here? Do I really not have enough memory for ZFS to handle this RAID-Z2?

    Read the article

  • ORM solutions (JPA; Hibernate) vs. JDBC

    - by Grasper
    I need to be able to insert/update objects at a consistent rate of at least 8000 objects every 5 seconds in an in-memory HSQL database. I have done some comparison performance testing between Spring/Hibernate/JPA and pure JDBC. I have found a significant difference in performance using HSQL.. With Spring/Hib/JPA, I can insert 3000-4000 of my 1.5 KB objects (with a One-Many and a Many-Many relationship) in 5 seconds, while with direct JDBC calls I can insert 10,000-12,000 of those same objects. I cannot figure out why there is such a huge discrepancy. I have tweaked the Spring/Hib/JPA settings a lot trying to get close in performance without luck. I want to use Spring/Hib/JPA for future purposes, expandability, and because the foreign key relationships (one-many and many-many) are difficult to maintain by hand; but the performance requirements seem to point towards using pure JDBC. Any ideas of why there would be such a huge discrepancy?

    Read the article

  • Need help trying to diagnose Symmetrix SAN performance issues

    - by arcain
    I am helping to benchmark hardware for a new SQL Server instance, and the volume presented to the OS for the data files is carved from a set of spindles on a Symmetrix SAN. The server has yet to have SQL Server installed, so the only activity on the box is our benchmarking. Now, our storage engineers say that this volume and its resources are dedicated to our new server (I don't have access to see the actual SAN config); however, the performance benchmarks are troubling. For example, the numbers look good until suddenly, and randomly, we see wait times of 100 seconds in our IO benchmarking tool, and disk queue lengths of 255 in perfmon. This SAN has an 8 GB cache, plus there are other applications besides ours that use the SAN. I'm wondering if (even though the spindles for our volumes should be dedicated to us) the cache may be getting hammered during the performance testing, or perhaps the spindles our volumes are on aren't really dedicated to us. We're not getting much traction from our storage engineers in helping us track down the problem, so if anybody has experience with diagnosing a problem like this and would like to share insights and troubleshooting methodologies, I'd appreciate it.

    Read the article

  • Are there any other Java schedulers besides Quartz (FOSS) and Flux (commercial)?

    - by mP
    I am interested in finding out about other job scheduling packages besides Quartz and Flux. Given the plethora of web frameworks, I find it peculiar that there is really only one scheduler. Are there others that are perhaps very much unknown or unpopular?

    Spring Batch: not really a scheduling solution but rather a batch job coordinator. From http://static.springsource.org/spring-batch/faq.html#schedulers: "How does Spring Batch differ from Quartz? Is there a place for them both in a solution? Spring Batch and Quartz have different goals. Spring Batch provides functionality for processing large volumes of data and Quartz provides functionality for scheduling tasks. So Quartz could complement Spring Batch, but are not excluding technologies. A common combination would be to use Quartz as a trigger for a Spring Batch job using a Cron expression and the Spring Core convenience SchedulerFactoryBean."

    Read the article

  • Python threading and performance?

    - by kumar
    I have to do a heavy I/O-bound operation, i.e. parsing large files and converting them from one format to another. Initially I did it serially, i.e. parsing one file after another. Performance was very poor (it used to take 90+ seconds). So I decided to use threading to improve performance, and I created one thread for each file (4 threads):

        for file in file_list:
            t = threading.Thread(target=self.convertfile, args=file)
            t.start()
            ts.append(t)
        for t in ts:
            t.join()

    But to my astonishment there is no performance improvement whatsoever; it still takes around 90+ seconds to complete the task. As this is an I/O-bound operation, I had expected threading to improve the performance. What am I doing wrong?
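    If the conversion work is actually dominated by parsing (CPU-bound), CPython's GIL prevents the threads from running that parsing in parallel. A hedged sketch of the usual workaround using a process pool; the convert_file function, file paths, and 4-worker count are illustrative, not from the question:

        from multiprocessing import Pool

        def convert_file(path):
            # Placeholder for the real parse-and-convert work. CPU-heavy code here
            # runs in a separate process, so it is not serialized by the GIL.
            with open(path, "rb") as f:
                data = f.read()
            return len(data)  # stand-in for writing the converted output

        if __name__ == "__main__":
            file_list = ["a.dat", "b.dat", "c.dat", "d.dat"]  # illustrative paths
            with Pool(processes=4) as pool:
                results = pool.map(convert_file, file_list)   # one file per worker
            print(results)

    If the work really is I/O-bound (waiting on disk or network), threads can help; but pure-Python parsing of large files is usually CPU-bound and benefits from multiprocessing instead.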

    Read the article

  • How does TopCoder evaluate code?

    - by Carlos
    If you are familiar with TopCoder, you know that your source code gets a final "grade/points" score that depends on time, the number of compiles, etc., with performance being one of the most heavily weighted factors. But how can they test that? Is there some sort of simple code (Java or C++) that does it, which you could share so I can evaluate it and hopefully write my own to test the programs I write for university? This is sort of a follow-up to the question where I ask whether shorter code results in better performance. P.S.: I'm interested both in how TopCoder measures performance and in writing code to test performance.
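    At its simplest, "testing performance" means timing a solution against known inputs and comparing against a budget. A minimal sketch of such a harness follows; the solve function and the time limit are made-up placeholders, and this is not TopCoder's actual grader:

        import time

        def solve(n):
            # Placeholder solution under test: sum of squares below n.
            return sum(i * i for i in range(n))

        def time_case(func, arg, repeats=5):
            # Take the best of several runs to reduce noise from other processes.
            best = float("inf")
            for _ in range(repeats):
                start = time.perf_counter()
                func(arg)
                best = min(best, time.perf_counter() - start)
            return best

        TIME_LIMIT_S = 2.0  # illustrative per-case budget
        for case in (10_000, 100_000, 1_000_000):
            elapsed = time_case(solve, case)
            verdict = "OK" if elapsed <= TIME_LIMIT_S else "TIME LIMIT EXCEEDED"
            print(f"n={case}: {elapsed:.4f}s {verdict}")

    The repeat-and-take-the-best pattern is what the standard timeit module automates; for algorithmic comparisons, also vary the input size to see how the run time grows.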

    Read the article

  • What is the correct stage to use for Google Guice in production in an application server?

    - by Yishai
    It seems like a strange question (the obvious answer would be Production, duh), but if you read the Javadocs:

        /**
         * We want fast startup times at the expense of runtime performance and some up front error
         * checking.
         */
        DEVELOPMENT,

        /**
         * We want to catch errors as early as possible and take performance hits up front.
         */
        PRODUCTION

    Assume a scenario where you have a stateless call to an application server, and the initial receiving method (or thereabouts) creates the injector anew on every call. If all of the module bindings are not needed in a given call, then it would seem better to use the Development stage (which is the default) and not take the performance hit up front, because you may never take it at all; and here the distinction between "up front" and "runtime performance" is kind of moot, as it is one call. Of course, the downside would appear to be that you lose the error checking, so problematic code paths can fail by surprise. So the question boils down to: are the assumptions above correct? Will you save performance on a large set of modules when the lifetime of an injector is one call?

    Read the article

  • Why do derivative trading positions always require C++ knowledge?

    - by Jeffrey
    I've never worked in a trading environment before, and I was curious to see that a few trading houses seem to use C# but most of them rely heavily on C++. Why is that? Is it because C++ is better performance-wise? Is it because of legacy code bases? Is it because of cross-platform issues? What about dynamic languages (Ruby, Python)? Are they too slow for this kind of work in terms of performance? Update: if reliability and performance are important, would Erlang be the "next big thing" in trading platforms?

    Read the article

  • Weird exception with delayed_job

    - by Tam
    Trying to queue a job with delayed_job as follows:

        Delayed::Job.enqueue(BackgroundProcess.new(current_user, object))

    current_user and object are not nil when I print them out. The weird thing is that sometimes refreshing the page or running the command again works! Here is the exception trace:

        Delayed::Backend::ActiveRecord::Job Columns (44.8ms) SHOW FIELDS FROM `delayed_jobs`

        TypeError (wrong argument type nil (expected Data)):
          /Users/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/yaml.rb:391:in `emit'
          /Users/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/yaml.rb:391:in `quick_emit'
          /Users/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/yaml/rubytypes.rb:86:in `to_yaml'
          vendor/plugins/delayed_job/lib/delayed/backend/base.rb:65:in `payload_object='
          activerecord (2.3.9) lib/active_record/base.rb:2918:in `block in assign_attributes'
          activerecord (2.3.9) lib/active_record/base.rb:2914:in `each'
          activerecord (2.3.9) lib/active_record/base.rb:2914:in `assign_attributes'
          activerecord (2.3.9) lib/active_record/base.rb:2787:in `attributes='
          activerecord (2.3.9) lib/active_record/base.rb:2477:in `initialize'
          activerecord (2.3.9) lib/active_record/base.rb:725:in `new'
          activerecord (2.3.9) lib/active_record/base.rb:725:in `create'
          vendor/plugins/delayed_job/lib/delayed/backend/base.rb:21:in `enqueue'

    Read the article

  • My program is spending most of its time in objc_msgSend. Does that mean that Objective-C has bad performance?

    - by Paperflyer
    Hello Stack Overflow. I have written an application that has a number of custom views and generally draws a lot of lines and bitmaps. Since performance is somewhat critical for the application, I spent a good amount of time optimizing draw performance. Now, Activity Monitor tells me that my application is usually using about 12% CPU, and Instruments (the profiler) says that a whopping 10% CPU is spent in objc_msgSend (mostly in drawing-related system calls). On the one hand, I am glad about this since it means that my drawing is about as fast as it gets and my optimizations were a huge success. On the other hand, it seems to imply that the only thing still using my CPU is the Objective-C overhead for messages (objc_msgSend), and hence that if I had written the application in, say, Carbon, its performance would be drastically better. Now I am tempted to conclude that Objective-C is a language with bad performance, even though Cocoa seems to be awfully efficient, since it can apparently draw faster than Objective-C can send messages. So, is Objective-C really a language with bad performance? What do you think about that?

    Read the article

  • Linux iptables / conntrack performance issue

    - by tim
    I have a test setup in the lab with 4 machines, to test Linux firewall performance since we got bitten by a number of SYN-flood attacks in the last months:

    - 2 old P4 machines (t1, t2)
    - 1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t3), Intel e1000
    - 1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t4), Intel e1000

    All machines run Ubuntu 12.04 64-bit. t1, t2, and t3 are interconnected through a 1GB/s switch; t4 is connected to t3 via an extra interface. So t3 simulates the firewall, t4 is the target, and t1 and t2 play the attackers generating a packet storm through it (192.168.4.199 is t4):

        hping3 -I eth1 --rand-source --syn --flood 192.168.4.199 -p 80

    t4 drops all incoming packets to avoid confusion with gateways, performance issues of t4, etc. I watch the packet stats in iptraf. I have configured the firewall (t3) as follows:

    - stock 3.2.0-31-generic #50-Ubuntu SMP kernel
    - rhash_entries=33554432 as a kernel parameter
    - sysctl as follows:

        net.ipv4.ip_forward = 1
        net.ipv4.route.gc_elasticity = 2
        net.ipv4.route.gc_timeout = 1
        net.ipv4.route.gc_interval = 5
        net.ipv4.route.gc_min_interval_ms = 500
        net.ipv4.route.gc_thresh = 2000000
        net.ipv4.route.max_size = 20000000

    (I have tweaked a lot to keep t3 running when t1+t2 are sending as many packets as possible.) The results of these efforts are somewhat odd:

    - t1+t2 manage to send about 200k packets/s each.
    - t4 in the best case sees around 200k in total, so half of the packets are lost.
    - t3 is nearly unusable on the console even though packets are flowing through it (high numbers of soft IRQs).
    - The route cache garbage collector is nowhere near predictable, and in the default settings it is overwhelmed by very few packets/s (<50k packets/s).
    - Activating stateful iptables rules makes the packet rate arriving at t4 drop to around 100k packets/s, effectively losing more than 75% of the packets.

    And all of this, here is my main concern, with just two old P4 machines sending as many packets as they can, which means nearly everyone on the net should be capable of this. So here goes my question: did I overlook some important point in the config or in my test setup? Are there any alternatives for building a firewall system, especially on SMP systems?

    Read the article

  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle as the dependencies for the experimental nvidia-glx package are still somewhat broken. I got it to work without forcing the installation of any packages and without modifying the packages. However, after the upgrade compiz performance has been abysmal. I am using the desktop wall plugin and switching viewports is really slow - takes a few seconds for each switch. In addition to this, every effect that compiz does, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed changes relative to the amount of windows on that virtual screen - empty screens switch almost at normal speed, single browser windows work almost decently, but just 4 rxvt terminals slows the switches down to a crawl. My compiz configuration should be pretty basic. Xorg is likewise configured without anything special - the only "custom" configuration is forcing the driver name to be "nvidia". I've fiddled around with the nvidia-settings and compizconfig trying different VSync settings, but none of those helped. My graphics card is: NVIDIA GPU NVS 3100M (GT218) at PCI:1:0:0 (GPU-0). This is laptop GPU that is from the Geforce GTX 200 series. Graphics card performance should naturally be no problem. EDIT: In the end, nothing really worked, and I got really annoyed with the state of compiz and its support in Debian. Many nVidia driver revisions have passed and I am using Gnome 3 now, so I am accepting the best answers to this question even though the issue was not resolved.

    Read the article

  • Developing a high-performance and scalable Zend Framework website

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be like this one, but just to give you an idea), and we are having some concerns regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator; I understand this will create problems with scalability on multiple servers) + jQuery, with a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are:

    - What do you think about my approach?
    - What solutions would you recommend in terms of the server approach (MySQL, NGINX, MongoDB, or pgsql) for a scalable PHP application expected to have a lot of traffic?

    Note: I'm a Zend Framework developer and don't have too much experience with the server part (to determine what would be the best solution for my scalable application).

    Read the article

  • Hibernate Lazy init exception in spring scheduled job

    - by Noam Nevo
    I have a spring scheduled job (@Scheduled) that sends emails from my system according to a list of recipients in the DB. This method is annotated with the @Scheduled annotation and it invokes a method from another interface, the method in the interface is annotated with the @Transactional annotation. Now, when i invoke the scheduled method manually, it works perfectly. But when the method is invoked by spring scheduler i get the LazyInitFailed exception in the method implementing the said interface. What am I doing wrong? code: The scheduled method: @Component public class ScheduledReportsSender { public static final int MAX_RETIRES = 3; public static final long HALF_HOUR = 1000 * 60 * 30; @Autowired IScheduledReportDAO scheduledReportDAO; @Autowired IDataService dataService; @Autowired IErrorService errorService; @Scheduled(cron = "0 0 3 ? * *") // every day at 2:10AM public void runDailyReports() { // get all daily reports List<ScheduledReport> scheduledReports = scheduledReportDAO.getDaily(); sendScheduledReports(scheduledReports); } private void sendScheduledReports(List<ScheduledReport> scheduledReports) { if(scheduledReports.size()<1) { return; } //check if data flow ended its process by checking the report_last_updated table in dwh int reportTimeId = scheduledReportDAO.getReportTimeId(); String todayTimeId = DateUtils.getTimeid(DateUtils.getTodayDate()); int yesterdayTimeId = Integer.parseInt(DateUtils.addDaysSafe(todayTimeId, -1)); int counter = 0; //wait for time id to update from the daily flow while (reportTimeId != yesterdayTimeId && counter < MAX_RETIRES) { errorService.logException("Daily report sender, data not ready. Will try again in one hour.", null, null, null); try { Thread.sleep(HALF_HOUR); } catch (InterruptedException ignore) {} reportTimeId = scheduledReportDAO.getReportTimeId(); counter++; } if (counter == MAX_RETIRES) { MarketplaceServiceException mse = new MarketplaceServiceException(); mse.setMessage("Data flow not done for today, reports are not sent."); throw mse; } // get updated timeid updateTimeId(); for (ScheduledReport scheduledReport : scheduledReports) { dataService.generateScheduledReport(scheduledReport); } } } The Invoked interface: public interface IDataService { @Transactional public void generateScheduledReport(ScheduledReport scheduledReport); } The implementation (up to the line of the exception): @Service public class DataService implements IDataService { public void generateScheduledReport(ScheduledReport scheduledReport) { // if no recipients or no export type - return if(scheduledReport.getRecipients()==null || scheduledReport.getRecipients().size()==0 || scheduledReport.getExportType() == null) { return; } } } Stack trace: ERROR: 2012-09-01 03:30:00,365 [Scheduler-15] LazyInitializationException.<init>(42) | failed to lazily initialize a collection of role: com.x.model.scheduledReports.ScheduledReport.recipients, no session or session was closed org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.x.model.scheduledReports.ScheduledReport.recipients, no session or session was closed at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:383) at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:375) at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:122) at 
org.hibernate.collection.PersistentBag.size(PersistentBag.java:248) at com.x.service.DataService.generateScheduledReport(DataService.java:219) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:309) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:110) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202) at $Proxy208.generateScheduledReport(Unknown Source) at com.x.scheduledJobs.ScheduledReportsSender.sendScheduledReports(ScheduledReportsSender.java:85) at com.x.scheduledJobs.ScheduledReportsSender.runDailyReports(ScheduledReportsSender.java:38) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:273) at org.springframework.scheduling.support.MethodInvokingRunnable.run(MethodInvokingRunnable.java:65) at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:51) at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:636) ERROR: 2012-09-01 03:30:00,366 [Scheduler-15] MethodInvokingRunnable.run(68) | Invocation of method 'runDailyReports' on target class [class com.x.scheduledJobs.ScheduledReportsSender] failed org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.x.model.scheduledReports.ScheduledReport.recipients, no session or session was closed at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:383) at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:375) at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:122) at org.hibernate.collection.PersistentBag.size(PersistentBag.java:248) at com.x.service.DataService.generateScheduledReport(DataService.java:219) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:309) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:110) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202) at $Proxy208.generateScheduledReport(Unknown Source) at com.x.scheduledJobs.ScheduledReportsSender.sendScheduledReports(ScheduledReportsSender.java:85) at com.x.scheduledJobs.ScheduledReportsSender.runDailyReports(ScheduledReportsSender.java:38) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:273) at org.springframework.scheduling.support.MethodInvokingRunnable.run(MethodInvokingRunnable.java:65) at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:51) at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:636)

    Read the article

  • Should I work for free while applying for a job?

    - by Jevgeni Bogatyrjov
    An employer usually asks a candidate to do a small project at home ("homework") as part of applying for a job. The last time I applied for a job (as a web developer), there were approximately 10 applicants who were all given different tasks. Despite the fact that there was only one vacancy, the company used the work of all of the candidates in one of its projects. Actually, it is quite reasonable for a company to create these "vacancies" just to make people work for free; I estimate that approximately 2 weeks of programmer's work was saved through all of the applications that company received for one vacancy. Is this a common practice, and how can you protect yourself from working for free in the future? Have you seen this during your career?

    Read the article

  • Should I enable "Intel NIC DMA Channels"?

    - by javapowered
    I have HP DL360p Gen8 646902-xx1 I'm trying to optimize my config for low latency trading. Should I enable "Intel NIC DMA Channels"? Will that help/affect my system? From HP doc: Added a new ROM Based Setup Utility (RBSU) Advanced Performance Option menu that allows the user to enable Intel NIC DMA Channels (IOAT). This option is disabled by default. When enabled, certain networking devices may see an improvement in performance by utilizing Intel's DMA engine to offload network activity. Please consult documentation from the network adapter to determine if this feature is supported.

    Read the article
