Search Results

Search found 13654 results on 547 pages for 'ssis performance'.

Page 67/547 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • Performance Tuning in the Age of Big Data

    Database Administrators must now deal with large volumes of data and new forms of high-speed data analysis. If your responsibility includes performance tuning, here are the areas to focus on that will become more and more important in the age of Big Data.

    Read the article

  • How do I know if I've gone too far with processing things in a game?

    - by ThePlan
    A common programming quote I see every day is: "Premature optimization is the root of all evil!" I admit I'm one of those guys who likes to optimize prematurely in a pretty obsessive manner, but that's probably because I'm not aware of how powerful modern processors are. I can think of lots of solutions to a problem, but all of them are tough on the memory side, and I keep thinking, "This will hurt me more later when I have to redo it because it performs badly." How do you know when the code you are planning is going too far and it is not just a case of premature optimization? How much can your game handle at a time before performance becomes a problem?

    Read the article

  • Best way to set up servers for .NET performance [migrated]

    - by msigman
    Assume we have 3 physical servers, and let's say we are only interested in performance, not reliability. Is it better to give each server a specific function, or to make them all duplicates and split the traffic between them? In other words, should we dedicate one as the DB server, one as the web server, and one as the reporting server/data warehouse, or put all three services on each server and use them as a web farm?

    Read the article

  • OpenCL 1.1 backward compatible, enhanced performance

    Linux Magazine: "The Khronos Group today announced OpenCL 1.1, a backwards-compatible update that boosts performance in the parallel programming standard. OpenCL is a free programming standard designed from the ground up to optimize coding on multicore processors."

    Read the article

  • CPU and Scheduler Performance Monitoring using SQL Server and Excel

    This article will demonstrate a method of creating an Excel-based CPU/scheduler performance dashboard for SQL Server 2005+.
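
    As a rough illustration of the kind of data the article works with, here is a minimal C# sketch (not taken from the article) that samples the sys.dm_os_schedulers DMV and appends the rows to a CSV file that Excel can chart. The connection string, file name, and column selection are assumptions; adjust them for your environment.

        // Minimal sketch: sample scheduler counters from sys.dm_os_schedulers and
        // append them to a CSV for charting in Excel. Assumes a local default
        // instance and Windows authentication; adjust connStr for your server.
        using System;
        using System.Data.SqlClient;
        using System.IO;

        class SchedulerSnapshot
        {
            static void Main()
            {
                const string connStr = "Server=localhost;Database=master;Integrated Security=true";
                const string sql =
                    "SELECT scheduler_id, cpu_id, current_tasks_count, " +
                    "       runnable_tasks_count, pending_disk_io_count " +
                    "FROM sys.dm_os_schedulers WHERE status = 'VISIBLE ONLINE'";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                using (var writer = new StreamWriter("schedulers.csv", append: true))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // One line per scheduler, time-stamped so Excel can plot a trend.
                            writer.WriteLine("{0:o},{1},{2},{3},{4},{5}",
                                DateTime.UtcNow,
                                reader["scheduler_id"],
                                reader["cpu_id"],
                                reader["current_tasks_count"],
                                reader["runnable_tasks_count"],
                                reader["pending_disk_io_count"]);
                        }
                    }
                }
            }
        }

    Scheduled as a job or a small console task, repeated samples of this sort give the time series the dashboard is built on.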

    Read the article

  • How can I avoid the engineering mistakes of PDT?

    - by ashy_32bit
    As a developer with enough experience to evaluate a tool, I'd say that PDT is huge and slow for a PHP IDE. It gets bigger with every release and exponentially slower as projects grow. Add some extra syntax-coloring rules and it literally crawls, code completion works at random, and building the workspace takes forever. Java black magic (-Xmx etc.) eases the pain a little, but that's it. So my questions are: Why is PDT like this? What design or engineering factors led to its poor performance? How can I avoid making the same mistakes in my own products?

    Read the article

  • Analyzing I/O performance in Linux

    cmdln.org: "Monitoring and analyzing performance is an important task for any sysadmin. Disk I/O bottlenecks can bring applications to a crawl. What is an IOP? Should I use SATA, SAS, or FC? How many spindles do I need?"

    Read the article

  • Correcting CS0009 Error When Creating Integration Services Project

    - by ajdams
    Tried to open an SSIS project I had been working on today and received this lovely error: Unable to generate temporary class (result=1) error CS0009: Metadata file 'c:\WINDOWS\assembly\GAC_MSIL\System.Xml\2.0.0.0_b77a5c561934e089\System.XML.dll' could not be opened -- 'No metadata was found.' Does anyone know why this happens and how to correct it? I've Googled and haven't found any valid solutions relating directly to SSIS. It only happens with BIDS 2008 and SSIS project types; I tried the same packages (as well as creating a new one) on my other machine and it was fine. Any ideas? Thank you.

    Read the article

  • ODBC Proxy for remotely accessing legacy resources?

    - by Winston Fassett
    Our project uses AcuCorp's AcuODBC driver to access a legacy Vision database. The problem is that we only have a 32-bit driver, and the installer simply won't run on our 64-bit servers. I need a way to use SSIS to pull data from that system. As far as I can tell, there are 3 options:
      1. Set up a whole new SQL Server instance with SSIS and the AcuODBC drivers on a 32-bit VM (costly).
      2. Try to hack the 32-bit driver onto our 64-bit server manually (failure-prone and unsupported).
      3. Set up a 32-bit VM with some sort of "proxy" service that our 64-bit SSIS can use to pull the data.
    The first option is the least desirable. If you have any suggestions for options 2 or 3, or anything else I haven't thought of, I'd love to hear them.

    Read the article

  • Minimum needs to Deploy SQL Server Integration Services 2008

    - by Tim
    Hi, I would like to run SSIS 2008 packages on a server that does not have SQL Server 2008 installed on it. I have a simple package to test the concept, but it fails to execute. The return code is 9020, which I have not seen listed as a return code elsewhere. I have copied the following files to the test server:
        SelfContainedSample.dtsConfig
        Package.dtsx
        DTExec.exe
    I am attempting to run the package using a batch file. The lines in the batch file that run the package are:

        "%dtexecloc%\dtexec.exe" /FILE "%packagefolder%\Package.dtsx" /CONFIGFILE "%configfolder%\SelfContainedSample.dtsConfig" /CHECKPOINTING OFF /REPORTING E %logfile%
        set rc=%errorlevel%

    I am wondering whether there are other requirements that need to be satisfied to run an SSIS 2008 package on a server that does not have SQL Server 2008 on it: the .NET runtime? The SSIS 2008 runtime? Please share your advice if you have a solution or have met this issue before. Thanks, Tim

    Read the article

  • Is there a performance difference between Windows 7 on SSD installed from scratch versus it using a recent ghost/clone drive image from a harddisk?

    - by therobyouknow
    I'm planning to upgrade a notebook PC to a Solid-State Flash Drive (SSD) soon. I want to use the notebook before that and am considering installing Windows 7 on the hard disk (spinning variety, 5400rpm) before I get the SSD. To save time, I am wondering if I can ghost/clone the installation of Windows 7 from the hard drive and put it on the SSD. Would the performance of this clone from the hard disk onto the SSD be different from starting again and reinstalling Windows 7 from scratch on the SSD? (Windows 7 32-bit Professional)

    Read the article

  • Dual boot, win7 & ubuntu. Gparted, resize not move. Performance?

    - by data_jepp
    I installed a dual boot on a computer that already had Win7 installed. The question here is about GParted's ability to move partitions. I made room for Ubuntu in the computer's "Data" partition by resizing it, but I canceled the "move" action. Was that incredibly stupid, or is it harmless? Maybe performance is affected. Can this affect the HDD's lifespan? The computer is a UL30A.

    Read the article

  • How close can I get C# to the performance of C++ for small intensive tasks?

    - by SLC
    I was thinking about the speed difference between C++ and C# as being mostly about C# compiling to byte-code that is taken in by the JIT compiler (is that correct?) and all the checks C# does. I notice that it is possible to turn a lot of these features off, both in the compile options and possibly through using the unsafe keyword, as unsafe code is not verifiable by the common language runtime. Therefore, if you were to write a simple console application in both languages that flipped an imaginary coin an infinite number of times and displayed the results to the screen every 10,000 or so iterations, how much speed difference would there be? I chose this because it's a very simple program. I'd like to test this, but I don't know C++ or have the tools to compile it. This is my C# version though:

        static void Main(string[] args)
        {
            unsafe
            {
                Random rnd = new Random();
                int heads = 0, tails = 0;
                while (true)
                {
                    if (rnd.NextDouble() > 0.5)
                        heads++;
                    else
                        tails++;
                    if ((heads + tails) % 1000000 == 0)
                        Console.WriteLine("Heads: {0} Tails: {1}", heads, tails);
                }
            }
        }

    Is the difference enough to warrant deliberately compiling sections of code "unsafe" or into DLLs that do not have some of the compile options like overflow checking enabled? Or does it go the other way, where it would be beneficial to compile sections in C++? I'm sure interop speed comes into play too, then. To avoid subjectivity, I reiterate the specific parts of this question as: Does C# get a performance boost from using unsafe code? Do compile options such as disabling overflow checking boost performance, and do they affect unsafe code? Would the program above be faster in C++ or negligibly different? Is it worth compiling long, intensive number-crunching tasks in a language such as C++, or using /unsafe for a bonus? Less subjectively, could I complete an intensive operation faster by doing this?
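
    One of the specific questions above - whether disabling overflow checking buys anything - can be measured from within C# itself. The following is a rough micro-benchmark sketch (not from the question) using Stopwatch with explicit checked and unchecked blocks; the workload and iteration count are arbitrary choices, and the results will vary with hardware and JIT warm-up, so treat the numbers as indicative only.

        // Rough micro-benchmark: does forcing overflow checking on (checked) cost
        // anything versus forcing it off (unchecked) in a tight integer loop?
        using System;
        using System.Diagnostics;

        class OverflowCheckBenchmark
        {
            const int Iterations = 100000000; // 100 million

            static void Main()
            {
                // Run each variant twice: the first pass warms up the JIT, the second is the one to read.
                for (int pass = 0; pass < 2; pass++)
                {
                    Measure("checked  ", RunChecked);
                    Measure("unchecked", RunUnchecked);
                }
            }

            static void Measure(string label, Func<long> work)
            {
                var sw = Stopwatch.StartNew();
                long result = work();
                sw.Stop();
                Console.WriteLine("{0}: {1} ms (result {2})", label, sw.ElapsedMilliseconds, result);
            }

            static long RunChecked()
            {
                long sum = 0;
                for (int i = 0; i < Iterations; i++)
                    checked { sum += i % 7; } // overflow checking forced on
                return sum;
            }

            static long RunUnchecked()
            {
                long sum = 0;
                for (int i = 0; i < Iterations; i++)
                    unchecked { sum += i % 7; } // overflow checking forced off
                return sum;
            }
        }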

    Read the article

  • How to increase the performance of a loop which runs every 'n' minutes

    - by GustlyWind
    Hi. A little background on my requirement and what I have accomplished so far: there are 18 scheduler tasks that run at regular intervals (the shortest being 30 minutes). Each takes nearly 5,000 eligible employees as input, iterates over them in a static method, generates mail content for each employee, and mails it. An average task takes about 9 minutes; multiplied by 18 that is roughly 162 minutes, and meanwhile the next tasks will be waiting in the queue (I assume). My plan is something like the loop below:

        try {
            // Handle ArrayList of alert-eligible employees
            Iterator employee = empIDs.iterator();
            while (employee.hasNext()) {
                ScheduledImplementation.getInstance()
                    .sendAlertInfoToEmpForGivenAlertType((Long) employee.next(), configType, schedType);
            }
        } catch (Exception vEx) {
            _log.error("Exception Caught During sending " + configType + " messages:" + configType, vEx);
        }

    Since I know how many employees will reach my method, I would like to split the while loop in two and process two or three employees at a time. Is this possible, or are there other ways I can improve the performance? Some of the things I have implemented so far:
      1. Wherever possible, made methods and variables static.
      2. Didn't bother to catch exceptions and send them back, because these are background tasks (and I assume this improves performance).
      3. Got the DB values in one query instead of multiple hits.
    If I am successful in optimizing the while loop, I think I can save a couple of minutes. Thanks

    Read the article

  • Is having a lot of DOM elements bad for performance?

    - by rFactor
    Hi, I am making a button that looks like this:

        <!-- Container -->
        <div>
            <!-- Top -->
            <div>
                <div></div>
                <div></div>
                <div></div>
            </div>
            <!-- Middle -->
            <div>
                <div></div>
                <div></div>
                <div></div>
            </div>
            <!-- Bottom -->
            <div>
                <div></div>
                <div></div>
                <div></div>
            </div>
        </div>

    It has many elements because I want it to be skinnable without limiting the skinner's abilities. However, I am concerned about performance. Does having a lot of DOM elements result in bad performance? Obviously there will always be some impact, but how great is it?

    Read the article

  • Performance of a get unique elements/group by operation on an IEnumerable<T>.

    - by tolism7
    I was wondering how I could improve the performance of the following code:

        public class MyObject
        {
            public int Year { get; set; }
        }

        // In my case I have 30000
        IEnumerable<MyObject> data = MethodThatReturnsManyMyObjects();

        var groupedByYear = data.GroupBy(x => x.Year);

        // Here is where it takes around 5 seconds
        foreach (var group in groupedByYear)
            // do something here.

    The idea is to get a set of objects with unique year values. In my scenario there are only 6 years among the 30,000 items in the list, so the foreach loop will be executed only 6 times. So we have many items needing to be grouped into a few groups. Using .Distinct() with an explicit IEqualityComparer would be an alternative, but somehow I feel it won't make any difference. I can understand if 30,000 items is too much and I should be happy with the 5 seconds I get, but I was wondering if the above can be improved performance-wise. Thanks.
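
    As a sketch of an alternative (an assumption on my part, not something from the question): since only one representative object per year is needed, a single pass over the data with a Dictionary keyed on Year avoids building the intermediate groups that GroupBy materializes. MyObject is the class from the question; it is also worth checking whether the 5 seconds is really spent in the grouping or in enumerating MethodThatReturnsManyMyObjects(), since GroupBy is deferred and only executes when the foreach runs.

        // Sketch: keep only the first MyObject seen for each distinct Year,
        // in one pass and without building intermediate groups.
        using System.Collections.Generic;

        public class MyObject
        {
            public int Year { get; set; }
        }

        public static class UniqueYears
        {
            public static IEnumerable<MyObject> FirstPerYear(IEnumerable<MyObject> source)
            {
                var firstByYear = new Dictionary<int, MyObject>();
                foreach (var item in source)
                {
                    // Only the first object for a given year is kept.
                    if (!firstByYear.ContainsKey(item.Year))
                        firstByYear.Add(item.Year, item);
                }
                return firstByYear.Values;
            }
        }

        // Usage (hypothetical, reusing the method name from the question):
        // var data = MethodThatReturnsManyMyObjects().ToList(); // enumerate the source once
        // foreach (var obj in UniqueYears.FirstPerYear(data)) { /* do something here */ }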

    Read the article

  • Rewriting jQuery to plain old javascript - are the performance gains worth it?

    - by Swader
    Since jQuery is an incredibly easy and banal library, I've developed a rather complex project fairly quickly with it. The entire interface is jQuery-based, and memory is cleaned regularly to maintain optimum performance. Everything works very well in Firefox, and exceptionally so in Chrome (other browsers are of no concern for me, as this is not a commercial or publicly available product). What I'm wondering now is: since pure plain old JavaScript is really not a complicated language to master, would it be performance-enhancing to rewrite the whole thing in plain old JS, and if so, how much of a boost would you expect to get from it? If the answers prove positive enough, I'll go ahead and do it, run a benchmark, and report back with the precise findings. Cheers. Edit: Thanks guys, valuable insight. The purpose was not to "re-invent the wheel" - it was just for experience and personal improvement. Just because something exists doesn't mean you shouldn't explore it in greater detail, know how it works, or try to recreate it. This is the same reason I seldom use frameworks; I would much rather use my own code, iron it out, and gain massive experience doing it, than start off by using someone else's code, regardless of how ironed out it is. Anyway, won't be doing it, thanks for saving me the effort :)

    Read the article

  • MySQL - What is wrong with this query or my database? Terrible performance.

    - by Moss
    SELECT * FROM `employees` a
        LEFT JOIN (SELECT phone1 p1, COUNT(*) c FROM `employees` GROUP BY phone1) b
        ON a.phone1 = b.p1;

    I'm not sure if it is this query in particular that has the problem. I have been getting terrible performance in general with this database. The table in question has 120,000 rows. I have tried this particular query remotely and locally, with the MyISAM and InnoDB engines, with different types of joins, and with and without an index on phone1. I can get this to complete in about 4 minutes on a 10,000-row table, but performance drops exponentially with larger tables. Remotely it will lose the connection to the server, and locally it brings my system to its knees and seems to go on forever. This query is only a smaller step I was trying to do when a larger query couldn't complete. Maybe I should explain the whole scenario. I have one big flat ugly table that lists a bunch of people, their contact info, and the info of the companies they work for. I'm trying to normalize the database and intelligently determine which phone numbers apply to individual people and which apply to an office location. My reasoning is that if a phone number occurs multiple times, and the number of occurrences equals the number of times the street address it is attached to occurs, then it must be an office number. So the first step is to count each phone number, grouping by phone number. Normally if you just use COUNT()...GROUP BY, it will only list the first record it finds in that group, so I figured I have to join the full table to the count table where the phone number matches. This does work, but as I said, I can't successfully complete it on any table much larger than 10,000 rows. This seems pathetic, and this doesn't seem like a crazy query to do. Is there a better way to achieve what I want, or do I have to break my large table into 12 pieces, or is there something wrong with the table or DB?

    Read the article

  • HD video editing system with Truecrypt

    - by Rob
    I'm looking to do hi-def video editing and transcoding on an unencrypted standard partition, with Truecrypt on the system partition for sensitive data. I'm aiming to keep certain data private but still have performance where needed. Goals:
      • Maximum, unimpacted performance possible for hi-def video editing; encryption of the video is not required.
      • Encrypt the system partition, using Truecrypt, for web/email privacy etc. in the event of loss.
    In other words, I want to selectively encrypt the hard drive - i.e. make the system partition encrypted but not impact the original maximum performance that would be available to me for hi-def/HD video editing. The thinking is to use an unencrypted partition for the video and set up the video applications to point at that. Assuming they use that partition only for their workspace and not the encrypted system partition, I should expect no performance hit. Would I be correct? I guess it might depend on the application - whether the app is hard-wired to always use the system partition for temporary storage during editing and transcoding, or whether it has to be installed on the C: system partition. So some real data on how various apps behave in this respect would be useful, e.g. Adobe, Cyberlink, Nero, etc. I have an Intel i7 quad-core (8 threads) 1.6GHz (up to 2.8GHz turbo boost), 4GB RAM, 7200rpm SATA, Nvidia HP laptop. I've read the excellent posting about the general performance impact of Truecrypt, but the benchmarks weren't specific enough for my needs, where I'm dealing with HD video and using a non-encrypted partition to maintain maximum performance.

    Read the article
