Search Results

Search found 88749 results on 3550 pages for 'code tuning'.


  • How to protect compiled Java classes?

    - by Registered User
    I know many similar questions have been asked here. I am not asking whether I can protect my compiled Java classes – because obviously you will say 'no, you can't'. I am asking: what is the best known method of protecting Java classes against decompiling? If you are aware of any research or academic paper in this field, please do let me know. Also, if you have used some methods or software, please share your experience. Any kind of information will be very useful. Thank you.

    Read the article

  • How can I hide tracing code in Visual Studio IDE C# ?

    - by Mark
    As I'm starting to put more tracing in my code, I'm realizing it adds a lot of clutter. I know Visual Studio allows you to hide and reveal code; however, I'd like to be able to group code into "tracing" code and then hide and reveal it at will as I'm reading. I suppose it could do this per file, per class, or per function. Is there any way to do this? What do you guys do?

    To clarify: the hide-current feature kind of allows this, except that when the code is hidden you can't tell whether it's tracing or not. You also can't say "hide all tracing code" and "reveal all tracing code", which is useful when reading a function, depending on what you are trying to do.
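
    One possibility (a sketch, not a built-in "tracing" feature; OrderProcessor is just an invented example) is to fence tracing calls with #region blocks so Visual Studio's outlining can collapse them, with the label making collapsed blocks identifiable as tracing:

        using System.Diagnostics;

        public class OrderProcessor
        {
            public void Process(int orderId)
            {
                #region Tracing
                Trace.WriteLine("Process started for order " + orderId);
                #endregion

                // ... the actual work, uncluttered ...

                #region Tracing
                Trace.WriteLine("Process finished for order " + orderId);
                #endregion
            }
        }

    Edit > Outlining > Toggle All Outlining then collapses or expands every region at once, although there is no built-in way to collapse only the regions with a given label.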

    Read the article

  • Using Entity Framework (code first) migrations in production

    - by devdigital
    I'm just looking into using EF migrations for our project, and in particular for performing schema changes in production between releases. I can see that generating SQL (delta) script files is an option, but is there no way of running the migrations programmatically on app startup? Will I instead need to add each delta script to the solution and write my own framework to apply them in order (from the current version to the latest) on bootstrap?
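
    For what it's worth, the EF migrations API can be driven from code at startup. A minimal sketch, assuming an EF code-first project, where MyContext and Configuration are placeholders for your own DbContext and DbMigrationsConfiguration<MyContext> classes:

        using System.Data.Entity;
        using System.Data.Entity.Migrations;

        public static class MigrationBootstrapper
        {
            // Call once at application start-up (e.g. from Application_Start).
            public static void Run()
            {
                // Option 1: an initializer that migrates the database to the
                // latest version the first time the context is used.
                Database.SetInitializer(
                    new MigrateDatabaseToLatestVersion<MyContext, Configuration>());

                // Option 2: apply any pending migrations explicitly, right now,
                // in order from the current version to the latest.
                var migrator = new DbMigrator(new Configuration());
                migrator.Update();
            }
        }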

    Read the article

  • Ideas for my MSc project and Google Summer of Code 2011

    - by Chris Wilson
    I'm currently putting together ideas for my master's project, which I'll be working on over the summer, and I would like to use this time to help Ubuntu in some way. I have the freedom to come up with pretty much any project in the field of software development/engineering, provided it:

    - is a substantial piece of software (for reference, I will be working on it for five full months);
    - solves a problem for more people than just myself.

    I was hoping to use this project as an opportunity to get some experience with the underbelly of Linux, so that I can mention on my CV that I have 'experience in developing for *NIX in C++', which I'm noticing more and more companies are looking for these days, probably because stuff's moving to cloud servers and that's where Linux rules the roost. My problem is that, since I don't have the experience to begin with, I'm not sure what to do for such a project, and I was wondering if anyone could help me with this.

    I've noticed from Daniel Holbach's blog that Ubuntu participated in the Google Summer of Code 2010, and that project ideas for that can be found here. However, I have not been able to find anything related to Ubuntu and GSoC 2011, though I have noticed from the GSoC timeline that the list of mentoring organisations will not be published until March 18th. I have two questions here:

    1. Has Ubuntu applied to be a part of Summer of Code 2011?
    2. What is the status of the 2010 project list linked to earlier? Were the projects all implemented, or are there still some that can be picked up now, should I not participate in GSoC?

    I'd like to do something for Ubuntu, but I'd rather not spend my time reinventing the wheel.

    Read the article

  • Collision Detection Code Structure with Sloped Tiles

    - by ProgrammerGuy123
    I'm making a 2D tile-based game with slopes, and I need help with the collision detection. This question is not about determining the vertical position of the player given the horizontal position when on a slope, but rather about the structure of the code. Here is my pseudocode for the collision detection:

        void Player::handleTileCollisions()
        {
            int left   = // find tile that's left of player
            int right  = // find tile that's right of player
            int top    = // find tile that's above player
            int bottom = // find tile that's below player

            for (int x = left; x <= right; x++)
            {
                for (int y = top; y <= bottom; y++)
                {
                    switch (getTileType(x, y))
                    {
                        case 1: // solid tile
                        {
                            // resolve collisions
                            break;
                        }
                        case 2: // sloped tile
                        {
                            // resolve collisions
                            break;
                        }
                        default: // air tile or whatever else
                            break;
                    }
                }
            }
        }

    When the player is on a sloped tile, he is actually inside the tile itself horizontally; that way the player doesn't look like he is floating. This creates a problem: when there is a sloped tile next to a solid square tile, the player can't move past it, because this algorithm resolves any collisions with the solid tile. Here is a gif showing the problem.

    So what is a good way to structure my code so that when the player is inside a sloped tile, solid tiles get ignored?

    Read the article

  • Is a code review which uses only code comments a good idea?

    - by gaRex
    Preconditions:

    - The team uses a DVCS.
    - The IDE supports comment parsing (TODO tags and the like).
    - Tools like CodeCollaborator are too expensive for the budget.
    - Tools like Gerrit are too complex to install, or not usable.

    Workflow:

    1. The author publishes a feature branch somewhere on the central repo.
    2. The reviewer fetches it and starts the review.
    3. For each question or issue, the reviewer creates a comment with a special label, like "REV". Such a label MUST NOT be in production code – it exists only during the review stage:

        $somevar = 123; // REV Why do we echo this here?
        echo $somevar;

    4. When the reviewer has finished posting comments, he just commits with a dummy message ("comments") and pushes back.
    5. The author pulls the feature branch back and answers the comments in a similar way, or improves the code, and pushes it back.
    6. When all the "REV" comments are gone, we can consider the review successfully finished.
    7. The author interactively rebases the feature branch, squashing it to remove those "comment" commits, and is now ready to merge the feature to develop, or take whatever action usually follows a successful internal review.

    IDE support: I know that custom comment tags are possible in Eclipse and NetBeans. Surely it should also be possible in the blablaStorm family.

    Questions:

    - Do you think this methodology is viable?
    - Do you know of something similar?
    - What could be improved in it?
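
    A sketch of how the round trip might look with git (the post only specifies a DVCS; the branch and remote names here are illustrative):

        # Reviewer
        git fetch origin
        git checkout feature/login
        # ... add "// REV ..." comments in the IDE ...
        git commit -am "comments"
        git push origin feature/login

        # Author
        git pull origin feature/login
        # ... answer or fix each REV comment, commit, push back; repeat ...

        # When no REV comments remain: squash the noise, then merge
        git rebase -i develop
        git checkout develop
        git merge feature/login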

    Read the article

  • Macedonian Code Camp 2011

    - by hajan
    Autumn was filled with lots of conferences, events, speaking engagements and many interesting happenings in Skopje, Macedonia. First, on October 20, I spoke at Microsoft Vizija 9 on the topic of ASP.NET MVC3 and Razor. One week ago, on November 15, I spoke for the first time on a topic not related to web development (though deployment of web apps was still part of the demos): "Cloud Computing – Windows Azure", at the Microsoft BizSpark Bootcamp.

    The next event, which is the biggest by number of visitors and number of tracks, is Code Camp 2011. After we opened the registrations for the event, we sold out (free) 600 tickets in the first 15 hours! We were all astonished by the extremely large number of responses we got. I can freely say that we expect about 700 attendees to come, and we already have 900+ registered. The event will be held on Saturday, 26 November 2011.

    At Code Camp 2011, I will speak on the topic of ASP.NET MVC Best Practices. There are many interesting things to say in this presentation; I will mainly focus on tips, tricks, guidelines and other practices that I have been using in real-life projects developed with the ASP.NET MVC Framework, with special focus on ASP.NET MVC3 and the next release, ASP.NET MVC4 Developer Preview. There is a big number of well-known local and regional speakers, including 7 MVPs. You can find more info about this event at the official event website: http://codecamp.mkdot.net

    As for my session, if you have an interesting trick or good practice you have been using in your ASP.NET MVC projects, feel free to share it with me. If I find it interesting, and if it's not already among the practices I have included in the presentation (I can't tell you which ones for now... *secret* ;)), I will consider including it.

    Stay tuned for more info soon...

    Regards, Hajan

    Read the article

  • How to turn on/off code modules?

    - by Safran Ali
    I am trying to run multiple sites from a single code base, and the code base consists of the following modules (i.e. classes):

    - User module
    - Q & A module
    - FAQ module

    Each module follows the MVC pattern, i.e. it consists of:

    - an entity class
    - a helper class (i.e. a static class)
    - views (i.e. pages and controls)

    Let's say I have two sites, site1.com and site2.com, and I am trying to achieve the following: site1.com has the User, Q & A and FAQ modules up and running, while site2.com has the User and Q & A modules live with the FAQ module switched off, though it can be turned on if needed. My query is: what is the best way to achieve such functionality? Do I introduce a flag bit that I check on every page and control belonging to that module? It's rather like a CMS where you can turn features on and off. I am trying to get my head around it; please provide me with an example, or point out if I am taking the wrong approach.
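
    A flag checked on every page quickly gets unwieldy. One common alternative (a sketch with invented names, not from the question) is a single per-site module registry consulted in one place, such as a base page class:

        using System;
        using System.Collections.Generic;

        public static class ModuleConfig
        {
            // In practice, load this per site from web.config or a database
            // so a module can be switched on or off without redeploying.
            private static readonly Dictionary<string, bool> Modules =
                new Dictionary<string, bool>(StringComparer.OrdinalIgnoreCase)
                {
                    { "User", true },
                    { "QA",   true },
                    { "Faq",  false }, // switched off for this site
                };

            public static bool IsEnabled(string module)
            {
                bool enabled;
                return Modules.TryGetValue(module, out enabled) && enabled;
            }
        }

        // All pages belonging to a module derive from this, so individual
        // pages and controls never check the flag themselves.
        public abstract class ModulePage : System.Web.UI.Page
        {
            protected abstract string ModuleName { get; }

            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
                if (!ModuleConfig.IsEnabled(ModuleName))
                {
                    Response.StatusCode = 404; // module is switched off
                    Response.End();
                }
            }
        }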

    Read the article

  • How to ignore certain coding standard errors in PHP CodeSniffer

    - by Tom
    We have a PHP 5 web application, and we're currently evaluating PHP CodeSniffer in order to decide whether enforcing code standards improves code quality without causing too much of a headache. If it seems good, we will add an SVN pre-commit hook to ensure all new files committed on the dev branch are free from coding standard smells.

    Is there a way to configure PHP CodeSniffer to ignore a particular type of error? Or to get it to treat a certain error as a warning instead? Here is an example to demonstrate the issue:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
            <div>
                <?php
                echo getTabContent('Programming', 1, $numX, $numY);
                if (isset($msg)) {
                    echo $msg;
                }
                ?>
            </div>
        </body>
        </html>

    And this is the output of PHP_CodeSniffer:

        > phpcs test.php
        --------------------------------------------------------------------------------
        FOUND 2 ERROR(S) AND 1 WARNING(S) AFFECTING 3 LINE(S)
        --------------------------------------------------------------------------------
          1 | WARNING | Line exceeds 85 characters; contains 121 characters
          9 | ERROR   | Missing file doc comment
         11 | ERROR   | Line indented incorrectly; expected 0 spaces, found 4
        --------------------------------------------------------------------------------

    I have an issue with the "Line indented incorrectly" error. I guess it happens because I am mixing the PHP indentation with the HTML indentation, but this makes it more readable, doesn't it? (Taking into account that I don't have the resources to move to an MVC framework right now.) So I'd like to ignore it, please.
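
    If your PHP_CodeSniffer version supports ruleset files, one way (a sketch; the sniff name below is a guess, so confirm the real one by running phpcs with the -s flag, which prints the sniff code next to each message) is to run against a custom standard that silences just that sniff:

        <?xml version="1.0"?>
        <ruleset name="MyStandard">
            <description>PEAR, minus the scope indentation check.</description>
            <rule ref="PEAR"/>
            <!-- Assumed sniff name; verify with: phpcs -s test.php -->
            <rule ref="Generic.WhiteSpace.ScopeIndent">
                <severity>0</severity>
            </rule>
        </ruleset>

    Then run: phpcs --standard=/path/to/ruleset.xml test.php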

    Read the article

  • Are there any actual case studies on rewrites of software success/failure rates?

    - by James Drinkard
    I've seen multiple posts about rewrites of applications being bad, people's experiences with them here on Programmers, and an article I've read by Joel Spolsky on the subject, but no hard evidence or case studies. Other than the two examples Joel gave and some other posts here, what do you do with a bad codebase, and how do you decide what to do with it, based on real studies?

    For the case in point, there are two clients I know of that both have old legacy code. They keep limping along with it because, as one of them found out, a rewrite was a disaster: it was expensive and didn't really do much to improve the code. That customer has some very complicated business logic, as the rewriters quickly found out. In both cases, these are mission-critical applications that bring in a lot of revenue for the company. The one that attempted the rewrite felt that they would hit a brick wall at some point if the legacy software didn't get upgraded sometime in the future. To me, that kind of risk warrants research and analysis to ensure a successful path.

    My question is: have there been actual case studies that have investigated this? I wouldn't want to attempt a major rewrite without knowing some best practices, pitfalls, and successes based on actual studies.

    Aftermath: okay, I was wrong, I did find one article: Rewrite or Reuse. They did a study on a COBOL app that was converted to Java.

    Read the article

  • Breaking up classes and methods into smaller units

    - by micahhoover
    During code reviews, a couple of devs have recommended I break up my methods into smaller methods. Their justification was (1) increased readability and (2) that the back trace that comes back from production, showing the method name, is more specific to the line of code that failed. There may have also been some colorful words about functional programming. Additionally, I think I may have failed an interview a while back because I didn't give an acceptable answer about when to break things up.

    My inclination is that when I see a bunch of methods in a class or across a bunch of files, it isn't clear to me how they flow together and how many times each one gets called. I don't get a feel for the linearity of it as quickly just by eyeballing it. The other thing is that a lot of people seem to place a premium on organization over content (e.g. 'Look at how organized my sock drawer is!' Me: 'Overall, I think I can get to my socks faster if you count the time it took to organize them').

    Our business requirements are not very stable. I'm afraid that if the classes/methods are very granular, it will take longer to refactor to requirement changes; I'm not sure how much of a factor this should be. Anyway, computer science is part art, part science, but I'm not sure how much that applies to this issue.

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting

    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline.

    One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips

    Here is a simple tick list you can follow in order to tune your lookups:

    - Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    - Only pick the columns that you need; ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!

    Thinking outside the box

    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately, then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement, you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    - Taking the previous data partitioning idea further... a dataflow can contain more than one data path, so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set, you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
    - Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts

    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005, then I have a dedicated blog post about that at Lookup component gets a makeover.

    I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea, then go and vote for BULK MERGE on Connect.

    If you have any other tips to share, then please stick them in the comments. Hope this helps!

    @Jamiet

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I'm yet to find someone who doesn't have a bit of an epiphany when I describe it.

    When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don't consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It's wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do.

    I'm not suggesting that people don't tune their queries – there's plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it's not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs.

    If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination.

    Let's look at a very simple query as an example, using the AdventureWorks database. We're picking the different Weight values out of the Product table, and it's doing this by scanning the table and doing a Sort. It's a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later.

    Before I explain the problem here, let me jump back into the SSIS world... If you've investigated how to tune an SSIS flow, then you'll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I'm using a larger data set for this, because my small list of Weights won't demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation.

    Back in the land of T-SQL, we consider our Distinct Sort operator. It's also blocking. It won't let data through until it's seen all of it. If you weren't okay with blocking operations in SSIS, why would you be happy with them in an execution plan?

    The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it's being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I'm taking some liberties here, because the two queries aren't the same, but consider the visual.

    The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this:

        OPTION (FAST 10000)

    This hint means that it will choose a query plan which is optimised for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant.

    First let's consider a simple example, then we'll look at a larger one. Consider our weights. We don't have 10,000, so I'm going to use OPTION (FAST 1) instead. You'll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row can be returned quicker – a Flow Distinct operator is non-blocking.

    The data here isn't sorted, of course. It's in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn't come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it's seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it's a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that's exactly what we want.

    The Query Optimizer seems to do this by optimising the query as if there were only one row coming through. This 1-row estimation is caused by the Query Optimizer imagining the SELECT operation saying "Give me one row" first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does.

    I hope this simple example has helped you understand the significance of the blocking operator. Now I'm going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint... and now with OPTION (FAST 10000): a very different plan, I'm sure you'll agree. In case you're curious, those arrows in the top one are 780,000 rows in size. In the second, they're estimated to be 10,000, although the Actual figures end up being 780,000.

    The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it's doing Nested Loops across 780,000 rows! That's not generally recommended at all. That's "go and make yourself a coffee" time. In this case, it was about six or seven minutes. The faster one finished in about a minute.

    But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared.

    The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn't care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds.

    When debugging SSIS, you run the package and sit there waiting to see the Debug information start appearing. You look for the numbers on the data flow, and watch for operators going Yellow and Green. Without the hint, I'd sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred.

    By adding this hint, it felt like a magic wand had been waved across the query to make it run several times faster. It wasn't the case at all – but it felt like it to SSIS.

    Read the article

  • Creating packages in code - Package Configurations

    Continuing my theme of building various types of packages in code, this example shows how to build a package with package configurations. Incidentally, it shows you how to add a variable, and a connection too. It covers the five most common configurations:

    - Configuration File
    - Indirect Configuration File
    - SQL Server
    - Indirect SQL Server
    - Environment Variable

    For a general overview try the SQL Server Books Online Package Configurations topic. The sample uses a simple helper function, ApplyConfig, to create or update a configuration, although in the example we will only ever create. The most useful knowledge is the configuration string (Configuration.ConfigurationString) that you need to set for each configuration type:

    - Configuration File: the full path and file name of an XML configuration file. The file can contain one or more configurations and includes the target path and new value to set.
    - Indirect Configuration File: an environment variable, the value of which contains the full path and file name of an XML configuration file, as per the Configuration File type described above.
    - SQL Server: a three-part configuration string, with each part quote-delimited and separated by a semi-colon. The first part is the connection manager name; the connection tells you which server and database to look in for the configuration table. The second part is the name of the configuration table; the table is of a standard format (use the Package Configuration Wizard to help create an example, or see the sample script files below) and contains one or more rows, or configuration items, each with a target path and new value. The third and final part is the optional filter name; a configuration table can contain multiple configurations, and the filter is a literal value that can be used to group items together and act as a filter clause when configurations are being read. If you do not need a filter, just leave the value empty.
    - Indirect SQL Server: an environment variable, the value of which is the three-part configuration string as per the SQL Server type described above.
    - Environment Variable: an environment variable, the value of which is the value to set in the package. This is slightly different to the other examples, as the configuration definition in the package also includes the target information. In our ApplyConfig function this is the only example that actually supplies a target value for the Configuration.PackagePath property. The path is an XPath-style path for the target property, \Package.Variables[User::Variable].Properties[Value], the equivalent of which can be seen in the screenshot below, with the object being our variable called Variable, and the property to set being the Value property of that variable object.

    The configurations as seen when opening the generated package in BIDS: [screenshot]

    The sample code creates the package, adds a variable and connection manager, enables configurations, and then adds our example configurations. The package is then saved to disk, useful for checking the package and testing, before finally executing, just to prove it is valid. There are some external resources used here, namely some environment variables and a table; see below for more details.
        namespace Konesans.Dts.Samples
        {
            using System;
            using Microsoft.SqlServer.Dts.Runtime;

            public class PackageConfigurations
            {
                public void CreatePackage()
                {
                    // Create a new package
                    Package package = new Package();
                    package.Name = "ConfigurationSample";

                    // Add a variable, the target for our configurations
                    package.Variables.Add("Variable", false, "User", 0);

                    // Add a connection, for SQL configurations
                    // Add the SQL OLE-DB connection
                    ConnectionManager connectionManagerOleDb = package.Connections.Add("OLEDB");
                    connectionManagerOleDb.Name = "SQLConnection";
                    connectionManagerOleDb.ConnectionString =
                        "Provider=SQLOLEDB.1;Data Source=(local);Initial Catalog=master;Integrated Security=SSPI;";

                    // Add our example configurations, first must enable package setting
                    package.EnableConfigurations = true;

                    // Direct configuration file, see sample file
                    this.ApplyConfig(package, "Configuration File", DTSConfigurationType.ConfigFile,
                        "C:\\Temp\\XmlConfig.dtsConfig", string.Empty);

                    // Indirect configuration file, the environment variable XmlConfigFileEnvironmentVariable
                    // contains the path to the configuration file, e.g. C:\Temp\XmlConfig.dtsConfig
                    this.ApplyConfig(package, "Indirect Configuration File", DTSConfigurationType.IConfigFile,
                        "XmlConfigFileEnvironmentVariable", string.Empty);

                    // Direct SQL Server configuration, uses the SQLConnection package connection to read
                    // configurations from the [dbo].[SSIS Configurations] table, with a filter of "SampleFilter"
                    this.ApplyConfig(package, "SQL Server", DTSConfigurationType.SqlServer,
                        "\"SQLConnection\";\"[dbo].[SSIS Configurations]\";\"SampleFilter\";", string.Empty);

                    // Indirect SQL Server configuration, the environment variable "SQLServerEnvironmentVariable"
                    // contains the configuration string e.g. "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
                    this.ApplyConfig(package, "Indirect SQL Server", DTSConfigurationType.ISqlServer,
                        "SQLServerEnvironmentVariable", string.Empty);

                    // Direct environment variable, the value of the EnvironmentVariable environment variable is
                    // applied to the target property, the value of the "User::Variable" package variable
                    this.ApplyConfig(package, "EnvironmentVariable", DTSConfigurationType.EnvVariable,
                        "EnvironmentVariable", "\\Package.Variables[User::Variable].Properties[Value]");

        #if DEBUG
                    // Save package to disk, DEBUG only
                    new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
                    Console.WriteLine(@"C:\Temp\{0}.dtsx", package.Name);
        #endif

                    // Execute package
                    package.Execute();

                    // Basic check for warnings
                    foreach (DtsWarning warning in package.Warnings)
                    {
                        Console.WriteLine("WarningCode : {0}", warning.WarningCode);
                        Console.WriteLine("  SubComponent : {0}", warning.SubComponent);
                        Console.WriteLine("  Description : {0}", warning.Description);
                        Console.WriteLine();
                    }

                    // Basic check for errors
                    foreach (DtsError error in package.Errors)
                    {
                        Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
                        Console.WriteLine("  SubComponent : {0}", error.SubComponent);
                        Console.WriteLine("  Description : {0}", error.Description);
                        Console.WriteLine();
                    }

                    package.Dispose();
                }

                /// <summary>
                /// Add or update a package configuration.
                /// </summary>
                /// <param name="package">The package.</param>
                /// <param name="name">The configuration name.</param>
                /// <param name="type">The type of configuration</param>
                /// <param name="setting">The configuration setting.</param>
                /// <param name="target">The target of the configuration, leave blank if not required.</param>
                internal void ApplyConfig(Package package, string name, DTSConfigurationType type, string setting, string target)
                {
                    Configurations configurations = package.Configurations;
                    Configuration configuration;

                    if (configurations.Contains(name))
                    {
                        configuration = configurations[name];
                    }
                    else
                    {
                        configuration = configurations.Add();
                    }

                    configuration.Name = name;
                    configuration.ConfigurationType = type;
                    configuration.ConfigurationString = setting;
                    configuration.PackagePath = target;
                }
            }
        }

    The following table lists the environment variables required for the full example to work, along with some sample values:

    - EnvironmentVariable: 1
    - SQLServerEnvironmentVariable: "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
    - XmlConfigFileEnvironmentVariable: C:\Temp\XmlConfig.dtsConfig

    Sample code, package and configuration file: ConfigurationApplication.cs, ConfigurationSample.dtsx, XmlConfig.dtsConfig

    Read the article

  • Should you use "internal abbreviations" in code comments?

    - by Anto
    Should you use "internal abbreviations/slang" inside comments – that is, abbreviations and slang that people outside the project could have trouble understanding – for instance, something like //NYI instead of //Not Yet Implemented? There are advantages to this: there is less "code" to type (though you could use autocomplete on the abbreviations), and you can read something like NYI faster than something like Not Yet Implemented, assuming you are aware of the abbreviation and its (unabbreviated) meaning. Myself, I would be careful with this unless it is a project on which I will, for sure, be the only developer.

    Read the article

  • When does 'optimizing code' == 'structuring data'?

    - by NewAlexandria
    A recent article on ycombinator (Hacker News) lists a comment with principles of a great programmer:

        #7. Good programmer: I optimize code. Better programmer: I structure data. Best programmer: What's the difference?

    Acknowledging that these are subjective and contentious concepts: does anyone have a position on what this means? I do, but I'd like to edit this question later with my thoughts, so as not to predispose the answers.
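
    As one concrete illustration of the quote (an invented example, not from the article): the real optimisation below is not in the lookup code, it's in the choice of data structure:

        using System;
        using System.Collections.Generic;

        class Demo
        {
            static void Main()
            {
                var words = new List<string> { "alpha", "beta", "gamma" };

                // "I optimize code": tweak a linear scan that is O(n) per lookup.
                bool slow = words.Contains("gamma");

                // "I structure data": build a hash set once; each lookup is
                // then O(1) on average, and the loop needs no tweaking at all.
                var set = new HashSet<string>(words);
                bool fast = set.Contains("gamma");

                Console.WriteLine("{0} {1}", slow, fast);
            }
        }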

    Read the article

  • Using public domain source code from JDK in my application

    - by user2941369
    Can I use source code from ThreadPoolExecutor.java, taken from JDK 1.7, considering that the following clause is at the beginning of ThreadPoolExecutor.java:

        /*
         * Written by Doug Lea with assistance from members of JCP JSR-166
         * Expert Group and released to the public domain, as explained at
         * http://creativecommons.org/licenses/publicdomain
         */

    And just before that there is also:

        /*
         * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
         */

    Read the article

  • How can i manage my personal notes , code snippets files in one place online [closed]

    - by user1758043
    Whenever I work on a project, I produce lots of notes, diagram files, images and brainstorming ideas which I want to keep. I want to put them all in one place so that I can see the history of my work. Is there any tool with which I can store this online? My company is using Confluence, but that's too costly for me. I want something for a single user, but online, in the cloud, where I can store:

    - notes
    - code snippets
    - diagrams and flowcharts
    - attached files and images
    - bookmarks and sites

    Read the article

  • When to do code reviews when doing continuous integration?

    - by SpecialEd
    We are trying to switch to a continuous integration environment, but we are not sure when to do code reviews. From what I've read of continuous integration, we should be attempting to check in code as often as multiple times a day. I assume this even means for features that are not yet complete. So the question is: when do we do the code reviews?

    We can't do them before we check in the code, because that would slow the process down to the point where we would not be able to do daily check-ins, let alone multiple check-ins per day. Also, if the code we are checking in merely compiles but is not feature complete, doing a code review then is pointless, as most code reviews are best done as the feature is finalized. Does this mean we should do code reviews when a feature is completed, but that unreviewed code will get into the repository?

    Read the article

  • Richmond Code Camp 2010.1

    - by andyleonard
    I can't believe it - Richmond Code Camp 2010.1 is less than two weeks away! Once again, the leadership team has outdone themselves. We have a bunch of great speakers, 9 tracks, 45 sessions - there's something for everyone. If you're going to be in the area and are interested, register today. :{>

    Read the article

  • Clean Code says to avoid protected variables

    - by Matsemann
    I have a question about a statement in Clean Code. I don't fully understand the reasoning behind why we should avoid protected variables. The statement is from the chapter about Formatting, in the section about Vertical Distance:

        "Concepts that are closely related should be kept vertically close to each other. Clearly this rule doesn't work for concepts that belong in separate files. But then closely related concepts should not be separated into different files unless you have a very good reason. Indeed, this is one of the reasons that protected variables should be avoided."
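
    A small invented illustration of the argument: a protected field invites subclasses, which usually live in other files, to depend on it, so the closely related concepts end up separated by a file boundary:

        // Base.cs
        public class Base
        {
            protected int count; // any subclass, in any file, may touch this
        }

        // Derived.cs - far away from the field it manipulates
        public class Derived : Base
        {
            public void Increment()
            {
                count++; // the related concepts now live in separate files
            }
        }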

    Read the article

  • Extended Events Code Generator v1.001 - A Quick Fix

    - by Adam Machanic
    If you're one of the estimated 3-5 people who've downloaded and are using my XE Code Generator, please note that version 1.000 has a small bug: text data (such as query text) larger than 8000 bytes is truncated. I've fixed this issue and am pleased to present version 1.001, attached to this post. Enjoy, and stay tuned for slightly more interesting enhancements!

    Read the article

  • SQL Query Builder/Designer and code Formating

    - by DavRob60
    I write SQL queries every now and then. I could easily write them freehand, but sometimes I create SQL queries using SQL query designers, for various reasons. (I won't start to enumerate them here and/or argue about their usefulness, so let's just say they are sometimes useful.) Anyway, I currently use two query designers:

    - SQL Server Management Studio's Query Designer
    - Visual Studio 2010's Query Builder (most often within the TableAdapter Query Configuration Wizard)

    There's something I hate about those two (I don't know about the others): the way they throw away my code formatting of SQL queries after an edit. Is there any way to configure something to automatically reformat the SQL output, or is there any external tool/plug-in that I could use to do that job?

    Read the article

  • Presenting at Roanoke Code Camp Saturday!

    - by andyleonard
    Introduction

    I am honored to once again be selected to present at Roanoke Code Camp!

    An Introductory Topic

    One of my presentations is titled "I See a Control Flow Tab. Now What?" It's a Level 100 talk for those wishing to learn how to build their very first SSIS package. This highly-interactive, demo-intense presentation is for beginners and developers just getting started with SSIS. Attend and learn how to build SSIS packages from the ground up.

    Designing an SSIS Framework

    I'm also presenting...(read more)

    Read the article

  • Coding standards in programming?

    - by vicky
    I am a WordPress plugin developer, and I am not sure how to follow the coding standards while creating a WordPress plugin. I have checked some plugins, like WooCommerce and the All in One SEO plugin, and they maintain proper coding standards – I am amazed at how neat and clean their code is. Basically, I am using the NetBeans IDE: is it possible to apply the proper spacing and coding standards in that IDE? How can I do this, and how do they maintain theirs? Can anyone suggest how to build a WordPress plugin to good coding standards?

    Thanks, vicky

    Read the article
