Search Results

Search found 31319 results on 1253 pages for 'source engine'.

Page 636/1253 | < Previous Page | 632 633 634 635 636 637 638 639 640 641 642 643  | Next Page >

  • How come many project-hosting sites don't have a forum feature?

    - by george
    I'm considering starting an open-source project, so I shopped around some popular project hosting sites. What I find surprising is that many of the popular project hosting sites (e.g. GitHub, BitBucket; see here for a nice feature table) don't have a forum feature, i.e. a place where users can talk to the devs, ask questions, raise ideas, etc. IMHO an active forum is an important factor in creating a user community around a project, so I would expect most project owners to be interested in such a feature. I've also noticed that some projects do have support forums (or mailing lists) hosted elsewhere - e.g. Ruby on Rails is hosted on GitHub but has a Google Groups support group, and TortoiseHG is hosted on BitBucket but has a mailing list on SourceForge - so it's not as if this feature is unneeded. So how come many project hosting sites don't have a forum feature?

    Read the article

  • Version control and project management for freelancing jobs

    - by Groo
    Are there version control and project management tools which "work well" with freelancing jobs, if I want to keep my customer involved in the project at all times? What concerns me is that repository hosting providers base their fees on the "number of users", a number which I feel will constantly increase as I finish one project after another. For each project, for example, I would have to add permissions to my contractor to allow him to pull the source code and collaborate. So how does that work in practice? Do I "remove" the contractor from the project once it's done? That means I basically state that I offer no support or bugfixes anymore. Or do freelancers end up paying more and more money for these services? Do you use such online services, or do you host them yourself? Or do you simply send your code to your customer by e-mail in weekly iterations?

    Read the article

  • Ubuntu Laptop Screen does not turn on after sleep

    - by Gage
    I recently put Ubuntu 10.10 on my laptop and, after fixing the wireless problem, the only thing I am still stuck on is the screen not turning back on after sleep. I have had this problem since Ubuntu 7. I tried using Ubuntu way back then and had a whole bunch of issues with sleep and the wireless (Broadcom 4311). Anyway, I have an ATI Radeon Xpress 200M graphics card (old laptop). When I go to Hardware Drivers it doesn't give me any options to use the closed-source drivers. Any suggestions on what I should do? I am going to try what is suggested in this thread, but I am at work right now. Laptop does not wake up after sleep Thanks in advance for any help.

    Read the article

  • Are VB.NET to C# converters actually compilers?

    - by Rowan Freeman
    Whenever I see programs or scripts that convert between high-level programming languages they are always labelled as converters. "VB.NET to C# converter" on Google returns expected, useful hits. However, "VB.NET to C# compiler" on Google returns things like comparisons between the C# and VB.NET compilers and other hits that are not quite what you'd be looking for. Webopedia defines a compiler as "a program that translates source code into object code". Eric Lippert, in an answer to "How do I create my own programming language and a compiler for it", suggests: "One of the best ways to get started writing a compiler is by writing a high-level-language-to-high-level-language compiler." Is a converter really just a compiler? What separates the two?

    Read the article

  • Windows Azure End to End Examples

    - by BuckWoody
    I’m fascinated by the way people learn. I’m told there are several methods people use to understand new information, from reading to watching, from experiencing to exploring. Personally, I use multiple methods of learning when I encounter a new topic, usually starting with reading a bit about the concepts. I quickly want to put those into practice, however, especially in the technical realm. I immediately look for examples where I can start trying out the concepts. But I often want a “real” example – not just something that represents the concept, but something that is real-world, showing some feature I could actually use. And it’s no different with the Windows Azure platform – I like finding things I can do now, and actually use. So when I started learning Windows Azure, I of course began with the Windows Azure Training Kit – which has lots of examples and labs, presentations and so on. But from there, I wanted more examples I could learn from, and eventually teach others with. I was asked if I would write a few of those up, so here are the ones I use. CodePlex CodePlex is Microsoft’s version of an “Open Source” repository. Anyone can start a project, add code, documentation and more to it and make it available to the world, free of charge, using various licenses as they wish. Microsoft also uses this location for most of the examples we publish, and sample databases for SQL Server. If you search in CodePlex for “Azure”, you’ll come back with a list of projects that folks have posted, including those of us at Microsoft. The source code and documentation are there, so you can learn using actual examples of code that will do what you need. There’s everything from a simple table query to a full project that is sort of a “Corporate Dropbox” that uses Windows Azure Storage. The advantage is that this code is immediately usable. It’s searchable, and you can often find a complete solution to meet your needs. The disadvantage is that the code is pretty specific – it may not cover a huge project like you’re looking for. Also, depending on the author(s), you might not find the documentation level you want. Link: http://azureexamples.codeplex.com/site/search?query=Azure&ac=8    Tailspin Microsoft Patterns and Practices is a group here that does an amazing job at sharing standard ways of doing IT – from operations to coding. If you’re not familiar with this resource, make sure you read up on it. Long before I joined Microsoft I used their work in my daily job – saved a ton of time. It has resources not only for Windows Azure but other Microsoft software as well. The Patterns and Practices group also publishes full books – you can buy these, but many are also online for free. There’s an end-to-end example for Windows Azure using a company called “Tailspin”, and the work covers not only the code but the design of the full solution. If you really want to understand the thought that goes into a Platform-as-a-Service solution, this is an excellent resource. The advantages are that this is a book, it’s complete, and it includes a discussion of design decisions. The disadvantage is that it’s a little over a year old – and in “Cloud” years that’s a lot. So many things have changed, improved, and have been added that you need to treat this as a resource, but not the only one. Still, highly recommended. Link: http://msdn.microsoft.com/en-us/library/ff728592.aspx Azure Stock Trader Sometimes you need a mix of a CodePlex-style application, and a little more detail on how it was put together. 
And it would be great if you could actually play with the completed application, to see how it really functions on the actual platform. That’s the Azure Stock Trader application. There’s a place where you can read about the application, and then it’s been published to Windows Azure – the production platform – and you can use it, explore, and see how it performs. I use this application all the time to demonstrate Windows Azure, or a particular part of Windows Azure. The advantage is that this is an end-to-end application, and online as well. The disadvantage is that it takes a bit of self-learning to work through.  Links: Learn it: http://msdn.microsoft.com/en-us/netframework/bb499684 Use it: https://azurestocktrader.cloudapp.net/

    Read the article

  • Behaviour Driven Maturity Model

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/07/02/153326.aspx For anyone who is interested, I have written a small paper about the theory behind the BizTalk Maturity Assessment using a generic framework I have called the "Behaviour Driven Maturity Model", and how it could be applied to the assessment of other subjects. The paper is on the following link: http://btsmaturity.blob.core.windows.net/behaviour-driven-model/Behaviour%20Based%20Maturity%20Model%20-%20Introduction.pdf If you would like to create a model for a different subject area based on the details of this paper then I would encourage this as much as possible; all I ask is the following: 1. Let us know you're doing it so we can help tell people about each other's activities. 2. Make it free to the community. 3. Reference back to BizTalkMaturity.com as the source of your model.

    Read the article

  • 'Development dashboard' web application

    - by espais
    Hi all, I am not sure if something like this exists ready out of the box. I currently have some web space that I use for various projects, and I would like to set up an area for some friends and me to develop web applications together. My ideal setup would be to create a folder, say, webdev.domain.com. We could all go to this domain, log in, and then be able to set up new applications, pick which language will be used, set up database tables, allow HTML-based file uploading, and create sub-folders to basically have a test bed for the applications. In retrospect, it seems like I'm describing a limited version of cPanel. I could come up with something in Drupal I'm sure, but I don't want to have to really spend time configuring much. Like I said, I want to install it and have minimal configuration. Does something like this exist (preferably in open source)?

    Read the article

  • SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

    - by Pinal Dave
    [Notes from Pinal]: SSIS is a very well explored subject; however, there are so many interesting elements that whenever we read about it we learn something new. One such concept is the parent-child ETL architecture relationship in SSIS. Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation on how to understand SSIS parameters in parent-child ETL architectures. In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures. A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern. Simply put, this is a pattern in which packages are executed by other packages. An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages. For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic. When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package. In older versions of SSIS, this process was possible but not necessarily simple. When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages. Package configurations, while effective, were not the easiest tool to work with. Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose. In the example I will use for this demonstration, I'll create two packages: one intended for use as a child package, and the other configured to execute said child package. In the parent package I'm going to build a for each loop container in SSIS, and use package parameters to pass in a value – specifically, a ClientID – for each iteration of the loop. The child package will be executed from within the for each loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package. Configuring the Child and Parent Packages When you create a new package, you'll see the Parameters tab at the package level. Clicking over to that tab allows you to add, edit, or delete package parameters. As shown above, the sample package has two parameters. Note that I've set the name, data type, and default value for each of these. Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution. In this example, I have one parameter that is required, and the other is not. Let's shift over to the parent package briefly, and demonstrate how to supply values to these parameters in the child package. Using the execute package task, you can easily map variable values in the parent package to parameters in the child package. The execute package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package. Note that there is no value mapped to the child package parameter named pOutputFolder.
Since this parameter has the Required property set to False, we don't have to specify a value for it, which will cause that parameter to use the default value we supplied when designing the child package. The last step in the parent package is to create the for each loop container I mentioned earlier, and place the execute package task inside it. I'm using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this in more depth here). For each iteration of the loop, a different client ID value will be passed into the child package parameter. The final step is to configure the child package to actually do something meaningful with the parameter values passed into it. In this case, I've modified the OleDB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client's data. Additionally, I'll use both the pClientID and pOutputFolder parameters to dynamically build the output filename. As shown, the pClientID is used in the WHERE clause, so we only get the current client's invoices for each iteration of the loop. For the flat file connection, I'm setting the Connection String property using an expression that engages both of the parameters for this package, as shown above. Parting Thoughts There are many uses for package parameters beyond a simple parent-child design pattern. For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters. Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL; a sketch of the command-line route follows below. You can also have project parameters as well as package parameters. Project parameters work in much the same way as package parameters, but they apply to all packages in a project, not just a single package. Conclusion Of the numerous advantages of using the catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list. Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
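
    For illustration, here is a minimal sketch of the dtexec route mentioned above, assuming the project has been deployed to the SSIS catalog. The catalog folder, project, server, and the value 42 are hypothetical; only the pClientID parameter name comes from the post.

    rem Hypothetical catalog path and server; /Par overrides the package parameter at runtime
    dtexec /ISServer "\SSISDB\Demo\ParentChildDemo\ChildPackage.dtsx" /Server "localhost" /Par "$Package::pClientID(Int32)";42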

    Read the article

  • Entity Framework Batch Update and Future Queries

    - by pwelter34
    Entity Framework Extended Library A library that extends the functionality of Entity Framework. Features Batch Update and Delete Future Queries Audit Log Project Package and Source NuGet Package PM> Install-Package EntityFramework.Extended NuGet: http://nuget.org/List/Packages/EntityFramework.Extended Source: http://github.com/loresoft/EntityFramework.Extended Batch Update and Delete A current limitation of the Entity Framework is that in order to update or delete an entity you have to first retrieve it into memory. In most scenarios this is just fine. There are, however, some scenarios where performance would suffer. Also, for single deletes, the object must be retrieved before it can be deleted, requiring two calls to the database. Batch update and delete eliminates the need to retrieve and load an entity before modifying it. Deleting //delete all users where FirstName matches context.Users.Delete(u => u.FirstName == "firstname"); Update //update all tasks with status of 1 to status of 2 context.Tasks.Update( t => t.StatusId == 1, t => new Task {StatusId = 2}); //example of using an IQueryable as the filter for the update var users = context.Users .Where(u => u.FirstName == "firstname"); context.Users.Update( users, u => new User {FirstName = "newfirstname"}); Future Queries Build up a list of queries for the data that you need, and the first time any of the results are accessed, all the data will be retrieved in one round trip to the database server. Reducing the number of trips to the database is a great benefit. Using this feature is as simple as appending .Future() to the end of your queries. To use Future Queries, make sure to import the EntityFramework.Extensions namespace. Future queries are created with the following extension methods... Future() FutureFirstOrDefault() FutureCount() Sample // build up queries var q1 = db.Users .Where(t => t.EmailAddress == "[email protected]") .Future(); var q2 = db.Tasks .Where(t => t.Summary == "Test") .Future(); // this triggers the loading of all the future queries var users = q1.ToList(); In the example above, there are 2 queries built up; as soon as one of the queries is enumerated, it triggers the batch load of both queries. // base query var q = db.Tasks.Where(t => t.Priority == 2); // get total count var q1 = q.FutureCount(); // get page var q2 = q.Skip(pageIndex).Take(pageSize).Future(); // triggers execute as a batch int total = q1.Value; var tasks = q2.ToList(); In this example, we have a common scenario where you want to page a list of tasks. In order for the GUI to set up the paging control, you need a total count. With Future, we can batch together the queries to get all the data in one database call. Future queries work by creating the appropriate IFutureQuery object that keeps the IQueryable. The IFutureQuery object is then stored in the IFutureContext.FutureQueries list. Then, when one of the IFutureQuery objects is enumerated, it calls back to IFutureContext.ExecuteFutureQueries() via the LoadAction delegate. ExecuteFutureQueries builds a batch query from all the stored IFutureQuery objects. Finally, all the IFutureQuery objects are updated with the results from the query. Audit Log The Audit Log feature will capture the changes to entities anytime they are submitted to the database. The Audit Log captures only the entities that are changed and only the properties on those entities that were changed. The before and after values are recorded.
AuditLogger.LastAudit is where this information is held and there is a ToXml() method that makes it easy to turn the AuditLog into xml for easy storage. The AuditLog can be customized via attributes on the entities or via a Fluent Configuration API. Fluent Configuration // config audit when your application is starting up... var auditConfiguration = AuditConfiguration.Default; auditConfiguration.IncludeRelationships = true; auditConfiguration.LoadRelationships = true; auditConfiguration.DefaultAuditable = true; // customize the audit for Task entity auditConfiguration.IsAuditable<Task>() .NotAudited(t => t.TaskExtended) .FormatWith(t => t.Status, v => FormatStatus(v)); // set the display member when status is a foreign key auditConfiguration.IsAuditable<Status>() .DisplayMember(t => t.Name); Create an Audit Log var db = new TrackerContext(); var audit = db.BeginAudit(); // make some updates ... db.SaveChanges(); var log = audit.LastLog;

    Read the article

  • MySQL my.cnf as symbolic link in Ubuntu 12.04

    - by Juan Cruz
    I am not able to use a symlink for the my.cnf file (Ubuntu 12.04 server). I added the alias to the /etc/apparmor.d/tunables/alias file (as I did for 10.04, where it worked), but I get: May 30 16:00:01 ip-10-242-209-203 kernel: [176926.213403] type=1400 audit(1338393601.350:244): apparmor="DENIED" operation="open" parent=1 profile="/usr/sbin/mysqld" name="/opt/data/my.cnf" pid=18128 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 May 30 16:00:01 ip-10-242-209-203 kernel: [176926.222016] init: mysql main process (18128) terminated with status 1 May 30 16:00:01 ip-10-242-209-203 kernel: [176926.222084] init: mysql respawning too fast, stopped As a workaround I added the line /etc/mysql/my.cnf r, to the /etc/apparmor.d/local/usr.sbin.mysqld file. The default configuration is /etc/mysql/*.cnf r, Is this a bug? Is it an apparmor bug or a mysql bug? It seems that this configuration has changed since MySQL 5.1 (https://bugs.launchpad.net/ubuntu/+source/mysql-5.1/+bug/619172), but the workaround now worked for me. Thanks!
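
    For reference, a minimal sketch of both approaches described above. The /opt/data/my.cnf path comes from the denied_mask log line; the exact rule wording and placement are my assumption.

    # In /etc/apparmor.d/tunables/alias (the approach that worked on 10.04):
    alias /etc/mysql/my.cnf -> /opt/data/my.cnf,

    # Or in /etc/apparmor.d/local/usr.sbin.mysqld, grant read access to the real file directly:
    /opt/data/my.cnf r,

    # Then reload the mysqld profile so the change takes effect:
    sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld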

    Read the article

  • The Breakpoint Ep 3: The Sourcemap Spectacular with Paul Irish and Addy Osmani

    Ask and vote for questions at goo.gl Take CoffeeScript to JavaScript to minified and all the way back with source maps. In addition to a new CoffeeScript source map workflow, we'll cover the latest source map updates so you can understand how to dramatically improve your debugging experience. Finally, Paul and Addy will be joined by special guest—Yeoman core contributor Sindre Sorhus—to discuss what big new changes are coming to the project. From: GoogleDevelopers Time: 01:00:00

    Read the article

  • How to use GPS receiver BU-353 in Ubuntu 10.10

    - by Parimal
    Hi, I have a GPS receiver BU-353 with a USB interface, and I want to know how I can use it under Ubuntu. I ran the following command: gpsd -n -N -D 2 /dev/ttyUSB0 and got this output: gpsd: launching (Version 2.94) gpsd: listening on port gpsd gpsd: running with effective group ID 1000 gpsd: running with effective user ID 1000 gpsd: opening GPS data source type 3 at '/dev/ttyUSB0' gpsd: speed 38400, 8N1 gpsd: Garmin: garmin_gps Linux USB module not active. gpsd: speed 9600, 8O1 gpsd: speed 38400, 8N1 gpsd: gpsd_activate(): opened GPS (fd 6) gpsd: speed 4800, 8N1 gpsd: NTPD ntpd_link_activate: 0 gpsd: /dev/ttyUSB0 identified as type SiRF binary (2.687608 sec @ 4800bps) gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client After this I started tangoGPS, which said no GPS and no gpsd found.
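
    A quick way to check whether gpsd itself is serving fixes, independent of tangoGPS, is one of the clients shipped alongside gpsd (this assumes the gpsd-clients package is installed):

    # Show a live summary of the fix data gpsd is reporting (Ctrl-C to quit)
    cgps -s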

    Read the article

  • Managing Multiple dedicated servers centrally using a Web GUI tool?

    - by Sampath
    Application Architecture I have a single Ruby on Rails application codebase running as multiple instances (i.e. each client having an identical sub domain) on multiple dedicated servers using Phusion Passenger + nginx. The sub domain setup is done using the vhost option in the nginx Passenger module. For example: server 1 serves clients 1-100 with identical sub domains www.client1.product.com up to www.client100.product.com; server 2 serves clients 101-200 with identical sub domains www.client101.product.com up to www.client200.product.com; server 3 serves clients 201-300 with identical sub domains www.client201.product.com up to www.client300.product.com. My question is: I need to centrally manage all my N dedicated servers using a GUI tool. I am looking for a web GUI tool to manage tasks like: 1) back up all MySQL databases automatically from all dedicated servers and send them to some FTP backup drive; 2) back up files and folders from all dedicated servers and send them to some FTP backup drive; 3) manage the firewall (CSF http://configserver.com/cp/csf.html) centrally for all dedicated servers; 4) view server load and bandwidth used graphically for all N dedicated servers. Note: I would prefer an open-source solution.
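
    For a rough sense of what task 1 involves under the hood (whichever GUI tool ends up driving it), a minimal nightly sketch might look like this; the credentials and FTP host name are placeholders, not real values:

    # Dump every database on this server and push the archive to an FTP backup host
    mysqldump --all-databases --single-transaction | gzip > /tmp/all-dbs-$(date +%F).sql.gz
    lftp -u backupuser,secret ftp.backup.example -e "put /tmp/all-dbs-$(date +%F).sql.gz; bye"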

    Read the article

  • What are the legal risks if any of using a GPL or AGPL Web Application Framework/CMS?

    - by Seth Spearman
    Tried to ask this on SO but was referred here... Am I correct in saying that using a GPL'ed web application framework such as Composite C1 would NOT obligate a company to share the source code we write against said framework? That is the purpose of the AGPL, am I correct? Does this also apply to JavaScript frameworks like KendoUI? The GPL would require any changes that we make to the framework be made available to others if we were to offer it for download. In other words, merely loading a web site's content into my browser is not "conveying" or "distributing" that software. I have been arguing that we should avoid GPL web frameworks, and now after researching I am pretty sure I was wrong, but I wanted to get other opinions. Seth

    Read the article

  • Tool to identify potential reviewers for a proposed change

    - by Lorin Hochstein
    Is there a tool that takes as input a proposed patch and a git repository, and identifies the developers who are the best candidates for reviewing the patch? It would use the git history to identify the authors that have the most experience with the files / sections of code being changed. Edit: The use case is a large open source project (OpenStack Compute), where merge proposals come in, and I see a merge proposal on a chunk of code I'm not familiar with, and I want to add somebody else's name to the list of suggested reviewers so that person gets a notification to look at the merge proposal.
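
    Absent a dedicated tool, the heuristic can be approximated with plain git. This sketch assumes the proposal lives on a local branch, and simply ranks authors by how many commits touched the same files:

    # List the files the patch touches, then count commits per author on those files
    git diff --name-only master...proposal-branch | xargs git log --pretty='%an' -- | sort | uniq -c | sort -rn | head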

    Read the article

  • How do I report a missing package dependency during an upgrade?

    - by crasic
    A friend of mine (somewhat new to Linux) recently upgraded from 10.10 to 11.04 and his OS broke from the upgrade. A few minutes of troubleshooting showed that the culprit was the PAE kernel that the upgrade decided to install, since it determined he had 4GB of physical RAM. More specifically, the upgrade forgot to install the linux-headers-generic-pae package required by the closed-source nvidia drivers. I'm not entirely sure how to report this bug to the devs. It's an easy fix (after booting into the non-PAE kernel and installing the package everything worked), but they are encouraging users to use the built-in bug reporting system and I'm not entirely certain how to report update bugs.
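
    One plausible route, assuming the default apport tooling is present: file the report with ubuntu-bug against the upgrade tool, which attaches the relevant logs automatically (the choice of update-manager as the target package is my assumption, not something from the question):

    # File a bug against the tool that performed the release upgrade
    ubuntu-bug update-manager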

    Read the article

  • Trash Destination Adapter

    The Trash Destination and this article came from early experiences of using SSIS and community feedback at the time. When developing a package it is very useful to have a destination adapter that does nothing but consume rows, with no setup requirement. You often want to run a package part way through development, or just add a path so you can set a Data Viewer. There are stock tasks that can be used, but with the Trash Destination all columns are treated as selected automatically (usage type of read-only), so the pipeline knows they are required. It is also obvious that this is for development or diagnostic purposes, and is clearly not a part of the functional design of the package. It is also ideal for just playing around and exploring concepts in SSIS, and is often used in conjunction with the Data Generator Source. Using these two components it is easy to set up a test of an expression in the Derived Column Transformation, for example. The Data Generator Source provides some dummy data, and the Trash Destination allows you to anchor the output path and set a Data Viewer to examine the results. It can also be used when performance tuning packages. It is a consistent and known quantity that has no external influences, so it is ideal as a destination when breaking the data flow into sections to isolate a bottleneck. The adapter is really simple to use and requires no setup. Simply drop it onto the pipeline designer and use it to terminate your data flow path. Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, for 2005/2008, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trash Destination transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations. Downloads The Trash Destination is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Trash Destination for SQL Server 2005 Trash Destination for SQL Server 2008 Trash Destination for SQL Server 2012 Version History SQL Server 2012 Version 3.0.0.34 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012) SQL Server 2008 Version 2.0.0.33 - SQL Server 2008 release. Includes support for upgrade of 2005 packages. RTM compatible, previously February 2008 CTP. (4 Mar 2008) Version 2.0.0.31 - SQL Server 2008 November 2007 CTP. (14 Feb 2008) SQL Server 2005 Version 1.0.2.18 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006) Version 1.0.1.1 - SQL Server 2005 IDW 15 June CTP.
Minor enhancements over v1.0.1.0. (11 Jun 2005) Version 1.0.1.0 - SQL Server 2005 IDW 14 April CTP. First Public Release. (30 May 2005) Troubleshooting Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of The component could not be added to the Data Flow task. Please verify that this component is properly installed.  ... The data flow object "Konesans ..." is not installed correctly on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. The full error message is shown below for reference: TITLE: Microsoft Visual Studio ------------------------------ The component could not be added to the Data Flow task. Please verify that this component is properly installed. ------------------------------ ADDITIONAL INFORMATION: The data flow object "Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=1.0.1.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc" is not installed correctly on this computer. (Microsoft.DataTransformationServices.Design) For 2005/2008, once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component? This is not necessary for SQL Server 2012, as the new SSIS toolbox automatically detects components. If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

    Read the article

  • PHP Development Environment (Host: Windows 7, Guest: Ubuntu)

    - by Kristian Leiws Jones
    Since editing files live on a remote server slows down development, I use XAMPP on Windows to develop and then run the web apps on a Linux server. However, to avoid environment dependencies I'd like to mirror the live and development environments. What I'm asking is: would running the development server on Ubuntu inside VirtualBox, whilst editing the source files via FTP/Dreamweaver, be a good idea? If so, and I wanted to view the local website on the host OS (Windows), how would I do this? Does the guest OS have a LAN/local IP address? I notice in Windows "ipconfig /all" there are "tunneling" adapters which I assume are for VirtualBox, so I guess the guest OS has the same LAN/local IP address? If so, how would I view the websites hosted on the guest OS from the host OS? I'd also need to host an FTP server on the guest OS. Note: I need Windows! I would love to use Linux all the way -.-
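
    As a sketch of the two usual ways to reach a guest-hosted site from the host: either bridge the guest onto the LAN so it gets its own IP address, or keep NAT and forward host ports into the guest. The VM name "ubuntu-dev", the adapter name, and the port numbers below are hypothetical:

    rem Option 1: bridged networking - the guest gets its own LAN IP address
    VBoxManage modifyvm "ubuntu-dev" --nic1 bridged --bridgeadapter1 "Local Area Connection"

    rem Option 2: stay on NAT and forward host ports to the guest's web and FTP servers
    VBoxManage modifyvm "ubuntu-dev" --natpf1 "web,tcp,,8080,,80"
    VBoxManage modifyvm "ubuntu-dev" --natpf1 "ftp,tcp,,2121,,21"

    With the NAT option, browsing to http://localhost:8080 on the Windows host would reach the guest's port 80.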

    Read the article

  • How to get Spotify running?

    - by Dante Ashton
    A while ago Spotify (the streaming music service) released a Linux preview of their client. I had successfully run it throughout 10.04. Now that I'm on 10.10, I can't seem to find it in the package manager, let alone install it. Software Sources gives me this: Failed to fetch http://repository.spotify.com/dists/stable/Release Unable to find expected entry non-free/source/Sources in Meta-index file (malformed Release file?) Some index files failed to download, they have been ignored, or old ones used instead. So... as I'm paying for Spotify, what... umm... do I do? :P
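
    For what it's worth, that particular error usually points at a deb-src line for a repository that publishes no source packages, and Spotify's repository ships binaries only. A sketch of the expected /etc/apt/sources.list entry (keep the binary line, drop any deb-src counterpart):

    # Spotify's Linux preview repository - binary packages only
    deb http://repository.spotify.com stable non-free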

    Read the article

  • OutOfMemoryException in Microsoft WSE 3.0 Diagnostics.TraceInputFilter

    - by Michael Freidgeim
    We are still using Microsoft WSE 3.0, and on a test server we started to get:   Event Type:        Error Event Source:    Microsoft WSE 3.0 WSE054: An error occurred during the operation of the TraceInputFilter: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.    at System.String.GetStringForStringBuilder(String value, Int32 startIndex, Int32 length, Int32 capacity)    at System.Text.StringBuilder.GetThreadSafeString(IntPtr& tid)    at System.Text.StringBuilder.set_Length(Int32 value)    at System.Xml.BufferBuilder.Clear()    at System.Xml.BufferBuilder.set_Length(Int32 value)    at System.Xml.XmlTextReaderImpl.ParseText()    at System.Xml.XmlTextReaderImpl.ParseElementContent()    at System.Xml.XmlTextReaderImpl.Read()    at System.Xml.XmlLoader.LoadNode(Boolean skipOverWhitespace)    at System.Xml.XmlLoader.LoadDocSequence(XmlDocument parentDoc)    at System.Xml.XmlLoader.Load(XmlDocument doc, XmlReader reader, Boolean preserveWhitespace)    at System.Xml.XmlDocument.Load(XmlReader reader)    at System.Xml.XmlDocument.Load(Stream inStream)    at Microsoft.Web.Services3.Diagnostics.TraceInputFilter.OpenLoadExistingFile(String path)    at Microsoft.Web.Services3.Diagnostics.TraceInputFilter.Load(String path)    at Microsoft.Web.Services3.Diagnostics.TraceInputFilter.TraceMessage(String messageId, Collection`1 traceEntries).   After investigation it was found that the problem was related to trace files that had become too big. When they were deleted and new files were created, the error went away.

    Read the article

  • Add an Image Properties Listing to the Context Menu in Chrome and Iron

    - by Asian Angel
    Is the lack of an Image Properties listing in the Context Menu of your favorite Chromium-based browser driving you crazy? If you have been missing this extremely useful function, then the Image Properties Context Menu extension is here to save the day. As soon as you get the extension installed you can start enjoying access to image property information as seen here. Very nice! Image Properties Context Menu [via Shankar Ganesh (@shankargan)]

    Read the article

  • “No Network Devices Available” for BCM43241 (SDIO) after a fresh install of 13.10

    - by 200gaga
    I have a Vaio Duo 13 laptop which has a Broadcom 43241 wireless card. My problem is similar to this one: Broadcom Wi-Fi Adapter not recognized. I tried to install the brcmfmac (SDIO) driver from http://wireless.kernel.org/en/users/Drivers/brcm80211 and copied the nvram to /lib/firmware/brcm as the page instructs, but this didn't work. Most questions I found in my search are about b43, such as How to Install Broadcom Wireless Drivers (BCM43xx). I tried parts of those solutions, like unblocking all in rfkill and putting a # in front of blacklist bcm43xx, but none worked. I didn't uninstall or install drivers such as b43-installer, b43legacy-installer, or bcmwl-kernel-source (I only tried b43-fwcutter) because they are not compatible with the BCM43241. That is all the information I can provide. Thanks.

    Read the article

  • dns hosting - url forwarding - hiding forwarded url?

    - by jeremycollins
    I have free DNS hosting with the domain registrar, and I'd like the DNS-hosted domain www.example.com to display the contents of www.myotherlongdomain.com. I only have 301/302/iframe forwarding options, but I want to mask the redirected (longdomain) URL. If I use frames, users can view the source and see the (longdomain) URL the contents are coming from. How can I hide it so it always displays www.example.com? There is no cloaking/masking option with the registrar. Thanks.

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem. Especially when there is no problem. When you look at other people's code you will not be able to tell whether it performs well just by reading it. You need to execute it with some sort of tracing, or better still under a profiler. The first rule of the performance club is not to think and then optimize, but to measure, think and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long. If for some reason you skip the measure step and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world, since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function has therefore not many but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it, you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution, like a startup time of 20 minutes which should only happen once in 100 000 years. 100 000 years are a very short time when the first customer tries to run your heavily distributed networking application over a slow WIFI network… What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs, but it can also be the source of problems. In real world systems not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model and the real world is to measure it. In the WIFI example the model assumed a low latency, high bandwidth LAN connection. When this assumption became wrong, the system showed a drastic change in startup time. Let's look at an example. Let's assume we want to cache some expensive UI resources like font objects. For this undertaking we create a Cache class with the UI themes we want to support. Since fonts are expensive objects, we create them on demand the first time a theme is requested. A simple example of a Theme cache might look like this: using System; using System.Collections.Generic; using System.Drawing; struct Theme { public Color Color; public Font Font; } static class ThemeCache { static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme> { {"Default", new Theme { Color = Color.AliceBlue }}, {"Theme12", new Theme { Color = Color.Aqua }}, }; public static Theme Get(string theme) { Theme cached = _Cache[theme]; if (cached.Font == null) { Console.WriteLine("Creating new font"); cached.Font = new Font("Arial", 8); } return cached; } } class Program { static void Main(string[] args) { Theme item = ThemeCache.Get("Theme12"); item = ThemeCache.Get("Theme12"); } } This cache creates font objects only once, since on first retrieval of the Theme object the font is added to it. When we let the application run it should print "Creating new font" only once. Right? Wrong!
The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance, so he decided that the Theme object should be a value type (struct) to not put too much pressure on the garbage collector. The code Theme cached = _Cache[theme]; if (cached.Font == null) { Console.WriteLine("Creating new font"); cached.Font = new Font("Arial", 8); } works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller. But the original Theme object in the dictionary will always have null for the Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the theme object in the dictionary (a sketch of the second fix follows below). Our cache, as it currently stands, is actually a non-caching cache. The funny thing was that I found this out with a profiler, by looking at which objects were finalized. I found way too many font objects being finalized. After a bit of debugging I found that the allocation source for the Font objects was this cache. Since this cache had been there for years, it means that the cache was never needed (I found no perf issue due to the creation of font objects), the cache was never profiled to see if it brought any performance gain, and to make the cache beneficial it would need to be accessed much more often. That was the story of the non-caching cache. Next time I will write something about measuring.
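
    For completeness, here is a minimal sketch of the second fix mentioned above - writing the mutated copy back into the dictionary so the struct-based cache actually caches:

    public static Theme Get(string theme)
    {
        Theme cached = _Cache[theme];
        if (cached.Font == null)
        {
            Console.WriteLine("Creating new font");
            cached.Font = new Font("Arial", 8);
            _Cache[theme] = cached; // store the mutated copy back, otherwise the dictionary entry keeps Font == null
        }
        return cached;
    }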

    Read the article
