Search Results

Search found 11048 results on 442 pages for 'concrete syntax tree'.

  • How to remove the Mint stuff from my system after installing MATE desktop using MINT repository

    - by tinuz
    I installed the MATE desktop using this manual: http://www.unixmen.com/linux-tutorials/linux-distributions/linux-distributions4-ubuntu/1975-install-linuxmint12-extensions-mate-a-mgse-in-ubuntu-1110- but now I can't open the Ubuntu Software Center and can't open the settings from the Update Manager. I removed the MATE desktop, but that didn't fix the problem. I also reinstalled the Software Center, software-properties-gtk and software-properties-common using:

        sudo apt-get update; sudo apt-get --purge --reinstall install software-center software-properties-common software-properties-gtk

    But when using this command I get the following error:

        Reading package lists... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 3 reinstalled, 0 to remove and 0 not upgraded.
        Need to get 0 B/735 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        (Reading database ... 304824 files and directories currently installed.)
        Preparing to replace software-center 5.0.2 (using .../software-center_5.0.2_all.deb) ...
        Unpacking replacement software-center ...
        Preparing to replace software-properties-common 0.81.13.1 (using .../software-properties-common_0.81.13.1_all.deb) ...
        Unpacking replacement software-properties-common ...
        Preparing to replace software-properties-gtk 0.81.13.1 (using .../software-properties-gtk_0.81.13.1_all.deb) ...
        Unpacking replacement software-properties-gtk ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for hicolor-icon-theme ...
        Processing triggers for man-db ...
        Processing triggers for shared-mime-info ...
        Unknown media type in type 'all/all'
        Unknown media type in type 'all/allfiles'
        Unknown media type in type 'uri/mms'
        Unknown media type in type 'uri/mmst'
        Unknown media type in type 'uri/mmsu'
        Unknown media type in type 'uri/pnm'
        Unknown media type in type 'uri/rtspt'
        Unknown media type in type 'uri/rtspu'
        Unknown media type in type 'interface/x-winamp-skin'
        Setting up software-center (5.0.2) ...
        Traceback (most recent call last):
          File "/usr/sbin/update-software-center", line 38, in <module>
            from softwarecenter.db.update import rebuild_database
          File "/usr/share/software-center/softwarecenter/db/update.py", line 59, in <module>
            from softwarecenter.db.database import parse_axi_values_file
          File "/usr/share/software-center/softwarecenter/db/database.py", line 26, in <module>
            from softwarecenter.db.application import Application
          File "/usr/share/software-center/softwarecenter/db/application.py", line 25, in <module>
            from softwarecenter.backend.channel import is_channel_available
          File "/usr/share/software-center/softwarecenter/backend/channel.py", line 25, in <module>
            from softwarecenter.distro import get_distro
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 165, in <module>
            distro_instance=_get_distro()
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 148, in _get_distro
            module = __import__(distro_id, globals(), locals(), [], -1)
        ImportError: No module named LinuxMint
        Setting up software-properties-common (0.81.13.1) ...
        Setting up software-properties-gtk (0.81.13.1) ...
        $

    Is there a way to fix this problem without having to reinstall Ubuntu 11.10? Thanks in advance, tinuz

  • Setting up SharePoint without Active Directory

    - by eJugnoo
    In order to set up SharePoint without AD, you need to run the following PowerShell command in the Management Shell after installing SharePoint on your server, but before running the Configuration Wizard (we don't want to run this SP farm in stand-alone mode!):

    1. New-SPConfigurationDatabase

        SYNOPSIS
            Creates a new configuration database.
        SYNTAX
            New-SPConfigurationDatabase [-DatabaseName] <String> [-DatabaseServer] <String> [[-DirectoryDomain] <String>] [[-DirectoryOrganizationUnit] <String>]
            [[-AdministrationContentDatabaseName] <String>] [[-DatabaseCredentials] <PSCredential>] [-FarmCredentials] <PSCredential> [-Passphrase] <SecureString>
            [-AssignmentCollection <SPAssignmentCollection>] [<CommonParameters>]
        DESCRIPTION
            The New-SPConfigurationDatabase cmdlet creates a new configuration database on the specified database server. This is the central database for a new SharePoint farm.
            For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation (http://go.microsoft.com/fwlink/?LinkId=163185).
        RELATED LINKS
            Backup-SPConfigurationDatabase
            Disconnect-SPConfigurationDatabase
            Connect-SPConfigurationDatabase
            Remove-SPConfigurationDatabase
        REMARKS
            To see the examples, type: "get-help New-SPConfigurationDatabase -examples".
            For more information, type: "get-help New-SPConfigurationDatabase -detailed".
            For technical information, type: "get-help New-SPConfigurationDatabase -full".

    NOTE: Use the -AdministrationContentDatabaseName switch to pass the name of the Admin database you want, instead of the GUID-based name it creates automatically. Hence, you can quite easily control the Admin, Config, and Content database names at the time of farm creation. If creating a new farm, you can also delete and re-provision, from the UI, any service databases that were created automatically, to decide what database names you want.

    2. Run the SharePoint Configuration Wizard, and you'll find the server already added to the farm. Select "Do not disconnect from this server farm" and proceed. Select the port and the authentication method (NTLM in my case). Click Next, and the wizard will complete the remaining provisioning steps, including creation of the Central Admin web application on the desired port. Once successful, it will open the Central Admin site and ask you to run the Farm Configuration Wizard. I chose to skip it and do things manually, to remain in control of what is happening on the farm: creating the web application for site collections, creating the very first site collection, and any other service applications. I needed this to create a public-facing installation of SharePoint Foundation RTM on a server which didn't have AD. Now I am going to set up FBA, and possibly Live ID auth as well. I will also be setting up RBS and multi-tenancy on this farm, and will post any notes and findings here… --Sharad
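    For illustration only, a first run might look like the sketch below. The database name, server name, and passphrase are placeholders I've made up, not values from the original post:

        New-SPConfigurationDatabase -DatabaseName "SP_Config" `
            -DatabaseServer "SQL01" `
            -AdministrationContentDatabaseName "SP_AdminContent" `
            -Passphrase (ConvertTo-SecureString "pass@word1" -AsPlainText -Force) `
            -FarmCredentials (Get-Credential)

    Here -FarmCredentials prompts for the farm account (a local machine account, since there is no AD) and -Passphrase sets the farm passphrase that other servers would use to join the farm.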

  • SQL SERVER – Top 10 “Ease of Use” Features of expressor Studio

    - by pinaldave
    expressor Studio is a new data integration platform that is being marketed as the most easy-to-use tool of its kind. But "easy to use" can be a relative term: an expert can find a very complex system easy, while a beginner might be stumped. A recent article online discussed exactly what makes expressor Studio so easy to use, and here is my view on the subject.

    1. Simple Installation: There is one pop-up for one .exe file, and nothing more. You can't get much simpler than this. It also follows the familiar Windows design, so there should be no surprises.
    2. No 3rd-party software dependency: Have you ever tried to download software, only to be slowed down by the need to download a compatible system to run the program, and another to read the user manual, and so on? expressor Studio was designed specifically to avoid this problem.
    3. Microsoft Office-like ribbon bar and menus: As mentioned before, everything follows the familiar Windows design, from the pop-up windows to the toolbars and menus. There should be no learning curve for using this program, or even for simply navigating around a new system.
    4. General development design interface: This software has been designed to be simple and straightforward. Projects can be arranged in a simple "tree" design that is fully collapsible and can easily be added to or "trimmed" with the click of a button. It is meant to be logical and easy to follow.
    5. Integrated contextual help: This is a fancy way of saying that you can practically yell "help!" if you do get stuck on something. Solving a problem is as simple as highlighting and hitting F1 for contextual help.
    6. Visual indicators and messages: Wouldn't it be nice to know exactly where something has gone wrong before trying to complete a project? expressor Studio has a built-in system to catch mistakes and highlight them in a bright color, flash a warning message, and even disable functions before you can continue and possibly lose hours of work.
    7. Property inputs and selectors: Every operator will have a list of requirements that need to be filled in. But don't worry; you won't have to make stuff up to fill in the boxes. Each one will have a drop-down menu with options to choose from, but not so many as to be confusing.
    8. Connection wizards: Configuring connections can be the hardest part of a project, but not with the expressor Studio connection wizard. A familiar, Windows-style menu will walk you through connections so quickly you'll forget what trouble it used to be.
    9. Templates: With large, complex projects, a majority of your time is often spent simply setting up the files and inputting data. expressor Studio allows you to create one file and then save it as a template, saving you hours of boring data input.
    10. Extension Manager: Let's say that you need a little more functionality or some new features in your program. A lot of software requires you to download complex plug-ins that need to be decompressed and installed. expressor Studio instead provides an Extension Manager, which allows quick and easy installation of the functionality you need, without the need to download and decompress anything.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

  • Drupal Modules for SEO & Content

    - by Aditi
    When we talk about Drupal SEO, there are two things to consider: the relevant SEO practices, and the appropriate Drupal modules available. Optimizing your website for search engines is one of the most important aspects of launching & promoting your website, especially if ranking matters to you.

    Understanding SEO
    For starters, you begin with keyword research and then optimize your content according to your findings by tagging, meta tags, etc. Drupal modules, once installed, help you manage a lot of such parameters:
    - Identifying the target keywords
    - Using the Page Title and Token modules
    - Pathauto configuration
    - <H1> heading tags
    - Optimizing Drupal's default robots.txt file
    - Etc.
    While Drupal gives you a lot of ability to make your website content worthy & search-engine friendly, it is important to make sure you are not crossing the line, or you could get penalized.

    Modules Overview
    Drupal's power is at its best when you have these modules & a great brain working together. The basic SEO improvements can be achieved easily with the modules listed below, but you can win magical rankings if you use them logically & wisely. Understanding your keyword competition & enhancing your content is the basic key to success, and of course the modules:

    - Pathauto: Automatically creates search-engine friendly, readable URLs from tokens. A token is a piece of data from content, say the author's username or the content's title. For example, mysite.com/an-article rather than mysite.com/node/114 for every node you make.
    - NodeWords: Amazingly useful Drupal module that allows you to create custom meta tags and descriptions for your nodes, which gives you the ability to target specific keywords and phrases.
    - Page Title: Enables you to set an alternative title for the <title></title> tags and for the <h1></h1> tags on a node.
    - Global Redirect: Manage content duplication, 301 redirects, and URL validation with this small but powerful module.
    - Taxonomy Manager: Makes large additions or changes to taxonomy very easy. This module provides a powerful interface for managing taxonomies. A vocabulary gets displayed in a dynamic tree view, where parent terms can be expanded to list their nested child terms or can be collapsed.
    - RobotsTxt: A robots.txt file is vital for ensuring that search engine spiders don't index the unwanted areas of your site. This Drupal module gives you the ability to manage your robots.txt file through the CMS admin.
    - XML Sitemap: An XML sitemap lets the search engines index your website content. This module helps in generating and maintaining a complete sitemap for your website and gives you control over exactly which parts of the site you want included in the index. It even gives you the ability to automatically submit your sitemap to Google, Yahoo!, Ask.com and Windows Live every time you update a node, or at a specific interval.
    - Node Import: This module allows you to import a set of nodes from a Comma-Separated Values (CSV) or Tab-Separated Values (TSV) text file. It makes it easy to import hundreds or thousands of CSV rows, lets you tie these rows to CCK fields (or locations), and can file them under the right taxonomy hierarchy. This is a super life-saver module.

  • MySQL – Scalability on Amazon RDS: Scale out to multiple RDS instances

    - by Pinal Dave
    Today, I’d like to discuss getting better MySQL scalability on Amazon RDS. The question of the day: “What can you do when a MySQL database needs to scale write-intensive workloads beyond the capabilities of the largest available machine on Amazon RDS?” Let’s take a look.

    In a typical EC2/RDS set-up, users connect to app servers from their mobile devices, tablets, computers, browsers, etc. The app servers then connect to an RDS instance (web/cloud services), and in some cases they might leverage some read-only replicas. (Figure 1: A typical RDS instance is a single-instance database, with read replicas. This is not very good at handling high write-based throughput.)

    As your application becomes more popular you can expect an increasing number of users, more transactions, and more accumulated data. User interactions can become more challenging as the application adds more sophisticated capabilities. The result of all this positive activity: your MySQL database will inevitably begin to experience scalability pressures. What can you do? Broadly speaking, there are four options available to improve MySQL scalability on RDS.

    1. Larger RDS instances: If you’re not already using the maximum available RDS instance, you can always scale up to larger hardware: bigger CPUs, more compute power, more memory, et cetera. But the largest available RDS instance is still limited, and they get expensive. “High-Memory Quadruple Extra Large DB Instance”: 68 GB of memory, 26 ECUs (8 virtual cores with 3.25 ECUs each), 64-bit platform, high I/O capacity, Provisioned IOPS optimized: 1000 Mbps.

    2. Provisioned IOPS: You can get provisioned IOPS and higher throughput at the I/O level. However, there is a hard limit on the maximum instance size and the maximum number of provisioned IOPS you can buy from Amazon, and you simply cannot scale beyond these hardware specifications.

    3. Leverage read replicas: If your application permits, you can leverage read replicas to offload some reads from the master databases. But there are a limited number of replicas you can utilize, and Amazon generally requires some modifications to your existing application. And read replicas don’t help with write-intensive applications.

    4. Multiple database instances: Amazon offers a fourth option: “You can implement partitioning, thereby spreading your data across multiple database instances” (Link). However, Amazon does not offer any guidance or facilities to help you with this. “Multiple database instances” is not an RDS feature, and Amazon doesn’t explain how to implement the idea. In fact, when asked, this is the response on an Amazon forum:

        Q: Are there any documents that describe how to partition a DB across multiple RDS instances? I need to use a DB of more than 1 TB, but there is a limitation during the create process. I read in an FAQ that you need to partition the database, but I can't find any documents that describe it.
        A: “DB partitioning/sharding is not an official feature of Amazon RDS or MySQL, but a technique to scale out a database by using multiple database instances. The appropriate way to split data depends on the characteristics of the application or data set. Therefore, there is no concrete and specific guidance.”

    So now what? The answer is to scale out with ScaleBase.

    Amazon RDS with ScaleBase: What you get – MySQL scalability! ScaleBase is specifically designed to scale out a single MySQL RDS instance into multiple MySQL instances. Critically, this is accomplished with no changes to your application code. Your application continues to “see” one database. ScaleBase does all the work of managing and enforcing an optimized data distribution policy to create multiple MySQL instances. With ScaleBase, data distribution, transactions, concurrency control, and two-phase commit are all 100% transparent and 100% ACID-compliant, so applications, services and tooling continue to interact with your distributed RDS as if it were a single MySQL instance. The result: now you can cost-effectively leverage multiple MySQL RDS instances to scale out write-intensive workloads to an unlimited number of users, transactions, and data.

    Amazon RDS with ScaleBase: What you keep – everything! And how does this change your Amazon environment?

    1. Keep your application, unchanged: There is no change to your application development life-cycle at all. You still use your existing development tools, frameworks and libraries. Application quality assurance and testing cycles stay the same. And, critically, you stay with an ACID-compliant MySQL environment.

    2. Keep your RDS value-added services: The value-added services that you rely on are all still available. Amazon will continue to handle database maintenance and updates for you. You can still leverage high availability via Multi-AZ. And, if it benefits your application throughput, you can still use read replicas.

    3. Keep your RDS administration: Finally, the RDS monitoring and provisioning tools you rely on still work as they did before. With your one large MySQL instance now split into multiple instances, you can actually use less expensive, smaller RDS hardware and continue to see better database performance.

    Conclusion
    Amazon RDS is a tremendous service, but it doesn’t offer solutions to scale beyond a single MySQL instance. Larger RDS instances get more expensive, and when you max out on the available hardware, you’re stuck. Amazon recommends scaling out your single instance into multiple instances for transaction-intensive apps, but offers no services or guidance to help you. This is where ScaleBase comes in to save the day. It gives you a simple and effective way to create multiple MySQL RDS instances, while removing all the complexities typically caused by “DIY” sharding, and with no changes to your applications. With ScaleBase you continue to leverage the AWS/RDS ecosystem: commodity hardware and value-added services like read replicas, Multi-AZ, maintenance/updates, and administration with monitoring and provisioning tools.

    SCALEBASE ON AMAZON
    If you’re curious to try ScaleBase on Amazon, it can be found here – Download NOW.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • 2010 April Fools Joke

    - by Dane Morgridge
    I started at my current job at the end of March last year, and there were some pretty funny April Fools' jokes. Nothing super crazy, but pretty funny. One guy came in and there was a tree in his cube. We (me and the rest of my team) had been planning for a couple of weeks on what we could do that would be just awesome. We had a lot of really good ideas, but nothing was spectacular. Then Steve Andrews had a brilliant idea (yes, it's true). Since we have internal DNS servers, we could redirect DNS for a site such as cnn.com to our internal servers. Then we would lift the code from the site and create our own home page that would contain news about people in the company. Steve was actually laughing so hard when he thought of the idea that it took him almost 30 minutes to spit it out. I thought, "this is perfect."

    I had enlisted a couple of people to help come up with the stories, and at the same time we were trying to figure out how to get everybody to the site on the morning of the 1st. Then it hit me. We could have the main article be one of me getting picked up by the FBI on hacking charges. Then Chris (my boss) could send an email out telling everyone that I would not be there that day and direct them to the site. That would for sure get everyone to go to cnn.com first thing and see our prank. I began the process of looking for photos I could crop myself into and found the perfect one. Then my wife took a good pic with our Canon 40D, and I went to work. The night before, I didn't have any other stories, due to everyone being really busy at work, but I decided to go ahead with just the FBI bust on its own. I got everything working and tested, and coordinated with Chris for me to come in late so no one would see me at the office until after everyone had seen the joke.

    And so the morning of April Fools' came, I was waiting at home, and the email was perfect. Chris told everyone that I wouldn't be in and not to answer any questions from anybody who called. The Photoshop job I did was not perfect, but good enough, and I even wrote an article with it that went into more detail about how I had been classified as a terrorist and all kinds of stuff. People at work started getting the emails, and a few people didn't realize it was a joke (as I had hoped), including some from senior management (one person in particular who shall remain nameless in this post). Emails started flying around about how to contain the situation and how to handle bad PR. He basically bought it hook, line and sinker and then went into crisis mode. It was awesome! He did finally realize it was a joke, and I will likely print and frame the email he sent out. In short, April Fools' this year was a huge success.

  • No more internet connection after update in 14.04 with Intel Dual Band Wireless AC 7260

    - by luis
    My Dell XPS 15 (Haswell) was working fine until I stupidly accepted a recent batch of Ubuntu updates. Since then, my wifi does not work (it shows "device not managed" when clicking the wifi icon in the toolbar). Even a USB-to-Ethernet adapter does not seem to work. Bluetooth at least "sees" other bluetooth devices around... See below the output from dmesg (dmesg | grep iwl):

        [  886.462459] iwlwifi 0000:06:00.0: irq 51 for MSI/MSI-X
        [  886.462561] iwlwifi 0000:06:00.0: Direct firmware load failed with error -2
        [  886.462562] iwlwifi 0000:06:00.0: Falling back to user helper
        [  886.463284] iwlwifi 0000:06:00.0: loaded firmware version 22.1.7.0 op_mode iwlmvm
        [  886.475345] iwlwifi 0000:06:00.0: Detected Intel(R) Dual Band Wireless AC 7260, REV=0x144
        [  886.475433] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S
        [  886.475684] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S
        [  886.689214] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs'

    Below is the output from modinfo iwlwifi:

        filename:       /lib/modules/3.13.0-29-generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko
        license:        GPL
        author:         Copyright(c) 2003-2013 Intel Corporation <[email protected]>
        version:        in-tree:
        description:    Intel(R) Wireless WiFi driver for Linux
        firmware:       iwlwifi-100-5.ucode
        firmware:       iwlwifi-1000-5.ucode
        firmware:       iwlwifi-135-6.ucode
        firmware:       iwlwifi-105-6.ucode
        firmware:       iwlwifi-2030-6.ucode
        firmware:       iwlwifi-2000-6.ucode
        firmware:       iwlwifi-5150-2.ucode
        firmware:       iwlwifi-5000-5.ucode
        firmware:       iwlwifi-6000g2b-6.ucode
        firmware:       iwlwifi-6000g2a-5.ucode
        firmware:       iwlwifi-6050-5.ucode
        firmware:       iwlwifi-6000-4.ucode
        firmware:       iwlwifi-3160-7.ucode
        firmware:       iwlwifi-7260-7.ucode
        srcversion:     1E6912E109D5A43B310FB34
        alias:          pci:v00008086d0000095Asv*sd00005490bc*sc*i*
        (a pack of lines of the kind "alias: pci:xxxxx..." that I guess are not helpful)
        alias:          pci:v00008086d0000095Bsv*sd00005290bc*sc*i*
        depends:        cfg80211
        intree:         Y
        vermagic:       3.13.0-29-generic SMP mod_unload modversions
        signer:         Magrathea: Glacier signing key
        sig_key:        66:02:CB:36:F1:31:3B:EA:01:C4:BD:A9:65:67:CF:A7:23:C9:70:D8
        sig_hashalgo:   sha512
        parm:           swcrypto:using crypto in software (default 0 [hardware]) (int)
        parm:           11n_disable:disable 11n functionality, bitmap: 1: full, 2: disable agg TX, 4: disable agg RX, 8 enable agg TX (uint)
        parm:           amsdu_size_8K:enable 8K amsdu size (default 0) (int)
        parm:           fw_restart:restart firmware in case of error (default true) (bool)
        parm:           antenna_coupling:specify antenna coupling in dB (defualt: 0 dB) (int)
        parm:           wd_disable:Disable stuck queue watchdog timer 0=system default, 1=disable, 2=enable (default: 0) (int)
        parm:           nvm_file:NVM file name (charp)
        parm:           bt_coex_active:enable wifi/bt co-exist (default: enable) (bool)
        parm:           led_mode:0=system default, 1=On(RF On)/Off(RF Off), 2=blinking, 3=Off (default: 0) (int)
        parm:           power_save:enable WiFi power management (default: disable) (bool)
        parm:           power_level:default power save level (range from 1 - 5, default: 1) (int)

    I downloaded the latest versions of the iwlwifi firmware from git (git clone git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git; copy iwlwifi-3160-9.ucode, iwlwifi-7260-9.ucode and iwlwifi-7265-9.ucode to /lib/firmware and reboot), but as you can imagine it did not help.

    Update #1: Downloaded http://wireless.kernel.org/en/users/Drivers/iwlwifi?action=AttachFile&do=get&target=iwlwifi-7260-ucode-22.15.8.0.tgz and copied the file into /lib/firmware. After reloading it with modprobe, it seems to be OK:

        [   14.761283] iwlwifi 0000:06:00.0: enabling device (0000 -> 0002)
        [   14.761472] iwlwifi 0000:06:00.0: irq 51 for MSI/MSI-X
        [   14.772478] iwlwifi 0000:06:00.0: loaded firmware version 22.15.8.0 op_mode iwlmvm
        [   14.800274] iwlwifi 0000:06:00.0: Detected Intel(R) Dual Band Wireless AC 7260, REV=0x144
        [   14.800349] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S
        [   14.800657] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S
        [   15.007048] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs'

    However, clicking on wifi in the toolbar still shows "device not managed". Any clues? Many thanks! Luis

  • Adding Fake Build Information in TFS 2010

    - by Jakob Ehn
    We have been using TFS 2010 Build for distributing a build in parallel across several agents, where the actual compilation is done by a bunch of external tools and compilers, i.e. no MSBuild involved. We are using the ParallelTemplate.xaml template that Jim Lamb blogged about previously, which distributes each configuration to a different agent. We developed custom activities for running these external compilers and collecting the information and errors, by reading standard out/error and pushing it back to the build log. But since we aren't using MSBuild, we don't get the nice configuration summary section on the build summary page that we are used to. We would like to show the result of each configuration with any errors/warnings as usual, together with a link to the log file.

    TFS 2010 API to the rescue! What we need to do is add information to the InformationNode structure that is associated with every TFS build. The log that you normally see in the Log view is built up as a tree structure of IBuildInformationNode objects. This structure can be accessed by using the InformationNodeConverters class. This class also contains some helper methods for creating a BuildProjectNode, which contains the information about each project that was built, for example which configuration, the number of errors and warnings, and a link to the log file.

    Here is a code snippet that first creates a "fake" build from scratch and then adds two BuildProjectNodes, one for Debug|x86 and one for Release|x86, with some result information:

        TfsTeamProjectCollection collection =
            TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://lt-jakob2010:8080/tfs"));
        IBuildServer buildServer = collection.GetService<IBuildServer>();
        var buildDef = buildServer.GetBuildDefinition("TeamProject", "BuildDefinition");

        // Create fake build with random build number
        var detail = buildDef.CreateManualBuild(new Random().Next().ToString());

        // Create Debug|x86 project summary
        IBuildProjectNode buildProjectNode = detail.Information.AddBuildProjectNode(
            DateTime.Now, "Debug", "MySolution.sln", "x86", "$/project/MySolution.sln", DateTime.Now, "Default");
        buildProjectNode.CompilationErrors = 1;
        buildProjectNode.CompilationWarnings = 1;
        buildProjectNode.Node.Children.AddBuildError("Compilation", "File1.cs", 12, 5, "", "Syntax error", DateTime.Now);
        buildProjectNode.Node.Children.AddBuildWarning("File2.cs", 3, 1, "", "Some warning", DateTime.Now, "Compilation");
        buildProjectNode.Node.Children.AddExternalLink("Log File", new Uri(@"\\server\share\logfiledebug.txt"));
        buildProjectNode.Save();

        // Create Release|x86 project summary
        buildProjectNode = detail.Information.AddBuildProjectNode(
            DateTime.Now, "Release", "MySolution.sln", "x86", "$/project/MySolution.sln", DateTime.Now, "Default");
        buildProjectNode.CompilationErrors = 0;
        buildProjectNode.CompilationWarnings = 0;
        buildProjectNode.Node.Children.AddExternalLink("Log File", new Uri(@"\\server\share\logfilerelease.txt"));
        buildProjectNode.Save();

        detail.Information.Save();
        detail.FinalizeStatus(BuildStatus.Failed);

    When you run this code, it creates a build with two configurations, showing error and warning information and a link to a log file, just like a regular MSBuild build would have done. This is very useful when using TFS 2010 Build in heterogeneous environments. It would also be possible to do this when running compilations completely outside TFS Build, and then push the results into TFS for easy access. You can push all the information, including the compilation summary, drop location, test results, etc., using the API.
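    As a small illustration of that last point, a sketch like the following could also record a drop location on the same build. The UNC path is a made-up placeholder, and this assumes the IBuildDetail instance created in the snippet above:

        // Assumption: 'detail' is the IBuildDetail from the snippet above.
        detail.DropLocation = @"\\server\drops\FakeBuild_1"; // hypothetical drop share
        detail.Save();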

  • Guest (and occasional co-host) on Jesse Liberty's Yet Another Podcast

    - by Jon Galloway
    I was a recent guest on Jesse Liberty's Yet Another Podcast, talking about the latest Visual Studio, ASP.NET and Azure releases. Download / Listen: Yet Another Podcast #75–Jon Galloway on ASP.NET/MVC/Azure.

    Co-hosted shows: Jesse's been inviting me to co-host shows, and I told him I'd show up when I was available. It's a nice change to be a drive-by co-host on a show (compared with the work that goes into organizing / editing / typing show notes for Herding Code shows). My main focus is on Herding Code, but it's nice to pop in and talk to Jesse's excellent guests when it works out. Some shows I've co-hosted over the past year:
    - Yet Another Podcast #76–Glenn Block on Node.js & Technology in China
    - Yet Another Podcast #73–Adam Kinney on developing for Windows 8 with HTML5
    - Yet Another Podcast #64–John Papa & Javascript
    - Yet Another Podcast #60–Steve Sanderson and John Papa on Knockout.js
    - Yet Another Podcast #54–Damian Edwards on ASP.NET
    - Yet Another Podcast #53–Scott Hanselman on Blogging
    - Yet Another Podcast #52–Peter Torr on Windows Phone Multitasking
    - Yet Another Podcast #51–Shawn Wildermuth: //build, Xaml Programming & Beyond
    And some more on the way that haven't been released yet. On some of these I'm pretty quiet; on others I get wacky and hassle the guests because, hey, not my podcast so not my problem.

    Show notes from the ASP.NET / MVC / Azure show:

    What was just released
    - Visual Studio 2012 Web Developer features
    - ASP.NET 4.5 Web Forms: strongly typed data controls, data access via command methods, similar binding syntax to ASP.NET MVC
    - Some context: Damian Edwards and WebFormsMVP
    - Two questions from Jesse:
      Q: Are you making this harder or more complicated for Web Forms developers? Short answer: nothing's removed, it's just a new option. History of SqlDataSource, ObjectDataSource.
      Q: If I'm using some MVC patterns, why not just move to MVC? Short answer: this works really well in hybrid applications and doesn't require a rewrite; it allows sharing models, validation, and other code between Web Forms and MVC.

    ASP.NET MVC
    - Adaptive rendering (oh, also, this is in Web Forms 4.5 as well)
    - Display modes
    - Mobile project template using jQuery Mobile
    - OAuth login to allow Twitter, Google, Facebook, etc. login
    - Jon (and friends') MVC 4 book on the way: Professional ASP.NET MVC 4

    Windows 8 development
    - Jesse and Jon announce they're working on a new book: Pro Windows 8 Development with XAML and C#
    - Jon and Jesse agree that it's nice to be able to write Windows 8 applications using the same skills they picked up for Silverlight, WPF, and Windows Phone development.

    Compare / contrast ASP.NET MVC and Windows 8 development
    - Q: Do ASP.NET and HTML5 development overlap? Jon thinks they overlap in the MVC world because you're writing HTML views without controls.
    - Jon describes how his web development career moved from a preoccupation with server code to a focus on user interaction, which occurs in the browser. Jon mentions his NDC Oslo presentation on Learning To Love HTML as Beautiful Code.
    - Q: How do you apply C# / XAML or HTML5 skills to Windows 8 development?
    - Q: If I'm a XAML programmer, what's the learning curve on getting up to speed with ASP.NET MVC? Jon describes the difference in application lifecycle and state management, and says it's nice that web development is really interactive compared to application development.
    - Q: Can you learn MVC by reading a book? Or is it a lot bigger than that?

    What is Azure, and why would I use it?
    - Jon describes the traditional Azure platform model and how Azure Web Sites fits in.
    - Q: Why wouldn't Jesse host his blog on Azure Web Sites? Domain names on Azure Web Sites; file hosting options.
    - Q: Is Azure just another host? How is it different from any of the other shared hosting options? A: Azure gives you the ability to scale up or down whenever you want, and other services are available if or when you want them.

  • The Evolution Of C#

    - by Paulo Morgado
    The first release of C# (C# 1.0) was all about building a new language for managed code that appealed, mostly, to C++ and Java programmers. The second release (C# 2.0) was mostly about adding what there wasn't time to build into the 1.0 release. The main feature of this release was generics. The third release (C# 3.0) was all about reducing the impedance mismatch between general-purpose programming languages and databases. To achieve this goal, several functional programming features were added to the language, and LINQ was born.

    Going forward, new trends are showing up in the industry, and modern programming languages need to be more:

    - Declarative: With imperative languages, although the eye is on the what, programs need to focus on the how. This leads to over-specification of the solution to the problem at hand, making it next to impossible for the execution engine to be smart about the execution of the program and optimize it to run more efficiently (given the hardware available, for example). Declarative languages, on the other hand, focus only on the what and leave the how to the execution engine. LINQ made C# more declarative by using higher-level constructs like orderby and group by that give the execution engine a much better chance of optimizing the execution (by parallelizing it, for example).

    - Concurrent: Concurrency is hard, needs to be thought about, and is very hard to shoehorn into a programming language. Parallel.For (from the Parallel Extensions) looks like a parallel for because enough expressiveness has been built into C# 3.0 to allow this without having to commit to specific language syntax.

    - Dynamic: There has been lots of debate on which are the better programming languages: static or dynamic. The fact is that both have good qualities, and users of both types of languages want to have it all.

    All these trends require a paradigm shift. C# is, in many ways, already a multi-paradigm language. It's still very object-oriented (class-oriented, as some might say), but it can be argued that C# 3.0 has become a functional programming language, because it has all the cornerstones of what a functional programming language needs. Moving forward, it will have even more.

    Besides the influence of these trends, there was a decision of co-evolution of the C# and Visual Basic programming languages. Since their inception, there has been some effort to position C# and Visual Basic against each other, and to try to explain what should be done with each language or what kind of programmers use one or the other. Each language should be chosen based on the past experience and familiarity of the developer/team/project/company, and not on particular features. In the past, every time a feature was added to one language, the users of the other wanted that feature too. Going forward, when a feature is added to one language, the other will work hard to add the same feature. This doesn't mean that XML literals will be added to C# (because almost the same can be achieved with LINQ to XML), but Visual Basic will have auto-implemented properties.

    Most of these features require, or are built on top of, features of the .NET Framework, and the focus for C# 4.0 was on dynamic programming: not just dynamic types, but being able to talk to anything that isn't a .NET class. Also introduced in C# 4.0 are covariance and contravariance for generic interfaces and delegates. Stay tuned for more on the new C# 4.0 features.
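    To make those last two C# 4.0 features concrete, here is a minimal sketch (my own illustration, not from the original post) of the dynamic keyword and of generic variance:

        using System;
        using System.Collections.Generic;

        class Animal { public string Name = "animal"; }
        class Dog : Animal { public Dog() { Name = "dog"; } }

        class VarianceDemo
        {
            static void Main()
            {
                // dynamic: member lookup is deferred until run time, which is what
                // lets C# 4.0 talk to non-.NET object models (COM, dynamic languages, ...).
                dynamic greeting = "hello";
                Console.WriteLine(greeting.Length); // resolved at run time, prints 5

                // Covariance: IEnumerable<out T> lets a sequence of Dogs be used
                // where a sequence of Animals is expected.
                IEnumerable<Dog> dogs = new List<Dog> { new Dog() };
                IEnumerable<Animal> animals = dogs; // legal as of C# 4.0

                // Contravariance: Action<in T> lets a handler of Animals be used
                // where a handler of Dogs is expected.
                Action<Animal> feed = a => Console.WriteLine("feeding " + a.Name);
                Action<Dog> feedDog = feed; // legal as of C# 4.0
                feedDog(new Dog());
            }
        }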

  • How To Delete Built-in Windows 7 Power Plans (and Why You Probably Shouldn’t)

    - by The Geek
    Do you actually use the Windows 7 power management features? If so, have you ever wanted to just delete one of the built-in power plans? Here's how you can do so, and why you probably should leave it alone. Just in case you're new to the party, we're talking about the power plans that you see when you click on the battery/plug icon in the system tray. The problem is that one of the built-in plans always shows up there, even if you only use custom plans. When you go to "More power options" on the menu there, you'll be taken to a list of them, but you'll be unable to get rid of any of the built-in ones, even if you have your own. You can actually delete the power plans, but it will probably cause problems, so we highly recommend against it. If you still want to proceed, keep reading.

    Delete Built-in Power Plans in Windows 7
    Open up an Administrator-mode command prompt by right-clicking on Command Prompt and choosing "Run as Administrator", then type in the following command, which will show you the whole list of plans:

        powercfg list

    Do you see that really long GUID code in the middle of each listing? That's what we're going to need for the next step. To make it easier, we'll provide the codes here, just in case you don't know how to copy to the clipboard from the command prompt:

        Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e  (Balanced)
        Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c  (High performance)
        Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a  (Power saver)

    Before you do any deleting, what you're going to want to do is export the plan to a file using the -export parameter. For some unknown reason, I used the .xml extension when I did this, though the file isn't in XML format. Moving on, here's the syntax of the command:

        powercfg -export balanced.xml 381b4222-f694-41f0-9685-ff5bb260df2e

    This will export the Balanced plan to the file balanced.xml. And now we can delete the plan by using the -delete parameter and the same GUID:

        powercfg -delete 381b4222-f694-41f0-9685-ff5bb260df2e

    If you want to import the plan again, you can use the -import parameter, though it has one weirdness: you have to specify the full path to the file, like this:

        powercfg -import c:\balanced.xml

    Using what you've learned, you can export each of the plans to a file, and then delete the ones you want to delete.

    Why Shouldn't You Do This?
    Very simple: stuff will break. On my test machine, for example, I removed all of the built-in plans and then imported them all back in, but I'm still getting an error anytime I try to access the panel to choose what the power buttons do. There are a lot more error messages, but I'm not going to waste your time with all of them. So if you want to delete the plans, do so at your own peril. At least you've been warned!

  • Creating dynamic breadcrumb in asp.net mvc with mvcsitemap provider

    - by Jalpesh P. Vadgama
    I have done lots of breadcrumb kind of things in normal ASP.NET Web Forms, and I was looking for the same for ASP.NET MVC. After searching on the internet I found one great NuGet package for the MVC sitemap provider, which can easily be implemented via a sitemap file. So let's check how it works. I have created a new MVC 3 web application called "breadcrumb", and now I am adding a reference to the sitemap provider via the NuGet package. You can find more information about the MVC sitemap provider at the following URL: https://github.com/maartenba/MvcSiteMapProvid

    Once you add the sitemap provider, you will find an Mvc.sitemap file with the following content:

        <?xml version="1.0" encoding="utf-8" ?>
        <mvcSiteMap xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xmlns="http://mvcsitemap.codeplex.com/schemas/MvcSiteMap-File-3.0"
                    xsi:schemaLocation="http://mvcsitemap.codeplex.com/schemas/MvcSiteMap-File-3.0 MvcSiteMapSchema.xsd"
                    enableLocalization="true">
          <mvcSiteMapNode title="Home" controller="Home" action="Index">
            <mvcSiteMapNode title="About" controller="Home" action="About"/>
          </mvcSiteMapNode>
        </mvcSiteMap>

    Now that we have added the sitemap, it's time to make the breadcrumb dynamic. As we all know, the standard ASP.NET MVC template has action links by default for Home and About, like the following:

        <div id="menucontainer">
            <ul id="menu">
                <li>@Html.ActionLink("Home", "Index", "Home")</li>
                <li>@Html.ActionLink("About", "About", "Home")</li>
            </ul>
        </div>

    I want to replace that with our sitemap provider and make it dynamic, so I have added the following code:

        <div id="menucontainer">
            @Html.MvcSiteMap().Menu(true)
        </div>

    That's it. This is the magic code: @Html.MvcSiteMap().Menu(true) will dynamically create the navigation for you. Now let's run this in the browser. You can see that it has created the navigation dynamically, without writing any action-link code. So with the MVC sitemap provider we don't have to write any code; we just need to add the menu syntax and it does the rest automatically. That's it. Hope you liked it. Stay tuned for more; till then, happy programming.
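    As a side note, and as an assumption about the same library rather than something covered in the original post, MvcSiteMapProvider also appears to ship a helper for a classic breadcrumb trail, which in a Razor view would look something like this:

        @* Renders a "Home > About" style trail for the current node *@
        @Html.MvcSiteMap().SiteMapPath()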

  • Assign highest priority to my local repository

    - by Anwar Shah
    The original question was: "How to assign highest priority to the local repository without using the sources.list file". I have set up a local repository with packages I downloaded. I use it to avoid downloading the same packages over the Internet when I need to reinstall my Ubuntu. It is a basic repository, created with:

        apt-ftparchive packages . > Packages

    I made this a trusted repository to avoid the "unauthenticated repository" warning. (When you have an untrusted repository, apt or Synaptic will download the same packages over the Internet instead, because that source is trusted.) I have been using this local repository for at least a year. But I always have to put my local repository line at the top of the sources.list file to use it. This is annoying, since I must open a terminal and do some typing in it every time I reinstall Ubuntu, even though there is a better tool, software-properties-gtk. I cannot use that tool, since it places the source line at the end of sources.list. And the real problem is that apt or Synaptic always downloads a package from the source which is mentioned first, without inspecting whether the packages are already available in the local repository. So I have no choice but to place the local source at the top via the terminal (I actually don't hate the terminal, but I need a solution).

    I have tried the pinning method, but it does not help me. My preference file in /etc/apt/preferences.d/local-pin-900 is this:

        Package: *
        Pin: release o=Local,n=ubuntu-local
        Pin-Priority: 900

    My Release file is this:

        Origin: Local
        Label: Local-Ubuntu
        Description: Local Ubuntu Repository
        Codename: ubuntu-local
        MD5Sum:
         ed43222856d18f389c637ac3d7dd6f85 1043412 Packages
         d41d8cd98f00b204e9800998ecf8427e 0 Sources

    When I enable the apt preference, apt-cache policy correctly shows the preference, i.e. it shows that the local repository has the highest priority. But when I run sudo apt-get install <package-name>, apt tries to download it from the Internet. Yet when I place my local repo at the top, it installs from the local repository. So my question is: is it possible to force apt to use the local repository when the package is available there, without explicitly placing "the local source" at the top of my repository list (e.g. the sources.list file)?

    Edit: the output of apt-cache policy $package_name is as follows:

        nautilus-wipe:
          Installed: (none)
          Candidate: 0.1.1-2
          Version table:
             0.1.1-2 0
                500 http://archive.ubuntu.com/ubuntu/ precise/universe i386 Packages
                900 file:/media/Main/Linux-Software/Ubuntu/Precise/ Packages

    It shows that my local repository has higher preference, though it is not the one which comes first in the sources.list file. Here is the output of apt-get install nautilus-wipe:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          nautilus-wipe
        0 upgraded, 1 newly installed, 0 to remove and 131 not upgraded.
        Need to get 30.7 kB of archives.
        After this operation, 150 kB of additional disk space will be used.
        'http://archive.ubuntu.com/ubuntu/pool/universe/n/nautilus-wipe/nautilus-wipe_0.1.1-2_i386.deb' nautilus-wipe_0.1.1-2_i386.deb 30730 MD5Sum:7d497b8dfcefe1c0b51a45f3b0466994

    It is still trying to get the file from the Internet, though I think it should be happy with the local one.

  • Ubuntu 12.10 broken, shutdown while updating

    - by UnknownDude
    I got the following problem: my PC shut down while upgrading from 12.04 to 12.10. Everything seems to work, but I can't install the missing updates. It always tells me to run apt-get install -f, but when I do so it just says the following (the output was originally in German; translated below):

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages were automatically installed and are no longer required:
          espeak gcc-4.6-base:i386 gir1.2-notify-0.7 libcamel-1.2-29 libebook-1.2-12
          libedataserver-1.2-15 libgconf2-4 libgnome-bluetooth8 libgnome-menu2
          libgnomekbd7 libgomp1:i386 libgweather-3-0 libimobiledevice2 libindicate5
          libkpathsea5 libpoppler19 libusbmuxd1 python-gmenu
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          nvidia-current-updates xserver-xorg-core xserver-xorg-input-evdev
          xserver-xorg-input-mouse xserver-xorg-input-synaptics
          xserver-xorg-input-vmmouse xserver-xorg-input-wacom
          xserver-xorg-video-cirrus xserver-xorg-video-fbdev xserver-xorg-video-mga
          xserver-xorg-video-neomagic xserver-xorg-video-nouveau
          xserver-xorg-video-openchrome xserver-xorg-video-qxl
          xserver-xorg-video-savage xserver-xorg-video-sis xserver-xorg-video-sisusb
          xserver-xorg-video-tdfx xserver-xorg-video-vesa xserver-xorg-video-vmware
        Suggested packages:
          xfonts-100dpi xfonts-75dpi gpointing-device-settings touchfreeze
          firmware-linux
        The following packages will be REMOVED:
          nvidia-current
        The following packages will be upgraded:
          nvidia-current-updates xserver-xorg-core xserver-xorg-input-evdev
          xserver-xorg-input-mouse xserver-xorg-input-synaptics
          xserver-xorg-input-vmmouse xserver-xorg-input-wacom
          xserver-xorg-video-cirrus xserver-xorg-video-fbdev xserver-xorg-video-mga
          xserver-xorg-video-neomagic xserver-xorg-video-nouveau
          xserver-xorg-video-openchrome xserver-xorg-video-qxl
          xserver-xorg-video-savage xserver-xorg-video-sis xserver-xorg-video-sisusb
          xserver-xorg-video-tdfx xserver-xorg-video-vesa xserver-xorg-video-vmware
        20 upgraded, 0 newly installed, 1 to remove and 133 not upgraded.
        8 not fully installed or removed.
        Need to get 0 B/70.6 MB of archives.
        After this operation, 184 MB of disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 242727 files and directories currently installed.)
        Removing nvidia-current ...
        Removing all DKMS Modules
        Error! There are no instances of module: nvidia-current
        295.40 located in the DKMS tree.
        Done.
        Traceback (most recent call last):
          File "/usr/bin/quirks-handler", line 26, in <module>
            import Quirks.quirkapplier
          File "/usr/lib/python2.7/dist-packages/Quirks/quirkapplier.py", line 26, in <module>
            import XKit.xutils
        ImportError: No module named XKit.xutils
        dpkg: error processing nvidia-current (--remove):
         subprocess installed pre-removal script returned error exit status 1
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         nvidia-current
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I try to remove nvidia-current, it tells me to run apt-get install -f. Do you guys have any idea? I don't want to reinstall my whole system; it takes a lot of time to encrypt everything again and so on.

  • libreoffice-base not configured yet

    - by Wicky
    I have the LibreOffice PPA installed (ppa:libreoffice/ppa) and today I had a problem after updating. I got the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         libreoffice-base : Depends: libreoffice-base-core (= 1:4.3.0-0ubuntu1~precise1) but 4.3.0-3ubuntu1~precise1 is installed
                            Depends: libreoffice-base-drivers (= 1:4.3.0-0ubuntu1~precise1) but 4.3.0-3ubuntu1~precise1 is installed
                            Depends: libreoffice-core (= 1:4.3.0-0ubuntu1~precise1) but 4.3.0-3ubuntu1~precise1 is installed
         libreoffice-core : Breaks: libreoffice-base (<< 1:4.3.0-3ubuntu1~precise1) but 4.3.0-0ubuntu1~precise1 is installed
        E: Unmet dependencies. Try using -f.

    After trying sudo apt-get install -f I got the following output (originally in Dutch, translated here):

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libreoffice-base
        Suggested packages:
          libreoffice-gcj libreoffice-report-builder unixodbc
        The following packages will be upgraded:
          libreoffice-base
        1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        3 not fully installed or removed.
        Need to get 0 B/2170 kB of archives.
        After this operation, 2841 kB of additional disk space will be used.
        Do you want to continue [Y/n]?
        dpkg: dependency problems prevent configuration of libreoffice-base:
         libreoffice-base depends on libreoffice-base-core (= 1:4.3.0-0ubuntu1~precise1); however:
          Version of libreoffice-base-core on system is 1:4.3.0-3ubuntu1~precise1.
         libreoffice-base depends on libreoffice-base-drivers (= 1:4.3.0-0ubuntu1~precise1); however:
          Version of libreoffice-base-drivers on system is 1:4.3.0-3ubuntu1~precise1.
         libreoffice-base depends on libreoffice-core (= 1:4.3.0-0ubuntu1~precise1); however:
          Version of libreoffice-core on system is 1:4.3.0-3ubuntu1~precise1.
         libreoffice-core (1:4.3.0-3ubuntu1~precise1) breaks libreoffice-base (<< 1:4.3.0-3ubuntu1~precise1) and is installed.
          Version of libreoffice-base to be configured is 1:4.3.0-0ubuntu1~precise1.
        dpkg: error processing libreoffice-base (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-report-builder-bin:
         libreoffice-report-builder-bin depends on libreoffice-base; however:
          Package libreoffice-base is not configured yet.
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        dpkg: error processing libreoffice-report-builder-bin (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice:
         libreoffice depends on libreoffice-base; however:
          Package libreoffice-base is not configured yet.
         libreoffice depends on libreoffice-report-builder-bin; however:
          Package libreoffice-report-builder-bin is not configured yet.
        dpkg: error processing libreoffice (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        Errors were encountered while processing:
         libreoffice-base
         libreoffice-report-builder-bin
         libreoffice
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    How can I solve this problem so the dependencies are resolved? Do I have to configure libreoffice-base manually?

  • SQL SERVER – Solution of Puzzle – Swap Value of Column Without Case Statement

    - by pinaldave
    Earlier this week I asked a question: how do you swap the values of a column without using a CASE statement? Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I had requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the quality of the entries. I will not be able to cover every single solution posted as a comment, but I would like to cover a few interesting entries. I am selecting 5 solutions that are different (not necessarily the most optimal or the best – just different and interesting). Just for clarity, I am including the original problem statement here.

    USE tempdb
    GO
    CREATE TABLE SimpleTable (ID INT, Gender VARCHAR(10))
    GO
    INSERT INTO SimpleTable (ID, Gender)
    SELECT 1, 'female'
    UNION ALL
    SELECT 2, 'male'
    UNION ALL
    SELECT 3, 'male'
    GO
    SELECT * FROM SimpleTable
    GO
    -- Insert Your Solutions here
    -- Swap value of Column Gender
    SELECT * FROM SimpleTable
    GO
    DROP TABLE SimpleTable
    GO

    Here are the five most interesting and different solutions I received.

    Solution by Roji P Thomas:
    UPDATE S
    SET S.Gender = D.Gender
    FROM SimpleTable S
    INNER JOIN SimpleTable D ON S.Gender != D.Gender
    I really loved this solution as it is very simple and drives the point home – elegant, and it will work for pretty much any pair of values (not necessarily restricted to the 'male'/'female' options in the original question).

    Solution by Aneel:
    CREATE TABLE #temp(id INT, datacolumn CHAR(4))
    INSERT INTO #temp VALUES(1,'gent'),(2,'lady'),(3,'lady')
    DECLARE @value1 CHAR(4), @value2 CHAR(4)
    SET @value1 = 'lady'
    SET @value2 = 'gent'
    UPDATE #temp SET datacolumn = REPLACE(@value1 + @value2,datacolumn,'')
    Aneel has a very interesting solution where he concatenates both values and replaces the original value with an empty string, leaving only the other one. I personally liked the creativity of this solution.

    Solution by SIJIN KUMAR V P:
    UPDATE SimpleTable SET Gender = RIGHT(('fe'+Gender), DIFFERENCE((Gender),SOUNDEX(Gender))*2)
    Sijin amazed me with the DIFFERENCE and SOUNDEX functions. I had never visualized that these two functions could resolve the problem. Hats off to you, Sijin.

    Solution by Nikhildas:
    UPDATE St
    SET St.Gender = t.Gender
    FROM SimpleTable St
    CROSS APPLY (SELECT DISTINCT Gender FROM SimpleTable WHERE St.Gender != Gender) t
    I was expecting that someone would come up with a solution using CROSS APPLY. This is indeed very neat and for sure an interesting exercise. If you do not know how CROSS APPLY works, this is the time to learn.

    Solution by mistermagooo:
    UPDATE SimpleTable
    SET Gender=X.NewGender
    FROM (VALUES('male','female'),('female','male')) AS X(OldGender,NewGender)
    WHERE SimpleTable.Gender=X.OldGender
    According to the author this is a slow solution, but I love how the syntax is laid out here. I will say this is the most beautifully written solution (not necessarily the best).

    Bonus: Solution by Madhivanan. Somehow I was confident that Madhi – SQL Server MVP – would come up with something I would be compelled to read. He has written a complete blog post on this subject, and I encourage all of you to go ahead and read it.

    Personally, I wanted to list every single comment here. Some are so good that I am just amazed by the creativity. I will write a follow-up to this blog post in the future. However, here is the challenge for you. Challenge: Go over the 50+ solutions posted for this simple problem. Here are my two asks for you:
    1) Pick your best solution and list it here in a comment. This exercise will for sure teach us one or two things. 2) Write your own solution that is not yet covered by the 50 solutions already listed. I am confident that there is no end to creativity. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
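
    In the spirit of ask #2, here is one more CASE-free variation – my own illustrative sketch, not one of the posted comments. It leans on nothing but string arithmetic: LEN('male') = 4 and LEN('female') = 6, so the length alone tells us which part of the combined string 'malefemale' to pick.

    -- Sketch: swap 'male'/'female' with pure arithmetic (assumes exactly these two values)
    UPDATE SimpleTable
    SET Gender = SUBSTRING('malefemale',
                           13 - 2 * LEN(Gender),  -- LEN 4 -> start 5, LEN 6 -> start 1
                           10 - LEN(Gender));     -- LEN 4 -> take 6 chars, LEN 6 -> take 4 chars

    For 'male' this evaluates to SUBSTRING('malefemale', 5, 6) = 'female'; for 'female' it evaluates to SUBSTRING('malefemale', 1, 4) = 'male'.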

    Read the article

  • Using progress dialog in Visual Studio extensions

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2014/05/23/using-progress-dialog-in-visual-studio-extensions.aspx

    As a Visual Studio extension developer you are required to keep the aesthetics of Visual Studio intact when you integrate your extension with it. Your extension looks odd when you use plain Windows controls and dialogs. The Visual Studio SDK exposes many interfaces so that your extension looks as integrated with Visual Studio as possible. When your extension is performing a long-running task, you have many options for notifying the user of progress. One such option is the Visual Studio status bar; I have previously blogged about displaying progress through the status bar. In this blog post I am going to highlight another way, using the IVsThreadedWaitDialog2 interface. One thing to note: as the interface name suggests, it is a dialog, so the user cannot perform any other action while it is being shown. But because the dialog runs on its own thread, Visual Studio still appears responsive to the user even while the task is being performed. Visual Studio itself makes heavy use of this interface. One example: when you load a solution (.sln) with a lot of projects, Visual Studio displays a dialog implemented by this interface (screenshot below).

    So the first step is to get an instance of the IVsThreadedWaitDialog2 interface using the IServiceProvider interface.

    var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory)) as IVsThreadedWaitDialogFactory;
    IVsThreadedWaitDialog2 dialog = null;
    if (dialogFactory != null)
    {
        dialogFactory.CreateInstance(out dialog);
    }

    So if you have the package initialized properly, the out parameter dialog will not be null and will contain an instance of the IVsThreadedWaitDialog2 interface. Once you have the instance, you call its methods to manage the dialog. I will cover 3 methods in this blog post: StartWaitDialog, EndWaitDialog and HasCanceled. You show the progress dialog as below.

    if (dialog != null && dialog.StartWaitDialog(
        "Threaded Wait Dialog", "VS is Busy", "Progress text",
        null, "Waiting status bar text", 0, false, true) == VSConstants.S_OK)
    {
        Thread.Sleep(4000);
    }

    As you can see from the method syntax, it is very similar to a standard Windows message box. If you pass true as the 7th parameter to StartWaitDialog, you will also see a cancel button, allowing the user to cancel the running task. You can react when the user cancels the task as below.

    bool isCancelled;
    dialog.HasCanceled(out isCancelled);
    if (isCancelled)
    {
        MessageBox.Show("Cancelled");
    }

    Finally, you can close the dialog when the task completes, as below.

    int usercancel;
    dialog.EndWaitDialog(out usercancel);

    To help you quickly try the above code, I have created a sample. It is available for download from GitHub. The sample creates a tool window with two buttons to demo the scenarios explained above. The tool window can be accessed by clicking View –> Other Windows –> ProgressDialogDemo Window
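
    Putting the three calls together, here is a minimal consolidated sketch of my own (not from the original post). DoLongRunningWork() is a hypothetical placeholder for your extension's task; wrapping it in try/finally guarantees EndWaitDialog runs and the dialog is dismissed even if the task throws.

    // Sketch: run a long task behind the threaded wait dialog (hypothetical task name)
    var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory)) as IVsThreadedWaitDialogFactory;
    IVsThreadedWaitDialog2 dialog = null;
    if (dialogFactory != null)
    {
        dialogFactory.CreateInstance(out dialog);
    }

    if (dialog != null && dialog.StartWaitDialog(
            "My Extension",               // caption
            "Working...",                 // wait message
            "Crunching data...",          // progress text
            null,                         // status bar animation
            "My Extension is working",    // status bar text
            0,                            // delay before showing the dialog
            true,                         // show the Cancel button
            true) == VSConstants.S_OK)    // marquee progress
    {
        try
        {
            DoLongRunningWork();          // hypothetical long-running task
            bool isCancelled;
            dialog.HasCanceled(out isCancelled);
            if (isCancelled)
            {
                // the user clicked Cancel: undo or abandon the work here
            }
        }
        finally
        {
            int userCancel;
            dialog.EndWaitDialog(out userCancel);  // always dismiss the dialog
        }
    }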

    Read the article

  • Unable to apt-get upgrade in ubuntu 11.10

    - by blackhole
    These are the errors shown by the different clients.

    Update Manager:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 968, in simulate
        trans.unauthenticated = self._simulate_helper(trans)
      File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1092, in _simulate_helper
        return depends, self._cache.required_download, \
      File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 235, in required_download
        pm.get_archives(fetcher, self._list, self._records)
    SystemError: E:Method has died unexpectedly!, E:Sub-process returned an error code (100), E:Method /usr/lib/apt/methods/ did not start correctly

    Synaptic Package Manager:
    E: Method has died unexpectedly!
    E: Sub-process returned an error code (100)
    E: Method /usr/lib/apt/methods/ did not start correctly
    E: Unable to lock the download directory

    Command: sudo apt-get upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages will be upgraded:
      libfreetype6 libfreetype6-dev
    2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Failed to exec method /usr/lib/apt/methods/
    E: Method has died unexpectedly!
    E: Sub-process returned an error code (100)
    E: Method /usr/lib/apt/methods/ did not start correctly

    Can anyone tell me how to resolve these issues? I have no volatile packages or anything, so I am even posting a preview of my sources.list file:

    # deb cdrom:[Ubuntu 10.10 _Maverick Meerkat_ - Release i386 (20101007)]/ maverick main restricted
    # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
    # newer versions of the distribution.
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric main restricted
    ## Major bug fix updates produced after the final release of the
    ## distribution.
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted
    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team. Also, please note that software in universe WILL NOT receive any
    ## review or updates from the Ubuntu security team.
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric universe
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates universe
    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team, and may not be under a free licence. Please satisfy yourself as to
    ## your rights to use the software. Also, please note that software in
    ## multiverse WILL NOT receive any review or updates from the Ubuntu
    ## security team.
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric multiverse
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse
    ## Uncomment the following two lines to add software from the 'backports'
    ## repository.
    ## N.B. software from this repository may not have been tested as
    ## extensively as that contained in the main release, although it includes
    ## newer versions of some applications which may provide useful features.
    ## Also, please note that software in backports WILL NOT receive any review
    ## or updates from the Ubuntu security team.
    # deb http://in.archive.ubuntu.com/ubuntu/ maverick-backports main restricted universe multiverse
    # deb-src http://in.archive.ubuntu.com/ubuntu/ maverick-backports main restricted universe multiverse
    ## Uncomment the following two lines to add software from Canonical's
    ## 'partner' repository.
    ## This software is not part of Ubuntu, but is offered by Canonical and the
    ## respective vendors as a service to Ubuntu users.
    deb http://archive.canonical.com/ubuntu oneiric partner
    deb-src http://archive.canonical.com/ubuntu oneiric partner
    ## This software is not part of Ubuntu, but is offered by third-party
    ## developers who want to ship their latest software.
    deb http://extras.ubuntu.com/ubuntu oneiric main
    deb-src http://extras.ubuntu.com/ubuntu oneiric main
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security main restricted
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security universe
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security multiverse
    # deb http://archive.canonical.com/ lucid partner
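
    All three clients are reporting the same underlying failure: apt cannot start its transport helpers under /usr/lib/apt/methods/. A possible diagnostic and repair sketch (a suggestion only, assuming the method binaries are missing, non-executable or damaged):

    # the methods directory should list http, ftp, gpgv, copy, ... as executables
    ls -l /usr/lib/apt/methods/
    # if they are missing or broken, reinstalling apt itself usually restores them
    sudo apt-get clean
    sudo apt-get install --reinstall apt
    # if apt is too broken to reinstall itself, fetch the matching apt .deb for
    # your release manually and install it with dpkg:
    # sudo dpkg -i apt_*.deb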

    Read the article

  • Microsoft ReportViewer SetParameters continuous refresh issue

    - by Ilya Verbitskiy
    Originally posted on: http://geekswithblogs.net/ilich/archive/2013/10/16/microsoft-reportviewer-setparameters-continuous-refresh-issue.aspx

    I am a big fan of using ASP.NET MVC for building web applications. It allows us to create simple, robust and testable solutions. However, the .NET world is not perfect. There are tons of code written in ASP.NET Web Forms. You cannot simply ignore it, even if you want to. Sometimes ASP.NET Web Forms controls bring us non-obvious issues. A good example is the Microsoft ReportViewer control. I have an example for you.

    <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
    <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title>Report Viewer Continuous Refresh Issue Example</title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <asp:ScriptManager runat="server"></asp:ScriptManager>
            <rsweb:ReportViewer ID="_reportViewer" runat="server" Width="100%" Height="100%"></rsweb:ReportViewer>
        </div>
        </form>
    </body>
    </html>

    The back-end code is simple as well. I want to show a report with some parameters to the user.

    protected void Page_Load(object sender, EventArgs e)
    {
        _reportViewer.ProcessingMode = ProcessingMode.Remote;
        _reportViewer.ShowParameterPrompts = false;

        var serverReport = _reportViewer.ServerReport;
        serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS");
        serverReport.ReportPath = "/Reports/TestReport";

        var reportParameter1 = new ReportParameter("Parameter1");
        reportParameter1.Values.Add("Hello World!");

        var reportParameter2 = new ReportParameter("Parameter2");
        reportParameter2.Values.Add("10/16/2013");

        var reportParameter3 = new ReportParameter("Parameter3");
        reportParameter3.Values.Add("10");

        serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 });
    }

    I set ShowParameterPrompts to false because I do not want the user to refine the search. It looks good until you run the report: the report keeps refreshing itself. The problem is caused by the ServerReport.SetParameters call in Page_Load. That method causes the ReportViewer control to re-execute the report on the NEXT post-back, which is why the page post-backs continuously. The fix is very simple: do nothing when Page_Load executes during a post-back.

    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack)
        {
            return;
        }

        _reportViewer.ProcessingMode = ProcessingMode.Remote;
        _reportViewer.ShowParameterPrompts = false;

        var serverReport = _reportViewer.ServerReport;
        serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS");
        serverReport.ReportPath = "/Reports/TestReport";

        var reportParameter1 = new ReportParameter("Parameter1");
        reportParameter1.Values.Add("Hello World!");

        var reportParameter2 = new ReportParameter("Parameter2");
        reportParameter2.Values.Add("10/16/2013");

        var reportParameter3 = new ReportParameter("Parameter3");
        reportParameter3.Values.Add("10");

        serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 });
    }

    You can download the sample code from GitHub - https://github.com/ilich/Examples/tree/master/ReportViewerContinuousRefresh

    Read the article

  • kubuntu muon package manager stop working

    - by aseed
    I have Kubuntu. Today, after updating, the Muon package manager got stuck at 64%, so I closed it. Since then, whenever I try to update, reinstall or install software, the manager gets stuck. So how can I reinstall the Muon package manager from the terminal? I tried sudo apt-get install muon and I get this message:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    muon is already the newest version.
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     libopencv-dev : Depends: libopencv-core-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-ml-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-imgproc-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-video-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-objdetect-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-gpu-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-highgui-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-calib3d-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-flann-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-features2d-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-legacy-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-contrib-dev (= 2.3.1-4ppa1) but it is not going to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So what to do? I need to reinstall it because it is not working.

    ~$ sudo dpkg --configure -a
    dpkg: dependency problems prevent configuration of libopencv-dev:
     libopencv-dev depends on libopencv-core-dev (= 2.3.1-4ppa1); however:
      Package libopencv-core-dev is not installed.
     libopencv-dev depends on libopencv-ml-dev (= 2.3.1-4ppa1); however:
      Package libopencv-ml-dev is not installed.
     libopencv-dev depends on libopencv-imgproc-dev (= 2.3.1-4ppa1); however:
      Package libopencv-imgproc-dev is not installed.
     libopencv-dev depends on libopencv-video-dev (= 2.3.1-4ppa1); however:
      Package libopencv-video-dev is not installed.
     libopencv-dev depends on libopencv-objdetect-dev (= 2.3.1-4ppa1); however:
      Package libopencv-objdetect-dev is not installed.
     libopencv-dev depends on libopencv-gpu-dev (= 2.3.1-4ppa1); however:
      Package libopencv-gpu-dev is not installed.
     libopencv-dev depends on libopencv-highgui-dev (= 2.3.1-4ppa1); however:
      Package libopencv-highgui-dev is not installed.
     libopencv-dev depends on libopencv-calib3d-dev (= 2.3.1-4ppa1); however:
      Package libopencv-calib3d-dev is not installed.
     libopencv-dev depends on libopencv-flann-dev (= 2.3.1-4ppa1); however:
      Package libopencv-flann-dev is not installed.
     libopencv-dev depends on libopencv-features2d-dev (= 2.3.1-4ppa1); however:
      Package libopencv-features2d-dev is not installed.
     libopencv-dev depends on libopencv-legacy-dev (= 2.3.1-4ppa1); however:
      Package libopencv-legacy-dev is not installed.
     libopencv-dev depends on libopencv-contrib-dev (= 2.3.1-4ppa1); however:
      Package libopencv-contrib-dev is not installed.
    dpkg: error processing libopencv-dev (--configure):
     dependency problems - leaving unconfigured
    Errors were encountered while processing:
     libopencv-dev

    I also tried sudo apt-get install -f and sudo dpkg --configure -a and still the same problem... and I think I am getting this problem because of updating Kubuntu today.
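
    A possible recovery sketch (a suggestion only; the blocker appears to be the half-installed libopencv-dev chain from the 2.3.1-4ppa1 packages, not Muon itself):

    # let apt try to repair the dependency chain first
    sudo apt-get -f install
    # if that fails, remove the package whose dependencies are unmet
    sudo apt-get remove libopencv-dev
    # then reinstall Muon cleanly
    sudo apt-get install --reinstall muon
    # if the 2.3.1-4ppa1 packages came from a PPA, ppa-purge can roll it back
    # (substitute the actual PPA name):
    # sudo apt-get install ppa-purge && sudo ppa-purge ppa:<owner>/<name>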

    Read the article

  • Silverlight Cream for January 04, 2011 -- #1022

    - by Dave Campbell
    In this Issue: Dennis Doomen, Doug Holland, Kunal Chowdhury, Sacha Barber, Paul Sheriff, Mike Snow(-2-), Peter Kuhn(-2-), and Mike Ormond.

    Above the Fold:
    Silverlight: "Silverlight: Fixing the BookShelf Sample" Peter Kuhn
    WP7: "Searching the Windows Phone 7 Marketplace Programmatically" Doug Holland
    Prism/Cinch: "PRISM 4 Custom Transitioning Region" Sacha Barber

    Shoutouts:
    Sacha Barber, the author of Cinch, asks for some advice from users: Cinch V2 : Question For The Reader
    Michael Crump introduces us to SnippetManager as a way to organize your Silverlight snippets... I'm thinking any snippet: A better way to organize your Silverlight Code Snippets.
    Andy Beaulieu announced an update of Physics Helper 4.2 using Farseer 3.2 ... check out the breaking changes though!
    Dennis Doomen blogged about a new release of his Fluent Assertions: A new year with a new release of Fluent Assertions, with a blog post about it below

    From SilverlightCream.com:

    Verifying PropertyChanged events in Silverlight using Fluent Assertions
    Dennis Doomen released his latest Fluent Assertions for .NET and Silverlight and wrote up a big post about the new event monitoring syntax.

    Searching the Windows Phone 7 Marketplace Programmatically
    Doug Holland has a post up on MSDN blogs talking about searching the WP7 Marketplace programmatically... ya know you should be able to do it... here's how.

    Beginners Guide to Visual Studio LightSwitch (Part - 5)
    Kunal Chowdhury has Part 5 of a tutorial series on LightSwitch up at SilverlightShow... working with custom validation this time, and for the first time in this series so far he actually writes some code!

    PRISM 4 Custom Transitioning Region
    Sacha Barber took time to look at Prism 4/MEF and Cinch 2 and found things to be fine, then wrote a custom PRISM region adapter that uses a TransitionalElement from the Microsoft Transitionals project... code available, blog post to come.

    Get Application Title from Windows Phone
    Paul Sheriff has a cool chunk of code up... getting the application's title programmatically... and other attributes as well, if you were wondering why you might wanna do that.

    Detecting Users Win7 Mobile Theme Color
    Mike Snow has a couple as well... first up is how to detect your user's theme... obviously useful if you wanna match it.

    Selecting an Item in a ComboBox after Adding Items
    Second from Mike Snow is a general Silverlight issue... setting the selected item on a ComboBox after filling it... if you haven't stumbled across this yet, you will...

    A Simplified Grid Markup Reloaded
    Peter Kuhn has a pair of posts up since last time... the first is an extension of Colin Eberhardt's simplified Grid markup system, but it's only useful if you don't plan on using Blend... can we get a show of hands? :)

    Silverlight: Fixing the BookShelf Sample
    Next, Peter Kuhn has some changes to the BookShelf code, but more importantly some excellent tips about shader effects, effects on visual elements, and how to make the best use of all the above.

    Displaying HTML Content in Windows Phone 7
    Mike Ormond has a WP7 post up describing problems a customer had early on displaying rich text, an attempt to use the WebBrowser control to pull it off, and the problems that caused... check out the resulting code, and read the comments as well.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Floating Panels and Describe Windows in Oracle SQL Developer

    - by thatjeffsmith
    One of the challenges I face as I try to share tips about our software is that I tend to assume there are features you just 'know about.' Either they're so intuitive that you MUST know about them, or it's a feature I've been using for so long I forget that others may have never even seen it before. I want to cover two of those today: Describe (DESC) – SHIFT+F4 – and Floating Panels.

    My super-exciting desktop
    SQL Developer and Describe

    DESC, or Describe, is an Oracle SQL*Plus command. It shows what a table or view is composed of in terms of its column definition. Here's an example:

    SQL*Plus: Release 11.2.0.3.0 Production on Fri Sep 21 14:25:37 2012
    Copyright (c) 1982, 2011, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> desc beer;
    Name                                      Null?    Type
    ----------------------------------------- -------- ----------------------------
    BREWERY                                   NOT NULL VARCHAR2(100)
    CITY                                               VARCHAR2(100)
    STATE                                              VARCHAR2(100)
    COUNTRY                                            VARCHAR2(100)
    ID                                                 NUMBER
    SQL>

    You can get the same information – and a good bit more – in SQL Developer using the SQL Developer DESC command. You invoke it with SHIFT+F4. It will open a floating (non-modal!) window with the information you want. Here's an example:

    I can see my column definitions, constraints, stats, privs, etc.

    A few 'cool' things you should be aware of: I can open as many as I want, and still work in my worksheet, browser, etc. I can also DESC an index, user, or most any other database object. I can of course move them off my primary desktop display. The DESC panels are read-only; I can't drop a constraint from within the DESC window of a given table. But for dragging columns into my worksheet, and checking out the stats for my objects as I query them, it's very, very handy.

    Try This Right Now

    Type 'scott.emp' (or some other table you have), place your cursor on the text, and hit SHIFT+F4. You'll see the EMP object open. Now click into a column name in the columns page. Drag it into your worksheet. It will paste that column name into your query. This is an alternative for those who don't like our code insight feature or dragging columns off the connection tree (new for v3.2!). Got it?

    SQL Developer's Floating Panels

    Ok, let's talk about a similar feature. Did you know that any dockable panel from the View menu can also be 'floated'? One of my favorite features is the SQL History. Every query I run is recorded, and I can recall them later without having to remember what I ran and when. And I USUALLY use the keyboard shortcuts for this.

    Let your trouble float away… if only it were so easy as a right-click in the real world.

    But sometimes I still want to see my recall list without having to give up my screen real estate. So I just right-click on the panel tab and select 'Float.' Then I move it over to my secondary display – see the poorly lit picture at the beginning of this post. And that's it. Simple, I know. But I thought you should know about these two things!

    Read the article

  • Mirroring git and mercurial repos the lazy way

    - by Greg Malcolm
    I maintain Python Koans, mirrored on both GitHub (using git) and Bitbucket (using mercurial). I get pull requests from both repos, but it turns out keeping the two in sync is pretty easy. Here is how it's done... Assuming I'm starting again on a clean laptop, first I clone both repos:

    ~/git $ hg clone https://bitbucket.org/gregmalcolm/python_koans
    ~/git $ git clone git@github.com:gregmalcolm/python_koans.git python_koans2

    The only thing that makes a folder a mercurial repository is the .hg folder in the root of python_koans, and likewise the .git folder in the root of python_koans2 for git. So I just need to move the .git folder over into the python_koans folder I'm using for mercurial:

    ~/git $ rm -rf python_koans/.git
    ~/git $ mv python_koans2/.git python_koans
    ~/git $ ls -la python_koans
    total 48
    drwxr-xr-x 11 greg staff  374 Mar 17 15:10 .
    drwxr-xr-x 62 greg staff 2108 Mar 17 14:58 ..
    drwxr-xr-x 12 greg staff  408 Mar 17 14:58 .git
    -rw-r--r--  1 greg staff   34 Mar 17 14:54 .gitignore
    drwxr-xr-x 13 greg staff  442 Mar 17 14:54 .hg
    -rw-r--r--  1 greg staff   48 Mar 17 14:54 .hgignore
    -rw-r--r--  1 greg staff  365 Mar 17 14:54 Contributor Notes.txt
    -rw-r--r--  1 greg staff 1082 Mar 17 14:54 MIT-LICENSE
    -rw-r--r--  1 greg staff 5765 Mar 17 14:54 README.txt
    drwxr-xr-x 10 greg staff  340 Mar 17 14:54 python 2
    drwxr-xr-x 10 greg staff  340 Mar 17 14:54 python 3

    That's about it! Now git and mercurial are tracking files in the same folder. Of course you will still need to set up your .gitignore to ignore mercurial's dotfiles and your .hgignore to ignore git's dotfiles, or there will be squabbling in the backseat.

    ~/git $ cd python_koans/
    ~/git/python_koans $ cat .gitignore
    *.pyc
    *.swp
    .DS_Store
    answers
    .hg        <-- Ignore mercurial
    ~/git/python_koans $ cat .hgignore
    syntax: glob
    *.pyc
    *.swp
    .DS_Store
    answers
    .git       <-- Ignore git

    Because both my mirrors are identical as far as tracked files are concerned, I won't yet see anything if I check statuses at this point:

    ~/git/python_koans $ git status
    # On branch master
    nothing to commit (working directory clean)
    ~/git/python_koans $ hg status
    ~/git/python_koans $

    But how about if I accept a pull request on the Bitbucket (mercurial) site?

    ~/git/python_koans $ hg status
    ~/git/python_koans $ git status
    # On branch master
    # Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
    #
    # Changed but not updated:
    #   (use "git add <file>..." to update what will be committed)
    #   (use "git checkout -- <file>..." to discard changes in working directory)
    #
    #   modified: python 2/koans/about_decorating_with_classes.py
    #   modified: python 2/koans/about_iteration.py
    #   modified: python 2/koans/about_with_statements.py
    #   modified: python 3/koans/about_decorating_with_classes.py
    #   modified: python 3/koans/about_iteration.py
    #   modified: python 3/koans/about_with_statements.py

    Mercurial doesn't have any changes to track right now, but git does. Commit and push them up to GitHub and balance is restored to the force:

    ~/git/python_koans $ git commit -am "Merge from bitbucket mirror: 'gpiancastelli - Fix for issue #21 and some other tweaks'"
    [master 79ca184] Merge from bitbucket mirror: 'gpiancastelli - Fix for issue #21 and some other tweaks'
    6 files changed, 78 insertions(+), 63 deletions(-)
    ~/git/python_koans $ git push origin master

    Or just use hg-git?
    The GitHub developers have published a plugin for automatic mirroring: http://hg-git.github.com. I haven't used it because when I tried it a couple of years ago I had problems getting all the parts to play nicely with each other. It probably works fine now, though...
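
    For day-to-day syncing, once both histories live in the same working folder, the round trip can be scripted. A convenience sketch of my own (it assumes the remote names produced by the clones above, and that any merge commit has already been made with whichever tool was behind):

    # push the shared working tree to both mirrors in one step
    git push origin master && hg push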

    Read the article

  • Tomcat dying silently on regular basis

    - by Hendrik
    My Tomcat (6.0.32, Sun Java 1.6.0_22-b04 on Ubuntu 10.04) keeps crashing multiple times daily without any specific output in catalina.out. This usually happens under high load (see top output). Update: the pid file is properly removed when this happens. Update 2: no CATALINA_OPTS set; _JAVA_OPTIONS is:

    export _JAVA_OPTIONS="-Xms128m -Xmx1024m -XX:MaxPermSize=512m \
    -XX:MinHeapFreeRatio=20 \
    -XX:MaxHeapFreeRatio=40 \
    -XX:NewSize=10m \
    -XX:MaxNewSize=10m \
    -XX:SurvivorRatio=6 \
    -XX:TargetSurvivorRatio=80 \
    -XX:+CMSClassUnloadingEnabled \
    -Djava.awt.headless=true \
    -Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=37331 \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.authenticate=true \
    -Djava.rmi.server.hostname=(myhostname) \
    -Dcom.sun.management.jmxremote.password.file=/etc/java-6-sun/management/jmxremote.password \
    -Dcom.sun.management.jmxremote.access.file=/etc/java-6-sun/management/jmxremote.access"

    Top:

    top - 12:40:03 up 9 days, 12:15, 3 users, load average: 30.00, 22.39, 21.91
    Tasks: 89 total, 4 running, 85 sleeping, 0 stopped, 0 zombie
    Cpu(s): 53.2%us, 9.7%sy, 0.0%ni, 34.7%id, 1.5%wa, 0.0%hi, 0.8%si, 0.0%st
    Mem: 4194304k total, 3311304k used, 883000k free, 0k buffers
    Swap: 4194304k total, 0k used, 4194304k free, 0k cached
    PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+    COMMAND
    25850 tomcat6  20 0  1981m 1.2g 11m  S 161  29.6 11:41.56 java
    12632 mysql    20 0  393m  97m  4452 S 141  2.4  1690:05  mysqld
    14932 nobody   20 0  253m  44m  9152 R 56   1.1  3:26.57  php-cgi
    7011  nobody   20 0  241m  31m  9124 S 30   0.8  1:35.96  php-cgi
    10093 nobody   20 0  228m  18m  8520 S 25   0.5  2:29.97  php-cgi
    27071 nobody   20 0  237m  28m  8640 S 11   0.7  3:13.72  php-cgi
    3306  nobody   20 0  227m  16m  6736 R 7    0.4  2:29.83  php-cgi
    7756  nobody   20 0  261m  58m  15m  R 5    1.4  2:22.33  php-cgi
    7129  www-data 20 0  3646m 7228 1896 S 2    0.2  0:36.65  nginx
    2657  nobody   20 0  228m  18m  8540 S 1    0.5  1:59.51  php-cgi
    7131  www-data 20 0  3645m 6464 1960 S 1    0.2  0:34.13  nginx
    7140  www-data 20 0  3652m 12m  1896 S 1    0.3  0:35.80  nginx
    619   nobody   20 0  231m  29m  15m  S 0    0.7  2:33.46  php-cgi
    16552 nobody   20 0  250m  41m  8784 S 0    1.0  2:48.12  php-cgi
    17134 nobody   20 0  239m  37m  16m  S 0    0.9  2:32.86  php-cgi
    21004 nobody   20 0  243m  34m  8700 S 0    0.8  1:19.85  php-cgi
    26105 root     20 0  19220 1392 1060 R 0    0.0  0:00.82  top
    32430 nobody   20 0  256m  47m  9196 S 0    1.2  2:19.01  php-cgi
    314   nobody   20 0  256m  47m  8804 S 0    1.1  1:46.00  php-cgi
    2111  nobody   20 0  253m  44m  9196 S 0    1.1  3:01.14  php-cgi
    2142  root     20 0  26452 2564 868  S 0    0.1  0:00.56  screen
    2144  root     20 0  19484 2012 1368 S 0    0.0  0:00.00  bash
    2333  nobody   20 0  249m  41m  9160 S 0    1.0  1:10.33  php-cgi
    2552  root     20 0  19484 2260 1620 S 0    0.1  0:00.01  bash
    2587  nobody   20 0  258m  49m  9192 S 0    1.2  2:04.50  php-cgi
    2684  root     20 0  4092  652  540  S 0    0.0  0:00.00  xvfb-run
    2696  root     20 0  60720 13m  2352 S 0    0.3  0:09.12  Xvfb
    2759  root     20 0  617m  12m  4676 S 0    0.3  0:00.66  node
    3514  nobody   20 0  270m  61m  9216 S 0    1.5  3:13.69  php-cgi
    5270  root     20 0  25164 1324 1036 S 0    0.0  0:00.01  screen
    5402  nobody   20 0  227m  16m  8032 S 0    0.4  1:33.61  php-cgi
    5765  root     20 0  81180 3820 3028 S 0    0.1  0:00.31  sshd
    5798  nobody   20 0  242m  32m  9124 S 0    0.8  1:52.08  php-cgi
    5856  root     20 0  19496 2292 1636 S 0    0.1  0:00.03  bash
    6442  root     20 0  62332 20m  1960 S 0    0.5  0:30.58  mrtg
    7082  root     20 0  88992 1916 1636 S 0    0.0  0:00.00  PassengerWatchd

    I can't find any concrete reason for it: no exceptions or shutdown messages in catalina.out (and no other logs in Tomcat's log dir). I can start the service back up and it will run for a few days, or just minutes, before dying again.
    Is there somewhere else I could look for output? Could the kernel be killing the process due to a lack of resources (the OOM killer) and thereby bringing the JVM down?
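
    If the OOM killer is the suspect, it leaves a trace in the kernel log. A quick check (standard log locations on Ubuntu 10.04; the exact message format varies by kernel version):

    # look for OOM-killer activity around the crash times
    grep -i "out of memory" /var/log/kern.log /var/log/syslog
    dmesg | grep -i "killed process"
    # a hit looks roughly like: "Out of memory: kill process NNNN (java) score ..."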

    Read the article

  • Build Your Own CE6 Kernel

    - by Kate Moss' Big Fan
    The Shared Source Program in Windows CE provides many modules in the %_WINCEROOT%\Private\ tree, and the kernel is one of them! Although it is not the full kernel source, it is good enough for tracing through, and even for tweaking the kernel. Tracing the kernel to see how it works is lots of fun, but it is fascinating to modify it and verify the change you made. So, first things first: where is the source of the kernel? It's in %_WINCEROOT%\private\winceos\COREOS\nk\. And the next question will be "How do I build it?" Some of you may say just "build -c" there and it should be good. If you own the kernel and have the full source, that is definitely the right answer, but neither applies to our case. So what should I do? Let's dig deeper into the coreos\nk folder. There are a couple of subfolders: CELOG, KDSTUB, KERNEL and so on. KERNEL\ is the main component of kernel.dll; in other words, most modifications to the kernel are going to happen here. And the good thing is, you can "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\ with no errors at all. But before doing that, remember to back up everything you are going to modify, including the source and binaries; remember, this is not something that belongs to you, and if you don't restore it later, it could end up confusing subsequent QFE updates! Here are the steps:

    1. Back up the source code. I suggest backing up the whole %_WINCEROOT%\private\winceos\COREOS\nk\.
    2. Back up the binaries in common\oak\lib\. Again, if you are not sure which files, backing up the whole %_WINCEROOT%\common\oak\lib\ is the safest way.
    3. Make whatever modifications you want in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\.
    4. Run "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel.

    If everything went well so far, you should get a new nkmain.lib, nkmain.pdb, nkprmain.lib and nkprmain.pdb in %_WINCEROOT%\public\common\oak\lib\%_TGTCPU%\%WINCEDEBUG%\. Basically, you have just rebuilt your kernel; the rest is to run "blddemo clean -q" to have your new kernel SYSGEN'd and included in your OS image. Or just "set WINCEREL=1", then "sysgen -p common nk nkprof" and "makeimg", if you can't wait another minute for "blddemo clean -q".

    That sounds good, but some of you may not like the idea of altering any code in the private folder, not to mention how annoying it is to back up and restore files every time. A better idea? Yes: Microsoft provides a tool, SYSGEN_CAPTURE (see http://msdn.microsoft.com/en-us/library/ee504678.aspx for details and usage), which creates SOURCES files for public drivers that you want to modify and build in your platform directory. In fact, not only public drivers: virtually anything in the %_WINCEROOT%\public\<project name>\cesysgen\makefile can be captured, and of course that includes the kernel. So I am going to introduce a second way to build your own kernel, using the SYSGEN_CAPTURE tool. Again, the steps:

    1. Create a folder in your BSP for building the kernel, say %_TARGETPLATROOT%\SRC\Kernel.
    2. Run "SYSGEN_CAPTURE -p common nk" and you will get a SOURCES.KERN; you can also run "SYSGEN_CAPTURE -p common nkprof" to generate a profiler-enabled kernel.
    3. Rename SOURCES.KERN to SOURCES and copy one of the sample makefiles into your kernel directory, for example the one in PRIVATE\WINCEOS\COREOS\NK\KERNEL\NKNORMAL.
    4. Copy the source files you want to modify from private\winceos\coreos\nk\kernel\ into your kernel directory.
    5. Modify the SOURCES= macro to list the source files you added in step 4. For example, if you copied vm.c, it is going to be SOURCES=vm.c.
    6. Refer to private\winceos\COREOS\nk\kernel\sources.inc and add the macro definitions and proper include paths to your SOURCES file.
    7. "set WINCEREL=1", "build -c" in your kernel directory and "makeimg". Voila!

    Here is an example of the macros you need to add for x86:

    CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE
    # Machine independent defines
    CDEFINES=$(CDEFINES) -DDBGSUPPORT
    _COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
    INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc
    !IFDEF DP_SETTINGS
    CDEFINES=$(CDEFINES) -DDP_SETTINGS=$(DP_SETTINGS)
    !ENDIF
    ASM_SAFESEH=1
    CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE
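
    Condensed into one command sequence, the second method looks roughly like this. This is a sketch only (run from a Windows CE build shell; it assumes the SOURCES edits described above have been made between the capture and build steps):

    rem capture the kernel build settings into the BSP, then rebuild and re-image
    cd /d %_TARGETPLATROOT%\SRC\Kernel
    sysgen_capture -p common nk
    ren sources.kern SOURCES
    rem ...copy the kernel sources to modify and edit SOURCES as described above...
    set WINCEREL=1
    build -c
    makeimg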

    Read the article

< Previous Page | 355 356 357 358 359 360 361 362 363 364 365 366  | Next Page >