Search Results

Search found 5454 results on 219 pages for 'soa purge instances dehyd'.

Page 73/219

  • How to Banish Duplicate Photos with VisiPic

    - by Jason Fitzpatrick
    You meant well, you intended to be a good file custodian, but somewhere along the way things got out of hand and you’ve got duplicate photos galore. Don’t just delete them blindly and risk losing important photos; read on as we show you how to clean up safely. Deleting duplicate files, especially important ones like personal photos, makes a lot of people quite anxious (and rightfully so). Nobody wants to be the one to realize that they deleted all the photos of their child’s first birthday party during a hard drive purge gone wrong. In this tutorial we’re going to show you how to go beyond the limited reach of tools which simply compare file names and file sizes. Instead we’ll be using a program that combines that kind of comparison with actual image analysis to help you weed out not just perfect 1:1 file duplicates but also the piles of resized-for-email images, cropped images, and other modified images that might be cluttering up your hard drive.

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them.

    New Admin Portal and Command Line Tools

    Today’s release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which, like the portal, call the REST Management APIs) to make it even easier to script and automate your administration tasks. We are offering both a PowerShell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily connect to it using Terminal Services or SSH in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure. The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we’ve released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally – no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for web sites – including the ability to monitor and track resource utilization in real time. You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and WebMatrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the “Set up TFS publishing” or “Set up Git publishing” links on a web site’s dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience. Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a “reserved mode” that isolates them so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments. Previously we referred to this capability as “hosted services” – with this week’s release we are now referring to this capability as “cloud services”. We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways: 1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache then joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role. The big benefit of the “co-located” option is that it is free (you don’t have to pay anything to enable it) and it allows you to use what might otherwise have been unused memory within your application VMs. 2) Alternatively, you can add “cache worker roles” to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache tens or hundreds of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today’s release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today’s release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today’s release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I’m giving at 1pm June 7th (later today) where I’ll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them. Hope this helps, Scott
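    As a rough illustration of the Git publishing flow described in the Web Sites section above (a sketch only; the remote URL below is a hypothetical placeholder, the real endpoint is shown on the site's dashboard after selecting "Set up Git publishing"):
        git init
        git add -A
        git commit -m "initial deployment"
        git remote add azure https://user@mysite.scm.azurewebsites.net/mysite.git   # placeholder URL from the dashboard
        git push azure master      # the push triggers a deployment, which then appears on the Deployments tab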

    Read the article

  • I removed the nvidia driver and lshw -c video still shows nvidia

    - by sinekonata
    Today I tried to activate the newer experimental drivers and both 304 and 310 failed to even install. So I tried the regular nvidia driver 295.40 for the 20th time today (I had lag issues and was testing Nouveau vs Nvidia with dual monitors and Unity 2D/3D). From tty1 I tried to remove nvidia with sudo apt-get remove nvidia-settings nvidia-current, and purge too; after a reboot, nothing. So when lshw -c video still displayed nvidia as my driver I tried sudo rm /etc/X11/xorg.conf, since I read Ubuntu would "reset" the GUI configuration, but after a reboot, still nothing. Next I tried sudo jockey-text --disable=xorg:nvidia_current and nothing has worked...
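    A commonly suggested cleanup sequence for this situation is sketched below (not verified on this exact machine; note that lshw reports the kernel module currently loaded, so it will only show nouveau after the initramfs is rebuilt and the machine rebooted):
        sudo apt-get purge 'nvidia*'                        # remove every nvidia-* package together with its config
        sudo apt-get install --reinstall xserver-xorg-video-nouveau
        sudo rm -f /etc/X11/xorg.conf                       # let X autodetect the driver on next start
        sudo update-initramfs -u                            # rebuild the initramfs without the nvidia kernel module
        sudo reboot
        lshw -c video                                       # should now report driver=nouveau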

    Read the article

  • Will mono be carried forward by upgrading to 12.04?

    - by vasa1
    I plan to upgrade from 11.10 to 12.04 when 12.04 is released. I noticed from my previous experience upgrading from 11.04 to 11.10 that Synaptic, which was dropped in 11.10, was retained on my system, presumably because I upgraded from 11.04 (which did have Synaptic by default) and didn't do a new install. Given that Synaptic was retained, my question relates to mono. According to the list here, I don't have any programs that need mono. (I've just purged Banshee.) If I don't remove mono, will it be carried forward when I upgrade to 12.04, just like Synaptic was? If the suggestion is to remove mono before the upgrade to 12.04, is sudo apt-get purge libmono* libgdiplus cli-common libglitz-glx1 libglitz1 still the recommended way to remove mono-associated material from 11.10?
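    One low-risk way to see what would actually happen, before or after the upgrade, is to check reverse dependencies and simulate the purge first; a sketch, assuming the usual 11.10 package names:
        dpkg -l | grep -i mono                                  # which mono packages are installed at all
        apt-cache rdepends mono-runtime                         # which installed packages still depend on the runtime
        sudo apt-get -s purge libmono* libgdiplus cli-common    # -s = simulate only, lists what would be removed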

    Read the article

  • How do I consistently activate USB speakers

    - by Andrew.Healy
    I have a pair of Ricco USB speakers. As these are external, I sought to use them with 11.10. They are recognised by 'aplay -l' and PulseAudio, and I have selected them, but I have to reset the volume every time I log on. I can also only receive feedback and sound from apps and programs, like Kaffeine, at this boosted volume; I cannot get system sounds, like logon or logoff. I have tried the alsamixer install that used to work, to no effect. Is there an app that I should be trying, or is this feat impossible? I have tried deleting all the files in .pulse and rebooting, or just deleting the entire .pulse directory, as Michael K suggested; neither works, as the directory is automatically rewritten on reboot. Ideas?
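    One thing worth trying is making the USB card PulseAudio's default sink and storing a sensible volume for it; a sketch (the sink name below is a placeholder, take the real one from the list-sinks output):
        pacmd list-sinks | grep -e name: -e volume:                          # find the USB sink's name and current volume
        pacmd set-default-sink alsa_output.usb-Ricco-00.analog-stereo        # placeholder sink name
        pacmd set-sink-volume alsa_output.usb-Ricco-00.analog-stereo 45000   # absolute volume on a 0-65536 scale
        # to make the default stick across logins, the same set-default-sink line can be added to /etc/pulse/default.pa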

    Read the article

  • How do I clean up my hard drive?

    - by acidzombie24
    Not too long ago I was using only 35% of my HD. Just recently it shot up to 54%, and my disk space is 16 GB, so that's more than 3 GB that have been taken. From what I remember, I failed to build MySQL, and I installed gitolite, which required me to build git from source, which had a ton of dependencies (I think it was for building docs; I think I saw LaTeX and other packages, but I was drowsy when I was installing). I suspect that is what is taking the disk space. Anyway, so far I have deleted the source folders I know I had and run these commands. What else can I do? (3 GB is a lot.)
        sudo apt-get autoclean
        sudo deborphan | xargs sudo apt-get -y remove --purge
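    A few more commands that usually help track down where the space actually went; a sketch (the first two free space, the rest only report it):
        sudo apt-get clean                           # drop every cached .deb in /var/cache/apt/archives
        sudo apt-get autoremove --purge              # remove orphaned dependencies along with their config files
        sudo du -xh --max-depth=1 / | sort -h        # which top-level directories are the big ones
        du -sh ~/.cache ~/.local/share/Trash         # common hidden space hogs in the home directory
        dpkg-query -W -f='${Installed-Size}\t${Package}\n' | sort -n | tail -20   # the 20 biggest installed packages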

    Read the article

  • JUnit Testing in Multithread Application

    - by e2bady
    This is a problem my team and I face in almost all of our projects. Testing certain parts of the application with JUnit is not easy and you need to start early and stick to it, but that's not the question I'm asking. The actual problem is that with n threads, locking, possible exceptions within the threads, and shared objects, the task of testing is not as simple as testing the class; it means testing it under the endless possible situations that threading creates. To be more precise, let me tell you about the design of one of our applications: When a user makes a request, several threads are started that each analyse a part of the data to complete the analysis. These threads run for a certain time depending on the size of the chunk of data to analyse (the data is endless and of uncertain quality), or they may fail if the data was insufficient or lacking in quality. After each thread completes its analysis it calls a handler, which decides, as each thread terminates, whether the collected analysis data is sufficient to deliver an answer to the request. All of these analysers share certain parts of the application (some parts because the instances are very big, only a certain number can be loaded into memory, and those instances are reusable; some parts because they hold a standing connection, where connecting takes time, e.g. SQL connections), so locking is very common (done with reentrant locks). While the application runs very efficiently and fast, it's not very easy to test it under real-world conditions. What we do right now is test each class and its predefined conditions, but there are no automated tests for interlocking and synchronization, which in my opinion is not very good for quality assurance. Given this example, how would you handle testing the threading, interlocking and synchronization?

    Read the article

  • Remove kubuntu-desktop from ubuntu 12.04 [closed]

    - by Meijuh
    Possible Duplicate: How to completely remove desktop? So, I thought I had managed to remove KDE completely, but apparently that did not work at all, because every KDE application is back, including the KDE splash screen. I ran
        sudo apt-get autoremove --purge kubuntu-desktop
    Then I ran
        sudo apt-get install --reinstall ubuntu-desktop
    Then I ran
        sudo update-alternatives --config default.plymouth
    Then I rebooted and everything seemed to be the original ubuntu-desktop (without the KDE splash screen and other KDE applications). But now, one week later, I still boot to ubuntu-desktop, but like I said, the KDE splash screen and applications are all back. How should I remove kubuntu-desktop?
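    Purging the kubuntu-desktop metapackage alone leaves the individual KDE packages behind, which is why they keep coming back; a sketch of a fuller cleanup (the package names are typical for 12.04, check the dpkg listing first), noting that the plymouth choice only takes effect after the initramfs is rebuilt:
        dpkg -l | grep -Ei 'kde|kubuntu|plasma'             # see which KDE packages are actually installed
        sudo apt-get purge kubuntu-desktop plasma-desktop kde-runtime kdm
        sudo apt-get autoremove --purge
        sudo dpkg-reconfigure lightdm                       # make sure lightdm, not kdm, is the display manager
        sudo update-alternatives --config default.plymouth  # pick the Ubuntu splash again
        sudo update-initramfs -u                            # without this the old KDE splash keeps appearing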

    Read the article

  • How do I completely remove phpmyadmin?

    - by blade19899
    I messed up my phpmyadmin. I haven't logged in to phpmyadmin in a while and as a result I forgot my password, so I purged it like so: sudo apt-get purge phpmyadmin. I did get some error messages asking for my password, but I had forgotten it, so I just pressed ignore. After that I installed phpmyadmin again like so: sudo apt-get install phpmyadmin. This time I won't be forgetting my password. But now, when I log in to phpmyadmin, I get a 404 not found error page!? Question: How do I completely remove phpmyadmin and, as a result, get phpmyadmin working again? Note: I am running Ubuntu 12.10 (AMD64).
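    Since the configuration prompts were skipped with "ignore", the package's Apache integration and dbconfig password were probably never set up; a sketch of a clean redo (paths assume the Apache 2.2 layout shipped with 12.10):
        sudo apt-get purge phpmyadmin               # purge drops the old config and the stored debconf answers
        sudo apt-get install phpmyadmin             # this time answer the webserver and password prompts
        sudo dpkg-reconfigure -plow phpmyadmin      # re-runs the prompts if they were skipped again
        ls -l /etc/apache2/conf.d/phpmyadmin.conf   # the package should have dropped this symlink for Apache
        sudo service apache2 restart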

    Read the article

  • BizTalk 2009 - Scoped Record Counting in Maps

    - by StuartBrierley
    Within BizTalk there is a functoid called Record Count that will return the number of instances of a repeated record or repeated element that occur in a message instance. The input to this functoid is the record or element to be counted. As an example take the following Source schema, where the Source message has a repeated record called Box and each Box has a repeated element called Item. An instance of this Source schema may look as follows: 2 box records, one with 2 items and one with only 1 item. Our destination schema has a number of elements and a repeated box record.  The top level elements contain totals for the number of boxes and the overall number of items.  Each box record contains a single element representing the number of items in that box. Using the Record Count functoid it is easy to map the top level elements, producing the expected totals of 2 boxes and 3 items. We now need to map the total number of items per box, but how will we do this?  We have already seen that the Record Count functoid returns the total number of instances for the entire message, and unfortunately it does not allow you to specify a scoping parameter.  In order to achieve Scoped Record Counting we will need to make use of a combination of functoids. By linking to a Logical Existence functoid from the record/element to be counted we can then feed the output into a Value Mapping functoid.  Set the other Value Mapping parameter to "1" and link the output to a Cumulative Sum functoid. Set the other Cumulative Sum functoid parameter to "1" to limit the scope of the Cumulative Sum. This gives us the expected results of Items per Box of 2 and 1 respectively. I ran into this issue with a larger schema on a more complex map, but the eventual solution is still the same.  Hopefully this simplified example will act as a good reminder to me and save someone out there a few minutes of brain scratching.

    Read the article

  • How do I fix gnome shell themes?

    - by Chris
    This is my fifth full format and install of Ubuntu in under a month. I finally have my Gnome 3 desktop working again, but again the Gnome Shell themes option is not selectable. I have asked how to fix this common issue before, but I have seen no positive resolution. Does anybody know of a simple fix? This is a common issue and I have seen hundreds of postings related to it, but other users only seem to get half-way answers too, and it goes unresolved. Would it be advisable to completely purge the Gnome desktop and reinstall? If so, how would I do this? I cannot use any extensions if the shell theming is not working, so I am desperately seeking a resolution for this issue. Thanks in advance.
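    Before purging the whole desktop, it may be enough to add the piece that drives the theme switcher: in GNOME Tweak Tool the shell theme selector stays greyed out until the User Themes extension is installed and enabled. A sketch (the extension UUID is an assumption, confirm it against the directory listing):
        sudo apt-get install gnome-tweak-tool gnome-shell-extensions
        ls /usr/share/gnome-shell/extensions/        # note the exact user-theme@... directory name (the UUID)
        gnome-shell-extension-tool -e user-theme@gnome-shell-extensions.gcampax.github.com   # assumed UUID
        # then restart the shell (Alt+F2, type r, press Enter) and the theme selector should become active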

    Read the article

  • Repairing or recreating a bootloader on a multi-booting EFI GPT system

    - by Emre
    Reinstalling Ubuntu messed up my boot loader, so I tried to fix it with Boot-Repair. It detected my OS X installation and asked about removing the "separate boot/EFI". It also said my partition was full, despite the fact that it wasn't, and asked me to remove stuff. I declined both and proceeded. It's been stuck at the "purge and reinstall the GRUB" stage for half an hour. Is this typical, bearing in mind I have a fast SSD and CPU? Is there a better way to re-install GRUB on a multi-booting UEFI system? Does my pastebin provide any insight?
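    If Boot-Repair stays stuck, GRUB can usually be reinstalled by hand from the running Ubuntu install; this is only a sketch, and the grub-install option names vary between GRUB releases, so check grub-install --help first:
        sudo mount /dev/sda1 /boot/efi                    # placeholder: whichever partition is your EFI System Partition
        sudo apt-get install --reinstall grub-efi-amd64
        sudo grub-install --efi-directory=/boot/efi --bootloader-id=ubuntu
        sudo update-grub                                  # regenerate grub.cfg; os-prober should pick up OS X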

    Read the article

  • Using EC2 instance as main development platform

    - by David
    My problem: I am working as a consultant for various companies. Each company provides me with a laptop with their software on it, and I also have my own, where I have my development environment. I tend to buy a new laptop every second year and find myself spending lots of time configuring and installing software. I also sometimes spend a lot of time waiting for my laptop to process things. To solve all these issues, I am now considering using EC2 (running Windows instances) as my main development platform and just accessing it from whatever PC I happen to be at. I calculated that running the High-CPU On-Demand Instances (medium) for 8 hours a day for a year costs me $580, which is acceptable. I imagine that as I approach the workplace each day, I will make a single click on my phone to fire up the instance, so it is ready when I get to work. I should have different icons on my phone to fire up the various instance types. The same software should of course automatically be loaded on the various hardware (sometimes I would even need their instance with 68.4 GB of memory). Another advantage is that if I am having a specific problem with my instance, I could fire up another instance and have someone look into the problem and update the image. My question: Does anyone have experience with such a setup on EC2? What kind of problems do you foresee?
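    The start/stop part of that workflow is easy to script; a sketch using the AWS CLI (the instance id is a placeholder), which could sit behind a one-tap SSH shortcut on a phone:
        aws ec2 start-instances --instance-ids i-0123456789abcdef0      # placeholder instance id
        aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
        aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
            --query 'Reservations[0].Instances[0].PublicDnsName' --output text   # address to RDP into
        aws ec2 stop-instances --instance-ids i-0123456789abcdef0       # at the end of the day; stopped instances still pay for EBS storage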

    Read the article

  • Xubuntu, LXDE, USB Booting

    - by Kosciak
    Welcome. My problem appeared today. I was using Xubuntu for a long time, but found out that LXDE should be faster than Xfce, so I installed it. After installing I followed a tutorial for removing Xfce, because the disk in my computer is very small and I wanted to free some space. I used the command from this tutorial: How to remove xubuntu-desktop? But instead of remove I entered the purge command… and rebooted at the end. And I uninstalled all my things. The problem is installing the system again: it's an old Sony Vaio laptop (PCG-GR250) and I have a broken CD/DVD drive. Is it possible to boot from USB? I can access recovery mode; will this help me? Please answer fast, because it's my brother's computer, and he's going to kill me if I don't fix this fast :(
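    A bootable USB stick can be written from any other machine; a sketch with dd (the ISO name and /dev/sdX are placeholders, and dd will happily overwrite the wrong disk, so double-check the device):
        sudo dd if=lubuntu-12.04-desktop-i386.iso of=/dev/sdX bs=4M   # placeholder ISO filename and target device
        sync                                                          # flush writes before unplugging the stick
    Whether that Vaio's BIOS can actually boot from USB is a separate question; if it cannot, a small boot manager such as Plop is the usual workaround.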

    Read the article

  • How to remove Launchpad app/webapp?

    - by Exomancer
    I was browsing Launchpad and it asked to install a helper app. I thought that this was the Firefox extension so I said yes. Now I have this launcher app that will open a Firefox window directly to Launchpad from my unity dash. I want to remove it and I can't. I removed the Firefox extension and it's still there. I removed the Unity webapps Firefox extension and it's still there. I tried to purge "launchpad" and my system says that it doesn't exist. I searched Launchpad for information on this app but found nothing I could understand that seemed to be relevant. But it's still there, staring at me from the Unity dash. Can anyone help me get this thing out of my system? Running Ubuntu 13.10
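    The dash entry for a Unity web app is just a generated .desktop file in the local applications directory, so removing that file usually makes it disappear; a sketch (the exact filename is a guess, take it from the directory listing):
        ls ~/.local/share/applications/ | grep -i launchpad    # find the generated launcher
        rm ~/.local/share/applications/Launchpad.desktop       # hypothetical filename, use the one listed above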

    Read the article

  • How to get httpd to forward to multiple tomcats for different urls, including / ?

    - by Nick Foote
    OK, so I've got multiple Tomcat instances set up on several AJP ports, and I also have Apache httpd listening on port 8090 (because I've got another app already using 8080 at the moment). I've successfully mapped URLs such as mydomain.com:8090/demo and mydomain.com:8090/preprod to their respective Tomcat instances using JkMount and the following vhosts config:
        <VirtualHost *:8090>
            JkMount /preprod* preprod
            JkMount /demo* demo
        </VirtualHost>
    But I also want the "root" address to map to another Tomcat instance, what will become live/production, i.e. I want mydomain.com:8090/ to map to a third Tomcat instance. At the moment nothing happens or changes if I just add to the above config a line;
        JkMount /* rootwar
    If I browse to mydomain.com:8090 I just get the same boring Apache httpd landing page letting me know it's running (i.e. index.html in httpd/htdocs). Is it possible to use JkMount to redirect the "root" address stuff to a Tomcat instance? I can see that a rule like /* will also match URLs like mydomain.com/preprod, but I was hoping the rules are applied in order, so that if /* appears at the end it would effectively mean "if it's not one of the other environments, then direct to root/production". Just to be clear, I'm trying to set up the following:
        mydomain.com:8090/preprod --> myApp running in tomcat1
        mydomain.com:8090/demo    --> myApp running in tomcat2
        mydomain.com:8090         --> myApp running in tomcat3

    Read the article

  • Netflix on Ubuntu 12.04

    - by tsi25
    So I have got Ubuntu 12.04 on a System76 Lemur Ultra laptop. I installed Netflix via the terminal with the following chain of commands:
        sudo apt-get update
        sudo apt-add-repository ppa:ehoover/compholio
        sudo apt-get update
        sudo apt-get install netflix-desktop
    When it was finished installing, I clicked the icon and something came up asking if I wanted to install some dependent software, but it wouldn't let me interact with the window, so I hit tab and worked my way through that. But my computer shut down before I could fully install Gecko. Now I have the Netflix icon, but when I click it or right-click it nothing happens. I have tried uninstalling it with the following command,
        sudo apt-add --purge remove netflix-desktop
    and then reinstalling it, but there's no change. Does anyone know what I can do to get Netflix to run from here? Or what I can do to start troubleshooting? I searched around on AskUbuntu but couldn't find any answers to this specific problem.
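    The removal command quoted above is not quite the apt syntax, which may be why nothing changed; a sketch of a clean purge and reinstall, run from a terminal so the Wine/Gecko first-run errors are actually visible (if the launcher name differs, dpkg -L netflix-desktop lists what the package installed):
        sudo apt-get purge netflix-desktop
        sudo apt-get install netflix-desktop
        netflix-desktop            # launching from a terminal shows where the Gecko/Wine setup fails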

    Read the article

  • Where are the systray icons for Dropbox in Ubuntu desktop 13.04 (minimal)?

    - by samvv
    I reinstalled Ubuntu desktop using the minimal CD image and the following command:
        $ sudo apt-get install ubuntu-desktop --no-install-recommends
    After that I used Ubuntu Software Center to make sure Unity supports application indicators: http://i.imgur.com/bYF162w.png. Everything works great, except for Dropbox. For some reason the icon doesn't appear in the tray, even though the application is running. Steam on the other hand runs just fine, so it seems like there is nothing wrong with the tray itself. According to this post the tray icons should be in /usr/shared/icons/hicolor/22x22/status but it doesn't contain any Dropbox icons. Neither do any of the other resolutions. The answer is a bit outdated, so I'm not entirely sure it is still applicable to the current version of Dropbox. I did the usual thing of reinstalling dropbox with:
        sudo apt-get purge nautilus-dropbox
        sudo apt-get install nautilus-dropbox
    But that didn't solve anything either. Does somebody know how to fix this issue?
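    Before reinstalling yet again, it is worth restarting the daemon from a terminal so it can re-fetch its proprietary component and re-register its indicator; a sketch:
        dropbox stop
        dropbox status                  # confirm it actually stopped
        dbus-launch dropbox start -i    # -i re-downloads the daemon if it is missing; dbus-launch gives it a session bus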

    Read the article

  • Unable to remove some unity lenses

    - by S Prasanth
    I removed the files, video, photos and friends lenses with the following command. sudo apt-get purge unity-lens-files unity-lens-video unity-lens-photos unity-lens-friends Although the corresponding results have disappeared from dash, only the friends tab has been removed. There still are tabs for files, video and photos, albeit empty. How do I remove these empty tabs? I use Ubuntu 13.10 Saucy Salamander. I understand that this issue didn't exist in 12.04. The directory structure of unity lenses seems to have changed from 12.04 to 13.10. Earlier the lenses were stored in /usr/share/unity/lenses/. That isn't the case now, rendering this answer inappropriate: http://askubuntu.com/a/120116/111720
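    In 13.10 the dash is driven by unity-scope-* packages rather than the old lens files, so it is worth checking which scope packages are still installed; purging the ones behind the empty heads may remove the leftover tabs. A sketch (the package names in the purge line are hypothetical examples, take the real ones from the first command's output):
        dpkg -l | grep -E 'unity-(lens|scope)'                           # list every lens/scope package still installed
        sudo apt-get purge unity-scope-video-remote unity-scope-gdrive   # hypothetical examples from that list
        # log out and back in (or restart Unity) for the dash to drop the tabs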

    Read the article

  • How to fix a gstreamer error

    - by BJsgoodlife
    I upgraded to 14.04 and now gstreamer is not working. What I am trying to accomplish is to hear the audio input on my computer coming from a ham radio. This is the command that I am using: gst-launch pulserc ! pulseink. This is the error message that I am receiving: ERROR: pipeline could not be constructed: no element "pulserc". I am wondering if I should purge gstreamer, or if there is another command that I should use instead. Thank you again everyone for all of your help.
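    The element names in that pipeline look like typos: the PulseAudio source and sink elements are called pulsesrc and pulsesink, and on 14.04 the GStreamer 1.0 launcher is the default. A sketch, assuming the PulseAudio plugin packages are installed:
        gst-launch-1.0 pulsesrc ! pulsesink     # GStreamer 1.x, the default on 14.04
        gst-launch-0.10 pulsesrc ! pulsesink    # only if the legacy 0.10 tools are what is installed
        gst-inspect-1.0 pulsesrc                # verify the element exists (gstreamer1.0-pulseaudio package)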

    Read the article

  • Upgrade to Quantal: Unity top bar, side bar, and window decorations missing

    - by Nicky Bailuc
    I upgraded from 12.04.1 to 12.10 via the Update Manager and the upgrade said it completed successfully. However, after rebooting, the Unity top bar was missing along with the launcher and the window decorations. All Compiz settings seemed to have been purged, and at first bootup it gave me a system error. The desktop still exists, and I remembered that once, when I had messed up my Compiz settings, I just had to press Ctrl+Alt+F1, type "unity --reset" in the virtual terminal, then "sudo reboot", and everything worked as if I had reinstalled the entire operating system. This time it said "Warning: no variable set, setting to :0. The reset option is now deprecated". What am I supposed to do now? I need this fixed as soon as possible because I need a couple of installed programs and the data within them (long story short).
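    In 12.10 "unity --reset" is deprecated (hence the warning), and the rough equivalent is resetting the Compiz/Unity settings through dconf and then restarting Unity; a sketch, run from a terminal inside the session (or from a TTY with DISPLAY=:0 exported):
        dconf reset -f /org/compiz/    # wipe the Compiz (and Unity plugin) settings tree
        setsid unity                   # restart Unity detached from the current terminal
        # if the panel and launcher are still missing, try: sudo apt-get install --reinstall ubuntu-desktop unity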

    Read the article

  • Object oriented wrapper around a dll

    - by Tom Davies
    So, I'm writing a C# managed wrapper around a native dll. The dll contains several hundred functions. In most cases, the first argument to each function is an opaque handle to a type internal to the dll. So, an obvious starting point for defining some classes in the wrapper would be to define classes corresponding to each of these opaque types, with each instance holding and managing the opaque handle (passed to its constructor). Things are a little awkward when dealing with callbacks from the dll. Naturally, the callback handlers in my wrapper have to be static, but the callback arguments invariably contain an opaque handle. In order to get from the static callback back to an object instance, I've created a static dictionary in each class, associating handles with class instances. In the constructor of each class, an entry is put into the dictionary, and this entry is then removed in the destructor. When I receive a callback, I can then consult the dictionary to retrieve the class instance corresponding to the opaque reference. Are there any obvious flaws to this? Something that seems to be a problem is that the existence of the static dictionary means that the garbage collector will not act on my class instances that are otherwise unreachable. As they are never garbage collected, they never get removed from the dictionary, so the dictionary grows. It seems I might have to manually dispose of my objects, which is something I absolutely would like to avoid. Can anyone suggest a good design that allows me to avoid having to do this?

    Read the article

  • How can the original icons and fonts be restored in ubuntu 12.10?

    - by Harsh
    Recently I installed Cairo-Dock and changed the original Ubuntu theme. Then I uninstalled it, but the icons on the left side didn't revert to their original look. Somewhere I found these commands:
        sudo apt-get autoremove --purge light-themes
        sudo apt-get install light-themes
    After running them the theme didn't change, but it seems that the original Ubuntu font changed a little. I want the theme of Ubuntu as it was when I newly installed it. Moreover, Alt+Tab and some other key combinations are not working normally. Please help.
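    The stock 12.10 look is mostly a handful of gsettings keys, so resetting them to the schema defaults is less invasive than reinstalling theme packages; a sketch:
        gsettings reset org.gnome.desktop.interface gtk-theme        # back to Ambiance
        gsettings reset org.gnome.desktop.interface icon-theme       # back to ubuntu-mono-dark
        gsettings reset org.gnome.desktop.interface font-name        # back to Ubuntu 11
        gsettings reset org.gnome.desktop.wm.preferences theme       # window border theme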

    Read the article

  • Skype on Ubuntu 12.04 x64 - anyone have it installed?

    - by Zevaka
    I am wondering if there's anyone here who has managed to use Skype on 12.04. It used to work flawlessly (ever since I first tried Ubuntu 10.04), but after the upgrade to 12.04 it just doesn't start. If I start Skype from a terminal, it just says "Segmentation fault (core dumped)". I tried to remove Skype and install it from different locations:
        the Skype website (did not install at all, "package error")
        the official Ubuntu repository (doesn't start at all, segmentation fault)
        the repository "deb http://archive.canonical.com/ $(lsb_release -sc) partner" (doesn't start at all, segmentation fault)
    Interesting thing: even if I remove/purge Skype from the system, it still shows up in the Unity menu (and of course doesn't start). Any help here? Thanks!
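    A segfault right after an upgrade is often caused by a stale per-user Skype profile rather than the package itself, so it is worth clearing that before yet another reinstall; a sketch using the partner-repository build already added above:
        sudo apt-get purge skype
        mv ~/.Skype ~/.Skype.bak       # keep the old profile (chat history) around just in case
        sudo apt-get install skype
        skype                          # run from a terminal to see any remaining error output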

    Read the article

  • BizTalk 2009 - How do I do t"HAT"?

    - by StuartBrierley
    In my previous life working with BizTalk Server 2004, I came to view HAT (the Health and Activity Tracking tool) as one of my first ports of call in the case of problems with any of our BizTalk solutions.  When you move to BizTalk Server 2009 it is quickly apparent that HAT is no longer with us. HAT was useful in BizTalk 2004 mainly because it provided developers and administrators with a number of useful queries and views of what was going on inside BizTalk at runtime: when and what type of messages were received and sent, what messages had been suspended, what orchestrations were running or suspended; you could even follow the process flow of a message or orchestration to see what was going on. With BizTalk Server 2009 much of the functionality of HAT can now be found in the BizTalk Administration console.  Select a BizTalk Group and you will be shown the Group Hub Overview page.  This provides a number of default queries that replicate some of those found in the old HAT. You can also use the Group Hub page to create new queries.  These can then be saved and loaded in other Group Hub instances - useful for creating queries in development for later use in Test, Pseudo-Live and Live environments. In the next few posts I am going to look at some of the common queries that we might miss from HAT and recreate them (or something close) using the new query option:
        Messages - last 100 received
        Messages - last 100 sent
        Messages - last 50 suspended
        Service instances - last 100
    I have yet to try the updated Admin-HAT-Console in anger, and after using old-HAT for so long it may take some getting used to, but so far I would say that moving the HAT functionality into the BizTalk Administration console was probably the correct way to go.  Having one tool as the place to look for the combined functionality on offer certainly seems to be the sensible option.

    Read the article
