Search Results

Search found 5949 results on 238 pages for 'jump lists'.

  • Unable to install Skype ("Unmet dependencies" error)

    - by Alex Maslakov
    Trying to install Skype on Ubuntu 12, I ran into an issue. When I type: sudo apt-get update sudo apt-get install skype I get an error Reading package lists... Done Building dependency tree Reading state information... Done skype is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: skype : Depends: lib32stdc++6 (>= 4.1.1-21) but it is not going to be installed Depends: lib32asound2 (> 1.0.14) but it is not going to be installed Depends: ia32-libs but it is not going to be installed Depends: libc6-i386 (>= 2.7-1) but it is not going to be installed Depends: lib32gcc1 (>= 1:4.1.1-21+ia32.libs.1.19) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). How do I solve it? Is this the right way to install Skype? UPDATE: If I try to do sudo apt-get install lib32stdc++6 lib32asound2 ia32-libs libc6-i386 lib32gcc1 skype then I get Reading package lists... Done Building dependency tree Reading state information... Done skype is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-multiarch lib32asound2 : Depends: libasound2 (= 1.0.25-1ubuntu10) E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
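
    One commonly suggested route (this is not from the thread, and it assumes a 64-bit Ubuntu install with the Canonical partner repository available) is to drop the half-installed package and pull Skype from the partner archive instead of the standalone download:

      # remove the broken install, enable the partner repository, then reinstall Skype from it
      sudo apt-get remove skype
      sudo add-apt-repository "deb http://archive.canonical.com/ubuntu $(lsb_release -sc) partner"
      sudo apt-get update
      sudo apt-get install skype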

    Read the article

  • This error appeared when upgrading 12.04 LTS to 12.10 [closed]

    - by habcity
    Possible Duplicate: How do I fix a “Problem with MergeList” error when trying to do an update? ryder@ryder-Q1500M:~$ do-release-upgrade Checking for a new Ubuntu release Get:1 Upgrade tool signature [198 B] Get:2 Upgrade tool [1,200 kB] Fetched 1,200 kB in 6s (6,988 B/s) authenticate 'quantal.tar.gz' against 'quantal.tar.gz.gpg' extracting 'quantal.tar.gz' [sudo] password for ryder: Reading cache A fatal error occurred Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade has aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade. Traceback (most recent call last): File "/tmp/update-manager-63XThv/quantal", line 10, in sys.exit(main()) File "/tmp/update-manager-63XThv/DistUpgrade/DistUpgradeMain.py", line 237, in main save_system_state(logdir) File "/tmp/update-manager-63XThv/DistUpgrade/DistUpgradeMain.py", line 130, in save_system_state scrub_sources=True) File "/tmp/update-manager-63XThv/DistUpgrade/apt_clone.py", line 146, in save_state self._write_state_installed_pkgs(sourcedir, tar) File "/tmp/update-manager-63XThv/DistUpgrade/apt_clone.py", line 173, in _write_state_installed_pkgs cache = self._cache_cls(rootdir=sourcedir) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 102, in init self.open(progress) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 145, in open self._cache = apt_pkg.Cache(progress) SystemError: E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_precise-backports_multiverse_i18n_Translation-en, E:The package lists or status file could not be parsed or opened.
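
    The usual workaround for this MergeList error (a sketch of the fix described in the linked duplicate, run at your own risk) is to clear the corrupted package lists and let apt rebuild them before retrying the upgrade:

      # delete the cached index files that apt can no longer parse, then rebuild them
      sudo rm -vf /var/lib/apt/lists/*
      sudo apt-get update
      sudo do-release-upgrade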

    Read the article

  • Multiple URLs going to the same page - Kosher for Google?

    - by Ashoka15
    I hear conflicting answers from people about this, and I'm a developer by trade, and my SEO knowledge is not what it should be. Here's my situation: I run a website that lists hotels, restaurants, bars, shops, etc. for a small Asian beach town. Lots of establishments here are hotels with a restaurant and bar, as well as restaurants that are also bars. As an example, a Mexican restaurant that also functions as a full cocktail bar. I first set it up so each establishment has one page, but can create multiple pages based on its other areas of business. This forces people to create TWO listings under the same name, and most just add the exact same information onto each page, making things redundant. I am re-arranging the database so that an establishment has only ONE listing (one unique page referenced by the unique code '12345ABCDEF') that is accessible from browsing under "Restaurants" and "Bars", and has the URL structures: site.com/dining/mexican/12345ABCDEF/business-name.html site.com/bars/cocktail_bars/12345ABCDEF/business-name.html I could easily simplify the URL to just the unique code and name: site.com/12345ABCDEF/business-name.html But, I found that Google has parsed my URL structure and lists it like this on their SERP: Home > Dining > Mexican With each pointing to the default page for homepage, restaurants and Mexican restaurants. If I simplify the URL structure, will I lose these associations? Could Google also be picking up this structure from my breadcrumb trail at the top of the page? What is the best way to set up URLs on these pages so I am not penalized by Google for having identical information on two URLs, while still being able to have places show up as they did with the old system?

    Read the article

  • Can't update packages or use Ubuntu Software Center (E: Some index files failed to download. They have been ignored, or old ones used instead.)

    - by user94189
    I get the following errors when trying to run the auto Update Manager Could not initialize the package information An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message: 'E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/ppa.launchpad.net_bugs-sehe_zfs-fuse_ubuntu_dists_precise_main_binary-amd64_Packages, E:The package lists or status file could not be parsed or opened.' When I run sudo apt-get update, I get these errors W: GPG error: http://us.archive.ubuntu.com precise Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]> W: GPG error: http://ppa.launchpad.net precise Release: The following signatures were invalid: BADSIG 1FFD34C9EB13C954 Launchpad CoverGloobus W: GPG error: http://ppa.launchpad.net precise Release: The following signatures were invalid: BADSIG 4A67F94CBF965FF5 Launchpad PPA for Happy-Neko W: Failed to fetch http://ppa.launchpad.net/bugs-sehe/zfs-fuse/ubuntu/dists/precise/main/source/Sources 404 Not Found W: Failed to fetch http://ppa.launchpad.net/bugs-sehe/zfs-fuse/ubuntu/dists/precise/main/binary-amd64/Packages 404 Not Found W: Failed to fetch http://ppa.launchpad.net/bugs-sehe/zfs-fuse/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead.
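
    A commonly suggested cleanup (not from the thread; the PPA name is read off the 404 errors above) is to disable the dead zfs-fuse PPA and then clear and re-download the index files so the BADSIG entries are fetched fresh:

      # drop the PPA that now returns 404, wipe the stale index files, and refresh
      sudo add-apt-repository --remove ppa:bugs-sehe/zfs-fuse
      sudo rm -vf /var/lib/apt/lists/*
      sudo apt-get update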

    Read the article

  • Cannot install ia32-libs package

    - by A British Person
    I have several programs that require 32-bit packages (pointing to the ia32-libs package). However, when I try to install it, this happens. spirit@ubuntu:~$ sudo apt-get install ia32-libs Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-multiarch but it is not installable E: Unable to correct problems, you have held broken packages. No big whoop, packages die all the time. I tried a month later, however, and I still got this error; trying to install the specific package produces this error. spirit@ubuntu:~$ sudo apt-get install ia32-libs-multiarch Reading package lists... Done Building dependency tree Reading state information... Done Package ia32-libs-multiarch is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'ia32-libs-multiarch' has no installation candidate I am no Linux whizz-kid, but this seems to mean that the package doesn't exist. I searched for Skype in the software centre (I was told this installs the 32-bit packages) and it does not appear there, and the download from their website produces an error about - funnily enough - no 32-bit packages. Anyone who helps me will get a medal from the gods with the weight of a thousand planets. Just don't wear it for god's sake.
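
    If this is a 64-bit release from the multiarch era (roughly 12.10 and later), one commonly suggested check (a sketch, not a guaranteed fix) is whether the i386 foreign architecture is enabled at all, since ia32-libs-multiarch only becomes installable once it is:

      # list the foreign architectures dpkg knows about; add i386 if it is missing
      dpkg --print-foreign-architectures
      sudo dpkg --add-architecture i386
      sudo apt-get update
      sudo apt-get install ia32-libs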

    Read the article

  • Wireless switch on Dell XT2 - strange behaviour of rfkill

    - by DyP
    I have a Dell Latitude XT2 using an Intel WLAN card (lspci lists it as "Intel Corporation Ultimate N WiFi Link 5300") running Lubuntu 12.04 with recent updates. The laptop has a hardware WLAN switch. I have problems activating the WLAN when booting with the hardware switch set to "off". The situation is a bit confusing, unfortunately. rfkill lists two WLAN devices (though lspci only shows the Intel one). This is the situation when booting with the hardware switch set to "Off": 0: dell-wifi: Wireless LAN Soft blocked: yes Hard blocked: yes 1: dell-bluetooth: Bluetooth Soft blocked: yes Hard blocked: yes 2: phy0: Wireless LAN Soft blocked: yes Hard blocked: yes From some tests, I conclude WLAN is only activated when both dell-wifi and phy0 are unblocked in both software and hardware. But I can only unblock dell-wifi after the hardware switch is set to "on". Procedure right from boot with hardware switch set to "Off": Soft-unblocking phy0 works as expected. This could be done by a start-up script. sudo rfkill unblock 0: nothing happens. Soft block of dell-wifi not removed. Set the hardware switch to "on": phy0 gets its hard block removed. Still no WLAN. sudo rfkill unblock 0: both the soft and hard block of dell-wifi are removed. WLAN is now active and works. sudo rfkill block 0: only adds the soft block as expected. WLAN goes off again. So, in order to activate WLAN, I have to use the hardware switch and afterwards (manually) run a script - that's a bit inconvenient. Does anyone know a better solution? Maybe a daemon could help that listens to rfkill events to unblock dell-wifi after I have set the hardware switch to "on"? (sounds like another workaround) When booting with the hardware switch set to "On", nothing is blocked, neither hard nor soft.
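
    As a stop-gap along the lines of the "daemon that listens to rfkill events" idea, a small polling script could lift the soft block automatically once the hardware switch is flipped (only a sketch; device index 0 is dell-wifi as in the rfkill output above, and it needs to run as root):

      #!/bin/sh
      # wait until the hardware switch removes the hard block on dell-wifi,
      # then clear the remaining soft block so WLAN comes up
      while rfkill list 0 | grep -q "Hard blocked: yes"; do
          sleep 2
      done
      rfkill unblock 0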

    Read the article

  • Errors when installing updates

    - by user71613
    I am getting the following errors when installing updates. They started to appear after I upgraded my system to 12.04. Errors were encountered while processing: samba-common samba-common-bin samba grub-pc grub-gfxpayload-lists Setting up samba-common (2:3.6.3-2ubuntu2.2) ... perl: error while loading shared libraries: libperl.so.5.12: cannot open shared object file: No such file or directory dpkg: error processing samba-common (--configure): subprocess installed post-installation script returned error exit status 127 dpkg: dependency problems prevent configuration of samba-common-bin: samba-common-bin depends on samba-common (>= 2:3.4.0~pre1-2); however: Package samba-common is not configured yet. dpkg: error processing samba-common-bin (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of samba: samba depends on samba-common (= 2:3.6.3-2ubuntu2.2); however: Package samba-common is not configured yet. samba depends on samba-common-bin; however: Package samba-common-bin is not configured yet. dpkg: error processing samba (--configure): dependency problems - leaving unconfigured Setting up grub-gfxpayload-lists (0.6) ... Setting up grub-pc (1.99-21ubuntu3.1) ... perl: error while loading shared libraries: libperl.so.5.12: cannot open shared object file: No such file or directory dpkg: error processing grub-pc (--configure): subprocess installed post-installation script returned error exit status 127 Any ideas how to fix this?
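
    The missing libperl.so.5.12 points at a leftover pre-upgrade Perl; a commonly suggested starting point (not from the thread, and no guarantee it is the whole story) is to reinstall the Perl runtime and then let dpkg finish the interrupted configuration:

      # put a matching perl binary and library back in place, then retry configuration
      sudo apt-get install --reinstall perl-base perl perl-modules
      sudo dpkg --configure -a
      sudo apt-get install -f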

    Read the article

  • How to ask developers to think about high resolution screens

    - by WhiteWind
    I just got an ASUS N56VZ laptop, and it's all good, but the screen resolution is 1920x1080px. I am not asking how to scale UI elements. If someone is interested, I have found some tricks: increase font size in system settings increase the Unity dock size in MyUnity or system settings in modern Ubuntu versions. tweak userChrome.js of FireFox to make buttons|panels|icons larger add DefaultZoomLevel extension to FireFox to make it zoom pages initially. But all of it is miserable, because there are some big bugs: window decoration elements are way too small to pick with the mouse. I can't scale a window easily and I can't position my cursor quickly on the close|maximize buttons. Slider lines like the one in the sound volume dialog are hardly clickable at all. The Unity top panel (status panel & tray) can hardly contain the bigger font, so it looks ugly, but the icons are still the same. Sometimes I can't read text, as it is cropped (and I can't scale some dialogs as they have a fixed size) Chromium is not usable at all (ok, it's not an Ubuntu problem, but the problem still exists) Java applications are not scalable (same as above) In FF I am able to get decent results in most cases, but the multiplication of the system font increase and the browser font increase makes system controls (comboboxes, lists, drop-down lists) extremely big, so I can't even control the zoom level on the page. IMHO, we should post a bug report (but what kind of bug?) and vote for it! The problem is even deeper, but at least we should ask developers to think about it. So my question is: how can I post a report (the right words and the right place), and how can we (those who already have this problem, or who want it solved before a hardware upgrade) vote for a faster solution? Any ideas?

    Read the article

  • Rendering design. How can I effectively deal with forward, deferred and transparent rendering?

    - by user1423893
    I have many objects in my game world that all derive from one base class. Each object will have different materials and will therefore be required to be drawn using various rendering techniques. I currently use the following order for rendering my objects. Deferred Forward Transparent (order independent) Each object has a rendering flag that denotes which one of the above methods should be used. The list of base objects in the scene is then iterated through and added to separate lists of deferred, forward or transparent objects based on their rendering flag value. The individual lists are then iterated through and drawn using the order above. Each list is cleared at the end of the frame. This method works fairly well, but it requires different draw methods for each material type. For example, each object will require the following methods in order to be compatible with the possible flag settings. object.DrawDeferred() object.DrawForward() object.DrawTransparent() It is also hard to see where methods outside of materials, such as rendering shadow maps, would fit using this "flag & method" design. object.DrawShadow() I was hoping that someone may have some suggestions for improving this rendering process, possibly making it more generic and less verbose.

    Read the article

  • Laptop power button not working

    - by DyP
    I have a (seemingly rather exotic) Dell Latitude XT2 notebook running Lubuntu 12.04, fresh install. I tried to get the power button to work as expected (opening lubuntu-logout, just as I think it was meant to be out-of-the-box), but no success: the power button does nothing but a forced power-off on a long press. Here is some more information; please ask if anything more could help: The power button is recognized as a device: xinput lists Power Button as a slave keyboard at /dev/input/event1, and I can run a tool to listen to that device and invoke some action (successfully). But using .config/openbox/lubuntu-rc.xml to assign some action to XF86ShutDown or XF86PowerOff does nothing. Works well for any key on the keyboard. xinput --test reports a keycode (key press, key release) of 124 when pressing & releasing the power button. xmodmap -pk lists 124 mapped to XF86PowerOff which seems correct to me (btw: is this the same as XF86ShutDown?) When assigning another keysymname to the 124 keycode, e.g. 'w', still nothing happens when pressing the power button (no window seems to receive a 'w'). Works well for any key on the keyboard. The outputs of xev for a press/release of the power button irritate me: MappingNotify event, serial 41, synthetic NO, window 0x0, request MappingKeyboard, first_keycode 8, count 248 FocusOut event, serial 41, synthetic NO, window 0x2a00001, mode NotifyGrab, detail NotifyAncestor FocusIn event, serial 41, synthetic NO, window 0x2a00001, mode NotifyUngrab, detail NotifyAncestor KeymapNotify event, serial 41, synthetic NO, window 0x0, keys: 93 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 (instead of 93, I also get different values) I might have missed something very basic since I'm not very familiar with either openbox/lubuntu or the X input system. Btw: the same goes for three other special buttons on my laptop.

    Read the article

  • State Changes in a Component Based Architecture [closed]

    - by Maxem
    I'm currently working on a game and using the naive component based architecture thingie (Entities are a bag of components, entity.Update() calls Update on each updateable component); while the addition of new features is really simple, it makes a few things really difficult: a) multithreading / concurrency b) networking c) unit testing. Multithreading / Concurrency is difficult because I basically have to do poor man's concurrency (running the entity updates in separate threads while locking only stuff that crashes (like lists) and ignoring the staleness of read state (some states are already updated, others aren't)) Networking: There are no explicit state changes that I could efficiently push over the net. Unit testing: All updates may or may not conflict, so automated testing is at least awkward. I was thinking about these issues a bit and would like your input on these changes / ideas: Switch from the naive cba to a cba with sub systems that work on lists of components Make all state changes explicit Combine 1 and 2 :p Example world update: statePostProcessing.Wait() // ensure that post processing has finished Apply(postProcessedState) state = new StateBag() Concurrently( () => LifeCycleSubSystem.Update(state), // populates the state bag () => MovementSubSystem.Update(state), // populates the state bag .... }) statePostProcessing = Future(() => PostProcess(state)) statePostProcessing.Start() // Tick is finished, the post processing happens in the background So basically the changes are (consistently) based on the data for the last tick; the post processing can a) generate network packets and b) fix conflicts / remove useless changes (example: entity has been destroyed - ignore movement etc.). EDIT: To clarify the granularity of the state changes: If I save these post processed state bags and apply them to an empty world, I see exactly what has happened in the game these state bags originated from - "Free" replay capability. EDIT2: I guess I should have used the term Event instead of State Change and point out that I kind of want to use the Event Sourcing pattern.

    Read the article

  • Lubuntu 13.04, kernel still 3.8.0-32-generic

    - by Blue Ice
    My question is very similar to Ubuntu 13.10, kernel still 3.8.0-31-generic. Recently I was updating to Saucy and the ethernet cable got unplugged. So I decided to run Software Update again, to reinstall files. It returned that "everything is up to date". But according to these command-line searches, that is incorrect. How can I install Saucy now safely from the command line? sudo apt-get install lubuntu-desktop Reading package lists... Done Building dependency tree Reading state information... Done lubuntu-desktop is already the newest version. The following package was automatically installed and is no longer required: linux-image-extra-3.8.0-19-generic Use 'apt-get autoremove' to remove it. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. sudo apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. sudo apt-get update Get:1 http://extras.ubuntu.com raring Release.gpg [72 B] Hit http://extras.ubuntu.com raring Release Get:2 http://az-1.hpcloud.mirror.websitedevops.com raring Release.gpg Hit http://extras.ubuntu.com raring/main Sources Hit http://extras.ubuntu.com raring/main i386 Packages Get:3 http://az-1.hpcloud.mirror.websitedevops.com raring-updates Release.gpg Get:4 http://az-1.hpcloud.mirror.websitedevops.com raring-backports Release.gpg Get:5 http://az-1.hpcloud.mirror.websitedevops.com raring-security Release.gpg Get:6 http://az-1.hpcloud.mirror.websitedevops.com raring Release Ign http://extras.ubuntu.com raring/main Translation-en_US Ign http://extras.ubuntu.com raring/main Translation-en Hit http://ppa.launchpad.net raring Release.gpg Ign http://az-1.hpcloud.mirror.websitedevops.com raring Release E: GPG error: http://az-1.hpcloud.mirror.websitedevops.com raring Release: The following signatures were invalid: NODATA 1 NODATA 2 lsb_release -rd Description: Ubuntu 13.04 Release: 13.04 uname -r 3.8.0-32-generic
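
    One commonly suggested step (not from the thread; it assumes the hpcloud mirror is what is serving the bad Release data) is to point sources.list back at the main archive and then retry the release upgrade:

      # swap the failing mirror for the main Ubuntu archive, refresh, then upgrade to saucy
      sudo sed -i 's|az-1.hpcloud.mirror.websitedevops.com|archive.ubuntu.com|g' /etc/apt/sources.list
      sudo apt-get update
      sudo do-release-upgrade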

    Read the article

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
    More and more I’m finding myself getting lost in the files in some of my larger Web projects. There’s so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything which autocompletes you to any file, type, or member rapidly. Awesome except when I forget – or when I’m not quite sure of the name of what I’m looking for. Project navigation is still important. Sometimes while working on a project I seem to have 30 or more files open and trying to locate another new file to open in the solution often ends up being a mental exercise – “where did I put that thing?” It’s those little hesitations that tend to get in the way of workflow frequently. To make things worse, most NuGet packages for client side frameworks and scripts dump stuff into folders that I generally don’t use. I’ve never been a fan of the ‘Content’ folder in MVC which is just an empty layer that doesn’t serve much of a purpose. It’s usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It’s ugly and also inefficient as it adds additional bytes to every resource link you embed into a page. Alternatives I’ve been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I’ve found useful and maybe you find some of these useful as well or at least get some ideas what can be changed to provide better project flow. The first thing I’ve been doing is adding a root Code folder and putting all server code into that. I’m a big fan of treating the Web project root folder as my Web root folder so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes this adds another folder level for server code, but it leaves only code related things in one place that’s easier to jump back and forth in. Additionally I find myself doing a lot less with server side code these days, more with client side code so I want the server code separated from that. The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders which serve to hold only common libraries and global CSS and Scripts code. These days of building SPA style applications, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML View templates for client SPA apps like Angular. Here’s an example of what this looks like in a relatively small project: The goal is to keep things that are related together, so I don’t end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that’s open, that’s typically the only server code I work with regularly. 
Business logic lives in another project altogether, so other than the controller and maybe ViewModels there’s not a lot of code being accessed in the Code folder. So throwing that off the root and isolating it seems like an easy win. Nesting Page specific content In a lot of my existing applications that are pure server side MVC applications, perhaps with some JavaScript associated with them, I tend to have page level javascript and css files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have .css and .js files with the same name as the view in the same folder. This looks something like this: In order for this to work you have to also make a configuration change inside of the /Views/web.config file, as the Views folder is blocked with the BlockViewHandler that prohibits access to content from that folder. It’s easy to fix by changing the path from * to *.cshtml or *.vbhtml so that only view retrieval is blocked: <system.webServer> <handlers> <remove name="BlockViewHandler"/> <add name="BlockViewHandler" path="*.cshtml" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" /> </handlers> </system.webServer> With this in place, from inside of your Views you can then reference those same resources like this: <link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" /> and <script src="~/Views/Admin/QuizPrognosisItems.js"></script>, which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well. Making this happen is not really as straightforward as it should be with just Visual Studio unfortunately, as there’s no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a shortcut menu option. Using this you can select each of the ‘child’ files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view. I was even toying with the idea of throwing the controller.cs files into the Views folder, but that’s maybe going a little too far :-) It would work however as Visual Studio doesn’t publish .cs files and the compiler doesn’t care where the files live. There are lots of options and if you think that would make life easier it’s another option to help group related things together. Are there any downsides to this? Possibly – if you’re using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification as you may end up looking in multiple folders instead of a single folder. But – again that’s a one time configuration step that’s easily handled and much less intrusive than constantly having to search for files in your project. Client Side Folders The particular project shown in the screen shots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There’s a fair amount of client side stuff happening on these pages as well – specifically several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views and the layout I’ve shown above really focuses on the server side aspect where there are Razor views with related script and css resources. 
For applications that are more client centric and have a lot more script and HTML template based content I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach where all the application pieces that make up the SPA application end up below the App folder. Here’s what that looks like for me – in this case an AngularJs project: In this case the App folder holds both the application specific js files, and the partial HTML views that get loaded into this single SPA page application. In this particular Angular SPA application that has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – Angular Js Controllers in this case – with the actual partials. Again I like the proximity of the view with the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files. This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented this approach starts falling apart and you’ll probably be better off with separate folders in the js folder. Following Angular conventions you’d have controllers/directives/services etc. folders. Please note that I’m not saying any of these ways are right or wrong – this is just what has worked for me and why! Skipping Project Navigation altogether with Resharper I’ve talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything, you can quickly navigate with a shortcut key and autocomplete search. Here’s what Resharper’s jump to anything looks like: Resharper’s Goto Anything box lets you type and quick search over files, classes and members of the entire solution which is a very fast and powerful way to find what you’re looking for in your project, bypassing the solution explorer altogether. As long as you remember to use it (which I sometimes don’t) and you know what you’re looking for, it’s by far the quickest way to find things in a project. It’s a shame that this sort of a simple search interface isn’t part of the native Visual Studio IDE. Work how you like to work Ultimately it all comes down to workflow and how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don’t get in the way of how you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we’re dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced and project organization definitely is something that can get in the way if it doesn’t fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As with so many things in ASP.NET, as things evolve and tend to get more complex I’ve found that I end up fighting some of the conventions. The good news is that you don’t have to follow the conventions and you have the freedom to do just about anything that works for you. 
Even though what I’ve shown here diverges from conventions, I don’t think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless think long and hard before breaking those conventions – if there isn’t a good reason to break them or the changes don’t provide improved workflow then it’s not worth it. Break the rules, but only if there’s a quantifiable benefit. You may not agree with how I’ve chosen to divert from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate for your environment. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET MVC.

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere within the middle; and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS we can see why: Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss). Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required. Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below). Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. 
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project change and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developer’s skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right-end of the SEMS (the exact practices and techniques that translates to is constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right-end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be if you could make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Have suggestions for these assembly mnemonics?

    - by Noctis Skytower
    Greetings! Last semester in college, my teacher in the Computer Languages class taught us the esoteric language named Whitespace. In the interest of learning the language better with a very busy schedule (midterms), I wrote an interpreter and assembler in Python. An assembly language was designed to facilitate writing programs easily, and a sample program was written with the given assembly mnemonics. Now that it is summer, a new project has begun with the objective being to rewrite the interpreter and assembler for Whitespace 0.3, with further developments coming afterwards. Since there is so much extra time than before to work on its design, you are presented here with an outline that provides a revised set of mnemonics for the assembly language. This post is marked as a wiki for their discussion. Have you ever had any experience with assembly languages in the past? Were there some instructions that you thought should have been renamed to something different? Did you find yourself thinking outside the box and with a different paradigm than in which the mnemonics were named? If you can answer yes to any of those questions, you are most welcome here. Subjective answers are appreciated! Stack Manipulation (IMP: [Space]) Stack manipulation is one of the more common operations, hence the shortness of the IMP [Space]. There are four stack instructions. hold N Push the number onto the stack copy Duplicate the top item on the stack copy N Copy the nth item on the stack (given by the argument) onto the top of the stack swap Swap the top two items on the stack drop Discard the top item on the stack drop N Slide n items off the stack, keeping the top item Arithmetic (IMP: [Tab][Space]) Arithmetic commands operate on the top two items on the stack, and replace them with the result of the operation. The first item pushed is considered to be left of the operator. add Addition sub Subtraction mul Multiplication div Integer Division mod Modulo Heap Access (IMP: [Tab][Tab]) Heap access commands look at the stack to find the address of items to be stored or retrieved. To store an item, push the address then the value and run the store command. To retrieve an item, push the address and run the retrieve command, which will place the value stored in the location at the top of the stack. save Store load Retrieve Flow Control (IMP: [LF]) Flow control operations are also common. Subroutines are marked by labels, as well as the targets of conditional and unconditional jumps, by which loops can be implemented. Programs must be ended by means of [LF][LF][LF] so that the interpreter can exit cleanly. L: Mark a location in the program call L Call a subroutine goto L Jump unconditionally to a label if=0 L Jump to a label if the top of the stack is zero if<0 L Jump to a label if the top of the stack is negative return End a subroutine and transfer control back to the caller halt End the program I/O (IMP: [Tab][LF]) Finally, we need to be able to interact with the user. There are IO instructions for reading and writing numbers and individual characters. With these, string manipulation routines can be written. The read instructions take the heap address in which to store the result from the top of the stack. 
print chr Output the character at the top of the stack print int Output the number at the top of the stack input chr Read a character and place it in the location given by the top of the stack input int Read a number and place it in the location given by the top of the stack Question: How would you redesign, rewrite, or rename the previous mnemonics and for what reasons?

    Read the article

  • Are their any suggestions for this new assembly language?

    - by Noctis Skytower
    Greetings! Last semester in college, my teacher in the Computer Languages class taught us the esoteric language named Whitespace. In the interest of learning the language better with a very busy schedule (midterms), I wrote an interpreter and assembler in Python. An assembly language was designed to facilitate writing programs easily, and a sample program was written with the given assembly mnemonics. Now that it is summer, a new project has begun with the objective being to rewrite the interpreter and assembler for Whitespace 0.3, with further developments coming afterwards. Since there is so much extra time than before to work on its design, you are presented here with an outline that provides a revised set of mnemonics for the assembly language. This post is marked as a wiki for their discussion. Have you ever had any experience with assembly languages in the past? Were there some instructions that you thought should have been renamed to something different? Did you find yourself thinking outside the box and with a different paradigm than in which the mnemonics were named? If you can answer yes to any of those questions, you are most welcome here. Subjective answers are appreciated! Stack Manipulation (IMP: [Space]) Stack manipulation is one of the more common operations, hence the shortness of the IMP [Space]. There are four stack instructions. hold N Push the number onto the stack copy Duplicate the top item on the stack copy N Copy the nth item on the stack (given by the argument) onto the top of the stack swap Swap the top two items on the stack drop Discard the top item on the stack drop N Slide n items off the stack, keeping the top item Arithmetic (IMP: [Tab][Space]) Arithmetic commands operate on the top two items on the stack, and replace them with the result of the operation. The first item pushed is considered to be left of the operator. add Addition sub Subtraction mul Multiplication div Integer Division mod Modulo Heap Access (IMP: [Tab][Tab]) Heap access commands look at the stack to find the address of items to be stored or retrieved. To store an item, push the address then the value and run the store command. To retrieve an item, push the address and run the retrieve command, which will place the value stored in the location at the top of the stack. save Store load Retrieve Flow Control (IMP: [LF]) Flow control operations are also common. Subroutines are marked by labels, as well as the targets of conditional and unconditional jumps, by which loops can be implemented. Programs must be ended by means of [LF][LF][LF] so that the interpreter can exit cleanly. L: Mark a location in the program call L Call a subroutine goto L Jump unconditionally to a label if=0 L Jump to a label if the top of the stack is zero if<0 L Jump to a label if the top of the stack is negative return End a subroutine and transfer control back to the caller exit End the program I/O (IMP: [Tab][LF]) Finally, we need to be able to interact with the user. There are IO instructions for reading and writing numbers and individual characters. With these, string manipulation routines can be written. The read instructions take the heap address in which to store the result from the top of the stack. 
print chr Output the character at the top of the stack print int Output the number at the top of the stack input chr Read a character and place it in the location given by the top of the stack input int Read a number and place it in the location given by the top of the stack Question: How would you redesign, rewrite, or rename the previous mnemonics and for what reasons?

    Read the article

  • Are there any suggestions for these new assembly mnemonics?

    - by Noctis Skytower
    Greetings! Last semester in college, my teacher in the Computer Languages class taught us the esoteric language named Whitespace. In the interest of learning the language better with a very busy schedule (midterms), I wrote an interpreter and assembler in Python. An assembly language was designed to facilitate writing programs easily, and a sample program was written with the given assembly mnemonics. Now that it is summer, a new project has begun with the objective being to rewrite the interpreter and assembler for Whitespace 0.3, with further developments coming afterwards. Since there is so much extra time than before to work on its design, you are presented here with an outline that provides a revised set of mnemonics for the assembly language. This post is marked as a wiki for their discussion. Have you ever had any experience with assembly languages in the past? Were there some instructions that you thought should have been renamed to something different? Did you find yourself thinking outside the box and with a different paradigm than in which the mnemonics were named? If you can answer yes to any of those questions, you are most welcome here. Subjective answers are appreciated! Stack Manipulation (IMP: [Space]) Stack manipulation is one of the more common operations, hence the shortness of the IMP [Space]. There are four stack instructions. hold N Push the number onto the stack copy Duplicate the top item on the stack copy N Copy the nth item on the stack (given by the argument) onto the top of the stack swap Swap the top two items on the stack drop Discard the top item on the stack drop N Slide n items off the stack, keeping the top item Arithmetic (IMP: [Tab][Space]) Arithmetic commands operate on the top two items on the stack, and replace them with the result of the operation. The first item pushed is considered to be left of the operator. add Addition sub Subtraction mul Multiplication div Integer Division mod Modulo Heap Access (IMP: [Tab][Tab]) Heap access commands look at the stack to find the address of items to be stored or retrieved. To store an item, push the address then the value and run the store command. To retrieve an item, push the address and run the retrieve command, which will place the value stored in the location at the top of the stack. save Store load Retrieve Flow Control (IMP: [LF]) Flow control operations are also common. Subroutines are marked by labels, as well as the targets of conditional and unconditional jumps, by which loops can be implemented. Programs must be ended by means of [LF][LF][LF] so that the interpreter can exit cleanly. L: Mark a location in the program call L Call a subroutine goto L Jump unconditionally to a label if=0 L Jump to a label if the top of the stack is zero if<0 L Jump to a label if the top of the stack is negative return End a subroutine and transfer control back to the caller halt End the program I/O (IMP: [Tab][LF]) Finally, we need to be able to interact with the user. There are IO instructions for reading and writing numbers and individual characters. With these, string manipulation routines can be written. The read instructions take the heap address in which to store the result from the top of the stack. 
print chr Output the character at the top of the stack print int Output the number at the top of the stack input chr Read a character and place it in the location given by the top of the stack input int Read a number and place it in the location given by the top of the stack Question: How would you redesign, rewrite, or rename the previous mnemonics and for what reasons?

    Read the article

  • Do you have suggestions for these assembly mnemonics?

    - by Noctis Skytower
    Greetings! Last semester in college, my teacher in the Computer Languages class taught us the esoteric language named Whitespace. In the interest of learning the language better with a very busy schedule (midterms), I wrote an interpreter and assembler in Python. An assembly language was designed to facilitate writing programs easily, and a sample program was written with the given assembly mnemonics. Now that it is summer, a new project has begun with the objective of rewriting the interpreter and assembler for Whitespace 0.3, with further developments coming afterwards. Since there is much more time than before to work on its design, you are presented here with an outline that provides a revised set of mnemonics for the assembly language. This post is marked as a wiki for their discussion. Have you ever had any experience with assembly languages in the past? Were there some instructions that you thought should have been renamed to something different? Did you find yourself thinking outside the box and with a different paradigm than the one in which the mnemonics were named? If you can answer yes to any of those questions, you are most welcome here. Subjective answers are appreciated!
    Stack Manipulation (IMP: [Space]) - Stack manipulation is one of the more common operations, hence the shortness of the IMP [Space]. There are six stack instructions.
      hold N - Push the number onto the stack
      copy - Duplicate the top item on the stack
      copy N - Copy the nth item on the stack (given by the argument) onto the top of the stack
      swap - Swap the top two items on the stack
      drop - Discard the top item on the stack
      drop N - Slide n items off the stack, keeping the top item
    Arithmetic (IMP: [Tab][Space]) - Arithmetic commands operate on the top two items on the stack, and replace them with the result of the operation. The first item pushed is considered to be left of the operator.
      add - Addition
      sub - Subtraction
      mul - Multiplication
      div - Integer Division
      mod - Modulo
    Heap Access (IMP: [Tab][Tab]) - Heap access commands look at the stack to find the address of items to be stored or retrieved. To store an item, push the address then the value and run the store command. To retrieve an item, push the address and run the retrieve command, which will place the value stored in the location at the top of the stack.
      save - Store
      load - Retrieve
    Flow Control (IMP: [LF]) - Flow control operations are also common. Subroutines are marked by labels, as are the targets of conditional and unconditional jumps, by which loops can be implemented. Programs must be ended by means of [LF][LF][LF] so that the interpreter can exit cleanly.
      L: - Mark a location in the program
      call L - Call a subroutine
      goto L - Jump unconditionally to a label
      if=0 L - Jump to a label if the top of the stack is zero
      if<0 L - Jump to a label if the top of the stack is negative
      return - End a subroutine and transfer control back to the caller
      halt - End the program
    I/O (IMP: [Tab][LF]) - Finally, we need to be able to interact with the user. There are I/O instructions for reading and writing numbers and individual characters. With these, string manipulation routines can be written. The read instructions take the heap address in which to store the result from the top of the stack.
      print chr - Output the character at the top of the stack
      print int - Output the number at the top of the stack
      input chr - Read a character and place it in the location given by the top of the stack
      input int - Read a number and place it in the location given by the top of the stack
    Question: How would you redesign, rewrite, or rename the previous mnemonics, and for what reasons?
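    For anyone who wants to experiment with the proposed names, here is a rough sketch (in C#, not the original Python tooling) of how the mnemonics could be tabulated against the Whitespace 0.3 token sequences; 'S', 'T' and 'L' stand for [Space], [Tab] and [LF], number and label encodings are omitted, and the table is an illustration only, not part of the actual assembler.

    using System.Collections.Generic;

    static class Mnemonics
    {
        // Proposed mnemonic -> Whitespace token sequence (S = [Space], T = [Tab], L = [LF]).
        // Instructions that take a number or label expect it to be encoded after these tokens.
        public static readonly Dictionary<string, string> Opcodes = new Dictionary<string, string>
        {
            // Stack manipulation (IMP: S)
            { "hold N",  "SS"  },  // push a number
            { "copy",    "SLS" },  // duplicate the top item
            { "copy N",  "STS" },  // copy the nth item to the top
            { "swap",    "SLT" },  // swap the top two items
            { "drop",    "SLL" },  // discard the top item
            { "drop N",  "STL" },  // slide n items off, keeping the top
            // Arithmetic (IMP: TS)
            { "add", "TSSS" }, { "sub", "TSST" }, { "mul", "TSSL" },
            { "div", "TSTS" }, { "mod", "TSTT" },
            // Heap access (IMP: TT)
            { "save", "TTS" }, { "load", "TTT" },
            // Flow control (IMP: L)
            { "L:",     "LSS" },  // mark a label
            { "call L", "LST" }, { "goto L", "LSL" },
            { "if=0 L", "LTS" }, { "if<0 L", "LTT" },
            { "return", "LTL" }, { "halt",   "LLL" },
            // I/O (IMP: TL)
            { "print chr", "TLSS" }, { "print int", "TLST" },
            { "input chr", "TLTS" }, { "input int", "TLTT" },
        };
    }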

    Read the article

  • Game logic dynamically extendable architecture implementation patterns

    - by Vlad
    When coding games there are a lot of cases where you need to inject your logic into an existing class dynamically and without creating unnecessary dependencies. For example, I have a Rabbit which can be affected by a freeze ability so it can't jump. It could be implemented like this: class Rabbit { public bool CanJump { get; set; } void Jump() { if (!CanJump) return; ... } } But what if I have more than one ability that can prevent it from jumping? I can't just set one property because several circumstances can be active simultaneously. Another solution? class Rabbit { public bool Frozen { get; set; } public bool InWater { get; set; } bool CanJump { get { return !Frozen && !InWater; } } } Bad. The Rabbit class can't know all the circumstances it can run into. Who knows what else the game designer will want to add: maybe an ability that changes gravity in an area? Maybe a stack of bool values for the CanJump property? No, because abilities may not be deactivated in the same order in which they were activated. I need a way to separate the ability logic that prevents the Rabbit from jumping from the Rabbit itself. One possible solution is a special checking event: class Rabbit { class CheckJumpEventArgs : EventArgs { public bool Veto { get; set; } } public event EventHandler<CheckJumpEventArgs> OnCheckJump; void Jump() { var args = new CheckJumpEventArgs(); if (OnCheckJump != null) OnCheckJump(this, args); if (args.Veto) return; ... } } But it's a lot of code! A real Rabbit class would have a lot of properties like this (health and speed attributes, etc.). I'm thinking of borrowing something from the MVVM pattern, where all the properties and methods of an object are implemented in a way that lets them be easily extended from outside. Then I want to use it like this: class FreezeAbility { void ActivateAbility() { _rabbit.CanJump.Push(ReturnFalse); } void DeactivateAbility() { _rabbit.CanJump.Remove(ReturnFalse); } // should be implemented as an instance member // so it can be "unsubscribed" bool ReturnFalse(bool previousValue) { return false; } } Is this approach good? What else should I consider? What are other suitable options and patterns? Any ready-to-use solutions? UPDATE: The question is not about how to add different behaviors to an object dynamically, but about how its (or its behavior's) implementation can be extended with external logic. I don't need to add a different behavior; I need a way to modify an existing one, and I also need the possibility to undo changes.
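    One way to get exactly the usage sketched at the end of the question is to wrap each overridable property in a small container that folds pushed modifier delegates over a base value. The sketch below is only an illustration of that idea; the names OverridableBool, Rabbit and FreezeAbility are invented for the example and are not an established API.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // A boolean whose effective value can be overridden by any number of
    // abilities; each ability pushes a modifier and removes it later, in any order.
    class OverridableBool
    {
        private readonly bool _baseValue;
        private readonly List<Func<bool, bool>> _modifiers = new List<Func<bool, bool>>();

        public OverridableBool(bool baseValue) { _baseValue = baseValue; }

        public void Push(Func<bool, bool> modifier)   { _modifiers.Add(modifier); }
        public void Remove(Func<bool, bool> modifier) { _modifiers.Remove(modifier); }

        // Fold every registered modifier over the base value.
        public bool Value
        {
            get { return _modifiers.Aggregate(_baseValue, (current, m) => m(current)); }
        }
    }

    class Rabbit
    {
        public OverridableBool CanJump = new OverridableBool(true);

        public void Jump()
        {
            if (!CanJump.Value) return;
            // ... actual jump logic ...
        }
    }

    class FreezeAbility
    {
        private readonly Rabbit _rabbit;
        public FreezeAbility(Rabbit rabbit) { _rabbit = rabbit; }

        public void Activate()   { _rabbit.CanJump.Push(ReturnFalse); }
        public void Deactivate() { _rabbit.CanJump.Remove(ReturnFalse); }

        // Instance method, so the same delegate target can be removed again.
        private bool ReturnFalse(bool previousValue) { return false; }
    }

    Because List<T>.Remove compares delegates by target and method, an ability can remove the same instance-method delegate it pushed, regardless of the order in which abilities are deactivated.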

    Read the article

  • Update Manager unable to get updates

    - by dPEN
    In last few days my Ubuntu 11.10 update manager is unable to get new updates. When I checked update log I saw that for couple of updates it says "Network isn't available". For other updates it downloaded logs and and internet connection also works fine. Unable to attached screenshot due to SPAM prevention policy. But for below Release gpgv:/var/lib/apt/lists/partial/extras.ubuntu.com_ubuntu_dists_oneiric_Release.gpg it says "Network isn't available" For all other Releases it is downloading fine. And due to this I dont see any update available in last 10 days. LOG OF sudo apt-get update: dipen@EIDLCPU1018:~$ sudo apt-get update [sudo] password for dipen: Ign http:/extras.ubuntu.com oneiric InRelease Ign http:/archive.canonical.com oneiric InRelease Ign http:/archive.canonical.com lucid InRelease Get:1 http:/extras.ubuntu.com oneiric Release.gpg [72 B] Get:2 http:/archive.canonical.com oneiric Release.gpg [198 B] Hit http:/extras.ubuntu.com oneiric Release Get:3 http:/archive.canonical.com lucid Release.gpg [198 B] Err http:/extras.ubuntu.com oneiric Release Hit http:/archive.canonical.com oneiric Release Ign http:/archive.canonical.com oneiric Release Hit http:/archive.canonical.com lucid Release Ign http:/archive.canonical.com lucid Release Ign http:/archive.canonical.com oneiric/partner i386 Packages/DiffIndex Ign http:/archive.canonical.com oneiric/partner TranslationIndex Ign http:/archive.canonical.com lucid/partner i386 Packages/DiffIndex Ign http:/archive.canonical.com lucid/partner TranslationIndex Hit http:/archive.canonical.com oneiric/partner i386 Packages Hit http:/archive.canonical.com lucid/partner i386 Packages Ign http:/dl.google.com stable InRelease Ign http:/archive.canonical.com oneiric/partner Translation-en_IN Ign http:/archive.canonical.com oneiric/partner Translation-en Ign http:/archive.canonical.com lucid/partner Translation-en_IN Ign http:/archive.canonical.com lucid/partner Translation-en Ign http:/in.archive.ubuntu.com oneiric InRelease Ign http:/in.archive.ubuntu.com oneiric-updates InRelease Ign http:/in.archive.ubuntu.com oneiric-security InRelease Get:4 http//dl.google.com stable Release.gpg [198 B] Get:5 http//in.archive.ubuntu.com oneiric Release.gpg [198 B] Get:6 http//in.archive.ubuntu.com oneiric-updates Release.gpg [198 B] Get:7 http//in.archive.ubuntu.com oneiric-security Release.gpg [198 B] Hit http:/in.archive.ubuntu.com oneiric Release Ign http:/in.archive.ubuntu.com oneiric Release Hit http:/in.archive.ubuntu.com oneiric-updates Release Err http:/in.archive.ubuntu.com oneiric-updates Release Hit http:/in.archive.ubuntu.com oneiric-security Release Ign http:/in.archive.ubuntu.com oneiric-security Release Ign http:/in.archive.ubuntu.com oneiric/main i386 Packages/DiffIndex Ign http:/in.archive.ubuntu.com oneiric/restricted i386 Packages/DiffIndex Ign http:/in.archive.ubuntu.com oneiric/universe i386 Packages/DiffIndex Ign http:/in.archive.ubuntu.com oneiric/multiverse i386 Packages/DiffIndex Hit http:/in.archive.ubuntu.com oneiric/main TranslationIndex Hit http:/in.archive.ubuntu.com oneiric/multiverse TranslationIndex Hit http:/in.archive.ubuntu.com oneiric/restricted TranslationIndex Hit http:/in.archive.ubuntu.com oneiric/universe TranslationIndex Ign http:/in.archive.ubuntu.com oneiric-security/main i386 Packages/DiffIndex Ign http:/in.archive.ubuntu.com oneiric-security/restricted i386 Packages/DiffIndex Ign http:/in.archive.ubuntu.com oneiric-security/universe i386 Packages/DiffIndex Ign http:/in.archive.ubuntu.com 
oneiric-security/multiverse i386 Packages/DiffIndex Hit http:/in.archive.ubuntu.com oneiric-security/main TranslationIndex Hit http:/in.archive.ubuntu.com oneiric-security/multiverse TranslationIndex Hit http:/in.archive.ubuntu.com oneiric-security/restricted TranslationIndex Hit http:/in.archive.ubuntu.com oneiric-security/universe TranslationIndex Hit http:/in.archive.ubuntu.com oneiric/main i386 Packages Hit http:/in.archive.ubuntu.com oneiric/restricted i386 Packages Hit http:/in.archive.ubuntu.com oneiric/universe i386 Packages Hit http:/in.archive.ubuntu.com oneiric/multiverse i386 Packages Hit http:/in.archive.ubuntu.com oneiric/main Translation-en Hit http:/in.archive.ubuntu.com oneiric/multiverse Translation-en Hit http:/in.archive.ubuntu.com oneiric/restricted Translation-en Hit http:/in.archive.ubuntu.com oneiric/universe Translation-en Hit http:/in.archive.ubuntu.com oneiric-security/main i386 Packages Hit http:/in.archive.ubuntu.com oneiric-security/restricted i386 Packages Hit http:/in.archive.ubuntu.com oneiric-security/universe i386 Packages Hit http:/in.archive.ubuntu.com oneiric-security/multiverse i386 Packages Hit http:/in.archive.ubuntu.com oneiric-security/main Translation-en Hit http:/in.archive.ubuntu.com oneiric-security/multiverse Translation-en Get:8 http//dl.google.com stable Release [1,347 B] Hit http:/in.archive.ubuntu.com oneiric-security/restricted Translation-en Hit http:/in.archive.ubuntu.com oneiric-security/universe Translation-en Get:9 http//dl.google.com stable/main i386 Packages [1,214 B] Ign http:/dl.google.com stable/main TranslationIndex Ign http:/dl.google.com stable/main Translation-en_IN Ign http:/dl.google.com stable/main Translation-en Fetched 3,821 B in 41s (91 B/s) Reading package lists... Done W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http:/extras.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key <[email protected]> W: GPG error: http:/archive.canonical.com oneiric Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]> W: GPG error: http:/archive.canonical.com lucid Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]> W: GPG error: http:/in.archive.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]> W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http:/in.archive.ubuntu.com oneiric-updates Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]> W: GPG error: http:/in.archive.ubuntu.com oneiric-security Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]> W: Failed to fetch http:/extras.ubuntu.com/ubuntu/dists/oneiric/Release W: Failed to fetch http:/in.archive.ubuntu.com/ubuntu/dists/oneiric-updates/Release W: Some index files failed to download. They have been ignored, or old ones used instead. dipen@EIDLCPU1018:~$ LOG of sudo apt-get upgrade: dipen@EIDLCPU1018:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... 
Done The following packages have been kept back: ghc6-doc haskell-zlib-doc libghc6-zlib-doc 0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded. dipen@EIDLCPU1018:~$
/etc/apt/sources.list:
deb http://in.archive.ubuntu.com/ubuntu/ oneiric main restricted
deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted
deb http://in.archive.ubuntu.com/ubuntu/ oneiric universe
deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates universe
deb http://in.archive.ubuntu.com/ubuntu/ oneiric multiverse
deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse
deb http://archive.canonical.com/ubuntu oneiric partner
deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security main restricted
deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security universe
deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security multiverse
deb http://extras.ubuntu.com/ubuntu oneiric main #Third party developers repository
deb http://archive.canonical.com/ lucid partner

    Read the article

  • Demystified - BI in SharePoint 2010

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Frequently, my clients ask me if there is a good guide on deciphering the seemingly daunting choice of products from Microsoft when it comes to business intelligence offerings in a SharePoint 2010 world. These are all described in detail in my book, but here is a one (well maybe two) page executive overview. Microsoft Excel: Yes, Microsoft Excel! Your favorite and most commonly used in the world database. No it isn’t a database in technical pure definitions, but this is the most commonly used ‘database’ in the world. You will find many business users craft up very compelling excel sheets with tonnes of logic inside them. Good for: Quick Ad-Hoc reports. Excel 64 bit allows the possibility of very large datasheets (Also see 32 bit vs 64 bit Office, and PowerPivot Add-In below). Audience: End business user can build such solutions. Related technologies: PowerPivot, Excel Services Microsoft Excel with PowerPivot Add-In: The powerpivot add-in is an extension to Excel that adds support for large-scale data. Think of this as Excel with the ability to deal with very large amounts of data. It has an in-memory data store as an option for Analysis services. Good for: Ad-hoc reporting and logic with very large amounts of data. Audience: End business user can build such solutions. Related technologies: Excel, and Excel Services Excel Services: Excel Services is a Microsoft SharePoint Server 2010 shared service that brings the power of Excel to SharePoint Server by providing server-side calculation and browser-based rendering of Excel workbooks. Thus, excel sheets can be created by end users, and published to SharePoint server – which are then rendered right through the browser in read-only or parameterized-read-only modes. They can also be accessed by other software via SOAP or REST based APIs. Good for: Sharing excel sheets with a larger number of people, while maintaining control/version control etc. Sharing logic embedded in excel sheets with other software across the organization via REST/SOAP interfaces Audience: End business users can build such solutions once your tech staff has setup excel services on a SharePoint server instance. Programmers can write software consuming functionality/complex formulae contained in your sheets. Related technologies: PerformancePoint Services, Excel, and PowerPivot. Visio Services: Visio Services is a shared service on the Microsoft SharePoint Server 2010 platform that allows users to share and view Visio diagrams that may or may not have data connected to them. Connected data can update these diagrams allowing a visual/graphical view into the data. The diagrams are viewable through the browser. They are rendered in silverlight, but will automatically down-convert to .png formats. Good for: Showing data as diagrams, live updating. Comes with a developer story. Audience: End business users can build such solutions once your tech staff has setup visio services on a SharePoint server instance. Developers can enhance the visualizations Related Technologies: Visio Services can be used to render workflow visualizations in SP2010 Reporting Services: SQL Server reporting services can integrate with SharePoint, allowing you to store reports and data sources in SharePoint document libraries, and render these reports and associated functionality such as subscriptions through a SharePoint site. 
In SharePoint 2010, you can also write reports against SharePoint lists (access services uses this technique). Good for: Showing complex reports running in a industry standard data store, such as SQL server. Audience: This is definitely developer land. Don’t expect end users to craft up reports, unless a report model has previously been published. Related Technologies: PerformancePoint Services PerformancePoint Services: PerformancePoint Services in SharePoint 2010 is now fully integrated with SharePoint, and comes with features that can either be used in the BI center site definition, or on their own as activated features in existing site collections. PerformancePoint services allows you to build reports and dashboards that target a variety of back-end datasources including: SQL Server reporting services, SQL Server analysis services, SharePoint lists, excel services, simple tables, etc. Using these you have the ability to create dashboards, scorecards/kpis, and simple reports. You can also create reports targeting hierarchical multidimensional data sources. The visual decomposition tree is a new report type that lets you quickly breakdown multi-dimensional data. Good for: Mostly everything :), except your wallet – it’s not free! But this is the most comprehensive offering. If you have SharePoint server, forget everything and go with performance point. Audience: Developers need to setup the back-end sources, manageability story. DBAs need to setup datawarehouses with cubes. Moderately sophisticated business users, or developers can craft up reports using dashboard designer which is a click-once App that deploys with PerformancePoint Related Technologies: Excel services, reporting services, etc.   Other relevant technologies to know about: Business Connectivity Services: Allows for consumption of external data in SharePoint as columns or external lists. This can be paired with one or more of the above BI offerings allowing insight into such data. Access Services: Allows the representation/publishing of an access database as a SharePoint 2010 site, leveraging many SharePoint features. Reporting services is used by Access services. Secure Store Service: The SP2010 Secure store service is a replacement for the SP2007 single sign on feature. This acts as a credential policeman providing credentials to various applications running with SharePoint. BCS, PerformancePoint Services, Excel Services, and many other apps use the SSS (Secure Store Service) for credential control. Comment on the article ....
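    As a small, hedged illustration of the REST access mentioned for Excel Services above: a published workbook can be queried over HTTP and its values consumed from other software. The server, site, library and workbook names below are hypothetical, and the exact URL segments vary by farm, so treat this as a sketch rather than documented guidance.

    using System;
    using System.Net;

    class ExcelServicesRestDemo
    {
        static void Main()
        {
            // Hypothetical farm, site and workbook; substitute your own.
            const string url =
                "http://intranet/sites/finance/_vti_bin/ExcelRest.aspx" +
                "/Shared%20Documents/Forecast.xlsx/Model/Ranges('Sheet1!A1')?$format=atom";

            using (var client = new WebClient { UseDefaultCredentials = true })
            {
                // The service returns an Atom/XML payload containing the requested cell value.
                Console.WriteLine(client.DownloadString(url));
            }
        }
    }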

    Read the article

  • How to install iodbc in Ubuntu 12.10

    - by Hanynowsky
    I need to install the library iodbc (depends on libodbc2) in my Quantal machine. Yet there is creepy dependency problem. This was replaced somehow by unixodbc which I don't have, installed. Here is what I get when I try to install : sudo apt-get install libiodbc2-dev Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libiodbc2-dev : Depends: libiodbc2 (= 3.52.7-2build2) but it is not going to be installed Depends: iodbc (= 3.52.7-2build2) but it is not going to be installed E: Unable to correct problems, you have held broken packages. I can tell that iodbc conflicts with odbcinst. Yet, I cannot remove it due to the following: sudo apt-get remove odbcinst Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: akonadi-backend-mysql calligra-l10n-engb kde-l10n-engb kdevelop-l10n kdevelop-php-docs-l10n kdevelop-php-l10n libakonadi-kabc4 libakonadi-notes4 libakonadiprotocolinternals1 libboost-thread1.49.0 libdmtx0a libgpgme++2 libkcalcore4 libkdgantt2 libkholidays4 libkimap4 libkldap4 libkmbox4 libkmime4 libkolabxml0 libkpgp4 libkresources4 libksieve4 libprison0 libqgpgme1 libqrencode3 libxerces-c3.1 Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: akonadi-server k3b k3b-i18n katepart kde-runtime kdelibs-bin kdelibs5-plugins kdepim-runtime kdepim-strigi-plugins kdepimlibs-kio-plugins kdoctools kget kmag kmail kmix kmousetool ksystemlog kubuntu-debug-installer language-pack-kde-ar language-pack-kde-en libakonadi-calendar4 libakonadi-contact4 libakonadi-kcal4 libakonadi-kde4 libakonadi-kmime4 libcalendarsupport4 libincidenceeditorsng4 libk3b6 libkabc4 libkactivities-bin libkactivities6 libkalarmcal2 libkatepartinterfaces4 libkcal4 libkcalutils4 libkcddb4 libkde3support4 libkdepim4 libkdepimdbusinterfaces4 libkdewebkit5 libkemoticons4 libkfile4 libkgapi0 libkhtml5 libkio5 libkleo4 libkmanagesieve4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkolab0 libkonq-common libkonq5abi1 libkontactinterface4 libkparts4 libkpimidentities4 libkpimtextedit4 libkpimutils4 libkprintutils4 libksieveui4 libktexteditor4 libktnef4 libktorrent4 libkworkspace4abi2 libkxmlrpcclient4 libmailcommon4 libmailimporter4 libmailtransport4 libmessagecomposer4 libmessagecore4 libmessagelist4 libmessageviewer4 libmicroblog4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomuksync4 libnepomukutils4 libplasma3 libsoprano4 libtemplateparser4 libvirtodbc0 nepomuk-core odbcinst odbcinst1debian2 plasma-scriptengine-javascript qapt-batch soprano-daemon virtuoso-minimal 0 upgraded, 0 newly installed, 89 to remove and 0 not upgraded. After this operation, 126 MB disk space will be freed. Do you want to continue [Y/n]? Other info: Why do the "iodbc" and "libmyodbc" packages conflict with each other? The root problem is that I need to use a new feature introduced in the latest MySQL Workbench builds (DB Migration) which uses odbc for that matter. Here is what Mysql doc says about it : Linux The Migration Wizard uses iODBC as a driver manager for all of its ODBC connections in Linux. 
This may give you some troubles because most Linux distributions provide ODBC drivers compiled against unixODBC. This is another driver manager not supported by MySQL Workbench so you won’t be able to use those drivers unless you compile them against iODBC. Here’s what you should do. Make sure that you have iODBC installed. If you are using Debian, Ubuntu or another Debian based distro, type this command in your terminal: $ sudo apt-get install iodbc libiodbc2-dev libpq-dev libssl-dev For RPM based distros (RedHat, Fedora, etc.) type this command instead of the previous one: $ sudo yum install iodbc iodbc-dev libpqxx-devel openssl-devel Now we need to install the PostgreSQL ODBC drivers. Download the psqlODBC source tarball from here. Use the latest version available for download. As of this writing the latest version corresponds to the file psqlodbc-09.01.0200.tar.gz. Extract this tarball to a directory in your hard drive and open a terminal and cd into that directory. Type this in the terminal window: $ ./configure --with-iodbc --enable-pthreads $ make $ sudo make install Verify that you have the file psqlodbcw.so in the /usr/local/lib directory. This package seems to pose the problem probably: dpkg -s odbcinst1debian2 Package: odbcinst1debian2 Status: install ok installed Priority: optional Section: libs Installed-Size: 241 Maintainer: Ubuntu Developers <[email protected]> Architecture: amd64 Multi-Arch: same Source: unixodbc Version: 2.2.14p2-5ubuntu4 Replaces: unixodbc (<< 2.1.1-2) Depends: libc6 (>= 2.14), libltdl7 (>= 2.4.2), odbcinst Pre-Depends: multiarch-support Breaks: libiodbc2, libmyodbc (<< 5.1.6-2), odbc-postgresql (<< 1:09.00.0310-1.1), tdsodbc (<< 0.82-8) Conflicts: odbcinst1, odbcinst1debian1 Description: Support library for accessing odbc ini files This package contains the libodbcinst library from unixodbc, a library used by ODBC drivers for reading their configuration settings from /etc/odbc.ini and ~/.odbc.ini. . Also contained in this package are the driver setup plugins, which describe the features supported by individual ODBC drivers. Homepage: http://www.unixodbc.org/ Original-Maintainer: Steve Langasek <[email protected]> I have no broken packages:

    Read the article

  • RDA Health Checks for SOA

    - by ShawnBailey
    What is a health check in RDA?
    A health check evaluates something in your environment to determine whether a change needs to be considered in order to avoid a problem or optimize functionality. Examples of what this 'something' might be are: Configuration Parameters, JVM Options, Runtime Statistics.
    What have we done for SOA?
    In the latest release of RDA, 4.30, we have added a Rule Set for SOA called 'Oracle SOA 11g (11.1.1) Post Installation (Generic)'. This Rule Set contains 14 SOA related health checks. These checks were all derived from common issues / solutions we see in support of the SOA product. Many of the recommendations come from the product documentation while others are covered in the SOA Knowledge Base. Our goal is that you will be able to easily identify the areas of concern and understand the guidance available from the output of the Rule Set.
    Running the health checks for SOA
    The rules that the checks use are installed with RDA and bundled by product or functional area into what are called 'Rule Sets'. To view the available Rule Sets simply run the command from the RDA home location: rda.cmd (or .sh) -dT hcve. This will bring up a list of the available HCVE (Health Check / Verification Engine) Rule Sets. Each Rule Set contains a group of related rules that are used for evaluation and display of results. A rule can be considered synonymous with a single health check, and each rule is assigned an ID, Name and Description that can be seen when it is executed. The Rule Set for SOA is option number 11 and you just enter this selection at the prompt. The Rule Set will then execute to completion. After running an HCVE Rule Set the tool will write the output to the RDA_HOME/output folder. The simplest way to view the output is to drag the .htm file to a browser, but of course it can also be uploaded to a Service Request for evaluation by Oracle Support. Many of the Rule Sets will prompt you for information before they can execute their rules, but the SOA Rule Set will identify the SOA domains configured in your RDA setup.cfg file. This means that you don't need to answer all of the questions again about where stuff is, but it also means that you must have configured RDA for SOA. To run the Rule Set:
      1. Download the latest version of RDA from MOS Doc ID 314422.1
      2. Configure RDA for your SOA domains. Detailed steps can be found here. In its simplest form the command is 'rda.cmd (.sh) -S SOA'
      3. Go to the RDA home location and enter the command 'rda.cmd (or .sh) -dT hcve'
      4. Select option '11'
    It should be noted that this is our first release of a SOA Rule Set, so there will probably be some things we need to clean up or fix. None of these rules will actually modify anything on your system as they are read only and do the evaluations internally. Please let us know if you have any issues with the rules or ideas for new ones so we can make them as useful as possible.
    The Checks
    Here is a list of the SOA health checks by ID, Name and Description.
      A00100 - SOA Domain Homes: Lists the SOA domains that were identified from the RDA setup.cfg file
      A00200 - Coherence Protocol Conflict: Checks to see if you have both Unicast and Multicast configured in the same domain. Checks both the setDomainEnv and config.xml entries (if it exists). We recommend Unicast with fully qualified host names or IP addresses.
      A00210 - Coherence Fully Qualified Host: Checks that the host names are fully qualified or that IP addresses are used. Will fail if unqualified host names are detected.
      A00220 - Unicast Local Host: Checks that the Coherence localhost is specified for use with Unicast
      A00300 - JTA Timeout: Checks that the JTA timeout is configured for the domain and lists the value. The bundled rule will only list the current values of the JTA timeout for each SOA Domain. In the future the rule will fail with a warning if the value is 300 seconds or lower. It is recommended that timeouts follow the pattern 'syncMaxWaitTime' < EJB Timeouts < JTA Timeout. The 300 second value is important because the EJB Timeouts default to 300 seconds. Additional information can be found in MOS Doc ID 880313.1.
      A00310 - XA Max Time: Checks that the JTA Maximum XA call time is set for the domain. Fails if it is not explicitly set or if the value is less than or equal to the default of 12000 ms.
      A00320 - XA Timeout: Checks that the XA timeout is enabled and that the value is '0' for the SOA Data Source (SOADataSource-jdbc.xml)
      A00330 - JDBC Statement Timeout: Checks that the Statement Timeout is set for all SOA Data Sources. Fails if the value is not set or if it is set to the default of -1.
      A00400 - XA Driver: Checks that the SOA Data Source is configured to use an XA driver. Fails if it is not.
      A00410 - JDBC Capacity Settings: Checks that the minimum and maximum capacity are equal for all SOA Data Sources. Fails if they are not and lists specifically which data sources failed.
      A00500 - SOA Roles: Checks that the default SOA roles 'SOAAdmin' and 'SOAOperator' are configured for the soa-infra application in the file system-jazn-data.xml. Fails if they are not.
      A00700 - SOA-INFRA Deployment: Checks that the soa-infra application is deployed to either a cluster, all members of a cluster or a standalone server.
      A00710 - SOA Deployments: Checks that the SOA related applications are deployed to the same domain members as soa-infra.
      A00720 - SOA Library Deployments: Checks that the SOA related libraries are deployed to the same domain members as soa-infra.
      A00730 - Data Source Deployments: Checks that the SOA Data Sources are all targeted to the same domain members as soa-infra

    Read the article

  • A deadlock was detected while trying to lock variables in SSIS

    Error: 0xC001405C at SQL Log Status: A deadlock was detected while trying to lock variables "User::RowCount" for read/write access. A lock cannot be acquired after 16 attempts. The locks timed out. Have you ever considered variable locking when building your SSIS packages? I expect many people haven't, just because most of the time you never see an error like the one above. I'll try and explain a few key concepts about variable locking, and hopefully you will never see that error. First of all, what is all this variable locking about? Put simply, SSIS variables have to be locked before they can be accessed, and then of course unlocked once you have finished with them. This is baked into SSIS, presumably to reduce the risk of race conditions, but with that comes some additional overhead in that you need to be careful to avoid lock conflicts in some scenarios. The most obvious place you will come across any hint of locking (no pun intended) is the Script Task or Script Component with their ReadOnlyVariables and ReadWriteVariables properties. These two properties allow you to enter lists of variables to be used within the task, or to put it another way, lists of variables to be locked, so that they are available within the task. During the task pre-execute phase the variables are locked; you then use them during the execute phase when your code is run, and they are then unlocked for you during the post-execute phase. So by entering the variable names in one of the two lists, the locking is taken care of for you, and you just read and write to the Dts.Variables collection that is exposed in the task for the purpose. As you can see in the image above, the variable PackageInt is specified, which means when I write the code inside that task I don't have to worry about locking at all, as shown below. public void Main() { // Set the variable value to something new Dts.Variables["PackageInt"].Value = 199; // Raise an event so we can play in the event handler bool fireAgain = true; Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain); Dts.TaskResult = (int)ScriptResults.Success; } As you can see, as well as accessing the variable, hassle free, I also raise an event. Now consider a scenario where I have an event handler as well, as shown below. Now what if my event handler tries to use the same variable as well? Well, obviously, for the point of this post, it fails with the error quoted previously. The reason why is clearly illustrated if you consider the following sequence of events.
      1. Package execution starts
      2. Script Task in Control Flow starts
      3. Script Task in Control Flow locks the PackageInt variable as specified in the ReadWriteVariables property
      4. Script Task in Control Flow executes its script, and the On Information event is raised
      5. The On Information event handler starts
      6. Script Task in On Information event handler starts
      7. Script Task in On Information event handler attempts to lock the PackageInt variable (for either read or write, it doesn't matter), but will fail because the variable is already locked
    The problem is caused by the event handler task trying to use a variable that is already locked by the task in Control Flow. Events are always raised synchronously, therefore the task in Control Flow that is raising the event will not regain control until the event handler has completed, so we really do have an un-resolvable locking conflict, better known as a deadlock.
In this scenario we can easily resolve the problem by managing the variable locking explicitly in code, so there is no need to specify anything for the ReadOnlyVariables and ReadWriteVariables properties. public void Main() { // Set the variable value to something new, with explicit lock control Variables lockedVariables = null; Dts.VariableDispenser.LockOneForWrite("PackageInt", ref lockedVariables); lockedVariables["PackageInt"].Value = 199; lockedVariables.Unlock(); // Raise an event so we can play in the event handler bool fireAgain = true; Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain); Dts.TaskResult = (int)ScriptResults.Success; } Now the package will execute successfully because the variable lock has already been released by the time the event is raised, so no conflict occurs. For those of you with a SQL Engine background this should all sound strangely familiar, and it boils down to getting in and out as fast as you can to reduce the risk of lock contention, be that SQL pages or SSIS variables. Unfortunately we cannot always manage the locking ourselves. The Execute SQL Task is very often used in conjunction with variables, either to pass in parameter values or to get results out. Either way the task will manage the locking for you, and will fail when it cannot lock the variables it requires. The scenario outlined above is a clear-cut deadlock scenario: both parties are waiting on each other, so it is un-resolvable. The mechanism used within SSIS isn't actually that clever, and whilst the message says it is a deadlock, it really just means it tried a few times, and then gave up. The last part of the error message is actually the most accurate in terms of the failure: A lock cannot be acquired after 16 attempts. The locks timed out. Now this may come across as a recommendation to always manage locking manually in the Script Task or Script Component yourself, but I think that would be an overreaction. It is more of a reminder to be aware that in high concurrency scenarios, especially when sharing variables across multiple objects, locking is an important design consideration. Update: Make sure you don't try to use explicit locking as well as leaving the variable names in the ReadOnlyVariables and ReadWriteVariables lock lists, otherwise you'll get the deadlock error; you cannot lock a variable twice!
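On the same theme (this is a sketch of my own, not part of the original post), when a task needs several variables you can queue all of the locks through the VariableDispenser and acquire them with a single GetVariables call, again unlocking before raising any events; the variable names here are only examples.

    public void Main()
    {
        Variables lockedVariables = null;
        try
        {
            // Queue both locks, then acquire them together
            Dts.VariableDispenser.LockForRead("User::RowCount");
            Dts.VariableDispenser.LockForWrite("User::PackageInt");
            Dts.VariableDispenser.GetVariables(ref lockedVariables);

            int rows = (int)lockedVariables["User::RowCount"].Value;
            lockedVariables["User::PackageInt"].Value = rows * 2;
        }
        finally
        {
            // Release the locks before raising events, so an event handler
            // can lock the same variables without the deadlock described above
            if (lockedVariables != null) lockedVariables.Unlock();
        }

        bool fireAgain = true;
        Dts.Events.FireInformation(0, "Script Task Code", "Variables updated.", null, 0, ref fireAgain);
        Dts.TaskResult = (int)ScriptResults.Success;
    }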

    Read the article

  • Unable to fix broken packages with sudo apt-get install -f

    - by Bob
    Here's my result, of sudo apt-get install -f. i have Ran it twice and got negative result. I believe there is an error at "error in Version string '0:3.6.1-dates for language English Translation data updates for all supported packages for: English" This same statement "error in Version string, caused me three days of attempting to download version 12.04. There is a bug report concerning the quoted text as well. Is there anyway to download the version without the language packs, why would I corrupt version 11.10? Also, when attempting to download Synaptic using sudo apt-get install synaptic, I get the same error message. Again I point out the initial download problems and the same error message receipt. Thanks b0b@b0b-IC780M-A:~$ sudo apt-get install -f [sudo] password for b0b: Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 298 not upgraded. b0b@b0b-IC780M-A:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 298 not upgraded. b0b@b0b-IC780M-A:~$ sudo apt-get upgrade install Reading package lists... Done Building dependency tree Reading state information... Done The following packages have been kept back: linux-headers-generic software-center The following packages will be upgraded: accountsservice acpi-support acpid aisleriot alsa-utils app-install-data-partner appmenu-qt apport apport-gtk apt-transport-https apt-utils aptdaemon aptdaemon-data apturl apturl-common banshee banshee-extension-soundmenu banshee-extension-ubuntuonemusicstore baobab bind9-host binutils bluez-alsa bluez-cups bluez-gstreamer brasero brasero-cdrkit brasero-common checkbox checkbox-gtk command-not-found command-not-found-data compiz compiz-core compiz-gnome compiz-plugins-default compiz-plugins-main-default cups cups-bsd cups-client cups-common cups-ppdc deja-dup desktop-file-utils dnsutils empathy empathy-common eog evince evince-common evolution-data-server evolution-data-server-common file-roller firefox firefox-globalmenu firefox-gnome-support gbrainy gcalctool gconf2 gconf2-common gedit gedit-common ghostscript ghostscript-cups ghostscript-x gir1.2-atspi-2.0 gir1.2-gconf-2.0 gir1.2-gnomebluetooth-1.0 gir1.2-gtk-3.0 gir1.2-gtksource-3.0 gir1.2-totem-1.0 gir1.2-unity-4.0 gir1.2-webkit-3.0 gnome-accessibility-themes gnome-bluetooth gnome-control-center gnome-control-center-data gnome-desktop3-data gnome-font-viewer gnome-games-common gnome-icon-theme gnome-mahjongg gnome-online-accounts gnome-orca gnome-power-manager gnome-screenshot gnome-search-tool gnome-session gnome-session-bin gnome-session-canberra gnome-session-common gnome-settings-daemon gnome-sudoku gnome-system-log gnome-system-monitor gnome-utils-common gnomine gstreamer0.10-gconf gstreamer0.10-plugins-good gstreamer0.10-pulseaudio gvfs gvfs-backends gvfs-bin gvfs-fuse gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hpijs hplip hplip-cups hplip-data indicator-datetime indicator-session indicator-sound isc-dhcp-client isc-dhcp-common jockey-common jockey-gtk language-selector-common language-selector-gnome libaccountsservice0 libapt-inst1.3 libarchive1 libasound2-plugins libatk-adaptor libbind9-60 libbrasero-media3-1 libcamel-1.2-29 libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-0 libcanberra-gtk3-module libcanberra-pulse libcanberra0 libdecoration0 libdns69 libebackend-1.2-1 libebook1.2-12 
libecal1.2-10 libedata-book-1.2-11 libedata-cal-1.2-13 libedataserver1.2-15 libedataserverui-3.0-1 libevince3-3 libgconf2-4 libgnome-bluetooth8 libgnome-control-center1 libgnome-desktop-3-2 libgoa-1.0-0 libgrip0 libgs9 libgs9-common libgtk-3-bin libgtksourceview-3.0-0 libgtksourceview-3.0-common libgweather-3-0 libgweather-common libgwibber-gtk2 libgwibber2 libhpmud0 libimobiledevice2 libisc62 libisccc60 libisccfg62 libjasper1 liblightdm-gobject-1-0 liblwres60 libmetacity-private0 libmission-control-plugins0 libmono-zeroconf1.0-cil libnautilus-extension1 libnm-glib-vpn1 libnm-glib4 libnm-util2 libnotify0.4-cil libnux-1.0-0 libnux-1.0-common libpam-gnome-keyring libreoffice-emailmerge libreoffice-style-human libsane-hpaio libsmbclient libsnmp-base libsnmp15 libsyncdaemon-1.0-1 libt1-5 libtotem0 libubuntuone-1.0-1 libubuntuone1.0-cil libunity-2d-private0 libunity-core-4.0-4 libunity6 libusbmuxd1 libwbclient0 libwebkitgtk-1.0-0 libwebkitgtk-1.0-common libwebkitgtk-3.0-0 libwebkitgtk-3.0-common libxml2 linux-generic linux-image-generic metacity metacity-common mobile-broadband-provider-info modemmanager mousetweaks multiarch-support nautilus nautilus-data nautilus-sendto-empathy network-manager nux-tools onboard openssl pulseaudio pulseaudio-esound-compat pulseaudio-module-bluetooth pulseaudio-module-gconf pulseaudio-module-x11 pulseaudio-utils python-apport python-aptdaemon python-aptdaemon-gtk python-aptdaemon.gtk3widgets python-aptdaemon.gtkwidgets python-brlapi python-cups python-cupshelpers python-gobject-cairo python-httplib2 python-launchpadlib python-libxml2 python-pam python-papyon python-pkg-resources python-problem-report python-pyatspi2 python-software-properties python-ubuntuone-client python-ubuntuone-storageprotocol samba-common samba-common-bin seahorse shotwell simple-scan smbclient sni-qt software-properties-common software-properties-gtk sudo system-config-printer-common system-config-printer-gnome system-config-printer-udev telepathy-indicator telepathy-mission-control-5 thunderbird thunderbird-globalmenu thunderbird-gnome-support tomboy totem totem-common totem-mozilla totem-plugins ttf-opensymbol ubuntu-desktop ubuntu-minimal ubuntu-standard ubuntuone-client ubuntuone-client-gnome ubuntuone-couch unity unity-2d unity-2d-launcher unity-2d-panel unity-2d-places unity-2d-spread unity-common unity-lens-applications unity-services update-manager update-manager-core update-notifier update-notifier-common usbmuxd vim-common vim-tiny vinagre vino xorg xserver-xorg xserver-xorg-input-all xserver-xorg-video-all xserver-xorg-video-intel xserver-xorg-video-openchrome xul-ext-ubufox 296 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. Need to get 0 B/159 MB of archives. After this operation, 10.1 MB of additional disk space will be used. Do you want to continue [Y/n]? y Extracting templates from packages: 100% Preconfiguring packages ... dpkg: error: parsing file '/var/lib/dpkg/available' near line 4131 package 'python-zope.interface': error in Version string '0:3.6.1-dates for language English Translation data updates for all supported packages for: English . language-pack-en-base provides the bulk of translation data and is updated only seldom. This package provides frequent translation updates.': version string has embedded spaces E: Sub-process /usr/bin/dpkg returned an error code (2) b0b@b0b-IC780M-A:~$

    Read the article

< Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >