Search Results

Search found 16237 results on 650 pages for 'lock free'.

Page 84/650 | < Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >

  • The database engine couldn't lock the table, because it is already in use by another person or process

    - by tintincute
    Hi, I tried to do "Split a Database", and after I clicked the "Split" button this is what I got: "The database engine couldn't lock the table, because it is already in use by another person or process". Any idea? Thanks. Additional question: is it possible to split your database more than once? First I'm trying it at home, and the following day I would like to try it at the office if it works. I already ran the split once; if I do it again tomorrow at the office, would that be a problem? Thanks

    Read the article

  • Is there a way to "lock" the viewport in vim?

    - by breadjesus
    I recently started using Vim with NERDTree. The annoying thing is when I close the buffer, NERDTree expands to fill the rest of the screen, and I have to open another file and reopen NERDTree to get it back to the old layout. Is there a way to "lock" NERDTree in place? Ideally, closing a buffer would replace it with another buffer that's hidden, or open a new blank buffer if no other buffers are open. Thanks!

    Read the article

  • Does this use of Monitor.Wait/Pulse have a race condition?

    - by jw
    I have a simple producer/consumer scenario, where there is only ever a single item being produced/consumed. Also, the producer waits for the worker thread to finish before continuing. I realize that kind of obviates the whole point of multithreading, but please just assume it really needs to be this way (: This code is simplified and won't compile as-is, but I hope you get the idea:

        // m_data is initially null
        // This could be called by any number of producer threads simultaneously
        void SetData(object foo)
        {
            lock(x)                      // Line A
            {
                assert(m_data == null);
                m_data = foo;
                Monitor.Pulse(x);        // Line B
                while (m_data != null)
                    Monitor.Wait(x);     // Line C
            }
        }

        // This is only ever called by a single worker thread
        void UseData()
        {
            lock(x)                      // Line D
            {
                while (m_data == null)
                    Monitor.Wait(x);     // Line E
                // here, do something with m_data
                m_data = null;
                Monitor.Pulse(x);        // Line F
            }
        }

    Here is the situation I am not sure about: suppose many threads call SetData() with different inputs. Only one of them will get inside the lock, and the rest will be blocked on Line A. Suppose the one that got inside the lock sets m_data and makes its way to Line C. Question: could the Wait() on Line C allow another thread at Line A to obtain the lock and overwrite m_data before the worker thread even gets to it? Supposing that doesn't happen, and the worker thread processes the original m_data and eventually makes its way to Line F, what happens when that Pulse() goes off? Will only the thread waiting on Line C be able to get the lock, or will it be competing with all the other threads waiting on Line A as well? Essentially, I want to know whether Pulse()/Wait() communicate with each other specially "under the hood" or whether they are on the same level as lock(). The solution to these problems, if they exist, is obvious of course - just surround SetData() with another lock, say lock(y). I'm just curious whether it's even an issue to begin with.
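
    For comparison, the same single-slot handoff is often written with a condition variable, with each producer also waiting for the slot to be free before writing, so one producer cannot overwrite another's not-yet-consumed data. Below is a minimal C++ sketch of that shape (assuming C++17; the names slot, set_data and use_data are illustrative, not from the question); it is an analogue of the pattern, not a translation of the C# above:

        #include <condition_variable>
        #include <mutex>
        #include <optional>
        #include <string>

        std::mutex m;
        std::condition_variable cv;
        std::optional<std::string> slot;   // the single shared item; empty means "free"

        // Called by any number of producers. wait() releases the mutex while blocked,
        // so each producer must also wait for the slot to be free before writing.
        void set_data(std::string value) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !slot.has_value(); });   // wait for a free slot
            slot = std::move(value);
            cv.notify_all();                                 // wake the consumer
            cv.wait(lk, [] { return !slot.has_value(); });   // wait until the slot is empty again
        }

        // Called by the single consumer.
        void use_data() {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return slot.has_value(); });    // wait for data
            // ... process *slot ...
            slot.reset();
            cv.notify_all();                                 // wake the producer that handed off, and any waiting producers
        }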

    Read the article

  • Why are there no scoped locks for multiple mutexes in C++0x or Boost.Thread?

    - by Vicente Botet Escriba
    The C++0x thread library and Boost.Thread define a non-member variadic template function that locks several Lockables at once while avoiding deadlock:

        template <class L1, class L2, class... L3>
        void lock(L1&, L2&, L3&...);

    While this function helps to avoid deadlock, the standard does not include an associated scoped lock for writing exception-safe code:

        {
            std::lock(l1, l2);
            // do something
            // unlock l1 and l2, exception safe
        }

    That means we need some other mechanism, such as a try-catch block, to make the code exception safe, or we must define our own scoped lock over multiple mutexes, or even do this:

        {
            std::lock(l1, l2);
            std::unique_lock<std::mutex> lk1(l1, std::adopt_lock);
            std::unique_lock<std::mutex> lk2(l2, std::adopt_lock);
            // do something
            // l1 and l2 are unlocked on destruction of lk1 and lk2
        }

    Why doesn't the standard include a scoped lock over multiple mutexes of the same type, for example

        {
            std::array_unique_lock<std::mutex> lk(l1, l2);
            // do something
            // unlock l1 and l2 on destruction of lk
        }

    or over a tuple of mutexes?

        {
            std::tuple_unique_lock<std::mutex, std::recursive_mutex> lk(l1, l2);
            // do something
            // unlock l1 and l2 on destruction of lk
        }

    Is there something wrong with this design?
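
    For what it's worth, the adopt_lock idiom shown above is easy to wrap in a small RAII guard, and later revisions of the standard added essentially this as the variadic std::scoped_lock in C++17. A minimal sketch of such a guard for two lockables of possibly different types (the name MultiGuard is made up for illustration):

        #include <mutex>

        // RAII guard over two lockables: locks both without deadlock, unlocks on destruction.
        template <class M1, class M2>
        class MultiGuard {
        public:
            MultiGuard(M1& a, M2& b) : a_(a), b_(b) { std::lock(a_, b_); }
            ~MultiGuard() {
                a_.unlock();
                b_.unlock();
            }
            MultiGuard(const MultiGuard&) = delete;
            MultiGuard& operator=(const MultiGuard&) = delete;
        private:
            M1& a_;
            M2& b_;
        };

        // Usage: both mutexes are released even if do_something() throws.
        // void transfer(std::mutex& l1, std::recursive_mutex& l2) {
        //     MultiGuard<std::mutex, std::recursive_mutex> lk(l1, l2);
        //     do_something();
        // }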

    Read the article

  • apt-get install and update fail

    - by sepehr
    I've got a problem with apt-get update and apt-get install ... commands . every time update or installing fails and errors are : Get:1 http://dl.google.com stable Release.gpg [198B] Ign http://dl.google.com/linux/chrome/deb/ stable/main Translation-en_US Get:2 http://dl.google.com stable Release [1,347B] Get:3 http://dl.google.com stable/main Packages [1,227B] Err http://32.repository.backtrack-linux.org revolution Release.gpg Could not connect to 32.repository.backtrack-linux.org:80 (37.221.173.214). - connect (110: Connection timed out) Err http://32.repository.backtrack-linux.org/ revolution/main Translation-en_US Unable to connect to 32.repository.backtrack-linux.org:http: Err http://32.repository.backtrack-linux.org/ revolution/microverse Translation-en_US Unable to connect to 32.repository.backtrack-linux.org:http: Err http://32.repository.backtrack-linux.org/ revolution/non-free Translation-en_US Unable to connect to 32.repository.backtrack-linux.org:http: Err http://32.repository.backtrack-linux.org/ revolution/testing Translation-en_US Unable to connect to 32.repository.backtrack-linux.org:http: Err http://all.repository.backtrack-linux.org revolution Release.gpg Could not connect to all.repository.backtrack-linux.org:80 (37.221.173.214). - connect (110: Connection timed out) Err http://all.repository.backtrack-linux.org/ revolution/main Translation-en_US Unable to connect to all.repository.backtrack-linux.org:http: Err http://all.repository.backtrack-linux.org/ revolution/microverse Translation-en_US Unable to connect to all.repository.backtrack-linux.org:http: Err http://all.repository.backtrack-linux.org/ revolution/non-free Translation-en_US Unable to connect to all.repository.backtrack-linux.org:http: Err http://all.repository.backtrack-linux.org/ revolution/testing Translation-en_US Unable to connect to all.repository.backtrack-linux.org:http: Ign http://32.repository.backtrack-linux.org revolution Release Ign http://all.repository.backtrack-linux.org revolution Release Ign http://32.repository.backtrack-linux.org revolution/main Packages Ign http://all.repository.backtrack-linux.org revolution/main Packages Ign http://32.repository.backtrack-linux.org revolution/microverse Packages Ign http://32.repository.backtrack-linux.org revolution/non-free Packages Ign http://32.repository.backtrack-linux.org revolution/testing Packages Ign http://all.repository.backtrack-linux.org revolution/microverse Packages Ign http://all.repository.backtrack-linux.org revolution/non-free Packages Ign http://all.repository.backtrack-linux.org revolution/testing Packages Ign http://32.repository.backtrack-linux.org revolution/main Packages Ign http://32.repository.backtrack-linux.org revolution/microverse Packages Ign http://32.repository.backtrack-linux.org revolution/non-free Packages Ign http://all.repository.backtrack-linux.org revolution/main Packages Ign http://all.repository.backtrack-linux.org revolution/microverse Packages Ign http://all.repository.backtrack-linux.org revolution/non-free Packages Ign http://all.repository.backtrack-linux.org revolution/testing Packages Err http://all.repository.backtrack-linux.org revolution/main Packages Unable to connect to all.repository.backtrack-linux.org:http: Err http://all.repository.backtrack-linux.org revolution/microverse Packages Unable to connect to all.repository.backtrack-linux.org:http: Ign http://32.repository.backtrack-linux.org revolution/testing Packages Err http://32.repository.backtrack-linux.org revolution/main Packages Unable 
to connect to 32.repository.backtrack-linux.org:http: Err http://32.repository.backtrack-linux.org revolution/microverse Packages Unable to connect to 32.repository.backtrack-linux.org:http: Err http://all.repository.backtrack-linux.org revolution/non-free Packages Unable to connect to all.repository.backtrack-linux.org:http: Err http://all.repository.backtrack-linux.org revolution/testing Packages Unable to connect to all.repository.backtrack-linux.org:http: Err http://32.repository.backtrack-linux.org revolution/non-free Packages Unable to connect to 32.repository.backtrack-linux.org:http: Err http://32.repository.backtrack-linux.org revolution/testing Packages Unable to connect to 32.repository.backtrack-linux.org:http: Err http://source.repository.backtrack-linux.org revolution Release.gpg Could not connect to source.repository.backtrack-linux.org:80 (37.221.173.214). - connect (110: Connection timed out) Err http://source.repository.backtrack-linux.org/ revolution/main Translation-en_US Unable to connect to source.repository.backtrack-linux.org:http: Err http://source.repository.backtrack-linux.org/ revolution/microverse Translation-en_US Unable to connect to source.repository.backtrack-linux.org:http: Err http://source.repository.backtrack-linux.org/ revolution/non-free Translation-en_US Unable to connect to source.repository.backtrack-linux.org:http: Err http://source.repository.backtrack-linux.org/ revolution/testing Translation-en_US Unable to connect to source.repository.backtrack-linux.org:http: Ign http://source.repository.backtrack-linux.org revolution Release Ign http://source.repository.backtrack-linux.org revolution/main Packages Ign http://source.repository.backtrack-linux.org revolution/microverse Packages Ign http://source.repository.backtrack-linux.org revolution/non-free Packages Ign http://source.repository.backtrack-linux.org revolution/testing Packages Ign http://source.repository.backtrack-linux.org revolution/main Packages Ign http://source.repository.backtrack-linux.org revolution/microverse Packages Ign http://source.repository.backtrack-linux.org revolution/non-free Packages Ign http://source.repository.backtrack-linux.org revolution/testing Packages Err http://source.repository.backtrack-linux.org revolution/main Packages Unable to connect to source.repository.backtrack-linux.org:http: Err http://source.repository.backtrack-linux.org revolution/microverse Packages Unable to connect to source.repository.backtrack-linux.org:http: Err http://source.repository.backtrack-linux.org revolution/non-free Packages Unable to connect to source.repository.backtrack-linux.org:http: Err http://source.repository.backtrack-linux.org revolution/testing Packages Unable to connect to source.repository.backtrack-linux.org:http: Fetched 2,772B in 1min 3s (44B/s) W: Failed to fetch http://all.repository.backtrack- \linux.org/dists/revolution/Release.gpg Could not connect to all.repository.backtrack-linux.org:80 (37.221.173.214). 
- connect (110: Connection timed out) W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/main/i18n/Translation-en_US.bz2 Unable to connect to all.repository.backtrack-linux.org:http: W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/microverse/i18n/Translation-en_US.bz2 Unable to connect to all.repository.backtrack-linux.org:http: W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/non-free/i18n/Translation-en_US.bz2 Unable to connect to all.repository.backtrack-linux.org:http: W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/testing/i18n/Translation-en_US.bz2 Unable to connect to all.repository.backtrack-linux.org:http: W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/Release.gpg Could not connect to 32.repository.backtrack-linux.org:80 (37.221.173.214). - connect (110: Connection timed out) W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/main/i18n/Translation-en_US.bz2 Unable to connect to 32.repository.backtrack-linux.org:http: W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/microverse/i18n/Translation-en_US.bz2 Unable to connect to 32.repository.backtrack-linux.org:http: W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/non-free/i18n/Translation-en_US.bz2 Unable to connect to 32.repository.backtrack-linux.org:http: W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/testing/i18n/Translation-en_US.bz2 Unable to connect to 32.repository.backtrack-linux.org:http: W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/Release.gpg Could not connect to source.repository.backtrack-linux.org:80 (37.221.173.214). 
- connect (110: Connection timed out) W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/main/i18n/Translation-en_US.bz2 Unable to connect to source.repository.backtrack-linux.org:http: W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/microverse/i18n/Translation-en_US.bz2 Unable to connect to source.repository.backtrack-linux.org:http: W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/non-free/i18n/Translation-en_US.bz2 Unable to connect to source.repository.backtrack-linux.org:http: W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/testing/i18n/Translation-en_US.bz2 Unable to connect to source.repository.backtrack-linux.org:http: W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/main/binary-i386/Packages.gz Unable to connect to 32.repository.backtrack-linux.org:http: W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/microverse/binary-i386/Packages.gz Unable to connect to 32.repository.backtrack-linux.org:http: W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/non-free/binary-i386/Packages.gz Unable to connect to 32.repository.backtrack-linux.org:http: W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/main/binary-i386/Packages.gz Unable to connect to all.repository.backtrack-linux.org:http: W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/microverse/binary-i386/Packages.gz Unable to connect to all.repository.backtrack-linux.org:http: W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/non-free/binary-i386/Packages.gz Unable to connect to all.repository.backtrack-linux.org:http: W: Failed to fetch http://all.repository.backtrack-linux.org/dists/revolution/testing/binary-i386/Packages.gz Unable to connect to all.repository.backtrack-linux.org:http: W: Failed to fetch http://32.repository.backtrack-linux.org/dists/revolution/testing/binary-i386/Packages.gz Unable to connect to 32.repository.backtrack-linux.org:http: W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/main/binary-i386/Packages.gz Unable to connect to source.repository.backtrack-linux.org:http: W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/microverse/binary-i386/Packages.gz Unable to connect to source.repository.backtrack-linux.org:http: W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/non-free/binary-i386/Packages.gz Unable to connect to source.repository.backtrack-linux.org:http: W: Failed to fetch http://source.repository.backtrack-linux.org/dists/revolution/testing/binary-i386/Packages.gz Unable to connect to source.repository.backtrack-linux.org:http: E: Some index files failed to download, they have been ignored, or old ones used instead. I Don't know how to get out of this ! I want to install RPM and YUM package on my backtrack ! I also searched over internet for answer . in backtrack forums or any other sites or weblogs i could'nt find a good answer ! can anyone help ??

    Read the article

  • The emergence of Atlassian's Bamboo (and a free SQL Source Control license offer!)

    - by David Atkinson
    The rise in demand for database continuous integration has forced me to skill up in various new tools and technologies, particularly build servers. We have been using JetBrains' TeamCity here at Red Gate for a couple of years now, having replaced the ageing CruiseControl.NET, so it was a natural choice for us to use this for our database CI demos. Most of our early adopter customers have also transitioned away from CruiseControl, the majority to TeamCity and Microsoft's TeamBuild. However, more recently, for reasons we've yet to fully comprehend, we've observed a significant surge in the number of evaluators for Atlassian's Bamboo. I installed this a couple of weeks back to satisfy myself that it works seamlessly with Red Gate tools. As you would expect, Bamboo's UI has the same clean feel found in any Atlassian tool (we use JIRA extensively here at Red Gate). In the coming weeks I will post a short step-by-step guide to setting up SQL Server continuous integration using the Red Gate command lines. To help us further optimize the integration between these tools, I'd be very keen to hear from any Bamboo users who also use Red Gate tools and who might be willing to participate in usability tests and other similar research in exchange for Amazon vouchers. If you are interested in helping out, please contact me at David dot Atkinson at red-gate.com. I recently spoke with Sarah, the product marketing manager for Bamboo, and we ended up having a detailed conversation about database CI, which has been meticulously documented in the form of a blog post on Atlassian's website: http://blogs.atlassian.com/2012/05/database-continuous-integration-redgate/ We've also managed to persuade Red Gate marketing to provide a great free-tool offer: a free SQL Source Control or SQL Connect license for Atlassian users, provided it is claimed before the end of June! Full details are at the bottom of the post. Technorati Tags: sql server

    Read the article

  • WebLogic Application Server: free for developers! by Bruno Borges

    - by JuergenKress
    Great news! Oracle WebLogic Server is now free for developers! What does this mean for you? That you as a developer are permitted to: "[...] deploy the programs only on your single developer desktop computer (of any type, including physical, virtual or remote virtual), to be used and accessed by only (1) named developer." But the most interesting part of the license change is this one: "You may continue to develop, test, prototype and demonstrate your application with the programs under this license after you have deployed the application for any internal data processing, commercial or production purposes" (Read the full license agreement here). If you want to take advantage of this licensing change and start developing Java EE applications with the #1 Application Server in the world, now read the previous post, How To Install WebLogic Zip on Linux! WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: WebLogic free, WebLogic for developers, WebLogic license, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • How to fix the apt-get -f install error [duplicate]

    - by Satish Patel
    This question already has an answer here: How do I fix a "Could not get lock /var/lib/dpkg/lock" problem? (12 answers)
    I keep getting an error when I run the command:

        sudo apt-get -f install

    The output of the command is as follows:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          default-jre-headless icedtea-7-jre-jamvm openjdk-7-jre openjdk-7-jre-headless openjdk-7-jre-lib
        Suggested packages:
          fonts-ipafont-gothic fonts-ipafont-mincho ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts
        The following NEW packages will be installed:
          default-jre-headless icedtea-7-jre-jamvm openjdk-7-jre openjdk-7-jre-headless openjdk-7-jre-lib
        0 upgraded, 5 newly installed, 0 to remove and 28 not upgraded.
        2 not fully installed or removed
        E: Could not get lock /var/cache/apt/archives/lock - open (11: Resource temporarily unavailable)
        E: Unable to lock directory /var/cache/apt/archives/
        satish@satish-Inspiron-N5010 ~ $

    Please guide ASAP.

    Read the article

  • Need help in determining what, if any, tools can be used to create a free Flash game

    - by ReaperOscuro
    Yes, I proudly -and sadly- declare that I am a complete nincompoop when it comes to Flash, and I have been fishing around the big wide web for information. The reason for this is that I have been contracted to create a game (or games) for a website, with the usual Flash-based-games caveat. I do not mean the sort of thing produced by those game-generator websites; I mean small yet professional games. But the caveat, as always, is that impossible dream: it needs to be done all for free. The budget... well, imagine it as not there. Annoyingly, I am a game designer, yes, but with a ridiculously tight deadline I haven't got much time to re-learn everything (ah, the heady days of programming at uni) by the end of March, so I'd like to ask some people who know their stuff rather than keep looking at a gazillion different things. This is my understanding: with the Flash SDK you can create a game, albeit you need to be pretty programming-savvy. FlashDevelop helps there, yet I am not entirely sure how. Yes, it's undeniably powerful, but as I said, there is the unattainable demand of no money. The million-dollar question: what, if any, tools can I use to create a free Flash game?

    Read the article

  • Two Copies of "Silverlight 5 In Action" to Give Away and a FREE chapter!

    - by Dave Campbell
    I know most of you have seen my post from Tuesday where I talked about giving away 2 copies of Pete's book on Monday morning, July 18th. Well... I'm repeating it, because it's a smoking deal... for the cost of an email you too can take a shot at getting Pete's latest release, "Silverlight 5 In Action", for free. 2 Important Pieces of Information:
    1) The deadline: midnight Sunday night, July 17, 2012, Arizona time... if you know me, you know I've lived here too long and am timezone stupid... so don't make me calculate it out :)
    2) The how: I have a special email address for submittals: mailto:[email protected]?Subject=Giveaway.
    3) Oh yeah... I lied about only 2 pieces of info... number 3... there may be other surprises on Monday morning... 'nuff said.
    4) And just to pump up the volume on the book... how about a free chapter you can read right here, on Working with RSS and Atom!
    5) Send me an email and Stay in the 'Light!

    Read the article

  • How to prevent my screen from either dimming or the screen-lock starting when watching YouTube?

    - by Steven Roose
    My screen is set to dim after a few seconds to preserve battery; this is the default in Ubuntu 12.04. However, when watching video it should not dim. This works correctly when I watch videos in native applications like VLC. With in-browser video, however, the screen is not prevented from dimming, which is very annoying, as you have to move your cursor every 10 seconds or so. I used to use Mac OS X, where I had the same dimming settings and Flash videos were taken into account correctly. Does anyone have an idea how to stop the screen from dimming while watching YouTube?

    Read the article

  • MOS Community rewards Ram Kasthuri w/ FREE OOW Pass!

    - by cwarticki
    Congratulations Ram Kasthuri on Receiving a Free Full Conference Pass to Oracle OpenWorld! Thank you for helping other members through your participation in My Oracle Support Community. My Oracle Support Community member Ram Kasthuri received a free Oracle OpenWorld pass from the My Oracle Support Community in appreciation for his work in answering questions posted by other Community members. Ram, an independent consultant, is an Application Solution Architect with Canon. He has been a valued Oracle customer for over 13 years. Ram is an active member in several of the Oracle EBS communities. He has achieved the Expert Level of recognition through his active participation. Ram described the value he receives from My Oracle Support Community: “What I like best about the communities is the vicarious learning from real business scenarios posted by other Community members. The questions are real opportunities to learn all things Oracle, and EBS especially.” Ram is one of those members who answers more questions than he posts, so he must get a lot of that vicarious learning. Oracle Premier Support customers can get answers and learn from both peers who have faced similar situations and Oracle experts. Join us in My Oracle Support Community. Look for Ram this week at Oracle OpenWorld, and join him in My Oracle Support Community when you return to work. And while you’re at Oracle OpenWorld, Oracle Customer Support Services invites you to expand your knowledge by meeting with Oracle Support experts. Learn more about our sessions and networking opportunities today!

    Read the article

  • Unable to install vlc and mplayer after update on fedora 18

    - by mahesh
    I just updated Fedora 18 using:

        # yum update

    Then if I try:

        # rpm -ivh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm

    I get:

        Retrieving http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm
        warning: /var/tmp/rpm-tmp.0K5pWw: Header V3 RSA/SHA256 Signature, key ID 172ff33d: NOKEY
        error: Failed dependencies:
                system-release >= 19 is needed by rpmfusion-free-release-19-1.noarch

    So I tried installing VLC from the development (rawhide) repository:

        # rpm -ivh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-rawhide.noarch.rpm

    and I get:

        Retrieving http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-rawhide.noarch.rpm
        warning: /var/tmp/rpm-tmp.WZC0gw: Header V3 RSA/SHA256 Signature, key ID 6446d859: NOKEY
        error: Failed dependencies:
                system-release >= 21 is needed by rpmfusion-free-release-21-0.1.noarch

    There's no system release after 20. What does this mean?

    Read the article

  • Deadlock Analysis in NetBeans 8

    - by Geertjan
    Lock contention profiling is very important in multi-core environments. Lock contention occurs when a thread tries to acquire a lock while another thread is holding it, forcing it to wait; in the worst case, contended locks end in deadlock. Multi-core environments have even more threads to deal with, causing an increased likelihood of lock contention. In NetBeans 8, the NetBeans Profiler has new support for displaying detailed information about lock contention, i.e., the relationship between the threads that are locked. After all, whenever there's a deadlock, in any aspect of interaction, e.g., a political deadlock, it helps to be able to point to the responsible party or, at least, the order in which events happened resulting in the deadlock. As an example, let's take the handy Deadlock sample code from the Java Tutorial and look at the tools in the NetBeans IDE for identifying and analyzing the code. The description of the deadlock is nice: Alphonse and Gaston are friends, and great believers in courtesy. A strict rule of courtesy is that when you bow to a friend, you must remain bowed until your friend has a chance to return the bow. Unfortunately, this rule does not account for the possibility that two friends might bow to each other at the same time. To help identify who bowed first or, at least, the order in which bowing took place, right-click the file, choose "Profile File", and make the appropriate choices in the Profile Task Manager. When you have clicked Run, the Threads window shows that the two threads are blocked, i.e., the red "Monitor" lines tell you that the related threads are blocked while trying to enter a synchronized method or block. But which thread is holding the lock? Which one is blocked by the other? That visualization does not answer these questions. New in NetBeans 8 is that you can analyze the deadlock in the new Lock Contention window to determine which of the threads is responsible for the lock. Here is the code that simulates the lock, very slightly tweaked at the end, where I use "setName" on the threads, so that it's even easier to analyze the threads in the relevant NetBeans tools. Also, I converted the anonymous inner Runnables to lambda expressions.

        package org.demo;

        public class Deadlock {

            static class Friend {

                private final String name;

                public Friend(String name) {
                    this.name = name;
                }

                public String getName() {
                    return this.name;
                }

                public synchronized void bow(Friend bower) {
                    System.out.format("%s: %s"
                            + " has bowed to me!%n",
                            this.name, bower.getName());
                    bower.bowBack(this);
                }

                public synchronized void bowBack(Friend bower) {
                    System.out.format("%s: %s"
                            + " has bowed back to me!%n",
                            this.name, bower.getName());
                }
            }

            public static void main(String[] args) {
                final Friend alphonse = new Friend("Alphonse");
                final Friend gaston = new Friend("Gaston");
                Thread t1 = new Thread(() -> {
                    alphonse.bow(gaston);
                });
                t1.setName("Alphonse bows to Gaston");
                t1.start();
                Thread t2 = new Thread(() -> {
                    gaston.bow(alphonse);
                });
                t2.setName("Gaston bows to Alphonse");
                t2.start();
            }
        }

    In the above code, it's extremely likely that both threads will block when they attempt to invoke bowBack. Neither block will ever end, because each thread is waiting for the other to exit bow.
    Note: As you can see, it really helps to use "Thread.setName" wherever you create a Thread in your code. The tools in the IDE become a lot more meaningful when you've named the thread; otherwise the Profiler is forced to use names like "thread-5" and "thread-6", i.e., based on the order of the threads, which is kind of meaningless. (Normally, except in a simple demo scenario like the above, you're not starting the threads in the same class, so you have no idea at all what "thread-5" and "thread-6" mean because you don't know the order in which the threads were started.) Slightly more compact, passing the name to the Thread constructor:

        Thread t1 = new Thread(() -> {
            alphonse.bow(gaston);
        }, "Alphonse bows to Gaston");
        t1.start();
        Thread t2 = new Thread(() -> {
            gaston.bow(alphonse);
        }, "Gaston bows to Alphonse");
        t2.start();

    Read the article

  • What is a lightweight lock in distributed shared memory systems?

    - by Kutluhan Metin
    I started reading Tanenbaum's Distributed Systems book a while ago and read about two-phase locking and timestamp ordering in the transactions chapter. While digging deeper on Google I came across lightweight transactions/lightweight transactional memory, but I couldn't find any good explanation or implementation. So what exactly is lightweight transactional memory? What are the benefits of lightweight locks? And how can I implement them?
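
    The term is used loosely in practice: in many runtimes a "lightweight lock" simply means a lock whose uncontended acquire is a single atomic exchange or compare-and-swap in user space, with no system call, falling back to spinning or blocking only under contention. As a rough illustration of that idea (not of the DSM-specific machinery in Tanenbaum), here is a minimal test-and-test-and-set spinlock in C++:

        #include <atomic>
        #include <thread>

        // Minimal spinlock: an uncontended acquire is a single atomic exchange,
        // with no system call and no kernel-managed wait queue.
        class SpinLock {
            std::atomic<bool> locked{false};
        public:
            void lock() {
                for (;;) {
                    if (!locked.exchange(true, std::memory_order_acquire))
                        return;                        // got it on the fast path
                    while (locked.load(std::memory_order_relaxed))
                        std::this_thread::yield();     // spin (politely) until it looks free
                }
            }
            void unlock() { locked.store(false, std::memory_order_release); }
        };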

    Read the article

  • Can I legally make a free clone of a game and use the same name? [closed]

    - by BlueMonkMN
    I gather from "Is it legally possible to make a clone of the game?" and "How closely can a game resemble another game without legal problems" that I should not try to profit from a clone if it uses the same assets and, I presume, the same name. My question is whether it's legal to make a game like "Set" or "Catch Phrase", using the same name, and release it for free. What would I be risking if I did so -- just a takedown notice, or could there be financial risk too? Edit: I guess my real question is whether the legal freedom is greater for a free game than for one that is trying to make a profit. I just want a version of the game I can play remotely. Edit 2: I don't understand why this is being considered off-topic. I read the FAQ and it says it's OK to ask questions about project management, which includes publishing. And naming a game is a key aspect of publishing. That's what my question is about - choosing a legal name for my game, with the consideration that I might post/publish it.

    Read the article

  • Where can I find a nice .NET Tab Control for free?

    - by Nazgulled
    I'm writing this application in C# using the free Krypton Toolkit, but the Krypton Navigator is a paid product that is rather expensive for me; this application is being developed in my free time and will be available to the public for free. So, I'm looking for a free tab control that integrates better into my Krypton application, because the default one doesn't quite fit and it looks different depending on the OS version... Any suggestions? P.S.: I know I could owner-draw it, but I'm trying to avoid that kind of work... I prefer something already done, if it exists for free. EDIT: I just found exactly what I wanted: http://www.angelonline.net/CodeSamples/Lib_3.5.0.zip

    Read the article

  • What facets have I missed for creating a 3 person guerilla dev team?

    - by Penguinix
    Sorry for the Windows developers out there, this solution is for Macs only. This set of applications accounts for: Usability Testing, Screen Capture (Video and Still), Version Control, Task Lists, Bug Tracking, a Developer IDE, a Web Server, a Blog, Shared Doc Editing on the Web, Team and Individual Chat, Email, Databases and Continuous Integration. It does assume your team members provide their own machines, and that one person has a spare old computer to be the source repository and web server. All for under $200.
    Usability: Silverback licenses = 3 x $49.95. "Spontaneous, unobtrusive usability testing software for designers and developers."
    Source control server and clients (multiple options): Subversion = Free. Subversion is an open source version control system. Versions (currently in beta) = Free. Versions provides a pleasant way to work with Subversion on your Mac. Diffly = Free. "Diffly is a tool for exploring Subversion working copies. It shows all files with changes and, clicking on a file, shows a highlighted view of the changes for that file. When you are ready to commit, Diffly makes it easy to select the files you want to check in and assemble a useful commit message."
    Bug/feature/defect tracking (multiple options): Bugzilla = Free. Bugzilla is a "Defect Tracking System" or "Bug-Tracking System". Defect tracking systems allow individual developers or groups of developers to keep track of outstanding bugs in their product effectively. Most commercial defect-tracking software vendors charge enormous licensing fees. Trac = Free. Trac is an enhanced wiki and issue tracking system for software development projects.
    Database server & clients: MySQL = Free. CocoaMySQL = Free.
    Web server: Apache = Free.
    Development and build tools: XCode = Free. CruiseControl = Free. CruiseControl is a framework for a continuous build process. It includes, but is not limited to, plugins for email notification, Ant, and various source control tools. A web interface is provided to view the details of the current and previous builds.
    Collaboration tools: Writeboard = Free. Ta-da List = Free. Campfire chat for 4 users = Free. WordPress = Free. "WordPress is a state-of-the-art publishing platform with a focus on aesthetics, web standards, and usability. WordPress is both free and priceless at the same time." Gmail = Free. "Gmail is a new kind of webmail, built on the idea that email can be more intuitive, efficient, and useful."
    Screen capture (video / still): Jing = Free. "The concept of Jing is the always-ready program that instantly captures and shares images and video…from your computer to anywhere."
    Lots of great responses in the answers: TeamCity [Yo|||], Skype [Eric DeLabar], FogBugz [chakrit], IChatAV and Screen Sharing (built in to OS) [amrox], Google Docs [amrox]

    Read the article

  • What free space thresholds/limits are advisable for 640 GB and 2 TB hard disk drives with ZEVO ZFS on OS X?

    - by Graham Perrin
    Assuming that free space advice for ZEVO will not differ from advice for other modern implementations of ZFS … Question Please, what percentages or amounts of free space are advisable for hard disk drives of the following sizes? 640 GB 2 TB Thoughts A standard answer for modern implementations of ZFS might be "no more than 96 percent full". However if apply that to (say) a single-disk 640 GB dataset where some of the files most commonly used (by VirtualBox) are larger than 15 GB each, then I guess that blocks for those files will become sub optimally spread across the platters with around 26 GB free. I read that in most cases, fragmentation and defragmentation should not be a concern with ZFS. Sill, I like the mental picture of most fragments of a large .vdi in reasonably close proximity to each other. (Do features of ZFS make that wish for proximity too old-fashioned?) Side note: there might arise the question of how to optimise performance after a threshold is 'broken'. If it arises, I'll keep it separate. Background On a 640 GB StoreJet Transcend (product ID 0x2329) in the past I probably went beyond an advisable threshold. Currently the largest file is around 17 GB –  – and I doubt that any .vdi or other file on this disk will grow beyond 40 GB. (Ignore the purple masses, those are bundles of 8 MB band files.) Without HFS Plus: the thresholds of twenty, ten and five percent that I associate with Mobile Time Machine file system need not apply. I currently use ZEVO Community Edition 1.1.1 with Mountain Lion, OS X 10.8.2, but I'd like answers to be not too version-specific. References, chronological order ZFS Block Allocation (Jeff Bonwick's Blog) (2006-11-04) Space Maps (Jeff Bonwick's Blog) (2007-09-13) Doubling Exchange Performance (Bizarre ! Vous avez dit Bizarre ?) (2010-03-11) … So to solve this problem, what went in 2010/Q1 software release is multifold. The most important thing is: we increased the threshold at which we switched from 'first fit' (go fast) to 'best fit' (pack tight) from 70% full to 96% full. With TB drives, each slab is at least 5GB and 4% is still 200MB plenty of space and no need to do anything radical before that. This gave us the biggest bang. Second, instead of trying to reuse the same primary slabs until it failed an allocation we decided to stop giving the primary slab this preferential threatment as soon as the biggest allocation that could be satisfied by a slab was down to 128K (metaslab_df_alloc_threshold). At that point we were ready to switch to another slab that had more free space. We also decided to reduce the SMO bonus. Before, a slab that was 50% empty was preferred over slabs that had never been used. In order to foster more write aggregation, we reduced the threshold to 33% empty. This means that a random write workload now spread to more slabs where each one will have larger amount of free space leading to more write aggregation. Finally we also saw that slab loading was contributing to lower performance and implemented a slab prefetch mechanism to reduce down time associated with that operation. The conjunction of all these changes lead to 50% improved OLTP and 70% reduced variability from run to run … OLTP Improvements in Sun Storage 7000 2010.Q1 (Performance Profiles) (2010-03-11) Alasdair on Everything » ZFS runs really slowly when free disk usage goes above 80% (2010-07-18) where commentary includes: … OpenSolaris has changed this in onnv revision 11146 … [CFT] Improved ZFS metaslab code (faster write speed) (2010-08-22)
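
    The quoted change log revolves around one mechanism: while a metaslab still has plenty of free space the allocator uses cheap first-fit, and only once usage crosses a threshold (moved from 70% to 96% in that release) does it switch to the tighter best-fit search. The following is a toy, in-memory illustration of that switch in C++, not ZFS code; the 0.96 default is simply taken from the quote above:

        #include <cstddef>
        #include <optional>
        #include <vector>

        struct FreeRun { std::size_t offset, length; };

        // Toy allocator over a list of free runs: first-fit while mostly empty,
        // best-fit (smallest run that still fits) once usage crosses the threshold.
        class ToyMetaslab {
            std::vector<FreeRun> free_;
            std::size_t capacity_, free_bytes_;
        public:
            explicit ToyMetaslab(std::size_t capacity)
                : free_{{0, capacity}}, capacity_(capacity), free_bytes_(capacity) {}

            std::optional<std::size_t> alloc(std::size_t n, double switch_threshold = 0.96) {
                double used = 1.0 - double(free_bytes_) / double(capacity_);
                std::size_t best = free_.size();
                for (std::size_t i = 0; i < free_.size(); ++i) {
                    if (free_[i].length < n) continue;
                    if (used < switch_threshold) { best = i; break; }          // first fit: go fast
                    if (best == free_.size() || free_[i].length < free_[best].length)
                        best = i;                                              // best fit: pack tight
                }
                if (best == free_.size()) return std::nullopt;                 // no run large enough
                std::size_t off = free_[best].offset;
                free_[best].offset += n;
                free_[best].length -= n;
                free_bytes_ -= n;
                return off;
            }
        };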

    Read the article

  • Can the lock function be used to implement thread-safe enumeration?

    - by Daniel
    I'm working on a thread-safe collection that uses Dictionary as a backing store. In C# you can do the following:

        private IEnumerable<KeyValuePair<K, V>> Enumerate()
        {
            if (_synchronize)
            {
                lock (_locker)
                {
                    foreach (var entry in _dict)
                        yield return entry;
                }
            }
            else
            {
                foreach (var entry in _dict)
                    yield return entry;
            }
        }

    The only way I've found to do this in F# is using Monitor, e.g.:

        let enumerate() =
            if synchronize then
                seq {
                    System.Threading.Monitor.Enter(locker)
                    try
                        for entry in dict -> entry
                    finally
                        System.Threading.Monitor.Exit(locker)
                }
            else
                seq { for entry in dict -> entry }

    Can this be done using the lock function? Or, is there a better way to do this in general? I don't think returning a copy of the collection for iteration will work because I need absolute synchronization.
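
    Language details aside, the underlying design choice is between handing out a lazy sequence that must keep the lock alive for as long as the caller iterates, and having the collection run the caller's code while it holds the lock itself. A sketch of the second approach in C++, using a hypothetical ConcurrentMap rather than the asker's collection:

        #include <functional>
        #include <map>
        #include <mutex>
        #include <string>

        // Hypothetical wrapper: instead of returning a lazily evaluated sequence that
        // must keep the lock alive, the collection runs the caller's code under its lock.
        class ConcurrentMap {
            std::map<std::string, int> data_;
            mutable std::mutex m_;
        public:
            void set(const std::string& k, int v) {
                std::lock_guard<std::mutex> lk(m_);
                data_[k] = v;
            }
            // The whole traversal happens while the lock is held, so the view is consistent.
            void for_each(const std::function<void(const std::string&, int)>& visit) const {
                std::lock_guard<std::mutex> lk(m_);
                for (const auto& [k, v] : data_) visit(k, v);
            }
        };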

    Read the article

  • Running script constantly in background: daemon, lock file with crontab, or simply loop?

    - by Mauritz Hansen
    I have a Perl script that:
    - queries a database for a list of files to process,
    - processes the files, and
    - then exits.
    Upon startup this script creates a file (let's say script.lock), and upon exit it removes this file. I have a crontab entry that runs this script every minute. If the lockfile exists then the script exits, assuming that another instance of itself is running. The above process works fine, but I am not very happy with the robustness of this approach. Specifically, if for some reason the script exits prematurely and the lockfile is not removed, then a new instance will not execute properly. I would appreciate some advice on the following:
    - Is using the lock file a good approach, or is there a better/more robust way to do this?
    - Is using crontab for this a good idea, or would I be better off writing an endless loop with sleep()?
    - Should I use the GNU 'daemon' program or the Perl Proc::Daemon module (or some other equivalent) for this?
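
    On the robustness concern, one common alternative to a create/delete lock file is an advisory file lock, which the kernel releases automatically when the process exits or crashes, so a failed run cannot leave a stale lock behind. A minimal POSIX sketch in C++ (the path /tmp/script.lock is only an example; Perl's built-in flock offers the same primitive):

        #include <cstdio>
        #include <fcntl.h>
        #include <sys/file.h>
        #include <unistd.h>

        // Returns a held file descriptor on success, or -1 if another instance is running.
        // The lock is released automatically when the process exits, however it exits.
        int acquire_single_instance_lock(const char* path) {
            int fd = open(path, O_CREAT | O_RDWR, 0644);
            if (fd == -1) return -1;
            if (flock(fd, LOCK_EX | LOCK_NB) == -1) {  // non-blocking: fail fast if locked
                close(fd);
                return -1;
            }
            return fd;  // keep it open for the lifetime of the process
        }

        int main() {
            int lock_fd = acquire_single_instance_lock("/tmp/script.lock");
            if (lock_fd == -1) {
                std::fprintf(stderr, "another instance appears to be running\n");
                return 1;
            }
            // ... query the database, process files ...
            close(lock_fd);  // optional; the OS releases the lock on exit anyway
            return 0;
        }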

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data dictionary language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements. MySQL before the InnoDB Plugin Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example: CREATE TABLE t(a INT); INSERT INTO t VALUES (1),(2),(3); CREATE INDEX a ON t(a); DROP TABLE t; The CREATE INDEX statement would be executed roughly as follows: CREATE TABLE temp(a INT, INDEX(a)); INSERT INTO temp SELECT * FROM t; RENAME TABLE t TO temp2; RENAME TABLE temp TO t; DROP TABLE temp2; You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time. Fast Index Creation in the InnoDB Plugin of MySQL 5.1 MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=0. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor. The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine. The ALTER TABLE interface in MySQL 5.6 The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods: handler::check_if_supported_inplace_alter Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed. handler::prepare_inplace_alter_table InnoDB uses this method to set up the data dictionary cache for upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash. 
handler::inplace_alter_table In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table). handler::commit_inplace_alter_table This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild. The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine. Online Secondary Index Creation It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases: Set up a stub for the index, for logging changes. Scan the table for index records. Sort the index records. Bulk load the index records. Apply the logged changes. Replace the stub with the actual index. Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion. Native ALTER TABLE Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of table. Threads that modify the table will also write the changes to the log. 
When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
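
    The online index creation steps described above (set up a change log, scan, sort, bulk load, then apply the logged changes during a short exclusive phase) form a general pattern rather than anything InnoDB-specific. Here is a deliberately simplified, in-memory C++ sketch of that pattern; none of the names come from the MySQL source, and the toy logs inserts only:

        #include <algorithm>
        #include <map>
        #include <mutex>
        #include <utility>
        #include <vector>

        // Toy "table": primary key -> value. The new "index" maps value -> primary key.
        class ToyTable {
            std::map<int, int> rows_;
            std::vector<std::pair<int, int>> change_log_;  // (pk, value) inserts made during a build
            bool build_in_progress_ = false;
            std::mutex m_;
        public:
            // Concurrent insert of a new row (a real engine also logs updates and deletes).
            void insert(int pk, int value) {
                std::lock_guard<std::mutex> lk(m_);
                rows_[pk] = value;
                if (build_in_progress_) change_log_.emplace_back(pk, value);
            }

            std::multimap<int, int> build_index_online() {
                std::vector<std::pair<int, int>> snapshot;  // (value, pk)
                {   // 1. Start logging and copy a snapshot. Held briefly here;
                    //    a real engine scans without blocking writers at all.
                    std::lock_guard<std::mutex> lk(m_);
                    build_in_progress_ = true;
                    for (const auto& [pk, v] : rows_) snapshot.emplace_back(v, pk);
                }
                // 2. Sort and bulk-load outside the lock while writers keep going.
                std::sort(snapshot.begin(), snapshot.end());
                std::multimap<int, int> index(snapshot.begin(), snapshot.end());
                {   // 3. Short exclusive phase: apply the logged changes, then "swap in".
                    std::lock_guard<std::mutex> lk(m_);
                    for (const auto& [pk, v] : change_log_) index.emplace(v, pk);
                    change_log_.clear();
                    build_in_progress_ = false;
                }
                return index;
            }
        };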

    Read the article

< Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >