Search Results

Search found 4909 results on 197 pages for 'vendor lock in'.

Page 61/197 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • Do condition variables still need a mutex if you're changing the checked value atomically?

    - by Joseph Garvin
    Here is the typical way to use a condition variable: // The reader(s) lock(some_mutex); if(protected_by_mutex_var != desired_value) some_condition.wait(some_mutex); unlock(some_mutex); // The writer lock(some_mutex); protected_by_mutex_var = desired_value; unlock(some_mutex); some_condition.notify_all(); But if protected_by_mutex_var is set atomically by say, a compare-and-swap instruction, does the mutex serve any purpose (other than that pthreads and other APIs require you to pass in a mutex)? Is it protecting state used to implement the condition? If not, is it safe then to do this?: // The writer protected_by_mutex_var = desired_value; some_condition.notify_all(); With the writer never directly interacting with the reader's mutex? If so, is it even necessary that different readers use the same mutex?
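
    A minimal C++11 sketch of the usual answer (not taken from the question itself): even if the flag is atomic, the writer should still take and release the readers' mutex before notifying, otherwise a reader can check the flag, be preempted just before wait(), miss the notification, and sleep forever. The names flag, m and cv below are illustrative.

        #include <atomic>
        #include <condition_variable>
        #include <mutex>

        std::atomic<bool> flag{false};
        std::mutex m;
        std::condition_variable cv;

        void reader() {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return flag.load(); });   // wait() releases m atomically
            // ... proceed once the flag is set ...
        }

        void writer() {
            { std::lock_guard<std::mutex> lk(m); flag.store(true); }  // the lock orders the store
            cv.notify_all();                                          // relative to any in-progress wait()
        }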

    Read the article

  • Rails: Polymorphic User Table a good idea with AuthLogic?

    - by sscirrus
    Hi everyone, I have a system where I need to login three user types: customers, companies, and vendors from one login form on the home page. I have created one User table that works according to AuthLogic's example app at http://github.com/binarylogic/authlogic_example. I have added a field called "User Type" that currently contains either 'Customer', 'Company', or 'Vendor'. Note: each user type contains many disparate fields so I'm not sure if Single Table Inheritance is the best way to go (would welcome corrections if this conclusion is invalid). Is this a polymorphic association where each of the three types is 'tagged' with a User record? How should my models look so I have the right relationships between my User table and my user types Customer, Company, Vendor? Thanks very much!
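
    For what it is worth, a hedged sketch of the polymorphic layout usually suggested for this kind of setup (model and column names here are assumptions, not taken from the question): one AuthLogic User row per login, pointing at its profile record.

        # app/models/user.rb
        class User < ActiveRecord::Base
          acts_as_authentic
          belongs_to :profile, :polymorphic => true   # needs profile_id + profile_type columns
        end

        class Customer < ActiveRecord::Base
          has_one :user, :as => :profile
        end

        class Company < ActiveRecord::Base
          has_one :user, :as => :profile
        end

        class Vendor < ActiveRecord::Base
          has_one :user, :as => :profile
        end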

    Read the article

  • JNI call to Intel or PPC arch jnilib function differs

    - by vinzenzweber
    I am making a call from within Java to a jnilib function and get different logs on PPC and Intel. My function definitions are as follows: private native int initHandler(long vendorID, long productID); JNIEXPORT jint JNICALL Java_com_sue_protocol_SerialPortObserverThread_initHandler( JNIEnv *env, jobject obj, jlong usbVendor, jlong usbProduct) When initHandler() is called, the logs differ on the platforms: Mac OS X/10.5.8/ppc, Java 1.5.0_22 Looking for devices matching vendor ID=0 and product ID=7982. Mac OS X/10.6.3/x86_64, Java 1.6.0_17 Looking for devices matching vendor ID=7982 and product ID=10. The sample source for this project can be found on http://github.com/vinzenzweber/USBEventHandler What do I need to do to get the JNI call right on ALL platforms?

    Read the article

  • Emacs on Windows: how to protect built-in el files from being accidentally edited

    - by RamyenHead
    On Linux they are all read-only, so there is no problem. But on MS Windows, what happens is this: I get curious about the definition of the command isearch-forward, so I type C-h f isearch-forward and click the isearch.el link in the help buffer to jump to the function's definition. While I am reading it, I press C-h or C-c many times. I have set Caps Lock as another Ctrl key, so sometimes I release Caps Lock too early, and C-h or C-c ends up inserting an h or a c instead. Sometimes I notice that and undo it, but sometimes I don't notice, and I may even save everything with C-x s. What is a good way to protect the built-in el files from me on MS Windows?
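
    One possible approach, sketched here for a reasonably recent Emacs (an assumption): locate the bundled lisp directory via locate-library and mark any buffer visiting a file under it read-only when it is opened.

        (defun my-protect-builtin-el ()
          "Make buffers visiting Emacs' bundled *.el files read-only."
          (let ((lisp-dir (file-name-directory (locate-library "isearch"))))
            (when (and buffer-file-name
                       (string-match-p (concat "\\`" (regexp-quote (file-truename lisp-dir)))
                                       (file-truename buffer-file-name)))
              (setq buffer-read-only t))))
        (add-hook 'find-file-hook 'my-protect-builtin-el)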

    Read the article

  • Understanding VS2010 C# parallel profiling results

    - by Haggai
    I have a program with many independent computations, so I decided to parallelize it. I use Parallel.For/Each. The results were okay for a dual-core machine - CPU utilization of about 80%-90% most of the time. However, with a dual Xeon machine (i.e. 8 cores) I get only about 30%-40% CPU utilization, although the program spends quite a lot of time (sometimes more than 10 seconds) on the parallel sections, and I see it employs about 20-30 more threads in those sections compared to serial sections. Each thread takes more than 1 second to complete, so I see no reason for them not to work in parallel - unless there is a synchronization problem. I used the built-in profiler of VS2010, and the results are strange. Even though I use locks in only one place, the profiler reports that about 85% of the program's time is spent on synchronization (also 5-7% sleep, 5-7% execution, under 1% IO). The locked code is only a cache (a dictionary) get/add: bool esn_found; lock (lock_load_esn) esn_found = cache.TryGetValue(st, out esn); if(!esn_found) { esn = pData.esa_inv_idx.esa[term_idx]; esn.populate(pData.esa_inv_idx.datafile); lock (lock_load_esn) { if (!cache.ContainsKey(st)) cache.Add(st, esn); } } lock_load_esn is a static member of the class, of type Object. esn.populate reads from a file using a separate StreamReader for each thread. However, when I press the Synchronization button to see what causes the most delay, I see that the profiler reports lines which are function entrance lines, and doesn't report the locked sections themselves. It doesn't even report the function that contains the above code (reminder - the only lock in the program) as part of the blocking profile with noise level 2%. With the noise level at 0% it reports all the functions of the program, and I don't understand why they count as blocking synchronizations. So my question is - what is going on here? How can it be that 85% of the time is spent on synchronization? How do I find out what really is the problem with the parallel sections of my program? Thanks.
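
    Not an answer to the profiler question itself, but a hedged sketch of how that cache could be written with .NET 4's ConcurrentDictionary to take the explicit locks out of the hot path. The Esn type and the pData object are stand-ins for whatever the real code uses, and note that the value factory may run more than once under contention.

        using System.Collections.Concurrent;

        static readonly ConcurrentDictionary<string, Esn> cache =
            new ConcurrentDictionary<string, Esn>();

        Esn GetOrLoadEsn(string st, int term_idx)
        {
            return cache.GetOrAdd(st, key =>
            {
                var esn = pData.esa_inv_idx.esa[term_idx];   // same load as in the original code
                esn.populate(pData.esa_inv_idx.datafile);
                return esn;
            });
        }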

    Read the article

  • Is SynchronizationContext.Post() threadsafe?

    - by cyclotis04
    This is a pretty basic question, and I imagine that it is, but I can't find any definitive answer. Is SynchronizationContext.Post() threadsafe? I have a member variable which holds the main thread's context, and _context.Post() is being called from multiple threads. I imagine that Post() could be called simultaneously on the object. Should I do something like lock (_contextLock) _context.Post(myDelegate, myEventArgs); or is that unnecessary? Edit: MSDN states that "Any instance members are not guaranteed to be thread safe." Should I keep my lock(), then?
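
    A hedged sketch of the pattern commonly cited for this (the handler names below are made up): the context is captured once on the main/UI thread and Post is then called from worker threads without an extra lock, since Post only queues the delegate onto the captured context. The MSDN boilerplate about instance members mostly applies to mutating shared state around the call rather than to Post itself, but treat that as the usual reading, not a guarantee.

        // Captured once, on the main/UI thread (assumption: this code runs there first).
        private readonly SynchronizationContext _context = SynchronizationContext.Current;

        // Called from any number of worker threads.
        private void OnDataReceived(MyEventArgs args)
        {
            // Post marshals the delegate to the captured context's message loop;
            // callers typically do not wrap this call in a lock of their own.
            _context.Post(state => HandleOnMainThread((MyEventArgs)state), args);
        }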

    Read the article

  • PPTP ping client to client error

    - by Linux Intel
    I installed pptp server on a centos 6 64bit server PPTP Server ip : 55.66.77.10 PPTP Local ip : 10.0.0.1 Client1 IP : 10.0.0.60 centos 5 64bit Client2 IP : 10.0.0.61 centos5 64bit PPTP Server can ping Client1 And client 1 can ping PPTP Server PPTP Server can ping Client2 And client 2 can ping PPTP Server The problem is client 1 can not ping Client 2 route -n on PPTP Server Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.60 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 10.0.0.61 0.0.0.0 255.255.255.255 UH 0 0 0 ppp1 55.66.77.10 0.0.0.0 255.255.255.248 U 0 0 0 eth0 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0 0.0.0.0 55.66.77.19 0.0.0.0 UG 0 0 0 eth0 route -n On Client 1 Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 55.66.77.10 70.14.13.19 255.255.255.255 UGH 0 0 0 eth0 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth1 0.0.0.0 70.14.13.19 0.0.0.0 UG 0 0 0 eth0 route -n On Client 2 Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 55.66.77.10 84.56.120.60 255.255.255.255 UGH 0 0 0 eth1 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0 0.0.0.0 84.56.120.60 0.0.0.0 UG 0 0 0 eth1 cat /etc/ppp/options.pptpd on PPTP server ############################################################################### # $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $ # # Sample Poptop PPP options file /etc/ppp/options.pptpd # Options used by PPP when a connection arrives from a client. # This file is pointed to by /etc/pptpd.conf option keyword. # Changes are effective on the next connection. See "man pppd". # # You are expected to change this file to suit your system. As # packaged, it requires PPP 2.4.2 and the kernel MPPE module. ############################################################################### # Authentication # Name of the local system for authentication purposes # (must match the second field in /etc/ppp/chap-secrets entries) name pptpd # Strip the domain prefix from the username before authentication. # (applies if you use pppd with chapms-strip-domain patch) #chapms-strip-domain # Encryption # (There have been multiple versions of PPP with encryption support, # choose with of the following sections you will use.) # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o # {{{ refuse-pap refuse-chap refuse-mschap # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft # Challenge Handshake Authentication Protocol, Version 2] authentication. require-mschap-v2 # Require MPPE 128-bit encryption # (note that MPPE requires the use of MSCHAP-V2 during authentication) require-mppe-128 # }}} # OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o # {{{ #-chap #-chapms # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft # Challenge Handshake Authentication Protocol, Version 2] authentication. #+chapms-v2 # Require MPPE encryption # (note that MPPE requires the use of MSCHAP-V2 during authentication) #mppe-40 # enable either 40-bit or 128-bit, not both #mppe-128 #mppe-stateless # }}} # Network and Routing # If pppd is acting as a server for Microsoft Windows clients, this # option allows pppd to supply one or two DNS (Domain Name Server) # addresses to the clients. The first instance of this option # specifies the primary DNS address; the second instance (if given) # specifies the secondary DNS address. 
#ms-dns 10.0.0.1 #ms-dns 10.0.0.2 # If pppd is acting as a server for Microsoft Windows or "Samba" # clients, this option allows pppd to supply one or two WINS (Windows # Internet Name Services) server addresses to the clients. The first # instance of this option specifies the primary WINS address; the # second instance (if given) specifies the secondary WINS address. #ms-wins 10.0.0.3 #ms-wins 10.0.0.4 # Add an entry to this system's ARP [Address Resolution Protocol] # table with the IP address of the peer and the Ethernet address of this # system. This will have the effect of making the peer appear to other # systems to be on the local ethernet. # (you do not need this if your PPTP server is responsible for routing # packets to the clients -- James Cameron) proxyarp # Normally pptpd passes the IP address to pppd, but if pptpd has been # given the delegate option in pptpd.conf or the --delegate command line # option, then pppd will use chap-secrets or radius to allocate the # client IP address. The default local IP address used at the server # end is often the same as the address of the server. To override this, # specify the local IP address here. # (you must not use this unless you have used the delegate option) #10.8.0.100 # Logging # Enable connection debugging facilities. # (see your syslog configuration for where pppd sends to) debug # Print out all the option values which have been set. # (often requested by mailing list to verify options) #dump # Miscellaneous # Create a UUCP-style lock file for the pseudo-tty to ensure exclusive # access. lock # Disable BSD-Compress compression nobsdcomp # Disable Van Jacobson compression # (needed on some networks with Windows 9x/ME/XP clients, see posting to # poptop-server on 14th April 2005 by Pawel Pokrywka and followups, # http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 ) novj novjccomp # turn off logging to stderr, since this may be redirected to pptpd, # which may trigger a loopback nologfd # put plugins here # (putting them higher up may cause them to sent messages to the pty) cat /etc/ppp/options.pptp on Client1 and Client2 ############################################################################### # $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $ # # Sample PPTP PPP options file /etc/ppp/options.pptp # Options used by PPP when a connection is made by a PPTP client. # This file can be referred to by an /etc/ppp/peers file for the tunnel. # Changes are effective on the next connection. See "man pppd". # # You are expected to change this file to suit your system. As # packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/ # and the kernel MPPE module available from the CVS repository also on # http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe. ############################################################################### # Lock the port lock # Authentication # We don't need the tunnel server to authenticate itself noauth # We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2 # (you may need to remove these refusals if the server is not using MPPE) refuse-pap refuse-eap refuse-chap refuse-mschap # Compression # Turn off compression protocols we know won't be used nobsdcomp nodeflate # Encryption # (There have been multiple versions of PPP with encryption support, # choose which of the following sections you will use. 
Note that MPPE # requires the use of MSCHAP-V2 during authentication) # # Note that using PPTP with MPPE and MSCHAP-V2 should be considered # insecure: # http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2 # https://github.com/moxie0/chapcrack/blob/master/README.md # http://technet.microsoft.com/en-us/security/advisory/2743314 # http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras # ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o # If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module # is not allowed and PPTP-MPPE is not available. # {{{ # Require MPPE 128-bit encryption #require-mppe-128 # }}} # http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec # ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o # {{{ # Require MPPE 128-bit encryption #mppe required,stateless # }}} IPtables are stopped on clients and server, Also net.ipv4.ip_forward = 1 is enabled on PPTP Server. How can i solve this problem .?
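
    One hedged observation from the routing tables above: on both clients the 10.0.0.0/8 route points at the local Ethernet interface, so packets addressed to the other client's tunnel address never enter ppp0 at all. A sketch of a possible fix (the exact prefix is an assumption) is to route the VPN subnet through the tunnel on each client and let the server forward between ppp0 and ppp1:

        # on Client1 and Client2 (ppp0 is the PPTP interface)
        ip route add 10.0.0.0/24 dev ppp0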

    Read the article

  • C++ Unlocking a std::mutex before calling std::unique_lock wait

    - by Sant Kadog
    I have a multithreaded application (using std::thread) with a manager (class Tree) that executes some piece of code on different subtrees (embedded struct SubTree) in parallel. The basic idea is that each instance of SubTree has a deque that store objects. If the deque is empty, the thread waits until a new element is inserted in the deque or the termination criteria is reached. One subtree can generate objects and push them in the deque of another subtree. For convenience, all my std::mutex, std::locks and std::variable_condition are stored in a struct called "locks". The class Tree creates some threads that run the following method (first attempt) : void Tree::launch(SubTree & st, Locks & locks ) { /* some code */ std::lock_guard<std::mutex> deque_lock(locks.deque_mutex_[st.id_]) ; // lock the access to the deque of subtree st if (st.deque_.empty()) // check that the deque is still empty { // some threads are still running, wait for them to terminate std::unique_lock<std::mutex> wait_lock(locks.restart_mutex_[st.id_]) ; locks.restart_condition_[st.id_].wait(wait_lock) ; } /* some code */ } The problem is that "deque_lock" is still locked while the thread is waiting. Hence no object can be added in the deque of the current thread by a concurrent one. So I turned the lock_guard into a unique_lock and managed the lock/unlock manually : void launch(SubTree & st, Locks & locks ) { /* some code */ std::unique_lock<std::mutex> deque_lock(locks.deque_mutex_[st.id_]) ; // lock the access to the deque of subtree st if (st.deque_.empty()) // check that the deque is still empty { deque_lock.unlock() ; // unlock the access to the deque to enable the other threads to add objects // DATA RACE : nothing must happen to the unprotected deque here !!!!!! // some threads are still running, wait for them to terminate std::unique_lock<std::mutex> wait_lock(locks.restart_mutex_[st.id_]) ; locks.restart_condition_[st.id_].wait(wait_lock) ; } /* some code */ } The problem now, is that there is a data race, and I would like to make sure that the "wait" instruction is performed directly after the "deque_lock.unlock()" one. Would anyone know a way to create such a critical instruction sequence with the standard library ? Thanks in advance.
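
    A hedged sketch of the standard pattern: guard the deque and the condition variable with the same mutex and use the predicate form of wait(), which releases the lock atomically while sleeping and re-acquires it before returning, so no manual unlock is needed and there is no unprotected window. This implies notifying under the deque mutex rather than a separate restart mutex; the stop_requested flag is a placeholder for the real termination test.

        void Tree::launch(SubTree & st, Locks & locks)
        {
            /* some code */
            std::unique_lock<std::mutex> deque_lock(locks.deque_mutex_[st.id_]);
            locks.restart_condition_[st.id_].wait(deque_lock, [&] {
                return !st.deque_.empty() || stop_requested;   // checked with the mutex held
            });
            // the mutex is held again here, so the deque can be used safely
            /* some code */
        }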

    Read the article

  • How can I check the version of an assembly then delete the assembly?

    - by Nescio
    I am using the FileVersionInfo to retrieve the version of a .Net assembly. Then, I want to immediately delete the file. Unfortunately after I call GetVersionInfo, any attempt to delete the file results in an error “…in use by another process…” Is there another technique to determine the version that does not lock the file? Or, is it possible to ensure the lock is released after calling GetVersionInfo? The below example is heavily simplified, but scope matches my real code. void Main() { var fvi = GetVersion("myPath"); if (fvi.ToString() == "2.0.0.7") DeleteFile("myPath"); } FileVersionInfo GetVersion(string path) { return FileVersionInfo.GetVersionInfo(path); } void DeleteFile(string path) { File.Delete(path); }
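
    One commonly suggested alternative, sketched here with two caveats: it only works for managed assemblies, and it reads the assembly version, which may differ from the Win32 file version that FileVersionInfo reports. AssemblyName.GetAssemblyName reads the manifest without loading the file into the AppDomain, so nothing should be left holding the file open.

        using System.IO;
        using System.Reflection;

        void Main()
        {
            var version = AssemblyName.GetAssemblyName("myPath").Version;
            if (version.ToString() == "2.0.0.7")
                File.Delete("myPath");
        }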

    Read the article

  • Is there any .Net JIT Support from chip vendors?

    - by NoMoreZealots
    I know that ARM actually has some support for Java (and Sun obviously does), but I haven't really seen any references to a chip vendor supporting a .Net JIT compiler. I know IBM and Intel both support C compilers, as do TI and many of the embedded chip vendors. When you think about it, all a JIT compiler is, is the last stages of compilation and optimization, which you would think would be a good match for a chip vendor's expertise. Perhaps a standardized plug-in compilation engine for the VM would make sense. Microsoft is targeting .Net at embedded Windows platforms as well, so they are fair game. Pete

    Read the article

  • Nhibernate setting query time out period for commands and pessimistic locking

    - by Nagesh
    I wish to specify a command timeout (or LOCK_TIMEOUT) for a SQL statement, and once this timeout is reached an exception (or alert) should be raised in NHibernate. The following is example pseudo-code of what I have written: using (var session = sessionFactory.OpenSession()) { using (var sqlTrans = session.BeginTransaction()) { ICriteria criteria = session.CreateCriteria(typeof(Foo)); criteria.SetTimeout(5); //Here is the specified command timeout, e.g. the SqlCommand.CommandTimeout property Foo fooObject = session.Load<Foo>(primaryKeyIntegerValue, LockMode.Force); session.SaveOrUpdate(fooObject); sqlTrans.Commit(); } } In SQL Server we used to achieve this with the following SQL: BEGIN TRAN SET LOCK_TIMEOUT 500 SELECT * FROM Foo WITH (UPDLOCK, ROWLOCK) WHERE PrimaryKeyID = 1000001 If the PrimaryKeyID row is locked by another transaction, SQL Server shows the following error message: Msg 1222, Level 16, State 51, Line 3 Lock request time out period exceeded Similarly, I wish to surface lock timeout or command timeout information using NHibernate. Please help me to achieve this. Thanks in advance for your help.
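
    A hedged sketch of one way to approximate the raw-SQL behaviour from inside NHibernate against SQL Server: issue SET LOCK_TIMEOUT on the session and translate error 1222 when it surfaces. The exception shape is an assumption - depending on the driver and mapping it may arrive as a GenericADOException wrapping a SqlException.

        // using NHibernate; using NHibernate.Exceptions; using System.Data.SqlClient;
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            try
            {
                session.CreateSQLQuery("SET LOCK_TIMEOUT 500").ExecuteUpdate();
                var foo = session.Load<Foo>(primaryKeyIntegerValue, LockMode.Upgrade);  // UPDLOCK-style lock
                session.SaveOrUpdate(foo);
                tx.Commit();
            }
            catch (GenericADOException ex)
            {
                var sqlEx = ex.InnerException as SqlException;
                if (sqlEx != null && sqlEx.Number == 1222)
                    Console.WriteLine("Lock request time out period exceeded");
                throw;
            }
        }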

    Read the article

  • Guidelines of when to use locking

    - by miguel
    I would like to know if there are any guidelines which a developer should follow as to when (and where) to place locks. For instance: I understand that code such as this should be locked, to avoid the possibility of another thread changing the value of SomeHeapValue unexpectedly. class Foo { public SomeHeapObject myObject; public void DoSummat(object inputValue_) { myObject.SomeHeapValue = inputValue_; } } My question is, however, how deep does one go with the locking? For instance, if we have this code: class Foo { public SomeHeapObject myObject; public void DoSummat(object inputValue_) { myObject.SomeHeapValue = GetSomeHeapValue(); } } Should we lock in the DoSummat(...) method, or should we lock in the GetSomeHeapValue() method? Are there any guidelines that you all keep in mind when structuring multi-threaded code?
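
    A hedged sketch of the guideline most answers converge on: take one lock around the whole invariant (the complete read-modify-write on the shared object), at the level that owns that invariant, rather than sprinkling locks inside every helper such as GetSomeHeapValue(). SomeHeapObject is defined here only to make the fragment compile.

        class SomeHeapObject { public object SomeHeapValue; }

        class Foo
        {
            private readonly object _sync = new object();
            private readonly SomeHeapObject _myObject = new SomeHeapObject();

            public void DoSummat(object inputValue_)
            {
                lock (_sync)   // one lock around the whole update, at the owning level
                {
                    _myObject.SomeHeapValue = GetSomeHeapValue();
                }
            }

            private object GetSomeHeapValue()
            {
                // no locking here; the caller decides the locking scope
                return new object();
            }
        }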

    Read the article

  • Synclock a section of code while waiting for ShowDialog to return

    - by clawson
    I'm having trouble working out how to lock my application out of a section of code while it waits for a response from an external program. I've used SyncLock on a section of code with the Me object in the expression. In this SyncLock I call an overridden ShowDialog method of a dialog box, which has a timeout parameter but still returns the value from the underlying ShowDialog call once the timer is set up. It works like this: SyncLock Me Dim frmDlgWithTimeout As New frmDlgWithTimeout ' dialog box with overridden ShowDialog ' Dim res As DialogResult = frmDlgWithTimeout.ShowDialog(10 * 1000) ' 10 sec timeout ' End SyncLock Now, external programs may raise events that bring my application to this SyncLock, but it doesn't prevent them from entering it, even though the ShowDialog function hasn't returned a value (which is what I thought would keep the section of code locked). There is only one instance of the object that is used for the lock in the program. Your help is greatly appreciated.

    Read the article

  • Java Caching on distributed environment

    - by Naren
    Hi, I am supposed to create a simple replicated cache in Java for internal use, which will be used in a distributed environment. I have seen that Oracle has implemented a Replicated Cache Service: http://wiki.tangosol.com/display/COH32UG/Replicated+Cache+Service The problem I am facing is that while doing an update or remove, I acquire a lock on the other caches until the cache gets updated and notifies the others of the change. This eventually runs into a deadlock situation while removing. Is there any strategy I should follow while updating or removing from the caches? Can I implement a replicated cache without having a primary cache? Thanks, Naren

    Read the article

  • What interprocess locking calls should I monitor?

    - by Matt Joiner
    I'm monitoring a process with strace/ltrace in the hope of finding and intercepting a call that checks, and potentially activates, some kind of globally shared lock. While I've dealt with and read about several forms of interprocess locking on Linux before, I'm drawing a blank on what calls to look for. Currently my only suspect is futex(), which comes up very early in the process' execution. Update0 There is some confusion about what I'm after. I'm monitoring an existing process for calls to persistent interprocess memory or equivalent. I'd like to know what system and library calls to look for. I have no intention of calling these myself, so naturally futex() will come up; I'm sure many libraries will implement their locking calls in terms of this, etc. Update1 I'd like a list of function names, or a link to documentation, that I should monitor at the ltrace and strace levels (and which names belong to which level). Any other good advice about how to track down and locate the global lock would be great.
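
    A hedged starting point at the strace level (syscall names only - library-level wrappers such as sem_wait or pthread_mutex_lock mostly reduce to futex underneath): file locks go through flock/fcntl, System V semaphores through semget/semop/semtimedop, shared memory through shmget, and POSIX/pthread primitives through futex.

        strace -f -e trace=futex,flock,fcntl,semget,semop,semtimedop,shmget,open -p <pid>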

    Read the article

  • Javascript / Flash : When exactly are flash external callback methods triggered ?

    - by felace
    I have a flash application using callbacks to javascript functions (eg. when it receives some data over a socket, it'll call a js script which would change the content of a div according to that given data). Afaik, there is no actual mutual exclusion in javascript so I'm not sure if I can/need to simulate something like : callbackFunc() { lock(mutex1) foo unlock(mutex1) } ... someOtherFunc() { lock(mutex1) bar unlock(mutex1) } So, the question is, when are those callbacks called ? Are they simply queued to be executed right after the browser finishes its task or are they triggered randomly ?

    Read the article

  • Locking a file to verify a single execution of a service. How reliable?

    - by Camilo Díaz
    Hello, I am deploying a little service to a UNIX (AIX) system. I want to check that no other instance of that service is running when starting it. How reliable is it to implement that check like this? Try to acquire a lock on a file (with FileChannel). If it succeeds, keep the lock and continue execution. If it fails, exit and refuse to run the main body. I am aware of software like the Tanuki wrapper; however, I'm longing for a simpler (maybe not portable) solution. Regarding PID files: I want to avoid using them if possible, as I have neither administrative rights on the machine nor knowledge of AIX shell programming.
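
    That approach is generally considered reliable on a local filesystem (the usual caveats about NFS-mounted directories apply), as long as the lock and its file stay open for the whole lifetime of the process. A minimal sketch, with an assumed lock-file path:

        import java.io.RandomAccessFile;
        import java.nio.channels.FileLock;

        public class SingleInstanceGuard {
            // Keep strong references so the lock lives as long as the JVM does.
            private static RandomAccessFile lockFile;
            private static FileLock lock;

            public static void main(String[] args) throws Exception {
                lockFile = new RandomAccessFile("/tmp/myservice.lock", "rw");  // assumed path
                lock = lockFile.getChannel().tryLock();
                if (lock == null) {                 // held by another process
                    System.err.println("Another instance is already running; exiting.");
                    System.exit(1);
                }
                // ... run the service's main body here ...
            }
        }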

    Read the article

  • python multiprocess update dictionary synchronously

    - by user1050325
    I am trying to update one common dictionary through multiple processes. Could you please help me find out what is the problem with this code? I get the following output: inside function {1: 1, 2: -1} comes here inside function {1: 0, 2: 2} comes here {1: 0, 2: -1} Thanks. from multiprocessing import Lock, Process, Manager l= Lock() def computeCopyNum(test,val): l.acquire() test[val]=val print "inside function" print test l.release() return a=dict({1: 0, 2: -1}) procs=list() for i in range(1,3): p = Process(target=computeCopyNum, args=(a,i)) procs.append(p) p.start() for p in procs: p.join() print "comes here" print a
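
    A hedged sketch of the usual diagnosis: a plain dict passed in args is copied into each child process, so the parent's copy never changes. Sharing a Manager().dict() proxy (which the code already imports but never uses) makes the updates visible; the lock is then only needed for compound read-modify-write sequences.

        from multiprocessing import Manager, Process

        def compute_copy_num(shared, val, lock):
            with lock:                 # only needed for read-modify-write sequences
                shared[val] = val

        if __name__ == "__main__":
            manager = Manager()
            a = manager.dict({1: 0, 2: -1})
            lock = manager.Lock()      # a lock that can be shared across processes
            procs = [Process(target=compute_copy_num, args=(a, i, lock)) for i in range(1, 3)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print(dict(a))             # expected: {1: 1, 2: 2}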

    Read the article

  • How to create relationship between two tables with revisions using Entity Framework

    - by Chris Ridenour
    So I am in the process of redesigning a small database (and potentially a much larger one) but want to show the value of using revisions / history of the business objects. I am switching the data from Access to MSSQL 2008. I am having a lot of internal debate on what version of "revision history" to use in the design itself - and thought I had decided to add a "RevisionId" to all tables. With this design - adding a RevisionId to all tables we would like tracked - what would be the best way to create Navigational Properties and Relationships between two tables such as | Vendor | VendorContact | where a Vendor can have multiple contacts. The Contacts themselves will be under revision. Will it require custom extensions or am I over thinking this? Thanks in advance.

    Read the article

  • Is Graphics.DrawImage asynchronous?

    - by Roy
    Hi all, I was just wondering, is Graphics.DrawImage() asynchronous? I'm struggling with a thread safety issue and can't figure out where the problem is. if i use the following code in the GUI thread: protected override void OnPaint(PaintEventArgs e) { lock (_bitmapSyncRoot) { e.Graphics.DrawImage(_bitmap, _xPos, _yPos); } } And have the following code in a separate thread: private void RedrawBitmapThread() { Bitmap newBitmap = new Bitmap(_width, _height); // Draw bitmap // Bitmap oldBitmap = null; lock (_bitmapSyncRoot) { oldBitmap = _bitmap; _bitmap = newBitmap; } if (oldBitmap != null) { oldBitmap.Dispose(); } Invoke(Invalidate); } Could that explain an accessviolation exception? The code is running on a windows mobile 6.1 device with compact framework 3.5.

    Read the article

  • Rails 3 NameError (cannot remove Object::Version) error

    - by Jeff D
    Can anyone point me at what might be causing this error? There is no ApplicationTrace, and it locks the server hard on my development machine. I think it has something to do with the way rails reloads your classes in development mode, and it appears to have something to do with a Version constant. I can't find a reference to this though. Can anyone point me in the direction of what would cause this? activesupport (3.0.3) lib/active_support/dependencies.rb:645:in `remove_const' activesupport (3.0.3) lib/active_support/dependencies.rb:645:in `remove_constant' activesupport (3.0.3) lib/active_support/dependencies.rb:645:in `instance_eval' activesupport (3.0.3) lib/active_support/dependencies.rb:645:in `remove_constant' activesupport (3.0.3) lib/active_support/dependencies.rb:521:in `remove_unloadable_constants!' activesupport (3.0.3) lib/active_support/dependencies.rb:521:in `each' activesupport (3.0.3) lib/active_support/dependencies.rb:521:in `remove_unloadable_constants!' activesupport (3.0.3) lib/active_support/dependencies.rb:317:in `clear' railties (3.0.3) lib/rails/application/bootstrap.rb:60:in `_callback_after_7' activesupport (3.0.3) lib/active_support/callbacks.rb:419:in `_run_call_callbacks' actionpack (3.0.3) lib/action_dispatch/middleware/callbacks.rb:44:in `call' rack (1.2.1) lib/rack/sendfile.rb:107:in `call' actionpack (3.0.3) lib/action_dispatch/middleware/remote_ip.rb:48:in `call' actionpack (3.0.3) lib/action_dispatch/middleware/show_exceptions.rb:46:in `call' railties (3.0.3) lib/rails/rack/logger.rb:13:in `call' rack (1.2.1) lib/rack/runtime.rb:17:in `call' activesupport (3.0.3) lib/active_support/cache/strategy/local_cache.rb:72:in `call' rack (1.2.1) lib/rack/lock.rb:11:in `call' rack (1.2.1) lib/rack/lock.rb:11:in `synchronize' rack (1.2.1) lib/rack/lock.rb:11:in `call' actionpack (3.0.3) lib/action_dispatch/middleware/static.rb:30:in `call' railties (3.0.3) lib/rails/application.rb:168:in `call' railties (3.0.3) lib/rails/application.rb:77:in `send' railties (3.0.3) lib/rails/application.rb:77:in `method_missing' railties (3.0.3) lib/rails/rack/log_tailer.rb:14:in `call' rack (1.2.1) lib/rack/content_length.rb:13:in `call' rack (1.2.1) lib/rack/handler/webrick.rb:52:in `service' /Volumes/files/jeffdeville/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/webrick/httpserver.rb:104:in `service' /Volumes/files/jeffdeville/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/webrick/httpserver.rb:65:in `run' /Volumes/files/jeffdeville/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/webrick/server.rb:173:in `start_thread' /Volumes/files/jeffdeville/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/webrick/server.rb:162:in `start' /Volumes/files/jeffdeville/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/webrick/server.rb:162:in `start_thread' /Volumes/files/jeffdeville/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/webrick/server.rb:95:in `start' /Volumes/files/jeffdeville/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/webrick/server.rb:92:in `each' /Volumes/files/jeffdeville/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/webrick/server.rb:92:in `start' /Volumes/files/jeffdeville/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/webrick/server.rb:23:in `start' /Volumes/files/jeffdeville/.rvm/rubies/ree-1.8.7-2010.02/lib/ruby/1.8/webrick/server.rb:82:in `start' rack (1.2.1) lib/rack/handler/webrick.rb:13:in `run' rack (1.2.1) lib/rack/server.rb:213:in `start' railties (3.0.3) lib/rails/commands/server.rb:65:in `start' railties (3.0.3) lib/rails/commands.rb:30 railties (3.0.3) lib/rails/commands.rb:27:in `tap' railties 
(3.0.3) lib/rails/commands.rb:27 script/rails:6:in `require' script/rails:6

    Read the article

  • MySQL Inserting into locked aliased table

    - by Whitey
    I am trying to insert data into an InnoDB MySQL table which is locked using an alias, and I cannot for the life of me get it to work! The following works: LOCK TABLES Problems p1 WRITE, Problems p2 WRITE, Server READ; SELECT * FROM Problems p1; UNLOCK TABLES; But when I try to do an insert it doesn't work (it claims there is a syntax error around the 'p1' in my INSERT): LOCK TABLES Problems p1 WRITE, Problems p2 WRITE, Server READ; INSERT INTO Problems p1 (SomeCol) VALUES(43534); UNLOCK TABLES; Help please!
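
    A hedged sketch of the usual explanation: MySQL's INSERT syntax does not accept a table alias after the table name, and under LOCK TABLES a table may only be used by the exact name or alias it was locked as. So lock Problems once under its real name for the INSERT and keep the aliases for the SELECTs:

        LOCK TABLES Problems WRITE, Problems p1 WRITE, Problems p2 WRITE, Server READ;
        INSERT INTO Problems (SomeCol) VALUES (43534);
        SELECT * FROM Problems p1;
        UNLOCK TABLES;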

    Read the article

  • php is not recognized as an intern command (using windows)

    - by Tristan
    Hello, I want to develop using a framework called Symfony, BUT I do not have a Mac and I don't want to dual boot with Debian. I tried virtual hosts via VirtualBox but it's too crappy, so I decided to stay on Windows. So when the tutorial tells me to run php lib/vendor/symfony/data/bin/check_configuration.php, I do this in the Windows cmd: php lib\vendor\symfony\data\bin\check_configuration.php and it tells me: 'php' is not recognized as an internal or external command. I use UwAmp (PHP 5.2.12 + MySQL + Apache), which is stored in E:\logiciels\UwAmp\. Inside I have a bunch of files, but I guess this is the important one: E:\logiciels\UwAmp\apache\php_5.2.11 How can I make the php command work properly in the Windows cmd?
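
    A hedged sketch of the usual fix: cmd only finds php if the folder containing php.exe is on PATH, so either call it by full path or extend PATH for the session. The folder below is taken from the question and assumed to contain php.exe.

        set PATH=%PATH%;E:\logiciels\UwAmp\apache\php_5.2.11
        php lib\vendor\symfony\data\bin\check_configuration.php

        rem or, without touching PATH:
        E:\logiciels\UwAmp\apache\php_5.2.11\php.exe lib\vendor\symfony\data\bin\check_configuration.php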

    Read the article

  • Permanent mutex locking causing deadlock?

    - by Daniel
    I am having a problem with mutexes (pthread_mutex on Linux) where if a thread locks a mutex again right after unlocking it, another thread is not very successful at getting the lock. I've attached test code where one mutex is created, along with two threads that in an endless loop lock the mutex, sleep for a while and unlock it again. The output I expect to see is "alive" messages from both threads, alternating one from each (e.g. 121212121212). However, what I get is that one thread gets the majority of the locks (e.g. 111111222222222111111111 or just 1111111111111...). If I add a usleep(1) after the unlocking, everything works as expected. Apparently when the thread goes to sleep the other thread gets its lock - however this is not the way I was expecting it, as the other thread has already called pthread_mutex_lock. I suspect this is the way it is implemented, in that the active thread has priority, but it causes certain problems in this particular test case. Is there any way to prevent it (short of adding a deliberately large enough delay or some kind of signaling), or where is my error in understanding? #include <pthread.h> #include <stdio.h> #include <string.h> #include <sys/time.h> #include <unistd.h> pthread_mutex_t mutex; void* threadFunction(void *id) { int count=0; while(true) { pthread_mutex_lock(&mutex); usleep(50*1000); pthread_mutex_unlock(&mutex); // usleep(1); ++count; if (count % 10 == 0) { printf("Thread %d alive\n", *(int*)id); count = 0; } } return 0; } int main() { // create one mutex pthread_mutexattr_t attr; pthread_mutexattr_init(&attr); pthread_mutex_init(&mutex, &attr); // create two threads pthread_t thread1; pthread_t thread2; pthread_attr_t attributes; pthread_attr_init(&attributes); int id1 = 1, id2 = 2; pthread_create(&thread1, &attributes, &threadFunction, &id1); pthread_create(&thread2, &attributes, &threadFunction, &id2); pthread_attr_destroy(&attributes); sleep(1000); return 0; }
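
    A hedged sketch of one way to force the alternation the test expects, since pthread mutexes make no fairness guarantee and the running thread usually re-acquires the lock before the waiter is even scheduled: hand the "turn" over explicitly with a condition variable.

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
        static int turn = 1;

        static void *threadFunction(void *id)
        {
            int me = *(int *)id;
            for (;;) {
                pthread_mutex_lock(&mutex);
                while (turn != me)                  /* sleep until it is our turn */
                    pthread_cond_wait(&cond, &mutex);
                usleep(50 * 1000);                  /* the "work" done under the lock */
                printf("%d", me);
                fflush(stdout);
                turn = (me == 1) ? 2 : 1;           /* hand the lock over explicitly */
                pthread_cond_broadcast(&cond);
                pthread_mutex_unlock(&mutex);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t t1, t2;
            int id1 = 1, id2 = 2;
            pthread_create(&t1, NULL, threadFunction, &id1);
            pthread_create(&t2, NULL, threadFunction, &id2);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            return 0;
        }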

    Read the article

  • Weird exception with delayed_job

    - by Tam
    Trying to queue a job with delayed_job as follows: Delayed::Job.enqueue(BackgroundProcess.new(current_user, object)) current_user and object are not nil when I print them out. The weird thing is that sometimes refreshing the page or running the command again works! Here is the exception trace: Delayed::Backend::ActiveRecord::Job Columns (44.8ms) SHOW FIELDS FROM `delayed_jobs` TypeError (wrong argument type nil (expected Data)): /Users/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/yaml.rb:391:in `emit' /Users/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/yaml.rb:391:in `quick_emit' /Users/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/1.9.1/yaml/rubytypes.rb:86:in `to_yaml' vendor/plugins/delayed_job/lib/delayed/backend/base.rb:65:in `payload_object=' activerecord (2.3.9) lib/active_record/base.rb:2918:in `block in assign_attributes' activerecord (2.3.9) lib/active_record/base.rb:2914:in `each' activerecord (2.3.9) lib/active_record/base.rb:2914:in `assign_attributes' activerecord (2.3.9) lib/active_record/base.rb:2787:in `attributes=' activerecord (2.3.9) lib/active_record/base.rb:2477:in `initialize' activerecord (2.3.9) lib/active_record/base.rb:725:in `new' activerecord (2.3.9) lib/active_record/base.rb:725:in `create' vendor/plugins/delayed_job/lib/delayed/backend/base.rb:21:in `enqueue'
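
    A hedged workaround often suggested for this class of YAML-serialization failure (the class and attribute names below are placeholders): enqueue plain ids instead of live ActiveRecord objects and reload the records inside perform, so nothing exotic has to survive to_yaml.

        class BackgroundProcess < Struct.new(:user_id, :record_id)
          def perform
            user   = User.find(user_id)
            record = Widget.find(record_id)   # Widget stands in for the real model
            # ... do the actual work with user and record ...
          end
        end

        Delayed::Job.enqueue(BackgroundProcess.new(current_user.id, object.id))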

    Read the article
