Search Results

Search found 581 results on 24 pages for 'abort'.

Page 20 of 24

  • Migration Core Data error code 134130

    - by magichero
    I want to migrate two Core Data databases, and I have read the Apple developer documentation. For the first database, I added some attributes (string, integer and date properties) to the new version of the model and, following all the steps, the migration succeeded. For the second database, I also added attributes (string, integer, date, transformable and binary-data properties) to the new version of the model and followed the same steps as for the first database, but the system returns error 134130. Here is the code:

    ```objc
    if (persistentStoreCoordinator_) {
        PMReleaseSafely(persistentStoreCoordinator_);
    }

    // Notify
    NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
    [nc postNotificationName:GCalWillMigrationNotification object:self];

    NSString *sourceStoreType = NSSQLiteStoreType;
    NSString *dataStorePath = [PMUtility dataStorePathForName:GCalDBWarehousePersistentStoreName];
    NSURL *storeURL = [NSURL fileURLWithPath:dataStorePath];
    BOOL storeExists = [[NSFileManager defaultManager] fileExistsAtPath:dataStorePath];

    NSError *error = nil;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
        [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption,
        nil];
    persistentStoreCoordinator_ = [[NSPersistentStoreCoordinator alloc]
        initWithManagedObjectModel:self.managedObjectModel];
    [persistentStoreCoordinator_ addPersistentStoreWithType:sourceStoreType
                                              configuration:nil
                                                        URL:storeURL
                                                    options:options
                                                      error:&error];
    if (error != nil) {
        abort();
    }
    ```

    error is not nil, and below is the log:

    ```
    Error Domain=NSCocoaErrorDomain Code=134130 "The operation couldn't be completed. (Cocoa error 134130.)"
    UserInfo=0x856f790 {
      URL=file://localhost/Users/greensun/Library/Application%20Support/iPhone%20Simulator/5.0/Applications/D10712DE-D9FE-411A-8182-C4F58C60EC6D/Library/Application%20Support/Promise%20Mail/GCalDBWarehouse.sqlite,
      metadata={type = immutable dict, count = 7, entries =
        2 : {contents = "NSStoreModelVersionIdentifiers"} = {type = immutable, count = 1, values = ( 0 : {contents = ""} )}
        4 : {contents = "NSPersistenceFrameworkVersion"} = {value = +386, type = kCFNumberSInt64Type}
        6 : {contents = "NSStoreModelVersionHashes"} = {type = immutable dict, count = 2, entries =
          0 : {contents = "SyncEvent"} = {length = 32, capacity = 32, bytes = 0xfdae355f55c13fbd0344415fea26c8bb ... 4c1721aadd4122aa}
          1 : {contents = "ImportEvent"} = {length = 32, capacity = 32, bytes = 0x7676888f0d7eaff4d1f844343028ce02 ... 040af6cbe8c5fd01}
        }
        7 : {contents = "NSStoreUUID"} = {contents = "51678BAC-CCFB-4D00-AF5C-8FA1BEDA6440"}
        8 : {contents = "NSStoreType"} = {contents = "SQLite"}
        9 : {contents = "_NSAutoVacuumLevel"} = {contents = "2"}
        10 : {contents = "NSStoreModelVersionHashesVersion"} = {value = +3, type = kCFNumberSInt32Type}
      },
      reason=Can't find model for source store
    }
    ```

    I have tried a lot of solutions but none of them work. All I did was add more attributes to the two new model versions, and only one of the migrations succeeds. Thanks for reading and for your help.

    Read the article

  • Is DataRow thread safe? How to update a single datarow in a datatable using multiple threads? - .net

    - by NLV
    Hello all. I want to update a single DataRow in a DataTable using multiple threads. Is this actually possible? I've written the following code, implementing simple multi-threading to update a single DataRow, and I get different results each time. Why is that?

    ```csharp
    public partial class Form1 : Form
    {
        private static DataTable dtMain;
        private static string threadMsg = string.Empty;

        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            Thread[] thArr = new Thread[5];
            dtMain = new DataTable();
            dtMain.Columns.Add("SNo");
            DataRow dRow = dtMain.NewRow();
            dRow["SNo"] = 5;
            dtMain.Rows.Add(dRow);
            dtMain.AcceptChanges();

            ThreadStart ts = new ThreadStart(delegate { dtUpdate(); });
            thArr[0] = new Thread(ts);
            thArr[1] = new Thread(ts);
            thArr[2] = new Thread(ts);
            thArr[3] = new Thread(ts);
            thArr[4] = new Thread(ts);
            thArr[0].Start();
            thArr[1].Start();
            thArr[2].Start();
            thArr[3].Start();
            thArr[4].Start();

            while (!WaitTillAllThreadsStopped(thArr))
            {
                Thread.Sleep(500);
            }
            foreach (Thread thread in thArr)
            {
                if (thread != null && thread.IsAlive)
                {
                    thread.Abort();
                }
            }
            dgvMain.DataSource = dtMain;
        }

        private void dtUpdate()
        {
            for (int i = 0; i < 1000; i++)
            {
                try
                {
                    dtMain.Rows[0][0] = Convert.ToInt32(dtMain.Rows[0][0]) + 1;
                    dtMain.AcceptChanges();
                }
                catch
                {
                    continue;
                }
            }
        }

        private bool WaitTillAllThreadsStopped(Thread[] threads)
        {
            foreach (Thread thread in threads)
            {
                if (thread != null && thread.ThreadState == ThreadState.Running)
                {
                    return false;
                }
            }
            return true;
        }
    }
    ```

    Any thoughts on this? Thank you, NLV
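    DataRow and DataTable make no thread-safety guarantees for concurrent writes, so the unsynchronized read-modify-write above (read Rows[0][0], add one, write it back) and the concurrent AcceptChanges calls can interleave and lose updates, which is why the final count differs from run to run. Below is a minimal sketch, not the original form code, that serializes access with a single lock and joins the threads instead of aborting them; with it the result is deterministic:

    ```csharp
    using System;
    using System.Data;
    using System.Threading;

    class SafeCounterDemo
    {
        static readonly object tableLock = new object(); // guards all DataTable access
        static DataTable dtMain;

        static void Main()
        {
            dtMain = new DataTable();
            dtMain.Columns.Add("SNo", typeof(int));
            dtMain.Rows.Add(5);

            var threads = new Thread[5];
            for (int t = 0; t < threads.Length; t++)
            {
                threads[t] = new Thread(() =>
                {
                    for (int i = 0; i < 1000; i++)
                    {
                        lock (tableLock) // one thread at a time does the read-modify-write
                        {
                            dtMain.Rows[0][0] = (int)dtMain.Rows[0][0] + 1;
                        }
                    }
                });
                threads[t].Start();
            }
            foreach (var t in threads) t.Join(); // Join waits for completion; no Abort needed

            lock (tableLock) { dtMain.AcceptChanges(); } // commit once, after all writers finish
            Console.WriteLine(dtMain.Rows[0][0]);        // deterministically 5 + 5 * 1000 = 5005
        }
    }
    ```

    If the counter did not have to live inside a DataTable, Interlocked.Increment on a plain int field would be cheaper than a lock; the lock is only needed here because the value sits behind the DataRow indexer.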

    Read the article

  • What is "Either the application has not called WSAStartup, or WSAStartup failed"

    - by Am1rr3zA
    I am developing a program that connects to a web server over the network. Everything worked fine until I tried to use a dongle to protect my software. The dongle has a network feature too, and its API works over the network infrastructure. When I added the dongle-checking code to my program I got this error: "Either the application has not called WSAStartup, or WSAStartup failed". I don't have any idea what is going on. The scenario in which I get the exception: I log in to the program (everything works fine), then pull out the dongle; the program stops and asks for the dongle; I plug the dongle back in and try to log in again, but I get the exception on the line response = (HttpWebResponse)request.GetResponse(); Here is the block of code that raises the exception:

    ```csharp
    DongleService unikey = new DongleService();
    checkDongle = unikey.isConnectedNow();
    if (checkDongle)
    {
        isPass = true;
        this.username = txtbxUser.Text;
        this.pass = txtbxPass.Text;
        this.IP = combobxServer.Text;
        string uri = @"https://" + combobxServer.Text + ":5002num_events=1";
        request = (HttpWebRequest)WebRequest.Create(uri);
        request.Proxy = null;
        request.Credentials = new NetworkCredential(this.username, this.pass);
        ServicePointManager.ServerCertificateValidationCallback =
            ((sender, certificate, chain, sslPolicyErrors) => true);
        response = (HttpWebResponse)request.GetResponse();

        Properties.Settings.Default.User = txtbxUser.Text;
        int index = _servers.FindIndex(p => p == combobxServer.Text);
        if (index == -1)
        {
            _servers.Add(combobxServer.Text);
            Config_Save.SaveServers(_servers);
            _servers = Config_Save.LoadServers();
        }
        Properties.Settings.Default.Server = combobxServer.Text;
        // also save the password
        if (checkBox1.CheckState.ToString() == "Checked")
            Properties.Settings.Default.Pass = txtbxPass.Text;
        Properties.Settings.Default.settingLoginUsername = this.username;
        Properties.Settings.Default.settingLoginPassword = this.pass;
        Properties.Settings.Default.settingLoginPort = "5002";
        Properties.Settings.Default.settingLoginIP = this.IP;
        Properties.Settings.Default.isLogin = "guest";
        Properties.Settings.Default.Save();
        response.Close();
        request.Abort();
        this.isPass = true;
        this.Close();
    }
    else
    {
        MessageBox.Show("Please Insert Correct Dongle!", "Dongle Error",
            MessageBoxButtons.OK, MessageBoxIcon.Stop);
    }
    ```
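    That error text usually means WSACleanup has been called for the whole process, after which no new socket (including the one behind HttpWebRequest) can be created until WSAStartup runs again; some dongle SDKs tear Winsock down when the key is pulled. One hedged workaround, assuming the SDK really is the culprit, is to isolate the dongle check in a small helper process so its Winsock state cannot affect the main application. DongleCheck.exe here is a hypothetical helper that links the SDK and exits with code 0 when the key is present:

    ```csharp
    using System.Diagnostics;

    static bool IsDongleConnected()
    {
        // Hypothetical helper executable: any WSACleanup the dongle SDK
        // performs then happens in the child process, not in this one.
        var psi = new ProcessStartInfo("DongleCheck.exe")
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var p = Process.Start(psi))
        {
            p.WaitForExit(5000);
            return p.HasExited && p.ExitCode == 0;
        }
    }
    ```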

    Read the article

  • Problem in running the boost example blocking_udp_echo_client on Mac OS X

    - by n179911
    I am trying to run blocking_udp_echo_client on Mac OS X: http://www.boost.org/doc/libs/1_35_0/doc/html/boost_asio/example/echo/blocking_udp_echo_client.cpp I run it with the arguments 'localhost 9000', but the program crashes, and this is the line in the source that crashes: `udp::socket s(io_service, udp::endpoint(udp::v4(), 0));` This is the stack trace:

    ```
    #0  0x918c3e42 in __kill
    #1  0x918c3e34 in kill$UNIX2003
    #2  0x9193623a in raise
    #3  0x91942679 in abort
    #4  0x940d96f9 in __gnu_debug::_Error_formatter::_M_error
    #5  0x0000e76e in __gnu_debug::_Safe_iterator::op_base* , __gnu_debug_def::list::op_base*, std::allocator::op_base* ::_Safe_iterator at safe_iterator.h:124
    #6  0x00014729 in boost::asio::detail::hash_map::op_base*::bucket_type::bucket_type at hash_map.hpp:277
    #7  0x00019e97 in std::_Construct::op_base*::bucket_type, boost::asio::detail::hash_map::op_base*::bucket_type at stl_construct.h:81
    #8  0x0001a457 in std::__uninitialized_fill_n_aux::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:194
    #9  0x0001a4e1 in std::uninitialized_fill_n::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:218
    #10 0x0001a509 in std::__uninitialized_fill_n_a::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:310
    #11 0x0001aa34 in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::_M_fill_insert at vector.tcc:365
    #12 0x0001acda in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::insert at stl_vector.h:658
    #13 0x0001ad81 in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::resize at stl_vector.h:427
    #14 0x0001ae3a in __gnu_debug_def::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::resize at vector:169
    #15 0x0001b7be in boost::asio::detail::hash_map::op_base*::rehash at hash_map.hpp:221
    #16 0x0001bbeb in boost::asio::detail::hash_map::op_base*::hash_map at hash_map.hpp:67
    #17 0x0001bc74 in boost::asio::detail::reactor_op_queue::reactor_op_queue at reactor_op_queue.hpp:42
    #18 0x0001bd24 in boost::asio::detail::kqueue_reactor::kqueue_reactor at kqueue_reactor.hpp:86
    #19 0x0001c000 in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
    #20 0x0001c14d in boost::asio::use_service at io_service.ipp:195
    #21 0x0001c26d in boost::asio::detail::reactive_socket_service ::reactive_socket_service at reactive_socket_service.hpp:111
    #22 0x0001c344 in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
    #23 0x0001c491 in boost::asio::use_service at io_service.ipp:195
    #24 0x0001c4d5 in boost::asio::datagram_socket_service::datagram_socket_service at datagram_socket_service.hpp:95
    #25 0x0001c59e in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
    #26 0x0001c6eb in boost::asio::use_service at io_service.ipp:195
    #27 0x0001c711 in boost::asio::basic_io_object ::basic_io_object at basic_io_object.hpp:72
    #28 0x0001c783 in boost::asio::basic_socket ::basic_socket at basic_socket.hpp:108
    #29 0x0001c865 in boost::asio::basic_datagram_socket ::basic_datagram_socket at basic_datagram_socket.hpp:107
    #30 0x000027bc in main at main.cpp:32
    ```

    This is the gdb output:

    ```
    (gdb) continue
    /Developer/SDKs/MacOSX10.5.sdk/usr/include/c++/4.0.0/debug/safe_iterator.h:127: error: attempt to copy-construct an iterator from a singular iterator.
    Objects involved in the operation:
    iterator "this" @ 0x0x100420 { type = N11__gnu_debug14_Safe_iteratorIN10__gnu_norm14_List_iteratorISt4pairIiPN5boost4asio6detail16reactor_op_queueIiE7op_baseEEEEN15__gnu_debug_def4listISB_SaISB_EEEEE (mutable iterator); state = singular; }
    iterator "other" @ 0x0xbfffe8a4 { type = N11__gnu_debug14_Safe_iteratorIN10__gnu_norm14_List_iteratorISt4pairIiPN5boost4asio6detail16reactor_op_queueIiE7op_baseEEEEN15__gnu_debug_def4listISB_SaISB_EEEEE (mutable iterator); state = singular; }
    Program received signal: “SIGABRT”.
    (gdb) continue
    Program received signal: “?”.
    ```

    Does anyone have any idea why this example does not work on Mac OS X? Thank you.

    Read the article

  • Error in creating template class

    - by Luciano
    I found this vector template class implementation, but it doesn't compile in Xcode. Header file:

    ```cpp
    // File: myvector.h
    #ifndef _myvector_h
    #define _myvector_h

    template <typename ElemType>
    class MyVector
    {
    public:
        MyVector();
        ~MyVector();
        int size();
        void add(ElemType s);
        ElemType getAt(int index);

    private:
        ElemType *arr;
        int numUsed, numAllocated;
        void doubleCapacity();
    };

    #include "myvector.cpp"
    #endif
    ```

    Implementation file:

    ```cpp
    // File: myvector.cpp
    #include <iostream>
    #include "myvector.h"

    template <typename ElemType>
    MyVector<ElemType>::MyVector()
    {
        arr = new ElemType[2];
        numAllocated = 2;
        numUsed = 0;
    }

    template <typename ElemType>
    MyVector<ElemType>::~MyVector()
    {
        delete[] arr;
    }

    template <typename ElemType>
    int MyVector<ElemType>::size()
    {
        return numUsed;
    }

    template <typename ElemType>
    ElemType MyVector<ElemType>::getAt(int index)
    {
        if (index < 0 || index >= size())
        {
            std::cerr << "Out of Bounds";
            abort();
        }
        return arr[index];
    }

    template <typename ElemType>
    void MyVector<ElemType>::add(ElemType s)
    {
        if (numUsed == numAllocated)
            doubleCapacity();
        arr[numUsed++] = s;
    }

    template <typename ElemType>
    void MyVector<ElemType>::doubleCapacity()
    {
        ElemType *bigger = new ElemType[numAllocated * 2];
        for (int i = 0; i < numUsed; i++)
            bigger[i] = arr[i];
        delete[] arr;
        arr = bigger;
        numAllocated *= 2;
    }
    ```

    If I try to compile this as-is, I get the following error: "Redefinition of 'MyVector::MyVector()'", and the same error is reported for every member function in the .cpp file. To fix this, I removed the '#include "myvector.h"' from the .cpp file, but now I get a new error: "Expected constructor, destructor, or type conversion before '<' token", and a similar error for every member as well. Interestingly enough, if I move all the .cpp code into the header file, it compiles fine. Does that mean I can't implement template classes in separate files?

    Read the article

  • Apache HttpClient CoreConnectionPNames.CONNECTION_TIMEOUT does nothing?

    - by Maxim Veksler
    Hi, I'm testing some results from HttpClient that look irrational. It seems that setting CoreConnectionPNames.CONNECTION_TIMEOUT = 1 has no effect, because a request sent to a different host returns successfully with a connect timeout of 1, which IMHO can't be the case (1 ms to complete the TCP handshake?). Have I misunderstood something, or is something very strange going on here? The HttpClient version I'm using, as can be seen in this pom.xml, is:

    ```xml
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <version>4.0.1</version>
        <type>jar</type>
    </dependency>
    ```

    Here is the code:

    ```java
    import java.io.IOException;
    import java.util.Random;

    import org.apache.http.HttpEntity;
    import org.apache.http.HttpResponse;
    import org.apache.http.client.ClientProtocolException;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.client.methods.HttpUriRequest;
    import org.apache.http.impl.client.DefaultHttpClient;
    import org.apache.http.params.CoreConnectionPNames;
    import org.apache.http.util.EntityUtils;
    import org.apache.log4j.Logger;

    public class TestNodeAliveness {
        private static Logger log = Logger.getLogger(TestNodeAliveness.class);

        public static boolean nodeBIT(String elasticIP) throws ClientProtocolException, IOException {
            try {
                HttpClient client = new DefaultHttpClient();

                // The time it takes to open a TCP connection.
                client.getParams().setParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, 1);

                // Timeout when the server does not send data.
                client.getParams().setParameter(CoreConnectionPNames.SO_TIMEOUT, 5000);

                // Some tuning that is not required for bit tests.
                client.getParams().setParameter(CoreConnectionPNames.STALE_CONNECTION_CHECK, false);
                client.getParams().setParameter(CoreConnectionPNames.TCP_NODELAY, true);

                HttpUriRequest request = new HttpGet("http://" + elasticIP);
                HttpResponse response = client.execute(request);

                HttpEntity entity = response.getEntity();
                if (entity == null) {
                    return false;
                } else {
                    System.out.println(EntityUtils.toString(entity));
                }

                // Close just in case.
                request.abort();
            } catch (Throwable e) {
                log.warn("BIT Test failed for " + elasticIP);
                e.printStackTrace();
                return false;
            }
            return true;
        }

        public static void main(String[] args) throws ClientProtocolException, IOException {
            nodeBIT("google.com?cant_cache_this=" + (new Random()).nextInt());
        }
    }
    ```

    Thank you.

    Read the article

  • crash in calloc

    - by mmd
    I'm trying to debug a program I wrote. I ran it inside gdb and managed to catch a SIGABRT from inside calloc(). I'm completely confused about how this can arise. Can it be a bug in gcc, or even in libc?

    More details: my program uses OpenMP. I ran it through valgrind in single-threaded mode with no errors. I also use mmap() to load a 40GB file, but I doubt that is relevant. Inside gdb, I'm running with 30 threads. Several identical runs (same input and command line) finished correctly, until the problematic one that I caught. On the surface this suggests there might be a race condition of some type. However, the SIGABRT comes from calloc(), which is out of my control. Here is some relevant gdb output:

    ```
    (gdb) info threads
    [...]
    * 11 Thread 0x7ffff0056700 (LWP 73449) 0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
    [...]
    (gdb) thread 11
    [Switching to thread 11 (Thread 0x7ffff0056700 (LWP 73449))]#0 0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
    (gdb) bt
    #0  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
    #1  0x00007ffff6a96085 in abort () from /lib64/libc.so.6
    #2  0x00007ffff6ad1fe7 in __libc_message () from /lib64/libc.so.6
    #3  0x00007ffff6ad7916 in malloc_printerr () from /lib64/libc.so.6
    #4  0x00007ffff6adb79f in _int_malloc () from /lib64/libc.so.6
    #5  0x00007ffff6adbdd6 in calloc () from /lib64/libc.so.6
    #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
    #7  read_get_hit_list_per_strand (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/mapping.c:1046
    #8  0x000000000041308a in read_get_hit_list (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1239
    #9  handle_read (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1806
    #10 0x0000000000404f35 in launch_scan_threads (.omp_data_i=<value optimized out>) at gmapper/gmapper.c:557
    #11 0x00007ffff7230502 in ?? () from /usr/lib64/libgomp.so.1
    #12 0x00007ffff6dfc851 in start_thread () from /lib64/libpthread.so.0
    #13 0x00007ffff6b4a11d in clone () from /lib64/libc.so.6
    (gdb) f 6
    #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
    286         res = calloc(size, 1);
    (gdb) p size
    $2 = 814080
    (gdb)
    ```

    The function my_calloc() is just a wrapper, but the problem is not in there, as the real calloc() call looks legitimate. These are the limits set in the shell:

    ```
    $ ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 2067285
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 10240
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 1024
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    ```

    The program is not out of memory; it's using 41GB on a machine with 256GB available:

    ```
    $ top -b -n 1 | grep gmapper
    73437 user  20  0 41.5g  16g  15g T  0.0  6.6  55:17.24 gmapper-ls

    $ free -m
                 total       used       free     shared    buffers     cached
    Mem:        258437     195567      62869          0         82     189677
    -/+ buffers/cache:       5807     252629
    Swap:            0          0          0
    ```

    I compiled using gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), with flags -g -O2 -DNDEBUG -mmmx -msse -msse2 -fopenmp -Wall -Wno-deprecated -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS.

    Read the article

  • C# Spell checker Problem

    - by reggie
    I've incorporated spell check into my WinForms C# project. This is my code:

    ```csharp
    public void CheckSpelling()
    {
        try
        {
            // declare local variables to track error count and information
            int SpellingErrors = 0;
            string ErrorCountMessage = string.Empty;

            // create an instance of a word application
            Microsoft.Office.Interop.Word.Application WordApp = new Microsoft.Office.Interop.Word.Application();

            // hide the MS Word document during the spellcheck
            //WordApp.WindowState = WdWindowState.wdWindowStateMinimize;

            // check for zero length content in text area
            if (this.Text.Length > 0)
            {
                WordApp.Visible = false;

                // create an instance of a word document
                _Document WordDoc = WordApp.Documents.Add(ref emptyItem, ref emptyItem, ref emptyItem, ref oFalse);

                // load the content written into the word doc
                WordDoc.Words.First.InsertBefore(this.Text);

                // collect errors from new temporary document set to contain
                // the content of this control
                Microsoft.Office.Interop.Word.ProofreadingErrors docErrors = WordDoc.SpellingErrors;
                SpellingErrors = docErrors.Count;

                // execute spell check; assumes no custom dictionaries
                WordDoc.CheckSpelling(ref oNothing, ref oIgnoreUpperCase, ref oAlwaysSuggest,
                    ref oNothing, ref oNothing, ref oNothing, ref oNothing, ref oNothing,
                    ref oNothing, ref oNothing, ref oNothing, ref oNothing);

                // format a string to contain a report of the errors detected
                ErrorCountMessage = "Spell check complete; errors detected: " + SpellingErrors;

                // return corrected text to control's text area
                object first = 0;
                object last = WordDoc.Characters.Count - 1;
                this.Text = WordDoc.Range(ref first, ref last).Text;
            }
            else
            {
                // if nothing was typed into the control, abort and inform user
                ErrorCountMessage = "Unable to spell check an empty text box.";
            }

            WordApp.Quit(ref oFalse, ref emptyItem, ref emptyItem);
            System.Runtime.InteropServices.Marshal.ReleaseComObject(WordApp);

            // return report on errors corrected
            // - could either display from the control or change this to
            //   return a string which the caller could use as desired.
            // MessageBox.Show(ErrorCountMessage, "Finished Spelling Check");
        }
        catch (Exception e)
        {
            MessageBox.Show(e.ToString());
        }
    }
    ```

    The spell checker works well. The only problem is that when I try to move the spell-check dialog, the main form blurs for some reason, and when I close the spell checker the main form goes back to normal. It seems like the code is opening Microsoft Word and then hiding its window, leaving only the spell-check dialog visible. Please help.
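    The smeared main form is consistent with the hidden Word instance holding activation while its modal spelling dialog is up, so the WinForms window is never told to repaint. A hedged tweak, assuming the same Word interop as above, is to suppress Word's screen updating in addition to its visibility and hand focus back to the form once the check completes; whether this fully cures the repaint depends on the Word version, so treat it as a sketch:

    ```csharp
    object missing = System.Reflection.Missing.Value;
    var wordApp = new Microsoft.Office.Interop.Word.Application();
    wordApp.Visible = false;          // keep the Word window hidden for the whole session
    wordApp.ScreenUpdating = false;   // suppress Word repaints while it is driven via COM
    try
    {
        // ... existing temporary-document and CheckSpelling logic ...
    }
    finally
    {
        object saveChanges = false;
        wordApp.Quit(ref saveChanges, ref missing, ref missing);
        System.Runtime.InteropServices.Marshal.ReleaseComObject(wordApp);
        this.Activate();              // return activation to the WinForms window
        this.Refresh();               // force an immediate repaint of the form
    }
    ```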

    Read the article

  • Paperclip: delete attachment and "can't convert nil into String" error

    - by snitko
    I'm using Paperclip, and here's what I do in the model to delete attachments:

    ```ruby
    def before_save
      self.avatar = nil if @delete_avatar == 1.to_s
    end
    ```

    This works fine unless the @delete_avatar flag is set while the user is actually uploading an image (so the model receives both params[:user][:avatar] and params[:user][:delete_avatar]). That results in the following error:

    ```
    TypeError: can't convert nil into String
    from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:40:in `dirname'
    from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:40:in `flush_writes'
    from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:38:in `each'
    from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:38:in `flush_writes'
    from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/attachment.rb:144:in `save'
    from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/attachment.rb:162:in `destroy'
    from /Work/project/src/app/models/user.rb:72:in `before_save'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/callbacks.rb:347:in `send'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/callbacks.rb:347:in `callback'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/callbacks.rb:249:in `create_or_update'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2538:in `save_without_validation'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/validations.rb:1078:in `save_without_dirty'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/dirty.rb:79:in `save_without_transactions'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:229:in `send'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:229:in `with_transaction_returning_status'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:182:in `transaction'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:228:in `with_transaction_returning_status'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:196:in `save'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:196:in `save'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:723:in `create'
    ```

    I assume it has something to do with the avatar.dirty? value, because it is certainly true when this happens. The question is: how do I completely reset the attachment when there are changes to be saved, and abort the avatar upload when the flag is set?

    Read the article

  • Should Application_End fire on an automatic App Pool Recycle?

    - by Laramie
    I have read this, this, this and this, plus a dozen other posts/blogs. I have an ASP.NET app in shared hosting that is frequently recycling. We use NLog and have the following code in global.asax:

    ```csharp
    void Application_Start(object sender, EventArgs e)
    {
        NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
        logger.Debug("\r\n\r\nAPPLICATION STARTING\r\n\r\n");
    }

    protected void Application_OnEnd(Object sender, EventArgs e)
    {
        NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
        logger.Debug("\r\n\r\nAPPLICATION_OnEnd\r\n\r\n");
    }

    void Application_End(object sender, EventArgs e)
    {
        HttpRuntime runtime = (HttpRuntime)typeof(System.Web.HttpRuntime).InvokeMember(
            "_theRuntime",
            BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.GetField,
            null, null, null);
        if (runtime == null)
            return;

        string shutDownMessage = (string)runtime.GetType().InvokeMember(
            "_shutDownMessage",
            BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField,
            null, runtime, null);
        string shutDownStack = (string)runtime.GetType().InvokeMember(
            "_shutDownStack",
            BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField,
            null, runtime, null);
        ApplicationShutdownReason shutdownReason = System.Web.Hosting.HostingEnvironment.ShutdownReason;

        NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
        logger.Debug(String.Format(
            "\r\n\r\nAPPLICATION END\r\n\r\n_shutDownReason = {2}\r\n\r\n _shutDownMessage = {0}\r\n\r\n_shutDownStack = {1}\r\n\r\n",
            shutDownMessage, shutDownStack, shutdownReason));
    }

    void Application_Error(object sender, EventArgs e)
    {
        NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
        logger.Debug("\r\n\r\nApplication_Error\r\n\r\n");
    }
    ```

    Our log file is littered with "APPLICATION STARTING" entries, but neither Application_OnEnd, Application_End, nor Application_Error is ever fired during these spontaneous restarts. I know the handlers work, because there are entries for touching web.config or the /bin files. We also ran a memory overload test and can trigger an OutOfMemoryException, which is caught in Application_Error. We are trying to determine whether the virtual-memory limit is causing the recycling. We have added GC.GetTotalMemory(false) throughout the code, but this measures all of .NET, not just our app's pool, correct? We've also tried:

    ```csharp
    var oPerfCounter = new PerformanceCounter();
    oPerfCounter.CategoryName = "Process";
    oPerfCounter.CounterName = "Virtual Bytes";
    oPerfCounter.InstanceName = "iisExpress";
    logger.Debug("Virtual Bytes: " + oPerfCounter.RawValue + " bytes");
    ```

    but we don't have permission to read performance counters in shared hosting. I've monitored the app on a dev server, with ANTS Memory Profiler attached, using the same requests that caused the recycles in production, and can't seem to find a culprit. We have also run it with a debugger attached in dev to check for uncaught exceptions in spawned threads that might cause the app to abort. My questions are these: How can I effectively monitor memory usage in shared hosting, to tell how much my application is consuming prior to a recycle? Why are the Application_[End/OnEnd/Error] handlers in global.asax not being called? How else can I determine what is causing these recycles? Thanks.
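    For what it's worth, Application_End only runs on an orderly in-process shutdown; a rude recycle (for example the host killing the worker for exceeding a memory quota) can unload the AppDomain without ever raising it. One way to get a callback on most unloads is to register an IRegisteredObject with the hosting environment from Application_Start. This is a minimal, hedged sketch of that hook, not a guaranteed fix for shared-hosting kills:

    ```csharp
    using System.Web.Hosting;

    // Register once from Application_Start: ShutdownLogger.Hook();
    public class ShutdownLogger : IRegisteredObject
    {
        public static void Hook()
        {
            HostingEnvironment.RegisterObject(new ShutdownLogger());
        }

        // ASP.NET calls Stop when it tears the application down.
        // immediate == false is the polite first pass; true is the final call.
        public void Stop(bool immediate)
        {
            NLog.LogManager.GetCurrentClassLogger().Debug(
                "Host Stop(immediate=" + immediate + "), reason=" + HostingEnvironment.ShutdownReason);
            HostingEnvironment.UnregisterObject(this);
        }
    }
    ```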

    Read the article

  • Synchronizing thread communication?

    - by Roger Alsing
    Just for the heck of it, I'm trying to emulate how JRuby generators work, using threads in C#. I'm fully aware that C# has built-in support for yield return; I'm just toying around a bit. I guess it's some sort of poor man's coroutines, keeping multiple call stacks alive using threads (even though none of the call stacks should execute at the same time). The idea is like this: the consumer thread requests a value, the worker thread provides a value and yields back to the consumer thread, and this repeats until the worker thread is done. So, what would be the correct way of doing the following?

    ```csharp
    // example
    class Program
    {
        static void Main(string[] args)
        {
            ThreadedEnumerator<string> enumerator = new ThreadedEnumerator<string>();
            enumerator.Init(() =>
            {
                for (int i = 1; i < 100; i++)
                {
                    enumerator.Yield(i.ToString());
                }
            });

            foreach (var item in enumerator)
            {
                Console.WriteLine(item);
            };

            Console.ReadLine();
        }
    }

    // naive threaded enumerator
    public class ThreadedEnumerator<T> : IEnumerator<T>, IEnumerable<T>
    {
        private Thread enumeratorThread;
        private T current;
        private bool hasMore = true;
        private bool isStarted = false;
        AutoResetEvent enumeratorEvent = new AutoResetEvent(false);
        AutoResetEvent consumerEvent = new AutoResetEvent(false);

        public void Yield(T item)
        {
            // wait for consumer to request a value
            consumerEvent.WaitOne();

            // assign the value
            current = item;

            // signal that we have yielded the requested value
            enumeratorEvent.Set();
        }

        public void Init(Action userAction)
        {
            Action WrappedAction = () =>
            {
                userAction();
                consumerEvent.WaitOne();
                enumeratorEvent.Set();
                hasMore = false;
            };

            ThreadStart ts = new ThreadStart(WrappedAction);
            enumeratorThread = new Thread(ts);
            enumeratorThread.IsBackground = true;
            isStarted = false;
        }

        public T Current
        {
            get { return current; }
        }

        public void Dispose()
        {
            enumeratorThread.Abort();
        }

        object System.Collections.IEnumerator.Current
        {
            get { return Current; }
        }

        public bool MoveNext()
        {
            if (!isStarted)
            {
                isStarted = true;
                enumeratorThread.Start();
            }

            // signal that we are ready to receive a value
            consumerEvent.Set();

            // wait for the enumerator to yield
            enumeratorEvent.WaitOne();

            return hasMore;
        }

        public void Reset()
        {
            throw new NotImplementedException();
        }

        public IEnumerator<T> GetEnumerator()
        {
            return this;
        }

        System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
        {
            return this;
        }
    }
    ```

    Ideas?
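    One race to watch in the sketch above: the wrapper sets hasMore = false only after the final enumeratorEvent.Set(), so the consumer can wake, read a stale hasMore == true and report one extra element (and hasMore is not volatile). If the goal is just a thread-backed one-value-at-a-time hand-off, .NET 4's BlockingCollection already packages the same pattern; this is a minimal sketch of the equivalent behaviour, not a drop-in replacement for the enumerator class:

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    class GeneratorDemo
    {
        static void Main()
        {
            // Bounded capacity 1 keeps producer and consumer in lock-step,
            // mimicking the one-value-at-a-time hand-off above.
            using (var items = new BlockingCollection<string>(1))
            {
                var producer = new Thread(() =>
                {
                    for (int i = 1; i < 100; i++) items.Add(i.ToString());
                    items.CompleteAdding(); // replaces the racy hasMore flag
                }) { IsBackground = true };
                producer.Start();

                // GetConsumingEnumerable ends cleanly once CompleteAdding is called.
                foreach (var item in items.GetConsumingEnumerable())
                    Console.WriteLine(item);
            }
        }
    }
    ```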

    Read the article

  • Running a network application on a port on the server side gives me an error; how can I solve it?

    - by Phsika
    If I run the server app, an exception occurs on Dinle.Start(): System.Net.SocketException - "Only one usage of each socket address (protocol/network address/port) is normally permitted". How can I solve this error? Server.cs:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Windows.Forms;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    namespace Server
    {
        public partial class Server : Form
        {
            Thread kanal;

            public Server()
            {
                InitializeComponent();
                try
                {
                    kanal = new Thread(new ThreadStart(Dinle));
                    kanal.Start();
                    kanal.Priority = ThreadPriority.Normal;
                    this.Text = "Kanla Çalisti";
                }
                catch (Exception ex)
                {
                    this.Text = "kanal çalismadi";
                    MessageBox.Show("hata:" + ex.ToString());
                    kanal.Abort();
                    throw;
                }
            }

            private void Server_Load(object sender, EventArgs e)
            {
                Dinle();
            }

            private void btn_Listen_Click(object sender, EventArgs e)
            {
                Dinle();
            }

            void Dinle()
            {
                // IPAddress localAddr = IPAddress.Parse("localhost");
                // TcpListener server = new TcpListener(port);
                // server = new TcpListener(localAddr, port);
                // TcpListener Dinle = new TcpListener(localAddr, 51124);
                TcpListener Dinle = new TcpListener(51124);
                try
                {
                    while (true)
                    {
                        Dinle.Start(); // <-- the exception occurs here
                        Socket Baglanti = Dinle.AcceptSocket();
                        if (!Baglanti.Connected)
                        {
                            MessageBox.Show("Baglanti Yok");
                        }
                        else
                        {
                            TcpClient tcpClient = Dinle.AcceptTcpClient();
                            if (tcpClient.ReceiveBufferSize > 0)
                            {
                                byte[] Dizi = new byte[250000];
                                Baglanti.Receive(Dizi, Dizi.Length, 0);
                                string Yol;
                                saveFileDialog1.Title = "Dosyayi kaydet";
                                saveFileDialog1.ShowDialog();
                                Yol = saveFileDialog1.FileName;
                                FileStream Dosya = new FileStream(Yol, FileMode.Create);
                                Dosya.Write(Dizi, 0, Dizi.Length - 20);
                                Dosya.Close();
                                listBox1.Items.Add("dosya indirildi");
                                listBox1.Items.Add("Dosya Boyutu=" + Dizi.Length.ToString());
                                listBox1.Items.Add("Indirilme Tarihi=" + DateTime.Now);
                                listBox1.Items.Add("--------------------------------");
                            }
                        }
                    }
                }
                catch (Exception ex)
                {
                    MessageBox.Show("hata:" + ex.ToString());
                }
            }
        }
    }
    ```
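    The bind error follows from where Dinle.Start() sits: it is inside the while loop, and Dinle() is also invoked from the constructor's thread, from Server_Load and from the button handler, so port 51124 gets bound repeatedly. Binding once, looping only on accept, and letting only the background thread run the loop avoids the error. A minimal sketch, where "received.bin" stands in for the SaveFileDialog path and the file-receiving details are reduced:

    ```csharp
    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    void Dinle()
    {
        TcpListener listener = new TcpListener(IPAddress.Any, 51124);
        listener.Start(); // bind the port exactly once
        try
        {
            while (true)
            {
                // One accept per connection; the original code accepted twice
                // (AcceptSocket then AcceptTcpClient), which consumes two clients.
                using (TcpClient client = listener.AcceptTcpClient())
                using (NetworkStream stream = client.GetStream())
                using (FileStream file = new FileStream("received.bin", FileMode.Create))
                {
                    byte[] buffer = new byte[8192];
                    int read;
                    // Read until the sender closes; avoids the fixed 250000-byte buffer.
                    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        file.Write(buffer, 0, read);
                    }
                }
            }
        }
        finally
        {
            listener.Stop(); // release the port when listening ends
        }
    }
    ```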

    Read the article

  • jquery ui autocomplete doesn't close options menu if there is no focus when ajax returns

    - by Uri
    I'm using the jQuery UI autocomplete widget with ajax, and I noticed the following problem. Background: in order for the user to be able to focus the autocomplete and get the options without typing anything, I use the focus event:

    ```js
    autoComp.focus(function() { $(this).autocomplete("search", ""); });
    ```

    However, this produces the following effect: when the user clicks, an ajax request is sent. While waiting for the response, the user then clicks elsewhere and the autocomplete is blurred. But as soon as the response returns, the options menu pops up, even though the autocomplete has no focus. In order to make it go away, the user has to click once inside and again outside the autocomplete, which is a bit annoying. Any ideas how I can prevent this?

    EDIT: I solved this in a very ugly way, by building another mediator function that knows the element's ID and calls the ajax function with that ID; on success it checks the focus of the element and returns null if it's not focused. It's pretty ugly and I'm still looking for alternatives.

    EDIT #2: I tried to do as William suggested, but it still doesn't work: the xhr is undefined when blurring. It's some kind of problem with the this keyword; maybe it has a different meaning if I write the getTags function outside of the autocomplete?

    ```js
    this.autocomplete = $('.tab#'+this.id+' #tags').autocomplete({
        minLength: 0,
        autoFocus: true,
        source: getTags,
        select: function(e, obj) {
            tab_id = $(this).parents('.tab').attr('id');
            tabs[tab_id].addTag(obj.item.label, obj.item.id, false);
            $(this).blur(); // This is done so that the options menu won't pop up again.
            return false;   // This is done so that the value will not stay in the input box after selection.
        },
        open: function() {},
        close: function() {}
    });

    $('.tab#'+this.id+' #tags').focus(function() {
        $(this).autocomplete("search", "");
    });

    $('.tab#'+this.id+' #tags').blur(function() {
        console.log('blurring');
        var xhr = $(this).data('xhr'); // This comes out undefined... :(
        if (xhr) {
            xhr.abort();
        };
        $(this).removeClass('ui-autocomplete-loading');
    });
    ```

    and this is the getTags function passed to the source option:

    ```js
    function getTags(request, response) {
        console.log('Getting tags.');
        $(this).data('xhr', $.ajax({
            url: '/rpc',
            dataType: 'json',
            data: {
                action: 'GetLabels',
                arg0: JSON.stringify(request.term)
            },
            success: function(data) {
                console.log('Tags arrived:');
                tags = [];
                for (i in data) {
                    a = {}
                    a.id = data[i]['key'];
                    a.label = data[i]['name'];
                    tags.push(a);
                }
                response(tags);
            }
        }));
        console.log($(this).data('xhr'));
    }
    ```

    Read the article

  • How to solve a Python memory leak when using urllib2?

    - by b_m
    Hi, I'm trying to write a simple Python script for my mobile phone that periodically loads a web page using urllib2. In fact I don't really care about the server response; I'd only like to pass some values in the URL to the PHP. The problem is that Python for S60 uses the old 2.5.4 Python core, which seems to have a memory leak in the urllib2 module. As I read, there seem to be such problems in every type of network communication as well. This bug was reported a couple of years ago, and some workarounds were posted as well. I've tried everything I could find on that page, and with the help of Google, but my phone still runs out of memory after ~70 page loads. Strangely, the garbage collector does not seem to make any difference either, except making my script much slower. It is said that the newer (3.1) core solves this issue, but unfortunately I can't wait a year (or more) for the S60 port to come. Here's how my script looks after adding every little trick I've found:

    ```python
    import urllib2, httplib, gc

    while True:
        url = "http://something.com/foo.php?parameter=" + value
        f = urllib2.urlopen(url)
        f.read(1)
        f.fp._sock.recv = None  # hacky avoidance
        f.close()
        del f
        gc.collect()
    ```

    Any suggestions how to make it work forever without getting the "cannot allocate memory" error? Thanks in advance. Cheers, b_m

    UPDATE: I've managed to connect 92 times before it ran out of memory, but it's still not good enough.

    UPDATE 2: I tried the socket method as suggested earlier; this is the second-best (wrong) solution so far:

    ```python
    class UpdateSocketThread(threading.Thread):
        def run(self):
            global data
            while 1:
                url = "/foo.php?parameter=%d" % data
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.connect(('something.com', 80))
                s.send('GET ' + url + ' HTTP/1.0\r\n\r\n')
                s.close()
                sleep(1)
    ```

    I tried the little tricks from above too. The thread closes after ~50 uploads (the phone has 50 MB of memory left; obviously the Python shell does not).

    UPDATE: I think I'm getting closer to the solution! I tried sending multiple requests without closing and reopening the socket. This may be the key, since this method leaves only one open file descriptor. The problem is:

    ```python
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("something.com", 80))
    s.send("test")  # returns 4 (sent bytes, which is cool)
    s.send("test")  # 4
    s.send("test")  # 4
    s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n")  # returns the number of sent bytes, OK
    s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n")  # returns 0 on the phone, error on Windows 7*
    s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n")  # returns 0 on the phone, error on Windows 7*
    s.send("test")  # returns 0, strange...
    ```

    *: error message: 10053, software caused connection abort. Why can't I send multiple messages?

    Read the article

  • SDL Video Init causes Exception on Mac OS X 10.8

    - by ScrollerBlaster
    I have just ported my C++ game to OS X, and the first time it ran I got the following exception when calling SDL_SetVideoMode:

    ```
    2012-09-28 15:01:05.437 SCRAsteroids[28595:707] * Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Error (1000) creating CGSWindow on line 259'
    * First throw call stack:
    (
    0  CoreFoundation      0x00007fff8b53b716 __exceptionPreprocess + 198
    1  libobjc.A.dylib     0x00007fff90e30470 objc_exception_throw + 43
    2  CoreFoundation      0x00007fff8b53b4ec +[NSException raise:format:] + 204
    3  AppKit              0x00007fff8a26a579 _NSCreateWindowWithOpaqueShape2 + 655
    4  AppKit              0x00007fff8a268d70 -[NSWindow _commonAwake] + 2002
    5  AppKit              0x00007fff8a2277e2 -[NSWindow _commonInitFrame:styleMask:backing:defer:] + 1763
    6  AppKit              0x00007fff8a22692f -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1568
    7  AppKit              0x00007fff8a2262ff -[NSWindow initWithContentRect:styleMask:backing:defer:] + 45
    8  libSDL-1.2.0.dylib  0x0000000107c228f6 -[SDL_QuartzWindow initWithContentRect:styleMask:backing:defer:] + 294
    9  libSDL-1.2.0.dylib  0x0000000107c20505 QZ_SetVideoMode + 2837
    10 libSDL-1.2.0.dylib  0x0000000107c17af5 SDL_SetVideoMode + 917
    11 SCRAsteroids        0x0000000107be60fb _ZN11SDLGraphics4initEP6IWorldii + 291
    )
    libc++abi.dylib: terminate called throwing an exception
    Abort trap: 6
    ```

    My init code looks like this:

    ```cpp
    if (SDL_Init(SDL_INIT_EVERYTHING) < 0)
        return false;

    const SDL_VideoInfo *videoInfo = SDL_GetVideoInfo();
    if (!videoInfo) {
        fprintf(stderr, "Video query failed: %s\n", SDL_GetError());
        return false;
    }

    /* the flags to pass to SDL_SetVideoMode */
    videoFlags = SDL_OPENGL;           /* Enable OpenGL in SDL */
    videoFlags |= SDL_GL_DOUBLEBUFFER; /* Enable double buffering */
    videoFlags |= SDL_HWPALETTE;       /* Store the palette in hardware */

    /* This checks to see if surfaces can be stored in memory */
    if (videoInfo->hw_available)
        videoFlags |= SDL_HWSURFACE;
    else
        videoFlags |= SDL_SWSURFACE;

    if (w == 0) {
        widthViewport = videoInfo->current_w;
        heightViewport = videoInfo->current_h;
        cout << "Will use full screen resolution of ";
        videoFlags |= SDL_FULLSCREEN;
    } else {
        cout << "Will use full user supplied resolution of ";
        widthViewport = w;
        heightViewport = h;
        videoFlags |= SDL_RESIZABLE; /* Enable window resizing */
    }
    cout << widthViewport << "x" << heightViewport << "\n";

    /* This checks if hardware blits can be done */
    if (videoInfo->blit_hw)
        videoFlags |= SDL_HWACCEL;

    /* Sets up OpenGL double buffering */
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    /* get a SDL surface */
    surface = SDL_SetVideoMode(widthViewport, heightViewport, SCREEN_BPP, videoFlags);
    ```

    It gets into that last SDL call and throws the exception above. I have tried it in both full-screen and resizable-window mode, same thing. I build my app old school, on the command line, as opposed to using Xcode.

    Read the article

  • Reliable session faulting for unknown reason

    - by Scarfman007
    I am trying to achieve the following: one client-side proxy instance (kept open) accessed by multiple threads, using a reliable session. What I have managed so far is either (A) a reliable session with a client-side proxy that is created and disposed per call, or (B) what I am aiming for, but without a reliable session. When I enable reliable sessions on my binding, however, the following behaviour is exhibited.

    Client side: upon application startup everything appears to work fine until roughly 18 messages into the WCF session. First the proxy.InnerChannel.Faulted event is raised, then an exception is caught at the point where I am calling the method on the proxy. The exception is a System.TimeoutException with the message: "The request channel timed out while waiting for a reply after 00:00:59.9062512. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout." The inner exception has a similar message: "The request operation did not complete within the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout." The method at the top of the inner stack trace is: System.ServiceModel.Channels.ReliableRequestSessionChannel.SyncRequest.WaitForReply(TimeSpan timeout). I then call proxy.Close followed by proxy.Abort (catching and ignoring exceptions). If I use the default settings (i.e. simply <reliableSession/>), calling proxy.Close results in another System.TimeoutException (although this time the allotted timeout is 00:00:00); if I override the defaults as specified above, no exception is thrown.

    Service side: using WCF tracing I get a System.ServiceModel.CommunicationException with the message: "The sequence has been terminated by the remote endpoint. The session has stopped waiting for a particular reply. Because of this the reliable session cannot continue. The reliable session was faulted." and a stack trace ending at: System.ServiceModel.AsyncResult.End[TAsyncResult](IAsyncResult result). When remotely attaching to the server I get the same message, which occurs as execution steps over the return statement of the service method that triggers the error.

    The puzzling thing to me is that the service is stable and runs with option A or B as described at the beginning of my post, and the fault occurs after a varying number of messages (around 18). The former fact points to there being nothing wrong with the code (indeed I have checked that no exceptions are thrown), and the latter just serves to confuse me, and is why I modified the settings on the reliable-session binding. I am quite stuck on this. Can anyone suggest why the reliable session would fault in such a way?
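    For anyone experimenting with this, the knobs that govern reliable-session faulting live on the binding. The values below are a hedged starting point for diagnosis (lengthening the acknowledgement and inactivity windows), not a known fix for this particular fault; NetTcpBinding is an assumption since the post does not name its binding, and WSHttpBinding exposes the same ReliableSession properties:

    ```csharp
    using System;
    using System.ServiceModel;

    // Code equivalent of a tuned <reliableSession/> element.
    NetTcpBinding binding = new NetTcpBinding(SecurityMode.None);
    binding.ReliableSession.Enabled = true;
    binding.ReliableSession.Ordered = true;
    binding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(20);
    binding.SendTimeout = TimeSpan.FromMinutes(2);     // the trace above shows a 00:01:00 request timeout
    binding.ReceiveTimeout = TimeSpan.FromMinutes(20);
    ```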

    Read the article

  • How to stream semi-live audio over internet

    - by Thomas Tempelmann
    I want to write something like Skype: I have a constant audio stream on one computer and want to recompress it in a format that's suitable for a latent internet connection, receive it on the other end and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or similar, with no slow connections over phone lines and such. The computers involved will also be rather modern (dual-core Intel CPUs at 2 GHz or more). I know how to handle the audio on the machines; what I don't know is how to transmit the audio in an efficient way. The challenges are:

    1. I'd like to get good audio quality across the line.
    2. The stream should be received without drops.
    3. The stream may, however, be received with a little delay (a one-second delay is acceptable). I imagine the transport software could first determine the average (and maximum) latency, then start the stream and tell the receiver to wait for that maximum latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops.
    4. If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take action (e.g. abort the stream) and eventually start a new transmission.

    What are my options if I want to use ready-made software for the compression and transmission? I have no intention of writing my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not hundreds.

    I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), use the results as the guide for my maximum-latency value, then simply fire the audio data in its raw form (uncompressed 16-bit stereo), along with a timing code, over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed). BTW, I first want to try to implement this on OS X, but might want to do it on Windows too, if it proves successful.
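    The probe-then-delay idea in the last paragraph can be sketched concretely. This is a minimal, hedged example of the measurement step only: the echo port on the peer is hypothetical, the peer is assumed to echo each datagram back, and C# stands in for whatever the final implementation language would be:

    ```csharp
    using System;
    using System.Diagnostics;
    using System.Net;
    using System.Net.Sockets;

    // Rough RTT probe; the worst case over a few samples becomes the
    // initial playback delay on the receiving side.
    static TimeSpan MeasureWorstRtt(string host, int port, int samples = 10)
    {
        TimeSpan worst = TimeSpan.Zero;
        using (var client = new UdpClient(host, port))
        {
            byte[] ping = { 1 };
            for (int i = 0; i < samples; i++)
            {
                Stopwatch sw = Stopwatch.StartNew();
                client.Send(ping, ping.Length);
                var remote = new IPEndPoint(IPAddress.Any, 0);
                client.Receive(ref remote); // assumes the peer echoes the datagram
                sw.Stop();
                if (sw.Elapsed > worst) worst = sw.Elapsed;
            }
        }
        return worst + TimeSpan.FromMilliseconds(50); // small safety margin for jitter
    }
    ```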

    Read the article

  • TimeOuts with HttpWebRequest when running Selenium concurrently in .NET

    - by domsom
    I have a download worker that uses ThreadPool threads to download files. After extending it to apply some Selenium tests to the downloaded files, I am constantly getting TimeoutExceptions from the file downloaders and delays running the Selenium tests. More precisely: when the program starts, the download threads start downloading and a couple of pages are seamlessly processed via Selenium. Shortly after, the first download threads start throwing timeout exceptions from HttpWebRequest. At the same time, commands stop flowing to Selenium (as observed in the SeleniumRC log), but the thread running Selenium does not get any exception. This situation holds as long as there are entries in the download list: new download threads are started and terminate after receiving timeouts (without trying to take the Selenium lock). As soon as no more download threads are being started, Selenium starts receiving commands again and the threads waiting for the lock are processed sequentially, as designed. Here is the download code:

    ```csharp
    HttpWebRequest request = null;
    WebResponse response = null;
    Stream stream = null;
    StreamReader sr = null;
    try
    {
        request = (HttpWebRequest)WebRequest.Create(uri);
        request.ServicePoint.ConnectionLimit = MAX_CONNECTIONS_PER_HOST;
        response = request.GetResponse();
        stream = response.GetResponseStream();
        // Read the stream...
    }
    finally
    {
        if (request != null)
            request.Abort();
        if (response != null)
            response.Close();
        if (stream != null)
        {
            stream.Close();
            stream.Dispose();
        }
        if (sr != null)
        {
            sr.Close();
            sr.Dispose();
        }
    }
    ```

    And this is how Selenium is used afterwards in the same thread:

    ```csharp
    lock (SeleniumLock)
    {
        selenium.Open(url);
        // Run some Selenium commands, but no selenium.stop()
    }
    ```

    where selenium is a static variable that is initialized in the static constructor of the class (via selenium.start()). I assume I am running into the CLR connection limit, so I added these lines during initialization:

    ```csharp
    ThreadPool.GetMaxThreads(out maxWorkerThreads, out maxCompletionPortThreads);
    HttpUtility.MAX_CONNECTIONS_PER_HOST = maxWorkerThreads;
    System.Net.ServicePointManager.DefaultConnectionLimit = maxWorkerThreads + 1;
    ```

    The + 1 is for the connection to the SeleniumRC, due to my guess that the Selenium client code also uses HttpWebRequest. It seems like I'm still running into some kind of deadlock, although the threads waiting for the Selenium lock do not hold any resources. Any ideas on how to get this working?
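    One pattern worth ruling out first: a response whose body is never fully read or disposed on some code path keeps its pooled connection reserved until finalization, and connection-pool exhaustion produces exactly this shape of failure (the first batch succeeds, then everything times out while unrelated work continues). A hedged reworking of the download with using blocks, so the cleanup is unconditional and in the right order, looks like this:

    ```csharp
    using System.IO;
    using System.Net;

    static string Download(Uri uri, int maxConnectionsPerHost)
    {
        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.ServicePoint.ConnectionLimit = maxConnectionsPerHost;
        // Dispose the response, stream and reader even when reading throws,
        // so the pooled connection is returned to the ServicePoint immediately.
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }
    ```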

    Read the article

  • are C functions declared in <c____> headers guaranteed to be in the global namespace as well as std?

    - by Evan Teran
    So this is something that I've always wondered but was never quite sure about, so it is strictly a matter of curiosity, not a real problem. As far as I understand, when you do something like

    ```cpp
    #include <cstdlib>
    ```

    everything (except macros, of course) is declared in the std:: namespace. Every implementation that I've ever seen does this by doing something like the following:

    ```cpp
    #include <stdlib.h>

    namespace std {
        using ::abort;
        // etc....
    }
    ```

    which of course has the effect of putting things in both the global namespace and std. Is this behavior guaranteed? Or is it possible that an implementation could put these things in std but not in the global namespace? The only way I can think of to do that would be to have your libstdc++ implement every C function itself, placing them in std directly instead of just including the existing libc headers (because there is no mechanism to remove something from a namespace), which is of course a lot of effort with little to no benefit. The essence of my question is: is the following program strictly conforming and guaranteed to work?

    ```cpp
    #include <cstdio>

    int main()
    {
        ::printf("hello world\n");
    }
    ```

    EDIT: The closest I've found is this (17.4.1.2p4): "Except as noted in clauses 18 through 27, the contents of each header cname shall be the same as that of the corresponding header name.h, as specified in ISO/IEC 9899:1990 Programming Languages C (Clause 7), or ISO/IEC:1990 Programming Languages—C AMENDMENT 1: C Integrity, (Clause 7), as appropriate, as if by inclusion. In the C++ Standard Library, however, the declarations and definitions (except for names which are defined as macros in C) are within namespace scope (3.3.5) of the namespace std." To be honest, I could interpret that either way. "The contents of each header cname shall be the same as that of the corresponding header name.h, as specified in ISO/IEC 9899:1990 Programming Languages C" tells me that they may be required in the global namespace, but "In the C++ Standard Library, however, the declarations and definitions (except for names which are defined as macros in C) are within namespace scope (3.3.5) of the namespace std" says they are in std (but doesn't specify any other scope they are in).

    Read the article

  • Comet with multiple channels

    - by mark_dj
    Hello, I am writing a web app which needs to subscribe to multiple channels via JavaScript. I am using Atmosphere and Jersey as the backend. However, the jQuery plugin they work with only supports one channel, so I've started building my own implementation. It works okay, but when I try to subscribe to two channels, only one channel gets notified. Is the XMLHttpRequest blocking the rest of the XMLHttpRequests? Here's my code:

    ```js
    function AtmosphereComet(url) {
        this.Connected = new signals.Signal();
        this.Disconnected = new signals.Signal();
        this.NewMessage = new signals.Signal();

        var xhr = null;
        var self = this;
        var gotWelcomeMessage = false;
        var readPosition;
        var url = url;

        var onIncomingXhr = function() {
            if (xhr.readyState == 3) {
                if (xhr.status == 200) // Received a message
                {
                    var message = xhr.responseText;
                    console.log(message);
                    if (!gotWelcomeMessage && message.indexOf("") -1) {
                        gotWelcomeMessage = true;
                        self.Connected.dispatch(sprintf("Connected to %s", url));
                    } else {
                        self.NewMessage.dispatch(message.substr(readPosition));
                    }
                    readPosition = this.responseText.length;
                }
            } else if (this.readyState == 4) {
                self.disconnect();
            }
        }

        var getXhr = function() {
            if (window.location.protocol !== "file:") {
                try { return new window.XMLHttpRequest(); } catch(xhrError) {}
            }
            try { return new window.ActiveXObject("Microsoft.XMLHTTP"); } catch(activeError) {}
        }

        this.connect = function() {
            xhr = getXhr();
            xhr.onreadystatechange = onIncomingXhr;
            xhr.open("GET", url, true);
            xhr.send(null);
        }

        this.disconnect = function() {
            xhr.onreadystatechange = null;
            xhr.abort();
        }

        this.send = function(message) {
        }
    }
    ```

    And the test code:

    ```js
    var connection1 = AtmosphereConnection("http://192.168.1.145:9999/botenveiling/b/product/status/2", AtmosphereComet);
    var connection2 = AtmosphereConnection("http://192.168.1.145:9999/botenveiling/b/product/status/1", AtmosphereComet);
    var output = function(msg) { alert(output); };
    connection1.NewMessage.add(output);
    connection2.NewMessage.add(output);
    connection1.connect();
    ```

    In AtmosphereConnection I instantiate the given AtmosphereComet with "new" and iterate over the object to check that it has the methods "send", "connect" and "disconnect". The reason for this is that I can switch the implementation later on, when I complete the websocket implementation :) However, I think the problem rests with the XmlHttpRequest object, or am I mistaken? P.S.: signals.Signal is a JS observer/notifier library: http://millermedeiros.github.com/js-signals/ Tested on: Firefox 3.6.1.3

    Read the article

  • Why boost::recursive_mutex is not working as expected?

    - by Kjir
    I have a custom class that uses boost mutexes and locks like this (only relevant parts):

        template<class T>
        class FFTBuf
        {
        public:
            FFTBuf();
            [...]
            void lock();
            void unlock();

        private:
            T *_dst;
            int _siglen;
            int _processed_sums;
            int _expected_sums;
            int _assigned_sources;
            bool _written;
            boost::recursive_mutex _mut;
            boost::unique_lock<boost::recursive_mutex> _lock;
        };

        template<class T>
        FFTBuf<T>::FFTBuf() : _dst(NULL), _siglen(0),
            _expected_sums(1), _processed_sums(0),
            _assigned_sources(0), _written(false),
            _lock(_mut, boost::defer_lock_t())
        {
        }

        template<class T>
        void FFTBuf<T>::lock()
        {
            std::cerr << "Locking" << std::endl;
            _lock.lock();
            std::cerr << "Locked" << std::endl;
        }

        template<class T>
        void FFTBuf<T>::unlock()
        {
            std::cerr << "Unlocking" << std::endl;
            _lock.unlock();
        }

    If I try to lock the object more than once from the same thread, I get an exception (lock_error):

        #include "fft_buf.hpp"

        int main( void ) {
            FFTBuf<int> b( 256 );
            b.lock();
            b.lock();
            b.unlock();
            b.unlock();
            return 0;
        }

    This is the output:

        sb@dex $ ./src/test
        Locking
        Locked
        Locking
        terminate called after throwing an instance of 'boost::lock_error'
          what():  boost::lock_error
        zsh: abort     ./src/test

    Why is this happening? Am I understanding some concept incorrectly?
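    For what it's worth, the failure separates cleanly from recursion support once the mutex and the lock object are exercised on their own. A minimal sketch (assuming Boost.Thread is available; std::exception is caught because the exact exception type has varied across Boost versions):

        #include <boost/thread/recursive_mutex.hpp>
        #include <boost/thread/locks.hpp>
        #include <exception>
        #include <iostream>

        int main()
        {
            boost::recursive_mutex mut;

            // The recursive mutex itself may be locked repeatedly by one thread:
            mut.lock();
            mut.lock();     // fine: the recursion count goes to 2
            mut.unlock();
            mut.unlock();

            // A unique_lock, however, tracks whether *it* owns the mutex and
            // refuses a second lock() regardless of the mutex type underneath:
            boost::unique_lock<boost::recursive_mutex> lk(mut, boost::defer_lock);
            lk.lock();              // ok: lk now owns the mutex
            try {
                lk.lock();          // throws: this lock object already owns it
            } catch (std::exception const& e) {
                std::cerr << "second lk.lock() threw: " << e.what() << '\n';
            }
            lk.unlock();
            return 0;
        }

    In other words, the recursive behaviour lives in the mutex, not in the lock wrapper, so a class like FFTBuf would have to lock _mut directly, or use one lock object per acquisition, to lock recursively.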

    Read the article

  • C# drawing and invalidating 2 lines to meet

    - by BlueMonster
    If I have two lines on a page, as such:

        e.Graphics.DrawLine(blackPen, w, h, h, w);
        e.Graphics.DrawLine(blackPen, w2, h2, h2, w2);

    how would I animate the first line to reach the second line's position? I have the following method which calculates the distance between two points (I'm assuming I would use this?):

        public int Distance2D(int x1, int y1, int x2, int y2)
        {
            //      _______________________
            // d = √ (x2-x1)^2 + (y2-y1)^2
            //
            //Our end result
            int result = 0;
            //Take x2-x1, then square it
            double part1 = Math.Pow((x2 - x1), 2);
            //Take y2-y1, then square it
            double part2 = Math.Pow((y2 - y1), 2);
            //Add both of the parts together
            double underRadical = part1 + part2;
            //Get the square root of the parts
            result = (int)Math.Sqrt(underRadical);
            //Return our result
            return result;
        }

    How would I redraw the line (on a timer) to reach the second line's position? I've looked a lot into XAML (storyboarding) and such, but I want to know how to do this on my own. Any ideas? I know I would need a method which runs in a loop, redrawing the line after moving the position a tiny bit. I would have to call Invalidate() in order to make the line appear as though it's moving... but how would I do this? How would I move that line slowly over to the other line? I'm pretty sure I'd have to use double buffering if I'm doing this as well, as such:

        SetStyle(ControlStyles.DoubleBuffer |
                 ControlStyles.AllPaintingInWmPaint |
                 ControlStyles.UserPaint, true);

    This doesn't quite work, and I'm not quite sure how to fix it. Any ideas?

        protected override void OnPaint(PaintEventArgs e)
        {
            Distance2D(w, h, w2, h2);
            if (w2 != w && h2 != h)
            {
                e.Graphics.DrawLine(blackPen, (w * (int)frame), (h * (int)frame), (h * (int)frame), (w * (int)frame));
                e.Graphics.DrawLine(blackPen, w2, h2, h2, w2);
            }
            else
            {
                t.Abort();
            }
            base.OnPaint(e);
        }

        public void MoveLine()
        {
            for (int i = 0; i < 126; i++)
            {
                frame += .02;
                Invalidate();
                Thread.Sleep(30);
            }
        }

        private void button1_Click(object sender, EventArgs e)
        {
            t = new Thread(new ThreadStart(MoveLine));
            t.Start();
        }
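    The animation being asked for reduces to linear interpolation of the moving line's endpoints toward the target's endpoints: each tick advances a parameter t from 0 to 1 and the line is redrawn at start + t*(end - start). A minimal sketch of just that arithmetic (written in C++ for brevity; the point values and step count are arbitrary, and the translation into the OnPaint/Invalidate loop above is mechanical):

        #include <cstdio>

        struct Point { double x, y; };

        // Linear interpolation: t = 0 yields a, t = 1 yields b.
        Point lerp(Point a, Point b, double t)
        {
            return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
        }

        int main()
        {
            Point from{10, 10}, to{200, 120};   // arbitrary endpoints
            for (int frame = 0; frame <= 10; ++frame) {
                Point p = lerp(from, to, frame / 10.0);
                std::printf("frame %2d -> (%.1f, %.1f)\n", frame, p.x, p.y);
            }
            return 0;
        }

    Interpolating both endpoints this way, then calling Invalidate() once per tick, moves the whole line smoothly; Distance2D is only needed if you want a constant pixel speed rather than a fixed number of frames.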

    Read the article

  • prolog sets problem, stack overflow

    - by garm0nboz1a
    Hi. I'm going to show some code and ask: what could be optimized, and where am I stuck?

        sublist([], []).
        sublist([H | Tail1], [H | Tail2]) :-
            sublist(Tail1, Tail2).
        sublist(H, [_ | Tail]) :-
            sublist(H, Tail).

        less(X, X, _).
        less(X, Z, RelationList) :-
            member([X,Z], RelationList).
        less(X, Z, RelationList) :-
            member([X,Y], RelationList),
            less(Y, Z, RelationList),
            \+less(Z, X, RelationList).

        lessList(X, LessList, RelationList) :-
            findall(Y, less(X, Y, RelationList), List),
            list_to_set(List, L),
            sort(L, LessList), !.

        list_mltpl(List1, List2, List) :-
            findall(X, ( member(X, List1), member(X, List2) ), List).

        chain([_], _).
        chain([H,T | Tail], RelationList) :-
            less(H, T, RelationList),
            chain([T|Tail], RelationList), !.

        have_inf(X1, X2, RelationList) :-
            lessList(X1, X1_cone, RelationList),
            lessList(X2, X2_cone, RelationList),
            list_mltpl(X1_cone, X2_cone, Cone),
            chain(Cone, RelationList), !.

        relations(List, E) :-
            findall([X1,X2], (member(X1, E), member(X2, E), X1 =\= X2), Relations),
            sublist(List, Relations).

        semilattice(List, E) :-
            forall( (member(X1, E), member(X2, E), X1 < X2),
                    have_inf(X1, X2, List) ).

        main(E) :-
            relations(X, E),
            semilattice(X, E).

    I'm trying to model all possible graph sets of N elements. The predicate relations(List, E) connects the list of possible graphs (List) with the input set E. Then I describe the semilattice predicate to check the relation lists for some properties. So, what do I have?

    1) semilattice/2 is working fast and clear:

        ?- semilattice([[1,3],[2,4],[3,5],[4,5]],[1,2,3,4,5]).
        true.

        ?- semilattice([[1,3],[1,4],[2,3],[2,4],[3,5],[4,5]],[1,2,3,4,5]).
        false.

    2) relations/2 is not working well:

        ?- findall(X, relations(X,[1,2,3,4]), List), length(List, Len), writeln(Len), fail.
        4096
        false.

        ?- findall(X, relations(X,[1,2,3,4,5]), List), length(List, Len), writeln(Len), fail.
        ERROR: Out of global stack
        ^  Exception: (11) setup_call_catcher_cleanup('$bags':'$new_findall_bag'(17852886), '$bags':fa_loop(_G263, user:relations(_G263, [1, 2, 3, 4|...]), 17852886, _G268, []), _G835, '$bags':'$destroy_findall_bag'(17852886)) ? abort
        % Execution Aborted

    3) The mix of the two, to find all possible semilattices, does not work at all:

        ?- main([1,2]).
        ERROR: Out of local stack
        ^  Exception: (15) setup_call_catcher_cleanup('$bags':'$new_findall_bag'(17852886), '$bags':fa_loop(_G41, user:less(1, _G41, [[1, 2], [2, 1]]), 17852886, _G52, []), _G4767764, '$bags':'$destroy_findall_bag'(17852886)) ?
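    A quick count suggests the blow-up in 2) is combinatorial rather than a fault in any one clause: with |E| = n elements, relations/2 first builds all n(n-1) ordered pairs, and sublist/2 then backtracks over every subset of that list, so findall/3 has to collect

        2^(n(n-1)) candidate lists:
        n = 4  ->  2^12 = 4096          (exactly the count reported above)
        n = 5  ->  2^20 = 1,048,576

    and holding a million lists of pairs in one findall bag is what exhausts the global stack.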

    Read the article

  • Learning C, would appreciate input on why this solution works.

    - by Keifer
    This is literally the first thing I've ever written in C, so please feel free to point out all its flaws. :) My issue, however, is this: if I write the program the way I feel is cleanest, I get a broken program:

        #include <sys/queue.h>
        #include <stdlib.h>
        #include <stdio.h>
        #include <string.h>

        /* Removed prototypes and non related code for brevity */

        int main()
        {
            char *cmd = NULL;
            unsigned int acct = 0;
            int amount = 0;
            int done = 0;

            while (done==0)
            {
                scanf ("%s %u %i", cmd, &acct, &amount);
                if (strcmp (cmd, "exit") == 0)
                    done = 1;
                else if ((strcmp (cmd, "dep") == 0) || (strcmp (cmd, "deb") == 0))
                    debit (acct, amount);
                else if ((strcmp (cmd, "wd") == 0) || (strcmp (cmd, "cred") == 0))
                    credit (acct, amount);
                else if (strcmp (cmd, "fee") == 0)
                    service_fee(acct, amount);
                else
                    printf("Invalid input!\n");
            }
            return(0);
        }

        void credit(unsigned int acct, int amount)
        {
        }

        void debit(unsigned int acct, int amount)
        {
        }

        void service_fee(unsigned int acct, int amount)
        {
        }

    As it stands, the above generates no errors at compile time, but gives me a segfault when run. I can fix this by changing the program to pass cmd by reference when calling scanf and strcmp. The segfault goes away and is replaced by a warning for each use of strcmp at compile time. Despite the warnings, the affected code works:

        warning: passing arg 1 of 'strcmp' from incompatible pointer type

    As an added bonus, modifying the scanf and strcmp calls allows the program to progress far enough to execute return(0), at which point the thing crashes with an abort trap. If I swap out return(0) for exit(0), then everything works as expected. This leaves me with two questions: why was the original program wrong, and how can I fix it better than I have? The bit about needing to use exit instead of return has me especially baffled.
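    For reference, the crash pattern is consistent with the usual diagnosis: cmd is a null pointer, and %s tells scanf to write through it. Passing &cmd instead makes scanf scribble over the pointer variable itself and whatever sits beside it on the stack, which is why strcmp then warns about the incompatible pointer type, appears to work anyway, and the corrupted stack frame finally trips the abort when main returns (exit(0) leaves without unwinding that frame, which hides the damage). A minimal corrected sketch, valid as both C and C++ (the 64-byte buffer size is an arbitrary choice):

        #include <stdio.h>

        int main(void)
        {
            char cmd[64];          /* real storage for scanf to fill */
            unsigned int acct = 0;
            int amount = 0;

            /* %63s bounds the write so long words cannot overflow cmd */
            if (scanf("%63s %u %i", cmd, &acct, &amount) != 3) {
                fprintf(stderr, "Invalid input!\n");
                return 1;
            }
            printf("cmd=%s acct=%u amount=%d\n", cmd, acct, amount);
            return 0;
        }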

    Read the article

< Previous Page | 16 17 18 19 20 21 22 23 24  | Next Page >