Search Results

Search found 16050 results on 642 pages for 'linq to objects'.

  • Problem in running boost example blocking_udp_echo_client on Mac OS X

    - by n179911
    I am trying to run blocking_udp_echo_client on Mac OS X:
    http://www.boost.org/doc/libs/1_35_0/doc/html/boost_asio/example/echo/blocking_udp_echo_client.cpp

    I run it with the arguments 'localhost 9000', but the program crashes. This is the line in the source which crashes:

        udp::socket s(io_service, udp::endpoint(udp::v4(), 0));

    This is the stack trace:

        #0  0x918c3e42 in __kill
        #1  0x918c3e34 in kill$UNIX2003
        #2  0x9193623a in raise
        #3  0x91942679 in abort
        #4  0x940d96f9 in __gnu_debug::_Error_formatter::_M_error
        #5  0x0000e76e in __gnu_debug::_Safe_iterator::op_base* , __gnu_debug_def::list::op_base*, std::allocator::op_base* ::_Safe_iterator at safe_iterator.h:124
        #6  0x00014729 in boost::asio::detail::hash_map::op_base*::bucket_type::bucket_type at hash_map.hpp:277
        #7  0x00019e97 in std::_Construct::op_base*::bucket_type, boost::asio::detail::hash_map::op_base*::bucket_type at stl_construct.h:81
        #8  0x0001a457 in std::__uninitialized_fill_n_aux::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:194
        #9  0x0001a4e1 in std::uninitialized_fill_n::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:218
        #10 0x0001a509 in std::__uninitialized_fill_n_a::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:310
        #11 0x0001aa34 in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::_M_fill_insert at vector.tcc:365
        #12 0x0001acda in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::insert at stl_vector.h:658
        #13 0x0001ad81 in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::resize at stl_vector.h:427
        #14 0x0001ae3a in __gnu_debug_def::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::resize at vector:169
        #15 0x0001b7be in boost::asio::detail::hash_map::op_base*::rehash at hash_map.hpp:221
        #16 0x0001bbeb in boost::asio::detail::hash_map::op_base*::hash_map at hash_map.hpp:67
        #17 0x0001bc74 in boost::asio::detail::reactor_op_queue::reactor_op_queue at reactor_op_queue.hpp:42
        #18 0x0001bd24 in boost::asio::detail::kqueue_reactor::kqueue_reactor at kqueue_reactor.hpp:86
        #19 0x0001c000 in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
        #20 0x0001c14d in boost::asio::use_service at io_service.ipp:195
        #21 0x0001c26d in boost::asio::detail::reactive_socket_service ::reactive_socket_service at reactive_socket_service.hpp:111
        #22 0x0001c344 in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
        #23 0x0001c491 in boost::asio::use_service at io_service.ipp:195
        #24 0x0001c4d5 in boost::asio::datagram_socket_service::datagram_socket_service at datagram_socket_service.hpp:95
        #25 0x0001c59e in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
        #26 0x0001c6eb in boost::asio::use_service at io_service.ipp:195
        #27 0x0001c711 in boost::asio::basic_io_object ::basic_io_object at basic_io_object.hpp:72
        #28 0x0001c783 in boost::asio::basic_socket ::basic_socket at basic_socket.hpp:108
        #29 0x0001c865 in boost::asio::basic_datagram_socket ::basic_datagram_socket at basic_datagram_socket.hpp:107
        #30 0x000027bc in main at main.cpp:32

    This is the gdb output:

        (gdb) continue
        /Developer/SDKs/MacOSX10.5.sdk/usr/include/c++/4.0.0/debug/safe_iterator.h:127:
        error: attempt to copy-construct an iterator from a singular iterator.
        Objects involved in the operation:
          iterator "this" @ 0x0x100420 {
            type = N11__gnu_debug14_Safe_iteratorIN10__gnu_norm14_List_iteratorISt4pairIiPN5boost4asio6detail16reactor_op_queueIiE7op_baseEEEEN15__gnu_debug_def4listISB_SaISB_EEEEE (mutable iterator);
            state = singular;
          }
          iterator "other" @ 0x0xbfffe8a4 {
            type = N11__gnu_debug14_Safe_iteratorIN10__gnu_norm14_List_iteratorISt4pairIiPN5boost4asio6detail16reactor_op_queueIiE7op_baseEEEEN15__gnu_debug_def4listISB_SaISB_EEEEE (mutable iterator);
            state = singular;
          }
        Program received signal: "SIGABRT".
        (gdb) continue
        Program received signal: "?".

    Does anyone have any idea why this example does not work on Mac OS X? Thank you.

  • How to produce precisely-timed tone and silence in C#

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I wrote it using Managed DirectX, only to discover that Managed DirectX is old and deprecated.

    The task is to play pure sine-wave bursts interspersed with periods of silence (the code), each precisely timed in duration. I need to be able to call a function that plays a pure tone for so many milliseconds, then Thread.Sleep(), then play another, and so on. At its fastest, the tones and spaces can be as short as 40 ms.

    It works quite well in Managed DirectX. To get a precisely timed tone I create one second of sine wave in a secondary buffer; then, to play a tone of a certain duration, I seek forward to within x milliseconds of the end of the buffer and play.

    I've tried System.Media.SoundPlayer. It's a loser, because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, and variable with CPU load: it takes an indeterminate amount of time to actually stop the tone.

    I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory-resident stream providing the tone data, again seeking forward to leave the desired length of tone remaining in the stream, then playing. This worked OK with the DirectSoundOut class for a while (see below), but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd, since I play to the end and then wait the same duration between the end of one tone and the start of the next. You'd think that 80 ms after starting to play a 40 ms tone there wouldn't be buffers on the queue.

    DirectSoundOut works well for a while, but its problem is that every tone-burst Play() spins off a separate thread. Eventually (after 5 minutes or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in the VS2008 IDE. I don't create new objects during playing; I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers (or whatever) piling up until it chokes.

    I'm out of patience on this one, so I'm asking in the hope that someone here has faced a similar requirement and can steer me in a direction with a likely solution. Thanks in advance...
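
    For reference, a minimal sketch (an illustration, not code from the question) of building the one-second sine buffer described above; the sample rate, amplitude and 16-bit mono PCM format are assumptions:

        // One second of a pure tone as 16-bit mono PCM samples.
        static short[] BuildSineBuffer(int sampleRate, double frequency)
        {
            var buffer = new short[sampleRate];
            for (int i = 0; i < buffer.Length; i++)
                buffer[i] = (short)(0.8 * short.MaxValue *
                                    Math.Sin(2.0 * Math.PI * frequency * i / sampleRate));
            return buffer;
        }

    A 40 ms tone then corresponds to playing the last (sampleRate * 40 / 1000) samples of the buffer, which is the seek-near-the-end trick the question describes.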

  • Please explain NHibernate HiLo

    - by Ben
    I'm struggling to get my head around how the HiLo generator works in NHibernate. I've read the explanation here, which made things a little clearer. My understanding is that each SessionFactory retrieves the high value from the database. This improves performance because we have access to IDs without hitting the database. The explanation from the above link also states:

        For instance, supposing you have a "high" sequence with a current value of 35, and the "low" number is in the range 0-1023. Then the client can increment the sequence to 36 (for other clients to be able to generate keys while it's using 35) and know that keys 35/0, 35/1, 35/2, 35/3... 35/1023 are all available.

    How does this work in a web application, given that I only have one SessionFactory and therefore one hi value? Does this mean that in a disconnected application you can end up with duplicate (low) ids in your entity table?

    In my tests I used these settings:

        <id name="Id" unsaved-value="0">
          <generator class="hilo"/>
        </id>

    I ran a test to save 100 objects. The IDs in my table went from 32768 to 32868, and the next hi value was incremented to 2. Then I ran my test again, and the IDs were in the range 65536 to 65636. First off, why start at 32768 and not 1, and secondly, why the jump from 32868 to 65536?

    Now I know that my surrogate keys shouldn't have any meaning, but we do use them in our application. Why can't I just have them increment nicely, like a SQL Server identity field would?

    Finally, can someone give me an explanation of how the max_lo parameter works? Is it the maximum number of low values (entity ids, in my head) that can be created against the high value? This is one topic in NHibernate that I have struggled to find documentation for. I read the entire NHibernate in Action book and it still doesn't go into how this works in any detail.

    Thanks, Ben
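
    For what it's worth, the hi/lo arithmetic can be sketched like this (an illustration of the algorithm, not NHibernate's actual source; FetchAndIncrementHi is a hypothetical stand-in for the round-trip to the hi table):

        // State per id generator (one per SessionFactory in NHibernate's case).
        long hi;
        int lo = int.MaxValue;          // forces a hi fetch on the first call
        readonly int maxLo = 32767;     // NHibernate's default max_lo

        long NextId()
        {
            if (lo > maxLo)
            {
                hi = FetchAndIncrementHi();  // one SELECT/UPDATE per maxLo+1 ids
                lo = 0;
            }
            return hi * (maxLo + 1L) + lo++;
        }

    With the default max_lo of 32767, hi = 1 yields ids 32768..65535 and hi = 2 yields 65536..98303, which would explain both the 32768 starting point and the jump to 65536 seen in the test above.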

  • How do I serialize an object to xml but not have it be the root element of the xml document

    - by mezoid
    I have the following object:

        public class MyClass
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

    I want to serialize this to the following xml string:

        <MyClass>
          <Id>1</Id>
          <Name>My Name</Name>
        </MyClass>

    Unfortunately, when I use the XmlSerializer I get a string that looks like:

        <?xml version="1.0" encoding="utf-8"?>
        <MyClass xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Id>1</Id>
          <Name>My Name</Name>
        </MyClass>

    I don't want MyClass to be the root element of the document; rather, I eventually want to combine the string with other similar serialized objects inside a larger xml document. I.e., eventually I'll have an xml string that looks like this:

        <Classes>
          <MyClass>
            <Id>1</Id>
            <Name>My Name</Name>
          </MyClass>
          <MyClass>
            <Id>1</Id>
            <Name>My Name</Name>
          </MyClass>
        </Classes>

    My first thought was to create a class as follows:

        public class Classes
        {
            public List<MyClass> MyClasses { get; set; }
        }

    ...but that just adds an additional node called MyClasses wrapping the list of MyClass. My gut feeling is that I'm approaching this the wrong way, and that my lack of experience with creating xml files isn't helping to point me to the part of the .NET framework, or some other library, that simplifies this.
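
    One common way to get there (a sketch under assumptions, not the only approach; myObject stands for an instance of MyClass) is to suppress both the XML declaration and the default xsi/xsd namespaces when serializing each object, so each fragment can be concatenated inside a <Classes> element afterwards:

        using System.Text;
        using System.Xml;
        using System.Xml.Serialization;

        var ns = new XmlSerializerNamespaces();
        ns.Add("", "");                              // drops xmlns:xsi / xmlns:xsd

        var settings = new XmlWriterSettings
        {
            OmitXmlDeclaration = true,               // drops <?xml ... ?>
            ConformanceLevel = ConformanceLevel.Fragment
        };

        var sb = new StringBuilder();
        using (var writer = XmlWriter.Create(sb, settings))
            new XmlSerializer(typeof(MyClass)).Serialize(writer, myObject, ns);

        // sb.ToString() is now "<MyClass><Id>1</Id><Name>My Name</Name></MyClass>",
        // ready to be wrapped: "<Classes>" + fragments + "</Classes>".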

  • NSCollectionView draws nothing

    - by PCWiz
    I'm trying to set up an NSCollectionView (I have done this successfully in the past, but for some reason it fails this time). I have a model class called "TestModel" with an NSString property that just returns a string (for testing purposes right now). I then have an NSMutableArray property declaration in my main app delegate class, and to this array I add instances of the TestModel object. An Array Controller then has its Content Array bound to the app delegate's NSMutableArray.

    I can confirm that everything up to here is working fine; NSLogging

        [[[arrayController arrangedObjects] objectAtIndex:0] teststring]

    worked fine. I then have all the appropriate bindings for the collection view set up (itemPrototype and content), and for the Collection View Item (view). A text field in the collection item view is bound to Collection View Item.representedObject.teststring. However, NOTHING displays in the collection view when I start the app - just a blank white screen. What am I missing?

    UPDATE: Here is the code I use (requested by wil shipley):

        // App delegate class
        @interface AppController : NSObject
        {
            NSMutableArray *objectArray;
        }
        @property (readwrite, retain) NSMutableArray *objectArray;
        @end

        @implementation AppController
        @synthesize objectArray;

        - (id)init
        {
            if (self = [super init]) {
                objectArray = [[NSMutableArray alloc] init];
            }
            return self;
        }

        - (void)applicationDidFinishLaunching:(NSNotification *)aNotification
        {
            TestModel *test = [[[TestModel alloc] initWithString:@"somerandomstring"] autorelease];
            if (test)
                [objectArray addObject:test];
        }
        @end

        // The model class (TestModel)
        @interface TestModel : NSObject
        {
            NSString *teststring;
        }
        @property (readwrite, retain) NSString *teststring;
        - (id)initWithString:(NSString*)customString;
        @end

        @implementation TestModel
        @synthesize teststring;

        - (id)initWithString:(NSString*)customString
        {
            if (self = [super init]) {
                [self setTeststring:customString];
            }
            return self;
        }

        - (void)dealloc
        {
            [teststring release];
            [super dealloc];
        }
        @end

    And then, like I said, the Content Array of the Array Controller is bound to this objectArray, and the Content of the NSCollectionView is bound to Array Controller.arrangedObjects. I can verify that the Array Controller has the objects in it by NSLogging [arrayController arrangedObjects], and it returns the correct object. It's just that nothing displays in the NSCollectionView.

    UPDATE 2: If I log [collectionView content] I get nothing:

        2009-10-21 08:02:42.385 CollViewTest[743:a0f] (
        )

    The problem is probably there.

    UPDATE 3: As requested, here is the Xcode project: http://www.mediafire.com/?mjgdzgjjfzw
    It's a menubar app, so it has no window. When you build and run the app you'll see a menubar item that says "test"; this opens the view that contains the NSCollectionView. Thanks

  • Is this (Lock-Free) Queue Implementation Thread-Safe?

    - by Hosam Aly
    I am trying to create a lock-free queue implementation in Java, mainly for personal learning. The queue should be a general one, allowing any number of readers and/or writers concurrently. Would you please review it and suggest any improvements/issues you find? Thank you.

        import java.util.concurrent.atomic.AtomicReference;

        public class LockFreeQueue<T> {

            private static class Node<E> {
                E value;
                volatile Node<E> next;

                Node(E value) {
                    this.value = value;
                }
            }

            private AtomicReference<Node<T>> head, tail;

            public LockFreeQueue() {
                // have both head and tail point to a dummy node
                Node<T> dummyNode = new Node<T>(null);
                head = new AtomicReference<Node<T>>(dummyNode);
                tail = new AtomicReference<Node<T>>(dummyNode);
            }

            /**
             * Puts an object at the end of the queue.
             */
            public void putObject(T value) {
                Node<T> newNode = new Node<T>(value);
                Node<T> prevTailNode = tail.getAndSet(newNode);
                prevTailNode.next = newNode;
            }

            /**
             * Gets an object from the beginning of the queue. The object is removed
             * from the queue. If there are no objects in the queue, returns null.
             */
            public T getObject() {
                Node<T> headNode, valueNode;

                // move head node to the next node using atomic semantics
                // as long as next node is not null
                do {
                    headNode = head.get();
                    valueNode = headNode.next;
                    // try until the whole loop executes pseudo-atomically
                    // (i.e. unaffected by modifications done by other threads)
                } while (valueNode != null && !head.compareAndSet(headNode, valueNode));

                T value = (valueNode != null ? valueNode.value : null);

                // release the value pointed to by head, keeping the head node dummy
                if (valueNode != null)
                    valueNode.value = null;

                return value;
            }
        }
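
    As an aside for .NET readers, the heart of the enqueue above - atomically swapping the tail, then linking the old tail to the new node - can be sketched in C# with Interlocked.Exchange (an illustrative translation, not part of the question):

        using System.Threading;

        class LockFreeQueue<T>
        {
            private sealed class Node
            {
                public T Value;
                public Node Next;               // published via the Exchange below
            }

            private Node _tail = new Node();    // dummy node, as in the Java version

            public void Enqueue(T value)
            {
                var node = new Node { Value = value };
                var prev = Interlocked.Exchange(ref _tail, node); // atomic getAndSet
                prev.Next = node;               // link the old tail to the new node
            }
        }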

  • Winform radiobutton data binding

    - by Rajarshi
    I am following the "Presentation Model" design pattern suggested by Martin Fowler for my GUI architecture in a Windows Forms project.

        "The essence of a Presentation Model is of a fully self-contained class that represents all the data and behavior of the UI window, but without any of the controls used to render that UI on the screen. A view then simply projects the state of the presentation model onto the glass..." - Martin Fowler

    Read more about this pattern at www.martinfowler.com/eaaDev/PresentationModel.html

    I am finding the concept very fluid and easy to understand, except for one issue: data binding RadioButtons to properties on the data/domain object. Supposing I have a Windows Form with 3 radio buttons depicting some "Mode" options - Auto, Manual, Import - how can I use boolean properties on the data/domain object to bind to these buttons? I have tried many ways, but to no avail. For example, I would like to code:

        rbtnAutoMode.DataBindings.Add("Text", myBusinessObject, "IsAutoMode");
        rbtnManualMode.DataBindings.Add("Text", myBusinessObject, "IsManualMode");
        rbtnImportMode.DataBindings.Add("Text", myBusinessObject, "IsImportMode");

    There should be a fourth property, something like "SelectedMode", on the data/domain object, which in the end should hold a single value like "SelectedMode = Auto". I am trying to update this property when any of "IsAutoMode", "IsManualMode" or "IsImportMode" changes, e.g. through the property setters. INotifyPropertyChanged is implemented on my data/domain object, so updating any of its properties automatically updates my UI controls; that's not an issue.

    There is a good example of binding 2 radio buttons here: http://stackoverflow.com/questions/344964/how-do-i-use-databinding-with-windows-forms-radio-buttons - but I am missing the link when implementing the same with 3 buttons, and I am getting very erratic behavior from the radio buttons. I hope I was able to explain reasonably. I am in a hurry and could not put detailed code in this post, but any help in this regard is appreciated.

    There is a simple solution to this issue: exposing a method like

        public void SetMode(Modes mode)
        {
            this._selectedMode = mode;
        }

    which could be called from the "CheckedChanged" event of the radio buttons in the UI and would perfectly set the "SelectedMode" on the business object. But I need to stretch the limits and verify whether this can be done by data binding.
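
    One shape that tends to stay stable with three or more buttons (a sketch built on the question's property names; the Modes enum and an OnPropertyChanged helper are assumed to exist on the model) is a single SelectedMode whose setter re-announces all of the boolean wrappers, each wrapper bound to Checked rather than Text:

        public Modes SelectedMode
        {
            get { return _selectedMode; }
            set
            {
                if (_selectedMode == value) return;
                _selectedMode = value;
                // Re-announce every wrapper so the other buttons uncheck.
                OnPropertyChanged("SelectedMode");
                OnPropertyChanged("IsAutoMode");
                OnPropertyChanged("IsManualMode");
                OnPropertyChanged("IsImportMode");
            }
        }

        public bool IsAutoMode
        {
            get { return _selectedMode == Modes.Auto; }
            set { if (value) SelectedMode = Modes.Auto; }
        }
        // IsManualMode and IsImportMode follow the same shape.

        // Bind to Checked, updating on property change rather than on validation:
        rbtnAutoMode.DataBindings.Add("Checked", myBusinessObject, "IsAutoMode",
            false, DataSourceUpdateMode.OnPropertyChanged);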

  • .NET asmx web services: serialize object property as string property to support versioning

    - by mcliedtk
    I am in the process of upgrading our web services to support versioning. We will be publishing our versioned web services like so:

        http://localhost/project/services/1.0/service.asmx
        http://localhost/project/services/1.1/service.asmx

    One requirement of this versioning is that I am not allowed to break the original wsdl (the 1.0 wsdl). The challenge lies in how to shepherd the newly versioned classes through the logic that lies behind the web services (this logic includes a number of command and adapter classes). Note that upgrading to WCF is not an option at the moment.

    To illustrate this, let's consider an example with Blogs and Posts. Prior to the introduction of versioning, we were passing concrete objects around instead of interfaces, so an AddPostToBlog command would take a Post object instead of an IPost:

        // Old AddPostToBlog constructor.
        public AddPostToBlog(Blog blog, Post post)
        {
            // constructor body
        }

    With the introduction of versioning, I would like to maintain the original Post while adding a PostOnePointOne. Both Post and PostOnePointOne will implement the IPost interface (they are not extending an abstract class because that inheritance breaks the wsdl, though I suppose there may be a way around that via some fancy xml serialization tricks).

        // New AddPostToBlog constructor.
        public AddPostToBlog(Blog blog, IPost post)
        {
            // constructor body
        }

    This brings us to my question regarding serialization. The original Post class has an enum property named Type. For various cross-platform compatibility issues, we are changing the enums in our web services to strings. So I would like to do the following:

        // New IPost interface.
        public interface IPost
        {
            object Type { get; set; }
        }

        // Original Post object.
        public class Post
        {
            // The purpose of this attribute would be to maintain how
            // the enum currently is serialized even though now the
            // type is an object instead of an enum (internally the
            // object actually is an enum here, but it is exposed as
            // an object to implement the interface).
            [XmlMagic(SerializeAsEnum)]
            public object Type { get; set; }
        }

        // New version of Post object.
        public class PostOnePointOne
        {
            // The purpose of this attribute would be to force
            // serialization as a string even though it is an object.
            [XmlMagic(SerializeAsString)]
            public object Type { get; set; }
        }

    The XmlMagic refers to an XmlAttribute or some other part of the System.Xml namespace that would allow me to control the type of the object property being serialized (depending on which version of the object I am serializing). Does anyone know how to accomplish this?
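
    One well-worn workaround for this kind of per-version control (a sketch; whether it satisfies the wsdl constraints above would need checking) is to hide the interface property from the serializer and expose a typed shadow property in its place:

        public class PostOnePointOne : IPost
        {
            [XmlIgnore]
            public object Type { get; set; }        // satisfies IPost, never serialized

            // Serialized under the original element name, as a string for 1.1 clients.
            [XmlElement("Type")]
            public string TypeAsString
            {
                get { return Type == null ? null : Type.ToString(); }
                set { Type = value; }
            }
        }

    The original Post could do the same with an enum-typed shadow property to keep its 1.0 wire format unchanged.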

  • How to bind a WPF CustomControl to a ListBox

    - by VivyG
    I've just started to learn WPF this week, so please accept my apologies if I am being stupid or missing fundamental points! I am trying to build a very simple contact browser. I have a collection of Contact objects displayed in a ListBox control, which shows the FullName of each Contact, and to the right I have a custom control called BasicContactCard.

    This is the XAML for the ContactWindow that displays the ListBox:

        <DockPanel Width="auto" Height="auto" Margin="8 8 8 8">
          <Border Height="56" HorizontalAlignment="Stretch" VerticalAlignment="Top"
                  BorderThickness="1" CornerRadius="8" DockPanel.Dock="Top" Background="Beige">
            <TextBox Height="32" Margin="23,5,135,5" Text="Search for contact here"
                     FontStyle="Italic" Foreground="#FFAD9595" FontSize="14" BorderBrush="LightGray"/>
          </Border>
          <ListBox x:Name="contactList" DockPanel.Dock="Left" Width="192" Height="auto" Margin="5 4 0 8" />
          <Grid>
            <Grid.RowDefinitions>
              <RowDefinition Height="1*" />
              <RowDefinition Height="0.125*" />
            </Grid.RowDefinitions>
            <local:BasicContactCard Margin="8 8 8 8" />
            <Button Grid.Row="1" x:Name="exit" Content="Exit" HorizontalAlignment="Right"
                    Width="50" Height="25" Click="exit_Click" />
          </Grid>
        </DockPanel>

    And this is the XAML for the custom control:

        <DockPanel Width="auto" Height="auto" Margin="8,8,8,8">
          <Grid Width="auto" Height="auto" DockPanel.Dock="Top">
            <Grid.RowDefinitions>
              <RowDefinition Height="1*" />
              <RowDefinition Height="1*" />
              <RowDefinition Height="1*" />
              <RowDefinition Height="1*" />
            </Grid.RowDefinitions>
            <TextBlock x:Name="companyField" Grid.Row="0" Width="auto" HorizontalAlignment="Stretch"
                       VerticalAlignment="Stretch" Margin="8,8,8,8" Text="Company"/>
            <TextBlock x:Name="contactField" Grid.Row="1" Width="auto" HorizontalAlignment="Stretch"
                       VerticalAlignment="Stretch" Margin="8,8,8,8" Text="Contact"/>
            <TextBlock x:Name="phoneField" Grid.Row="2" Width="auto" HorizontalAlignment="Stretch"
                       VerticalAlignment="Stretch" Margin="8,8,8,8" Text="Phone"/>
            <TextBlock x:Name="emailField" Grid.Row="3" Width="auto" HorizontalAlignment="Stretch"
                       VerticalAlignment="Stretch" Margin="8,8,8,8" Text="email"/>
          </Grid>
        </DockPanel>

    The problem I have is: how do I bind the individual elements of the custom control to the object behind the SelectedItem in the ListBox?
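
    One way to wire this up (a sketch; the Company property name and an x:Name="contactCard" on the BasicContactCard are assumptions) is to point the card's DataContext at the ListBox's SelectedItem, here from the ContactWindow code-behind, and then let each TextBlock bind by property name:

        // Needs System.Windows and System.Windows.Data.
        // In the ContactWindow constructor, after InitializeComponent():
        contactCard.SetBinding(FrameworkElement.DataContextProperty,
            new Binding("SelectedItem") { ElementName = "contactList" });

    With that in place, the fields inside the card bind directly to the selected contact, e.g. Text="{Binding Company}" on companyField; the same DataContext hookup can also be written purely in XAML as DataContext="{Binding ElementName=contactList, Path=SelectedItem}".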

  • Is C# slower than VB.NET?

    - by Matt Winckler
    Believe it or not, despite the title, this is not a troll. Running some benchmarks this morning, my colleagues and I discovered some strange things concerning performance, and I am wondering if we're doing something horribly wrong.

    We started out comparing C# vs. Delphi Prism calculating prime numbers, and found that Prism was about 30% faster. I figured maybe CodeGear did more optimization when generating IL (the exe was about twice as big as C#'s and had all sorts of different IL in it). So I decided to write a test in VB.NET as well, assuming that Microsoft's compilers would end up writing essentially the same IL for each language. However, the result there was more shocking: C# was more than three times slower than VB running the same operations.

    The generated IL was different, but not extremely so, and I'm not good enough at reading it to understand the differences. As a fan of C#, this apparent slowness wounds me horribly, and I am left wondering: what in the world is going on here? Is it time to pack it all in and go write web apps in Ruby? ;-)

    I've included the code for each below - just copy it into a new VB or C# console app and run. On my machine, VB finds 348513 primes in about 6.36 seconds. C# finds the same number of primes in 21.76 seconds. (I've got an Intel Core2 Quad Q6600 @ 2.4 GHz; on another Intel machine in the office the code for both runs much faster, but the ratio is about the same; on an AMD machine here the timing is ~10 seconds for VB and ~13 for C# - much less difference, but C# is still always slower.)

    Both console applications were compiled in Release mode, but otherwise no project settings were changed from the defaults generated by Visual Studio 2008. Is it a generally known fact that C#'s generated IL is worse than VB's? Or is this a strange edge case? Or is my code flawed somehow (most likely)? Any insights are appreciated.

    VB code:

        Imports System.Diagnostics

        Module Module1
            Private temp As List(Of Int32)
            Private sw As Stopwatch
            Private totalSeconds As Double

            Sub Main()
                serialCalc()
            End Sub

            Private Sub serialCalc()
                temp = New List(Of Int32)()
                sw = Stopwatch.StartNew()
                For i As Int32 = 2 To 5000000
                    testIfPrimeSerial(i)
                Next
                sw.Stop()
                totalSeconds = sw.Elapsed.TotalSeconds
                Console.WriteLine(String.Format("{0} seconds elapsed.", totalSeconds))
                Console.WriteLine(String.Format("{0} primes found.", temp.Count))
                Console.ReadKey()
            End Sub

            Private Sub testIfPrimeSerial(ByVal suspectPrime As Int32)
                For i As Int32 = 2 To Math.Sqrt(suspectPrime)
                    If (suspectPrime Mod i = 0) Then
                        Exit Sub
                    End If
                Next
                temp.Add(suspectPrime)
            End Sub
        End Module

    C# code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Diagnostics;

        namespace FindPrimesCSharp
        {
            class Program
            {
                List<Int32> temp = new List<Int32>();
                Stopwatch sw;
                double totalSeconds;

                static void Main(string[] args)
                {
                    new Program().serialCalc();
                }

                private void serialCalc()
                {
                    temp = new List<Int32>();
                    sw = Stopwatch.StartNew();
                    for (Int32 i = 2; i <= 5000000; i++)
                    {
                        testIfPrimeSerial(i);
                    }
                    sw.Stop();
                    totalSeconds = sw.Elapsed.TotalSeconds;
                    Console.WriteLine(string.Format("{0} seconds elapsed.", totalSeconds));
                    Console.WriteLine(string.Format("{0} primes found.", temp.Count));
                    Console.ReadKey();
                }

                private void testIfPrimeSerial(Int32 suspectPrime)
                {
                    for (Int32 i = 2; i <= Math.Sqrt(suspectPrime); i++)
                    {
                        if (suspectPrime % i == 0)
                            return;
                    }
                    temp.Add(suspectPrime);
                }
            }
        }
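
    A detail worth checking before blaming the IL (an observation about language semantics, not a verdict on this benchmark): VB's For...Next evaluates its upper bound once, before the loop starts, while the C# for statement re-evaluates its condition on every iteration - so the C# version above calls Math.Sqrt once per pass through the loop. Hoisting the bound makes the two loops equivalent:

        private void testIfPrimeSerial(Int32 suspectPrime)
        {
            int limit = (int)Math.Sqrt(suspectPrime);  // evaluated once, like VB's For...Next
            for (Int32 i = 2; i <= limit; i++)
            {
                if (suspectPrime % i == 0)
                    return;
            }
            temp.Add(suspectPrime);
        }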

  • Creating thousands of records in Rails

    - by willCosgrove
    Let me set the stage: my application deals with gift cards. When we create cards, each has to have a unique string that the user can use to redeem it. So when someone orders our gift cards, like a retailer, we need to make a lot of new card objects and store them in the DB.

    With that in mind, I'm trying to see how quickly I can have my application generate 100,000 cards. Database expert I am not, so I need someone to explain this little phenomenon: when I create 1,000 cards, it takes 5 seconds. When I create 100,000 cards, it should take 500 seconds, right?

    Now I know what you want to see: the card creation method I'm using, because the first assumption would be that it's getting slower because it's checking the uniqueness of more and more cards as it goes along. But I can show you my rake task:

        desc "Creates cards for a retailer"
        task :order_cards, [:number_of_cards, :value, :retailer_name] => :environment do |t, args|
          t = Time.now
          puts "Searching for retailer"
          @retailer = Retailer.find_by_name(args[:retailer_name])
          puts "Retailer found"
          puts "Generating codes"
          value = args[:value].to_i
          number_of_cards = args[:number_of_cards].to_i
          codes = []
          top_off_codes(codes, number_of_cards)
          while codes != codes.uniq
            codes.uniq!
            top_off_codes(codes, number_of_cards)
          end
          stored_codes = Card.all.collect do |c|
            c.code
          end
          while codes != (codes - stored_codes)
            codes -= stored_codes
            top_off_codes(codes, number_of_cards)
          end
          puts "Codes are unique and generated"
          puts "Creating bundle"
          @bundle = @retailer.bundles.create!(:value => value)
          puts "Bundle created"
          puts "Creating cards"
          @bundle.transaction do
            codes.each do |code|
              @bundle.cards.create!(:code => code)
            end
          end
          puts "Cards generated in #{Time.now - t}s"
        end

        def top_off_codes(codes, intended_number)
          (intended_number - codes.size).times do
            codes << ReadableRandom.get(CODE_LENGTH)
          end
        end

    I'm using a gem called readable_random for the unique code. If you read through all of that code, you'll see that it does all of its uniqueness testing before it ever starts creating cards. It also writes status updates to the screen while it's running, and it always sits for a while at the card creation step, while it flies through the uniqueness tests.

    So my question to the Stack Overflow community is: why is my database slowing down as I add more cards? Why is this not a linear function in regards to time per card? I'm sure the answer is simple and I'm just a moron who knows nothing about data storage. And if anyone has any suggestions: how would you optimize this method, and how fast do you think you could get it to create 100,000 cards?

    (When I plotted my times on a graph and did a quick curve fit to get my line formula, I calculated that creating 100,000 cards with my current code would take 5.5 hours. That may be completely wrong, I'm not sure. But if it stays on the curve I fitted, it would be right around there.)

  • Colorbox class doesn't work with AJAX/DOM objects

    - by almal
    I usually use the Colorbox tool to open popup windows in my pages, and everything works fine. In my new project the situation is a little different, because I use JS/AJAX/DOM to create my objects dynamically in the handleRequestStateChange() function.

    After importing the JS, jQuery and CSS for Colorbox, in the head of my JS page I write:

        $(document).ready(function () {
            $(window).scroll(function () {
                //oP1 = document.createTextNode(posizione_menu.offsetTop);
                //divIpt.appendChild(oMtx1);
                $(".divHcss").css("position", "fixed").css("top", "0px").css("z-index", "999");
            });

            //Examples of how to assign the Colorbox event to elements
            $(".group1").colorbox({rel:'group1'});
            $(".group2").colorbox({rel:'group2', transition:"fade"});
            $(".group3").colorbox({rel:'group3', transition:"none", width:"75%", height:"75%"});
            $(".group4").colorbox({rel:'group4', slideshow:true});
            $(".ajax").colorbox();
            $(".youtube").colorbox({iframe:true, innerWidth:640, innerHeight:390});
            $(".vimeo").colorbox({iframe:true, innerWidth:500, innerHeight:409});
            $(".iframe").colorbox({iframe:true, width:"80%", height:"80%"});
            $(".inline").colorbox({inline:true, width:"50%"});
            $(".callbacks").colorbox({
                onOpen:function(){ alert('onOpen: colorbox is about to open'); },
                onLoad:function(){ alert('onLoad: colorbox has started to load the targeted content'); },
                onComplete:function(){ alert('onComplete: colorbox has displayed the loaded content'); },
                onCleanup:function(){ alert('onCleanup: colorbox has begun the close process'); },
                onClosed:function(){ alert('onClosed: colorbox has completely closed'); }
            });
            $('.non-retina').colorbox({rel:'group5', transition:'none'})
            $('.retina').colorbox({rel:'group5', transition:'none', retinaImage:true, retinaUrl:true});

            //Example of preserving a JavaScript event for inline calls.
            $("#click").click(function(){
                $('#click').css({"background-color":"#f00", "color":"#fff", "cursor":"inherit"})
                           .text("Open this window again and this message will still be here.");
                return false;
            });
        });

    Later, in handleRequestStateChange(), I create my a element and assign it to a div:

        var a = createElement('a');
        //a.style.display = "block";
        a.setAttribute('class','iframe');
        a.setAttribute('href',"php/whois.php?P1="+oStxt.value);
        var divIp3 = createElement('div', 'divIp3', 'divIp3css');
        var divIp31 = createElement('div', 'divIp31', 'divIp31css');
        divIp3.appendChild(divIp31);
        divIp3.appendChild(a);
        a.appendChild(divIp31);

    The divIp31 becomes linkable, but the href opens the page in a normal browser tab instead of using the class attribute for Colorbox. Does someone have an idea about this? Thanks in advance. AM

  • VB6 app not executing as scheduled task unless user is logged on

    - by Tedd Hansen
    Hi. Would greatly appreciate some help on this one! It may be a tricky one. :)

    Problem

    I have a VB6 application which is set up as a scheduled task. It starts every time, but when executing CreateObject it fails if no user is logged on to the computer. I am looking for information on what could cause this. The primary suspicion is that some Windows API call fails.

    Key points

    - The behaviour is confirmed on Windows 2000, 2003, 2008 and Vista.
    - The application executes as user X at the scheduled time, executed by the Windows Task Scheduler. It executes every time. The application does start!
    - If user X is logged on via RDP, it runs perfectly. (Note that the user doesn't need to be connected, only logged on.)
    - If user X is not logged on to the computer, the application fails.

    Failure point

    The application fails when using CreateObject() to instantiate a DCOM object which is also part of the application. The DCOM objects declare .dll references at startup (globally, at the top of the .bas file) and run a small startup function. The failure must be during startup, possibly in one of the .dll declarations.

    Thoughts

    After some Googling my initial suspicion was directed at MAPI. From what I could see, MAPI requires the user to be logged on, and the application has MAPI references. But even with all MAPI references removed, it still does not work.

    What is the difference when a user is logged on? Registry mapping? Environment? explorer.exe is running. Isn't the user logged on when the application executes as that user?

    What info would help?

    A definitive answer would be truly great. Any information regarding any VB6 feature or Windows API that could act differently depending on whether a user is logged on would definitely help. Similar experiences may lead me in the right direction. Tips on debugging this. Thanks! :)

  • WPF - How do I get an object that is bound to a ListBoxItem back

    - by JonBlumfeld
    Hi there. Here is what I would like to do: I get a list of objects from a database and bind this list to a ListBox control. Each ListBoxItem consists of a text block and a button. Here is what I came up with; up to this point it works as intended. The object has a number of properties, like ID and Name. If I click the button in a ListBoxItem, the item should be erased from the ListBox and also from the database.

        <ListBox x:Name="taglistBox">
          <ListBox.ItemContainerStyle>
            <Style TargetType="{x:Type ListBoxItem}">
              <Setter Property="HorizontalAlignment" Value="Stretch"/>
              <Setter Property="Template">
                <Setter.Value>
                  <ControlTemplate TargetType="ListBoxItem">
                    <ContentPresenter HorizontalAlignment="Stretch"/>
                  </ControlTemplate>
                </Setter.Value>
              </Setter>
              <Setter Property="Tag" Value="{Binding TagSelf}"></Setter>
            </Style>
          </ListBox.ItemContainerStyle>
          <ListBox.ItemTemplate>
            <DataTemplate>
              <Grid HorizontalAlignment="Stretch">
                <Grid.ColumnDefinitions>
                  <ColumnDefinition Width="Auto"/>
                  <ColumnDefinition/>
                </Grid.ColumnDefinitions>
                <Button Grid.Column="0" Name="btTag" VerticalAlignment="Center"
                        Click="btTag_Click" HorizontalAlignment="Left">
                  <Image Width="16" Height="16" Source="/WpfApplication1;component/Resources/104.png"/>
                </Button>
                <TextBlock Name="tbtagBoxTagItem" Margin="5" Grid.Column="1"
                           Text="{Binding Name}" VerticalAlignment="Center" />
              </Grid>
            </DataTemplate>
          </ListBox.ItemTemplate>
        </ListBox>

    The TextBlock.Text is bound to object.Name and the ListBoxItem.Tag to object.TagSelf (which is just a copy of the object itself). Now my questions:

    1. If I click the button in the ListBoxItem, how do I get the ListBoxItem and the object bound to it back? In order to delete the object from the database I have to retrieve it somehow. I tried something like:

        ListBoxItem lbi1 = (ListBoxItem)(taglistBox.ItemContainerGenerator.ContainerFromItem(taglistBox.Items.CurrentItem));
        ObjectInQuestion t = (ObjectInQuestion)lbi1.Tag;

    2. Is there a way to automatically update the contents of the ListBox if the ItemsSource changes? Right now I'm achieving that with:

        taglistBox.ItemsSource = null;
        taglistBox.ItemsSource = ObjectInQuestion;

    I'd appreciate any help you can give :D Thanks in advance
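
    Two things usually simplify exactly this situation (a sketch reusing the question's names; tagCollection and DeleteFromDatabase are hypothetical stand-ins): the clicked button's DataContext already is the bound object, so neither Tag nor the ItemContainerGenerator is needed, and an ObservableCollection as ItemsSource removes the need to reset the list by hand:

        private void btTag_Click(object sender, RoutedEventArgs e)
        {
            // The DataTemplate binds this button to one item, so its
            // DataContext is the object behind the clicked ListBoxItem.
            var item = (ObjectInQuestion)((FrameworkElement)sender).DataContext;

            tagCollection.Remove(item);   // an ObservableCollection<ObjectInQuestion>
                                          // assigned to taglistBox.ItemsSource once;
                                          // the ListBox refreshes itself on Remove
            DeleteFromDatabase(item);     // hypothetical - your data-access call here
        }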

  • Tomcat JNDI Connection Pool docs - Random Connection Closed Exceptions

    - by Andy Faibishenko
    I found this in the Tomcat documentation here. What I don't understand is why they close all the JDBC objects twice - once in the try{} block and once in the finally{} block. Why not just close them once, in the finally{} clause? This is the relevant documentation:

    Random Connection Closed Exceptions

    These can occur when one request gets a db connection from the connection pool and closes it twice. When using a connection pool, closing the connection just returns it to the pool for reuse by another request; it doesn't close the connection. And Tomcat uses multiple threads to handle concurrent requests. Here is an example of the sequence of events which could cause this error in Tomcat:

        Request 1 running in Thread 1 gets a db connection.
        Request 1 closes the db connection.
        The JVM switches the running thread to Thread 2.
        Request 2 running in Thread 2 gets a db connection
          (the same db connection just closed by Request 1).
        The JVM switches the running thread back to Thread 1.
        Request 1 closes the db connection a second time in a finally block.
        The JVM switches the running thread back to Thread 2.
        Request 2 Thread 2 tries to use the db connection but fails
          because Request 1 closed it.

    Here is an example of properly written code to use a db connection obtained from a connection pool:

        Connection conn = null;
        Statement stmt = null;  // Or PreparedStatement if needed
        ResultSet rs = null;
        try {
            conn = ... get connection from connection pool ...
            stmt = conn.createStatement("select ...");
            rs = stmt.executeQuery();
            ... iterate through the result set ...
            rs.close();
            rs = null;
            stmt.close();
            stmt = null;
            conn.close();  // Return to connection pool
            conn = null;   // Make sure we don't close it twice
        } catch (SQLException e) {
            ... deal with errors ...
        } finally {
            // Always make sure result sets and statements are closed,
            // and the connection is returned to the pool
            if (rs != null) {
                try { rs.close(); } catch (SQLException e) { ; }
                rs = null;
            }
            if (stmt != null) {
                try { stmt.close(); } catch (SQLException e) { ; }
                stmt = null;
            }
            if (conn != null) {
                try { conn.close(); } catch (SQLException e) { ; }
                conn = null;
            }
        }

  • Mock static method Activator.CreateInstance to return a mock of another class

    - by Jeep87c
    I have this factory class and I want to test it correctly. Let's say I have an abstract class with many children (inheritance). As you can see in my factory's BuildChild method, I want to be able to create an instance of a child class at runtime. I must be able to create this instance during runtime because the type won't be known before runtime. And I can NOT use Unity for this project (if I could, I would not ask how to achieve this).

    Here's the factory class I want to test:

        public class Factory
        {
            public AnAbstractClass BuildChild(Type childType, object parameter)
            {
                AnAbstractClass child = (AnAbstractClass)Activator.CreateInstance(childType);
                child.Initialize(parameter);
                return child;
            }
        }

    To test this, I want to find a way to mock Activator.CreateInstance so that it returns my own mocked object of a child class. How can I achieve this? Or, if you have a better way to do this without using Activator.CreateInstance (and without Unity), I'm open to it if it's easier to test and mock!

    I'm currently using Moq to create my mocks, but since Activator.CreateInstance is a static method on a static class, I can't figure out how to do this (I already know that Moq can only create mock instances of objects). I took a look at Fakes from Microsoft, but without success (I had some difficulty understanding how it works and finding well-explained examples). Please help me!

    EDIT: I need to mock Activator.CreateInstance because I want to force this method to return another mocked object. The correct term for what I want is to stub this method (not to mock it). So when I test BuildChild like this:

        [TestMethod]
        public void TestBuildChild()
        {
            var mockChildClass = new Mock<AChildClass>();

            // TODO: Stub/mock Activator.CreateInstance to return mockChildClass
            // when called with "type" and "parameter" as follows.

            var type = typeof(AChildClass);
            var parameter = "A parameter";
            var child = this._factory.BuildChild(type, parameter);
        }

    Activator.CreateInstance called with type and parameter will return my mocked object instead of creating a new instance of the real child class (not yet implemented).
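
    Moq cannot intercept static calls, so one common reshaping (a sketch, not the only option) is to make the instantiation step itself injectable and stub that delegate in the test:

        public class Factory
        {
            private readonly Func<Type, object> _activator;

            // Production code keeps the Activator.CreateInstance behaviour.
            public Factory() : this(Activator.CreateInstance) { }

            // Tests inject any creation function they like.
            public Factory(Func<Type, object> activator)
            {
                _activator = activator;
            }

            public AnAbstractClass BuildChild(Type childType, object parameter)
            {
                var child = (AnAbstractClass)_activator(childType);
                child.Initialize(parameter);
                return child;
            }
        }

        // In the test:
        // var factory = new Factory(t => mockChildClass.Object);
        // var child = factory.BuildChild(typeof(AChildClass), "A parameter");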

  • SHAREPOINT: Custom Field type property storage defined for custom field

    - by Eric Rockenbach
    OK, here is a great question. I have a set of generic custom fields that are highly configurable from an end-user perspective, and the configuration is getting overbearing, as there are nearly 100-plus items each custom field allows you to set in the areas of server/client validation, server/client events/actions, server/client bindings (parent/child), display properties for form/control, etc.

    Right now I'm storing most of these values as "Text" in the field XML of my property schema. I'm very familiar with the multi-column value, but this is not a complex custom type in the sense of an array. I have also considered creating serializable objects, stuffing them into the text field, and then pulling them out and de-serializing them when editing through the field editor or acting on the rules through the custom SPField.

    So I'm trying to take the following, for example:

        <PropertySchema>
          <Fields>
            <Field Name="EntityColumnName" Hidden="TRUE" DisplayName="EntityColumnName"
                   MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityColumnParentPK" Hidden="TRUE" DisplayName="EntityColumnParentPK"
                   MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityColumnValueName" Hidden="TRUE" DisplayName="EntityColumnValueName"
                   MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityListName" Hidden="TRUE" DisplayName="EntityListName"
                   MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntitySiteUrl" Hidden="TRUE" DisplayName="EntitySiteUrl"
                   MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
          </Fields>
        </PropertySchema>

    And turn it into this:

        <PropertySchema>
          <Fields>
            <Field Name="ServerValidationRules" Hidden="TRUE" DisplayName="ServerValidationRules"
                   Type="ServerValidationRulesType">
              <default></default>
            </Field>
          </Fields>
        </PropertySchema>

    Ideas?
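
    If the serialize-into-a-text-property route is taken, a minimal round trip could look like this (a sketch; the class shape is an assumption, and whether the serialized XML fits the property's MaxLength would need checking). Needs System.IO and System.Xml.Serialization:

        [Serializable]
        public class ServerValidationRules
        {
            public string EntityColumnName { get; set; }
            public string EntityListName { get; set; }
            // ... the remaining configuration items
        }

        static string ToFieldText(ServerValidationRules rules)
        {
            var serializer = new XmlSerializer(typeof(ServerValidationRules));
            using (var writer = new StringWriter())
            {
                serializer.Serialize(writer, rules);
                return writer.ToString();   // store in the Type="Text" property
            }
        }

        static ServerValidationRules FromFieldText(string text)
        {
            var serializer = new XmlSerializer(typeof(ServerValidationRules));
            using (var reader = new StringReader(text))
                return (ServerValidationRules)serializer.Deserialize(reader);
        }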

  • Use native HBitmap in C# while preserving alpha channel/transparency. Please check this code, it works on my computer...

    - by David
    Let's say I get an HBITMAP object/handle from a native Windows function. I can convert it to a managed bitmap using Bitmap.FromHbitmap(nativeHBitmap), but if the native image has transparency information (an alpha channel), it is lost in this conversion.

    There are a few questions on Stack Overflow regarding this issue. Using information from the first answer to one of them (How to draw ARGB bitmap using GDI+?), I wrote a piece of code that I've tried, and it works. It gets the native HBITMAP's width, height and pointer to the pixel data using GetObject and the BITMAP structure, then calls the managed Bitmap constructor:

        Bitmap managedBitmap = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight,
            bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits);

    As I understand it (please correct me if I'm wrong), this does not copy the actual pixel data from the native HBITMAP to the managed bitmap; it simply points the managed bitmap at the native HBITMAP's pixel data. And I don't draw the bitmap onto another Graphics (DC) or another bitmap here, to avoid unnecessary memory copying, especially for large bitmaps.

    I can simply assign this bitmap to a PictureBox control or the Form's BackgroundImage property, and it works: the bitmap is displayed correctly, with transparency. When I no longer use the bitmap, I make sure the BackgroundImage property no longer points to it, and I dispose both the managed bitmap and the native HBITMAP.

    The question: can you tell me if this reasoning and code seem correct? I hope I will not get unexpected behaviors or errors, and I hope I'm freeing all the memory and objects correctly.

        private void Example()
        {
            IntPtr nativeHBitmap = IntPtr.Zero;

            /* Get the native HBitmap object from a Windows function here */

            // Create the BITMAP structure and get info from our nativeHBitmap
            NativeMethods.BITMAP bitmapStruct = new NativeMethods.BITMAP();
            NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(bitmapStruct), ref bitmapStruct);

            // Create the managed bitmap using the pointer to the pixel data of the native HBitmap
            Bitmap managedBitmap = new Bitmap(
                bitmapStruct.bmWidth, bitmapStruct.bmHeight,
                bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits);

            // Show the bitmap
            this.BackgroundImage = managedBitmap;

            /* Run the program, use the image */
            MessageBox.Show("running...");

            // When the image is no longer needed, dispose both the managed
            // Bitmap object and the native HBitmap
            this.BackgroundImage = null;
            managedBitmap.Dispose();
            NativeMethods.DeleteObject(nativeHBitmap);
        }

        internal static class NativeMethods
        {
            [StructLayout(LayoutKind.Sequential)]
            public struct BITMAP
            {
                public int bmType;
                public int bmWidth;
                public int bmHeight;
                public int bmWidthBytes;
                public ushort bmPlanes;
                public ushort bmBitsPixel;
                public IntPtr bmBits;
            }

            [DllImport("gdi32", CharSet = CharSet.Auto, EntryPoint = "GetObject")]
            public static extern int GetObjectBitmap(IntPtr hObject, int nCount, ref BITMAP lpObject);

            [DllImport("gdi32.dll")]
            internal static extern bool DeleteObject(IntPtr hObject);
        }

  • More advanced usage of interfaces

    - by owca
    To be honest, I'm not quite sure I understand the task myself. :) I was told to create a class MySimpleIt that implements Iterator and Iterable and allows the provided test code to run. Arguments and variables of objects cannot be Collections or arrays.

    The test code:

        MySimpleIt msi = new MySimple(10, 100, MySimpleIt.PRIME_NUMBERS);
        for (int el : msi)
            System.out.print(el + " ");
        System.out.println();
        msi.setType(MySimpleIterator.ODD_NUMBERS);
        msi.setLimits(15, 30);
        for (int el : msi)
            System.out.print(el + " ");
        System.out.println();
        msi.setType(MySimpleIterator.EVEN_NUMBERS);
        for (int el : msi)
            System.out.print(el + " ");
        System.out.println();

    The result I should obtain:

        11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97
        15 17 19 21 23 25 27 29
        16 18 20 22 24 26 28 30

    And here's my code:

        import java.util.Iterator;

        interface MySimpleIterator {
            static int ODD_NUMBERS = 0;
            static int EVEN_NUMBERS = 1;
            static int PRIME_NUMBERS = 2;

            int setType(int i);
        }

        public class MySimpleIt implements Iterable, Iterator, MySimpleIterator {
            public MySimple my;

            public MySimpleIt(MySimple m) {
                my = m;
            }

            public int setType(int i) {
                my.numbers = i;
                return my.numbers;
            }

            public void setLimits(int d, int u) {
                my.down = d;
                my.up = u;
            }

            public Iterator iterator() {
                Iterator it = this.iterator();
                return it;
            }

            public void remove() {
            }

            public Object next() {
                Object o = new Object();
                return o;
            }

            public boolean hasNext() {
                return true;
            }
        }

        class MySimple {
            public int down;
            public int up;
            public int numbers;

            public MySimple(int d, int u, int n) {
                down = d;
                up = u;
                numbers = n;
            }
        }

    In the test code I get an error on the line creating the MySimpleIt msi object, as it finds MySimple instead of MySimpleIt. I also get errors in the for-each loops, because the compiler wants ints there instead of Object. Does anyone have any idea how to solve this?

  • Silverlight ControlTemplate and F#

    - by akaphenom
    Has anybody had any success incorporating a Silverlight ControlTemplate into an F# Silverlight application? I am trying to add transitions to the Navigation.Frame element, following along with a C# example:
    http://www.davidpoll.com/2009/07/19/silverlight-3-navigation-adding-transitions-to-the-frame-control

    The downloaded source uses the MSBUILD:Compile option on the template XAML, and the file is included as a "Page"... ILDASM doesn't show any object created for the XAML. In my project I included it as a "Resource" (the same as I have done for my pages) and referenced it in app.xaml:

        <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     x:Class="Module1.MyApp">
          <Application.Resources>
            <ResourceDictionary>
              <ResourceDictionary.MergedDictionaries>
                <ResourceDictionary Source="/FSSilverlightApp;component/TransitioningFrame.xaml"/>
              </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
          </Application.Resources>
        </Application>

    TransitioningFrame.xaml is as follows:

        <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                            xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation"
                            xmlns:toolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Layout.Toolkit">
          <ControlTemplate x:Key="TransitioningFrame" TargetType="navigation:Frame">
            <Border Background="{TemplateBinding Background}"
                    BorderBrush="{TemplateBinding BorderBrush}"
                    BorderThickness="{TemplateBinding BorderThickness}"
                    HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
                    VerticalAlignment="{TemplateBinding VerticalContentAlignment}">
              <toolkit:TransitioningContentControl
                  Content="{TemplateBinding Content}"
                  Cursor="{TemplateBinding Cursor}"
                  Margin="{TemplateBinding Padding}"
                  HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
                  VerticalAlignment="{TemplateBinding VerticalContentAlignment}"
                  HorizontalContentAlignment="{TemplateBinding HorizontalContentAlignment}"
                  VerticalContentAlignment="{TemplateBinding VerticalContentAlignment}"
                  Transition="DefaultTransition" />
            </Border>
          </ControlTemplate>
        </ResourceDictionary>

    My page objects all load their respective XAML with the following code:

        type Page1() as this =
            inherit UriUserControl("/FSSilverlightApp;component/Page1.xaml")
            do Application.LoadComponent(this, base.uri)

    And somewhere in app startup:

        let p1 = new Page1()

    I do not have a comparable piece for the ControlTemplate, though I was hoping the Application object and app.xaml would pull it in magically. (As an aside, the reliance on this magic has made setting up a 100% F# Silverlight application rather tricky, as nearly all the published articles I can find are based around wizards and shortcuts - very little on the actual plumbing - ugh.) Any advice or thoughts on the subject are appreciated.

  • Facing Memory Leaks in AES Encryption Method.

    - by Mubashar Ahmad
    Can anyone please identify whether there are any possible memory leaks in the following code? I have tried .NET Memory Profiler, and it says "CreateEncryptor" and some other functions are leaving unmanaged memory leaks; I have confirmed this using performance monitors. But Dispose, Clear and Close calls are already placed wherever possible. Please advise me accordingly - it's urgent.

        public static string Encrypt(string plainText, string key)
        {
            // Set up the encryption objects
            byte[] encryptedBytes = null;
            using (AesCryptoServiceProvider acsp = GetProvider(Encoding.UTF8.GetBytes(key)))
            {
                byte[] sourceBytes = Encoding.UTF8.GetBytes(plainText);
                using (ICryptoTransform ictE = acsp.CreateEncryptor())
                {
                    // Set up stream to contain the encryption
                    using (MemoryStream msS = new MemoryStream())
                    {
                        // Perform the encryption, storing output into the stream
                        using (CryptoStream csS = new CryptoStream(msS, ictE, CryptoStreamMode.Write))
                        {
                            csS.Write(sourceBytes, 0, sourceBytes.Length);
                            csS.FlushFinalBlock();

                            // sourceBytes are now encrypted as an array of secure bytes
                            encryptedBytes = msS.ToArray();  // .ToArray() is important, don't mess with the buffer
                            csS.Close();
                        }
                        msS.Close();
                    }
                }
                acsp.Clear();
            }

            // Return the encrypted bytes as a BASE64-encoded string
            return Convert.ToBase64String(encryptedBytes);
        }

        private static AesCryptoServiceProvider GetProvider(byte[] key)
        {
            AesCryptoServiceProvider result = new AesCryptoServiceProvider();
            result.BlockSize = 128;
            result.KeySize = 256;
            result.Mode = CipherMode.CBC;
            result.Padding = PaddingMode.PKCS7;
            result.GenerateIV();
            result.IV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

            byte[] RealKey = GetKey(key, result);
            result.Key = RealKey;
            // result.IV = RealKey;
            return result;
        }

        private static byte[] GetKey(byte[] suggestedKey, SymmetricAlgorithm p)
        {
            byte[] kRaw = suggestedKey;
            List<byte> kList = new List<byte>();

            for (int i = 0; i < p.LegalKeySizes[0].MaxSize; i += 8)
            {
                kList.Add(kRaw[(i / 8) % kRaw.Length]);
            }
            byte[] k = kList.ToArray();
            return k;
        }

  • Creating a System::String object from a BSTR in Managed C++ - is this way a good idea???

    - by Eli
    My co-worker is filling a System::String object with double-byte characters from an unmanaged library by the following method:

        RFC_PARAMETER aux;
        Object* target;
        RFC_UNICODE_TYPE_ELEMENT* elm;

        elm = &(m_coreObject->m_pStructMeta->m_typeElements[index]);
        aux.name = NULL;
        aux.nlen = 0;
        aux.type = elm->type;
        aux.leng = elm->c2_length;
        aux.addr = m_coreObject->m_rfcWa + elm->c2_offset;
        GlobalFunctions::CreateObjectForRFCField(target, aux, elm->decimals);
        GlobalFunctions::ReadRFCField(target, aux, elm->decimals);

    GlobalFunctions::CreateObjectForRFCField creates a System::String object filled with spaces (for padding) to what the unmanaged library states the max length should be:

        static void CreateObjectForRFCField(Object*& object, RFC_PARAMETER& par, unsigned dec)
        {
            switch (par.type)
            {
            case TYPC:
                object = new String(' ', par.leng / sizeof(_TCHAR));
                break;
            // unimportant afterwards.
            }
        }

    And GlobalFunctions::ReadRFCField() copies the data from the library into the created String object, preserving the space padding:

        static void ReadRFCField(String* target, RFC_PARAMETER& par)
        {
            int lngt;
            _TCHAR* srce;

            switch (par.type)
            {
            case TYPC:
            case TYPDATE:
            case TYPTIME:
            case TYPNUM:
                lngt = par.leng / sizeof(_TCHAR);
                srce = (_TCHAR*)par.addr;
                break;
            case RFCTYPE_STRING:
                lngt = (*(_TCHAR**)par.addr != NULL) ? (int)_tcslen(*(_TCHAR**)par.addr) : 0;
                srce = *(_TCHAR**)par.addr;
                break;
            default:
                throw new DotNet_Incomp_RFCType2;
            }

            if (lngt > target->Length)
                lngt = target->Length;

            GCHandle gh = GCHandle::Alloc(target, GCHandleType::Pinned);
            wchar_t* buff = reinterpret_cast<wchar_t*>(gh.AddrOfPinnedObject().ToPointer());
            _wcsnset(buff, ' ', target->Length);
            _snwprintf(buff, lngt, _T2WFSP, srce);
            gh.Free();
        }

    Now, on occasion, we see access violations thrown in the _snwprintf call. My question really is: is it appropriate to create a string padded to a length (ideally to pre-allocate the internal buffer), and then to modify the String using GCHandle::Alloc and the mess above? And yes, I know that System::String objects are supposed to be immutable - I'm looking for a definitive "This is WRONG and here is why".

    Thanks, Eli.

  • TcpListener is queuing connections faster than I can clear them

    - by Matthew Brindley
    As I understand it, TcpListener will queue connections once you call Start(). Each time you call AcceptTcpClient (or BeginAcceptTcpClient), it dequeues one item from the queue.

    If we load-test our TcpListener app by sending 1,000 connections to it at once, the queue builds far faster than we can clear it, leading (eventually) to timeouts from the client because its connection sat in the queue too long without a response. However, the server doesn't appear to be under much pressure: our app isn't consuming much CPU time, and the other monitored resources on the machine aren't breaking a sweat. It feels like we're not running efficiently enough right now.

    We're calling BeginAcceptTcpClient and then immediately handing over to a ThreadPool thread to actually do the work, then calling BeginAcceptTcpClient again. The work involved doesn't seem to put any pressure on the machine; it's basically just a 3-second sleep followed by a dictionary lookup and then a 100-byte write to the TcpClient's stream.

    Here's the TcpListener code we're using:

        // Thread signal.
        private static ManualResetEvent tcpClientConnected = new ManualResetEvent(false);

        public void DoBeginAcceptTcpClient(TcpListener listener)
        {
            // Set the event to nonsignaled state.
            tcpClientConnected.Reset();

            listener.BeginAcceptTcpClient(new AsyncCallback(DoAcceptTcpClientCallback), listener);

            // Wait for signal
            tcpClientConnected.WaitOne();
        }

        public void DoAcceptTcpClientCallback(IAsyncResult ar)
        {
            // Get the listener that handles the client request, and the TcpClient
            TcpListener listener = (TcpListener)ar.AsyncState;
            TcpClient client = listener.EndAcceptTcpClient(ar);

            if (inProduction)
                ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client, serverCertificate)); // With SSL
            else
                ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client)); // Without SSL

            // Signal the calling thread to continue.
            tcpClientConnected.Set();
        }

        public void Start()
        {
            currentHandledRequests = 0;
            tcpListener = new TcpListener(IPAddress.Any, 10000);
            try
            {
                tcpListener.Start();
                while (true)
                    DoBeginAcceptTcpClient(tcpListener);
            }
            catch (SocketException)
            {
                // The TcpListener is shutting down, exit gracefully
                CheckBuffer();
                return;
            }
        }

    I'm assuming the answer will be related to using Sockets instead of TcpListener, or at least using TcpListener.AcceptSocket, but I wonder how we'd go about doing that?

    One idea we had was to call AcceptTcpClient and immediately Enqueue the TcpClient into one of multiple Queue<TcpClient> objects. That way, we could poll those queues on separate threads (one queue per thread) without running into monitors that might block a thread while waiting for other Dequeue operations. Each queue thread could then use ThreadPool.QueueUserWorkItem to have the work done in a ThreadPool thread and then move on to dequeuing the next TcpClient in its queue. Would you recommend this approach, or is our problem that we're using TcpListener and no amount of rapid dequeuing is going to fix that?
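
    One thing worth noting about the code above (an observation plus a sketch, not a guaranteed fix): the ManualResetEvent handshake allows only one accept to be in flight at a time and adds an event round-trip per connection. A plain blocking accept loop that hands each client straight to the thread pool removes that ceiling:

        public void Start()
        {
            tcpListener = new TcpListener(IPAddress.Any, 10000);
            tcpListener.Start(1000);   // also deepens the pending-connection backlog

            while (true)
            {
                TcpClient client;
                try
                {
                    client = tcpListener.AcceptTcpClient();  // blocks until a client arrives
                }
                catch (SocketException)
                {
                    CheckBuffer();   // the TcpListener is shutting down
                    return;
                }
                ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client));
            }
        }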

  • RSpec mocking a nested model in Rails - ActionController problem

    - by emson
    Hi all. I am having a problem in RSpec when my mock object is asked for a URL by the ActionController: the URL is a mock one and not a correct resource URL. I am running RSpec 1.3.0 and Rails 2.3.5.

    Basically I have two models, where a subject has many notes:

        class Subject < ActiveRecord::Base
          validates_presence_of :title
          has_many :notes
        end

        class Note < ActiveRecord::Base
          validates_presence_of :title
          belongs_to :subject
        end

    My routes.rb file nests these two resources as such:

        ActionController::Routing::Routes.draw do |map|
          map.resources :subjects, :has_many => :notes
        end

    The NotesController looks like this:

        class NotesController < ApplicationController
          # POST /notes
          # POST /notes.xml
          def create
            @subject = Subject.find(params[:subject_id])
            @note = @subject.notes.create!(params[:note])
            respond_to do |format|
              format.html { redirect_to(@subject) }
            end
          end
        end

    Finally, this is my RSpec spec, which simply posts my mocked objects to the NotesController:

        it "should create note and redirect to subject without javascript" do
          # usual rails controller test setup here
          subject = mock(Subject)
          Subject.stub(:find).and_return(subject)

          notes_proxy = mock('association proxy', { "create!" => Note.new })
          subject.stub(:notes).and_return(notes_proxy)

          post :create, :subject_id => subject, :note => { :title => 'note title', :body => 'note body' }
        end

    The problem is that when RSpec's post method is called, the NotesController correctly handles the mock Subject object and creates the new Note object. However, when the NotesController#create method tries to redirect_to, I get the following error:

        NoMethodError in 'NotesController should create note and redirect to subject without javascript'
        undefined method `spec_mocks_mock_url' for #<NotesController:0x1034495b8>

    Now this is caused by a bit of Rails trickery that passes an ActiveRecord object (@subject in our case, which isn't ActiveRecord but a mock object) eventually to url_for, which passes all the options to Rails' routing, which then determines the URL.

    My question is: how can I mock Subject so that the correct options are passed and my test passes? I've tried passing in :controller => 'subjects' options, but no joy. Is there some other way of doing this? Thanks...

  • Unexpected behavior of IntentService

    - by kknight
    I used IntentService in my code instead of Service because IntentService creates a thread for me in onHandleIntent(Intent intent), so I don't have to create a thread myself in the service code. I expected that two intents sent to the same IntentService would execute in parallel, because a thread is created in the IntentService for each intent. But my code showed that the two intents executed sequentially.

    This is my IntentService code:

        public class UpdateService extends IntentService {
            public static final String TAG = "HelloTestIntentService";

            public UpdateService() {
                super("News UpdateService");
            }

            protected void onHandleIntent(Intent intent) {
                String userAction = intent.getStringExtra("userAction");
                Log.v(TAG, "" + new Date() + ", In onHandleIntent for userAction = "
                        + userAction + ", thread id = " + Thread.currentThread().getId());
                if ("1".equals(userAction)) {
                    try {
                        Thread.sleep(20 * 1000);
                    } catch (InterruptedException e) {
                        Log.e(TAG, "error", e);
                    }
                    Log.v(TAG, "" + new Date() + ", This thread is waked up.");
                }
            }
        }

    And the code that calls the service is below:

        public class HelloTest extends Activity {
            //@Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                Intent selectIntent = new Intent(this, UpdateService.class);
                selectIntent.putExtra("userAction", "1");
                this.startService(selectIntent);

                selectIntent = new Intent(this, UpdateService.class);
                selectIntent.putExtra("userAction", "2");
                this.startService(selectIntent);
            }
        }

    I saw this in the log:

        V/HelloTestIntentService(  848): Wed May 05 14:59:37 PDT 2010, In onHandleIntent for userAction = 1, thread id = 8
        D/dalvikvm(  609): GC freed 941 objects / 55672 bytes in 99ms
        V/HelloTestIntentService(  848): Wed May 05 15:00:00 PDT 2010, This thread is waked up.
        V/HelloTestIntentService(  848): Wed May 05 15:00:00 PDT 2010, In onHandleIntent for userAction = 2, thread id = 8
        I/ActivityManager(  568): Stopping service: com.example.android/.UpdateService

    The log shows that the second intent waited for the first intent to finish, and that they ran on the same thread. Is there anything I misunderstood about IntentService? To make two service intents execute in parallel, do I have to replace IntentService with Service and start a thread myself in the service code? Thanks.
