Search Results

Search found 39456 results on 1579 pages for 'why do you'.


  • Why not systematically attach events in WPF instead of using a delegate?

    - by user310291
    For a button to handle an event, we can add a delegate to the button's Click event: this.button1.Click += new System.EventHandler(this.button1_Click); But in WPF, contrary to WinForms, you can also attach a handler http://msdn.microsoft.com/en-us/magazine/cc785480.aspx So why not do so for the button? Is performance better in the first case, maybe? Update: I mean this (Attached Events): In order to enable elements to handle events that are declared in a different element, WPF supports something called attached events. Attached events are routed events that support a hookup in XAML on elements other than the type on which the event is declared. For example, if you want the Grid element to listen for a Button.Click event to bubble past, you would simply hook it up like the following: <Grid Button.Click="myButton_Click"> <Button Name="myButton" >Click Me</Button> </Grid> The resulting code in the compile-time-generated partial class now looks like this: #line 5 "..\..\Window1.xaml" ((System.Windows.Controls.Grid)(target)).AddHandler( System.Windows.Controls.Primitives.ButtonBase.ClickEvent, new System.Windows.RoutedEventHandler(this.myButton_Click));
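
    For comparison, here is a minimal code-behind sketch of the two hookup styles side by side. It is only an illustration: the Window1 class and the element names myGrid and myButton are assumptions about the surrounding XAML, not taken from the question.

        using System.Windows;
        using System.Windows.Controls.Primitives;

        public partial class Window1 : Window
        {
            public Window1()
            {
                InitializeComponent();

                // Direct hookup: the handler is attached to the button itself.
                myButton.Click += new RoutedEventHandler(myButton_Click);

                // Attached (routed) hookup: the grid listens for any Button.Click
                // that bubbles up through it.
                myGrid.AddHandler(ButtonBase.ClickEvent,
                                  new RoutedEventHandler(myButton_Click));
            }

            private void myButton_Click(object sender, RoutedEventArgs e)
            {
                // e.OriginalSource identifies the element that actually raised the event.
            }
        }

    Both handlers fire for a click on the button; the attached form is what the compile-time-generated code quoted in the question is doing under the covers.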

    Read the article

  • C#: Why can't I find the Sum() of this HashSet? It says "Arithmetic operation resulted in an overflow."

    - by user2332665
    I was trying to solve this problem (Project Euler, problem 125). This is my solution in Python (just for understanding the logic): import math lim=10**8 found=set() for start in xrange(1,int(math.sqrt(lim))): sos = start*start for i in xrange(start+1,int(math.sqrt(lim))): sos += (i*i) if sos >= lim: break s=str(int(sos)) if s==s[::-1]: found.add(sos) print sum(found) The same code I wrote in C# is as follows: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication1 { class Program { public static bool isPalindrome(string s) { string temp = ""; for (int i=s.Length-1;i>=0;i-=1){temp+=s[i];} return (temp == s); } static void Main(string[] args) { int lim = Convert.ToInt32(Math.Pow(10,8)); var found = new HashSet<int>(); for (int start = 1; start < Math.Sqrt(lim); start += 1) { int s = start *start; for (int i = start + 1; start < Math.Sqrt(lim); i += 1) { s += i * i; if (s > lim) { break; } if (isPalindrome(s.ToString())) { found.Add(s); } } } Console.WriteLine(found.Sum()); } } } The code debugs fine until it gives an exception at Console.WriteLine(found.Sum()); (line 31). Why can't I find the Sum() of the set found?
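
    A note on the overflow: Enumerable.Sum over a sequence of int both returns an int and performs the addition checked, so a total above int.MaxValue produces exactly this exception. One way around it is to widen to long before summing; the sketch below uses stand-in values, not data from the problem.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class SumSketch
        {
            static void Main()
            {
                // Stand-in values; the real set holds palindromic sums of squares.
                var found = new HashSet<int> { 595, 636, 1729 };

                // Widen each element to long so the accumulation is 64-bit.
                long total = found.Sum(x => (long)x);
                Console.WriteLine(total);
            }
        }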

    Read the article

  • Why am I not able to update the column based on a condition which is not the primary key?

    - by Gaurav Sharma
    Why am I not able to update the column based on a condition which is not the primary key? I am trying to update the constituencies table where the name matches a specific criterion, as shown below, but the queries below give an error: Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'table constituencies set city_id = '1' where constituencies.name = "East Delhi"' at line 1 update table constituencies set city_id = '1' where constituencies.name = "East Delhi"; update table constituencies set city_id = '1' where constituencies.name = "South Delhi"; update table constituencies set city_id = '1' where constituencies.name = "Delhi Sadar"; update table constituencies set city_id = '1' where constituencies.name = "Karol Bagh"; update table constituencies set city_id = '1' where constituencies.name = "New Delhi"; update table constituencies set city_id = '1' where constituencies.name = "Outer Delhi"; update table constituencies set city_id = '1' where constituencies.name = "North East Delhi"; update table constituencies set city_id = '1' where constituencies.name = "North West Delhi"; update table constituencies set city_id = '1' where constituencies.name = "West Delhi"; Is it necessary that the condition should be checked with a primary key only? Please throw some light on the above.

    Read the article

  • Why is it a bad practice to call System.gc?

    - by zneak
    After answering a question about how to force-free objects in Java (the guy was clearing a 1.5 GB HashMap) with System.gc(), I've been told it's a bad practice to call System.gc() manually, but the comments seemed mixed about it. So much so that no one dared to upvote it, nor downvote it. I've been told there that it's a bad practice, but then I've also been told that garbage collector runs don't systematically stop the world anymore, and that it could also be seen as only a hint, so I'm kind of at a loss. I do understand that usually the JVM knows better than you when it needs to reclaim memory. I also understand that worrying about a few kilobytes of data is silly. And I also understand that even megabytes of data isn't what it was a few years back. But still, 1.5 gigabytes? And you know there's 1.5 GB of data hanging around in memory; it's not like it's a shot in the dark. Is System.gc() systematically bad, or is there some point at which it becomes okay? So the question is actually twofold: Why is it, or is it not, a bad practice to call System.gc()? Is it really a hint under certain implementations, or is it always a full collection cycle? Are there really garbage collector implementations that can do their work without stopping the world? Please shed some light on the various assertions people have made. Where's the threshold? Is it never a good idea to call System.gc(), or are there times when it's acceptable? If any, what are those times?

    Read the article

  • Why can't I get a TRUE return in this prepared statement?

    - by Cortopasta
    I can't seem to get this to do anything but return false. My best guess is that the prepared statement isn't executing, but I have no idea why. private function check_credentials($plain_username, $md5_password) { global $dbcon; $ac = new ac(); $ac->dbconnect(); $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1'); $userid->bindParam(':username', $plain_username); $userid->bindParam(':password', $md5_password); $userid->execute(); $id = $userid->fetch(); Return $id; } *EDIT:*I've even tried hard coding the username and password into the function itself to try and isolate the problem like this: private function check_credentials($plain_username, $md5_password) { global $dbcon; $plain_username = "jim"; $md5_username = "waffles"; $ac = new ac(); $ac->dbconnect(); $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1'); $userid->bindParam(':username', $plain_username); $userid->bindParam(':password', $md5_password); $userid->execute(); print_r($dbcon->errorInfo()); $id = $userid->fetch(); Return $id; } Still nothing :-/

    Read the article

  • Why does Clojure hang after having performed my calculations?

    - by Thomas
    Hi all, I'm experimenting with filtering through elements in parallel. For each element, I need to perform a distance calculation to see if it is close enough to a target point. Never mind that data structures already exist for doing this, I'm just doing initial experiments for now. Anyway, I wanted to run some very basic experiments where I generate random vectors and filter them. Here's my implementation that does all of this: (defn pfilter [pred coll] (map second (filter first (pmap (fn [item] [(pred item) item]) coll)))) (defn random-n-vector [n] (take n (repeatedly rand))) (defn distance [u v] (Math/sqrt (reduce + (map #(Math/pow (- %1 %2) 2) u v)))) (defn -main [& args] (let [[n-str vectors-str threshold-str] args n (Integer/parseInt n-str) vectors (Integer/parseInt vectors-str) threshold (Double/parseDouble threshold-str) random-vector (partial random-n-vector n) u (random-vector)] (time (println n vectors (count (pfilter (fn [v] (< (distance u v) threshold)) (take vectors (repeatedly random-vector)))))))) The code executes and returns what I expect, that is, the parameter n (the length of the vectors), vectors (the number of vectors) and the number of vectors that are closer than a threshold to the target vector. What I don't understand is why the program hangs for an additional minute before terminating. Here is the output of a run which demonstrates the problem: $ time lein run 10 100000 1.0 [null] 10 100000 12283 [null] "Elapsed time: 3300.856 msecs" real 1m6.336s user 0m7.204s sys 0m1.495s Any comments on how to filter in parallel in general are also more than welcome, as I haven't yet confirmed that pfilter actually works.

    Read the article

  • Why is Microsoft under-supporting or under-developing VB.NET?

    - by Will Marcouiller
    I ran into a situation where the lack of some features has become somewhat frustrating while developing in VB.NET 2.0. Since my first day of programming, I've always been a C programmer, and still am. Naturally, I chose C# as my favorite .NET language. Recently, a customer of mine has required that all of his development projects outside of SharePoint development be written in VB.NET 2.0, so as to avoid conflicts between systems. That is a legitimate choice of his which I somewhat approve of, since he's running some old central systems and is slowly migrating toward the latest technologies. As for me, I would have preferred to go with C#, but then, never having done much VB in my life, I see it as an opportunity to learn something new: how to handle this and that in VB.NET, etc. Except that the syntax is really too verbose for me, which is a pain! I got used to it and that is fine. However, I recently wanted to use the InternalsVisibleToAttribute, which I discovered recently here on SO. But then, in addition to not being able to have a lambda expression that returns no value, which I discovered months ago, today I learned that I can't use the attribute in VB.NET! Here is what I read in an article: [...] Sorry VB.Net developers, Microsoft is again shunning you guys and this attribute is NOT available to you.... :( And here is the link: InternalsVisibleTo: Testing internal methods in .Net 2.0 I have heard from Anders Hejlsberg himself, while watching a webcast of his presentation of the .NET 4.0 Framework, that the VB.NET team was working, or has worked, in collaboration with the C# team (Eric Lippert and others) in order to bring VB.NET to offer the same features as C# offers. But then, I say to myself that the VB.NET team has a huge step forward to make, if already in .NET 2.0 some of the most important features were lacking! So my question is this: Why is Microsoft under-supporting or under-developing VB.NET? Will VB.NET always be lacking the features C# offers?
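
    For readers who haven't used it, the attribute in question is applied at the assembly level; a minimal C# sketch looks like the following. The assembly name MyProject.Tests is purely illustrative, and a strongly named friend assembly would also need its full public key appended to the string.

        // AssemblyInfo.cs of the assembly whose internal (Friend) members
        // should be visible to the test assembly.
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("MyProject.Tests")]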

    Read the article

  • Online voice chat: Why client-server model vs. peer-to-peer model?

    - by sstallings
    I am adding online voice chat to a Silverlight app. I've been reviewing current apps, services and SDKs found thru online searches and forums. I'm finding that the majority of these implement a client-server (C/S) model and I'm trying to understand why that model versus a peer-to-peer (PTP) model. To me PTP would be preferable because going direct between peers would be more efficient (fewer IP hops and no processing along the way by a server computer) and no need for a server and its costs and dependencies. I found some products offer the ability to switch from PTP to C/S if the PTP proves insufficient. As I thought more about it, I could see that C/S could be better if there are more than two peers involved in a conversation, then the server (supposedly with more bandwidth) could do a better job of relaying each peers outgoing traffic to the multiple other peers. In C/S many-to-many voice chatting, each peer's upstream broadband (which is where the bottleneck inherently is) would only have to carry each item of voice traffic once, then the server would use its superior bandwidth to relay the message to the multiple other peers. But, in a situation with one-on-one voice chatting it seems that PTP would be best. A server would not reduce each of the two peer's bandwidth requirements and would only add unnecessary overhead, dependency and cost. In one-on-one voice chatting: Am I mistaken on anything above? Would peer-to-peer be best? Would a server provide anything of value that could not be provided by a client-only program? Is there anything else that I should be taking into consideration? And lastly, can you recommend any Silverlight PTP or C/S voice chat products? Thanks in advance for any info.

    Read the article

  • Why would some $_POST variables go missing for a form with PHP?

    - by Chad Johnson
    Sometimes, multiple times a day in fact, users of my web application are submitting a certain form which has about a dozen form fields, half of which are hidden fields, and half of the $_POST data is simply not present in the processing script. Note that the fields that are not present are at the very bottom of the form. I know this because this raises a fatal error, and an email is dispatched to me which includes the post data. And of course, neither I nor any of the developers on my team can reproduce the problem. Flash is involved in the process, as I'm using a library called Uploadify to display a progress meter. Here is the flow...does anyone have ANY ideas at all why some of the post data would be getting wiped out? User visits edit screen for a page in the CMS I am using. Record id for the page is put into a form as a hidden value. User clicks the Uploadify browse button and selects a file (only single file selection is allowed). User clicks Submit button for my form. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over). Uploadify uploads to a custom process script. Uploadify finishes uploading and triggers the Javascript completion callback. The Javascript callback calls $('#myForm').submit() to submit the form. This happens on multiple browsers (Firefox 3.5, 3.6, Safari, Internet Explorer 7, 8) and multiple platforms (Mac OS 10.5, 10.6 and Windows XP, 7).

    Read the article

  • Why does std::cout convert volatile pointers to bool?

    - by Joseph Garvin
    If you try to cout a volatile pointer, even a volatile char pointer where you would normally expect cout to print the string, you will instead simply get '1' (assuming the pointer is not null I think). I assume output stream operator<< is template specialized for volatile pointers, but my question is, why? What use case motivates this behavior? Example code: #include <iostream> #include <cstring> int main() { char x[500]; std::strcpy(x, "Hello world"); int y; int *z = &y; std::cout << x << std::endl; std::cout << (char volatile*)x << std::endl; std::cout << z << std::endl; std::cout << (int volatile*)z << std::endl; return 0; } Output: Hello world 1 0x8046b6c 1

    Read the article

  • Why not lump all service classes into a Factory method (instead of injecting interfaces)?

    - by Andrew
    We are building an ASP.NET project, and encapsulating all of our business logic in service classes. Some is in the domain objects, but generally those are rather anemic (due to the ORM we are using, that won't change). To better enable unit testing, we define interfaces for each service and utilize D.I.. E.g. here are a couple of the interfaces: IEmployeeService IDepartmentService IOrderService ... All of the methods in these services are basically groups of tasks, and the classes contain no private member variables (other than references to the dependent services). Before we worried about Unit Testing, we'd just declare all these classes as static and have them call each other directly. Now we'll set up the class like this if the service depends on other services: public EmployeeService : IEmployeeService { private readonly IOrderService _orderSvc; private readonly IDepartmentService _deptSvc; private readonly IEmployeeRepository _empRep; public EmployeeService(IOrderService orderSvc , IDepartmentService deptSvc , IEmployeeRepository empRep) { _orderSvc = orderSvc; _deptSvc = deptSvc; _empRep = empRep; } //methods down here } This really isn't usually a problem, but I wonder why not set up a factory class that we pass around instead? i.e. public ServiceFactory { virtual IEmployeeService GetEmployeeService(); virtual IDepartmentService GetDepartmentService(); virtual IOrderService GetOrderService(); } Then instead of calling: _orderSvc.CalcOrderTotal(orderId) we'd call _svcFactory.GetOrderService.CalcOrderTotal(orderid) What's the downfall of this method? It's still testable, it still allows us to use D.I. (and handle external dependencies like database contexts and e-mail senders via D.I. within and outside the factory), and it eliminates a lot of D.I. setup and consolidates dependencies more. Thanks for your thoughts!
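
    To make the trade-off concrete, here is a minimal sketch of how such a factory could be built while keeping everything mockable; it simply wraps the same injected interfaces, so it is an illustration of the idea rather than the poster's actual code, and the IServiceFactory name is invented here.

        // Sketch: a factory that hands out the same service interfaces the
        // question defines (IEmployeeService, IDepartmentService, IOrderService).
        public interface IServiceFactory
        {
            IEmployeeService GetEmployeeService();
            IDepartmentService GetDepartmentService();
            IOrderService GetOrderService();
        }

        public class ServiceFactory : IServiceFactory
        {
            private readonly IEmployeeService _employeeSvc;
            private readonly IDepartmentService _deptSvc;
            private readonly IOrderService _orderSvc;

            public ServiceFactory(IEmployeeService employeeSvc,
                                  IDepartmentService deptSvc,
                                  IOrderService orderSvc)
            {
                _employeeSvc = employeeSvc;
                _deptSvc = deptSvc;
                _orderSvc = orderSvc;
            }

            public IEmployeeService GetEmployeeService() { return _employeeSvc; }
            public IDepartmentService GetDepartmentService() { return _deptSvc; }
            public IOrderService GetOrderService() { return _orderSvc; }
        }

    A consumer would then call _svcFactory.GetOrderService().CalcOrderTotal(orderId). The downside most often cited for this style is that a single factory parameter hides which services a class really depends on, whereas individual constructor parameters make each dependency explicit to both readers and tests.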

    Read the article

  • Why can't I use accented characters next to a word boundary?

    - by Rexxars
    I'm trying to make a dynamic regex that matches a person's name. It works without problems on most names, until I ran into accented characters at the end of the name. Example: Some Fancy Namé The regex I've used so far is: /\b(Fancy Namé|Namé)\b/i Used like this: "Goal: Some Fancy Namé. Awesome.".replace(/\b(Fancy Namé|Namé)\b/i, '<a href="#">$1</a>'); This simply won't match. If I replace the é with an e, it matches just fine. If I try to match a name such as "Some Fancy Naméa", it works just fine. If I remove the last word boundary anchor, it works just fine. Why doesn't the word boundary work here? Any suggestions on how I would get around this problem? I have considered using something like this, but I'm not sure what the performance penalties would be like: "Some fancy namé. Allow me to ellaborate.".replace(/([\s.,!?])(fancy namé|namé)([\s.,!?]|$)/g, '$1<a href="#">$2</a>$3') Suggestions? Ideas?

    Read the article

  • Why is it assumed that send may return with less than requested data transmitted on a blocking socket?

    - by Ernelli
    The standard method to send data on a stream socket has always been to call send with a chunk of data to write, check the return value to see if all data was sent and then keep calling send again until the whole message has been accepted. For example, this is a simple example of a common scheme: int send_all(int sock, unsigned char *buffer, int len) { int nsent; while(len > 0) { nsent = send(sock, buffer, len, 0); if(nsent == -1) return -1; // error buffer += nsent; len -= nsent; } return 0; // ok, all data sent } Even the BSD manpage mentions that ...If no messages space is available at the socket to hold the message to be transmitted, then send() normally blocks... Which indicates that we should assume that send may return without sending all data. Now I find this rather broken, but even W. Richard Stevens assumes this in his standard reference book about network programming; not in the beginning chapters, but the more advanced examples use his own written (write all data) function instead of calling write. Now I consider this still to be more or less broken, since if send is not able to transmit all data or accept the data in the underlying buffer and the socket is blocking, then send should block and return when the whole send request has been accepted. I mean, in the code example above, what will happen if send returns with less data sent is that it will be called right again with a new request. What has changed since the last call? At most a few hundred CPU cycles have passed, so the buffer is still full. If send now accepts the data, why couldn't it accept it before? Otherwise we will end up with an inefficient loop where we are trying to send data on a socket that cannot accept data and keep trying, or else? So it seems like the workaround, if needed, results in heavily inefficient code, and in those circumstances blocking sockets should be avoided altogether and non-blocking sockets together with select should be used instead.

    Read the article

  • Why does ActiveMQ hold messages that should be deleted from Topic?

    - by rauch
    I use ActiveMQ as a notification system (pub/sub model). On the server: if any data changes occur, the server sends this updated data (a File) to the Topic using BlobMessages. There are a few clients that subscribe to this Topic and get the updated File if it exists in the Topic. The problem is that all of the BlobMessages that were sent to the Topic are held by ActiveMQ the whole time. this.producer = new ProducerTool.Builder("tcp://localhost:61616?jms.blobTransferPolicy.uploadUrl=http://localhost:8161/fileserver/", "ServerProdTopic").topic(true) .transacted(false).durable(false).timeToLive(10000L).build(); this.consumer = new ConsumerTool.Builder("tcp://localhost:61616", "ServerConsTopic").topic(true) .transacted(false).durable(false).build(); consumer.setMessageListener(this); The File is sent like this: connection = createConnection(); session = createSession(connection); producer = createProducer(session); BlobMessage blobMsg = ((ActiveMQSession) session).createBlobMessage(resource); blobMsg.setStringProperty("sourceName", resource.getName()); producer.send(blobMsg); if (transacted) { System.out.println("Producer Committing..."); session.commit(); } Where createConnection is: protected Connection createConnection() throws JMSException, Exception { ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(url); //connectionFactory.getBlobTransferPolicy().setUploadUrl("http://localhost:8161/fileserver/"); Connection connection = connectionFactory.createConnection(); connection.start(); ((ActiveMQConnection) connection).setCopyMessageOnSend(false); return connection; } Everything that could be useful I have set as needed: Session.AUTO_ACKNOWLEDGE; non-durable subscription; TimeToLive = 9000; JMSDeliveryMode = Non-Persistent. What I have at runtime: in the ActiveMQ directory ~/apache-activemq-5.3.0/webapps/fileserver/ there are all the Files that were delivered and not delivered to subscribers. Why? Sometimes the server sends big Files of about 1 GB, and even those Files are held in that directory, even after stopping the subscribers (clients), the publisher (server) and the ActiveMQ broker.

    Read the article

  • Why can't I initialize a class through a setter?

    - by Rob emenaker
    If I have a custom class called Tires: #import <Foundation/Foundation.h> @interface Tires : NSObject { @private NSString *brand; int size; } @property (nonatomic,copy) NSString *brand; @property int size; - (id)init; - (void)dealloc; @end ============================================= #import "Tires.h" @implementation Tires @synthesize brand, size; - (id)init { if (self = [super init]) { [self setBrand:[[NSString alloc] initWithString:@""]]; [self setSize:0]; } return self; } - (void)dealloc { [super dealloc]; [brand release]; } @end And I synthesize a setter and getter in my View Controller: #import <UIKit/UIKit.h> #import "Tires.h" @interface testViewController : UIViewController { Tires *frontLeft, *frontRight, *backleft, *backRight; } @property (nonatomic,copy) Tires *frontLeft, *frontRight, *backleft, *backRight; @end ==================================== #import "testViewController.h" @implementation testViewController @synthesize frontLeft, frontRight, backleft, backRight; - (void)viewDidLoad { [super viewDidLoad]; [self setFrontLeft:[[Tires alloc] init]]; } - (void)dealloc { [super dealloc]; } @end It dies after [self setFrontLeft:[[Tires alloc] init]] comes back. It compiles just fine and when I run the debugger it actually gets all the way through the init method on Tires, but once it comes back it just dies and the view never appears. However if I change the viewDidLoad method to: - (void)viewDidLoad { [super viewDidLoad]; frontLeft = [[Tires alloc] init]; } It works just fine. I could just ditch the setter and access the frontLeft variable directly, but I was under the impression I should use setters and getters as much as possible and logically it seems like the setFrontLeft method should work. This brings up an additional question that my coworkers keep asking in these regards (we are all new to Objective-C); why use a setter and getter at all if you are in the same class as those setters and getters.

    Read the article

  • Core Data: Inverse relationship only mirrors when I edit the mutableset. Not sure why.

    - by zorn
    My model is setup so Business has many clients, Client has one business. Inverse relationship is setup in the mom file. I have a unit test like this: - (void)testNewClientFromBusiness { PTBusiness *business = [modelController newBusiness]; STAssertTrue([[business clients] count] == 0, @"is actually %d", [[business clients] count]); PTClient *client = [business newClient]; STAssertTrue([business isEqual:[client business]], nil); STAssertTrue([[business clients] count] == 1, @"is actually %d", [[business clients] count]); } I implement -newClient inside of PTBusiness like this: - (PTClient *)newClient { PTClient *client = [NSEntityDescription insertNewObjectForEntityForName:@"Client" inManagedObjectContext:[self managedObjectContext]]; [client setBusiness:self]; [client updateLocalDefaultsBasedOnBusiness]; return client; } The test fails because [[business clients] count] is still 0 after -newClient is called. If I impliment it like this: - (PTClient *)newClient { PTClient *client = [NSEntityDescription insertNewObjectForEntityForName:@"Client" inManagedObjectContext:[self managedObjectContext]]; NSMutableSet *group = [self mutableSetValueForKey:@"clients"]; [group addObject:client]; [client updateLocalDefaultsBasedOnBusiness]; return client; } The tests passes. My question(s): So am I right in thinking the inverse relationship is only updated when I interact with the mutable set? That seems to go against some other Core Data docs I've read. Is the fact that this is running in a unit test without a run loop have anything to do with it? Any other troubleshooting recommendations? I'd really like to figure out why I can't set up the relationship at the client end.

    Read the article

  • Why does the .NET tab in the 'Add Reference' dialog in Visual Studio not list the contents of the GAC?

    - by abroun
    Duplicate of: Getting assemblies to show in the .NET tab of Add Reference So, I'm using Visual C# 2008 Express Edition and I've just been on a bit of a detour as I found out that my assumption that the .NET tab of the 'Add Reference' dialog lists the contents of the GAC was incorrect. This was a bit of a problem for me as the assembly that I wanted to reference from my project was only available in the GAC. (It was Microsoft.XNA.Framework v2.0 obtained from the XNA 2.0 redistributalbe and as far as I could see it installed only into the GAC). I worked round the problem by setting the reference to Microsoft.XNA.Framework manually in the .csproj file and then getting a copy of the dll out of the cache. I was then able to create a directory for the DLL, add it to Visual Studio's list of assembly directories in the registry and then voila! I could see it in the .NET tab. This all seems like a bit of a faff to me and I don't think that my initial assumption (that the .NET tab shows the contents of the GAC) was that unreasonable or would be that uncommon. Can someone who knows more than me tell me why the contents of the GAC aren't shown? The documentation just says that they are not, but is there a good reason? is there actually a way to get the entire contents of the GAC to be listed? A tick box I've missed somewhere? Any info much appreciated.

    Read the article

  • Why is python decode replacing more than the invalid bytes from an encoded string?

    - by dangra
    Trying to decode an invalidly encoded UTF-8 HTML page gives different results in Python, Firefox and Chrome. The invalidly encoded fragment from the test page looks like 'PREFIX\xe3\xabSUFFIX' >>> fragment = 'PREFIX\xe3\xabSUFFIX' >>> fragment.decode('utf-8', 'strict') ... UnicodeDecodeError: 'utf8' codec can't decode bytes in position 6-8: invalid data What follows is a summary of the replacement policies used to handle decoding errors by Python, Firefox and Chrome. Note how the three differ, and especially how the Python builtin removes the valid S (plus the invalid sequence of bytes). by Python: The builtin replace error handler replaces the invalid \xe3\xab plus the S from SUFFIX with U+FFFD >>> fragment.decode('utf-8', 'replace') u'PREFIX\ufffdUFFIX' >>> print _ PREFIX?UFFIX The Python implementation of the builtin replace error handler looks like: >>> python_replace = lambda exc: (u'\ufffd', exc.end) As expected, trying this gives the same result as the builtin: >>> codecs.register_error('python_replace', python_replace) >>> fragment.decode('utf-8', 'python_replace') u'PREFIX\ufffdUFFIX' >>> print _ PREFIX?UFFIX by Firefox: Firefox replaces each invalid byte with U+FFFD >>> firefox_replace = lambda exc: (u'\ufffd', exc.start+1) >>> codecs.register_error('firefox_replace', firefox_replace) >>> test_string.decode('utf-8', 'firefox_replace') u'PREFIX\ufffd\ufffdSUFFIX' >>> print _ PREFIX??SUFFIX by Chrome: Chrome replaces each invalid sequence of bytes with U+FFFD >>> chrome_replace = lambda exc: (u'\ufffd', exc.end-1) >>> codecs.register_error('chrome_replace', chrome_replace) >>> fragment.decode('utf-8', 'chrome_replace') u'PREFIX\ufffdSUFFIX' >>> print _ PREFIX?SUFFIX The main question is why the builtin replace error handler for str.decode removes the S from SUFFIX. Also, is there an officially recommended Unicode way of handling decoding replacements?

    Read the article

  • Why does my ActivePerl program report 'Sorry. Ran out of threads'?

    - by Zaid
    Tom Christiansen's example code (à la perlthrtut) is a recursive, threaded implementation of finding and printing all prime numbers between 3 and 1000. Below is a mildly adapted version of the script #!/usr/bin/perl # adapted from prime-pthread, courtesy of Tom Christiansen use strict; use warnings; use threads; use Thread::Queue; sub check_prime { my ($upstream,$cur_prime) = @_; my $child; my $downstream = Thread::Queue->new; while (my $num = $upstream->dequeue) { next unless ($num % $cur_prime); if ($child) { $downstream->enqueue($num); } else { $child = threads->create(\&check_prime, $downstream, $num); if ($child) { print "This is thread ",$child->tid,". Found prime: $num\n"; } else { warn "Sorry. Ran out of threads.\n"; last; } } } if ($child) { $downstream->enqueue(undef); $child->join; } } my $stream = Thread::Queue->new(3..shift,undef); check_prime($stream,2); When run on my machine (under ActiveState & Win32), the code was capable of spawning only 118 threads (last prime number found: 653) before terminating with a 'Sorry. Ran out of threads' warning. In trying to figure out why I was limited to the number of threads I could create, I replaced the use threads; line with use threads (stack_size => 1);. The resultant code happily dealt with churning out 2000+ threads. Can anyone explain this behavior?

    Read the article

  • Why does output of fltk-config truncate arguments to gcc?

    - by James Morris
    I'm trying to build an application I've downloaded which uses the SCONS "make replacement" and the Fast Light Tool Kit Gui. The SConstruct code to detect the presence of fltk is: guienv = Environment(CPPFLAGS = '') guiconf = Configure(guienv) if not guiconf.CheckLibWithHeader('lo', 'lo/lo.h','c'): print 'Did not find liblo for OSC, exiting!' Exit(1) if not guiconf.CheckLibWithHeader('fltk', 'FL/Fl.H','c++'): print 'Did not find FLTK for the gui, exiting!' Exit(1) Unfortunately, on my (Gentoo Linux) system, and many others (Linux distributions) this can be quite troublesome if the package manager allows the simultaneous install of FLTK-1 and FLTK-2. I have attempted to modify the SConstruct file to use fltk-config --cflags and fltk-config --ldflags (or fltk-config --libs might be better than ldflags) by adding them like so: guienv.Append(CPPPATH = os.popen('fltk-config --cflags').read()) guienv.Append(LIBPATH = os.popen('fltk-config --ldflags').read()) But this causes the test for liblo to fail! Looking in config.log shows how it failed: scons: Configure: Checking for C library lo... gcc -o .sconf_temp/conftest_4.o -c "-I/usr/include/fltk-1.1 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_THREAD_SAFE -D_REENTRANT" gcc: no input files scons: Configure: no How should this really be done? And to complete my answer, how do I remove the quotes from the result of os.popen( 'command').read()? EDIT The real question here is why does appending the output of fltk-config cause gcc to not receive the filename argument it is supposed to compile?

    Read the article

  • Why does findstr not handle case properly (in some circumstances)?

    - by paxdiablo
    While writing some recent scripts in cmd.exe, I had a need to use findstr with regular expressions - customer required standard cmd.exe commands (no GnuWin32 nor Cygwin nor VBS nor Powershell). I just wanted to know if a variable contained any upper-case characters and attempted to use: > set myvar=abc > echo %myvar%|findstr /r "[A-Z]" abc > echo %errorlevel% 0 When %myvar% is set to abc, that actually outputs the string and sets errorlevel to 0, saying that a match was found. However, the full-list variant: > echo %myvar%|findstr /r "[ABCDEFGHIJKLMNOPQRSTUVWXYZ]" > echo %errorlevel% 1 does not output the line and it correctly sets errorlevel to 1. In addition: > echo %myvar%|findstr /r "^[A-Z]*$" > echo %errorlevel% 1 also works as expected. I'm obviously missing something here even if it's only the fact that findstr is somehow broken. Why does the first (range) regex not work in this case? And yet more weirdness: > echo %myvar%|findstr /r "[A-Z]" abc > echo %myvar%|findstr /r "[A-Z][A-Z]" abc > echo %myvar%|findstr /r "[A-Z][A-Z][A-Z]" > echo %myvar%|findstr /r "[A]" The last two above also does not output the string!!

    Read the article

  • Why does this cast to Base class in virtual function give a segmentation fault?

    - by dehmann
    I want to print out a derived class using the operator<<. When I print the derived class, I want to first print its base and then its own content. But I ran into some trouble (see segfault below): class Base { public: friend std::ostream& operator<<(std::ostream&, const Base&); virtual void Print(std::ostream& out) const { out << "BASE!"; } }; std::ostream& operator<<(std::ostream& out, const Base& b) { b.Print(out); return out; } class Derived : public Base { public: virtual void Print(std::ostream& out) const { out << "My base: "; //((const Base*)this)->Print(out); // infinite, calls this fct recursively //((Base*)this)->Print(out); // segfault (from infinite loop?) ((Base)*this).Print(out); // OK out << " ... and myself."; } }; int main(int argc, char** argv){ Derived d; std::cout << d; return 0; } Why can't I cast in one of these ways? ((const Base*)this)->Print(out); // infinite, calls this fct recursively ((Base*)this)->Print(out); // segfault (from infinite loop?)

    Read the article

  • I am trying to get a simple jqplot example going but it will not display, why not?

    - by stephenmm
    Here is the entire contents of the file: <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>jqPlot Examples</title> <!--[if IE]><script language="javascript" type="text/javascript" src="excanvas.js"></script><![endif]--> <script language="javascript" type="text/javascript" src="../jqtouch-1_0_b/jqtouch/jquery.1.3.2.min.js"></script> <script language="javascript" type="text/javascript" src="../javascript/jquery.jqplot.min.js"></script> <link rel="stylesheet" type="text/css" href="../javascript/jquery.jqplot.css" /> </head> <body> <h1>jqPlot Examples</h1> <script id="source" language="javascript" type="text/javascript"> $.jqplot('chartdiv', [[[1, 2],[3,5.12],[5,13.1],[7,33.6],[9,85.9],[11,219.9]]]); </script> div<br> <div id="chartdiv" style="height:400px;width:300px; "></div> div<br> </body> </html> <html> Here is what I see in FF, chrome, IE: jqPlot Examples div div I am seeing no errors in my Apache error log. I know all the .js files are accessible from the html. Does anyone have an idea about why this might not be working?

    Read the article

  • Why should I bother with unit testing if I can just use integration tests?

    - by CodeGrue
    Ok, I know I am going out on a limb making a statement like that, so my question is for everyone to convince me I am wrong. Take this scenario: I have method A, which calls method B, and they are in different layers. So I unit test B, which delivers null as a result. So I test that null is returned, and the unit test passes. Nice. Then I unit test A, which expects an empty string to be returned from B. So I mock the layer B is in, an empty string is return, the test passes. Nice again. (Assume I don't realize the relationship of A and B, or that maybe two differente people are building these methods) My concern is that we don't find the real problem until we test A and B togther, i.e. Integration Testing. Since an integration test provides coverage over the unit test area, it seems like a waste of effort to build all these unit tests that really don't tell us anything (or very much) meaningful. Why am I wrong?
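
    A minimal sketch of the scenario in C# (all names invented for illustration) shows how both unit tests can pass while the real problem only appears when A and B are wired together:

        using System;

        public interface ILayerB { string GetValue(); }

        public class LayerB : ILayerB
        {
            public string GetValue() { return null; }   // the real B returns null
        }

        public class LayerA
        {
            private readonly ILayerB _b;
            public LayerA(ILayerB b) { _b = b; }

            // A was written assuming B returns an empty string, never null.
            public int ValueLength() { return _b.GetValue().Length; }
        }

        // Hand-rolled stub used when unit testing A in isolation.
        public class StubB : ILayerB
        {
            public string GetValue() { return ""; }
        }

        public static class Demo
        {
            public static void Main()
            {
                // "Unit test" of A against the stub: passes, prints 0.
                Console.WriteLine(new LayerA(new StubB()).ValueLength());

                // A wired to the real B: the contract mismatch only shows up here.
                try { new LayerA(new LayerB()).ValueLength(); }
                catch (NullReferenceException) { Console.WriteLine("contract mismatch"); }
            }
        }

    Run as written, the first call prints 0 and the second throws, which is exactly the mismatch the question says only integration testing exposes.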

    Read the article

  • Why doesn't this jQuery snippet work in IE8 like it does in Chrome/Firefox (live demo included)?

    - by Siracuse
    I asked for help earlier on Stackoverflow involving highlighting spans with the same Class when a mouse hovers over any Span with that same Class. It is working great: http://stackoverflow.com/questions/2709686/how-can-i-add-a-border-to-all-the-elements-that-share-a-class-when-the-mouse-has $('span[class]').hover( function() { $('.' + $(this).attr('class')).css('background-color','green'); }, function() { $('.' + $(this).attr('class')).css('background-color','yellow'); } ) Here is an example of it in usage: http://dl.dropbox.com/u/638285/0utput.html However, it doesn't appear to work properly in IE8, while it DOES work in Chrome/Firefox. Here is a screenshot of it in IE8, with my mouse hovered over the " min) { min" section in the middle. As you can see, it highlighted the span that the mouse is hovering over perfectly fine. However, it has also highlighted some random spans above and below it that don't have the same class! Only the span's with the same Class as the one where the mouse is over should be highlighted green. In this screenshot, only that middle green section should be green. Here is a screenshot of it working properly in Firefox/Chrome with my mouse in the exact same position: This screenshot is correct as the span that the mouse is over (the green section) is the only one in this section that shares that class. Why is IE8 randomly green-highlighting spans when it shouldn't be (they don't share the same class) using my little jQuery snippet? Again, if you want to see it live I have it here: http://dl.dropbox.com/u/638285/0utput.html

    Read the article
