Search Results

Search found 16902 results on 677 pages for 'strange errors'.


  • Socket isn't listed by netstat unless using certain ports

    - by illuzive
    I'm a computer science student with a few years of programming experience. Yesterday, while working on a project (Mac OS X, BSD sockets) at school, I encountered a strange problem. I was adding several modules to a very basic "server" (mostly a bunch of functions to set up and manage a UDP socket on a certain port). While doing this, I started the server from time to time in order to check that everything worked like it should. I've been using port 32000 during the development of the server. When I start the server and run netstat, the socket is listed as expected: > netstat -p UDP | grep 32000 udp46 0 0 *.32000 *.* However, when I run the server on other ports (random, 10000 - 50000), it's not listed by netstat. My thought was that I had somehow hard-coded the port somewhere in the code, but that's not the case. The thing is, I can connect to the socket on any of the tested ports, and it reads data sent to it without any problem at all. It just doesn't get listed by netstat. What I wonder is whether any of you have an idea why this happens. Note: Although this is a project at school, it's not homework. This is just something I want to understand for my own benefit.

    Read the article

  • SVG <image> doesn't work in <defs> on Chrome

    - by avladev
    I want to use an image in a group defined in a defs tag, but in Chrome nothing works. In Firefox only the .png file is displayed; in Chrome only the rectangle appears, and with a strange bug. Is this supported by SVG, or am I not using it right? plane.svg <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> <svg baseProfile="full" width="500" height="500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1"> <defs> <g id="car"> <rect x="0" y="0" width="30" height="30" fill="#ff0000" /> <image xlink:href="items/car.svg" x="0" y="0" width="30" height="30" /> <image xlink:href="items/t6k.png" x="100" y="100" width="140" height="140" /> </g> </defs> <use xlink:href="#car" x="0" y="0" width="600" height="600" /> </svg> images/car.svg <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> <svg baseProfile="full" width="30" height="30" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1"> <rect x="0" y="0" width="30" height="30" fill="red" stroke="green" stroke-width="3"/> </svg> Download code: http://www.4shared.com/file/9gNi5gCO/svg_bug.html

    Read the article

  • Rails ActionCaching with Memcached fragment hit but action gets called anyway

    - by baldtrol
    Hi stackoverflow. I'm running into something strange. I'm using memcached with a caches_action setup. I'm doing this in 4 different controllers. In two of them, it works flawlessly (so far), though admittedly those two controllers are less complicated than the two in which it doesn't seem to work. I'm doing something like this: caches_action :index, :expires_in => 6.hours, :cache_path => Proc.new {|controller| controller.send(:generate_cache_path) }, :layout => false, :if => Proc.new { |c| c.request.format.js? } The intention behind the above is to cache some results that are dependent on the params. my :generate_cache_path method just takes into account some params and session vars and creates a unique key for memcached. I can see in memcached -vv that this is working. What's weird is that I get my request from the rails app for a given key, and I see memcached (with -vv) get the request and send back the response. But then my action runs anyway, and a new value is then set for the same key, even when all the same params are given. I can watch it happen. In the controllers where everything is working, the request is made for the fragment, it gets it, and the action in the controller is halted, and the fragment is passed back. These lines come from the exact same request: Cached fragment hit: views/items/?page=1&rp=10&srtn=created_at&srto=DESC.js And then: Cached fragment miss: views/items/?page=1&rp=10&srtn=created_at&srto=DESC.js I don't know what to make of it, or if I'm doing something stupid. Any help or ideas where I could start looking for trouble would be greatly appreciated.

    Read the article

  • Determine whether app is communicating with APNS sandbox or production environment

    - by goldierox
    I have push notifications set up in my app. I'm trying to determine whether the device token I've received from APNS in the application:didRegisterForRemoteNotificationsWithDeviceToken: method came from the sandbox or the production environment. If I can distinguish which environment initialized the token, I'll be able to tell my server to which environment to send the push notification. I've tried using the DEBUG macro to determine this, but I've seen some strange behavior with it and don't trust it to be 100% correct. #ifdef DEBUG BOOL isProd = YES; #else BOOL isProd = NO; #endif Ideally, I'd be able to examine the aps-environment entitlement (value is Development or Production) in code, but I'm not sure if this is even possible. What's the proper way to determine whether your app is communicating with the APNS sandbox or production environment? I'm assuming that the server needs to know this in the first place. Please correct me if this assumption is incorrect. Edited: Apple's documentation on Provider Communication with APNS details the difference between communicating with the sandbox and production. However, the documentation doesn't explain how to keep the token registration (in the iOS client app) consistent with the communication with the server.

    Read the article

  • A feeling that I'm not that good a developer

    - by Karim
    Hi, I'm having a strange feeling, but let me first introduce myself as a software developer. I started to program when I was still a kid, about 10 or 11 years old. I really enjoy my work and never get bored with it. It's amazing that somebody can be paid for what he really likes to do and would be doing anyway, even for free. When I first started to program, I felt proud of what I was doing; each application I built was a success for me, and after 2-3 years I had the feeling that I was a coding guru. It was a nice feeling ;-) But the longer I was in the field, and the more types of software I started to develop, the more I began to feel that I was completely wrong in thinking I was a guru. I felt that I wasn't even a mediocre developer. Each new field I start to work in gives me this feeling. Like when I once developed a device driver for a client, I saw how much I needed to learn about device drivers. When I developed a video filter for an application, I saw how much I still needed to learn about DirectShow, color spaces, and all the theory behind that. The worst was when I started to learn algorithms. That was several years ago. I knew the basic structures and algorithms, like sorting, some types of trees, some hash tables, strings, etc., and when I really wanted to learn a group of structures, I learned about 5-6 new types and saw that in fact even this small group has several hundred subtypes of structures. It's depressing how little time people have in their lives to learn all this stuff. I'm now a software developer with about 10 years of experience, and I still feel that I'm not a proficient developer when I think about the things that others do in the industry. Is this normal, or is it a sign of destructive, excessive ambition? Thanks in advance for any comments.

    Read the article

  • Java ternary operator and boxing Integer/int?

    - by Markus
    I stumbled across a really strange NullPointerException the other day, caused by an unexpected type cast in the ternary operator. Given this (useless example) function: Integer getNumber() { return null; } I was expecting the following two code segments to be exactly identical after compilation: Integer number; if (condition) { number = getNumber(); } else { number = 0; } vs. Integer number = (condition) ? getNumber() : 0; Turns out, if condition is true, the if-statement works fine, while the ternary operation in the second code segment throws a NullPointerException. It seems as though the ternary operation has decided to type-cast both choices to int before auto-boxing the result back into an Integer!?! In fact, if I explicitly cast the 0 to Integer, the exception goes away. In other words: Integer number = (condition) ? getNumber() : 0; is not the same as: Integer number = (condition) ? getNumber() : (Integer) 0; So it seems that there is a byte-code difference between the ternary operator and an equivalent if-else statement (something I didn't expect). Which raises three questions: Why is there a difference? Is this a bug in the ternary implementation, or is there a reason for the type cast? Given there is a difference, is the ternary operation more or less performant than an equivalent if-statement (I know the difference can't be huge, but still)?
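    For reference, a minimal sketch that reproduces the reported behavior (only the getNumber() stub comes from the question; the rest is illustrative). The conditional expression mixing Integer and int has type int, so the null return value gets unboxed:

        public class TernaryBoxing {
            static Integer getNumber() { return null; }   // stub from the question: always null

            public static void main(String[] args) {
                boolean condition = true;

                // Plain if/else: no unboxing, the variable simply ends up null.
                Integer viaIf;
                if (condition) { viaIf = getNumber(); } else { viaIf = 0; }
                System.out.println(viaIf);                        // prints "null"

                // Both operands are Integer, so the expression type stays Integer.
                Integer viaCast = condition ? getNumber() : (Integer) 0;
                System.out.println(viaCast);                      // prints "null"

                // Operands are Integer and int, so the expression type is int:
                // the null from getNumber() is unboxed here and throws an NPE.
                Integer viaTernary = condition ? getNumber() : 0;
                System.out.println(viaTernary);
            }
        }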

    Read the article

  • How to count Chinese words in a file using a regex in Perl?

    - by Ivan
    I tried the following Perl code to count the Chinese words in a file. It seems to work, but it doesn't give the right result. Any help is greatly appreciated. The error message is Use of uninitialized value $valid in concatenation (.) or string at word_counting.pl line 21, <FILE> line 21. and the output is Total things = 125, valid words = (the valid count is empty), which suggests to me that the problem is the file format. The "total things" value of 125 is the number of strings (125 lines). The strangest part is that my console displays all the individual Chinese words correctly without any problem. The utf8 pragma is enabled. #!/usr/bin/perl -w use strict; use utf8; use Encode qw(encode); use Encode::HanExtra; my $input_file = "sample_file.txt"; my ($total, $valid); my %count; open (FILE, "< $input_file") or die "Can't open $input_file: $!"; while (<FILE>) { foreach (split) { #break $_ into words, assign each to $_ in turn $total++; next if /\W|^\d+/; #strange words skip the remainder of the loop $valid++; $count{$_}++; # count each separate word stored in a hash ## next comes here ## } } print "Total things = $total, valid words = $valid\n"; foreach my $word (sort keys %count) { print "$word \t was seen \t $count{$word} \t times.\n"; } ##---Data---- sample_file.txt ??????,???????,????.??????.????:"?????????????,??????,????????.????????,?????????, ???????????.????????,???????????,??????,??????.???:`??,???????????.'?????, ??????????."??????,??????.????.???, ????????????,????,??????,?????????,??????????????. ????????,??????,???????????,????????,????????.????,????,???????, ??????????,??????,????????.??????.

    Read the article

  • adb doesn't get phone's device name/number

    - by Dona Hertel
    Okay, I have a strange problem I haven't seen listed anywhere. I'm developing an Android app and I would like to run it on my Huawei Ascend. I have set up a file at /etc/udev/90-android.rules with the line: SUBSYSTEM=="usb", SYSFS{idVendor}=="12d1", MODE="0666" where '12d1' is the correct vendor ID for this phone (I verified this with the 'lsusb' command). When I plug in the phone (it does have debugging on) and restart the adb server, I get a connection, but the name field does not get set. The output of 'adb devices' is: List of devices attached \n ???????????? device Plugging and unplugging the cable doesn't resolve this. Neither does restarting the adb server, nor a total reboot of both my computer and the phone. This is fine, as I can still get logs and a shell. The problem is that in the Eclipse plugin the device's name is listed as "????????????", and so when it tries to connect, it quits with an error message of 'device not found', even though the device is listed and 'online'. Is there something else I need to do? Do I need to set the name of the device somehow? cocofan P.S.: The app has 'debuggable' set to true in the manifest file.

    Read the article

  • Problem with UITextView

    - by user367039
    I have a strange problem trying to pass a string from one view controller to another when the string originates from a UITextView instead of a UITextField. Both UITextView.text and UITextField.text are of type NSString. The following code takes either textField.text or textView.text, depending on the fieldType, and puts it into a string called aString. NSString *aString = [[NSString alloc] init]; if (fieldType == 3) { aString = textView.text; } else { aString = textField.text; } When I examine aString in either case, I can see that the text has been successfully assigned to aString. I then pass the string to the other view controller using this code. [delegate updateSite:aString :editedFieldKey :fieldType]; [self.navigationController popViewControllerAnimated:YES]; This works fine if aString originated from textField.text, but if aString was from textView.text, nothing happens except that the view controller is popped. This is the code that takes aString and does stuff with it; however, it doesn't even execute the first line, "NSLog(@"Returned object: %@, field type:%@", aString,editedFieldKey);", if aString was from textView.text. Any help will be appreciated. -(void)updateSite:(NSString *)aString :(NSString *)editedFieldKey :(int)fieldType { NSLog(@"Returned object: %@, field type:%@", aString,editedFieldKey); switch (fieldType) { case 0: //string [aDiveSite setValue:aString forKey:editedFieldKey] ; NSLog(@"String set %@",[aDiveSite valueForKey:editedFieldKey] ); break; case 1: //int [aDiveSite setValue:[NSNumber numberWithInt:aString.intValue] forKey:editedFieldKey]; NSLog(@"Integer set"); break; case 2: //float NSLog(@"Saving floating value"); [aDiveSite setValue:[NSNumber numberWithFloat:aString.floatValue] forKey:editedFieldKey]; NSLog(@"Float set"); break; case 3: //Text view [aDiveSite setValue:aString forKey:editedFieldKey]; NSLog(@"Textview text saved"); default: break; } [self.tableView reloadData]; }

    Read the article

  • ASP.NET web form Routing issue via UNC Path

    - by Slash
    I created an IIS 7.0 website via a UNC path so that it loads and dynamically compiles the .aspx files, and it runs perfectly. I have always used IIS URL Rewrite Module 2 to rewrite my site URLs, and that works perfectly too. Today I wanted to use System.Web.Routing to implement URL rewriting, but I ran into difficulties... I put this code in Global.asax: System.Web.Routing.RouteTable.Routes.MapPageRoute("TEST", "AAA/{prop}", "~/BBB/CCC.aspx"); and it just CANNOT redirect to /BBB/CCC.aspx. When I type the URL (like xx.xx.xx.xx/BBB/CCC.aspx) into the browser directly, it runs normally as expected (which proves CCC.aspx is at the right path). But when I copy all of the code and open it in VS2010 running with IIS 7.5 Express locally, it works perfectly! e.g.: in the browser I type xx.xx.xx.xx/AAA/1234, and it goes to the page xx.xx.xx.xx/BBB/CCC.aspx (works perfectly!). Why? Please help me, thanks. Update: I suspected the UNC path was what made it fail, but when I move all the code to a physical disk and set up IIS 7.0 to serve that folder, it still doesn't work! Yet the same code run in VS2010 + IIS 7.5 Express works!? So strange!

    Read the article

  • Java HashMap containsKey always false

    - by Dennis
    I have the funny situation that I store a Coordinate into a HashMap<Coordinate, GUIGameField>. Now, the strange thing about it is that I have a fragment of code which should guard against any coordinate being used twice. But if I debug this code: if (mapForLevel.containsKey(coord)) { throw new IllegalStateException("This coordinate is already used!"); } else { ...do stuff... } ... containsKey always returns false, although I stored a coordinate with a hashcode of 9731 into the map and the current coord also has the hashcode 9731. After that, mapForLevel.entrySet() looks like: (java.util.HashMap$EntrySet) [(270,90)=gui.GUIGameField@29e357, (270,90)=gui.GUIGameField@ca470] What could I possibly have done wrong? I've run out of ideas. Thanks for any help! public class Coordinate { int xCoord; int yCoord; public Coordinate(int x, int y) { ...store params in attributes... } ...getters & setters... @Override public int hashCode() { int hash = 1; hash = hash * 41 + this.xCoord; hash = hash * 31 + this.yCoord; return hash; } }
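    A hedged aside: HashMap.containsKey locates the bucket by hashCode() but then compares candidate keys with equals(), and the Coordinate class above only overrides hashCode(). A minimal, self-contained sketch of the usual missing piece (the class is a reconstruction of the question's Coordinate with a hypothetical equals() added):

        import java.util.HashMap;
        import java.util.Map;

        public class CoordinateKeyDemo {
            static final class Coordinate {
                final int xCoord, yCoord;
                Coordinate(int x, int y) { this.xCoord = x; this.yCoord = y; }

                @Override public int hashCode() {
                    int hash = 1;
                    hash = hash * 41 + xCoord;
                    hash = hash * 31 + yCoord;   // (270, 90) hashes to 9731, as in the question
                    return hash;
                }

                // The missing piece: containsKey() compares keys with equals() after
                // the hash lookup, so equal hash codes alone are not enough.
                @Override public boolean equals(Object obj) {
                    if (this == obj) return true;
                    if (!(obj instanceof Coordinate)) return false;
                    Coordinate other = (Coordinate) obj;
                    return xCoord == other.xCoord && yCoord == other.yCoord;
                }
            }

            public static void main(String[] args) {
                Map<Coordinate, String> map = new HashMap<>();
                map.put(new Coordinate(270, 90), "first");
                // true with the equals() override; false without it
                System.out.println(map.containsKey(new Coordinate(270, 90)));
            }
        }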

    Read the article

  • YouTube - Encrypted cookie string

    - by Robertof
    Hello! I'm new to Stack Overflow. I'm building a YouTube downloader in PHP, but YouTube has some IP checks. Because the PHP file is on a remote server, the IP of the server != the IP of the user, and the video download fails. So, maybe I've found a solution. YouTube sends a cookie with an encrypted string, which is the user IP. I need to know the encryption algorithm for that string and how to encrypt a string with it. Here is the string: nQ0CrJmASJk . It could be base64, but when I try to decode it with base64_decode, it gives me strange characters. You can check the cookie by requesting the main page of YouTube and checking the "Set-Cookie" headers. You will find a cookie with the name "VISITOR_INFO1_LIVE"; that is where the encrypted string is. Does anyone know what the algorithm is? Thanks. PS: sorry for my bad English. Cheers, Roberto.

    Read the article

  • Counting combinations in c or in python

    - by Dennis
    Hello, I looked a bit at this topic here but I found nothing that could help me. I need a program in Python or in C that will give me all possible combinations of a and b that meet the requirement n=2*a+b, for n from 0 to 10. a, b and n are integers. For example, if n=0, both a and b must be 0. For n=1, a must be zero and b must be 1; for n=2, a can be 1 and b=0, or a=0 and b=2, etc. I'm not that good with programming. I made this: #include <stdio.h> int main(void){ int a,b,n; for(n = 0; n <= 10; n++){ for(a = 0; a <= 10; a++){ for(b = 0; b <= 10; b++) if(n == 2*a + b) printf("(%d, %d), ", (a,b)); } printf("\n"); } } But it keeps giving strange results like this: (0, -1079628000), (1, -1079628000), (2, -1079628000), (0, -1079628000), (3, -1079628000), (1, -1079628000), (4, -1079628000), (2, -1079628000), (0, -1079628000), (5, -1079628000), (3, -1079628000), (1, -1079628000), (6, -1079628000), (4, -1079628000), (2, -1079628000), (0, -1079628000), (7, -1079628000), (5, -1079628000), (3, -1079628000), (1, -1079628000), (8, -1079628000), (6, -1079628000), (4, -1079628000), (2, -1079628000), (0, -1079628000), (9, -1079628000), (7, -1079628000), (5, -1079628000), (3, -1079628000), (1, -1079628000), (10, -1079628000), (8, -1079628000), (6, -1079628000), (4, -1079628000), (2, -1079628000), (0, -1079628000), ideone Any idea what is wrong? Also, if I could do this in Python it would be even cooler. :D
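    As an aside, the stray numbers in the output above come from printf("(%d, %d), ", (a,b)): in C, (a,b) is the comma operator, which discards a and passes only b, leaving the second %d to read an unspecified value. A minimal sketch of the intended enumeration, written in Java purely for illustration (not the poster's C or Python):

        public class Combinations {
            public static void main(String[] args) {
                for (int n = 0; n <= 10; n++) {
                    StringBuilder line = new StringBuilder("n=" + n + ": ");
                    // Once a is chosen, b is fully determined by b = n - 2*a,
                    // so a single loop over a enumerates every valid (a, b) pair.
                    for (int a = 0; a <= n / 2; a++) {
                        int b = n - 2 * a;
                        line.append("(").append(a).append(", ").append(b).append(") ");
                    }
                    System.out.println(line);
                }
            }
        }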

    Read the article

  • For loop index out of range ArgumentOutOfRangeException when multithreading

    - by Lirik
    I'm getting some strange behavior... when I iterate over the dummyText List in the ThreadTest method I get an index out of range exception (ArgumentOutOfRangeException), but if I remove the threads and I just print out the text, then everything works fine. This is my main method: public static Object sync = new Object(); static void Main(string[] args) { ThreadTest(); Console.WriteLine("Press any key to continue."); Console.ReadKey(); } This method throws the exception: private static void ThreadTest() { Console.WriteLine("Running ThreadTest"); Console.WriteLine("Running ThreadTest"); List<String> dummyText = new List<string>() { "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine", "Ten"}; for (int i = 0; i < dummyText.Count; i++) { Thread t = new Thread(() => PrintThreadName(dummyText[i])); // <-- Index out of range?!? t.Name = ("Thread " + (i)); t.IsBackground = true; t.Start(); } } private static void PrintThreadName(String text) { Random rand = new Random(DateTime.Now.Millisecond); while (true) { lock (sync) { Console.WriteLine(Thread.CurrentThread.Name + " running " + text); Thread.Sleep(1000+rand.Next(0,2000)); } } } This does not throw the exception: private static void ThreadTest() { Console.WriteLine("Running ThreadTest"); List<String> dummyText = new List<string>() { "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine", "Ten"}; for (int i = 0; i < dummyText.Count; i++) { Console.WriteLine(dummyText[i]); // <-- No exception here } } Does anybody know why this is happening?

    Read the article

  • Java - Call to start method on thread: how does it route to the Runnable interface's run()?

    - by Bhaskar
    OK, I know the two standard ways to create a new thread and run it in Java: 1. Implement Runnable in a class, define the run method, and pass an instance of the class to a new Thread. When the start method on the thread instance is called, the run method of the class instance will be invoked. 2. Let the class derive from Thread, so it can override the method run(), and then when a new instance's start method is called, the call is routed to the overridden method. In both methods, basically a new Thread object is created and its start method invoked. However, while in the second method the mechanism of the call being routed to the user-defined run() method is very clear (it's simple runtime polymorphism in play), I don't understand how the call to the start method on the Thread object gets routed to the run() method of the class implementing the Runnable interface. Does the Thread class have a private field of type Runnable which it checks first, invoking its run method if it is set to an object? That would be a strange mechanism, IMO. How does the call to start() on a thread get routed to the run method of the Runnable interface implemented by the class whose object is passed as a parameter when constructing the thread?
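    A hedged sketch of the delegation (a simplification for illustration, not the actual java.lang.Thread source): the thread keeps the Runnable passed to its constructor in a target field, and the default run() simply invokes it, so subclassing overrides that behavior while the Runnable approach relies on it:

        public class SimplifiedThread {
            private final Runnable target;               // set from the constructor, may be null

            public SimplifiedThread(Runnable target) {
                this.target = target;
            }

            // Default behavior: delegate to the target if one was supplied.
            // Subclasses (approach 2 in the question) override this instead.
            public void run() {
                if (target != null) {
                    target.run();
                }
            }

            public void start() {
                // The real Thread.start() asks the VM to spawn a native thread that
                // eventually calls back into run(); calling run() directly here just
                // illustrates the routing, not the concurrency.
                run();
            }
        }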

    Read the article

  • Why doesn't java.lang.Number implement Comparable?

    - by Julien Chastang
    Does anyone know why java.lang.Number does not implement Comparable? This means that you cannot sort Numbers with Collections.sort, which seems to me a little strange. Post-discussion update: Thanks for all the helpful responses. I ended up doing some more research on this topic. The simplest explanation for why java.lang.Number does not implement Comparable is rooted in mutability concerns. For a bit of review, java.lang.Number is the abstract super-type of AtomicInteger, AtomicLong, BigDecimal, BigInteger, Byte, Double, Float, Integer, Long and Short. On that list, AtomicInteger and AtomicLong do not implement Comparable. Digging around, I discovered that it is not good practice to implement Comparable on mutable types because the objects can change during or after comparison, rendering the result of the comparison useless. Both AtomicLong and AtomicInteger are mutable. The API designers had the forethought not to have Number implement Comparable because it would have constrained the implementation of future subtypes. Indeed, AtomicLong and AtomicInteger were added in Java 1.5, long after java.lang.Number was initially implemented. Apart from mutability, there are probably other considerations here too. A compareTo implementation in Number would have to promote all numeric values to BigDecimal because that is capable of accommodating all the Number sub-types. The implication of that promotion in terms of mathematics and performance is a bit unclear to me, but my intuition finds that solution kludgy.
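    As a hedged illustration of the practical consequence: since Number has no natural order, sorting a mixed list of Numbers needs an explicit Comparator. Comparing through doubleValue() is a simplification for the sketch (it loses precision for very large long or BigInteger values), not a general solution:

        import java.util.Arrays;
        import java.util.Comparator;
        import java.util.List;

        public class NumberSort {
            public static void main(String[] args) {
                List<Number> numbers = Arrays.asList(3.5, 1, 2L, (short) 4);
                // Collections.sort(numbers) would not compile: Number is not Comparable.
                numbers.sort(Comparator.comparingDouble(Number::doubleValue));
                System.out.println(numbers);   // [1, 2, 3.5, 4]
            }
        }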

    Read the article

  • Interesting LinqToSql behaviour

    - by Ben Robinson
    We have a database table that stores the location of some wave files plus related metadata. There is a foreign key (employeeid) on the table that links to an employee table. However, not all wav files relate to an employee; for these records employeeid is null. We are using LinqToSql to access the database, and the query to pull out all non-employee-related wav file records is as follows: var results = from Wavs in db.WaveFiles where Wavs.employeeid == null; Except this returns no records, despite the fact that there are records where employeeid is null. On profiling SQL Server I discovered that the reason no records are returned is that LinqToSql turns it into SQL that looks very much like: SELECT Field1, Field2 //etc FROM WaveFiles WHERE 1=0 Obviously this returns no rows. However, if I go into the DBML designer, remove the association and save, all of a sudden the exact same LINQ query turns into SELECT Field1, Field2 //etc FROM WaveFiles WHERE EmployeeID IS NULL I.e. if there is an association, then LinqToSql assumes that all records have a value for the foreign key (even though it is nullable and the property appears as a nullable int on the WaveFile entity) and as such deliberately constructs a WHERE clause that will return no records. Does anyone know if there is a way to keep the association in LinqToSql but stop this behaviour? A workaround I can think of quickly is to have a calculated field called IsSystemFile and set it to 1 if employeeid is null and 0 otherwise. However, this seems like a bit of a hack to work around strange behaviour of LinqToSql, and I would rather do something in the DBML file or define something on the foreign key constraint that will prevent this behaviour.

    Read the article

  • How do I override a python import?

    - by Evan Plaice
    So I'm working on pypreprocessor, which is a preprocessor that takes c-style directives, and I've been able to make it work like a traditional preprocessor (it's self-consuming and executes postprocessed code on-the-fly) except that it breaks library imports. The problem is this: the preprocessor runs through the file, processes it, outputs to a temp file, and exec()s the temp file. Libraries that are imported need to be handled a little differently, because they aren't executed but rather loaded and made accessible to the caller module. What I need to be able to do is: interrupt the import (since the preprocessor is being run in the middle of the import), load the postprocessed code as a tempModule, and replace the original import with the tempModule to trick the calling script into believing that the tempModule is the original module. I have searched everywhere and so far have no solution. This question is the closest I've seen so far to providing an answer: http://stackoverflow.com/questions/1096216/override-namespace-in-python Here's what I have. # remove the bytecode file created by the first import os.remove(moduleName + '.pyc') # remove the first import del sys.modules[moduleName] # import the postprocessed module tmpModule = __import__(tmpModuleName) # set first module's reference to point to the preprocessed module sys.modules[moduleName] = tmpModule moduleName is the name of the original module, and tmpModuleName is the name of the postprocessed code file. The strange part is, this solution still runs completely normally, as if the first module had loaded normally; but if you remove the last line, you get a module-not-found error. Hopefully someone on SO knows a lot more about imports than I do, because this one has me stumped.

    Read the article

  • How can I handle unread push notifications in iOS?

    - by Bartserk
    I have an iOS 5.1 application that registers with the APNS service to receive notifications. The registration is successful and I receive the notifications correctly. The problem comes when I try to handle the notifications. Once the application is running, the didReceiveRemoteNotification method in the AppDelegate is called correctly, and so the notification is handled as intended. This, however, only happens when the application is running in the foreground. When the application is running in the background or is simply stopped, that method is not called. I've read that you should add some lines to the didFinishLaunchingWithOptions method to obtain the notification from the userInfo dictionary and handle it. This works just fine, but ONLY when the application is opened by tapping the notification in the Notification Center. This means that if you open the application by tapping its badge, or simply by switching context if you were running it in the background, the app never realises that a notification came in. Additionally, if more than one notification was received, we can only handle one of them at a time via the Notification Center, which is a pain :-) Is there any way to read the pending notifications in the Notification Center? I know there is a way to flush them using the cancelAllLocalNotifications method, but I haven't found a way to just read them, and I really need to handle all of them. I thought of implementing a communication protocol with the third-party notification server to retrieve the information again when the application comes to the foreground, but since the information is already in the operating system, I would find it strange if it were impossible to access it somehow. So, does anybody know a way to do it? Thanks in advance.

    Read the article

  • Why doesn't access database update?

    - by Ryan
    Recently I ran into a strange problem; see the code snippet below: var sqlCommand: string; connection: TADOConnection; qry: TADOQuery; begin connection := TADOConnection.Create(nil); try connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False'; connection.Open(); qry := TADOQuery.Create(nil); try qry.Connection := connection; qry.SQL.Text := 'Select * from aaa'; qry.Open; qry.Append; qry.FieldByName('TestField1').AsString := 'test'; qry.Post; beep; finally qry.Free; end; finally connection.Free; end; end; First, create a new Access database named test.mdb, put it in the directory of this test project, and create a new table in it named aaa which has only one text-type field named TestField1. Set a breakpoint at the line with "beep", then launch the test application in IDE debug mode. When the IDE stops at the breakpoint line (qry.Post has been executed), use Microsoft Access to open test.mdb and open table aaa: you will find that there are no changes in table aaa. If you let the IDE continue running by pressing F9, you will find a new record is inserted into table aaa, but if you press Ctrl+F2 to terminate the application at the breakpoint, you will find that table aaa has no record inserted. In normal circumstances, however, a new record should be inserted into table aaa after qry.Post has executed. Can anyone explain this problem? It has troubled me for a long time. Thanks! BTW, the IDE is Delphi 2010, and the Access .mdb file was created with Microsoft Access 2007 under Windows 7.

    Read the article

  • Class generics break a completely separate method

    - by TheLQ
    I found a strange problem when I used class generics today: adding one broke a completely separate method. Here's a small example class that illustrates the problem. This code works just fine: public class Sandbox { public interface ListenerManagerTest { public Set<Listener> getListeners(); } public void setListenerManager(ListenerManagerTest listenerManager) { for (Listener curListener : listenerManager.getListeners()) return; } } Now, as soon as I use class generics, the getListeners() method returns Set<Object> instead of Set<Listener>: public class Sandbox { public interface ListenerManagerTest<E extends Object> { public Set<Listener> getListeners(); } public void setListenerManager(ListenerManagerTest listenerManager) { for (Listener curListener : listenerManager.getListeners()) //Expected Listener, not Object return; } } What would cause this error? The ##java channel on Freenode said it was because of compile-time candy and that I was using a raw type. But how would a raw class type break all generics in the class? And why would it have worked before?
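    A hedged sketch of the usual explanation: referring to ListenerManagerTest as a raw type erases the generic information on all of its members, so getListeners() is seen as returning a raw Set. Declaring the parameter with a wildcard keeps the declared Set<Listener> return type intact (a sketch against the interface above, not a verified fix for the full code):

        public void setListenerManager(ListenerManagerTest<?> listenerManager) {
            // With a parameterized (non-raw) reference, getListeners() keeps its
            // declared Set<Listener> type and the loop variable compiles again.
            for (Listener curListener : listenerManager.getListeners()) {
                return;
            }
        }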

    Read the article

  • iPhone crashing when presenting modal view controller

    - by Michael Waterfall
    I'm trying to display a modal view straight after another view has been presented modally (the second is a loading view that appears). - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; // Show load LoadViewController *loader = [[LoadViewController alloc] init]; [self presentModalViewController: loader animated:NO]; [loader release]; } But when I do this I get a "Program received signal: "EXC_BAD_ACCESS"." error. The stack trace is: 0 0x30b43234 in -[UIWindowController transitionViewDidComplete:fromView:toView:] 1 0x3095828e in -[UITransitionView notifyDidCompleteTransition:] 2 0x3091af0d in -[UIViewAnimationState sendDelegateAnimationDidStop:finished:] 3 0x3091ad7c in -[UIViewAnimationState animationDidStop:finished:] 4 0x0051e331 in run_animation_callbacks 5 0x0051e109 in CA::timer_callback 6 0x302454a0 in CFRunLoopRunSpecific 7 0x30244628 in CFRunLoopRunInMode 8 0x32044c31 in GSEventRunModal 9 0x32044cf6 in GSEventRun 10 0x309021ee in UIApplicationMain 11 0x00002154 in main at main.m:14 Any ideas? I'm totally stumped! The loading view is empty so there's definitely nothing going on in there that's causing the error. Is it something to do with launching 2 views modally in the same event loop or something? Thanks, Mike Edit: Very strange... I have modified it slightly so that the loading view is shown after a tiny delay, and this works fine! So it appears to be something within the same event loop! - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; // Show load [self performSelector:@selector(doit) withObject:nil afterDelay:0.1]; } - (void)doit { [self presentModalViewController:loader animated:YES]; }

    Read the article

  • Using the Queue class in Python 2.6

    - by voipme
    Let's assume I'm stuck using Python 2.6, and can't upgrade (even if that would help). I've written a program that uses the Queue class. My producer is a simple directory listing. My consumer threads pull a file from the queue, and do stuff with it. If the file has already been processed, I skip it. The processed list is generated before all of the threads are started, so it isn't empty. Here's some pseudo-code. import Queue, sys, threading processed = [] def consumer(): while True: file = dirlist.get(block=True) if file in processed: print "Ignoring %s" % file else: # do stuff here dirlist.task_done() dirlist = Queue.Queue() for f in os.listdir("/some/dir"): dirlist.put(f) max_threads = 8 for i in range(max_threads): thr = Thread(target=consumer) thr.start() dirlist.join() The strange behavior I'm getting is that if a thread encounters a file that's already been processed, the thread stalls out and waits until the entire program ends. I've done a little bit of testing, and the first 7 threads (assuming 8 is the max) stop, while the 8th thread keeps processing, one file at a time. But, by doing that, I'm losing the entire reason for threading the application. Am I doing something wrong, or is this the expected behavior of the Queue/threading classes in Python 2.6?

    Read the article

  • ASP.NET GridView "Client-Side Confirmation when Deleting" stopped working on ie - how come?

    - by tarnold
    A few months ago, I programmed an ASP.NET GridView with a custom "Delete" LinkButton and client-side JavaScript confirmation according to this MSDN article: http://msdn.microsoft.com/en-us/library/bb428868.aspx (published in April 2007) or e.g. http://stackoverflow.com/questions/218733/javascript-before-aspbuttonfield-click The code looks like this: <ItemTemplate> <asp:LinkButton ID="deleteLinkButton" runat="server" Text="Delete" OnCommand="deleteLinkButtonButton_Command" CommandName='<%# Eval("id") %>' OnClientClick='<%# Eval("id", "return confirm(\"Delete Id {0}?\")") %>' /> </ItemTemplate> Surprisingly, "Cancel" no longer works with my IE (version 6.0.2900.2180.xpsp_sp2_qfe.080814-1242): it always deletes the row. With Opera (version 9.62) it still works as expected and as described in the MSDN article. More surprisingly, on a coworker's machine with the same IE version it still works ("Cancel" will not delete the row). The generated code looks like <a onclick="return confirm(...);" href="javascript:__doPostBack('...')"> As confirm(...) returns false on "Cancel", I expect the __doPostBack event in the href not to be fired. Are there any strange IE settings I might have accidentally changed? What else could be the cause of this weird behaviour? Or is this a "please reinstall WinXP" issue?

    Read the article

  • What is the correct stage to use for Google Guice in production in an application server?

    - by Yishai
    It seems like a strange question (the obvious answer would be Production, duh), but if you read the Javadocs: /** * We want fast startup times at the expense of runtime performance and some up front error * checking. */ DEVELOPMENT, /** * We want to catch errors as early as possible and take performance hits up front. */ PRODUCTION Assume a scenario where you have a stateless call to an application server, and the initial receiving method (or thereabouts) creates the injector anew on every call. If not all of the module bindings are needed in a given call, then it would seem better to use the Development stage (which is the default) and not take the performance hit up front, because you may never take it at all; and here the distinction between "up front" and "runtime performance" is kind of moot, as it is one call. Of course, the downside of this would appear to be that you lose the error checking, so problematic code paths could surprise you. So the question boils down to: are the assumptions above correct? Will you save performance with a large set of modules when the given lifetime of an injector is one call?
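    For concreteness, a hedged sketch of how the stage is chosen when the injector is built (MyModule is a hypothetical stand-in for the application's bindings, not code from the question):

        import com.google.inject.AbstractModule;
        import com.google.inject.Guice;
        import com.google.inject.Injector;
        import com.google.inject.Stage;

        public class InjectorBootstrap {
            static class MyModule extends AbstractModule {
                @Override
                protected void configure() {
                    // bindings would go here
                }
            }

            public static void main(String[] args) {
                // DEVELOPMENT (the default) defers work such as building singletons,
                // which favors a short-lived, per-call injector that may touch only
                // a few bindings; PRODUCTION does the work and error checking up front.
                Injector perCall = Guice.createInjector(Stage.DEVELOPMENT, new MyModule());
                Injector eager = Guice.createInjector(Stage.PRODUCTION, new MyModule());
            }
        }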

    Read the article
