Search Results

Search found 12047 results on 482 pages for 'general debugging tidbits'.


  • Unable to connect on socket across different networks.

    - by maleki
    I am having trouble connecting my online application to peers on another network. I can give them the hostAddress to connect to when we are on the same network, but when we are doing it across the internet the generated host address doesn't allow a connection, nor does the IP address obtained from online sites such as whatismyip.com. My biggest issue isn't debugging this code, because it works within one network, but the server doesn't see connection attempts when we try to move to different networks. Also, the test port I am using is 2222. InetAddress addr = InetAddress.getLocalHost(); String hostname = addr.getHostName(); System.out.println("Hostname: " + hostname); System.out.println("IP: " + addr.getHostAddress()); I display the host to the server when it is starting: if (isClient) { System.out.println("Client Starting.."); clientSocket = new Socket(host, port_number); } else { System.out.println("Server Starting.."); echoServer = new ServerSocket(port_number); clientSocket = echoServer.accept(); System.out.println("Warning, Incoming Game.."); }
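
    A minimal sketch of the two ends, not the poster's code: across the internet the client has to connect to the public (WAN) address of the server's network, and the server's router has to forward the chosen port to the server's LAN address; getLocalHost() only yields the LAN or loopback address. The port 2222 and the timeout are illustrative assumptions.

      import java.io.IOException;
      import java.net.InetSocketAddress;
      import java.net.ServerSocket;
      import java.net.Socket;

      public class ConnectSketch {
          static final int PORT = 2222; // must be forwarded by the server's router to the server's LAN IP

          // Client side: connect to the server network's public IP (e.g. the one from whatismyip.com).
          static Socket connectAsClient(String publicIp) throws IOException {
              Socket s = new Socket();
              s.connect(new InetSocketAddress(publicIp, PORT), 10000); // 10 s timeout
              return s;
          }

          // Server side: bind on all interfaces so the forwarded connection reaches us.
          static Socket waitAsServer() throws IOException {
              ServerSocket server = new ServerSocket(PORT);
              try {
                  System.out.println("Server waiting on port " + PORT);
                  return server.accept();
              } finally {
                  server.close(); // the accepted socket stays open after the listener closes
              }
          }
      }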

    Read the article

  • Erlang, SSH and authorized_keys

    - by Roberto Aloi
    Playing with the ssh and public_key applications in Erlang, I've discovered a nice feature. I was trying to connect to my running Erlang SSH daemon using an RSA key, but the authentication was failing and I was prompted for a password. After some debugging and tracing (and a couple of coffees), I realized that, for some weird reason, an invalid key for my user was there. The authorized_keys file contained two keys. The wrong one was somewhere in the file, while the correct one was appended at the end. When the Erlang SSH application diffed the provided key against the ones contained in authorized_keys, it found only the first entry (completely ignoring the second one - the correct one). It then switched to a different authentication mechanism (at first it tried DSA instead of RSA, and then it prompted for a password). The question is: is this behaviour intended, or should the SSH server check for multiple entries for the same user in the authorized_keys file? Is this generic SSH behaviour, or is it specific to the Erlang implementation?

    Read the article

  • How to print the address an ada access variable points to?

    - by georg
    I want to print the address of an access variable (pointer) for debugging purposes. type Node is private; type Node_Ptr is access Node; procedure foo(n: in out Node_Ptr) is package Address_Node is new System.Address_To_Access_Conversions(Node); use Address_Node; begin Put_Line("node at address " & System.Address_Image(To_Address(n))); end foo; Address_Image returns the string representation of an address. System.Address_To_Access_Conversions is a generic package to convert between addresses and access types (see ARM 13.7.2), defined as follows: generic type Object(<>) is limited private; package System.Address_To_Access_Conversions is -- [...] type Object_Pointer is access all Object; -- [...] function To_Address(Value : Object_Pointer) return Address; -- [...] end System.Address_To_Access_Conversions; GNAT gives me the following errors for procedure foo defined above: expected type "System.Address_To_Access_Conversions.Object_Pointer" from instance at line... found type "Node_Ptr" defined at ... Object_Pointer is defined as access all Object. From my understanding the type Object is Node, therefore Object_Pointer is access all Node. What is GNAT complaining about? I guess my understanding of Ada generics is flawed and I am not using System.Address_To_Access_Conversions correctly. EDIT: I compiled my code with "gnatmake -gnatG" to see the generic instantiation: package address_node is subtype btree__clear__address_node__object__2 is btree__node; type btree__clear__address_node__object_pointer__2 is access all btree__clear__address_node__object__2; function to_address (value : btree__clear__address_node__object_pointer__2) return system__address; end address_node; btree__node is the mangled name of the type Node as defined above, so I really think the parameter type for to_address() is correct, yet GNAT is complaining (see above).
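
    The error is a type mismatch rather than a wrong instantiation: the instance's To_Address expects its own Object_Pointer type (access all Node), while n is a Node_Ptr (a plain access Node), and Ada never converts between access types implicitly. A standalone sketch of the explicit conversion, with a simplified concrete Node in place of the private type (System.Address_Image is GNAT-specific, as in the question):

      with Ada.Text_IO;                          use Ada.Text_IO;
      with System.Address_Image;
      with System.Address_To_Access_Conversions;

      procedure Addr_Demo is
         type Node is record
            Key : Integer;
         end record;
         type Node_Ptr is access Node;

         package Address_Node is new System.Address_To_Access_Conversions (Node);

         N : Node_Ptr := new Node'(Key => 42);
      begin
         --  Convert the pool-specific Node_Ptr to the instance's general
         --  access type before handing it to To_Address.
         Put_Line ("node at address "
                   & System.Address_Image
                       (Address_Node.To_Address (Address_Node.Object_Pointer (N))));
      end Addr_Demo;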

    Read the article

  • session regeneration in tomcat ?

    - by shrini1000
    Hi, I am using Spring security to secure my Java web application which is deployed in tomcat. I found out that it is vulnerable to session fixation attacks because tomcat does not create a new session upon successful log in. On debugging some more, here's what I found. For the following code (which is supposed to create a new session - pl. note, it's just a snippet and not full code): HttpSession session = request.getSession(false); session.invalidate(); session = request.getSession(true); // we now have a new session I thought a new session will be created, but tomcat simply uses the same session that got invalidated and hence the session id does not change. I searched online and found a solution which uses a 'valve' - http://marvinsmutterings.blogspot.com/2010/02/fixing-session-fixation-in-liferay-on.html but could not get it to work because it's looking for a jboss logging class and when I add it to lib, I get a reflection exception and the server doesn't start up. I'm using tomcat 5.5.28. Will be glad to have any pointers. Pl. let me know if you need more details, since I don't want to make this post too long. Sincere thanks!
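
    For reference, a minimal servlet-API sketch of the usual regeneration pattern (copy the attributes, invalidate, get a fresh session, copy the attributes back). Whether the container actually issues a new JSESSIONID is container-specific; on Tomcat 5.5 a valve or filter that rewrites the session cookie may still be required, as the post above suggests.

      import java.util.Enumeration;
      import java.util.HashMap;
      import java.util.Map;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpSession;

      public final class SessionUtil {
          private SessionUtil() {}

          // Carry the attributes of the current session over to a freshly created one.
          public static HttpSession regenerate(HttpServletRequest request) {
              HttpSession old = request.getSession(false);
              Map<String, Object> attributes = new HashMap<String, Object>();
              if (old != null) {
                  for (Enumeration<?> names = old.getAttributeNames(); names.hasMoreElements();) {
                      String name = (String) names.nextElement();
                      attributes.put(name, old.getAttribute(name));
                  }
                  old.invalidate();
              }
              HttpSession fresh = request.getSession(true);
              for (Map.Entry<String, Object> entry : attributes.entrySet()) {
                  fresh.setAttribute(entry.getKey(), entry.getValue());
              }
              return fresh;
          }
      }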

    Read the article

  • Sudden issues reading uncompressed video using opencv

    - by JohnSavage
    I have been using a particular pipeline to process video using opencv to encode uncompressed video (fourcc = 0), and opencv python bindings to then open and work on these files. This has been working fine for me on OpenCV 2.3.1a on Ubuntu 11.10 until just a few days ago. For some reason it currently is only allowing me to read the first frame of a given file the first time I open that file. Further frames are not read, and once I touch the file once with my program, it then cannot even read the first frame. More detail: I created the uncompressed video files as follows: out_video.open(out_vid_name, 0, // FOURCC = 0 means record raw fps, Size(640, 480)) Again, these videos worked fine for me until about a week ago. Now, when I try to open one of these I get the following message (from what I think is ffmpeg): Processing video.avi Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later. [avi @ 0x29251e0] parser not found for codec rawvideo, packets or times may be invalid. It reads and displays the first frame fine, but then fails to read the next frame. Then, when I try to run my code on the same video, the capture still opens with the same message as above. However, it cannot even read the very first frame. Here is the code to open the capture: self.capture = cv2.VideoCapture(filename) if not self.capture.isOpened() print "Error: could not open capture" sys.exit() Again, this part is passed without any issue, but then the break happens at: success, rgb = self.capture.read() if not success: print "error: could not read frame" return False This part breaks at the second frame on the first run of the video file, and then on the first frame on subsequent runs. I really don't know where to even begin debugging this. Please help!

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components: Apache 2.2.3-31 on Centos 5.4 PHP 5.2.10 Xdebug 2.0.5 with Remote Debugging enabled APC 3.0.19 Doctrine ORM for PHP 1.2.1 using Query Caching and Results Caching via APC MySQL 5.0.77 using Query Caching I've noticed that when I start up Apache, I eventually end up 10 child processes. As time goes on, each process will grow in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1471 apache 16 0 626m 201m 18m S 0.0 10.2 1:11.02 httpd 1470 apache 16 0 622m 198m 18m S 0.0 10.1 1:14.49 httpd 1469 apache 16 0 619m 197m 18m S 0.0 10.0 1:11.98 httpd 1462 apache 18 0 622m 197m 18m S 0.0 10.0 1:11.27 httpd 1460 apache 15 0 622m 195m 18m S 0.0 10.0 1:12.73 httpd 1459 apache 16 0 618m 191m 18m S 0.0 9.7 1:13.00 httpd 1461 apache 18 0 616m 190m 18m S 0.0 9.7 1:14.09 httpd 1468 apache 18 0 613m 190m 18m S 0.0 9.7 1:12.67 httpd 7919 apache 18 0 116m 75m 15m S 0.0 3.8 0:19.86 httpd 9486 apache 16 0 97.7m 56m 14m S 0.0 2.9 0:13.51 httpd I have no long-running scripts (they all terminate eventually, the longest being maybe 2 minutes long), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated. (Maybe someone can correct me on that). My hunch is that it could be APC, since it stores data between requests, but at the same time, it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?
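
    One way to narrow it down is to log PHP's peak memory per request and correlate the growth with specific scripts. A minimal, PHP 5.2-compatible sketch (the log path is an assumption); note that memory_get_peak_usage() only covers PHP's own allocations, so growth that lives in APC's shared memory or elsewhere inside the httpd process will not show up here.

      <?php
      // Log peak memory for every request so growth can be tied to a URI/script.
      function log_peak_memory()
      {
          $uri  = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
          $line = sprintf("%s pid=%d uri=%s peak=%.1fMB\n",
                          date('c'), getmypid(), $uri,
                          memory_get_peak_usage(true) / 1048576);
          file_put_contents('/tmp/php-mem.log', $line, FILE_APPEND);
      }
      register_shutdown_function('log_peak_memory');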

    Read the article

  • Java inheritance question

    - by Milos
    I have an abstract class Airplane, and two classes PassengerAirplane and CargoAirplane, which extend class Airplane. I also have an interface Measurable, and two classes that implement it - People and Containers. So, Airplane can do many things on its own, and there is a method which allows measurable things to be added to the airplane (called addAMeasurableThing). The only difference between PassengerAirplane/CargoAirplane and a plain Airplane is that addAMeasurableThing should only accept People / Containers respectively, and not just any kind of Measurable thing. How do I implement this? I tried doing: Airplane class: public abstract Airplane addAMeasurableThing (Measurable m, int position); PassengerAirplane class: public Airplane addAMeasurableThing (Measurable m, int position) { if (m instanceof People)... CargoAirplane class: public Airplane addAMeasurableThing (Measurable m, int position) { if (m instanceof Containers)... But when I was debugging it, I've noticed that addAMeasurableThing in the CargoAirplane class never gets called, because both methods have the same signature. So how can the appropriate PassengerAirplane/CargoAirplane addAMeasurableThing be called, depending on the type of Measurable thing that is being passed in? Thanks!
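
    One common way to express this without instanceof checks is to make the element type a generic parameter of Airplane, so each subclass only accepts its own kind of Measurable and the compiler enforces it. A minimal sketch (class bodies reduced to stubs; the getWeight member is only a placeholder):

      interface Measurable { double getWeight(); }

      class People implements Measurable {
          public double getWeight() { return 75.0; }
      }

      class Containers implements Measurable {
          public double getWeight() { return 2500.0; }
      }

      abstract class Airplane<T extends Measurable> {
          public abstract Airplane<T> addAMeasurableThing(T thing, int position);
      }

      class PassengerAirplane extends Airplane<People> {
          @Override
          public Airplane<People> addAMeasurableThing(People p, int position) {
              // ... store the passengers ...
              return this;
          }
      }

      class CargoAirplane extends Airplane<Containers> {
          @Override
          public Airplane<Containers> addAMeasurableThing(Containers c, int position) {
              // ... store the containers ...
              return this;
          }
      }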

    Read the article

  • KendoUI Mobile switch and datasource

    - by OnaBai
    I'm trying to have a list of items displayed using a listview, something like: <div data-role="view" data-model="my_model"> <ul data-role="listview" data-bind="source: ds" data-template="list-tmpl"></ul> </div> Where I have a view using a model called my_model and a listview where the source is bound to ds. My model is something like: var my_model = kendo.observable({ ds: new kendo.data.DataSource({ transport: { read: readData, update: updateData, create: updateData, remove: updateData }, pageSize: 10, schema: { model: { id: "id", fields: { id: { type: "number" }, name: { type: "string" }, active: { type: "boolean" } } } } }) }); Each item includes an id, a name (that is a string) and a boolean named active. The template used to render each element is: <script id="list-tmpl" type="text/kendo-tmpl"> <span>#= name # : #= active #</span> <input data-role="switch" data-bind="checked: active"/> </script> Where I display the name and (for debugging) the value of active. In addition, I render a switch bound to active. You should see something like: The problems observed are: If you click on a switch you will see that the value of active next to the name changes its value (as expected) but if then, you pick another switch, the value (neither next to name nor in the DataSource) is not updated (despite the value of the switch is correctly updated). The update handler in the DataSource is never invoked (not even for the first chosen switch and despite the DataSource for that first toggled switch is updated). You can check it in JSFiddle: http://jsfiddle.net/OnaBai/K7wEC/ How do I make that the DataSource gets updated and update handler invoked?

    Read the article

  • Integrating ASP.NET MVC 2 with classic ASP

    - by David Lively
    I'm in the process of moving a large classic ASP application to ASP.NET MVC 2. Questions: My question is about project organization. I would prefer to not mix the MVC code with the ASP code in the same VS project. I'd like to have an MVC WAP with areas that match the parts of the website that I'm migrating. For instance, the old site has a folder /products/default.asp..... /products/productName/default.asp etc. In the MVC WAP, I'd like to have an area called "products", which I could then, either through a rewrite, routing, or preferably through some IIS configuration, point the "products" folder on the ASP site to. In this way, I could gradually move root folders from the ASP site to the MVC application. However, if I create the MVC WAP in a virtual folder, then my routes wind up looking like http://localhost/virtualFolder/products instead of http://localhost/products Any suggestions on how to conquer this? I know that, during deployment, I could deploy the MVC WAP into the root of the ASP site, but this doesn't help with debugging.

    Read the article

  • Background worker not working right

    - by vbNewbie
    I have created a background worker to go and run a pretty long task that includes creating more threads which will read from a file of urls and crawl each. I tried following it through debugging and found that the background process ends prematurely for no apparent reason. Is there something wrong in the logic of my code that is causing this. I will try and paste as much as possible to make sense. While Not myreader.EndOfData Try currentRow = myreader.ReadFields() Dim currentField As String For Each currentField In currentRow itemCount = itemCount + 1 searchItem = currentField generateSearchFromFile(currentField) processQuerySearch() Next Catch ex As Microsoft.VisualBasic.FileIO.MalformedLineException Console.WriteLine(ex.Message.ToString) End Try End While This first bit of code is the loop to input from file and this is what the background worker does. The next bit of code is where the background worker creates threads to work all the 'landingPages'. After about 10 threads are created the background worker exits this sub and skips the file input loop and exits the program. Try For Each landingPage As String In landingPages pgbar.Timer1.Stop() If VisitedPages.Contains(landingPage) Then Continue For Else Dim thread = New Thread(AddressOf processQuery) count = count + 1 thread.Name = "Worm" & count thread.Start(landingPage) If numThread >= 10 Then For Each thread In ThreadList thread.Join() Next numThread = 0 Continue For Else numThread = numThread + 1 SyncLock ThreadList ThreadList.Add(thread) End SyncLock End If End If Next
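
    One thing worth checking before digging into the loop logic: an unhandled exception thrown inside a BackgroundWorker's DoWork does not crash the application - the worker just stops and hands the exception to RunWorkerCompleted as e.Error, which looks exactly like "ending prematurely for no apparent reason" unless e.Error is inspected. A minimal sketch of that check inside the form (the field name Worker is hypothetical):

      ' Declared somewhere in the form:
      Private WithEvents Worker As New System.ComponentModel.BackgroundWorker()

      Private Sub Worker_RunWorkerCompleted(ByVal sender As Object, _
              ByVal e As System.ComponentModel.RunWorkerCompletedEventArgs) _
              Handles Worker.RunWorkerCompleted
          If e.Error IsNot Nothing Then
              ' The DoWork handler threw; this is the only place the exception surfaces.
              Console.WriteLine("Worker stopped with: " & e.Error.ToString())
          ElseIf e.Cancelled Then
              Console.WriteLine("Worker was cancelled.")
          Else
              Console.WriteLine("Worker finished normally.")
          End If
      End Sub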

    Read the article

  • Android Hashtable Serialization

    - by Nsyed
    Hi all, I am having a weird issue with serialization of a Hashtable. I have made a server/client app where the server (PC/Mac) serializes a Hashtable and sends it to the client (Android) through UDP. The data is sent/read correctly, but I get a bunch of these messages on LogCat: 04-12 11:19:43.059: DEBUG/dalvikvm(407): GetFieldID: unable to find field Ljava/util/Hashtable;.loadFactor:F Occasionally, I would see these: 04-12 11:21:19.150: DEBUG/dalvikvm(407): GC freed 10814 objects / 447184 bytes in 97ms The app runs for 2-3 minutes and then crashes. Interestingly enough, I do not see the loadFactor errors on SDK 1.5, but I do see the GC freed xxxx objects messages quite often. After debugging I have found that the issue is with deserialization, and the errors/warnings are coming from the following code: ByteArrayInputStream bis = new ByteArrayInputStream(bytes); ObjectInputStream ois = new ObjectInputStream(bis); object = ois.readObject(); at object = ois.readObject(); on the client. My server's serializing code is the following: ByteArrayOutputStream bos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(bos); oos.writeObject(obj); Any idea what is going on? Thanks for the help!
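
    For what it's worth, the GetFieldID line about loadFactor is a Dalvik debug message emitted while deserializing java.util.Hashtable (Android's implementation lacks that field) and is usually harmless on its own, so the crash is more likely elsewhere - for example a truncated UDP datagram or a stream that is never released. A minimal sketch of the receiving side with the stream always closed and the payload typed (class name and type parameters are illustrative):

      import java.io.ByteArrayInputStream;
      import java.io.IOException;
      import java.io.ObjectInputStream;
      import java.util.Hashtable;

      public final class PayloadCodec {
          @SuppressWarnings("unchecked")
          public static Hashtable<String, String> decode(byte[] bytes)
                  throws IOException, ClassNotFoundException {
              ObjectInputStream ois =
                      new ObjectInputStream(new ByteArrayInputStream(bytes));
              try {
                  return (Hashtable<String, String>) ois.readObject();
              } finally {
                  ois.close(); // release the stream even on a partial/corrupt packet
              }
          }
      }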

    Read the article

  • Wanted to know in detail how shared libraries work vis-a-vis static libraries.

    - by goldenmean
    Hello, I am working on creating and linking a shared library (.so). While working with them, many questions popped up for which I could not find satisfying answers when I searched, hence putting them here. The questions about shared libraries I have are: 1.) How is a shared library different from a static library? What are the key differences in the way they are created and executed? 2.) In the case of a shared library, at what point are the addresses decided that a particular function in the shared library will be loaded at and run from? Who gives those functions their load/run addresses? 3.) Will an application linked against a shared library be slower in execution compared to one linked with a static library? 4.) Will the application executable size differ in these two cases? 5.) Can one do source-level debugging by stepping into functions defined inside a shared library? Is anything extra needed to make these functions visible to the application? 6.) What are the pros and cons of using either kind of library? Thanks. -AD
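
    As a concrete reference for questions 1 and 2, here is a one-function C library with the build commands for both flavours in the comments (file names are illustrative): a static library is an archive of object files copied into the executable at link time, while a shared library is position-independent code that the dynamic linker maps in and relocates at run time - which is when its functions get their load addresses.

      /* libgreet.c - a one-function library used to illustrate the build steps.
       *
       * Static library (archive of .o files, copied into the executable at link time):
       *   gcc -c libgreet.c -o libgreet.o
       *   ar rcs libgreet.a libgreet.o
       *   gcc main.c libgreet.a -o app_static
       *
       * Shared library (position-independent code, mapped and relocated by the
       * dynamic linker when the program starts or dlopen()s it):
       *   gcc -c -fPIC libgreet.c -o libgreet.o
       *   gcc -shared -o libgreet.so libgreet.o
       *   gcc main.c -L. -lgreet -o app_shared
       *   LD_LIBRARY_PATH=. ./app_shared     (so the loader can find libgreet.so)
       */
      #include <stdio.h>

      void greet(const char *name)
      {
          printf("Hello, %s\n", name);
      }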

    Read the article

  • How do I best remove the unicode characters that XHTML regards as non-valid using php?

    - by Andrew Stacey
    I run a forum designed to support an international mathematics group. I've recently switched it to unicode for better support of international characters. In debugging this conversion, I've discovered that not all unicode characters are considered as valid XHTML (the relevant website appears to be http://www.w3.org/TR/unicode-xml/). One of the steps that the forum software goes through before presenting the posts to the browser is an XHTML validation/sanitisation step. It seems a reasonable idea that at that stage it should remove any unicode characters that XHTML doesn't like. So my question is: Is there a standard (or best) way of doing this in PHP? (The forum is written in PHP, by the way.) I guess that the failsafe would be a simple str_replace (if that's also the best, do I need to do anything extra to make sure it works properly with unicode?) but that would involve me having to go through the XHTML DTD (or the above-referenced W3 page) carefully to figure out what characters to list in the search part of str_replace, so if this is the best way, has someone already done that so that I can steal, err, copy, it? (Incidentally, the character that caused the problem was U+000C, the 'formfeed', which (according to the W3 page) is valid HTML but invalid XHTML!)
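
    A minimal sketch of the usual approach: strip every character outside the XML 1.0 "Char" production (U+000C, the formfeed mentioned above, is one of them) with a single preg_replace. This assumes the input is already valid UTF-8; the /u modifier makes PCRE operate on code points rather than bytes.

      <?php
      // Remove code points that XML 1.0 (and therefore XHTML) does not allow.
      function strip_invalid_xml_chars($text)
      {
          $pattern = '/[^\x{0009}\x{000A}\x{000D}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]/u';
          return preg_replace($pattern, '', $text);
      }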

    Read the article

  • Getting Argument Names In Ruby Reflection

    - by Joe Soul-bringer
    I would like to do some fairly heavy-duty reflection in the Ruby programming language. I would like to create a function which would return the names of the arguments of various calling functions higher up the call stack (just one higher would be enough but why stop there?). I could use Kernel.caller go to the file and parse the argument list but that would be ugly and unreliable. The function that I would like would work in the following way: module A def method1( tuti, fruity) foo end def method2(bim, bam, boom) foo end def foo print caller_args[1].join(",") #the "1" mean one step up the call stack end end A.method1 #prints "tuti,fruity" A.method2 #prints "bim, bam, boom" I would not mind using ParseTree or some similar tool for this task but looking at Parsetree, it is not obvious how to use it for this purpose. Creating a C extension like this is another possibility but it would be nice if someone had already done it for me. Edit2: I can see that I'll probably need some kind of C extension. I suppose that means my question is what combination of C extension would work most easily. I don't think caller+ParseTree would be enough by themselves. As far as why I would like to do this goes, rather than saying "automatic debugging", perhaps I should say that I would like to use this functionality to do automatic checking of the calling and return conditions of functions. Say def add x, y check_positive return x + y end Where check_positive would throw an exception if x and y weren't positive (obviously, there would be more to it than that but hopefully this gives enough motivation)
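
    A minimal sketch of the check_positive idea that avoids walking the call stack or writing a C extension: the caller hands over its own binding, and at the top of a method the binding's local variables are exactly its parameters. This relies on Binding#local_variables and Binding#local_variable_get (Ruby 2.2+, well after the ParseTree era of the question), so it is an alternative technique rather than the caller_args helper described above.

      # Raise unless every parameter of the calling method is a positive number.
      def check_positive(bind)
        bind.local_variables.each do |name|
          value = bind.local_variable_get(name)
          unless value.is_a?(Numeric) && value > 0
            raise ArgumentError, "#{name} must be positive, got #{value.inspect}"
          end
        end
      end

      def add(x, y)
        check_positive(binding)
        x + y
      end

      puts add(2, 3)   # => 5
      add(-1, 3)       # raises ArgumentError: x must be positive, got -1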

    Read the article

  • Elmah sends error mail on development server, but not on production

    - by Adrian Grigore
    Hi, I am trying to set up Elmah so that it sends me an e-mail when a new error occurs. This works fine on my development server, but on the production server no e-mail is sent. The exception is logged on the production server, it's just the e-mail that does not get sent. Here are my elmah configuration settings: <elmah> <security allowRemoteAccess="yes"/> <errorMail from="<MYGOOGLELOGIN>@googlemail.com" to="<MYGOOGLELOGIN>@googlemail.com" subject="ERROR From Elmah" async="false" smtpPort="587" useSsl="true" smtpServer="smtp.gmail.com" userName="<MYGOOGLELOGIN>@googlemail.com" password="<MYGOOGLEPASSWORD>" /> </elmah> I've tried different mail servers, both local and remote, and I tried both synchronous and asynchronous mail sending but to no avail. Now I don't have the slightest idea how to proceed (apart from debugging Elmah on my production server, which seems like a lot of effort to set up). Please help! Thanks, Adrian Edit: I might also add that I tried switching off the firewall on the production server, but that did not make any difference either.

    Read the article

  • GCC - How to realign stack?

    - by psihodelia
    I try to build an application which uses pthreads and __m128 SSE type. According to GCC manual, default stack alignment is 16 bytes. In order to use __m128, the requirement is the 16-byte alignment. My target CPU supports SSE. I use a GCC compiler which doesn't support runtime stack realignment (e.g. -mstackrealign). I cannot use any other GCC compiler version. My test application looks like: #include <xmmintrin.h> #include <pthread.h> void *f(void *x){ __m128 y; ... } int main(void){ pthread_t p; pthread_create(&p, NULL, f, NULL); } The application generates an exception and exits. After a simple debugging (printf "%p", &y), I found that the variable y is not 16-byte aligned. My question is: how can I realign the stack properly (16-byte) without using any GCC flags and attributes (they don't help)? Should I use GCC inline Assembler within this thread function f()?

    Read the article

  • strange redefined symbols

    - by Chris H
    I included this header into one of my own: http://codepad.org/lgJ6KM6b When I compiled I started getting errors like this: CMakeFiles/bin.dir/SoundProjection.cc.o: In function `Gnuplot::reset_plot()': /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/new:105: multiple definition of `Gnuplot::reset_plot()' CMakeFiles/bin.dir/main.cc.o:project/gnuplot-cpp/gnuplot_i.hpp:962: first defined here CMakeFiles/bin.dir/SoundProjection.cc.o: In function `Gnuplot::set_smooth(std::basic_string, std::allocator const&)': project/gnuplot-cpp/gnuplot_i.hpp:1041: multiple definition of `Gnuplot::set_smooth(std::basic_string, std::allocator const&)' CMakeFiles/bin.dir/main.cc.o:project/gnuplot-cpp/gnuplot_i.hpp:1041: first defined here CMakeFiles/bin.dir/SoundProjection.cc.o:/usr/include/eigen2/Eigen/src/Core/arch/SSE/PacketMath.h:41: multiple definition of `Gnuplot::m_sGNUPlotFileName' I know it's hard to see in this mess, but look at where the redefinitions are taking place. They take place in files like /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/new:105. How is the new operator getting information about a gnuplot header? I can't even edit that file. How could that ever even be possible? I'm not even sure how to start debugging this. I hope I've provided enough information. I wasn't able to reproduce this in a small project. I mostly just looking for tips on how to find out why this is happening, and how to track it down. Thanks.
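
    For reference, a cut-down, hypothetical header showing the pattern that produces these errors and the fix: a member function defined (not just declared) outside the class body in a header that several .cc files include violates the one-definition rule, and the linker attributes the clash to whatever file and line the object code happened to be associated with - hence the bogus-looking references to <new> and PacketMath.h. Marking the out-of-class definitions inline, or moving them into a single .cpp, resolves it.

      // gnuplot_sketch.hpp (hypothetical cut-down header)
      #ifndef GNUPLOT_SKETCH_HPP
      #define GNUPLOT_SKETCH_HPP

      class Gnuplot {
      public:
          Gnuplot() : nplots_(0) {}
          Gnuplot& reset_plot();
      private:
          int nplots_;
      };

      // 'inline' is what makes this definition legal to include from several
      // translation units; without it, two .cc files each emit a copy.
      inline Gnuplot& Gnuplot::reset_plot()
      {
          nplots_ = 0;
          return *this;
      }

      #endif // GNUPLOT_SKETCH_HPP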

    Read the article

  • stack overflow problem in program

    - by Jay
    So I am currently getting a strange stack overflow exception when I try to run this program, which reads numbers from a list in a data/text file and inserts them into a binary search tree. The weird thing is that the program works when I have a list of 4095 numbers in random order. However, when I have a list of 4095 numbers in increasing order (so it makes a linear search tree), it throws a stack overflow message. The problem is not the static count variable, because even when I removed it and put t = new BinaryNode(x,1), it still gave a stack overflow exception. I tried debugging it, and it broke at if (t == NULL){ t = new BinaryNode(x,count); Here is the insert function. BinaryNode *BinarySearchTree::insert(int x, BinaryNode *t) { static long count=0; count++; if (t == NULL){ t = new BinaryNode(x,count); count=0; } else if (x < t->key){ t->left = insert(x, t->left); } else if (x > t->key){ t->right = insert(x, t->right); } else throw DuplicateItem(); return t; }
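
    A sorted input produces a degenerate, list-like tree, so the recursive insert goes 4095 levels deep and burns one stack frame per level; an iterative insert removes that depth-proportional stack usage. A minimal sketch (node type simplified, the project's DuplicateItem exception replaced with a standard one):

      #include <cstddef>
      #include <stdexcept>

      struct BinaryNode {
          int key;
          long count;
          BinaryNode *left;
          BinaryNode *right;
          BinaryNode(int k, long c) : key(k), count(c), left(NULL), right(NULL) {}
      };

      // Iterative insert: one loop iteration per level instead of one stack frame,
      // so a fully degenerate tree built from sorted keys cannot blow the stack.
      BinaryNode *insert(int x, BinaryNode *root)
      {
          if (root == NULL)
              return new BinaryNode(x, 1);

          BinaryNode *t = root;
          for (;;) {
              if (x < t->key) {
                  if (t->left == NULL) { t->left = new BinaryNode(x, 1); break; }
                  t = t->left;
              } else if (x > t->key) {
                  if (t->right == NULL) { t->right = new BinaryNode(x, 1); break; }
                  t = t->right;
              } else {
                  throw std::runtime_error("duplicate item");  // stands in for DuplicateItem()
              }
          }
          return root;
      }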

    Read the article

  • TableAdapter.Update not working

    - by Wesley
    Here is my function: private void btnSave_Click(object sender, EventArgs e) { wO_FlangeMillBundlesTableAdapter.Update(invClerkDataDataSet.WO_FlangeMillBundles); wO_HeadMillBundlesTableAdapter.Update(invClerkDataDataSet.WO_HeadMillBundles); wO_WebMillBundlesTableAdapter.Update(invClerkDataDataSet.WO_WebMillBundles); int rowsaffected = wO_MillTableAdapter.Update(invClerkDataDataSet.WO_Mill); MessageBox.Show(invClerkDataDataSet.WO_Mill.Rows[0]["GasReading"].ToString()); MessageBox.Show(rowsaffected.ToString()); } You can see the fourth update in the function uses the same functionality as the rest, I just have some debugging stuff added. The first three tables are bound to DataGridViews and work fine. The fourth table has it's members bound to various text boxes. When I change the value in the text box bound to the GasReading column and click save the first MessageBox does in fact show the new value, so it's making it into the dataset correctly. However, the rowsaffected is always showing 0 and the value in the actual database is not being updated. Can anyone see my problem? I understand that the problem must be elsewhere in my code since the four update methods are the same, but I just don't know where to start.
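
    A likely explanation is that the text-box bindings leave the bound DataRow in edit mode: its proposed value is what Rows[0]["GasReading"] shows, but the RowState is still Unchanged, so Update() writes nothing. A minimal sketch of the handler with the pending edit committed first (the binding-source name is a guess - use whatever the designer generated for WO_Mill):

      private void btnSave_Click(object sender, EventArgs e)
      {
          this.Validate();                    // run validation on the focused control
          wO_MillBindingSource.EndEdit();     // push pending text-box edits into the DataRow

          wO_FlangeMillBundlesTableAdapter.Update(invClerkDataDataSet.WO_FlangeMillBundles);
          wO_HeadMillBundlesTableAdapter.Update(invClerkDataDataSet.WO_HeadMillBundles);
          wO_WebMillBundlesTableAdapter.Update(invClerkDataDataSet.WO_WebMillBundles);

          int rowsAffected = wO_MillTableAdapter.Update(invClerkDataDataSet.WO_Mill);
          MessageBox.Show(rowsAffected.ToString());
      }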

    Read the article

  • TCP: Address already in use exception - possible causes for client port? NO PORT EXHAUSTION

    - by TomTom
    Hello, stupid problem: I get these from a client connecting to a server. Sadly, the setup is complicated, which makes debugging complex - and we are running out of options. The environment: a client/server system, both running on the same machine; the client is actually a service doing some database manipulation at specific times. The connection comes from C#, going through OleDb to an EasySoft JDBC driver to a custom-written JDBC server that then hosts logic in C++. Yeah, complex - but the third-party supplier decided to expose the extension mechanisms for their server through a JDBC interface. Not a lot can be done here ;) The symptom: at (ir)regular intervals we get an "Address already in use: connect" reported by the JDBC driver. They seem to come from one particular service we run. Now, I did read all the stuff about port exhaustion. This is why we have a little tool running now that counts ports and their states every minute. Last time this happened, we had an astonishing 370 ports in use, with the count rising to about 900 AFTER the error. We already patched the registry (it is a Windows machine) to allow more than the standard 5000 client ports, but even then, we are far, far from that limit to start with. Which is why I am asking here. Anyone have an idea what ELSE could cause this? It is a Windows 2003 Server machine, 64-bit. The only other thing I can see that may cause it (but this functionality is supposedly disabled) is Symantec Endpoint Protection, which is installed on the server - being capable of acting as a firewall, it could possibly intercept network traffic. I don't want to open a can of worms by pointing to Symantec prematurely (if pointing to Symantec can ever be seen as such). So, anyone have an idea what else may be the cause? Thanks

    Read the article

  • Running multiple applications in STM32 flash

    - by Richard
    Hey! I would like to have two applications in my STM32 flash, one is basically a boot and the other the 'main' application. I have figured out how to load each of them into different areas of flash, and after taking a memory dump everything looks like it is in the right place. So when I do a reset it loads the boot, all the boot does at the moment is jump to the application. Debugging the boot, this all appears to work correctly. However the problems arrives after i've made the jump to the application, it just executes one instruction (assembly) and then jumps back to the boot. It should stay in the application indefinitely. My question is then, where should I 'jump' to in the app? It seems that there are a few potential spots, such as the interrupt vectors, the reset handler, the main function of the app. Actually I've tried all of those with no success. Hopefully that makes sense, i'll update the question if not. thanks for your help! Richard Updates: I had a play around in the debugger and manually changed the program counter to the main of the application, and well that worked a charm, so it makes me think there is something wrong with my jump, why doesn't the program counter keep going after the jump? Actually it seems to be the PSR, the 'T' gets reset on the jump, if I set that again after the jump it continues on with the app as I desire Ok found a solution, seems that you need to have the PC LSB set to 1 when you do a branch or it falls into the 'ARM' mode (32 bit instruction instead of 16 bit instructions like in the 'thumb' mode. Quite an obscure little problem, thanks for letting me share it with you!
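
    For reference, a minimal C sketch of a bootloader jump that keeps the Thumb bit intact by branching through the reset-handler address taken from the application's vector table (its LSB is already 1 there). The application base address is an assumption for illustration, and housekeeping such as disabling interrupts and relocating SCB->VTOR is omitted.

      #include <stdint.h>

      #define APP_BASE  0x08004000u               /* hypothetical flash address of the 'main' image */

      typedef void (*app_entry_t)(void);

      static void jump_to_application(void)
      {
          const uint32_t *vectors = (const uint32_t *)APP_BASE;
          uint32_t app_stack = vectors[0];        /* word 0: the application's initial MSP          */
          uint32_t app_reset = vectors[1];        /* word 1: reset handler address, LSB already set */

          app_entry_t entry = (app_entry_t)app_reset;

          __asm volatile ("msr msp, %0" : : "r" (app_stack));  /* switch to the app's stack */
          entry();                                /* branches with bit 0 set, so the core stays in Thumb */
      }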

    Read the article

  • ArgumentOutOfRangeException when reading bytes from stream

    - by user345194
    I'm trying to read the response stream from an HttpWebResponse object. I know the length of the stream (_response.ContentLength); however, I keep getting the following exception: Specified argument was out of the range of valid values. Parameter name: size While debugging, I noticed that at the time of the error, the values were as such: length = 15032 //the length of the stream as defined by _response.ContentLength bytesToRead = 7680 //the number of bytes in the stream that still need to be read bytesRead = 7680 //the number of bytes that have been read (offset) body.length = 15032 //the size of the byte[] the stream is being copied to The peculiar thing is that the bytesToRead and bytesRead variables are ALWAYS 7680, regardless of the size of the stream (contained in the length variable). Any ideas? Code: int length = (int)_response.ContentLength; byte[] body = null; if (length > 0) { int bytesToRead = length; int bytesRead = 0; try { body = new byte[length]; using (Stream stream = _response.GetResponseStream()) { while (bytesToRead > 0) { // Read may return anything from 0 to length. int n = stream.Read(body, bytesRead, length); // The end of the file is reached. if (n == 0) break; bytesRead += n; bytesToRead -= n; } stream.Close(); } } catch (Exception exception) { throw; } } else { body = new byte[0]; } _responseBody = body;
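
    The exception comes from the count argument: the loop passes length on every call, so as soon as bytesRead is greater than zero, offset + count exceeds the buffer and Stream.Read throws. A minimal sketch of the corrected loop (same fields as in the snippet above), passing only the bytes still missing:

      int length = (int)_response.ContentLength;
      byte[] body = new byte[Math.Max(length, 0)];

      if (length > 0)
      {
          int bytesRead = 0;
          using (Stream stream = _response.GetResponseStream())
          {
              while (bytesRead < length)
              {
                  // Ask only for the bytes that are still missing.
                  int n = stream.Read(body, bytesRead, length - bytesRead);
                  if (n == 0)
                      break;          // stream ended earlier than ContentLength promised
                  bytesRead += n;
              }
          }
      }
      _responseBody = body;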

    Read the article

  • HttpUtility.HtmlDecode driving me crazy!!!!

    - by Savvas Sopiadis
    Hi everybody! This situation is driving me crazy: the following snippet does not work (as it should) ... string preResult = doc.DocumentNode.SelectSingleNode("//textarea[@name='utrans']").InnerText return HttpUtility.HtmlDecode(preResult); ... The first line assigns a value (e.g. "&lt;b&gt; Dummy value: &lt;/ b&gt;") into preResult (that's expected). BUT the next line gives AGAIN the same value!!! It should return "<b> Dummy value: </ b>". Debugging these lines, I thought to copy and paste the value directly into HttpUtility.HtmlDecode() and guess what... it worked!!! I got the expected value! Of course this is useless, but it proves something weird is going on... what?! Has anybody faced the same situation? (dev. env. VS2008, .NET 3.5 SP1) Thanks in advance

    Read the article

  • Show me your Linq to SQL architectures!

    - by Brad Heller
    I've been using Linq to SQL for a new implementation that I've been working on. I have about 5000 lines of code and am a little ways from a solid demo. I've been pretty satisfied with Linq to SQL so far -- the tools are excellent and pretty painless, and it allows you to get a DAL up and running quickly. That said, there are some major drawbacks that I just keep hitting over and over again. Namely how to handle separation of concerns between my DAL and my business layer, and juggling that with different data contexts. Here is the architecture I've been using: My repositories do all my data access and they return Linq to SQL objects. Each of my Linq to SQL objects implements an IDetachable interface. A typical implementation looks like this: partial class PaymentDetail : IDetachable { #region IDetachable Members public bool IsAttached { get { return PropertyChanging != null; } } public void Detach() { if (IsAttached) { PropertyChanged = null; PropertyChanging = null; Transaction.Detach(); } } #endregion } Every time I do a DAL operation in my repository I "detach" when I'm done with the object (and it should theoretically detach from any child objects) to remove the DataContext's context. Like I said, this works pretty well, but there are some edge cases that seem to be a big pain in the ass. For instance, my Transaction object has many PaymentDetails. Even when there are no PaymentDetails in that collection it's still attached to the DataContext's context! Thus, if I try to update (I update by Attach()ing to the object and then SubmitChanges()) I get that dreaded "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." message. Anyway, I'm starting to doubt that this technology was a good gamble. Has anyone got a decent architecture that they're willing to share? I'd really love to use this technology but I feel like I spend 1/3 of my time just debugging its quirks!

    Read the article

  • Linking AS code to symbols defined in an external SWC?

    - by Ender
    (apologies ahead of time, I only really know Flash; my Flex experience is basically nil. There may be a very standard and obvious workflow solution that Flex people know about) I have a number of UI elements that are graphically quite complex (they're not components, they're just Sprites). Since it takes a long time to compile them, I've been trying to move them into an external .swc. However, I want to associate some code with these classes, but I don't want to have to recompile the graphical assets every time I make a code change. At the moment I have it set up like this: UI elements are created in a separate FLA and exported to a SWC. In my primary FLA, I have actionscript classes that extend each of the graphical assets in the SWC. For example: external.swc: (some symbol defined in the Library and exported for actionscript in frame 1) class: com.foo.WidgetGraphic base: flash.display.Sprite main.fla: Widget.as: package com.foo { public class Widget extends WidgetGraphic { ... } } This works, but is time-consuming and prone to error. I'd rather be able to avoid having to inherit from each graphical asset, and just define them directly. Is there a better way to do what I'm trying to accomplish? Note: the main concern here is compile time. I don't have any movies or audio or fonts, just a lot of vector art assets that appear to be slowing down my compilation time significantly. When I'm debugging I'm only making code changes, and would rather not have to keep recompiling the art...

    Read the article
