Search Results

Search found 14719 results on 589 pages for 'optimization level'.


  • Linux Kernel programming: trying to get vm_area_struct->vm_start crashes kernel

    - by confusedKid
    Hi, this is for an assignment at school, where I need to determine the size of the processes on the system using a system call. My code is as follows: ... struct task_struct *p; struct vm_area_struct *v; struct mm_struct *m; read_lock(&tasklist_lock); for_each_process(p) { printk("%ld\n", p->pid); m = p->mm; v = m->mmap; long start = v->vm_start; printk("vm_start is %ld\n", start); } read_unlock(&tasklist_lock); ... When I run a user level program that calls this system call, the output that I get is: 1 vm_start is 134512640 2 EIP: 0073:[<0806e352] CPU: 0 Not tainted ESP: 007b:0f7ecf04 EFLAGS: 00010246 Not tainted EAX: 00000000 EBX: 0fc587c0 ECX: 081fbb58 EDX: 00000000 ESI: bf88efe0 EDI: 0f482284 EBP: 0f7ecf10 DS: 007b ES: 007b 081f9bc0: [<08069ae8] show_regs+0xb4/0xb9 081f9bec: [<080587ac] segv+0x225/0x23d 081f9c8c: [<08058582] segv_handler+0x4f/0x54 081f9cac: [<08067453] sig_handler_common_skas+0xb7/0xd4 081f9cd4: [<08064748] sig_handler+0x34/0x44 081f9cec: [<080648b5] handle_signal+0x4c/0x7a 081f9d0c: [<08066227] hard_handler+0xf/0x14 081f9d1c: [<00776420] 0x776420 Kernel panic - not syncing: Kernel mode fault at addr 0x0, ip 0x806e352 EIP: 0073:[<400ea0f2] CPU: 0 Not tainted ESP: 007b:bf88ef9c EFLAGS: 00000246 Not tainted EAX: ffffffda EBX: 00000000 ECX: bf88efc8 EDX: 080483c8 ESI: 00000000 EDI: bf88efe0 EBP: bf88f038 DS: 007b ES: 007b 081f9b28: [<08069ae8] show_regs+0xb4/0xb9 081f9b54: [<08058a1a] panic_exit+0x25/0x3f 081f9b68: [<08084f54] notifier_call_chain+0x21/0x46 081f9b88: [<08084fef] __atomic_notifier_call_chain+0x17/0x19 081f9ba4: [<08085006] atomic_notifier_call_chain+0x15/0x17 081f9bc0: [<0807039a] panic+0x52/0xd8 081f9be0: [<080587ba] segv+0x233/0x23d 081f9c8c: [<08058582] segv_handler+0x4f/0x54 081f9cac: [<08067453] sig_handler_common_skas+0xb7/0xd4 081f9cd4: [<08064748] sig_handler+0x34/0x44 081f9cec: [<080648b5] handle_signal+0x4c/0x7a 081f9d0c: [<08066227] hard_handler+0xf/0x14 081f9d1c: [<00776420] 0x776420 The first process (pid = 1) gave me the vm_start without any problems, but when I try to access the second process, the kernel crashes. Can anyone tell me what's wrong, and maybe how to fix it as well? Thanks a lot! (sorry for the bad formatting....) edit: This is done in a Fedora 2.6 core in an uml environment.
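
    A detail worth noting, though it is not confirmed in the thread: kernel threads have no userspace address space, so their task_struct->mm pointer is NULL, and pid 2 on a 2.6 system is almost certainly such a thread. Dereferencing m->mmap then faults at address 0x0, which matches the panic shown above. Below is a minimal sketch of the same loop with that case skipped (2.6-era field names; a fuller version would also take the mm's mmap_sem, or use get_task_mm()/mmput(), before walking the VMA list):

        /* Sketch only: the loop from the question, but skipping tasks that
         * have no userspace mm (kernel threads).  Needs <linux/sched.h> and
         * <linux/mm.h> in the surrounding syscall/module source. */
        struct task_struct *p;
        struct vm_area_struct *v;
        struct mm_struct *m;

        read_lock(&tasklist_lock);
        for_each_process(p) {
            printk("%d\n", p->pid);            /* pid_t is an int, so %d */
            m = p->mm;
            if (m == NULL)                     /* kernel thread: nothing to report */
                continue;
            v = m->mmap;
            if (v != NULL)
                printk("vm_start is %lu\n", v->vm_start);  /* unsigned long */
        }
        read_unlock(&tasklist_lock);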

    Read the article

  • Problem merging similar XML files with XSL

    - by LOlliffe
    I have two documents that I need to merge, that happen in a way that I don't seem to be able to find covered in other examples. Namely, that it needs to match not only on a node's attribute at one level, but also on the value of an attribute a node level below that, to get that node's value. I'm trying to take this sample: <?xml version="1.0" encoding="UTF-8" ?> <marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">12345</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Art</marc:subfield> </marc:datafield> <marc:datafield tag="949" ind1=" " ind2=" "> <marc:subfield code="i">Review of conference proceedings</marc:subfield> </marc:datafield> </marc:record> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">54321</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Byzantine</marc:subfield> </marc:datafield> </marc:record> </marc:collection> And when the value of "datafield" '035', "subfield" 'a' matches e.g. "12345" <marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim" xmlns:fn="http://www.w3.org/2005/xpath-functions" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:fo="http://www.w3.org/1999/XSL/Format"> <marc:record> <marc:datafield ind2=" " ind1=" " tag="035"> <marc:subfield code="a">12345</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Smith, John, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="x">Historiens et critiques d'art</marc:subfield> <marc:subfield code="x">Dietrichson, Lorentz, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">General works</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="b">Historiens et critiques d'art</marc:subfield> <marc:subfield code="b">Smith, John, 1834-1917</marc:subfield> </marc:datafield> </marc:record> <marc:record> <marc:datafield ind2=" " ind1=" " tag="035"> <marc:subfield code="a">54321</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Lange, Julius Henrik, 1838-1896</marc:subfield> </marc:datafield> </marc:record> </marc:collection> The result should be: <?xml version="1.0" encoding="UTF-8" ?> <marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">12345</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Art</marc:subfield> </marc:datafield> 
<marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Smith, John, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="x">Historiens et critiques d'art</marc:subfield> <marc:subfield code="x">Dietrichson, Lorentz, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">General works</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="b">Historiens et critiques d'art</marc:subfield> <marc:subfield code="b">Smith, John, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield tag="949" ind1=" " ind2=" "> <marc:subfield code="i">Review of conference proceedings</marc:subfield> </marc:datafield> </marc:record> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">54321</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Byzantine</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Lange, Julius Henrik, 1838-1896</marc:subfield> </marc:datafield> </marc:record> </marc:collection> I've tried using examples that I've found that did lookups, but none of them seemed to work. I didn't include any of my XSL, because all of my results were disasterous. I keep looking at it, like it must be simple, but I'm just not getting any decent results. Any help or pointers would be greatly appreciated. Thanks!

    Read the article

  • How to fix security exception when using recaptcha on MVC site

    - by camainc
    I followed this excellent blog post to implement recaptcha on my MVC site: http://devlicio.us/blogs/derik_whittaker/archive/2008/12/02/using-recaptcha-with-asp-net-mvc.aspx I converted the code to VB, and everything seems to compile ok. However, when the code gets to the place where the recapture is about to be generated, I get a security exception. Here is the function where the exception occurs (on the last line in the function): <Extension()> _ Public Function GenerateCaptcha(ByVal htmlHelper As HtmlHelper) As MvcHtmlString Dim captchaControl As New Recaptcha.RecaptchaControl With captchaControl .ID = "recaptcha" .Theme = "blackglass" .PublicKey = "6Lcv9AsAAAAAALCSZNRfWFmrKjw2AR-yuZAL84Bd" .PrivateKey = "6Lcv9AsAAAAAAHCbRujWcZzrY0z6G_HIMvFyYEPR" End With Dim htmlWriter As New HtmlTextWriter(New IO.StringWriter) captchaControl.RenderControl(htmlWriter) Return MvcHtmlString.Create(htmlWriter.InnerWriter.ToString()) End Function The exception is this: Security Exception Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file. Exception Details: System.Security.SecurityException: Request for the permission of type 'System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. Has anyone else seen this exception, and if so, how did you fix it? Thanks

    Read the article

  • nHibernate strategies in a web farm

    - by Pete Nelson
    Our current project at work is a new MVC web site that will use a WCF service primarily to access a 3rd party billing system via a web service as well as a small SQL database for user personalization. The WCF service uses nHibernate for the SQL database. We'd like to implement some sort of web farm for load balancing as well as failover and maintenance. I'm trying to decide the best way to handle nHibernate's caching and database concurrency if there are multiple WCF services running. Some scenarios I've been thinking about... 1) Multiple IIS servers, one WCF server. With this setup, the WCF server would be a single point of failure, but there would be no issues with nHibernate caching or database concurrency. 2) Multiple IIS servers, each with it's own WCF service. This removes a single point of failure, but now nHibernate on one machine would not know about database changes done by another machine. Some solutions to number 2 would be to use an IStatelessSession so we're not doing any caching and nHibernate is always fetching directly from the database. This might be the most feasible as our personalization database has very few objects in it. I'm also considering a 2nd-level cache such as memcached or Velocity, but it may be overkill for this system. I'm putting this out there to see if anyone has experience doing this sort of architecture and to get some ideas for a solution. Thanks!

    Read the article

  • Update PEAR on MAMP MacOsX

    - by Jevgeni Smirnov
    Current I am trying to install phpunit on my mac os x and mamp server: pear config-set auto_discover 1 pear install pear.phpunit.de/PHPUnit Errors which I got during installation: Validation Error: This package.xml requires PEAR version 1.9.4 to parse properly, we are version 1.9.2 pear upgrade pear Nothing to upgrade UPDATE 1 This is my pear config. I assume that I messed up local and mamp installs(I didn't know that mamp also has pear, so I installed local one). I suppose something wrong with bin_dir, php_dir and other paths? Keefir-Samolet-iMac:MAMP jevgenismirnov$ pear config-show Configuration (channel pear.php.net): ===================================== Auto-discover new Channels auto_discover 1 Default Channel default_channel pear.php.net HTTP Proxy Server Address http_proxy PEAR server [DEPRECATED] master_server pear.php.net Default Channel Mirror preferred_mirror pear.php.net Remote Configuration File remote_config PEAR executables directory bin_dir /Users/jevgenismirnov/pear/bin PEAR documentation directory doc_dir /Users/jevgenismirnov/pear/docs PHP extension directory ext_dir /Applications/MAMP/bin/php/php5.3.6/lib/php/extensions/no-debug-non-zts-20090626/ PEAR directory php_dir /Users/jevgenismirnov/pear/share/pear PEAR Installer cache directory cache_dir /var/folders/k7/xpwbcbrs1xs8tlxjk5mvkwrr0000gp/T//pear/cache PEAR configuration file cfg_dir /Users/jevgenismirnov/pear/cfg directory PEAR data directory data_dir /Users/jevgenismirnov/pear/data PEAR Installer download download_dir /tmp/pear/install directory PHP CLI/CGI binary php_bin /Applications/MAMP/bin/php/php5.3.6/bin/php php.ini location php_ini --program-prefix passed to php_prefix PHP's ./configure --program-suffix passed to php_suffix PHP's ./configure PEAR Installer temp directory temp_dir /tmp/pear/install PEAR test directory test_dir /Users/jevgenismirnov/pear/tests PEAR www files directory www_dir /Users/jevgenismirnov/pear/www Cache TimeToLive cache_ttl 3600 Preferred Package State preferred_state stable Unix file mask umask 22 Debug Log Level verbose 1 PEAR password (for password maintainers) Signature Handling Program sig_bin /usr/local/bin/gpg Signature Key Directory sig_keydir /Applications/MAMP/bin/php/php5.3.6/conf/pearkeys Signature Key Id sig_keyid Package Signature Type sig_type gpg PEAR username (for username maintainers) User Configuration File Filename /Users/jevgenismirnov/.pearrc System Configuration File Filename /Applications/MAMP/bin/php/php5.3.6/conf/pear.conf

    Read the article

  • Is an LSA MSV1_0 subauthentication package needed for some impersonation use cases?

    - by Chris Sears
    Greetings, I'm working with a vendor who has implemented some code that uses a Windows LSA MSV1_0 subauthentication package (MSDN info if you're interested: http://msdn.microsoft.com/en-us/library/aa374786(VS.85).aspx ) and I'm trying to figure out if it's necessary. As far as I can tell, the subauthentication routine and filter allow for hooking or customizing the standard LSA MSV1_0 logon event processing. The issue is that I don't understand why the vendor's product would need these capabilities. I've asked them and they said they use it to perform impersonation. The product definitely does need to do impersonation, but based on my limited win32 knowledge, they could get the functionality they need using the normal auth APIs (LsaLogonUser, ImpersonateLoggedOnUser, etc) without the subauthentication package. Furthermore, I've worked with a number of similar products that all do impersonation, and this is the only one that's used a subauthentication package. If you're wondering why I would care, a previous version of the product had a bug in the subauthentication package dll that would cause lockups or bluescreens. That makes me rather nervous and has me questioning the use of such a low-level, kernel sensitive interface. I'd like to go back to the vendor and say "There's no way you could need an LSA subauth package for impersonation - take it out", but I'm not sure I understand the use cases and possible limitations of the standard win32 authentication/impersonation APIs well enough to make that claim definitively. So, to the win32 security gurus out there, is there any reason you would need an LSA MSV1_0 subauthentication package if all you were doing is impersonation? Thanks in advance for any thoughts!
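
    For comparison, the ordinary Win32 impersonation path the question alludes to involves no subauthentication package at all. A minimal sketch follows; the credentials are placeholders, error handling is reduced to printfs, and LogonUser stands in for LsaLogonUser, which offers finer control but feeds the same token into ImpersonateLoggedOnUser:

        #include <windows.h>
        #include <stdio.h>

        /* Minimal sketch: obtain a token for a user, impersonate it on the
         * current thread, do some work, then revert.  No LSA subauthentication
         * package is involved anywhere in this path. */
        int main(void)
        {
            HANDLE hToken = NULL;

            /* Placeholder credentials; a real service would obtain these securely. */
            if (!LogonUser(TEXT("someuser"), TEXT("SOMEDOMAIN"), TEXT("password"),
                           LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT,
                           &hToken)) {
                printf("LogonUser failed: %lu\n", GetLastError());
                return 1;
            }

            if (!ImpersonateLoggedOnUser(hToken)) {
                printf("ImpersonateLoggedOnUser failed: %lu\n", GetLastError());
                CloseHandle(hToken);
                return 1;
            }

            /* ... work performed here runs as the impersonated user ... */

            RevertToSelf();      /* restore the process identity */
            CloseHandle(hToken);
            return 0;
        }

    Whether that is sufficient for the vendor's scenario is exactly the open question; the sketch only illustrates that impersonation by itself does not require hooking MSV1_0.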

    Read the article

  • Compression algorithm for IEEE-754 data

    - by David Taylor
    Does anyone have a recommendation for a good compression algorithm that works well with double-precision floating point values? We have found that the binary representation of floating point values results in very poor compression rates with common compression programs (e.g. Zip, RAR, 7-Zip, etc.). The data we need to compress is a one-dimensional array of 8-byte values sorted in monotonically increasing order. The values represent temperatures in Kelvin with a span typically under 100 degrees. The number of values ranges from a few hundred to at most 64K. Clarifications: All values in the array are distinct, though repetition does exist at the byte level due to the way floating point values are represented. A lossless algorithm is desired since this is scientific data. Conversion to a fixed-point representation with sufficient precision (~5 decimals) might be acceptable provided there is a significant improvement in storage efficiency. Update: I found an interesting article on this subject, though I'm not sure how applicable the approach is to my requirements. http://users.ices.utexas.edu/~burtscher/papers/dcc06.pdf
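
    One approach worth prototyping, offered as an assumption rather than something from the question: because the values are positive and sorted, their raw IEEE-754 bit patterns are also non-decreasing when read as unsigned 64-bit integers, so delta-encoding the bit patterns before handing the bytes to a general-purpose compressor tends to expose long runs of zero bytes. A rough lossless sketch using zlib (the sample array is a stand-in for the real data):

        /* Sketch: delta-encode the raw bit patterns of sorted, non-negative
         * doubles, then compress the delta stream with zlib.
         * Build with: cc deltazip.c -lz */
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <zlib.h>

        int main(void)
        {
            /* Stand-in data: a few sorted temperatures in Kelvin. */
            double values[] = {271.15, 271.62, 272.04, 272.89, 273.15, 274.01};
            size_t n = sizeof(values) / sizeof(values[0]);

            /* Reinterpret as 64-bit integers and delta-encode; for positive
             * doubles the bit patterns sort the same way as the values, so
             * the deltas are small and their high bytes are mostly zero. */
            uint64_t *deltas = malloc(n * sizeof(uint64_t));
            uint64_t prev = 0;
            for (size_t i = 0; i < n; i++) {
                uint64_t bits;
                memcpy(&bits, &values[i], sizeof(bits));
                deltas[i] = bits - prev;
                prev = bits;
            }

            /* Compress the delta stream. */
            uLong  src_len = (uLong)(n * sizeof(uint64_t));
            uLongf dst_len = compressBound(src_len);
            Bytef *dst = malloc(dst_len);
            if (compress(dst, &dst_len, (const Bytef *)deltas, src_len) != Z_OK) {
                fprintf(stderr, "compress failed\n");
                return 1;
            }
            printf("%lu raw bytes -> %lu compressed bytes\n",
                   (unsigned long)src_len, (unsigned long)dst_len);

            free(dst);
            free(deltas);
            return 0;
        }

    Decompression reverses the steps: uncompress, prefix-sum the deltas, and reinterpret the bits as doubles. On the toy array the zlib header overhead dominates; any benefit would only show on the real few-hundred-to-64K arrays, so treat this purely as a starting point next to specialised schemes like the one in the linked paper.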

    Read the article

  • Getting started with character and text processing (encoding, regular expressions)

    - by TK
    I'd like to learn the foundations of encodings, characters, and text. Understanding these is important for dealing with large sets of text, whether they are log files or the text sources used to build algorithms for collective intelligence. My current knowledge is pretty basic: something like "As long as I use UTF-8, I'm okay." I'm not saying I need to learn about advanced topics right away, but I need to know: bit- and byte-level knowledge of encodings; characters and alphabets not used in English; multi-byte encodings (I understand some Chinese and Japanese, and parsing them is important); regular expressions; algorithms for text processing; and parsing natural languages. I also need an understanding of mathematics and corpus linguistics. The current and future web (semantic, intelligent, real-time) needs to process, parse, and analyze large amounts of text. I'm looking for some resources (maybe books?) that would get me started on some of these items. (I've found many helpful discussions on regular expressions here on Stack Overflow, so you don't need to suggest resources on that topic.)
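
    For the bit- and byte-level item specifically, it helps to dump the bytes of a short non-ASCII string once and see the multi-byte sequences directly. A tiny illustrative sketch, with nothing project-specific assumed:

        /* Byte-level view of UTF-8: 'é' (U+00E9) is stored as the two bytes
         * 0xC3 0xA9, so byte count and character count diverge outside ASCII. */
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *s = "caf\xC3\xA9";               /* "café" encoded as UTF-8 */
            printf("strlen = %zu bytes\n", strlen(s));   /* 5 bytes, 4 characters */
            for (size_t i = 0; i < strlen(s); i++)
                printf("byte %zu: 0x%02X\n", i, (unsigned char)s[i]);
            return 0;
        }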

    Read the article

  • Using T4 templates to add custom code to EF4 generated entities?

    - by David Veeneman
    I am getting started with Entity Framework 4, using model-first development. I am building a simple WPF demo app to learn the framework. My app has two entities, Topic and Note. A Topic is a discussion topic; it has Title, Text, and DateRevised properties. Topic also has a Notes collection property. a Note has DateCreated and Text properties. I have used EF4 to create an EDM and data store for the app. Now I need to add just a bit of intelligence to the entities. For example, the property setter for the Topic.Text property needs to update the Topic.DateRevised property, and a Note needs to set its DateCreated property when it is instantiated--pretty simple stuff. I assume that I can't modify the generated classes directly, because my code would be lost if the entities are re-generated. Is this the sort of thing that I can implement by modifying the T4 template that EF4 uses to generate the entities? In other words, can a T4 template be modified to add my code for performing these tasks to the entities that it generates? Can you refer me to a good tutorial or explanation of how to get started? Most of what I have found so far talks about how to add a tt file to an EDM, so I can do that. What I am looking for is a resource that I can use to get to the next level, assuming that a T4 template can be used to customize generated entities as I have described. Thanks for your help.

    Read the article

  • I want to merge two PostScript Documents, pagewise. How?

    - by Peter Miehle
    hello, i have a tricky question, so i need to describe my problem: i need to print 2-sided booklets (a third of a paper) on normal paper (german A4, but letter is okay also) and cut the paper afterwards. The Pages are in a Postscript Level 2 File (generated by an ancient printer driver, so no chance to patch that, except ps2ps) generated by me with the ancient OS's Printing driver facilities (GpiMove, GpiLine, GpiText etc). I do not want to throw away two-thirds of the paper, so my idea is: Take file one, two and three, merge them (how?) on new double-sided papers by translate/move file two and three by one resp. two thirds and print the resulting new pages. If it helps, I can manage to print one page of the booklet per file. I cannot "speak" postscript natively, but I am capable of parsing and merging and manipulating files programmaticly. Maybe you can hint me to a webpage. I've read through the manuals on adobe's site and followed the links on www.inkguides.com/postscript.asp Maybe there are techniques with PDF that would help? I can translate ps2pdf. Thanks for help Peter Miehle PS: my current solution: i.e. 8 pages: print page 1, 4 and 7 on page one, 2,5,8 on page two and 3,6,blank on page three, cut the paper and restack. But i want to use a electrical cutting machine, which works better with thicker stacks of paper.

    Read the article

  • Strange rare out-of-order data received using Indy

    - by Jim
    We're having a bizarre problem with Indy10 where two large strings (a few hundred characters each) that we send out one after the other are appearing at the other end intertwined oddly. This happens extremely infrequently. Each string is a complete XML message terminated with a LF and in general the READ process reads an entire XML message, returning when it sees the LF. The call to actually send the message is protected by a critical section around the call to the IOHandler's writeln method and so it is not possible for two threads to send at the same time. (We're certain the critical section is implemented/working properly). This problem happens very rarely. The symptoms are odd...when we send string A followed by string B what we received at the other end (on the rare occasions where we have failure) is the trailing section of string A by itself (i.e., there's a LF at the end of it) followed by the leading section of string A and then the entire string B followed by a single LF. We've verified that the "timed out" property is not true after the partial read - we log that property after every read that returns content. Also, we know there are no embedded LF characters in the string, as we explicitly replace all non-alphanumeric characters in the string with spaces before appending the LF and sending it. We have log mechanisms inside the critical sections on both the transmission and receiving ends and so we can see this behavior at the "wire". We're completely baffled and wondering (although always the lowest possibility) whether there could be some low-level Indy issues that might cause this issue, e.g., buffers being sent in the wrong order....very hard to believe this could be the issue but we're grasping at straws. Does anyone have any bright ideas?

    Read the article

  • POSIX AIO Library and Callback Handlers

    - by Charles Salvia
    According to the documentation on aio_read/write, there are basically 2 ways that the AIO library can inform your application that an async file I/O operation has completed. Either 1) you can use a signal, 2) you can use a callback function I think that callback functions are vastly preferable to signals, and would probably be much easier to integrate into higher-level multi-threaded libraries. Unfortunately, the documentation for this functionality is a mess to say the least. Some sources, such as the man page for the sigevent struct, indicate that you need to set the sigev_notify data member in the sigevent struct to SIGEV_CALLBACK and then provide a function handler. Presumably, the handler is invoked in the same thread. Other documentation indicates you need to set sigev_notify to SIGEV_THREAD, which will invoke the callback handler in a newly created thread. In any case, on my Linux system (Ubuntu with a 2.6.28 kernel) SIGEV_CALLBACK doesn't seem to be defined anywhere, but SIGEV_THREAD works as advertised. Unfortunately, creating a new thread to invoke the callback handler seems really inefficient, especially if you need to invoke many handlers. It would be better to use an existing pool of threads, similar to the way most network I/O event demultiplexers work. Some versions of UNIX, such as QNX, include a SIGEV_SIGNAL_THREAD flag, which allows you to invoke handlers using a specified existing thread, but this doesn't seem to be available on Linux, nor does it seem to even be a part of the POSIX standard. So, is it possible to use the POSIX AIO library in a way that invokes user handlers in a pre-allocated background thread/threadpool, rather than creating/destroying a new thread everytime a handler is invoked?
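
    For reference, a minimal sketch of the SIGEV_THREAD arrangement described above; the file name and buffer size are placeholders, and the sleep() only keeps the demo alive long enough for the callback to fire. It does not remove the thread-per-callback cost, it just shows the pieces involved:

        /* Sketch: POSIX AIO read with a SIGEV_THREAD completion callback.
         * Build with: cc aiodemo.c -lrt
         * The callback runs on a thread created by the AIO implementation. */
        #include <aio.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static char buffer[4096];

        static void on_complete(union sigval sv)
        {
            struct aiocb *cb = sv.sival_ptr;
            if (aio_error(cb) == 0)
                printf("read %zd bytes\n", aio_return(cb));
            else
                fprintf(stderr, "aio_read failed: %d\n", aio_error(cb));
        }

        int main(void)
        {
            int fd = open("data.bin", O_RDONLY);   /* placeholder file name */
            if (fd < 0) { perror("open"); return 1; }

            struct aiocb cb;
            memset(&cb, 0, sizeof(cb));
            cb.aio_fildes = fd;
            cb.aio_buf    = buffer;
            cb.aio_nbytes = sizeof(buffer);
            cb.aio_offset = 0;

            /* Ask for a callback on a thread instead of a signal. */
            cb.aio_sigevent.sigev_notify          = SIGEV_THREAD;
            cb.aio_sigevent.sigev_notify_function = on_complete;
            cb.aio_sigevent.sigev_value.sival_ptr = &cb;

            if (aio_read(&cb) != 0) { perror("aio_read"); return 1; }

            sleep(1);   /* crude: give the completion callback time to run */
            close(fd);
            return 0;
        }

    If a fixed pool is a hard requirement, one application-level pattern is to keep the notification handler trivial: have it only enqueue the completed aiocb onto a queue drained by worker threads the application owns, so the expensive work never runs on the implementation-created thread. That is a workaround, though, not something the POSIX API offers directly.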

    Read the article

  • MSTest Test Context Exception Handling

    - by Flip
    Is there a way that I can get to the exception that was handled by the MSTest framework using the TestContext or some other method on a base test class? If an unhandled exception occurs in one of my tests, I'd like to spin through all the items in the exception.Data dictionary and display them to the test result to help me figure out why the test failed (we usually add data to the exception to help us debug in the production env, so I'd like to do the same for testing). Note: I am not testing that an exception was SUPPOSED TO HAPPEN (I have other tests for that), I am testing a valid case, I just need to see the exception data. Here is a code example of what I'm talking about. [TestMethod] public void IsFinanceDeadlineDateValid() { var target = new BusinessObject(); SetupBusinessObject(target); //How can I capture this in the text context so I can display all the data //in the exception in the test result... var expected = 100; try { Assert.AreEqual(expected, target.PerformSomeCalculationThatMayDivideByZero()); } catch (Exception ex) { ex.Data.Add("SomethingImportant", "I want to see this in the test result, as its important"); ex.Data.Add("Expected", expected); throw ex; } } I understand there are issues around why I probably shouldn't have such an encapsulating method, but we also have sub tests to test all the functionality of PerformSomeCalculation... However, if the test fails, 99% of the time, I rerun it passes, so I can't debug anything without this information. I would also like to do this on a GLOBAL level, so that if any test fails, I get the information in the test results, as opposed to doing it for each individual test. Here is the code that would put the exception info in the test results. public void AddDataFromExceptionToResults(Exception ex) { StringBuilder whereAmI = new StringBuilder(); var holdException = ex; while (holdException != null) { Console.WriteLine(whereAmI.ToString() + "--" + holdException.Message); foreach (var item in holdException.Data.Keys) { Console.WriteLine(whereAmI.ToString() + "--Data--" + item + ":" + holdException.Data[item]); } holdException = holdException.InnerException; } }

    Read the article

  • StarTeam trunk.

    - by Nix
    I have the unfortunate opportunity of doing source control via Borland's StarTeam. It unfortunately does very few things well, and one supreme weakness is its view management. I love SVN and come from an SVN mindset. Our issue is that after each production release we spend countless hours merging changes into a "production support" environment. Please don't harass me; this was not my doing. I inherited it and am trying to present a better way of managing the repository, and switching to a different SCM tool is not an option. Current setup: Product.1.0 (TRUNK, current production code; pending bug fixes live at this level) and Product.2.0 (the true trunk: anything checked in gets tested and then released in the next production cycle, and a lot of changes occur in this view). My proposal is to swap them: have all development done on the trunk (Production), tag on releases, and as needed create child views to represent production support bug fixes, e.g. Production and Production.2.0.SP.1. I cannot find any documentation to support the above proposal, so I am trying to get feedback on whether or not the change is a good idea and if there is anything you would recommend doing differently.

    Read the article

  • Rolling comments inside a log4net file

    - by Maestro1024
    Rolling comments inside a log4net file Is it possible to roll the data inside the log file? So I have an xml config file like <log4net> <appender name="LogFileAppender" type="log4net.Appender.RollingFileAppender"> <layout type="log4net.Layout.PatternLayout"> <ConversionPattern value="%d{yyyy-MM-dd hh:mm:ss} – %-5p: %m%n" /> </layout> <param name="File" value="c:\log.txt" /> <param name="AppendToFile" value="true" /> <rollingStyle value="Size" /> <maxSizeRollBackups value="1" /> <maximumFileSize value="5MB" /> <staticLogFileName value="false" /> </appender> <root> <level value="DEBUG" /> <appender-ref ref="LogFileAppender" /> </root> </log4net> What I want is to have a 5MB file that when it gets to the end will pull off the first line of the file and then add a new line at the end. Is that possible (I do not see it in the documentation)?

    Read the article

  • How to get local bzr commits to server?

    - by C.W.Holeman II
    lanchpad.net states that for project Emle - Electronic Mathematics Laboratory Equipment 2.0 series is the current focus of development This is what I have done so far: Set the launchpad.net project to import from the sourceforge.net project Emle (this actually set the launchpad.net project to mirror the sourceforge.net project rather than just inport the content once) Examined the launchpad.net project to see that the three commits (#1 - #3) which were done in the sourceorge.net project previousley made it into launchpad.net. Used bzrto get the launchpad.net project which I did while is was still set for mirroring. Made three changes and commits using bzr (#4 - #6). Was unable to see the changes on the launchpad.net site. Requested the mirroring to be stopped (it did). Here is an extract from lanchpad.net for project Emle 2.0 series showing that launchpad.net has #1 - #3: Code for this series The following branch has been registered as the mainline branch for this release series: lp:emle - C.W.Holeman II 3 revisions, 3 in the past month. Here on my system showing changes #1 - #6: $ bzr log --line 6: C.W.Holeman II 2010-02-27 #528096 Corrected setting of paramter value for emleDi... 5: C.W.Holeman II 2010-02-27 Minor refactor - improved comment regarding workaround... 4: C.W.Holeman II 2010-02-27 #529089 #529087 Index file html tag lang attribute cor... 3: cwhii 2010-02-25 {Emle 2.4-5 (BL0011)} New top levels: trunk and tags. 2: cwhii 2010-02-25 New top levels: trunk and tags. 1: cwhii 2010-02-25 New top level trunk and tags. How do I get the changes that are in bzr on my system to apply to launchpad.net?

    Read the article

  • Production settings file for log4j?

    - by James
    Here is my current log4j settings file. Are these settings ideal for production use or is there something I should remove/tweak or change? I ask because I was getting all my threads being hung due to log4j blocking. I checked my open file descriptors I was only using 113. # ***** Set root logger level to WARN and its two appenders to stdout and R. log4j.rootLogger=warn, stdout, R # ***** stdout is set to be a ConsoleAppender. log4j.appender.stdout=org.apache.log4j.ConsoleAppender # ***** stdout uses PatternLayout. log4j.appender.stdout.layout=org.apache.log4j.PatternLayout # ***** Pattern to output the caller's file name and line number. log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n # ***** R is set to be a RollingFileAppender. log4j.appender.R=org.apache.log4j.RollingFileAppender log4j.appender.R.File=logs/myapp.log # ***** Max file size is set to 100KB log4j.appender.R.MaxFileSize=102400KB # ***** Keep one backup file log4j.appender.R.MaxBackupIndex=5 # ***** R uses PatternLayout. log4j.appender.R.layout=org.apache.log4j.PatternLayout log4j.appender.R.layout.ConversionPattern=%p %t %d %c - %m%n #set httpclient debug levels log4j.logger.org.apache.component=ERROR,stdout log4j.logger.httpclient.wire=ERROR,stdout log4j.logger.org.apache.commons.httpclient=ERROR,stdout log4j.logger.org.apache.http.client.protocol=ERROR,stdout UPDATE*** Adding thread dump sample from all my threads (100) "pool-1-thread-5" - Thread t@25 java.lang.Thread.State: BLOCKED on org.apache.log4j.spi.RootLogger@1d45a585 owned by: pool-1-thread-35 at org.apache.log4j.Category.callAppenders(Category.java:201) at org.apache.log4j.Category.forcedLog(Category.java:388) at org.apache.log4j.Category.error(Category.java:302)

    Read the article

  • IE7 issue - cannot download streamed file when Automatic prompting for file downloads is disabled

    - by Jai ganesh K
    Hi, my application is J2EE (JSP/Servlet) based. I encounter an issue when I try to open a new window (pop-up) from a JSP and call a Servlet action (e.g. Streamer.do) which streams a PDF file inside that pop-up. Problem: while IE 7 - Tools - Internet Options - Security - Custom Level - Downloads - Automatic prompting for file downloads is Disabled, I am unable to download the file when the pop-up window opens (the Save/Open prompt does not come up). In contrast, when I enable this option, I am able to download, but the option may be disabled in some environments. When testing this in Mozilla Firefox 3.0/3.5 and IE6 it works fine without any settings change, and when I enable the option in IE7 the Save/Open prompt works correctly. This appears to be a problem specific to IE7. Can anybody help us with JavaScript or any working settings that do not depend on whether the "Automatic prompting for file downloads" option in IE7 is enabled? Any help with this would be much appreciated. Regards! Jai

    Read the article

  • An alternative to reading input from Java's System.in

    - by dvanaria
    I’m working on the UVa Online Judge problem set archive as a way to practice Java, and as a way to practice data structures and algorithms in general. They give an example input file to submit to the online judge to use as a starting point (it’s the solution to problem 100). Input from the standard input stream (java.lang.System.in) is required as part of any solution on this site, but I can’t understand the implementation of reading from System.in they give in their example solution. It’s true that the input file could consist of any variation of integers, strings, etc, but every solution program requires reading basic lines of text input from System.in, one line at a time. There has to be a better (simpler and more robust) method of gathering data from the standard input stream in Java than this: public static String readLn(int maxLg) { byte lin[] = new byte[maxLg]; int lg = 0, car = -1; String line = “”; try { while (lg < maxLg) { car = System.in.read(); if ((car < 0) || (car == ‘\n’)) { break; } lin[lg++] += car; } } catch (java.io.IOException e) { return (null); } if ((car < 0) && (lg == 0)) { return (null); // eof } return (new String(lin, 0, lg)); } I’m really surprised by this. It looks like something pulled directly from K&R’s “C Programming Language” (a great book regardless), minus the access level modifer and exception handling, etc. Even though I understand the implementation, it just seems like it was written by a C programmer and bypasses most of Java’s object oriented nature. Isn’t there a better way to do this, using the StringTokenizer class or maybe using the split method of String or the java.util.regex package instead?

    Read the article

  • What am I missing with log4net - No log file created

    - by Saif Khan
    I am trying to use log4net in a VB.NET app for some unknown reason it's not creating the log file. Here is my app.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> </configSections> <log4net> <appender name="FileAppender" type="log4net.Appender.FileAppender"> <file value="c:\log-file.txt" /> <appendToFile value="true" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" /> </layout> </appender> <root> <level value="ALL" /> <appender-ref ref="FileAppender" /> </root> </log4net> </configuration> Here is the app code Imports log4net Public Class Form1 Dim log As ILog Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click log.Error("test") End Sub Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load log4net.Config.XmlConfigurator.Configure() log = log4net.LogManager.GetLogger("TestThings") End Sub End Class "TestThings" is the name of the VS project. What am I missing?

    Read the article

  • Querying a self referencing join with NHibernate Linq

    - by Ben
    In my application I have a Category domain object. Category has a property Parent (of type category). So in my NHibernate mapping I have: <many-to-one name="Parent" column="ParentID"/> Before I switched to NHibernate I had the ParentId property on my domain model (mapped to the corresponding database column). This made it easy to query for say all top level categories (ParentID = 0): where(c => c.ParentId == 0) However, I have since removed the ParentId property from my domain model (because of NHibernate) so I now have to do the same query (using NHibernate.Linq) like so: public IList<Category> GetCategories(int parentId) { if (parentId == 0) return _catalogRepository.Categories.Where(x => x.Parent == null).ToList(); else return _catalogRepository.Categories.Where(x => x.Parent.Id == parentId).ToList(); } The real impact that I can see, is the sql generated. Instead of a simple 'select x,y,z from categories where parentid = 0' NHibernate generates a left outer join: SELECT this_.CategoryId as CategoryId4_1_, this_.ParentID as ParentID4_1_, this_.Name as Name4_1_, this_.Slug as Slug4_1_, parent1_.CategoryId as CategoryId4_0_, parent1_.ParentID as ParentID4_0_, parent1_.Name as Name4_0_, parent1_.Slug as Slug4_0_ FROM Categories this_ left outer join Categories parent1_ on this_.ParentID = parent1_.CategoryId WHERE this_.ParentID is null Which doesn't seems much less efficient that what I had before. Is there a better way of querying these self referencing joins as it's very tempting to drop the ParentID back onto my domain model for this reason. Thanks, Ben

    Read the article

  • iis7 compress dynamic content from custom handler

    - by Malloc
    I am having trouble getting dynamic content coming from a custom handler to be compressed by IIS 7. Our handler spits out json data (Content-Type: application/json; charset=utf-8) and responds to url that looks like: domain.com/example.mal/OperationName?Param1=Val1&Param2=Val2 In IIS 6, all we had to do was put the edit the MetaBase.xml and in the IIsCompressionScheme element make sure that the HcScriptFileExtensions attribute had the custom extension 'mal' included in it. Static and Dynamic compression is turned out at the server and website level. I can confirm that normal .aspx pages are compressed correctly. The only content I cannot have compressed is the content coming from the custom handler. I have tried the following configs with no success: <handlers> <add name="MyJsonService" verb="GET,POST" path="*.mal" type="Library.Web.HttpHandlers.MyJsonServiceHandlerFactory, Library.Web" /> </handlers> <httpCompression> <dynamicTypes> <add mimeType="application/json" enabled="true" /> </dynamicTypes> </httpCompression> _ <httpCompression> <dynamicTypes> <add mimeType="application/*" enabled="true" /> </dynamicTypes> </httpCompression> _ <staticContent> <mimeMap fileExtension=".mal" mimeType="application/json" /> </staticContent> <httpCompression> <dynamicTypes> <add mimeType="application/*" enabled="true" /> </dynamicTypes> </httpCompression> Thanks in advance for the help.

    Read the article

  • Android crashes when calling ImageButton

    - by Joël
    I have a crash (Application Stopped Unexpectedly) problem with this main.xml is a "HelloWorld" type project (while testing and learning features I need for my app) : I isolated the ImageButton as an issue, but I can't isolate any of the parameters... <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <ImageButton android:id="@+id/picture" android:layout_width="240dip" android:layout_height="180dip" android:layout_gravity="center_horizontal" android:src="@drawable/icon" android:adjustViewBounds="true" android:cropToPadding="true" android:clickable="true" android:scaleType="fitCenter" /> </LinearLayout> icon.png exists in my resources... I can see the preview in the Layout tab, even though the image is not centered on the button, but I read that it was normal. The code below works fine (as a regular Button). I can also do the same as an ImageView. <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <Button android:id="@+id/picture" android:layout_width="240dip" android:layout_height="180dip" android:layout_gravity="center_horizontal" /> </LinearLayout> I use Eclipse and the AVD, and all my learning is done on 2.1 (SDK level 7). I can't test the app on an actual device yet as I don't have it yet. Thanks in advance !

    Read the article

  • Best way to pan & scan images with jquery

    - by guy
    Hello, I am trying to make an image pan & scan system. I have a slider that zooms the image (that can be dragged) and also a small map in the corner of the image (that can also be dragged). You can see a rough example here (sorry, I am not allowed to use the design, so it's not formatted): http://lighe.madetokill.com/test/test.html My problem is that although it works great in firefox and opera, it stutters in chrome, safari and IE (it doesn't currently work in IE) at any zoom level except 100% (at 100% it's butter smooth). What is the reason for webkit's poor performance? Am I implementing this wrong? I am basically changing the margin-left and margin-top properties of the image. I know this is fast enough since at 100% it's perfectly smooth. Would I be better off using canvas? I am trying to avoid flash (or any other plugins) if possible. Also please note this is work in progress, there are other bugs except this, so do not bother with those :) Thanks in advance!

    Read the article

  • geocoder.getFromLocationName returns only null

    - by test
    Hello, I am going out of my mind for the last 2 days with an IllegalArgumentException error i receive in android code when trying to get a coordinates out of an address, or even reverse, get address out of longitude and latitude. this is the code, but i cannot see an error. is a standard code snippet that is easily found on a google search. public GeoPoint determineLatLngFromAddress(Context appContext, String strAddress) { Geocoder geocoder = new Geocoder(appContext, Locale.getDefault()); GeoPoint g = null; try { System.out.println("str addres: " + strAddress); List<Address> addresses = geocoder.getFromLocationName(strAddress, 5); if (addresses.size() > 0) { g = new GeoPoint((int) (addresses.get(0).getLatitude() * 1E6), (int) (addresses.get(0).getLongitude() * 1E6)); } } catch (Exception e) { throw new IllegalArgumentException("locationName == null"); } return g; } These are the permissions from manifest.xml file: <uses-permission android:name="android.permission.INTERNET" /> <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /> <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" /> <uses-permission android:name="android.permission.ACCESS_MOCK_LOCATION" /> I do have the Google Api key declared too: <uses-library android:name="com.google.android.maps" /> From the code snippet above, geo coder is not null, neither is the address or appContext, and i stumble here: geocoder.getFromLocationName(strAddress, 5); I did a lot of google searching and found nothing that worked, and the most important info i found is this: ""The Geocoder class requires a backend service that is not included in the core android framework." Sooo, i am confuzed now. What do I have to call, import, add, use in code.... to make this work? I am using Google Api2.2, Api level 8. If somebody has found a solution for this, or a pointer for documentation, something that i didn't discover, please let us know. Thank you for your time.

    Read the article
