Search Results

Search found 22161 results on 887 pages for 'idl programming language'.

  • Logging into Local Statusnet instance on Apache causes browser to download a file

    - by DilbertDave
    I've installed StatusNet 0.9.1 on a Windows Server via the WAMP stack and on the whole it seems to be fine. However, when logging in using IE7 or Chrome the browsers invoke a file download, i.e. the File Download dialog is displayed. In IE7 the file is called notice and has the content below (some parts starred out): <?xml version="1.0" encoding="UTF-8"?> <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"> <ShortName>Mumble Notice Search</ShortName> <Contact>david.carson@*****.com</Contact> <Url type="text/html" method="get" template="http://voice.*****.com/mumble/search/notice?q={searchTerms}"></Url> <Image height="16" width="16" type="image/vnd.microsoft.icon">http://voice.*****.com/mumble/favicon.ico</Image> <Image height="50" width="50" type="image/png">http://voice.******.com/mumble/theme/cloudy/logo.png</Image> <AdultContent>false</AdultContent> <Language>en_GB</Language> <OutputEncoding>UTF-8</OutputEncoding> <InputEncoding>UTF-8</InputEncoding> </OpenSearchDescription> In Chrome (Linux and Windows!) the file is called people and contains similar XML. This is not an issue when logging in with Firefox. This is obviously a configuration issue, but I'm not having much luck tracking it down. I tested the previous version of StatusNet on an Ubuntu Server VM on our network and it worked fine for months. Thanks in advance.

    Read the article

  • How to get German QWERTY on Windows?

    - by Arturas M
    Well, I'm used to the world-standard keyboard, which is QWERTY and not QWERTZ... But on Windows I can't find a choice for German input that would be QWERTY, not QWERTZ. On Linux there was a German QWERTY input, so that was fine. I believe it should exist on Windows too? Because I'm sick of this QWERTZ layout, always having to correct myself and search for Z or Y... Yeah, I know, if the Germans made a mistake, it wasn't losing WW2... Edit: Maybe I wasn't clear enough; it's not about switching between different language inputs. It's about having all German keys in the German input, just with Z and Y in the places that the rest of the world's keyboards, including the US keyboard, use... Solution: Unfortunately, by searching everywhere and waiting for answers I couldn't find what I needed, so I came up with this solution: I used the Microsoft Keyboard Layout Creator and created my own custom layout by modifying the default German layout and switching the places of Z and Y. In case somebody needs it and doesn't want to go the hard way, I decided to upload it, so here are the links to download and install the keyboard layout: http://www20.zippyshare.com/v/95071447/file.html http://rapidshare.com/files/3822150342/German%20QWERTY%202%20by%20Arturas%20M.zip It will appear as a choice under German input between the languages in your language bar. If you just want to use this German layout, remove all other German layouts and install this one. Good luck and have fun using it!

    Read the article

  • What is a good WordPress theme for long Objective-C code samples [closed]

    - by willc2
    As some of you iPhone developers know, Objective-C can be a verbose language. Long, descriptive variable and method names are the norm. I'm not complaining, it makes code easier to read and code completion makes it easy to type. But damn! Check out this method name for getting a cell in a table view: -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath; I have a WordPress blog where I publish my code samples as I'm learning the language. One thing I hate on other blogs is how the code won't fit in a column without that scroll bar or without wrapping around. It really made it hard for me to read and comprehend method names back when I was a super-noob (six months ago). Right now I use the clean-looking Fazyvo 1.0 theme by noonnoo. I love the look of it but the columns are just too narrow and it doesn't have support for wider ones. I could hand-modify it but then I'd have to maintain/redo those changes every time I updated it. Instead, I'm looking for a nice theme that has width control built-in and looks good at larger font sizes. Can anyone help? Note: I use WP-CodeBox for code syntax highlighting.

    Read the article

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    OK, this might sound stupid - but is there any downside to just enabling jumbo frames in practice? From what I understand: any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the maximum frame size is negotiated in the setup phase. UDP is a theoretical problem, as a server may just send a large UDP packet that gets dropped along the way. Practically though, as UDP is packet based, I do not really think any software would send a UDP packet larger than 1500 bytes net without application-level configuration changes - at least this is how I do my programming, as it is quite hard to determine a decent MTU size for that without testing it yourself, so in programming you fall back to packets of at most 1500 bytes. The network in question is a standard small-business network - we have just upgraded from an unmanaged 24-port switch to a 52-port switch with 4 10G ports (Netgear - quite cheap) and will move a file server to 10G, also for iSCSI serving. All my equipment at the Ethernet level can handle at least 9000 bytes, and due to local firewalls I really want to use larger packets (less firewall processing), but the network is also NATed to the internet. On top of that, different machines quite often move around (download) large files (multi-gigabyte range) for processing. The question is: can I expect problems when I just enable jumbo frames? Again, this is not total ignorance - I just don't see programs sending more than 1500-byte UDP packets (if that is a practical problem, please tell me), and for TCP the MTU is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but this has its own share of problems, as basically most workstations must then be on both VLANs.
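    To illustrate the capping approach described above, here is a minimal sketch (C, POSIX sockets; the 10.0.0.5 address and port 9000 are made-up placeholders, not anything from the question) of an application that splits its data into UDP datagrams of at most 1472 bytes, so each one fits a conventional 1500-byte Ethernet MTU regardless of whether jumbo frames are enabled along the path:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    enum { MAX_PAYLOAD = 1472 };  /* 1500-byte MTU - 20 (IP header) - 8 (UDP header) */

    /* Send a buffer as a series of UDP datagrams no larger than MAX_PAYLOAD. */
    static void send_capped(int sock, const struct sockaddr_in *dst,
                            const char *data, size_t len)
    {
        while (len > 0) {
            size_t chunk = len > MAX_PAYLOAD ? MAX_PAYLOAD : len;
            if (sendto(sock, data, chunk, 0,
                       (const struct sockaddr *)dst, sizeof *dst) < 0)
                perror("sendto");
            data += chunk;
            len  -= chunk;
        }
    }

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst;

        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(9000);                   /* placeholder port */
        inet_pton(AF_INET, "10.0.0.5", &dst.sin_addr);  /* placeholder receiver */

        char buf[8000] = "example payload";             /* pretend application data */
        send_capped(sock, &dst, buf, sizeof buf);       /* goes out as 6 datagrams */
        close(sock);
        return 0;
    }

    Nothing in that sketch changes if jumbo frames are enabled on the LAN; traffic that has to cross the NAT to the internet simply keeps fitting into standard-sized frames.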

    Read the article

  • Are there any custom keyboards available for laptops?

    - by Ahe
    My work laptop is an HP EliteBook 8560w which I mainly use for programming. Usually I have an external keyboard, but recently I have been working out of the office and have therefore been using the laptop's own keyboard. One thing has really started to bug me: the keyboard layout of this 15.6" laptop contains a numpad, but the arrow keys are really bad (too small). Also, when programming I really miss standard inverted-T arrow keys and the Home/End/PgUp/PgDn buttons. Then it occurred to me: I would rather give up the numpad than standard arrow keys. (The keyboard real estate in a 15.6" laptop would allow this, and I really have to agree with Jeff Atwood here: http://www.codinghorror.com/blog/2009/02/have-keyboard-will-program.html) Which brings me to my question: do any laptop manufacturers make custom keyboards for their laptops, or is there some third-party manufacturer who could supply this kind of special keyboard? Quick googling doesn't give any meaningful results. It looks like I'll have to carry an external keyboard with me if no one here can give me any pointers.

    Read the article

  • How to make emacs accept UTF-8 from the keyboard

    - by Brent.Longborough
    My friends have persuaded me to "try again" (about the 5th time in about 12 years) with Emacs. I'm currently suffering a little, and need help with Emacs + UTF-8. I'm running the 23.3.1 Emacs GUI on Windows 7 with my own custom keyboard layout (built with MS Keyboard Layout Creator). The layout has the full ISO-8859-1 (Latin-1) character set, plus some additional characters from ISO-8859-9 (Latin-5: ğ, ı, ş etc. for Turkish) and ŵ for Welsh (I don't know where that one lives). In my .emacs, I have (blindly) added these lines: ;; keyboard / input method settings (setq locale-coding-system 'utf-8) (set-terminal-coding-system 'utf-8) (set-keyboard-coding-system 'utf-8) (set-language-environment 'UTF-8) ; prefer utf-8 for language settings Now, when I enter characters from ISO Latin-1 from the keyboard, they are accepted without problems, but characters from outside Latin-1 are "translated" to an approximate character in Latin-1. Thus, for example, the Latin-5 "ğ" gets converted to a plain "g". Cutting and pasting, however, works fine. Can anyone tell me what I'm doing wrong? I would like to make everything I do with Emacs UTF-8 with BOM.

    Read the article

  • Monospace font which supports at least both Korean hangul and the Georgian alphabet?

    - by hippietrail
    Being both a language enthusiast and a programmer, I find myself often doing programming or text processing involving foreign language alphabets and scripts. One annoyance however is that CJK fonts (those which support Chinese, Japanese, and/or Korean) usually only contain glyphs for Latin, Greek, and Cyrillic at best. Often the Asian glyphs will be beautiful but the other glyphs can be quite ugly. Just as often in text editors you can only choose a single font, not one for CJKV and one for other, which will be each used for rendering the appropriate characters. Korean is one of the languages I'm most interested in currently. I only need hangul / hangeul for monospaced editing, hanja isn't common enough to be a problem. Another of the languages I'm currently involved in is Georgian, which has its own alphabet which is a little exotic but has pretty good support in common fonts on Windows and *nix. But I am as yet unable to find a font with good Korean glyphs and also Georgian glyphs. My editor of choice is gVim, so an answer telling me how to set it to use two fonts together would be just as good. Currently I'm using it mostly under Windows 7 so a vim-specific solution would be needed rather than a *nix-specific solution.

    Read the article

  • My Windows 7 has suddenly stopped displaying Unicode symbols

    - by Felix Dombek
    For some strange reason, my computer suddenly doesn't show certain Unicode characters anymore! I have no idea what happened. Affected applications include Windows Explorer (should be Japanese characters), Google Chrome (should be a heart), and Winamp (should be stars). Russian, German, etc. characters are displayed normally. Chrome also displays Japanese script on websites, but not in the GUI. How can I fix it? Update: I have tried to use System Restore to fix it. I needed to go back in time quite a while because the most recent restore points didn't solve it, so I used one from the middle of November. After that restore, Unicode symbols were displayed again. Then I updated my system with Windows Update again, because those updates were removed during the restore. After that, the error occurred again! I then did a restore to a point before my new updates, but the error persists, and the old restore point (which I used before) is gone and there are currently no other snapshots of the system. Any suggestions on what to do now? Update 2: I found a workaround: Control Panel → Region and Language → Administration → change the language for Unicode-incompatible programs to Japanese (Japan). All the mentioned programs display their symbols correctly again. However, I don't consider this a fix, because these programs are not actually Unicode-incompatible, and it also leads to some (non-serious) artifacts in some programs. I still welcome an answer that tells me what went wrong here and how to fix the issue.

    Read the article

  • GtkView, GtkSpell and the myspell backend

    - by justadreamer
    Hi, I have the IM client Pidgin. It is launched under the locale ru_RU.UTF-8. When I type messages in the GtkView widget it highlights misspelled words. However, it uses GtkSpell, which uses Enchant, which uses the myspell backend (I provided it with a symlink to the OpenOffice dictionaries folder: /usr/share/enchant/myspell -> /usr/share/myspell/dict). Now the problem is that whichever language I write in, it still uses the ru_RU language to select the dictionary, so all English text is underlined as misspelled. When I switch the locale to en_US and then launch Pidgin under it, all Russian text becomes misspelled. I don't like this behaviour, as I use Pidgin to chat in both languages. Is there a way to set up Enchant/myspell so that it searches both dictionaries, en_US and ru_RU, independent of the locale I launched Pidgin (GtkView) in? I have the Debian lenny/5.0.1 distro and Enchant version 1.5.0. /usr/share/enchant/enchant.ordering looks like this: *:myspell meaning that the backend for all languages is myspell. The myspell dictionary.lst file looks like this: DICT en GB en_GB DICT en US en_US THES en US th_en_US_v2 THES en GB th_en_US_v2 THES ru RU th_ru_RU_v2 DICT ru RU ru_RU DICT uk UA uk DICT ru RU en_US
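    For what it's worth, here is a minimal sketch of the idea at the Enchant level, assuming Enchant's C API (enchant_broker_request_dict() and enchant_dict_check()) and that the en_US and ru_RU dictionaries are installed: a word is accepted if either dictionary knows it, independent of the process locale.

    #include <enchant.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        EnchantBroker *broker = enchant_broker_init();
        EnchantDict *en = enchant_broker_request_dict(broker, "en_US");
        EnchantDict *ru = enchant_broker_request_dict(broker, "ru_RU");

        const char *word = "hello";
        size_t len = strlen(word);

        /* enchant_dict_check() returns 0 when the word is found in the dictionary. */
        int ok = (en && enchant_dict_check(en, word, len) == 0) ||
                 (ru && enchant_dict_check(ru, word, len) == 0);
        printf("'%s' is %s\n", word, ok ? "spelled correctly" : "misspelled in both dictionaries");

        if (en) enchant_broker_free_dict(broker, en);
        if (ru) enchant_broker_free_dict(broker, ru);
        enchant_broker_free(broker);
        return 0;
    }

    Whether GtkSpell itself exposes such an option is a separate matter; the sketch only illustrates that the backend is not inherently tied to a single locale.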

    Read the article

  • How to slow down audio files?

    - by verve
    I need a program (with an easy learning curve) that lets me slow down MP3 (at the very least this format) music and audiobook files. The software needs to be able to slow down the audio to the chosen speed without altering the pitch and accuracy of the words being pronounced. Perhaps something like the "SlowSound" feature of the language software Byki Deluxe? I'm learning a foreign language (German) and I find the speed at which the books are read too fast. I need to hear the pronunciation of each word much more clearly to learn how to pronounce the words myself. Is there such a product out there? Now, I know you can slow down stuff in VLC, but it sounds really artificial. I need something that slows down audio files without altering the accuracy of the words being pronounced. It doesn't have to be freeware; ease of use and quality are more important to me. Win 7 64-bit. IE 8. Edit: Is there any paid software like Audacity? Only the beta works in Win 7. Also, I'd prefer to be able to slow down a file live and not have to create a new file to use the feature.

    Read the article

  • Lenovo System Update Breaks Windows Live

    - by wolfvilleian
    Hey everyone, I've been racking my brain (and fingers, from typing) trying to solve this issue, to no avail. I have a Lenovo computer and I installed their system update tool to install all my missing drivers. However, after this tool is installed, Windows Live 2011 breaks: it will no longer sign in, giving error number 8e5e0247, and none of the solutions online have helped. It appears that a language setting somewhere gets set to en_ms, and I'm en_ca. My computer is running Windows 7 x64. When I try to sign on to Messenger it gives an error that (with some research) means my locale or language is not supported; I've searched my computer for any reference to en_ms but found none. A few other things also seem to have broken: when a UAC box comes up it is no longer able to identify the publisher of anything, and the indexing service does not work (I'm not sure if the indexing issue is related, but the UAC issue happened right after installation). I had this issue before but I don't remember how I fixed it; I believe it had something to do with environment variables. When it goes to sign in, it gets as far as "Loading contacts", then stops and goes back to the sign-in screen. Has anyone seen this before? Thanks

    Read the article

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind (correct me if I'm wrong), a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage collection optimization. Also, there would be more time before TRIM became necessary. I see that Wikipedia remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so it seems throughput also increases significantly. Also, many SSDs contain internal caches of some sort, and presumably those caches are larger for correspondingly larger SSDs. But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high-quality chips, controllers, etc.? Are there any resources (web pages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size? I realize this does not really sound like a programming question, but it is relevant to what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.
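    As a rough illustration of the parallelism point quoted from Wikipedia, here is a back-of-envelope sketch in C; the 40 MB/s per-channel rate and the 300 MB/s interface cap are made-up placeholder figures, not measurements, and real controllers will deviate from this idealised model:

    #include <stdio.h>

    int main(void)
    {
        const double per_channel_mb_s = 40.0;   /* assumed per-NAND-channel rate */
        const double interface_cap    = 300.0;  /* assumed host interface limit (e.g. SATA II) */

        /* Aggregate sequential throughput if each extra channel adds a fixed rate,
           until the host interface (or controller) becomes the bottleneck. */
        for (int channels = 1; channels <= 16; channels *= 2) {
            double raw       = channels * per_channel_mb_s;
            double effective = raw < interface_cap ? raw : interface_cap;
            printf("%2d channel(s): ~%5.0f MB/s raw, ~%5.0f MB/s after interface cap\n",
                   channels, raw, effective);
        }
        return 0;
    }

    Under this toy model, throughput grows roughly linearly with channel count until another component saturates, which matches the qualitative claim but is no substitute for the measured data the question asks for.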

    Read the article

  • Codecs, Premiere Pro & Quicktime: Import or Play Error

    - by Nchpmn
    Original Question I've been using a FS-H200 (not the Pro variant) recorder with a JVC ProHD camera. I have been shooting with the DTE FORMAT to Quicktime (.mov). I copied the files to an external hard drive and am now trying to edit. The files will play back in VLC, as they would be expected to. However they will not import into Adobe Premiere CS5.5, instead giving an error: Unsupported format or damaged file. Quicktime gives the following error when attempting to play the files: Error -2002: a bad public movie atom was found in the movie (Filename) To try and fix this, I have installed the following codec packs: K-Lite Codec Pack 64-bit Full (version 5.9, latest) K-Lite Codec Pack 32-bit Full (version 8.4, latest) MainConcept Codec Suite (Broadcast) v5.1 for Adobe CS5 Reinstalled Quicktime with new download from Apple The same errors and problems still exist. From this I can assume that there is an issue with Quicktime and that is what Premiere is using as an encoder/decoder for the codec. Is there any way to fix this? From looking at the "Codec Information" from VLC: Stream 0 Type: Video Codec: MPEG-1/2 (mpgv) Language: English Resolution: 1280 x 720 Frame Rate: 25 Stream 1 Type: Audio Codec: PCM S16 BE (twos) Language: English Channels: Stereo Sample Rate: 48000 Hz Bits per sample: 16 Other computer specs: Windows 7 Professional 64-bit (SP1) Gigabyte Z68X-UD3-B3 Intel i7-2600K 16GB DDR3 2TB WD 7200RPM SATA 6Gb/s LaCie d2 Quadra 2TB v3 7200RPM (External HDD) NVIDIA GeForce GTX 560 Ti Golden Sample Updates 2012-03-11 @ 2050 AEDT MPEG Steamclip doesn't recognise, play or convert the footage. File open error: unrecognised file type. [Open Anyway] File open error: can't find video or audio tracks. 2012-03-24 @ 1920 AEDT Had to transcode the footage. :(

    Read the article

  • Advice on resizing 1280*720 for web audiences.

    - by jamiethompson90
    Forgive my spelling, I'm posting this from my mobile. I've recently decided to record videos to help teach a visual language. My camera likes to boast that it can record in 1280; it's a cheap camera, about £75, so the quality isn't amazing, but it's okay. Anyway, it has some other settings for lower resolutions, but I figure I might as well record at a larger size in case the need arises for a bigger source file in the future. I've been looking at JW Player to play the converted files (MP4 to FLV, I think). What do you think would be a good size to convert to? I want it to look nice and clear, remembering that it is a visual language, so lip patterns, facial expressions, body movement, fingers etc. are all important; sound is not that important, but I would like the option to toggle captions. Thanks for any help, any advice appreciated - first time I have done a video project! P.S. If anyone's interested, it's BSL. Jamie

    Read the article

  • BIND: forward 1st level zone

    - by raven
    First of all, sorry for the language; English is not my primary language. I have a star-like DNS structure with many filials (more than 2): ^ | v filialNS_1.filial_1.city.local <---- ns.main.city.local <---- filialNS_2.filial_2.city.local ^ | v ns.main.city.local is a slave of all filial zones filialNS_1 is master of filial_1.city.local filialNS_2 is master of filial_2.city.local filialNS_N is master of filial_N.city.local I want to: serve DNS queries for xxx.filial_N.city.local with filialNS_N.filial_N.city.local; forward all queries for xxx.xxx.xxx.local from filialNS_N to ns.main.city.local; forward other queries to our provider's DNS at the filial (or Google Public DNS or anything else). FILIAL CONFIG named.conf zone "filial_1.city.local" { type master; file "/etc/namedb/dynamic/filial_1.city.local"; allow-update { key DHCP_UPDATER; }; allow-transfer { <ns.main.city.local IP address> }; }; zone "2.76.10.in-addr.arpa" { type master; file "/etc/namedb/dynamic/2.76.10.in-addr.arpa"; allow-update { key DHCP_UPDATER; }; allow-transfer { <ns.main.city.local IP address> }; }; zone "local." { type forward; forward only; forwarders { <ns.main.city.local IP address> }; }; nslookup server.filial_1.city.local - works fine nslookup server.main.city.local Server: 127.0.0.1 Address: 127.0.0.1#53 ** server can't find server.main.city.local: NXDOMAIN Where am I going wrong?

    Read the article

  • Why is only one wget command working in my crontab?

    - by afEkenholm
    I wish to fetch content from a PHP script on my server two times a day, altering a query variable lang to set which language we want, and save this content in two language-specific files. This is my crontab: */15 * * * * ~root/apache.sh > /var/log/checkapache.log 10 0 * * * wget -O /path/to/file-sv.sql "http://mydomain.com/path/?lang=sv" 11 0 * * * wget -O /path/to/file-en.sql "http://mydomain.com/path/?lang=en" The problem is that only the first wget command line is being executed (or to be precise: the only file that is being written is /path/to/file-sv.sql). If I switch the second and the third rows, /path/to/file-en.sql gets written instead. The first line always runs as expected, no matter where it is. I then tried using lynx -dump "http://mydomain.com/path/?lang=xx" > /path/to/file-xx.sql to no avail; still only the first lynx line executed successfully. Even mixing wget and lynx did not change this! I'm getting kind of desperate! Am I missing something? There are thousands of articles on crontab (combined with) wget or lynx, but they all seem to cover basic setups and syntax. Does anyone have a clue what I am doing wrong? Thanks, Alexander

    Read the article

  • Kickstarting an Ubuntu Server 10.04 installation (DHCP fails)

    - by William
    I'm trying to automate the network installation of Ubuntu 10.04 LTS with an anaconda kickstart and everything seems to running except for the initial DHCP autoconfiguration. The installer attempts to configure the install via DHCP but fails on its first attempt. This brings me to a prompt where I can retry DHCP and it seems to always work on the second attempt. My issue is that this is not really automated if I have to hit retry for DHCP. Is there something I can add to the kickstart file so that it will automatically retry or better yet not fail the first time? Thanks. Kickstart: # System language lang en_US # Language modules to install langsupport en_US # System keyboard keyboard us # System mouse mouse # System timezone timezone America/New_York # Root password rootpw --iscrypted $1$unrsWyF2$B0W.k2h1roBSSFmUDsW0r/ # Initial user user --disabled # Reboot after installation reboot # Use text mode install text # Install OS instead of upgrade install # Use Web installation url --url=http://10.16.0.1/cobbler/ks_mirror/ubuntu-10.04-x86_64/ # System bootloader configuration bootloader --location=mbr # Clear the Master Boot Record zerombr yes # Partition clearing information clearpart --all --initlabel # Disk partitioning information part swap --size 512 part / --fstype ext3 --size 1 --grow # System authorization infomation auth --useshadow --enablemd5 %include /tmp/pre_install_ubuntu_network_config # Always install the server kernel. preseed --owner d-i base-installer/kernel/override-image string linux-server # Install the Ubuntu Server seed. preseed --owner tasksel tasksel/force-tasks string server # Firewall configuration firewall --disabled # Do not configure the X Window System skipx %pre wget "http://10.16.0.1/cblr/svc/op/trig/mode/pre/system/Test-D" -O /dev/null # Network information # Start pre_install_network_config generated code # Start of code to match cobbler system interfaces to physical interfaces by their mac addresses # Start eth0 # Configuring eth0 (00:1A:64:36:B1:C8) if ip -o link show | grep -i 00:1A:64:36:B1:C8 then IFNAME=$(ip -o link show | grep -i 00:1A:64:36:B1:C8 | cut -d" " -f2 | tr -d :) echo "network --device=$IFNAME --bootproto=dhcp" >> /tmp/pre_install_ubuntu_network_config fi # End pre_install_network_config generated code %packages openssh-server

    Read the article

  • SQL Server uninstallation issue

    - by angel
    I'm unable to remove SQL Server 2008 sp1 completely from my system. I'm using windows 7 ultimate. Everytime I try uninstalling it i get the following error. How can I remove it? here is the log: Overall summary: Final result: Failed: see details below Exit code (Decimal): -2068643839 Exit facility code: 1203 Exit error code: 1 Exit message: Failed: see details below Start time: 2013-06-24 21:10:38 End time: 2013-06-24 21:21:17 Requested action: Uninstall Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\sql_rs_Cpu64_1.log Exception help link: http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22 Machine Properties: Machine name: ABHI-PC Machine processor count: 4 OS version: Windows Vista OS service pack: Service Pack 1 OS region: United States OS language: English (United States) OS architecture: x64 Process architecture: 64 Bit OS clustered: No Product features discovered: Product Instance Instance ID Feature Language Edition Version Clustered Sql Server 2008 MSSQLSERVER MSRS10.MSSQLSERVER Reporting Services 1033 Enterprise Edition 10.0.1600.22 No Sql Server 2008 Management Tools - Basic 10.0.1600.22 No Package properties: Description: SQL Server Database Services 2008 SQLProductFamilyCode: {628F8F38-600E-493D-9946-F4178F20A8A9} ProductName: SQL2008 Type: RTM Version: 10 SPLevel: 0 Installation edition: ENTERPRISE User Input Settings: ACTION: Uninstall CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini FEATURES: RS,SSMS,SNAC_SDK,CE_RUNTIME,CE_TOOLS,SNAC HELP: False INDICATEPROGRESS: False INSTANCEID: INSTANCENAME: MSSQLSERVER MEDIASOURCE: QUIET: False QUIETSIMPLE: False X86: False Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini Detailed results: Feature: SQL Client Connectivity Status: Skipped MSI status: Passed Configuration status: Passed Feature: SQL Client Connectivity SDK Status: Skipped MSI status: Passed Configuration status: Passed Feature: Reporting Services Status: Failed: see logs for details MSI status: Passed Configuration status: Failed: see details below Configuration error code: 0xFFD65603 Configuration error description: Input string was not in a correct format. Configuration log: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\Detail.txt Feature: SQL Compact Edition Tools Status: Passed MSI status: Passed Configuration status: Passed Feature: SQL Compact Edition Runtime Status: Skipped MSI status: Passed Configuration status: Passed Feature: Management Tools - Basic Status: Failed: see logs for details MSI status: Passed Configuration status: Passed Rules with failures: Global rules: There are no scenario-specific rules. Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\SystemConfigurationCheck_Report.htm

    Read the article

  • Strange Behaviour with Unicode Characters in Windows

    - by open_sourse
    OK, I do not know if this is a programming question, but it certainly is a technical one, so I am asking it here. I was working on some internationalization stuff in my PHP code, and in order to ensure that my generated HTML displays Unicode correctly based on the encoding, I decided to add some Chinese text to my PHP page, which then echoes it into the browser to complete my test case. So I went to Google, typed "Chinese", and copied the first Chinese text that the search returned (which was ??/??). I then pasted it into Notepad++, which is my editor, and to my surprise it showed up as boxes similar to [][]/[][]. So I thought the encoding in Notepad++ was messed up and I changed the encoding to UTF-8 and UCS; neither worked. I tried again in a freshly encoded file, and I still got the boxes. The same content, when I paste it into Google and Stack Overflow (like I did in this posting), shows up as correct Chinese! I even opened up the Windows Clipboard Viewer, and the content is represented in the clipboard as boxes! I tried pasting it into the Windows Explorer address bar and using it to rename a file, but I still get boxes. But it shows up correctly when pasted into my Chrome browser's address bar! Is this a Windows issue? Since I am able to paste it correctly into SO, the data in memory should be encoded correctly, right? But if that is the case, why does it show up as boxes in the Clipboard Viewer? I am confused here... By the way, I am using Windows XP with SP3. (I am asking this question here, even if it is not programmatic, because it is preventing me from running my programming test cases.)
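    One way to test the "the data in memory should be encoded correctly" reasoning above is to inspect the clipboard directly. Here is a minimal sketch using the Win32 clipboard API and the CF_UNICODETEXT format (the program name and output wording are illustrative):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        if (!OpenClipboard(NULL)) {
            fprintf(stderr, "could not open the clipboard\n");
            return 1;
        }
        if (IsClipboardFormatAvailable(CF_UNICODETEXT)) {
            HANDLE h = GetClipboardData(CF_UNICODETEXT);
            const wchar_t *text = h ? (const wchar_t *)GlobalLock(h) : NULL;
            if (text) {
                /* Print each UTF-16 code unit; CJK text shows up as U+4E00..U+9FFF etc. */
                for (const wchar_t *p = text; *p; ++p)
                    printf("U+%04X ", (unsigned)*p);
                printf("\n");
                GlobalUnlock(h);
            }
        } else {
            printf("no CF_UNICODETEXT data on the clipboard\n");
        }
        CloseClipboard();
        return 0;
    }

    If the code points come out as genuine CJK values, the clipboard contents are intact and the boxes are a font-linking/rendering problem in the applications displaying them.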

    Read the article

  • Apache downloads PHP files instead of running them

    - by Devils Child
    I have just recently upgraded PHP 5.3 to 5.4 on my Debian Squeeze server. Now, instead of executing PHP files, Apache just downloads them, which is really bad. When I try to follow these steps, I get "broken packages" upon installing the libapache2-mod-php5 package. Also, the answer tells me to add something to my httpd.conf, but it's empty. Question: how can I make Apache execute PHP files again, instead of just passing them through as downloads? dpkg -l | grep php returns this: rc libapache2-mod-php5 5.3.3-7+squeeze15 server-side, HTML-embedded scripting language (Apache 2 module) rc php5-cli 5.3.3-7+squeeze15 command-line interpreter for the php5 scripting language ii php5-common 5.4.15-1~dotdeb.2 Common files for packages built from the php5 source rc php5-gd 5.3.3-7+squeeze15 GD module for php5 rc php5-mcrypt 5.3.3-7+squeeze15 MCrypt module for php5 rc php5-mysql 5.3.3-7+squeeze15 MySQL module for php5 rc php5-suhosin 0.9.32.1-1 advanced protection module for php5 rc phpmyadmin 4:3.3.7-7 MySQL web administration tool And apt-get install libapache2-mod-php5 produces this error: The following packages have unmet dependencies: libapache2-mod-php5 : Depends: libdb5.1 but it is not installable Depends: libssl1.0.0 (>= 1.0.0) but it is not installable Depends: libxml2 (>= 2.8.0) but 2.7.8.dfsg-2+squeeze7 is to be installed Recommends: php5-cli but it is not going to be installed E: Broken packages

    Read the article

  • file system damage

    - by jffrs
    I am trying to recover the backup superblock on /dev/sda2, which contains Ubuntu 12.04 LTS on an ext4 partition, using an Ubuntu 10.04 live CD. The output is below: root@ubuntu:/home/ubuntu# fsck.ext4 -b 163840 -B 4096 /dev/sda2 e2fsck 1.41.11 (14-Mar-2010) /dev/sda2 was not cleanly unmounted, check forced. Resize inode not valid. Recreate? yes Pass 1: Checking inodes, blocks, and sizes Programming error? block #7963637 claimed for no reason in process_bad_block. Programming error? block #11240437 claimed for no reason in process_bad_block. Root inode is not a directory. Clear? yes Inode 712 is in extent format, but superblock is missing EXTENTS feature Fix? yes Inode 98519 has compression flag set on filesystem without compression support. Clear? yes Inode 98519 has INDEX_FL flag set but is not a directory. Clear HTree index? What's the correct procedure?

    Read the article

  • segmentation fault using Base64 encoding

    - by Natasha Thapa
    i took the code from the links below to encrypt and decrypt a text but i get segmentation fault when trying to run this any ideas?? http://etutorials.org/Programming/secure+programming/Chapter+4.+Symmetric+Cryptography+Fundamentals/4.5+Performing+Base64+Encoding/ http://etutorials.org/Programming/secure+programming/Chapter+4.+Symmetric+Cryptography+Fundamentals/4.6+Performing+Base64+Decoding/ #include <stdlib.h> #include <string.h> #include <stdio.h> static char b64revtb[256] = { -3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*0-15*/ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*16-31*/ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 62, -1, -1, -1, 63, /*32-47*/ 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, -1, -1, -1, -2, -1, -1, /*48-63*/ -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, /*64-79*/ 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, -1, -1, -1, -1, -1, /*80-95*/ -1, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, /*96-111*/ 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, -1, -1, -1, -1, -1, /*112-127*/ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*128-143*/ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*144-159*/ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*160-175*/ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*176-191*/ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*192-207*/ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*208-223*/ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /*224-239*/ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 /*240-255*/ }; unsigned char *spc_base64_encode( unsigned char *input , size_t len , int wrap ) ; unsigned char *spc_base64_decode(unsigned char *buf, size_t *len, int strict, int *err); static unsigned int raw_base64_decode(unsigned char *in, unsigned char *out, int strict, int *err); unsigned char *tmbuf = NULL; static char tmpbuffer[] ={0}; int main(void) { memset( tmpbuffer, NULL, sizeof( tmpbuffer ) ); sprintf( tmpbuffer, "%s:%s" , "username", "password" ); tmbuf = spc_base64_encode( (unsigned char *)tmpbuffer , strlen( tmpbuffer ), 0 ); printf(" The text %s has been encrytped to %s \n", tmpbuffer, tmbuf ); unsigned char *decrypt = NULL; int strict; int *err; decrypt = spc_base64_decode( tmbuf , strlen( tmbuf ), 0, err ); printf(" The text %s has been decrytped to %s \n", tmbuf , decrypt); } static char b64table[64] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "abcdefghijklmnopqrstuvwxyz" "0123456789+/"; /* Accepts a binary buffer with an associated size. * Returns a base64 encoded, NULL-terminated string. */ unsigned char *spc_base64_encode(unsigned char *input, size_t len, int wrap) { unsigned char *output, *p; size_t i = 0, mod = len % 3, toalloc; toalloc = (len / 3) * 4 + (3 - mod) % 3 + 1; if (wrap) { toalloc += len / 57; if (len % 57) toalloc++; } p = output = (unsigned char *)malloc(((len / 3) + (mod ? 
1 : 0)) * 4 + 1); if (!p) return 0; while (i < len - mod) { *p++ = b64table[input[i++] >> 2]; *p++ = b64table[((input[i - 1] << 4) | (input[i] >> 4)) & 0x3f]; *p++ = b64table[((input[i] << 2) | (input[i + 1] >> 6)) & 0x3f]; *p++ = b64table[input[i + 1] & 0x3f]; i += 2; if (wrap && !(i % 57)) *p++ = '\n'; } if (!mod) { if (wrap && i % 57) *p++ = '\n'; *p = 0; return output; } else { *p++ = b64table[input[i++] >> 2]; *p++ = b64table[((input[i - 1] << 4) | (input[i] >> 4)) & 0x3f]; if (mod = = 1) { *p++ = '='; *p++ = '='; if (wrap) *p++ = '\n'; *p = 0; return output; } else { *p++ = b64table[(input[i] << 2) & 0x3f]; *p++ = '='; if (wrap) *p++ = '\n'; *p = 0; return output; } } } static unsigned int raw_base64_decode(unsigned char *in, unsigned char *out, int strict, int *err) { unsigned int result = 0, x; unsigned char buf[3], *p = in, pad = 0; *err = 0; while (!pad) { switch ((x = b64revtb[*p++])) { case -3: /* NULL TERMINATOR */ if (((p - 1) - in) % 4) *err = 1; return result; case -2: /* PADDING CHARACTER. INVALID HERE */ if (((p - 1) - in) % 4 < 2) { *err = 1; return result; } else if (((p - 1) - in) % 4 == 2) { /* Make sure there's appropriate padding */ if (*p != '=') { *err = 1; return result; } buf[2] = 0; pad = 2; result++; break; } else { pad = 1; result += 2; break; } case -1: if (strict) { *err = 2; return result; } break; default: switch (((p - 1) - in) % 4) { case 0: buf[0] = x << 2; break; case 1: buf[0] |= (x >> 4); buf[1] = x << 4; break; case 2: buf[1] |= (x >> 2); buf[2] = x << 6; break; case 3: buf[2] |= x; result += 3; for (x = 0; x < 3 - pad; x++) *out++ = buf[x]; break; } break; } } for (x = 0; x < 3 - pad; x++) *out++ = buf[x]; return result; } /* If err is non-zero on exit, then there was an incorrect padding error. We * allocate enough space for all circumstances, but when there is padding, or * there are characters outside the character set in the string (which we are * supposed to ignore), then we end up allocating too much space. You can * realloc() to the correct length if you wish. */ unsigned char *spc_base64_decode(unsigned char *buf, size_t *len, int strict, int *err) { unsigned char *outbuf; outbuf = (unsigned char *)malloc(3 * (strlen(buf) / 4 + 1)); if (!outbuf) { *err = -3; *len = 0; return 0; } *len = raw_base64_decode(buf, outbuf, strict, err); if (*err) { free(outbuf); *len = 0; outbuf = 0; } return outbuf; }
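    For comparison, here is a hedged sketch of how the two routines quoted above are presumably meant to be driven: a caller-sized buffer for the "username:password" string, and the decode length and error flag passed by address. The variable names are illustrative, and this is only an example caller for the declared signatures, not an endorsed fix.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Prototypes for the functions defined in the code above. */
    unsigned char *spc_base64_encode(unsigned char *input, size_t len, int wrap);
    unsigned char *spc_base64_decode(unsigned char *buf, size_t *len, int strict, int *err);

    int main(void)
    {
        char credentials[64];
        snprintf(credentials, sizeof credentials, "%s:%s", "username", "password");

        unsigned char *encoded =
            spc_base64_encode((unsigned char *)credentials, strlen(credentials), 0);
        if (!encoded)
            return 1;
        printf("encoded: %s\n", encoded);

        size_t decoded_len = 0;
        int err = 0;
        unsigned char *decoded = spc_base64_decode(encoded, &decoded_len, 0, &err);
        if (decoded && !err)
            printf("decoded: %.*s\n", (int)decoded_len, decoded);

        free(encoded);
        free(decoded);
        return 0;
    }

    The decode length and error flag have to be real variables whose addresses are passed in, since spc_base64_decode() writes through both pointers.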

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
Once more, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

"STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if you're done with your requirements after making the test green, you are not required to refactor the code. I know… I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for your class at this point, what value does refactoring add?"

Derick immediately answers his own question:

"So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc., then you should refactor it."

I couldn't state it more precisely. From my personal perspective, I'd add the following: keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to achieve good overall system quality (e.g. in terms of the Code Duplication Percentage metric), you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires reading and understanding 'foreign' code.
So code quality and readability really make a HUGE difference for me – sometimes they can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

First: Test-driven development is the normal, natural way of writing software; code-first is the exception. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code – code that stood the test of time and causes low maintenance costs – that was produced code-first…).

Second: It's the production code that pays our bills in the end (though I have seen customers these days who demand an acceptance test battery as part of the final delivery; things seem to be going in the right direction…). The test code serves 'only' to make the production code work. At the end of the day it's solely the number of delivered features that counts, no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity – without sacrificing the principles of TDD (more than I'd do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly because software development standards are only beginning to evolve; the entire profession is, historically seen, very young and still at its very beginning, and there are no viable standards yet. If you think of software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, the above ratio no longer sounds extraordinary…)

Although the above might look like a lot of unnecessary work at first sight, it isn't. With the aid of the mentioned add-ins, doing all of it is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines, or the lack of a tool for the sake of 'saving' a few hundred bucks, is simply not acceptable and a very bad decision in business terms (though I have seen and heard exactly that quite a few times…). Producing high-quality products requires the use of high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess that's about 30% more time compared to developing the 'traditional' (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which are by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of developing software… On the other hand, there clearly is a trade-off here: coding speed vs. code quality and later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are scenarios where it's not - e.g.
if time-to-market is crucial for a software project. So in the end this is a business decision; you just have to know what you're doing and what consequences it might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion…

I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved to be very “good enough” in my practical experience. But of course, I'm open to suggestions here… My rationale for now is this: if a test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests have become part of the regular, automated Continuous Integration process, 'red' must only ever occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
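As a small addendum: the LastResult test for Add() that I skipped above as trivial could, following the TestSubtractGivesExpectedLastResult pattern, look something like the sketch below. The row values are only illustrative, and the sketch assumes the same CalculatorTest fixture and container field used by the other tests:

// in CalculatorTest (sketch only):

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, double.MaxValue)]
public void TestAddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    // The return value is deliberately ignored here - this test only checks
    // that LastResult stores the result of the last operation.
    calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}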


  • javax.ejb.NoSuchEJBException after redeploying EJBs

    - by vetler
Using Glassfish 3.0.1 ... If I have a web application accessing EJBs in another application remotely, and the remote application containing the EJBs is redeployed, I get a javax.ejb.NoSuchEJBException (see the stack trace below). Shouldn't this work? I can see that the EJB in question was successfully deployed, using the exact same JNDI name. Is there any other way to fix this than to restart the web application?

In the particular example this stack trace comes from, I'm accessing a servlet that injects the bean with CDI:

public class StatusServlet extends HttpServlet {

    @Inject
    private StatusService statusService;

    @Override
    public void doGet(final HttpServletRequest req, final HttpServletResponse res) throws IOException {
        res.getWriter().write(statusService.getStatus());
    }
}

The injection is done with the following producer to get the right EJB:

public class StatusServiceProducer extends AbstractServiceProducer {

    @EJB(name = "StatusService")
    private StatusService service;

    @Produces
    public StatusService getService(final InjectionPoint ip) {
        return service;
    }
}

A producer is used to make it easier to wrap the service in a proxy, and to make it easier to change how the EJBs are looked up. The StatusService interface and implementation are as follows:

@Stateless(name = "StatusService")
public class StatusServiceImpl implements StatusService {

    private static final String OK = "OK";

    public String getStatus() {
        // Some code
        return OK;
    }
}

public interface StatusService {
    String getStatus();
}

Full stack trace:

[#|2011-01-12T10:45:28.273+0100|WARNING|glassfish3.0.1|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=50;_ThreadName=http-thread-pool-8080-(1);|StandardWrapperValve[Load Balancer status servlet]: PWC1406: Servlet.service() for servlet Load Balancer status servlet threw exception
javax.ejb.NoSuchEJBException
    at org.example.service._StatusService_Wrapper.getStatus(org/example/service/_StatusService_Wrapper.java)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at no.evote.service.cache.ServiceInvocationHandler.invoke(ServiceInvocationHandler.java:34)
    at $Proxy760.getStatus(Unknown Source)
    at no.evote.presentation.StatusServlet.doGet(StatusServlet.java:25)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
    at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:343)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215)
    at net.balusc.http.multipart.MultipartFilter.doFilter(MultipartFilter.java:78)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:277)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641)
    at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97)
    at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185)
    at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:325)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:226)
    at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165)
    at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791)
    at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693)
    at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954)
    at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170)
    at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135)
    at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102)
    at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88)
    at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76)
    at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53)
    at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57)
    at com.sun.grizzly.ContextTask.run(ContextTask.java:69)
    at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330)
    at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.rmi.NoSuchObjectException: CORBA OBJECT_NOT_EXIST 1330446338 No; nested exception is:
    org.omg.CORBA.OBJECT_NOT_EXIST:
    ----------BEGIN server-side stack trace----------
    org.omg.CORBA.OBJECT_NOT_EXIST: vmcid: OMG minor code: 2 completed: No
    at com.sun.corba.ee.impl.logging.OMGSystemException.noObjectAdaptor(OMGSystemException.java:3457)
    at com.sun.corba.ee.impl.logging.OMGSystemException.noObjectAdaptor(OMGSystemException.java:3475)
    at com.sun.corba.ee.impl.oa.poa.POAFactory.find(POAFactory.java:222)
    at com.sun.corba.ee.impl.protocol.CorbaServerRequestDispatcherImpl.findObjectAdapter(CorbaServerRequestDispatcherImpl.java:450)
    at com.sun.corba.ee.impl.protocol.CorbaServerRequestDispatcherImpl.dispatch(CorbaServerRequestDispatcherImpl.java:209)
    at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleRequestRequest(CorbaMessageMediatorImpl.java:1841)
    at com.sun.corba.ee.impl.protocol.SharedCDRClientRequestDispatcherImpl.marshalingComplete(SharedCDRClientRequestDispatcherImpl.java:119)
    at com.sun.corba.ee.impl.protocol.CorbaClientDelegateImpl.invoke(CorbaClientDelegateImpl.java:235)
    at com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.privateInvoke(StubInvocationHandlerImpl.java:187)
    at com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.invoke(StubInvocationHandlerImpl.java:147)
    at com.sun.corba.ee.impl.presentation.rmi.codegen.CodegenStubBase.invoke(CodegenStubBase.java:225)
    at no.evote.service.__StatusService_Remote_DynamicStub.getStatus(no/evote/service/__StatusService_Remote_DynamicStub.java)
    at no.evote.service._StatusService_Wrapper.getStatus(no/evote/service/_StatusService_Wrapper.java)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at no.evote.service.cache.ServiceInvocationHandler.invoke(ServiceInvocationHandler.java:34)
    at $Proxy760.getStatus(Unknown Source)
    at no.evote.presentation.StatusServlet.doGet(StatusServlet.java:25)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
    at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:343)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215)
    at net.balusc.http.multipart.MultipartFilter.doFilter(MultipartFilter.java:78)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:277)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641)
    at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97)
    at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185)
    at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:325)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:226)
    at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165)
    at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791)
    at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693)
    at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954)
    at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170)
    at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135)
    at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102)
    at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88)
    at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76)
    at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53)
    at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57)
    at com.sun.grizzly.ContextTask.run(ContextTask.java:69)
    at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330)
    at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: org.omg.PortableServer.POAPackage.AdapterNonExistent: IDL:omg.org/PortableServer/POA/AdapterNonExistent:1.0
    at com.sun.corba.ee.impl.oa.poa.POAImpl.find_POA(POAImpl.java:1057)
    at com.sun.corba.ee.impl.oa.poa.POAFactory.find(POAFactory.java:218)
    ... 48 more
    ----------END server-side stack trace----------
    vmcid: OMG minor code: 2 completed: No
    at com.sun.corba.ee.impl.javax.rmi.CORBA.Util.mapSystemException(Util.java:280)
    at com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.privateInvoke(StubInvocationHandlerImpl.java:200)
    at com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.invoke(StubInvocationHandlerImpl.java:147)
    at com.sun.corba.ee.impl.presentation.rmi.codegen.CodegenStubBase.invoke(CodegenStubBase.java:225)
    at no.evote.service.__StatusService_Remote_DynamicStub.getStatus(no/evote/service/__StatusService_Remote_DynamicStub.java)
    ... 39 more
Caused by: org.omg.CORBA.OBJECT_NOT_EXIST:
    ----------BEGIN server-side stack trace----------
    org.omg.CORBA.OBJECT_NOT_EXIST: vmcid: OMG minor code: 2 completed: No
    at com.sun.corba.ee.impl.logging.OMGSystemException.noObjectAdaptor(OMGSystemException.java:3457)
    at com.sun.corba.ee.impl.logging.OMGSystemException.noObjectAdaptor(OMGSystemException.java:3475)
    at com.sun.corba.ee.impl.oa.poa.POAFactory.find(POAFactory.java:222)
    at com.sun.corba.ee.impl.protocol.CorbaServerRequestDispatcherImpl.findObjectAdapter(CorbaServerRequestDispatcherImpl.java:450)
    at com.sun.corba.ee.impl.protocol.CorbaServerRequestDispatcherImpl.dispatch(CorbaServerRequestDispatcherImpl.java:209)
    at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleRequestRequest(CorbaMessageMediatorImpl.java:1841)
    at com.sun.corba.ee.impl.protocol.SharedCDRClientRequestDispatcherImpl.marshalingComplete(SharedCDRClientRequestDispatcherImpl.java:119)
    at com.sun.corba.ee.impl.protocol.CorbaClientDelegateImpl.invoke(CorbaClientDelegateImpl.java:235)
    at com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.privateInvoke(StubInvocationHandlerImpl.java:187)
    at com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.invoke(StubInvocationHandlerImpl.java:147)
    at com.sun.corba.ee.impl.presentation.rmi.codegen.CodegenStubBase.invoke(CodegenStubBase.java:225)
    at no.evote.service.__StatusService_Remote_DynamicStub.getStatus(no/evote/service/__StatusService_Remote_DynamicStub.java)
    at no.evote.service._StatusService_Wrapper.getStatus(no/evote/service/_StatusService_Wrapper.java)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at no.evote.service.cache.ServiceInvocationHandler.invoke(ServiceInvocationHandler.java:34)
    at $Proxy760.getStatus(Unknown Source)
    at no.evote.presentation.StatusServlet.doGet(StatusServlet.java:25)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
    at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:343)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215)
    at net.balusc.http.multipart.MultipartFilter.doFilter(MultipartFilter.java:78)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:277)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641)
    at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97)
    at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185)
    at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:325)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:226)
    at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165)
    at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791)
    at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693)
    at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954)
    at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170)
    at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135)
    at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102)
    at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88)
    at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76)
    at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53)
    at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57)
    at com.sun.grizzly.ContextTask.run(ContextTask.java:69)
    at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330)
    at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: org.omg.PortableServer.POAPackage.AdapterNonExistent: IDL:omg.org/PortableServer/POA/AdapterNonExistent:1.0
    at com.sun.corba.ee.impl.oa.poa.POAImpl.find_POA(POAImpl.java:1057)
    at com.sun.corba.ee.impl.oa.poa.POAFactory.find(POAFactory.java:218)
    ... 48 more
    ----------END server-side stack trace----------
    vmcid: OMG minor code: 2 completed: No
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at com.sun.corba.ee.impl.protocol.giopmsgheaders.MessageBase.getSystemException(MessageBase.java:913)
    at com.sun.corba.ee.impl.protocol.giopmsgheaders.ReplyMessage_1_2.getSystemException(ReplyMessage_1_2.java:129)
    at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.getSystemExceptionReply(CorbaMessageMediatorImpl.java:681)
    at com.sun.corba.ee.impl.protocol.CorbaClientRequestDispatcherImpl.processResponse(CorbaClientRequestDispatcherImpl.java:510)
    at com.sun.corba.ee.impl.protocol.SharedCDRClientRequestDispatcherImpl.marshalingComplete(SharedCDRClientRequestDispatcherImpl.java:153)
    at com.sun.corba.ee.impl.protocol.CorbaClientDelegateImpl.invoke(CorbaClientDelegateImpl.java:235)
    at com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.privateInvoke(StubInvocationHandlerImpl.java:187)
    ... 42 more
|#]
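Since the producer exists precisely to make it easy to change how the EJBs are looked up, one variant worth sketching is a producer that performs a fresh JNDI lookup on every injection instead of returning the container-injected (and possibly stale) reference. This is only a sketch, not a verified fix, and the portable JNDI name below is a placeholder that would have to match the name GlassFish logs when the remote application is deployed:

import javax.enterprise.inject.Produces;
import javax.enterprise.inject.spi.InjectionPoint;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class StatusServiceProducer extends AbstractServiceProducer {

    @Produces
    public StatusService getService(final InjectionPoint ip) {
        try {
            // Look the bean up on every injection instead of caching an
            // injected proxy, so a redeploy of the remote application does
            // not leave a stale stub behind.
            return (StatusService) new InitialContext()
                    .lookup("java:global/remote-app/StatusService"); // placeholder JNDI name
        } catch (final NamingException e) {
            throw new IllegalStateException("StatusService lookup failed", e);
        }
    }
}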

