Search Results

Search found 14411 results on 577 pages for 'library'.


  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index So far in this guide we have an IRM Server up and running, however I skipped over SSL configuration in the previous article because I wanted to focus in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing. Contents Setting up a one way, self signed SSL certificate in WebLogic Setting up an official SSL certificate in Apache 2.x Configuring Apache to proxy traffic to the IRM server There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly to the WebLogic Server running the IRM service. However in a production environment and for some proof of concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service and this article will go over the configuration of SSL in WebLogic and also in Apache. Like in the past articles, we are going to use two host names in the configuration below,irm.company.com will refer to the public Apache server irm.company.internal will refer to the internal WebLogic IRM server Setting up a one way, self signed SSL certificate in WebLogic First lets look at creating just a simple self signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment, however the downside is that no browsers are going to trust this certificate you create and you'll need to manually install the certificate onto any machine's communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use and I explain that in the next section. But for now lets go through creating, installing and testing a self signed certificate. We use a library in Java to create the certificates, open a console and running the following commands. Note you should choose your own secure passwords whenever you see password below. [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/ [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo" [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA We now have two Java Key Stores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain keys and certificates which we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running and login into the WebLogic Console at http://irm.company.intranet:7001/console and do the following; In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1. 
If the IRM server is running, shut it down either by hitting CONTROL + C in the console window it was started from, or you can switch to the CONTROL tab, select IRM_server1 and then select the Shutdown menu and then Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses it's own demo identity and trust. We are now going to switch to the self signed one's we've just created. So select the Change button and switch to Custom Identity and Custom Trust and hit save. Now we have to complete the resulting fields, the setting's i've used in my evaluation server are below. IdentityCustom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks Custom Identity Keystore Type: JKS Custom Identity Keystore Passphrase: password Confirm Custom Identity Keystore Passphrase: password TrustCustom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks Custom Trust Keystore Type: JKS Custom Trust Keystore Passphrase: password Confirm Custom Trust Keystore Passphrase: password Now click on the SSL tab for the IRM_server1 and enter in the alias and passphrase, in my demo here the details are; IdentityPrivate Key Alias: trustself Private Key Passphrase: password Confirm Private Key Passphrase: password And hit save. Now lets test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server, a quick reminder on how to do this is... [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin [oracle@irm /] ./startManagedWeblogic IRM_server1 Once running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.intranet:16101/irm_rights. Note in the example image on the right the port is 7002 because it's a system that has the IRM services installed on the Admin server, this isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101. Once you open this address you will notice that your browser is going to complain that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and more annoyingly from the IRM Desktop. If you plan on always using a self signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being informed this certificate is not trusted. Follow these instructions (which are for Internet Explorer 8, they may vary for your version of IE.) Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.intranet:16101/irm_rights. IE will complain that about the certificate, click on Continue to this website (not recommended). From the IE Tools menu select Internet Options and from the resulting dialog select Security and then click on Trusted Sites and then the Sites button. Add to the list of trusted sites a URL which mates the server you are accessing, e.g. https://irm.company.intranet/ and select OK. Now refresh the page you were accessing and next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self signed certificate and the Install Certificate... 
button should be enabled. Click on this to start the wizard. Click next and you'll be asked where you should install the certificate. Change the option to Place all certificates in the following store. Select browse and choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate and answer yes. You also need to import the root signed certificate into the same location, so once again select the red Certificate Error option and this time when viewing the certificate, switch to the Certification Path tab and you should see a CertGenCAB certificate. Select this and then click on View Certificate and go through the same process as above to import the certificate into the store. Finally close all instances of the IE browser and re-access the IRM server URL again, this time you should not receive any errors. Setting up an official SSL certificate in Apache 2.x At this point we now have an IRM server that you can communicate with over SSL. However this certificate isn't trusted by any browser because it's path of trust doesn't end in a recognized certificate authority (CA). Also you are communicating directly to the WebLogic Server over a non standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy this to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What i'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA. First step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service in Oracle Enterprise Linux 5 update 3 and Apache 2.2.3 which came with OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign and for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are; [oracle@irm /] cd /usr/bin [oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048 Generating RSA private key, 2048 bit long modulus ............................+++ .........+++ e is 65537 (0x10001) Enter pass phrase for irm-apache-server.key: Verifying - Enter pass phrase for irm-apache-server.key: [oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr Enter pass phrase for irm-apache-server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [GB]:US State or Province Name (full name) [Berkshire]:CA Locality Name (eg, city) [Newbury]:San Francisco Organization Name (eg, company) [My Company Ltd]:Oracle Organizational Unit Name (eg, section) []:Security Common Name (eg, your name or your server's hostname) []:irm.company.com Email Address []:[email protected] Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []:testing An optional company name []: You must make sure to remember the pass phrase you used in the initial key generation, you will need this when later configuring Apache. In the /usr/bin directory there are now two new files. The irm-apache-server.csr contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files. Your server certificate and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file which is where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located in /etc/httpd/conf.d/ssl.conf and i've added the following lines. <VirtualHost irm.company.com> # Setup SSL for irm.company.com ServerName irm.company.com SSLEngine On SSLCertificateFile /oracle/secure/irm.company.com.crt SSLCertificateKeyFile /oracle/secure/irm.company.com.key SSLCertificateChainFile /oracle/secure/gd_bundle.crt </VirtualHost> Restarting Apache (apachectl restart) and I can now attempt to connect to the Apache server in a web browser, https://irm.company.com/. If all is configured correctly I should now see an Apache test page delivered to me over HTTPS. Configuring Apache to proxy traffic to the IRM server Final piece in setting up SSL is to have Apache proxy requests for the IRM server but do so securely. So the requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plugin for Apache which you can download here from Oracle. Download the zip file and extract onto the server. The file extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32bit on an Oracle Enterprise Linux, 64 bit server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see version and it its 32 or 64 bit. Mine is a 32bit server so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. The from the resulting lib folder copy the file mod_wl.so into /usr/lib/httpd/modules/. First we want to test that the plug in will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom. 
LoadModule weblogic_module modules/mod_wl.so
<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16100
   WLLogFile /tmp/wl-proxy.log
</IfModule>
<Location /irm_rights>
   SetHandler weblogic-handler
</Location>
<Location /irm_desktop>
   SetHandler weblogic-handler
</Location>
<Location /irm_sealing>
   SetHandler weblogic-handler
</Location>
<Location /irm_services>
   SetHandler weblogic-handler
</Location>
Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server. Note that above I have included all four of the Locations you might wish to proxy. /irm_rights is the URL to the management website, /irm_desktop is the URL the IRM Desktop uses to communicate, /irm_sealing is for web-services-based document sealing and /irm_services is for the IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing these resources from the public-facing Apache server. However, just in case, I've mentioned them above. Now let's enable SSL communication from Apache to WebLogic. In the ZIP file we extracted were some more modules we need to copy into the Apache folder. Looking back in the lib folder that we extracted, there are some more files. Copy the following into the /usr/lib/httpd/modules/ folder: libwlssl.so libnnz11.so libclntsh.so.11.1 The documentation states that this should be all you need to do, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH and point it to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the following error:
[crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0
So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; if so, simply add this path to it.
LD_LIBRARY_PATH=/usr/lib/httpd/modules/
export LD_LIBRARY_PATH
Now, the WebLogic plug-in uses an Oracle Wallet to store the required certificates. You'll need to copy the self signed certificate from the IRM server over to the Apache server. Copy the MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates; in my example this is /oracle/secure. It's worth mentioning these files should ONLY be readable by root (the user Apache runs as). Now let's create an Oracle Wallet and import the self signed certificate from the IRM server. The file orapki was included in the bin folder of the Apache 1.1 plugin zip you extracted.
orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only
orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only
Finally, change the httpd.conf to reflect that we want the WebLogic Apache plug-in to use HTTPS/SSL and not just plain HTTP.
<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16101
   SecureProxy ON
   WLSSLWallet /oracle/secure/my-wallet
   WLLogFile /tmp/wl-proxy.log
</IfModule>
Then restart Apache once more and you can go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights.
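A few standard commands can be used to sanity-check each hop of this chain. The sketch below assumes the host names, ports, passwords and file paths used throughout this article; adjust them to your own environment.

    # Verify the self signed key and certificate made it into the WebLogic keystores
    keytool -list -v -keystore MyOwnIdentityStore.jks -storepass password
    keytool -list -v -keystore TrustMyOwnSelf.jks -storepass password

    # Check the managed server answers SSL directly on its listen port
    openssl s_client -connect irm.company.internal:16101 -showcerts </dev/null

    # Validate the Apache configuration and confirm the official key matches its certificate
    apachectl configtest
    openssl x509 -noout -modulus -in /oracle/secure/irm.company.com.crt | openssl md5
    openssl rsa -noout -modulus -in /oracle/secure/irm.company.com.key | openssl md5

    # Exercise the full proxy path and watch the plug-in log
    curl -I https://irm.company.com/irm_rights
    tail /tmp/wl-proxy.log

The two modulus hashes should be identical, and if curl returns a normal response from the IRM application rather than an Apache error page, the proxying and both SSL legs are working.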
At this point you have a fully functional Oracle IRM service; the next step is to create a sealed document and test the entire system.


  • How to reduce iOS AVPlayer start delay

    - by Bernt Habermeier
    Note, for the below question: All assets are local on the device -- no network streaming is taking place. The videos contain audio tracks. I'm working on an iOS application that requires playing video files with minimum delay to start the video clip in question. Unfortunately we do not know what specific video clip is next until we actually need to start it up. Specifically: When one video clip is playing, we will know what the next set of (roughly) 10 video clips are, but we don't know which one exactly, until it comes time to 'immediately' play the next clip. What I've done to look at actual start delays is to call addBoundaryTimeObserverForTimes on the video player, with a time period of one millisecond to see when the video actually started to play, and I take the difference of that time stamp with the first place in the code that indicates which asset to start playing. From what I've seen thus-far, I have found that using the combination of AVAsset loading, and then creating an AVPlayerItem from that once it's ready, and then waiting for AVPlayerStatusReadyToPlay before I call play, tends to take between 1 and 3 seconds to start the clip. I've since switched to what I think is roughly equivalent: calling [AVPlayerItem playerItemWithURL:] and waiting for AVPlayerItemStatusReadyToPlay to play. Roughly same performance. One thing I'm observing is that the first AVPlayer item load is slower than the rest. Seems one idea is to pre-flight the AVPlayer with a short / empty asset before trying to play the first video might be of good general practice. [http://stackoverflow.com/questions/900461/slow-start-for-avaudioplayer-the-first-time-a-sound-is-played] I'd love to get the video start times down as much as possible, and have some ideas of things to experiment with, but would like some guidance from anyone that might be able to help. Update: idea 7, below, as-implemented yields switching times of around 500 ms. This is an improvement, but it it'd be nice to get this even faster. Idea 1: Use N AVPlayers (won't work) Using ~ 10 AVPPlayer objects and start-and-pause all ~ 10 clips, and once we know which one we really need, switch to, and un-pause the correct AVPlayer, and start all over again for the next cycle. I don't think this works, because I've read there is roughly a limit of 4 active AVPlayer's in iOS. There was someone asking about this on StackOverflow here, and found out about the 4 AVPlayer limit: fast-switching-between-videos-using-avfoundation Idea 2: Use AVQueuePlayer (won't work) I don't believe that shoving 10 AVPlayerItems into an AVQueuePlayer would pre-load them all for seamless start. AVQueuePlayer is a queue, and I think it really only makes the next video in the queue ready for immediate playback. I don't know which one out of ~10 videos we do want to play back, until it's time to start that one. ios-avplayer-video-preloading Idea 3: Load, Play, and retain AVPlayerItems in background (not 100% sure yet -- but not looking good) I'm looking at if there is any benefit to load and play the first second of each video clip in the background (suppress video and audio output), and keep a reference to each AVPlayerItem, and when we know which item needs to be played for real, swap that one in, and swap the background AVPlayer with the active one. Rinse and Repeat. The theory would be that recently played AVPlayer/AVPlayerItem's may still hold some prepared resources which would make subsequent playback faster. 
So far, I have not seen benefits from this, but I might not have the AVPlayerLayer setup correctly for the background. I doubt this will really improve things from what I've seen. Idea 4: Use a different file format -- maybe one that is faster to load? I'm currently using .m4v's (video-MPEG4) H.264 format. I have not played around with other formats, but it may well be that some formats are faster to decode / get ready than others. Possible still using video-MPEG4 but with a different codec, or maybe quicktime? Maybe a lossless video format where decoding / setup is faster? Idea 5: Combination of lossless video format + AVQueuePlayer If there is a video format that is fast to load, but maybe where the file size is insane, one idea might be to pre-prepare the first 10 seconds of each video clip with a version that is boated but faster to load, but back that up with an asset that is encoded in H.264. Use an AVQueuePlayer, and add the first 10 seconds in the uncompressed file format, and follow that up with one that is in H.264 which gets up to 10 seconds of prepare/preload time. So I'd get 'the best' of both worlds: fast start times, but also benefits from a more compact format. Idea 6: Use a non-standard AVPlayer / write my own / use someone else's Given my needs, maybe I can't use AVPlayer, but have to resort to AVAssetReader, and decode the first few seconds (possibly write raw file to disk), and when it comes to playback, make use of the raw format to play it back fast. Seems like a huge project to me, and if I go about it in a naive way, it's unclear / unlikely to even work better. Each decoded and uncompressed video frame is 2.25 MB. Naively speaking -- if we go with ~ 30 fps for the video, I'd end up with ~60 MB/s read-from-disk requirement, which is probably impossible / pushing it. Obviously we'd have to do some level of image compression (perhaps native openGL/es compression formats via PVRTC)... but that's kind crazy. Maybe there is a library out there that I can use? Idea 7: Combine everything into a single movie asset, and seekToTime One idea that might be easier than some of the above, is to combine everything into a single movie, and use seekToTime. The thing is that we'd be jumping all around the place. Essentially random access into the movie. I think this may actually work out okay: avplayer-movie-playing-lag-in-ios5 Which approach do you think would be best? So far, I've not made that much progress in terms of reducing the lag.
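For reference, a minimal Objective-C sketch of the delay measurement described above, plus the zero-tolerance seek behind idea 7, might look like the following. localFileURL and the 42-second offset are placeholders rather than values from the original setup, and AVFoundation must be imported.

    #import <AVFoundation/AVFoundation.h>

    // Measure the delay between requesting playback and the first millisecond of playback.
    NSDate *requested = [NSDate date];
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:localFileURL]; // local asset, no streaming
    AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
    NSValue *oneMillisecond = [NSValue valueWithCMTime:CMTimeMake(1, 1000)];
    // Keep a reference to the observer and remove it later with removeTimeObserver:.
    id startObserver = [player addBoundaryTimeObserverForTimes:@[oneMillisecond]
                                                          queue:NULL
                                                     usingBlock:^{
        NSLog(@"start delay: %.0f ms", -[requested timeIntervalSinceNow] * 1000.0);
    }];
    [player play];

    // Idea 7: random access within one combined movie, seeking with no tolerance
    // so playback starts exactly at the requested clip boundary.
    CMTime clipStart = CMTimeMakeWithSeconds(42.0, 600);
    [player seekToTime:clipStart
       toleranceBefore:kCMTimeZero
        toleranceAfter:kCMTimeZero
     completionHandler:^(BOOL finished) {
         if (finished) [player play];
     }];

Waiting for AVPlayerItemStatusReadyToPlay via key-value observing before calling play, as described earlier in the question, is the more conservative variant of the same measurement.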


  • How to include an external jar in gwt client side?

    - by Sergio del Amo
    I would like to use the org.apache.commons.validator.GenericValidator class in a view class of my GWT web app. I have read that I have to implicitely tell that I intend to use this external library. I thought adding the next line into my App.gwt.xml would work. <inherits name='org.apache.commons.validator.GenericValidator'/> I get the next error: Loading inherited module 'org.apache.commons.validator.GenericValidator' [ERROR] Unable to find 'org/apache/commons/validator/GenericValidator.gwt.xml' on your classpath; could be a typo, or maybe you forgot to include a classpath entry for source? [ERROR] Line 13: Unexpected exception while processing element 'inherits' com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:239) at com.google.gwt.dev.cfg.ModuleDefSchema$BodySchema.__inherits_begin(ModuleDefSchema.java:354) at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:223) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159) [ERROR] Failure while parsing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at 
com.google.gwt.dev.util.xml.DefaultSchema.onHandlerException(DefaultSchema.java:56) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:233) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159) [ERROR] Unexpected error while processing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:351) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at 
com.google.gwt.dev.Compiler.main(Compiler.java:159) Does anyone know how to make this work?


  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be “one table for every entity persistent class.” This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both “is a” and “has a” relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don’t support type inheritance—and even when it’s available, it’s usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy: Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information. Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships. Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.I will explain each of these strategies in a series of posts and this one is dedicated to TPH. In this series we'll deeply dig into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario. Inheritance Mapping with Entity Framework Code FirstAll of the inheritance mapping strategies that we discuss in this series will be implemented by EF Code First CTP5. The CTP5 build of the new EF Code First library has been released by ADO.NET team earlier this month. EF Code-First enables a pretty powerful code-centric development workflow for working with data. I’m a big fan of the EF Code First approach, and I’m pretty excited about a lot of productivity and power that it brings. When it comes to inheritance mapping, not only Code First fully supports all the strategies but also gives you ultimate flexibility to work with domain models that involves inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and now it's more intuitive and concise in compare to CTP4. A Note For Those Who Follow Other Entity Framework ApproachesIf you are following EF's "Database First" or "Model First" approaches, I still recommend to read this series since although the implementation is Code First specific but the explanations around each of the strategies is perfectly applied to all approaches be it Code First or others. A Note For Those Who are New to Entity Framework and Code-FirstIf you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by ADO.NET team here. In this post, I assume you already setup your machine to do Code First development and also that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First like Complex Types and Shared Primary Key Associations. A Top Down Development ScenarioThese posts take a top-down approach; it assumes that you’re starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C# and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you’re working bottom up, starting with existing database tables. 
I’ll show some tricks along the way that help you dealing with nonperfect table layouts. Let’s start with the mapping of entity inheritance. -- The Domain ModelIn our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We do allow various billing types and represent them as subclasses of BillingDetail class. As for now, we support CreditCard and BankAccount: Implement the Object Model with Code First As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class which is BillingDetail. Code First will find the other classes in the hierarchy based on Reachability Convention. public abstract class BillingDetail  {     public int BillingDetailId { get; set; }     public string Owner { get; set; }             public string Number { get; set; } } public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } } public class CreditCard : BillingDetail {     public int CardType { get; set; }                     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } } public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; } } This object model is all that is needed to enable inheritance with Code First. If you put this in your application you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries. Polymorphic Queries LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses, respectively. For example, consider the following query: IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b; List<BillingDetail> billingDetails = linqQuery.ToList(); Or the same query in EntitySQL: string eSqlQuery = @"SELECT VAlUE b FROM BillingDetails AS b"; ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext                                                                          .CreateQuery<BillingDetail>(eSqlQuery); List<BillingDetail> billingDetails = objectQuery.ToList(); linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount. Non-polymorphic QueriesAll LINQ to Entities and EntitySQL queries are polymorphic which return not only instances of the specific entity class to which it refers, but all subclasses of that class as well. On the other hand, Non-polymorphic queries are queries whose polymorphism is restricted and only returns instances of a particular subclass. In LINQ to Entities, this can be specified by using OfType<T>() Method. 
For example, the following query returns only instances of BankAccount: IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b; EntitySQL has OFTYPE operator that does the same thing: string eSqlQuery = @"SELECT VAlUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b"; In fact, the above query with OFTYPE operator is a short form of the following query expression that uses TREAT and IS OF operators: string eSqlQuery = @"SELECT VAlUE TREAT(b as Model.BankAccount)                       FROM BillingDetails AS b                       WHERE b IS OF(Model.BankAccount)"; (Note that in the above query, Model.BankAccount is the fully qualified name for BankAccount class. You need to change "Model" with your own namespace name.) Table per Class Hierarchy (TPH)An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don’t have to do anything special in Code First to enable TPH. It's the default inheritance mapping strategy: This mapping strategy is a winner in terms of both performance and simplicity. It’s the best-performing way to represent polymorphism—both polymorphic and nonpolymorphic queries perform well—and it’s even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward. Discriminator Column As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn’t a property of the persistent class in our object model; it’s used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values defaults to the persistent class names —in this case, “BankAccount” or “CreditCard”. EF Code First automatically sets and retrieves the discriminator values. TPH Requires Properties in SubClasses to be Nullable in the Database TPH has one major problem: Columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map CardType property in CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in persistent class. But in this case, since BankAccount instance won’t have a CardType property, the CardType field must be NULL for that row so Code First creates an (INT, NULL) instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity. TPH Violates the Third Normal FormAnother important issue is normalization. We’ve created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName) but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may be also achieved by proper optimization of the SQL execution plans (in other words, ask your DBA). 
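To make the single-table mapping concrete, here is a short usage sketch against the context and classes defined above; the property values are invented for illustration.

    // Requires: using System.Linq; (for OfType and ToList)
    using (var context = new InheritanceMappingContext())
    {
        // Both concrete subtypes go through the single DbSet declared for the base class.
        context.BillingDetails.Add(new BankAccount { Owner = "Joe", Number = "1234", BankName = "AnyBank", Swift = "ANYBXX11" });
        context.BillingDetails.Add(new CreditCard { Owner = "Jane", Number = "4111111111111111", CardType = 1, ExpiryMonth = "01", ExpiryYear = "2012" });
        context.SaveChanges();

        // Both objects end up as rows in the one BillingDetails table, told apart only by
        // the Discriminator column; the columns of the "other" subtype are left NULL.
        var accounts = context.BillingDetails.OfType<BankAccount>().ToList();
    }

The OfType<BankAccount>() call here issues exactly the kind of non-polymorphic, discriminator-filtered SQL shown in the next section.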
Generated SQL Query
Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:
SELECT
 [Extent1].[Discriminator] AS [Discriminator],
 [Extent1].[BillingDetailId] AS [BillingDetailId],
 [Extent1].[Owner] AS [Owner],
 [Extent1].[Number] AS [Number],
 [Extent1].[BankName] AS [BankName],
 [Extent1].[Swift] AS [Swift],
 [Extent1].[CardType] AS [CardType],
 [Extent1].[ExpiryMonth] AS [ExpiryMonth],
 [Extent1].[ExpiryYear] AS [ExpiryYear]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')
Or the non-polymorphic query for the BankAccount subclass generates this SQL statement:
SELECT
 [Extent1].[BillingDetailId] AS [BillingDetailId],
 [Extent1].[Owner] AS [Owner],
 [Extent1].[Number] AS [Number],
 [Extent1].[BankName] AS [BankName],
 [Extent1].[Swift] AS [Swift]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] = 'BankAccount'
Note how Code First adds a restriction on the discriminator column and also how it only selects those columns that belong to the BankAccount entity.
Change Discriminator Column Data Type and Values With Fluent API
Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:
protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
{
    modelBuilder.Entity<BillingDetail>()
                .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
}
Also, changing the data type of the discriminator column is interesting. In the above code we passed strings to the HasValue method, but this method is defined to accept an object:
public void HasValue(object value);
Therefore, if, for example, we pass a value of type int to it, Code First will not only use our desired values (i.e. 1 & 2) in the discriminator column but will also change the column type to be (INT, NOT NULL):
modelBuilder.Entity<BillingDetail>()
            .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
            .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));
Summary
In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design; after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem.
References
ADO.NET team blog
Java Persistence with Hibernate book


  • Android "java.lang.noclassdeffounderror" exception

    - by wpbnewbie
    Hello, I have a android webservice client application. I am trying to use the java standard WS library support. I have stripped the application down to the minimum, as shown below, to try and isolate the issue. Below is the applicaiton, package fau.edu.cse; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; public class ClassMap extends Activity { TextView displayObject; @Override public void onCreate(Bundle savedInstanceState) { // Build Screen Display String String screenString = "Program Started\n\n"; // Set up the display super.onCreate(savedInstanceState); setContentView(R.layout.main); displayObject = (TextView)findViewById(R.id.TextView01); screenString = screenString + "Inflate Disaplay\n\n"; try { // Set up Soap Service TempConvertSoap service = new TempConvert().getTempConvertSoap(); // Successful Soap Object Build screenString = screenString + "SOAP Object Correctly Build\n\n"; // Display Response displayObject.setText(screenString); } catch(Throwable e){ e.printStackTrace(); displayObject.setText(screenString +"Try Error...\n" + e.toString()); } } } The classes tempConvert and tempConvertSoap are in the package fau.edu.cse. I have included the java SE javax libraries in the java build pasth. When the android application tries to create the "service" object I get a "java.lang.noclassdeffounderror" exception. The two classes tempConvertSoap and TempConvet() are generated by wsimport. I am also using several libraries from javax.jws.. and javax.xml.ws.. Of course the application compiles without error and loads correctly. I know the application is running becouse my "try/catch" routine is successfully catching the error and printing it out. Here is what is in the logcat says (notice that it cannot find TempConvert), 06-12 22:58:39.340: WARN/dalvikvm(200): Unable to resolve superclass of Lfau/edu/cse/TempConvert; (53) 06-12 22:58:39.340: WARN/dalvikvm(200): Link of class 'Lfau/edu/cse/TempConvert;' failed 06-12 22:58:39.340: ERROR/dalvikvm(200): Could not find class 'fau.edu.cse.TempConvert', referenced from method fau.edu.cse.ClassMap.onCreate 06-12 22:58:39.340: WARN/dalvikvm(200): VFY: unable to resolve new-instance 21 (Lfau/edu/cse/TempConvert;) in Lfau/edu/cse/ClassMap; 06-12 22:58:39.340: DEBUG/dalvikvm(200): VFY: replacing opcode 0x22 at 0x0027 06-12 22:58:39.340: DEBUG/dalvikvm(200): Making a copy of Lfau/edu/cse/ClassMap;.onCreate code (252 bytes) 06-12 22:58:39.490: DEBUG/dalvikvm(30): GC freed 2 objects / 48 bytes in 273ms 06-12 22:58:39.530: DEBUG/ddm-heap(119): Got feature list request 06-12 22:58:39.620: WARN/Resources(200): Converting to string: TypedValue{t=0x12/d=0x0 a=2 r=0x7f050000} 06-12 22:58:39.620: WARN/System.err(200): java.lang.NoClassDefFoundError: fau.edu.cse.TempConvert 06-12 22:58:39.830: WARN/System.err(200): at fau.edu.cse.ClassMap.onCreate(ClassMap.java:26) 06-12 22:58:39.830: WARN/System.err(200): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2459) 06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512) 06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.access$2200(ActivityThread.java:119) 06-12 22:58:39.880: WARN/System.err(200): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863) 06-12 22:58:39.880: WARN/System.err(200): at 
android.os.Handler.dispatchMessage(Handler.java:99) 06-12 22:58:39.880: WARN/System.err(200): at android.os.Looper.loop(Looper.java:123) 06-12 22:58:39.880: WARN/System.err(200): at android.app.ActivityThread.main(ActivityThread.java:4363) 06-12 22:58:39.880: WARN/System.err(200): at java.lang.reflect.Method.invokeNative(Native Method) 06-12 22:58:39.880: WARN/System.err(200): at java.lang.reflect.Method.invoke(Method.java:521) 06-12 22:58:39.880: WARN/System.err(200): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 06-12 22:58:39.880: WARN/System.err(200): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 06-12 22:58:39.880: WARN/System.err(200): at dalvik.system.NativeStart.main(Native Method) ...bla...bla...bla It would be great if someone just had an answer, however I am looking at debug strategies. I have taken this same application and created a standard java client application and it works fine -- of course with all of the android stuff taken out. What would be a good debug strategy? What methods and techniques would you recommend I try and isolate the problem? I am thinking that there is some sort of Dalvik VM incompatibility that is causing the TempConvert class not to load. TempConvert is an interface class that references a lot of very tricky webservice attributes. Any help with debug strategies would be gladly appreciated. Thanks for the help, Steve
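One observation offered as a general suggestion rather than something from this thread: the wsimport-generated proxy extends javax.xml.ws.Service, and Dalvik does not ship the javax.jws / javax.xml.ws stack, which is why the verifier logs "Unable to resolve superclass of Lfau/edu/cse/TempConvert;" and the class can never link at runtime even though it compiles on the desktop. A common workaround on Android is a lightweight SOAP client such as ksoap2-android. The sketch below is generic; the namespace, operation name and endpoint URL are placeholders, not values taken from this question.

    import org.ksoap2.SoapEnvelope;
    import org.ksoap2.serialization.SoapObject;
    import org.ksoap2.serialization.SoapSerializationEnvelope;
    import org.ksoap2.transport.HttpTransportSE;

    // Generic ksoap2-android call; run it off the UI thread.
    static String callTempConvert(String celsius) throws Exception {
        String namespace = "http://example.com/webservices/";           // placeholder WSDL namespace
        String method = "CelsiusToFahrenheit";                           // placeholder operation name
        String url = "http://example.com/webservices/tempconvert.asmx";  // placeholder endpoint

        SoapObject request = new SoapObject(namespace, method);
        request.addProperty("Celsius", celsius);

        SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
        envelope.dotNet = true;                  // set for ASP.NET/.asmx services
        envelope.setOutputSoapObject(request);

        new HttpTransportSE(url).call(namespace + method, envelope);
        return envelope.getResponse().toString();
    }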


  • How to include an external jar in a GWT module?

    - by Sergio del Amo
    I would like to use the org.apache.commons.validator.GenericValidator class in a view class of my GWT web app. I have read that I have to implicitely tell that I intend to use this external library. I thought adding the next line into my App.gwt.xml would work. <inherits name='org.apache.commons.validator.GenericValidator'/> I get the next error: Loading inherited module 'org.apache.commons.validator.GenericValidator' [ERROR] Unable to find 'org/apache/commons/validator/GenericValidator.gwt.xml' on your classpath; could be a typo, or maybe you forgot to include a classpath entry for source? [ERROR] Line 13: Unexpected exception while processing element 'inherits' com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:239) at com.google.gwt.dev.cfg.ModuleDefSchema$BodySchema.__inherits_begin(ModuleDefSchema.java:354) at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:223) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159) [ERROR] Failure while parsing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at 
com.google.gwt.dev.util.xml.DefaultSchema.onHandlerException(DefaultSchema.java:56) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:233) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159) [ERROR] Unexpected error while processing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:351) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at 
com.google.gwt.dev.Compiler.main(Compiler.java:159) I have commons.validator-1.3.1.jar in war/WEB-INF/lib. I am using Eclipse with the Google Plugin. Does anyone know how to make this work?
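One point worth noting, not part of the original question: <inherits> expects the name of a GWT module, that is, a *.gwt.xml file on the classpath, not a Java class name, and the inherited module must expose translatable Java source to the GWT compiler. commons-validator ships neither a module descriptor nor client-visible source, so it cannot be used from client-side GWT code no matter what sits in war/WEB-INF/lib. A jar that is meant to be inherited is laid out roughly like this hypothetical sketch (all names illustrative):

    <!-- com/example/validation/Validation.gwt.xml, packaged in the jar together
         with its Java sources under com/example/validation/client -->
    <module>
        <inherits name="com.google.gwt.user.User"/>
        <source path="client"/>
    </module>

    <!-- App.gwt.xml then inherits the module, not a class -->
    <inherits name="com.example.validation.Validation"/>

For a handful of GenericValidator-style checks, a small hand-written validator class placed under your own module's client package is usually the simpler route.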


  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 where essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file. 
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="MyDB" connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje. Improved Routing Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but a Page derived handlers you still will have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]). 
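To make the routing additions concrete, here is a minimal sketch of registering a page route with MapPageRoute() in Global.asax; the route name, URL template and target page below are made-up values for illustration only.

using System;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        // Route "users/{userId}" straight to a Web Form - no custom IRouteHandler needed in 4.0.
        // "users", the URL pattern and ~/UserDetail.aspx are hypothetical names.
        RouteTable.Routes.MapPageRoute("users", "users/{userId}", "~/UserDetail.aspx");
    }
}

Inside the target page the value is then available as this.RouteData.Values["userId"], exactly as described above.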
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL: <%= this.GetRouteUrl("users",new { userId="ricks" }) %> You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code: <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a> Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL. Response.RedirectToRoute("users", new { userId = "ricks" }); All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5. Session State Improvements Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and to minimize at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering an enableCompression option on the web.Config <Session> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing as long as it occurs before the AquireRequestState pipeline event. Output Cache Provider Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages. 
One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general to a generic Windows AppFabric framework in the future, so various different mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state eventually can use the same caching engine for storage of persisted data in both in-memory and out-of-process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom Cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l. Response.RedirectPermanent ASP.NET 4.0 includes features to issue a permanent redirect that issues as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route URLs. Preloading of Applications ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly in ASP.NET 4.0, you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting serviceAutoStartEnabled on a particular site along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup.  The configuration settings need to be made in applicationhost.config: <sites> <site name="WebApplication2" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PreWarmup" /> </site> </sites> <serviceAutoStartProviders> <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" /> </serviceAutoStartProviders> Hooking up a warm up provider is optional so you can omit the provider definition and reference. If you do define it here's what it looks like: public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // initialization for app } } This code fires on startup and, while it's running, ASP.NET/IIS will hold requests from hitting the pipeline. So until this code completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at optimal performance level without lag. 
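Circling back to the OutputCache provider model described above, here is a bare-bones sketch of a custom provider; the class name and the naive in-process dictionary store are assumptions for illustration, not a production-ready cache.

using System;
using System.Collections.Concurrent;
using System.Web.Caching;

// Hypothetical provider that keeps cached output in a process-local dictionary.
public class DictionaryOutputCacheProvider : OutputCacheProvider
{
    private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> store =
        new ConcurrentDictionary<string, Tuple<object, DateTime>>();

    public override object Get(string key)
    {
        Tuple<object, DateTime> entry;
        if (store.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
            return entry.Item1;
        return null;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Per the provider contract, Add returns any entry already cached under this key.
        object existing = Get(key);
        if (existing != null)
            return existing;
        Set(key, entry, utcExpiry);
        return entry;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        store[key] = Tuple.Create(entry, utcExpiry);
    }

    public override void Remove(string key)
    {
        Tuple<object, DateTime> removed;
        store.TryRemove(key, out removed);
    }
}

The provider is then registered under the <caching><outputCache> section in web.config and selected via its defaultProvider attribute.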
Runtime Performance Improvements According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use less resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0. Summary The core feature set changes are minimal which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0 with its full .NET runtime rev and assembly and configuration consolidation will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • SQL Server 2005: Improving performance for thousands of Insert requests. logout-login time = 120ms.

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of a SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see that there is constant switching between the tempdb and master databases, which leads me to conclude that we lose the context each time the next connection is fetched from the pool based on the ConnectionString argument. I can see that every 15ms I can get 100-200 login/logouts per second (reported at the same time by Profiler), and after another 15ms I again have a series of 100-200 login/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment. I use Enterprise Library 2006; the code is compiled with VS 2005 and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and calls two stored procedures on a remote SQL Server 2005: one inserts a parent record and retrieves the Identity value, which is then used to call the second stored procedure 1, 2 or more times (sometimes several thousand) to insert child records. The child table has close to 10 million records with 5-10 indexes, some of them covering non-clustered. There is a pretty complex Insert trigger that copies each inserted detail record to an archive table. All in all I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records. When I run Profiler on the test server (which is almost equivalent to the production server) I can see that there is about 120ms between the Audit Logout and Audit Login trace entries, which barely gives me the chance to insert about 8 records. So my question is whether there is some way to improve the insertion of records, since the company loads hundreds of thousands of records, does daily planning, and has an SLA to fulfill client requests that arrive as flat-file orders; some big files of 10 thousand rows have to be processed (imported) quickly. 4 hours to import 60 thousand records should be reduced to 30 minutes. I was thinking of using the BatchSize of a DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe these have to take some time to finish. What is worse is that the company uses the biggest table for reporting and other online processing, so the indexes cannot be dropped. I manage the transaction manually by setting a field to a value and then doing a transactional update changing that value to a new value that other applications use to pick up committed rows. Please advise how to approach this problem. For now I am trying to use staging tables with minimal logging in a separate database and no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB has the simple recovery model, but it could be full recovery. If the DB user that my .NET console application uses has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many login/logouts. Why? 
for (int i = 1; i <= 10000; i++){ using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;")) {conn.Open(); using (SqlCommand cmd = conn.CreateCommand()){ cmd.CommandText = "use tempdb"; cmd.ExecuteNonQuery();}}} SQL Server Profiler trace: Audit Login master 2010-01-13 23:18:45.337 1 - Nonpooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.337 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.337 Audit Logout tempdb 2010-01-13 23:18:45.337 2 - Pooled Audit Login -- network protocol master 2010-01-13 23:18:45.383 2 - Pooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.383 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.383 Audit Logout tempdb 2010-01-13 23:18:45.383 2 - Pooled Audit Login -- network protocol master 2010-01-13 23:18:45.383 2 - Pooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.383 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.383 Audit Logout tempdb 2010-01-13 23:18:45.383 2 - Pooled
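As a rough illustration of the staging-table idea mentioned in the question, the child rows could be pushed to the server in bulk with SqlBulkCopy instead of thousands of individual stored procedure calls; the connection string, staging table and column names below are hypothetical.

using System.Data;
using System.Data.SqlClient;

static class ChildLoader
{
    // Sketch: bulk-load child rows into a staging table in a single round trip per batch.
    public static void BulkLoadChildren(DataTable childRows, string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "dbo.ChildStaging";  // hypothetical staging table
                bulk.BatchSize = 5000;                           // rows sent per batch
                bulk.BulkCopyTimeout = 600;                      // seconds
                bulk.ColumnMappings.Add("ParentId", "ParentId"); // hypothetical columns
                bulk.ColumnMappings.Add("Detail", "Detail");
                bulk.WriteToServer(childRows);
            }
        }
    }
}

From the staging table a single set-based INSERT ... SELECT into the real child table can then be issued, avoiding the per-row round trips and login/logout churn seen in the trace.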

    Read the article

  • Can't connect to WCF service on Android

    - by illvm
    I am trying to connect to a .NET WCF service in Android using kSOAP2 (v 2.1.2), but I keep getting a fatal exception whenever I try to make the service call. I'm having a bit of difficulty tracking down the error and can't seem to figure out why it's happening. The code I am using is below: package org.example.android; import org.ksoap2.SoapEnvelope; import org.ksoap2.serialization.PropertyInfo; import org.ksoap2.serialization.SoapObject; import org.ksoap2.serialization.SoapSerializationEnvelope; import org.ksoap2.transport.HttpTransport; import android.app.Activity; import android.app.AlertDialog; import android.content.DialogInterface; import android.content.Intent; import android.os.Bundle; public class ValidateUser extends Activity { private static final String SOAP_ACTION = "http://tempuri.org/mobile/ValidateUser"; private static final String METHOD_NAME = "ValidateUser"; private static final String NAMESPACE = "http://tempuri.org/mobile/"; private static final String URL = "http://192.168.1.2:8002/WebService.Mobile.svc"; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.validate_user); Intent intent = getIntent(); Bundle extras = intent.getExtras(); if (extras == null) { this.finish(); } String username = extras.getString("username"); String password = extras.getString("password"); Boolean validUser = false; try { SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME); PropertyInfo uName = new PropertyInfo(); uName.name = "userName"; PropertyInfo pWord = new PropertyInfo(); pWord.name = "passWord"; request.addProperty(uName, username); request.addProperty(pWord, password); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.dotNet = true; envelope.setOutputSoapObject(request); HttpTransport androidHttpTransport = new HttpTransport(URL); androidHttpTransport.call(SOAP_ACTION, envelope); // error occurs here Integer userId = (Integer)envelope.getResponse(); validUser = (userId != 0); } catch (Exception ex) { } } private void exit () { this.finish(); } } EDIT: Remove the old stack traces. In summary, the first problem was the program being unable to open a connection due to missing methods or libraries due to using vanilla kSOAP2 rather than a modified library for Android (kSOAP2-Android). The second issue was a settings issue. In the Manifest I did not add the following setting: <uses-permission android:name="android.permission.INTERNET" /> I am now having an issue with the XMLPullParser which I need to figure out. 
12-23 10:58:06.480: ERROR/SOCKETLOG(210): add_recv_stats recv 0 12-23 10:58:06.710: WARN/System.err(210): org.xmlpull.v1.XmlPullParserException: unexpected type (position:END_DOCUMENT null@1:0 in java.io.InputStreamReader@433fb070) 12-23 10:58:06.710: WARN/System.err(210): at org.kxml2.io.KXmlParser.exception(KXmlParser.java:243) 12-23 10:58:06.720: WARN/System.err(210): at org.kxml2.io.KXmlParser.nextTag(KXmlParser.java:1363) 12-23 10:58:06.720: WARN/System.err(210): at org.ksoap2.SoapEnvelope.parse(SoapEnvelope.java:126) 12-23 10:58:06.720: WARN/System.err(210): at org.ksoap2.transport.Transport.parseResponse(Transport.java:63) 12-23 10:58:06.720: WARN/System.err(210): at org.ksoap2.transport.HttpTransportSE.call(HttpTransportSE.java:100) 12-23 10:58:06.730: WARN/System.err(210): at org.example.android.ValidateUser.onCreate(ValidateUser.java:68) 12-23 10:58:06.730: WARN/System.err(210): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1122) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2104) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2157) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.access$1800(ActivityThread.java:112) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1581) 12-23 10:58:06.730: WARN/System.err(210): at android.os.Handler.dispatchMessage(Handler.java:88) 12-23 10:58:06.730: WARN/System.err(210): at android.os.Looper.loop(Looper.java:123) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.main(ActivityThread.java:3739) 12-23 10:58:06.730: WARN/System.err(210): at java.lang.reflect.Method.invokeNative(Native Method) 12-23 10:58:06.730: WARN/System.err(210): at java.lang.reflect.Method.invoke(Method.java:515) 12-23 10:58:06.730: WARN/System.err(210): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:739) 12-23 10:58:06.730: WARN/System.err(210): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:497) 12-23 10:58:06.730: WARN/System.err(210): at dalvik.system.NativeStart.main(Native Method)
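For reference, here is a minimal sketch of what the server-side WCF contract being called presumably looks like; the interface and operation names are inferred from the SOAP action and parameter names in the question, so treat them as assumptions.

using System.ServiceModel;

[ServiceContract(Namespace = "http://tempuri.org/mobile/")]
public interface IMobileService
{
    // Matches the ValidateUser call made from the kSOAP2 client; the client code
    // treats a return value of 0 as "not a valid user".
    [OperationContract]
    int ValidateUser(string userName, string passWord);
}

Exposed over basicHttpBinding this should line up with the document/literal SOAP that ksoap2-android (with envelope.dotNet = true) produces.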

    Read the article

  • jqGrid - Problems opening in jquery tabs (on Firefox and Google Chrome)

    - by Ben Hargreaves
    I have developed a very simple MVC app to test out trirand's jqGrid for MVC. The app opens a jqgrid in a jquery tab group and everything is ok with IE. However when I use Firefox jqgrid only opens occasionaly in the first tab (but not under any other tab), and in Chrome my jqgrids dont appear to open under any tab of the group. I'm a bit of an MVC newbie (and have only been testing jqgrid out for a few days), but I know my users will want to use different browsers. Trirand have not come back with any answer so wondered if anyone else had had a similar issue. I have really just implemented jqgrid as per the controllers and model in the sample application on the Trirand site, and then combined it with a straightforward jquery tab group. My MVC Details Page is as follows; <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<PRAMSAPP.Models.Family>" %> <%@ Import Namespace="Trirand.Web.Mvc" %> <%@ Import Namespace="PRAMSAPP.Controllers" %> <%@ Import Namespace="PRAMSAPP.Models" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Details </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <link rel="stylesheet" type="text/css" href="/scripts/jquery-ui-1.7.2.custom.css" /> <script type="text/javascript" src="/scripts/jquery-1.3.2.min.js"></script> <script type="text/javascript" src="/scripts/jquery-ui-1.7.2.custom.min.js"></script> <fieldset> <legend>Family</legend> <div class="display-field"><%= Html.Encode(Model.FamilyID) %></div> <div class="display-field"><%= Html.Encode(Model.FamilySurname) %></div> </fieldset> <div id="tabs"> <ul> <li> <%= Html.ActionLink("GridChildren", "GridDemo", new { controller = "Grid", id = Model.FamilyID })%> </li> <li> <%= Html.ActionLink("Children", "ShowFamiliesChildren", new { famid = Model.FamilyID, page = Page})%> </li> </ul> </div> <p> <%= Html.ActionLink("Edit", "Edit", new { id=Model.FamilyID }) %> | <%= Html.ActionLink("Back to List", "Index") %> </p> <script type="text/javascript"> $(function() { $('#tabs').tabs(); }); </script> </asp:Content> And My Controller page is as follows; <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<PRAMSAPP.Models.FamiliesChildrenJqGridModel>" %> <%@ Import Namespace="Trirand.Web.Mvc" %> <%@ Import Namespace="PRAMSAPP.Controllers" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head id="Head1" runat="server"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <!-- The jQuery UI theme that will be used by the grid --> <link rel="stylesheet" type="text/css" media="screen" href="/Content/themes/redmond/jquery-ui-1.7.1.custom.css" /> <!-- The Css UI theme extension of jqGrid --> <link rel="stylesheet" type="text/css" media="screen" href="/Content/themes/ui.jqgrid.css" /> <!-- jQuery library is a prerequisite for jqGrid --> <script type="text/javascript" src="/Scripts/jquery-1.3.2.min.js"></script> <!-- language pack - MUST be included before the jqGrid javascript --> <script type="text/javascript" src="/Scripts/grid.locale-en.js"></script> <script type="text/javascript" src="/Scripts/jqgrid/jquery.jqGrid.min.js"></script> </head> <body> <div> <%= Html.Trirand().JQGrid(Model.FamiliesChildrenGrid, "JQGrid1") %> </div> </body>

    Read the article

  • An easy way to create Side by Side registrationless COM Manifests with Visual Studio

    - by Rick Strahl
    Here's something I didn't find out until today: You can use Visual Studio to easily create registrationless COM manifest files for you with just a couple of small steps. Registrationless COM lets you use COM component without them being registered in the registry. This means it's possible to deploy COM components along with another application using plain xcopy semantics. To be sure it's rarely quite that easy - you need to watch out for dependencies - but if you know you have COM components that are light weight and have no or known dependencies it's easy to get everything into a single folder and off you go. Registrationless COM works via manifest files which carry the same name as the executable plus a .manifest extension (ie. yourapp.exe.manifest) I'm going to use a Visual FoxPro COM object as an example and create a simple Windows Forms app that calls the component - without that component being registered. Let's take a walk down memory lane… Create a COM Component I start by creating a FoxPro COM component because that's what I know and am working with here in my legacy environment. You can use VB classic or C++ ATL object if that's more to your liking. Here's a real simple Fox one: DEFINE CLASS SimpleServer as Session OLEPUBLIC FUNCTION HelloWorld(lcName) RETURN "Hello " + lcName ENDDEFINE Compile it into a DLL COM component with: BUILD MTDLL simpleserver FROM simpleserver RECOMPILE And to make sure it works test it quickly from Visual FoxPro: server = CREATEOBJECT("simpleServer.simpleserver") MESSAGEBOX( server.HelloWorld("Rick") ) Using Visual Studio to create a Manifest File for a COM Component Next open Visual Studio and create a new executable project - a Console App or WinForms or WPF application will all do. Go to the References Node Select Add Reference Use the Browse tab and find your compiled DLL to import  Next you'll see your assembly in the project. Right click on the reference and select Properties Click on the Isolated DropDown and select True Compile and that's all there's to it. Visual Studio will create a App.exe.manifest file right alongside your application's EXE. The manifest file created looks like this: xml version="1.0" encoding="utf-8"? 
assembly xsi:schemaLocation="urn:schemas-microsoft-com:asm.v1 assembly.adaptive.xsd" manifestVersion="1.0" xmlns:asmv1="urn:schemas-microsoft-com:asm.v1" xmlns:asmv2="urn:schemas-microsoft-com:asm.v2" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:co.v1="urn:schemas-microsoft-com:clickonce.v1" xmlns:co.v2="urn:schemas-microsoft-com:clickonce.v2" xmlns="urn:schemas-microsoft-com:asm.v1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" / file name="simpleserver.DLL" asmv2:size="27293" hash xmlns="urn:schemas-microsoft-com:asm.v2" dsig:Transforms dsig:Transform Algorithm="urn:schemas-microsoft-com:HashTransforms.Identity" / dsig:Transforms dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" / dsig:DigestValuepuq+ua20bbidGOWhPOxfquztBCU=dsig:DigestValue hash typelib tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" version="1.0" helpdir="" resourceid="0" flags="HASDISKIMAGE" / comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" / file assembly Now let's finish our super complex console app to test with: using System; using System.Collections.Generic; using System.Text; namespace ConsoleApplication1 {     class Program     {         static voidMain(string[] args)         { Type type = Type.GetTypeFromProgID("simpleserver.simpleserver",true); dynamic server = Activator.CreateInstance(type); Console.WriteLine(server.HelloWorld("rick")); Console.ReadLine(); } } } Now run the Console Application… As expected that should work. And why not? The COM component is still registered, right? :-) Nothing tricky about that. Let's unregister the COM component and then re-run and see what happens. Go to the Command Prompt Change to the folder where the DLL is installed Unregister with: RegSvr32 -u simpleserver.dll      To be sure that the COM component no longer works, check it out with the same test you used earlier (ie. o = CREATEOBJECT("SimpleServer.SimpleServer") in your development environment or VBScript etc.). Make sure you run the EXE and you don't re-compile the application or else Visual Studio will complain that it can't find the COM component in the registry while compiling. In fact now that we have our .manifest file you can remove the COM object from the project. When you run run the EXE from Windows Explorer or a command prompt to avoid the recompile. Watch out for embedded Manifest Files Now recompile your .NET project and run it… and it will most likely fail! The problem is that .NET applications by default embeds a manifest file into the compiled EXE application which results in the externally created manifest file being completely ignored. Only one manifest can be applied at a time and the compiled manifest takes precedency. Uh, thanks Visual Studio - not very helpful… Note that if you use another development tool like Visual FoxPro to create your EXE this won't be an issue as long as the tool doesn't automatically add a manifest file. Creating a Visual FoxPro EXE for example will work immediately with the generated manifest file as is. 
If you are using .NET and Visual Studio you have a couple of options of getting around this: Remove the embedded manifest file Copy the contents of the generated manifest file into a project manifest file and compile that in To remove an embedded manifest in a Visual Studio project: Open the Project Properties (Alt-Enter on project node) Go down to Resources | Manifest and select | Create Application without a Manifest   You can now add use the external manifest file and it will actually be respected when the app runs. The other option is to let Visual Studio create the manifest file on disk and then explicitly add the manifest file into the project. Notice on the dialog above I did this for app.exe.manifest and the manifest actually shows up in the list. If I select this file it will be compiled into the EXE and be used in lieu of any external files and that works as well. Remove the simpleserver.dll reference so you can compile your code and run the application. Now it should work without COM registration of the component. Personally I prefer external manifests because they can be modified after the fact - compiled manifests are evil in my mind because they are immutable - once they are there they can't be overriden or changed. So I prefer an external manifest. However, if you are absolutely sure nothing needs to change and you don't want anybody messing with your manifest, you can also embed it. The option to either is there. Watch for Manifest Caching While working trying to get this to work I ran into some problems at first. Specifically when it wasn't working at first (due to the embedded schema) I played with various different manifest layouts in different files etc.. There are a number of different ways to actually represent manifest files including offloading to separate folder (more on that later). A few times I made deliberate errors in the schema file and I found that regardless of what I did once the app failed or worked no amount of changing of the manifest file would make it behave differently. It appears that Windows is caching the manifest data for a given EXE or DLL. It takes a restart or a recompile of either the EXE or the DLL to clear the caching. Recompile your servers in order to see manifest changes unless there's an outright failure of an invalid manifest file. If the app starts the manifest is being read and caches immediately. This can be very confusing especially if you don't know that it's happening. I found myself always recompiling the exe after each run and before making any changes to the manifest file. Don't forget about Runtimes of COM Objects In the example I used above I used a Visual FoxPro COM component. Visual FoxPro is a runtime based environment so if I'm going to distribute an application that uses a FoxPro COM object the runtimes need to be distributed as well. The same is true of classic Visual Basic applications. Assuming that you don't know whether the runtimes are installed on the target machines make sure to install all the additional files in the EXE's directory alongside the COM DLL. In the case of Visual FoxPro the target folder should contain: The EXE  App.exe The Manifest file (unless it's compiled in) App.exe.manifest The COM object DLL (simpleserver.dll) Visual FoxPro Runtimes: VFP9t.dll (or VFP9r.dll for non-multithreaded dlls), vfp9rENU.dll, msvcr71.dll All these files should be in the same folder. 
Debugging Manifest load Errors If you for some reason get your manifest loading wrong there are a couple of useful tools available - SxSTrace and SxSParse. These two tools can be a huge help in debugging manifest loading errors. Put the following into a batch file (SxS_Trace.bat for example): sxstrace Trace -logfile:sxs.bin sxstrace Parse -logfile:sxs.bin -outfile:sxs.txt Then start the batch file before running your EXE. Make sure there's no caching happening as described in the previous section. For example, if I go into the manifest file and explicitly break the CLSID and/or ProgID I get a detailed report on where the EXE is looking for the manifest and what it's reading. Eventually the trace gives me an error like this: INFO: Parsing Manifest File C:\wwapps\Conf\SideBySide\Code\app.EXE.     INFO: Manifest Definition Identity is App.exe,processorArchitecture="x86",type="win32",version="1.0.0.0".     ERROR: Line 13: The value {AAaf2c2811-0657-4264-a1f5-06d033a969ff} of attribute clsid in element comClass is invalid. ERROR: Activation Context generation failed. End Activation Context Generation. pinpointing nicely where the error lies. Pay special attention to the various attributes - they have to match exactly in the different sections of the manifest file(s). Multiple COM Objects The manifest file that Visual Studio creates is actually quite more complex than is required for basic registrationless COM object invokation. The manifest file can be simplified a lot actually by stripping off various namespaces and removing the type library references altogether. Here's an example of a simplified manifest file that actually includes references to 2 COM servers: xml version="1.0" encoding="utf-8"? assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" / file name="simpleserver.DLL" comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" / file file name = "sidebysidedeploy.dll" comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" progid="sidebysidedeploy.SidebysidedeployServer" description="SidebySideDeploy Server" threadingModel="apartment" / file assembly Simple enough right? Routing to separate Manifest Files and Folders In the examples above all files ended up in the application's root folder - all the DLLs, support files and runtimes. Sometimes that's not so desirable and you can actually create separate manifest files. The easiest way to do this is to create a manifest file that 'routes' to another manifest file in a separate folder. Basically you create a new 'assembly identity' via a named id. You can then create a folder and another manifest with the id plus .manifest that points at the actual file. In this example I create: App.exe.manifest A folder called App.deploy A manifest file in App.deploy All DLLs and runtimes in App.deploy Let's start with that master manifest file. This file only holds a reference to another manifest file: App.exe.manifest xml version="1.0" encoding="UTF-8" standalone="yes"? 
assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" / dependency dependentAssembly assemblyIdentity name="App.deploy" version="1.0.0.0" type="win32" / dependentAssembly dependency assembly   Note this file only contains a dependency to App.deploy which is another manifest id. I can then create App.deploy.manifest in the current folder or in an App.deploy folder. In this case I'll create App.deploy and in it copy the DLLs and support runtimes. I then create App.deploy.manifest. App.deploy.manifest xml version="1.0" encoding="UTF-8" standalone="yes"? assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" assemblyIdentity name="App.deploy" type="win32" version="1.0.0.0" / file name="simpleserver.DLL" comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" / file file name="sidebysidedeploy.dll" comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" threadingModel="Apartment" progid="sidebysidedeploy.SidebysidedeployServer" description="SidebySideDeploy Server" / file assembly   In this manifest file I then host my COM DLLs and any support runtimes. This is quite useful if you have lots of DLLs you are referencing or if you need to have separate configuration and application files that are associated with the COM object. This way the operation of your main application and the COM objects it interacts with is somewhat separated. You can see the two folders here:   Routing Manifests to different Folders In theory registrationless COM should be pretty easy in painless - you've seen the configuration manifest files and it certainly doesn't look very complicated, right? But the devil's in the details. The ActivationContext API (SxS - side by side activation) is very intolerant of small errors in the XML or formatting of the keys, so be really careful when setting up components, especially if you are manually editing these files. If you do run into trouble SxsTrace/SxsParse are a huge help to track down the problems. And remember that if you do have problems that you'll need to recompile your EXEs or DLLs for the SxS APIs to refresh themselves properly. All of this gets even more fun if you want to do registrationless COM inside of IIS :-) But I'll leave that for another blog post…© Rick Strahl, West Wind Technologies, 2005-2011Posted in COM  .NET  FoxPro   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Java Generics, JPA 2, J2EE, JSF 2, GWT, Ajax, Oracle's Java Strategies, Flex, iPhone, Agile ALM, Gra

    - by Kim Won
    Great Indian Developer Summit 2010 – India's Biggest Polyglot Conference and Workshops for IT Software Professionals Bangalore, April 9, 2010: The GIDS.Java Conference and Workshops has announced the complete program of over 50 sessions on the present and future of the Java language and VM, how they are evolving to meet the community's ever-changing needs, and some of the cutting-edge tools, technologies & techniques used for building robust enterprise Java applications today. The GIDs.Java track at Great Indian Developer Summit takes place 22 and 23 April 2010, at the Indian Institute of Science in Bangalore. As one of the longest running independent developer conferences in India, GIDS.Java at the Great Indian Developer Summit 2010 is uniquely positioned to provide a blend of practical, pragmatic and immediately applicable knowledge and a glimpse of the future of technology. During 22 and 23 April 2010, GIDS.Java offers a multi-track conference, workshops, expo show floor, and networking opportunities. The first keynote at GIDS.Java "Pointy Haired Bosses and Pragmatic Programmers" is led by Dr. Venkat Subramaniam. He speaks about how each of us has a professional responsibility to be objective and make decisions that will help us and our teams be productive and deliver results. Venkat will pick on some fallacies, lay down facts, and discuss how to stay professional and objective in our daily efforts. The second keynote of the day explains the practical features that make the Cloud so interesting, and why everyone should start using it in their everyday life. Simone Brunozzi, Amazon Web Services Technology Evangelist, will detail technical examples, business details all mixed with a lot of Italian humor to ensure audience enjoy this talk without a single line of code. The third keynote of the day gives an exciting overview of directions in the Java space for Oracle, featuring concrete signs of Oracles heavy investment, a clear concise strategy overview, and deep dives into some of the most interesting pieces of technology being developed in the Java Platform Group today; such as JavaEE, JDK7, JavaFX, and our exciting new visual tools. Featuring demos by a Java evangelism team star, Simon Ritter, this talk takes you top to bottom in Java Technology. Featured talks at GID.Web include: Good, Bad, and Ugly of Java Generics, Venkat Subramaniam Pure Java Ajax: An Overview of GWT 2.0, Marty Hall How JPA 2.0 Makes a Good Thing Even Better, Mike Keith Building Enterprise RIAs with Adobe Flex and Java, Sujit Reddy G Integrated Ajax Support in JSF 2.0, Marty Hall Design Patterns in Java and Groovy, Venkat Subramaniam A Gentle Introduction to iPhone and Obj-C for Java Developers, Matthew McCullough Cloud Computing: Azure for Java Developers, Janakiram MSV Ajax Support in the Prototype JavaScript Library, Marty Hall First steps to IT Heaven Through the Cloud. 
Part III: .Java, Simone Brunozi Building Web 2.0 User Interfaces for Web Service Models using JSF, Frank Nimphius and Jobinesh P Acceptance Test Driven Development, John Tobin and Mohammed Mohsinali Architecting Your Java Applications for the Cloud, Praveen Srivatsa Effective Java, Venkat Subramaniam The Amazing Groovy Weight-loss Plan, Scott Davis Enterprise Modeling - from Conceptual Planning to Technical Blueprints, J Sripad Java Collections Renaissance, Donald Raab and Vlad Zakharov Power 7 and IBM J9VM, Himanshu Goyal A Whistle-stop Tour of Maven 3.0, Matthew McCullough Mass Volume Opportunities for Java Developers, Jouko Nuottila Emerging Technology Complex Event Processing, Duvvuri Srinivas Agile ALM for Distributed Development, Karthi Swaminathan Dim Sum Grails - A Sampler of Practical Non Database-Driven Grails Applications, Scott Davis Diagnosing Performance Bottlenecks in J2EE, Deepak Kaul Business Driven Identity Management, Suneet Agera Combining Java EE with OSGi using Eclipse Gemini, Mike Keith Workshop: Essence of Functional Programming, Venkat Subramaniam Workshop: Agile Development, Tools, and Teams and Scrum Certification, Stephen Forte Workshop: Cloud Computing Boot Camp on the Google App Engine, Matthew McCullough Workshop: Building Your First Amazon App, Simone Brunozzi Workshop: The 180-min AJAX and JSON Spike Class, Scott Davis Workshop: PHP + Adobe Flex = Killer RIA, Shyamprasad P Workshop: User Expereince Evaluation Model Walkthrough, Sanna Häiväläinen Workshop: Building Data Centric Applications using Adobe Flex and Java, Prashant Singh Workshop: Monetizing your Apps with PayPal X Payments Platform, Khurram Khan, Praveen Alavilli Sponsors of Great Indian Developer Summit 2010 include: Platinum sponsors Microsoft, Oracle Forum Nokia and Adobe; Gold sponsors Intel and SAP; Silver sponsors Quest Software, PayPal, Telerik and AMT. About Great Indian Developer Summit Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advise from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. Featuring 3 co-located conferences: GIDS.NET, GIDS.Web, GIDS.Java and an exclusive day of in-depth tutorials - GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms, practical tutorials that deep dive into technical skill and best practices, inspirational keynote presentations, an Expo Hall featuring dozens of the latest projects and products activities, engaging networking events, and the interact with the best and brightest of speakers from around the world. For further information on GIDS 2010, please visit the summit on the web http://www.developersummit.com/ A Saltmarch Media Press Release E: [email protected] Ph: +91 80 4005 1000

    Read the article

  • Windows Vista/Win7 Privilege Problem: SeDebugPrivilege & OpenProcess

    - by KevenK
    Everything I've been able to find about escalating to the appropriate privileges for my needs has agreed with my current methods, but the problem exists. I'm hoping maybe someone has some Windows Vista/Win7 internals experience that might shine some light where there is only darkness. I'm sure this will get long, but please bare with me. Context: I'm working on an app that requires accessing the memory of other processes on the current machine. This, obviously, requires administrator rights. It also requires SeDebugPrivilege, which I believe myself to be acquiring correctly, although I question if more privileges aren't necessary and thus the cause of my problems. Code has so far worked successfully on all versions of Windows XP, and on my test Vista32 and Win7x64 environments. Process: Program will Always be run with Administrator Rights. This can be assumed throughout this post. Escalating the current process's Access Token to include SeDebugPrivilege rights. Using EnumProcesses to create a list of current PIDs on the system Opening a handle using OpenProcess with PROCESS_ALL_ACCESS access rights Using ReadProcessMemory to read the memory of the other process. Problem: Everything has been working fine during development and my personal testing (including Windows XP 32 & 64, Windows Vista 32, and Windows 7 x64). However, during a test deployment onto both Windows Vista(32-bit) and Windows 7(64-bit) machines of a colleague, there seems to be a privilege/rights problem with OpenProcess failing with a generic Access Denied error. This occurs both when running as a limited User (as would be expected) and also when run explicitly as Administrator (Right-click Run as Administrator and when run from an Administrator level command prompt). However, this problem has been unreproducible for myself in my test environment. I have witnessed the problem first hand, so I trust that the problem exists. The only difference that I can discern between the actual environment and my test environment is that the actual error is occurring when using a Domain Administrator account at the UAC prompt, whereas my tests (which work with no errors) use a local administrator account at the UAC prompt. It appears that although the credentials being used allow UAC to 'run as administrator', the process is still not obtaining the correct rights to be able to OpenProcess on another process. I am not familiar enough with the internals of Vista/Win7 to know what this might be, and I am hoping someone has an idea of what could be the cause. The Kicker: The person who has reported this error, and who's environment can regularly reproduce this bug, has a small application named along the lines of RunWithDebugEnabled which is a small bootstrap program which appears to escalate its own privileges and then launch the executable passed to it (thus inheriting the escalated privileges). When run with this program, using the same Domain Administrator credentials at UAC prompt, the program works correctly and is able to successfully call OpenProcess and operates as intended. So this is definitely a problem with acquiring the correct privileges, and it is known that the Domain Administrator account is an administrator account that should be able to access the correct rights. (Obviously obtaining this source code would be great, but I wouldn't be here if that were possible). Notes: As noted, the errors reported by the failed OpenProcess attempts are Access Denied. 
According to MSDN documentation of OpenProcess: If the caller has enabled the SeDebugPrivilege privilege, the requested access is granted regardless of the contents of the security descriptor. This leads me to believe that perhaps there is a problem under these conditions either with (1) Obtaining SeDebugPrivileges or (2) Requiring other privileges which have not been mentioned in any MSDN documentation, and which might differ between a Domain Administrator account and a Local Administrator account Sample Code: void sample() { ///////////////////////////////////////////////////////// // Note: Enabling SeDebugPrivilege adapted from sample // MSDN @ http://msdn.microsoft.com/en-us/library/aa446619%28VS.85%29.aspx // Enable SeDebugPrivilege HANDLE hToken = NULL; TOKEN_PRIVILEGES tokenPriv; LUID luidDebug; if(OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken) != FALSE) { if(LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &luidDebug) != FALSE) { tokenPriv.PrivilegeCount = 1; tokenPriv.Privileges[0].Luid = luidDebug; tokenPriv.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED; if(AdjustTokenPrivileges(hToken, FALSE, &tokenPriv, 0, NULL, NULL) != FALSE) { // Always successful, even in the cases which lead to OpenProcess failure cout << "SUCCESSFULLY CHANGED TOKEN PRIVILEGES" << endl; } else { cout << "FAILED TO CHANGE TOKEN PRIVILEGES, CODE: " << GetLastError() << endl; } } } CloseHandle(hToken); // Enable SeDebugPrivilege ///////////////////////////////////////////////////////// vector<DWORD> pidList = getPIDs(); // Method that simply enumerates all current process IDs ///////////////////////////////////////////////////////// // Attempt to open processes for(int i = 0; i < pidList.size(); ++i) { HANDLE hProcess = NULL; hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pidList[i]); if(hProcess == NULL) { // Error is occurring here under the given conditions cout << "Error opening process PID(" << pidList[i] << "): " << GetLastError() << endl; } CloseHandle(hProcess); } // Attempt to open processes ///////////////////////////////////////////////////////// } Thanks! If anyone has some insight into what possible permissions/privileges/rights/etc that I may be missing to correctly open another process (Assuming the executable has been properly "Run as Administrator"ed) on Windows Vista and Windows 7 under the above conditions, it would be most greatly appreciated. I wouldn't be here if I weren't absolutely stumped, but I'm hopeful that once again the experience and knowledge of the group shines bright. I thank you for taking the time to read this wall of text. The good intentions alone are appreciated, thanks for being the type of person that makes SO so useful to all!
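As an aside, if it helps to rule SeDebugPrivilege in or out from a managed test harness, the .NET framework can enable the privilege for you via Process.EnterDebugMode(); this is only a diagnostic sketch (run it elevated), not a fix for the native code above.

using System;
using System.Diagnostics;

class DebugPrivilegeCheck
{
    static void Main()
    {
        // Enables SeDebugPrivilege on the current process token (requires an elevated token).
        Process.EnterDebugMode();

        foreach (Process p in Process.GetProcesses())
        {
            try
            {
                // Reading HandleCount forces an OpenProcess with query access under the hood.
                Console.WriteLine("{0,-30} handles={1}", p.ProcessName, p.HandleCount);
            }
            catch (Exception ex)
            {
                Console.WriteLine("{0,-30} ACCESS DENIED ({1})", p.ProcessName, ex.GetType().Name);
            }
        }
    }
}

If this also fails for the same processes when launched under the domain administrator account, the problem is unlikely to be in the privilege-escalation code itself.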

    Read the article

  • Import Data from Excel sheet to DB Table through OAF page

    - by PRajkumar
    1. Create a New Workspace and Project File > New > General > Workspace Configured for Oracle Applications File Name – PrajkumarImportxlsDemo   Automatically a new OA Project will also be created   Project Name -- ImportxlsDemo Default Package -- prajkumar.oracle.apps.fnd.importxlsdemo   2. Add JAR file jxl-2.6.3.jar to Apache Library Download jxl-2.6.3.jar from following link – http://www.findjar.com/jar/net.sourceforge.jexcelapi/jars/jxl-2.6.jar.html   Steps to add jxl.jar file in Local Machine Right Click on ImportxlsDemo > Project Properties > Libraries > Add jar/Directory and browse to directory where jxl-2.6.3.jar has been downloaded and select the JAR file            Steps to add jxl.jar file at EBS middle tier On your EBS middile tier copy jxl.jar at $FND_TOP/java/3rdparty/standalone Add $FND_TOP/java/3rdparty/standalone\jxl.jar to custom classpath in Jser.properties file which is at $IAS_ORACLE_HOME/Apache/Jserv/etc wrapper.classpath=/U01/oracle/dev/devappl/fnd/11.5.0/java/3rdparty/stdalone/jxl.jar Bounce Apache Server   3. Create a New Application Module (AM) Right Click on ImportxlsDemo > New > ADF Business Components > Application Module Name -- ImportxlsAM Package -- prajkumar.oracle.apps.fnd.importxlsdemo.server   Check Application Module Class: ImportxlsAMImpl Generate JavaFile(s)   4. Create Test Table in which we will insert data from excel CREATE TABLE xx_import_excel_data_demo (    -- --------------------      -- Data Columns      -- --------------------      column1                 VARCHAR2(100),      column2                 VARCHAR2(100),      column3                 VARCHAR2(100),      column4                 VARCHAR2(100),      column5                 VARCHAR2(100),      -- --------------------      -- Who Columns      -- --------------------      last_update_date   DATE         NOT NULL,      last_updated_by    NUMBER   NOT NULL,      creation_date         DATE         NOT NULL,      created_by             NUMBER    NOT NULL,      last_update_login  NUMBER );   5. Create a New Entity Object (EO) Right click on ImportxlsDemo > New > ADF Business Components > Entity Object Name – ImportxlsEO Package -- prajkumar.oracle.apps.fnd.importxlsdemo.schema.server Database Objects -- XX_IMPORT_EXCEL_DATA_DEMO   Note – By default ROWID will be the primary key if we will not make any column to be primary key Check the Accessors, Create Method, Validation Method and Remove Method   6. Create a New View Object (VO) Right click on ImportxlsDemo > New > ADF Business Components > View Object Name -- ImportxlsVO Package -- prajkumar.oracle.apps.fnd.importxlsdemo.server   In Step2 in Entity Page select ImportxlsEO and shuttle it to selected list In Step3 in Attributes Window select all columns and shuttle them to selected list   In Java page Uncheck Generate Java file for View Object Class: ImportxlsVOImpl Select Generate Java File for View Row Class: ImportxlsVORowImpl -> Generate Java File -> Accessors   7. Add Your View Object to Root UI Application Module Right click on ImportxlsAM > Edit ImportxlsAM > Data Model > Select ImportxlsVO and shuttle to Data Model list   8. Create a New Page Right click on ImportxlsDemo > New > Web Tier > OA Components > Page Name -- ImportxlsPG Package -- prajkumar.oracle.apps.fnd.importxlsdemo.webui   9. Select the ImportxlsPG and go to the strcuture pane where a default region has been created   10. 
Select region1 and set the following properties:   Attribute Property ID PageLayoutRN AM Definition prajkumar.oracle.apps.fnd.importxlsdemo.server.ImportxlsAM Window Title Import Data From Excel through OAF Page Demo Window Title Import Data From Excel through OAF Page Demo   11. Create messageComponentLayout Region Under Page Layout Region Right click PageLayoutRN > New > Region   Attribute Property ID MainRN Item Style messageComponentLayout   12. Create a New Item messageFileUpload Bean under MainRN Right click on MainRN > New > messageFileUpload Set Following Properties for New Item --   Attribute Property ID MessageFileUpload Item Style messageFileUpload   13. Create a New Item Submit Button Bean under MainRN Right click on MainRN > New > messageLayout Set Following Properties for messageLayout --   Attribute Property ID ButtonLayout   Right Click on ButtonLayout > New > Item   Attribute Property ID Go Item Style submitButton Attribute Set /oracle/apps/fnd/attributesets/Buttons/Go   14. Create Controller for page ImportxlsPG Right Click on PageLayoutRN > Set New Controller Package Name: prajkumar.oracle.apps.fnd.importxlsdemo.webui Class Name: ImportxlsCO   Write Following Code in ImportxlsCO in processFormRequest import oracle.apps.fnd.framework.OAApplicationModule; import oracle.apps.fnd.framework.OAException; import java.io.Serializable; import oracle.apps.fnd.framework.webui.OAControllerImpl; import oracle.apps.fnd.framework.webui.OAPageContext; import oracle.apps.fnd.framework.webui.beans.OAWebBean; import oracle.cabo.ui.data.DataObject; import oracle.jbo.domain.BlobDomain; public void processFormRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processFormRequest(pageContext, webBean);  if (pageContext.getParameter("Go") != null)  {   DataObject fileUploadData = (DataObject)pageContext.getNamedDataObject("MessageFileUpload");   String fileName = null;                 try   {    fileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");   }   catch(NullPointerException ex)   {    throw new OAException("Please Select a File to Upload", OAException.ERROR);   }   BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, fileName);   try   {    OAApplicationModule oaapplicationmodule = pageContext.getRootApplicationModule();    Serializable aserializable2[] = {uploadedByteStream};    Class aclass2[] = {BlobDomain.class };    oaapplicationmodule.invokeMethod("ReadExcel", aserializable2,aclass2);   }   catch (Exception ex)   {    throw new OAException(ex.toString(), OAException.ERROR);   }  } }     Write Following Code in ImportxlsAMImpl.java import java.io.IOException; import java.io.InputStream; import jxl.Cell; import jxl.CellType; import jxl.Sheet; import jxl.Workbook; import jxl.read.biff.BiffException; import oracle.apps.fnd.framework.server.OAApplicationModuleImpl; import oracle.jbo.Row; import oracle.apps.fnd.framework.OAViewObject; import oracle.apps.fnd.framework.server.OAViewObjectImpl; import oracle.jbo.domain.BlobDomain; public void createRecord(String[] excel_data) {   OAViewObject vo = (OAViewObject)getImportxlsVO1();            if (!vo.isPreparedForExecution())    {   vo.executeQuery();      }                      Row row = vo.createRow();  try  {   for (int i=0; i < excel_data.length; i++)   {    row.setAttribute("Column" +(i+1) ,excel_data[i]);   }  }  catch(Exception e)  {   System.out.println(e.getMessage());   }  vo.insertRow(row);  getTransaction().commit(); }      public void ReadExcel(BlobDomain fileData) throws 
IOException {  String[] excel_data  = new String[5];  InputStream inputWorkbook = fileData.getInputStream();  Workbook w;          try  {   w = Workbook.getWorkbook(inputWorkbook);                       // Get the first sheet   Sheet sheet = w.getSheet(0);                       for (int i = 0; i < sheet.getRows(); i++)   {    for (int j = 0; j < sheet.getColumns(); j++)    {     Cell cell = sheet.getCell(j, i);     CellType type = cell.getType();     if (cell.getType() == CellType.LABEL)     {      System.out.println("I got a label " + cell.getContents());      excel_data[j] = cell.getContents();     }     if (cell.getType() == CellType.NUMBER)     {        System.out.println("I got a number " + cell.getContents());      excel_data[j] = cell.getContents();     }    }    createRecord(excel_data);   }  }              catch (BiffException e)  {   e.printStackTrace();  } }   15. Congratulations, you have successfully finished. Run your page and test your work.   Consider an Excel file PRAJ_TEST.xls with the following data --       Let's try to import this data into the DB table --
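    If the page runs but no rows appear in XX_IMPORT_EXCEL_DATA_DEMO, it can help to first sanity-check the jxl parsing outside OAF. The class below is only a small standalone sketch, not part of the tutorial above: the class name JxlSmokeTest and the local PRAJ_TEST.xls path are assumptions for illustration. It uses the same JExcelApi calls as ReadExcel and assumes jxl-2.6.3.jar is on the classpath.

    import java.io.File;
    import jxl.Cell;
    import jxl.Sheet;
    import jxl.Workbook;

    // Minimal standalone check of the jxl parsing used by ReadExcel above.
    // Assumes jxl-2.6.3.jar is on the classpath and PRAJ_TEST.xls is in the working directory.
    public class JxlSmokeTest
    {
        public static void main(String[] args) throws Exception
        {
            Workbook w = Workbook.getWorkbook(new File("PRAJ_TEST.xls"));
            try
            {
                Sheet sheet = w.getSheet(0);              // first worksheet, as in ReadExcel
                for (int i = 0; i < sheet.getRows(); i++)
                {
                    StringBuilder row = new StringBuilder();
                    for (int j = 0; j < sheet.getColumns(); j++)
                    {
                        Cell cell = sheet.getCell(j, i);  // jxl addresses cells as (column, row)
                        row.append(cell.getContents()).append(" | ");
                    }
                    System.out.println(row.toString());
                }
            }
            finally
            {
                w.close();                                // release the workbook's resources
            }
        }
    }

    If the five columns print correctly here, any remaining problem is in the OAF wiring (classpath, AM method invocation) rather than in the spreadsheet parsing itself.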

    Read the article

  • WCF: Server Not Found - from trace Empty Message when run async but works fine from console app?

    - by MrTortoise
    Todays cause of hair loss has been the following scenario: I have a service that takes 2 strings and returns another. This service uses basicHttpBinding <basicHttpBinding> <binding name="basicHttpNoSec"> <security mode="None" /> </binding> </basicHttpBinding> Anyway, it works fine from a console test app. I have a silverlight app sat on top which implements another basicHttpBinding service that simply reuses the contract in the other service and the silverlight App uses this service. I have a console app that confirms that this service is working and set up with basichttpbinding. I have all the clientAccessPolicy stuff in place. when I run the silverlight app the difference is that it runs everything async ... as such the only message i directly get back rom wcf is server not found. When i enable tracing I dig down to this message - as I know the methods work and the parameteres i pass in will return a valid string i am really puzzled at to what the cause is. any help much appreciated. <E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent"> <System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system"> <EventID>131075</EventID> <Type>3</Type> <SubType Name="Error">0</SubType> <Level>2</Level> <TimeCreated SystemTime="2010-06-07T14:17:40.6639249Z" /> <Source Name="System.ServiceModel" /> <Correlation ActivityID="{8ea9530e-12f4-4a82-9c26-dd2e23264c3c}" /> <Execution ProcessName="aspnet_wp" ProcessID="4616" ThreadID="6" /> <Channel /> <Computer>5JC2Y2J</Computer> </System> <ApplicationData> <TraceData> <DataItem> <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Error"> <TraceIdentifier>http://msdn.microsoft.com/en-GB/library/System.ServiceModel.Diagnostics.ThrowingException.aspx</TraceIdentifier> <Description>Throwing an exception.</Description> <AppDomain>/LM/w3svc/1/ROOT/CopSilverlight.Web-1-129203938565564172</AppDomain> <Exception> <ExceptionType>System.ServiceModel.ProtocolException, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>There is a problem with the XML that was received from the network. See inner exception for more details.</Message> <StackTrace> at System.ServiceModel.Channels.HttpRequestContext.CreateMessage() at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, Action callback) at System.ServiceModel.Activation.HostedHttpTransportManager.HttpContextReceived(HostedHttpRequestAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) </StackTrace> <ExceptionString>System.ServiceModel.ProtocolException: There is a problem with the XML that was received from the network. See inner exception for more details. ---&gt; System.Xml.XmlException: The body of the message cannot be read because it is empty. 
--- End of inner exception stack trace ---</ExceptionString> <InnerException> <ExceptionType>System.Xml.XmlException, System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>The body of the message cannot be read because it is empty.</Message> <StackTrace> at System.ServiceModel.Channels.HttpRequestContext.CreateMessage() at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, Action callback) at System.ServiceModel.Activation.HostedHttpTransportManager.HttpContextReceived(HostedHttpRequestAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) </StackTrace> <ExceptionString>System.Xml.XmlException: The body of the message cannot be read because it is empty.</ExceptionString> </InnerException> </Exception> </TraceRecord> </DataItem> </TraceData> </ApplicationData> </E2ETraceEvent>

    Read the article

  • WPF: how to find an element in a datatemplate from an itemscontrol

    - by EV
    Hi, I have the following problem: the application is using the custom itemscontrol called ToolBox. The elements of a toolbox are called toolboxitems and are custom contentcontrol. Now, the toolbox stores a number of images that are retrieved from a database and displayed. For that I use a datatemplate inside the toolbox control. However, when I try to drag and drop the elements, I don't get the image object but the database component. I thought that I should then traverse the structure to find the Image element. here's the code: Toolbox: public class Toolbox : ItemsControl { private Size defaultItemSize = new Size(65, 65); public Size DefaultItemSize { get { return this.defaultItemSize; } set { this.defaultItemSize = value; } } protected override DependencyObject GetContainerForItemOverride() { return new ToolboxItem(); } protected override bool IsItemItsOwnContainerOverride(object item) { return (item is ToolboxItem); } } ToolBoxItem: public class ToolboxItem : ContentControl { private Point? dragStartPoint = null; static ToolboxItem() { FrameworkElement.DefaultStyleKeyProperty.OverrideMetadata(typeof(ToolboxItem), new FrameworkPropertyMetadata(typeof(ToolboxItem))); } protected override void OnPreviewMouseDown(MouseButtonEventArgs e) { base.OnPreviewMouseDown(e); this.dragStartPoint = new Point?(e.GetPosition(this)); } public String url { get; private set; } protected override void OnMouseMove(MouseEventArgs e) { base.OnMouseMove(e); if (e.LeftButton != MouseButtonState.Pressed) { this.dragStartPoint = null; } if (this.dragStartPoint.HasValue) { Point position = e.GetPosition(this); if ((SystemParameters.MinimumHorizontalDragDistance <= Math.Abs((double)(position.X - this.dragStartPoint.Value.X))) || (SystemParameters.MinimumVerticalDragDistance <= Math.Abs((double)(position.Y - this.dragStartPoint.Value.Y)))) { string xamlString = XamlWriter.Save(this.Content); MessageBoxResult result = MessageBox.Show(xamlString); DataObject dataObject = new DataObject("DESIGNER_ITEM", xamlString); if (dataObject != null) { DragDrop.DoDragDrop(this, dataObject, DragDropEffects.Copy); } } e.Handled = true; } } private childItem FindVisualChild<childItem>(DependencyObject obj) where childItem : DependencyObject { for (int i = 0; i < VisualTreeHelper.GetChildrenCount(obj); i++) { DependencyObject child = VisualTreeHelper.GetChild(obj, i); if (child != null && child is childItem) return (childItem)child; else { childItem childOfChild = FindVisualChild<childItem>(child); if (childOfChild != null) return childOfChild; } } return null; } } here is the xaml file for ToolBox and toolbox item: <Style TargetType="{x:Type s:ToolboxItem}"> <Setter Property="Control.Padding" Value="5" /> <Setter Property="ContentControl.HorizontalContentAlignment" Value="Stretch" /> <Setter Property="ContentControl.VerticalContentAlignment" Value="Stretch" /> <Setter Property="ToolTip" Value="{Binding ToolTip}" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type s:ToolboxItem}"> <Grid> <Rectangle Name="Border" StrokeThickness="1" StrokeDashArray="2" Fill="Transparent" SnapsToDevicePixels="true" /> <ContentPresenter Content="{TemplateBinding ContentControl.Content}" Margin="{TemplateBinding Padding}" SnapsToDevicePixels="{TemplateBinding UIElement.SnapsToDevicePixels}" ContentTemplate="{TemplateBinding ContentTemplate}"/> </Grid> <ControlTemplate.Triggers> <Trigger Property="IsMouseOver" Value="true"> <Setter TargetName="Border" Property="Stroke" Value="Gray" /> </Trigger> 
</ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> <Style TargetType="{x:Type s:Toolbox}"> <Setter Property="SnapsToDevicePixels" Value="true" /> <Setter Property="Focusable" Value="False" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate> <Border BorderThickness="{TemplateBinding Border.BorderThickness}" Padding="{TemplateBinding Control.Padding}" BorderBrush="{TemplateBinding Border.BorderBrush}" Background="{TemplateBinding Panel.Background}" SnapsToDevicePixels="True"> <ScrollViewer VerticalScrollBarVisibility="Auto"> <ItemsPresenter SnapsToDevicePixels="{TemplateBinding UIElement.SnapsToDevicePixels}" /> </ScrollViewer> </Border> </ControlTemplate> </Setter.Value> </Setter> <Setter Property="ItemsPanel"> <Setter.Value> <ItemsPanelTemplate> <WrapPanel Margin="0,5,0,5" ItemHeight="{Binding Path=DefaultItemSize.Height, RelativeSource={RelativeSource AncestorType=s:Toolbox}}" ItemWidth="{Binding Path=DefaultItemSize.Width, RelativeSource={RelativeSource AncestorType=s:Toolbox}}" /> </ItemsPanelTemplate> </Setter.Value> </Setter> </Style> </ResourceDictionary> Example usage: <Toolbox x:Name="NewLibrary" DefaultItemSize="55,55" ItemsSource="{Binding}" > <ItemsControl.ItemTemplate> <DataTemplate> <StackPanel> <Image Source="{Binding Path=url}" /> </StackPanel> </DataTemplate> </ItemsControl.ItemTemplate> </Toolbox> The object that I get is a database object. When using a static resource I get the Image object. How to retrieve this Image object from a datatemplate? I though that I could use this tutorial: http://msdn.microsoft.com/en-us/library/bb613579.aspx But it does not seem to solve the problem. Could anyone suggest a solution? Thanks!

    Read the article

  • On automating a split-mirror ASM backup with EMC TimeFinder ...

    - by [email protected]
    Hi clerks,   Offloading the backup operation to another host using disk cloning could really improve the performance on highly busy databases ( 24x7, zero downtime and all this stuff ...) There are well-known white papers on this subject, ASM included, but today I'm showing you a nice way to automate the procedure using shell scripting with EMC TimeFinder technologies:   Assumptions: *********** ASM diskgroups name:   +data_${db_name} : asm data diskgroup +fra_${db_name} :  asm fra  diskgroup   EMC Time Finder sync groups name:   rac_${DB_NAME}_data_tf : data group rac_${DB_NAME}_fra_tf:   fra group     There are two scripts, one located on the production box ( bck_database.sh ) and the other one on the backup server node ( bck_database_mirror.sh ) The second one is remotely executed from the production host There are a bunch of variables along the code with self-explanatory names I guess, anyway let me know if you want some help     #!/bin/ksh ### ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved. ### ###    NAME ###     bck_database.sh ### ###    DESCRIPTION ###     Database backup on third mirror ### ###    RETURNS ### ###    NOTES ### ###    MODIFIED                                 (DD/MM/YY) ###    Oracle            28/01/10             - Creation ###   V_DATE=`/bin/date +%Y%m%d_%H%M%S` V_FICH_LOG=`dirname $0`/trace_dir_location/`basename $0`.${V_DATE}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1     ADMIN_DIR=`dirname $0` . ${ADMIN_DIR}/setenv_instance.sh -- This script should set the instance vars like Oracle Home, Sid, db_name ... if [ $? -ne 0 ] then   echo "Error when setting the environment."   exit 1 fi   echo "${V_DATE} ####################################################" echo "Executing database backup: ${DB_NAME}" echo "####################################################################"   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm data diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf establish -noprompt if [ $? -ne 0 ] then   echo "Error when sync asm data diskgroups"   exit 2 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm data disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm data diskgroups"   exit 3 fi     V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm fra diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf establish -noprompt if [ $?
-ne 0 ] then   echo "Error when sync asm fra diskgroups"   exit 4 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm fra disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm fra diskgroups"   exit 5 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "ASM sync sucessfully completed!" echo "####################################################################"     V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to BEGIN BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter system archive log current;   alter database ${DB_NAME} begin backup; ! if [ $? -ne 0 ] then   echo "Error when updating database status to BEGIN backup"   exit 6 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm data disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm data disks"   exit 7 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to END BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter database ${DB_NAME} end backup;   alter system archive log current; ! if [ $? -ne 0 ] then   echo "Error when updating database status to END backup"   exit 8 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Generating controlfile copies...." echo "####################################################################" rman<<-! connect target / run { allocate channel ch1 type DISK; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_mount.ctl'; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl'; } ! if [ $? -ne 0 ] then   echo "Error generating controlfile copies"   exit 9 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Resync RMAN catalog ....." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} resync catalog; ! if [ $? -ne 0 ] then   echo "Error when resyncing RMAN catalog"   exit 10 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm fra disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm fra disks"   exit 11 fi     echo "WARNING!: Calling bck_database_mirror.sh host ${NODE_BCK_SERVER}..." ssh ${NODO_BCK_SERVER} ${ADMIN_DIR_BCK}/bck_database_mirror.sh if [ $? 
-ne 0 ] then   echo "Error, when remote executing the backup "   exit 12 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Cleaning the archived redo logs already copied to tape ..." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} run { resync catalog; delete noprompt archivelog all backed up 1 times to device type sbt; } ! if [ $? -ne 0 ] then   echo "Error when cleaning the archived redo logs"   exit 13 fi echo "${V_DATE} ####################################################" echo "Backup sucessfully executed!!" echo "####################################################################" exit 0   ------------------------------------------------------------------------------ ------------------------** BACKUP SERVER NODE ** ----------------------------- ------------------------------------------------------------------------------   #!/bin/ksh ### ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved. ### ###    ###    NAME ###     bck_database_mirror.sh ### ###    DESCRIPTION ###      Backup @ backup server ### ###    RETURNS ### ###    NOTES ### ###    MODIFIED                                 (DD/MM/YY) ###      Oracle                    28/01/10     - Creacion         V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ASM instance ..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_asm.sh -- This script is supposed to start the ASM instance in the backup server   if [ $? -ne 0 ]   then     echo "Error when tying to start ASM instance."     exit 1   fi       . ${V_ADMIN_DIR}/setenv_asm.sh -- This script is supposed to set the env. variables of the ASM instance   if [ $? -ne 0 ]   then     echo "Error when setting the ASM environment"     exit 1   fi       V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "The asm diskgroups/disks dettected are the following ..."   echo "####################################################################"     sqlplus /nolog <<-!     whenever sqlerror exit 1     connect / as sysdba     whenever sqlerror exit     SET LINES 200     COL PATH FORMAT A25     SELECT DISK.MOUNT_STATUS, DISK.PATH, DISK.NAME, DISK_GROUP.NAME, DISK_GROUP.TOTAL_MB FROM V\$ASM_DISK DISK, V\$ASM_DISKGROUP DISK_GROUP WHERE DISK.GROUP_NUMBER=DISK_GROUP.GROUP_NUMBER; !       V_ADMIN_DIR=`dirname $0`   . ${V_ADMIN_DIR}/setenv_instance.sh -- This script is supposed to set the env. variables of the database instance   if [ $? -ne 0 ]   then     echo "Error when setting the database instance environment"     exit 1   fi     V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ${DB_NAME} in MOUNT mode..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_instance_mount.sh -- This script is supposed to do a startup mount   if [ $? -ne 0 ]   then     echo "Error starting  ${DB_NAME} in MOUNT mode"     exit 1   fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Executing RMAN backup..."   echo "####################################################################"   rman<<-!   
connect target /   connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}   run {   allocate channel ch1 type 'SBT_TAPE' parms'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)'; -- TDPO Media Library   crosscheck archivelog all;   backup tag BCK_CONTROLFILE_ST_${DB_NAME}   format 'ctl_%d_%s__%p_%t'   controlfilecopy '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';   backup tag BCK_DATAFILE_ST_${DB_NAME} full   format 'db_%d_%s_%p_%t'database;   backup tag BCK_ARCHLOG_ST_${DB_NAME} format 'al_%d_%s_%p_%t' archivelog all;   release channel ch1;   } !   if [ $? -ne 0 ]   then     echo "Error executing the RMAN backup"     exit 1   fi     ${V_ADMIN_DIR}/stop_instance_immediate.sh -- This script is supposed to do a shutdown immediate of the database instance   ${ADMIN_DIR}/stop_asm_immediate.sh -- This script is supposed to do a shutdown immediate of the ASM instance   exit 0     fi   Hope it helps someone! --L

    Read the article

  • Install Control Center Agent on Oracle Application Server

    - by qianqian.wu
    Control Center Agent (CCA) The Control Center Agent is the OWB component that runs the Template Mappings in the Oracle Containers for J2EE (OC4J) server; also referred to as the J2EE Runtime. The Control Center Agent provides a Java-based runtime environment that can be installed on Oracle and non-Oracle database hosts. The Control Center Agent provides fundamental infrastructure for the heterogeneous, Code Template-based mapping support and Web services-related features of OWB in this release. In Oracle Warehouse Builder 11gR2 the Control Center Agent, by default will run in the built-in OC4J that is bundled in the Oracle Home. Besides that, you also have ability to install the Control Center Agent in an Oracle Application Server install. In this article, you will find step-by-step instructions how to install the Control Center Agent on an Oracle Application Server instance. The instructions cover the following tasks: Task 1: Install and Configure the Application Server Task 2: Deploy the Control Center Agent to the Application Server Task 3: Optional Configuration Tasks   Task 1: Install and Configure the Application Server Before configuring the Application Server, you need to install it from Oracle Application Server CD-ROM, or by downloading the installation program from Oracle Technology Network (OTN). Once the installation is completed, you are ready to configure the Application Server. The purpose of the configuration task is to make sure the Control Center Agent ear file can be deployed and runs in the Application Server successfully. The essential configuration tasks are outlined below: · Modify the OC4J Startup Script · Set up Control Center Agent Server Side Logging · Set up Audit Table Data Source · Copy ct_permissions.properties File · Set up Security Roles for Control Center Agent · Create JMS Queues · Install JDBC Drivers to OC4J Modify the OC4J Startup Script The OC4J startup script “opmn.xml” is located in Application Server configuration directory, $AS_HOME/opmn/conf. $AS_HOME stands for the root home directory of the application server. Open the file opmn.xml in a text editor, and alter the contents of the file as displayed in the following sample. You need to make sure that: The MaxPerSize is set to 128M. This is to ensure that you allocate enough PermGen space to OC4J to run Control Center Agent. This will prevent java.lang.OutOfMemoryError when running the agent. The Python.path sets the path for the Python library files used by the Control Center Agent: jython_lib.zip and jython_owblib.jar. These two files are in the $OWB_HOME/owb/lib/int directory, where $OWB_HOME is the directory where owb is installed. · The km_security_needed determines whether restrictions will be applied to the kinds of operating system commands allowed to be executed by the OWB Code Template script executed by Control Center Agent. Setting km_security_needed to “true” enforces such restriction while setting it to “false” removes such restrictions. Set up Control Center Agent Server Side Logging Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file j2ee-logging.xml in a text editor and add the following lines to the log handler section. The jrt-internal-log-handler is the handler used by Control Center Agent runtime logger to create log files. Then add the following entry into the loggers section to create the logger for Control Center Agent runtime auditing. 
Set up Audit Table Data Source To enable Audit Table logging, a managed data source and connection pool need to be set up before Control Center Agent deployment. Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file data-sources.xml in a text editor. Define the audit data source shown below in the file, <managed-data-source name="AuditDS" connection-pool-name="OWBSYS Audit   Connection Pool" jndi-name="jdbc/AuditDS"/> <connection-pool name="OWBSYS Audit Connection Pool">   <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"     user="owbsys_audit" password="owbsys_audit"     url="jdbc:oracle:thin:@//localhost:1521/ORCL"/> </connection-pool> Copy ct_permissions.properties File The ct_permissions.properties can be obtained from $OWB_HOME /owb/jrt/config/ directory. You need to copy the file to $AS_HOME/j2ee/home/config directory.This properties file takes effect when the setting km-security is set to true in Control Center Agent. By default the ALLOWED_CMD is commented out in ct_permissions.properties file. This prevents all system command from being invoked from scripts executed in Control Center Agent (when km-security is set to true). To allow certain system commands to be invoked, ALLOWED_CMD needs to be uncommented out, and the system commands (allowed to be invoked) need to be added to the ALLOWED_CMD. Set up Security Roles for Control Center Agent You can set up the Control Center Agent security roles through Oracle Enterprise Manager. In a web browser, navigate to Enterprise Manager Homepage (e.g. http://hostname:8889/em). 1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Security. Click the task icon next to Security Providers. 2. On Security Providers page click on the button “Instance Level Security”. On Instance Level Security page, go to “Realms” tab. You will see a row for the default realm “jazn.com” in the results table. It has a “Roles” column and a “Users” column. Click on the number in “Roles” column. In the “Roles” page it will display all the roles available for the realm. Click on “Create” button to create a new role “OWB_J2EE_ EXECUTOR”. 3. On the Add Role screen, enter Name OWB_J2EE_EXECUTOR, and click OK. 4. Follow the same steps as before, and create a new role “OWB_J2EE_OPERATOR”. 5. Assign role “oc4j-administrators” and “OWB_J2EE_EXECUTOR” to the role “OWB_J2EE_OPERATOR” by moving these roles from “Available Roles” and click “OK” to save. 6. Go back to Instance Level Security page and create a new role “OWB_J2EE_ADMINISTRATOR”. 7. Assign roles “OWB_J2EE_ OPERATOR” and “OWB_J2EE_EXECUTOR” to the role “OWB_J2EE_ ADMINISTRATOR” by moving these roles from “Available Roles” and click “OK” to save. 8.Go back to Instance Level Security page. This time, click on the number in “Users” column for the realm “jazn.com”. In the “Users” page, it shows all the users defined for this realm. Locate the user “oc4jadmin” in the results table and click on it. 9. Assign the roles “OWB_J2EE_ADMINISTRATOR” and “oc4j-app-administrators” to this user by moving the role from the “Available Roles” selection box to “Selected Roles” box and click “Apply” to save. 10. Go back to Instance Level Security page and create a new role “OWB_INTERNAL_USERS”, assign no user or role to this role. 
Simply click “OK” to create this role. Now you have finished creating the security roles required for Control Center Agent. Create JMS Queues You need to create two JMS queues for Control Center Agent: owbQueue and abort_owbQueue. 1. Now go to OC4J home Page. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Services and then expand Enterprise Messaging Service. Click the task icon next to JMS Destinations. 2. On JMS Destinations page, click “Create New” button to create a new JMS queue. On Add Destination page, choose “Queue” as Destination Type. Put “owbQueue” as Destination Name. Select “In Memory Persistence Only” as the Persistence Type and put “jms/owbQueue” as JNDI Location and click on “OK” to finish. 3. Follow the same instruction as above to create the owb_abortQueue. Now you have finished creating the JMS queues required for Control Center Agent. Install JDBC Drivers to OC4J In order to execute Code Templates using commercial databases other than Oracle, e.g. DB2, SQL Server etc, the corresponding jdbc driver files need to be added to $AS_HOME/j2ee/home/applib directory. 1. To install other JDBC drivers to OC4J, first obtain the .jar file containing the JDBC driver. All the external JDBC drivers .jar files can be found in the directory: $OWB_HOME/owb/lib/ext/. For DB2, the files needed are db2jcc.jar and db2jcc_license_cu.jar. For SQL Server the file is sqljdbc.jar. For sunopsis JDBC drivers, the file needed is snpsxmlo.jar. 2. Copy the required JDBC driver file into the directory $AS_HOME/j2ee/home/applib. Now you have finished the Application Server configuration. To make the configuration to take an effect, you need to restart the Application Server.   Task 2: Deploy the Control Center Agent to the Application Server Now you can deploy the Control Center Agent to the Application Server. In a web browser, navigate to Enterprise Manager Homepage (e.g. http://hostname:8889/em). 1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Applications tab. Click Deploy to begin deploying Control Center Agent. 2. On the Deploy: Select Archive screen, under Archive, select Archive is present on local host. Upload the archive to the server where Application Server Control is running. Click Browse and locate the jrt.ear file in the $OWB_HOME/owb/jrt/applications directory. Under Deployment Plan, select Automatically create a new deployment plan. Click Next. 3. Wait for the ear file to be uploaded to Application Server. On the Deploy: Application Attributes screen, enter Application Name jrt, and Context Root jrt. Leave the other attributes at their default values. Click Next. 4. On Deploy: Deployment Settings screen, leave all attributes at their default values, and click Deploy. This will take about 1 minute or so and when the application is deployed successfully, a confirmation message will be displayed. Now the Control Center Agent is started automatically. Go back to OC4J home page and click on Applications tab to make sure the deployed application jrt is showing in the applications list.   Task 3: Optional Configuration Tasks The optional configuration tasks contain: · Secure Control Center Agent Web Service · Setting the PATH Environment Variable Secure Control Center Agent Web Service If you want to use JRTWebService with a secure website, you need to do the following steps, 1. 
Create a file “secure-web-site.xml” in the $AS_HOME/j2ee/home/config directory. The file can be obtained from $OWB_HOME/owb/jrt/config directory. A sample secure-web-site.xml is shown as below. We need to modify the “protocol” to “https”, and “secure” to “true”, also choose an port as the secure http port. Also we need to add the entry “ssl-config” in the file. Remember to use the absolute path for the key store file. 2. Modify the file “server.xml” that is located at $AS_HOME/j2ee/home/config directory. Then add the <web-site> element in the file for the secure-web-site. 3. Create a key store file “serverkeystore.jks” in the $AS_HOME/j2ee/home/config directory. The file can be obtained from $OWB_HOME/owb/jrt/config directory. After the three files are altered, restart the application server. Now you can access the JRTWebService in SSL way through https://hostname:4443/jrt/webservice. Setting the PATH Environment Variable Sometimes, some system commands such as linux ls, sh etc, can not be executed successfully during the script execution due to they are not found in PATH. To ensure they work normally, you can setup the environment variable PATH. Let’s navigate to the Enterprise Manager Homepage. 1. Go to OC4J home screen and click the Administration tab. Expand Administration Tasks, then expand Properties. Click the task icon next to Server Properties. 2. On the Server Properties screen, scroll down to Environment Variables section. Under Environment Variables, click Add Another Row. Enter PATH in Name, and fill Value with directories that contain the system commands. Click Apply.   After you work through this article, I believe you have developed a deeper understanding of the Control Center Agent installation process, and you can apply this knowledge in other installation plan such as Control Center Agent installation on Standalone OC4J.

    Read the article

  • Difficulty accessing Google Search API with Flex

    - by CM
    Hi - I am trying to get the number of incoming links to a page through the Google Search API. It is not working (just returning Null) Here is the code <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" creationComplete="init();" width="320" height="480" backgroundGradientColors="115115" backgroundGradientAlphas=".2" backgroundAlpha=".2" dropShadowEnabled="false"> <mx:Script> <![CDATA[ // // Author: Wayne IV Mike // Project: JSwoof - The Flex JSON library. // Description: Formated JSON loaded from txt file. // Date: 31st March 2009. // Contact: [email protected] , [email protected] // import json.*; import mx.controls.Alert; public function loadFile4(urlLink:String):void { var request:URLRequest = new URLRequest(urlLink); var urlLoad:URLLoader = new URLLoader(); urlLoad.addEventListener(Event.COMPLETE, fileLoaded4); urlLoad.load(request); } private function fileLoaded4(event:Event):void { var jObj:Object = JParser.decode(event.target.data); //Decode JSON from text file here. var jStr:String = JParser.encode(jObj); if(jStr != null && jStr != "") { var LinkTemp:String = jObj.estimatedResultCount; txtLinks.text = "Google Links " + LinkTemp; trace(event.target.data); } } /********************************************************************/ private function LinkLookup():void { loadFile4("http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=link:twitter.com/" + NameSearch.text); } ]]> </mx:Script> <mx:TextInput x="17" y="86" id="NameSearch" text="cnnbrk" width="229" height="30" fontSize="16" fontWeight="bold" cornerRadius="10" shadowDirection="center" shadowDistance="5"/> <mx:Button x="253" y="85" label="Find" id="GoSearch" click="LinkLookup()" height="31"/> <mx:Label text="Links" id="txtLinks" width="233" textAlign="left" color="#FFFFFF" fontSize="14" height="21"/> </mx:Application> Sorry for the ugly format. I added a trace(event.target.data); and updated the code above. This is the result - [SWF] C:/Documents and Settings/Robert/My Documents/Flex Builder 3/.metadata/.plugins/com.adobe.flash.profiler/ProfilerAgent.swf - 17,508 bytes after decompression [SWF] C:\Documents and Settings\Robert\My Documents\Flex Builder 3\Formated\bin-debug\Formated.swf - 781,950 bytes after decompression [Unload SWF] C:/Documents and Settings/Robert/My Documents/Flex Builder 3/.metadata/.plugins/com.adobe.flash.profiler/ProfilerAgent.swf {"responseData": {"results":[{"GsearchResultClass":"GwebSearch","unescapedUrl":"http://twitter.com/britishredneck","url":"http://twitter.com/britishredneck","visibleUrl":"twitter.com","cacheUrl":"http://www.google.com/search?q\u003dcache:4pQXnMQCZA4J:twitter.com","title":"Martyn Jones (BritishRedneck) on Twitter","titleNoFormatting":"Martyn Jones (BritishRedneck) on Twitter","content":"Finally found a free and simple way to expand my reach on Twitter. A nice 20 second process. 
http://tpq.me/5gbrg #twpq 3:13 PM Jul 18th, 2009 from API \u003cb\u003e...\u003c/b\u003e"},{"GsearchResultClass":"GwebSearch","unescapedUrl":"http://twitter.com/dshlian/favorites","url":"http://twitter.com/dshlian/favorites","visibleUrl":"twitter.com","cacheUrl":"http://www.google.com/search?q\u003dcache:79qm5Pz7O5QJ:twitter.com","title":"Twitter","titleNoFormatting":"Twitter","content":"Twitter is without a doubt the best way to share and discover what is happening right now."},{"GsearchResultClass":"GwebSearch","unescapedUrl":"http://twitter.com/rosannepeterson","url":"http://twitter.com/rosannepeterson","visibleUrl":"twitter.com","cacheUrl":"http://www.google.com/search?q\u003dcache:q11IcnW9l30J:twitter.com","title":"Rosanne Peterson (rosannepeterson) on Twitter","titleNoFormatting":"Rosanne Peterson (rosannepeterson) on Twitter","content":"Tx.All is well. Looking forward to the holday. Perhaps after will be time for certification! 8:14 AM Dec 23rd, 2009 from txt; I am also reading \u0026quot;How I \u003cb\u003e...\u003c/b\u003e"},{"GsearchResultClass":"GwebSearch","unescapedUrl":"http://twitter.com/MRSalesTraining","url":"http://twitter.com/MRSalesTraining","visibleUrl":"twitter.com","cacheUrl":"http://www.google.com/search?q\u003dcache:uBNGhud0vfEJ:twitter.com","title":"Medrep (MRSalesTraining) on Twitter","titleNoFormatting":"Medrep (MRSalesTraining) on Twitter","content":"Working away on Cardiovascular Medicine Module - heavy stuff for a Sunday evening!! 11:09 AM Nov 8th, 2009 from web; Today\u0026#39;s Student is tomorrow\u0026#39;s Medical \u003cb\u003e...\u003c/b\u003e"}],"cursor":{"pages":[{"start":"0","label":1},{"start":"4","label":2},{"start":"8","label":3},{"start":"12","label":4},{"start":"16","label":5},{"start":"20","label":6},{"start":"24","label":7},{"start":"28","label":8}],"estimatedResultCount":"64","currentPageIndex":0,"moreResultsUrl":"http://www.google.com/search?oe\u003dutf8\u0026ie\u003dutf8\u0026source\u003duds\u0026start\u003d0\u0026hl\u003den\u0026q\u003dlink%3Atwitter.com%2Fgenericmedlist"}}, "responseDetails": null, "responseStatus": 200} So the data return from the query is correct, and the difficulty lies in accessing the "estimatedResultCount" near the end of the JSON data. Any help would be greatly appreciated!

    Read the article

  • ASP.NET MVC 3 Hosting :: How to Deploy Web Apps Using ASP.NET MVC 3, Razor and EF Code First - Part I

    - by mbridge
    First, you can download the source code from http://efmvc.codeplex.com. The following frameworks will be used for this step by step tutorial. public class Category {     public int CategoryId { get; set; }     [Required(ErrorMessage = "Name Required")]     [StringLength(25, ErrorMessage = "Must be less than 25 characters")]     public string Name { get; set;}     public string Description { get; set; }     public virtual ICollection<Expense> Expenses { get; set; } } Expense Class public class Expense {             public int ExpenseId { get; set; }            public string  Transaction { get; set; }     public DateTime Date { get; set; }     public double Amount { get; set; }     public int CategoryId { get; set; }     public virtual Category Category { get; set; } }    Define Domain Model Let’s create domain model for our simple web application Category Class We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. In this post, we will be focusing on CRUD operations for the entity Category and will be working on the Expense entity with a View Model object in the later post. And the source code for this application will be refactored over time. The above entities are very simple POCO (Plain Old CLR Object) classes and the entity Category is decorated with validation attributes in the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities for defining model objects for the Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling with any API or database library. This approach lets you focus on domain model which will enable Domain-Driven Development for applications. EF code first support is currently enabled with a separate API that is runs on top of the Entity Framework 4. EF Code First is reached CTP 5 when I am writing this article. Creating Context Class for Entity Framework We have created our domain model and let’s create a class in order to working with Entity Framework Code First. For this, you have to download EF Code First CTP 5 and add reference to the assembly EntitFramework.dll. You can also use NuGet to download add reference to EEF Code First. public class MyFinanceContext : DbContext {     public MyFinanceContext() : base("MyFinance") { }     public DbSet<Category> Categories { get; set; }     public DbSet<Expense> Expenses { get; set; }         }   The above class MyFinanceContext is derived from DbContext that can connect your model classes to a database. The MyFinanceContext class is mapping our Category and Expense class into database tables Categories and Expenses using DbSet<TEntity> where TEntity is any POCO class. When we are running the application at first time, it will automatically create the database. EF code-first look for a connection string in web.config or app.config that has the same name as the dbcontext class. If it is not find any connection string with the convention, it will automatically create database in local SQL Express database by default and the name of the database will be same name as the dbcontext class. You can also define the name of database in constructor of the the dbcontext class. Unlike NHibernate, we don’t have to use any XML based mapping files or Fluent interface for mapping between our model and database. 
The model classes of Code First are working on the basis of conventions and we can also use a fluent API to refine our model. The convention for primary key is ‘Id’ or ‘<class name>Id’.  If primary key properties are detected with type ‘int’, ‘long’ or ‘short’, they will automatically registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes in the System.ComponentModel.DataAnnotations namespace and it automatically enforces validation rules when a model object is updated or saved. Generic Repository for EF Code First We have created model classes and dbcontext class. Now we have to create generic repository pattern for data persistence with EF code first. If you don’t know about the repository pattern, checkout Martin Fowler’s article on Repository Let’s create a generic repository to working with DbContext and DbSet generics. public interface IRepository<T> where T : class     {         void Add(T entity);         void Delete(T entity);         T GetById(long Id);         IEnumerable<T> All();     } RepositoryBasse – Generic Repository class protected MyFinanceContext Database {     get { return database ?? (database = DatabaseFactory.Get()); } } public virtual void Add(T entity) {     dbset.Add(entity);            }        public virtual void Delete(T entity) {     dbset.Remove(entity); }   public virtual T GetById(long id) {     return dbset.Find(id); }   public virtual IEnumerable<T> All() {     return dbset.ToList(); } } DatabaseFactory class public class DatabaseFactory : Disposable, IDatabaseFactory {     private MyFinanceContext database;     public MyFinanceContext Get()     {         return database ?? (database = new MyFinanceContext());     }     protected override void DisposeCore()     {         if (database != null)             database.Dispose();     } } Unit of Work If you are new to Unit of Work pattern, checkout Fowler’s article on Unit of Work . According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let’s create a class for handling Unit of Work public interface IUnitOfWork {     void Commit(); } UniOfWork class public class UnitOfWork : IUnitOfWork {     private readonly IDatabaseFactory databaseFactory;     private MyFinanceContext dataContext;       public UnitOfWork(IDatabaseFactory databaseFactory)     {         this.databaseFactory = databaseFactory;     }       protected MyFinanceContext DataContext     {         get { return dataContext ?? (dataContext = databaseFactory.Get()); }     }       public void Commit()     {         DataContext.Commit();     } } The Commit method of the UnitOfWork will call the commit method of MyFinanceContext class and it will execute the SaveChanges method of DbContext class.   Repository class for Category In this post, we will be focusing on the persistence against Category entity and will working on other entities in later post. Let’s create a repository for handling CRUD operations for Category using derive from a generic Repository RepositoryBase<T>. 
public class CategoryRepository: RepositoryBase<Category>, ICategoryRepository     {     public CategoryRepository(IDatabaseFactory databaseFactory)         : base(databaseFactory)         {         }                } public interface ICategoryRepository : IRepository<Category> { } If we need additional methods than generic repository for the Category, we can define in the CategoryRepository. Dependency Injection using Unity 2.0 If you are new to Inversion of Control/ Dependency Injection or Unity, please have a look on my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity to store container in the current HttpContext. public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable {     public override object GetValue()     {         return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];     }     public override void RemoveValue()     {         HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);     }     public override void SetValue(object newValue)     {         HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;     }     public void Dispose()     {         RemoveValue();     } } Let’s create controller factory for Unity in the ASP.NET MVC 3 application.                 404, String.Format(                     "The controller for path '{0}' could not be found" +     "or it does not implement IController.",                 reqContext.HttpContext.Request.Path));       if (!typeof(IController).IsAssignableFrom(controllerType))         throw new ArgumentException(                 string.Format(                     "Type requested is not a controller: {0}",                     controllerType.Name),                     "controllerType");     try     {         controller= container.Resolve(controllerType) as IController;     }     catch (Exception ex)     {         throw new InvalidOperationException(String.Format(                                 "Error resolving controller {0}",                                 controllerType.Name), ex);     }     return controller; }   } Configure contract and concrete types in Unity Let’s configure our contract and concrete types in Unity for resolving our dependencies. private void ConfigureUnity() {     //Create UnityContainer               IUnityContainer container = new UnityContainer()                 .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>())     .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>())     .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>());                 //Set container for Controller Factory                ControllerBuilder.Current.SetControllerFactory(             new UnityControllerFactory(container)); } In the above ConfigureUnity method, we are registering our types onto Unity container with custom lifetime manager HttpContextLifetimeManager. Let’s call ConfigureUnity method in the Global.asax.cs for set controller factory for Unity and configuring the types with Unity. protected void Application_Start() {     AreaRegistration.RegisterAllAreas();     RegisterGlobalFilters(GlobalFilters.Filters);     RegisterRoutes(RouteTable.Routes);     ConfigureUnity(); } Developing web application using ASP.NET MVC 3 We have created our domain model for our web application and also have created repositories and configured dependencies with Unity container. 
Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let’s create controller class for Category Category Controller public class CategoryController : Controller {     private readonly ICategoryRepository categoryRepository;     private readonly IUnitOfWork unitOfWork;           public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork)     {         this.categoryRepository = categoryRepository;         this.unitOfWork = unitOfWork;     }       public ActionResult Index()     {         var categories = categoryRepository.All();         return View(categories);     }     [HttpGet]     public ActionResult Edit(int id)     {         var category = categoryRepository.GetById(id);         return View(category);     }       [HttpPost]     public ActionResult Edit(int id, FormCollection collection)     {         var category = categoryRepository.GetById(id);         if (TryUpdateModel(category))         {             unitOfWork.Commit();             return RedirectToAction("Index");         }         else return View(category);                 }       [HttpGet]     public ActionResult Create()     {         var category = new Category();         return View(category);     }           [HttpPost]     public ActionResult Create(Category category)     {         if (!ModelState.IsValid)         {             return View("Create", category);         }                     categoryRepository.Add(category);         unitOfWork.Commit();         return RedirectToAction("Index");     }       [HttpPost]     public ActionResult Delete(int  id)     {         var category = categoryRepository.GetById(id);         categoryRepository.Delete(category);         unitOfWork.Commit();         var categories = categoryRepository.All();         return PartialView("CategoryList", categories);       }        } Creating Views in Razor Now we are going to create views in Razor for our ASP.NET MVC 3 application.  Let’s create a partial view CategoryList.cshtml for listing category information and providing link for Edit and Delete operations. CategoryList.cshtml @using MyFinance.Helpers; @using MyFinance.Domain; @model IEnumerable<Category>      <table>         <tr>         <th>Actions</th>         <th>Name</th>          <th>Description</th>         </tr>     @foreach (var item in Model) {             <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.CategoryId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" })                           </td>             <td>                 @item.Name             </td>             <td>                 @item.Description             </td>         </tr>         }       </table>     <p>         @Html.ActionLink("Create New", "Create")     </p> The delete link is providing Ajax functionality using the Ajax.ActionLink. This will call an Ajax request for Delete action method in the CategoryCotroller class. In the Delete action method, it will return Partial View CategoryList after deleting the record. We are using CategoryList view for the Ajax functionality and also for Index view using for displaying list of category information. 
Let’s create Index view using partial view CategoryList  Index.chtml @model IEnumerable<MyFinance.Domain.Category> @{     ViewBag.Title = "Index"; }    <h2>Category List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>    <div id="divCategoryList">               @Html.Partial("CategoryList", Model) </div> We can call the partial views using Html.Partial helper method. Now we are going to create View pages for insert and update functionality for the Category. Both view pages are sharing common user interface for entering the category information. So I want to create an EditorTemplate for the Category information. We have to create the EditorTemplate with the same name of entity object so that we can refer it on view pages using @Html.EditorFor(model => model) . So let’s create template with name Category. Category.cshtml @model MyFinance.Domain.Category <div class="editor-label"> @Html.LabelFor(model => model.Name) </div> <div class="editor-field"> @Html.EditorFor(model => model.Name) @Html.ValidationMessageFor(model => model.Name) </div> <div class="editor-label"> @Html.LabelFor(model => model.Description) </div> <div class="editor-field"> @Html.EditorFor(model => model.Description) @Html.ValidationMessageFor(model => model.Description) </div> Let’s create view page for insert Category information @model MyFinance.Domain.Category   @{     ViewBag.Title = "Save"; }   <h2>Create</h2>   <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>   @using (Html.BeginForm()) {     @Html.ValidationSummary(true)     <fieldset>         <legend>Category</legend>                @Html.EditorFor(model => model)               <p>             <input type="submit" value="Create" />         </p>     </fieldset> }   <div>     @Html.ActionLink("Back to List", "Index") </div> ViewStart file In Razor views, we can add a file named _viewstart.cshtml in the views directory  and this will be shared among the all views with in the Views directory. The below code in the _viewstart.cshtml, sets the Layout page for every Views in the Views folder.     @{     Layout = "~/Views/Shared/_Layout.cshtml"; } Tomorrow, we will cotinue the second part of this article. :)

    Read the article

  • Android: Strange out of memory issue

    - by Chrispix
    I am not sure where to start to explain this one. I have a ListView with a couple of image buttons on each row. When you click the list row, it launches a new activity. If you review some of my other posts, I have had to build my own tabs because of an issue with the camera layout. The activity that gets launched for a result is a map. If I click on my button to launch the image preview (loading an image off the SD card), the application returns from that activity back to the ListView activity, to the result handler, to relaunch my new activity, which is nothing more than an image widget. So here is the issue: the image preview on the ListView is being done with the cursor and list adapter. This makes it pretty simple, but I am not sure how I can put a resized (i.e. smaller bit size, not pixel size) image as the src for the image button on the fly. So I just resized the image that came off the phone camera. The issue is that I get an out of memory error when it tries to go back and re-launch the second activity. My question: is there a way I can build the list adapter easily row by row, where I can resize on the fly (bit-wise)? This would be preferable, as I also need to make some changes to the properties of the widgets/elements in each row, because I am unable to select a row with the touch screen due to a focus issue (I can use the roller ball). I know I can do an out-of-band resize and save of my image, but that is not really what I want to do; some sample code for that would be nice, though, if that is your suggestion. As soon as I disabled the image on the ListView it worked fine again. FYI, this is how I was doing it:

String[] from = new String[] { DBHelper.KEY_BUSINESSNAME, DBHelper.KEY_ADDRESS, DBHelper.KEY_CITY, DBHelper.KEY_GPSLONG, DBHelper.KEY_GPSLAT, DBHelper.KEY_IMAGEFILENAME + "" };
to = new int[] { R.id.businessname, R.id.address, R.id.city, R.id.gpslong, R.id.gpslat, R.id.imagefilename };
notes = new SimpleCursorAdapter(this, R.layout.notes_row, c, from, to);
setListAdapter(notes);

where R.id.imagefilename is an ImageButton. Here is my LogCat:

01-25 05:05:49.877: ERROR/dalvikvm-heap(3896): 6291456-byte external allocation too large for this process. 
01-25 05:05:49.877: ERROR/(3896): VM won't let us allocate 6291456 bytes 01-25 05:05:49.877: ERROR/AndroidRuntime(3896): Uncaught handler: thread main exiting due to uncaught exception 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): java.lang.OutOfMemoryError: bitmap size exceeds VM budget 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.graphics.BitmapFactory.nativeDecodeStream(Native Method) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:304) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.graphics.BitmapFactory.decodeFile(BitmapFactory.java:149) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.graphics.BitmapFactory.decodeFile(BitmapFactory.java:174) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.graphics.drawable.Drawable.createFromPath(Drawable.java:729) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.ImageView.resolveUri(ImageView.java:484) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.ImageView.setImageURI(ImageView.java:281) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.SimpleCursorAdapter.setViewImage(SimpleCursorAdapter.java:183) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.SimpleCursorAdapter.bindView(SimpleCursorAdapter.java:129) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.CursorAdapter.getView(CursorAdapter.java:150) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.AbsListView.obtainView(AbsListView.java:1057) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.ListView.makeAndAddView(ListView.java:1616) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.ListView.fillSpecific(ListView.java:1177) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.ListView.layoutChildren(ListView.java:1454) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.AbsListView.onLayout(AbsListView.java:937) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.View.layout(View.java:5611) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.layoutHorizontal(LinearLayout.java:1108) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.onLayout(LinearLayout.java:922) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.View.layout(View.java:5611) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.FrameLayout.onLayout(FrameLayout.java:294) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.View.layout(View.java:5611) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.layoutVertical(LinearLayout.java:999) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.LinearLayout.onLayout(LinearLayout.java:920) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.View.layout(View.java:5611) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.widget.FrameLayout.onLayout(FrameLayout.java:294) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.View.layout(View.java:5611) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.view.ViewRoot.performTraversals(ViewRoot.java:771) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at 
android.view.ViewRoot.handleMessage(ViewRoot.java:1103) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.os.Handler.dispatchMessage(Handler.java:88) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.os.Looper.loop(Looper.java:123) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at android.app.ActivityThread.main(ActivityThread.java:3742) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at java.lang.reflect.Method.invokeNative(Native Method) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at java.lang.reflect.Method.invoke(Method.java:515) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:739) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:497) 01-25 05:05:49.917: ERROR/AndroidRuntime(3896): at dalvik.system.NativeStart.main(Native Method) 01-25 05:10:01.127: ERROR/AndroidRuntime(3943): ERROR: thread attach failed I also have a new error when displaying an image : 01-25 22:13:18.594: DEBUG/skia(4204): xxxxxxxxxxx jpeg error 20 Improper call to JPEG library in state %d 01-25 22:13:18.604: INFO/System.out(4204): resolveUri failed on bad bitmap uri: 01-25 22:13:18.694: ERROR/dalvikvm-heap(4204): 6291456-byte external allocation too large for this process. 01-25 22:13:18.694: ERROR/(4204): VM won't let us allocate 6291456 bytes 01-25 22:13:18.694: DEBUG/skia(4204): xxxxxxxxxxxxxxxxxxxx allocPixelRef failed


  • eBooks on iPad vs. Kindle: More Debate than Smackdown

    - by andrewbrust
    When the iPad was presented at its San Francisco launch event on January 28th, Steve Jobs spent a significant amount of time explaining how well the device would serve as an eBook reader. He showed the iBooks reader application and iBookstore and laid down the gauntlet before Amazon and its beloved Kindle device. Almost immediately afterwards, criticism came rushing forth that the iPad could never beat the Kindle for book reading. The curious part of that criticism is that virtually no one offering it had actually used the iPad yet. A few weeks later, on April 3rd, the iPad was released for sale in the United States. I bought one on that day and in the few additional weeks that have elapsed, I’ve given quite a workout to most of its capabilities, including its eBook features. I’ve also spent some time with the Kindle, albeit a first-generation model, to see how it actually compares to the iPad. I had some expectations going in, but I came away with conclusions about each device that were more scenario-based than absolute. I present my findings to you here.   Vital Statistics Let’s start with an inventory of each device’s underlying technology. The iPad has a color, backlit LCD screen and an on-screen keyboard. It has a battery which, on a full charge, lasts anywhere from 6-10 hours. The Kindle offers a monochrome, reflective E Ink display, a physical keyboard and a battery that on my first gen loaner unit can go up to a week between charges (Amazon claims the battery on the Kindle 2 can last up to 2 weeks on a single charge). The Kindle connects to Amazon’s Kindle Store using a 3G modem (the technology and network vary depending on the model) that incurs no airtime service charges whatsoever. The iPad units that are on-sale today work over WiFi only. 3G-equipped models will be on sale shortly and will command a $130 premium over their WiFi-only counterparts. 3G service on the iPad, in the U.S. from AT&T, will be fee-based, with a 250MB plan at $14.99 per month and an unlimited plan at $29.99. No contract is required for 3G service. All these tech specs aside, I think a more useful observation is that the iPad is a multi-purpose Internet-connected entertainment device, while the Kindle is a dedicated reading device. The question is whether those differences in design and intended use create a clear-cut winner for reading electronic publications. Let’s take a look at each device, in isolation, now.   Kindle To me, what’s most innovative about the Kindle is its E Ink display. E Ink really looks like ink on a sheet of paper. It requires no backlight, it’s fully visible in direct sunlight and it causes almost none of the eyestrain that LCD-based computer display technology (like that used on the iPad) does. It’s really versatile in an all-around way. Forgive me if this sounds precious, but reading on it is really a joy. In fact, it’s a genuinely relaxing experience. Through the Kindle Store, Amazon allows users to download books (including audio books), magazines, newspapers and blog feeds. Books and magazines can be purchased either on a single-issue basis or as an annual subscription. Books, of course, are purchased singly. Oddly, blogs are not free, but instead carry a monthly subscription fee, typically $1.99. To me this is ludicrous, but I suppose the free 3G service is partially to blame. Books and magazine issues download quickly. Magazine and blog subscriptions cause new issues or posts to be pushed to your device on an automated basis. 
Available blogs include 9,000-odd feeds that Amazon offers on the Kindle Store; unless I missed something, arbitrary RSS feeds are not supported (though there are third-party workarounds to this limitation). The shopping experience is well integrated, has a huge selection, and offers certain graphical perks. For example, magazine and newspaper logos are displayed in menus, and book cover thumbnails appear as well. A simple search mechanism is provided, and text entry through the physical keyboard is relatively painless. It’s very easy and straightforward to enter the store, find something you like and start reading it quickly. If you know what you’re looking for, it’s even faster. Given the Kindle’s high portability, very reliable battery, instant-on capability and highly integrated content acquisition, it makes reading on a whim, and in random spurts of downtime, very attractive. The Kindle’s home screen lists all of your publications, and easily lets you select one, then start reading it. Once opened, publications display in crisp, attractive text that is adjustable in size. “Turning” pages is achieved through buttons dedicated to the task. Notes can be recorded, bookmarks can be saved and pages can be saved as clippings. I am not an avid book reader, and yet I found the Kindle made it really fun, convenient and soothing to read. There’s something about the easy access to the material and the simplicity of the display that makes the Kindle seduce you into chilling out and reading page after page. On the other hand, the Kindle has an awkward navigation interface. While menus are displayed clearly on the screen, the method of selecting menu items is tricky: alongside the right-hand edge of the main display is a thin column that acts as a second display. It has a white background and a scrollable silver cursor that is moved up or down with the device’s scrollwheel. Picking a menu item on the main display involves scrolling the silver cursor to a position parallel to that menu item and pushing the scrollwheel in. This navigation technique creates a disconnect, literally. You don’t really click on a selection so much as you gesture toward it. I got used to this technique quickly, but I didn’t love it. It definitely created a kind of anxiety in me, making me feel the need to speed through menus and get to my destination document quickly. Once there, I could calm down and relax. Books are great on the Kindle. Magazines and newspapers are much less so. I found the rendering of photographs, and even illustrations, to be unacceptably crude. For this reason, I expect that reading textbooks on the Kindle may leave students wanting. I found that the original flow and layout of any publication was sacrificed on the Kindle. In effect, browsing a magazine or newspaper was almost impossible. Reading the text of individual articles was enjoyable, but having to read this way made the whole experience much more “a la carte” than cohesive and thematic between articles. I imagine that for academic journals this is ideal, but for consumer publications it imposes a stripped-down, low-fidelity experience that evokes a sense of deprivation. In general, the Kindle is great for reading text. For just about anything else, especially activity that involves exploratory browsing, meandering and short-attention-span reading, it presents a real barrier to entry and adoption. Avid book readers will enjoy the Kindle (if they’re not already). It’s a great device for losing oneself in a book over long sittings. 
Multitaskers who are more interested in periodicals, be they online or off, will like it much less, as they will find compromise, and even sacrifice, to be palpable.   iPad   The iPad is a very different device from the Kindle. While the Kindle is oriented to pages of text, the iPad orbits around applications and their interfaces. Be it the pinch-and-zoom experience in the browser, the rich media features that augment content on news and weather sites, or the ability to interact with social networking services like Twitter, the iPad is versatile. While it shares a slate-like form factor with the Kindle, it’s effectively an elegant personal computer. One of its many features is the iBooks application and its integration with the iBookstore. But it’s a multi-purpose device. That turns out to be good and bad, depending on what you’re reading. The iBookstore is great for browsing. Its colorful, animation-rich user interface makes it possible to shop for books, rather than merely search for and acquire them. Unfortunately, its selection is rather sparse at the moment. If you’re looking for a New York Times bestseller, or other popular titles, you should be OK. If you want to read something more specialized, it’s much harder. Unlike the awkward navigation interface of the Kindle, the iPad offers a nearly flawless touch-screen interface that seduces the user into tinkering and kibitzing every bit as much as the Kindle lulls you into a deep, concentrated read. It’s a dynamic and interactive device, whereas the Kindle is static and passive. The iBooks reader is slick and fun. Use the iPad in landscape mode and you can read the book in a 2-up (left/right, two-page) display; use it in portrait mode and you can read one page at a time. Rather than clicking a hardware button to turn pages, you simply drag and swipe from right to left to flip the single or right-hand page. The page actually travels through an animated path as it would in a physical book. The intuitiveness of the interface is uncanny. The reader also accommodates saving bookmarks, searching the text, and the ability to highlight a word and look it up in a dictionary. Pages display brightly and clearly. They’re easy to read. But the backlight and the glare made me less comfortable than I was with the Kindle. The knowledge that completely different applications (including the Web, email and Twitter) were just a few taps away made me antsy and very tempted to task-switch. The knowledge that battery life is an issue created subtle discomfort. If the Kindle makes you feel like you’re in a library reading room, then the iPad makes you feel, at best, like you’re under fluorescent lights at a Barnes and Noble or Borders store. If you’re lucky, you’d be on a couch or at a reading table in the store, but you might also be standing up, in the aisles. Clearly, I didn’t find this conducive to focused and sustained reading. But that may have more to do with my own tendency to read periodicals far more than books. And, truth be known, the book reading experience, when not explicitly compared to the Kindle’s, was still pleasant. It is also important to point out that Kindle Store-sourced books can be read on the iPad through a Kindle reader application from Amazon, specific to the device. This offered a less rich experience than the iBooks reader, but it was completely adequate. Despite the Kindle brand of the reader, however, it offered little in terms of simulating the reading experience on its namesake device. 
When it comes to periodicals, the iPad wins hands down. Magazines, even if merely scanned images of their print editions, read on the iPad in a way that felt similar to reading hard copy. The full color display, touch navigation and even the ability to render advertisements in their full glory makes the iPad a great way to read through any piece of work that is measured in pages, rather than chapters. There are many ways to get magazines and newspapers onto the iPad, including the Zinio reader, and publication-specific applications like the Wall Street Journal’s and Popular Science’s. The New York Times’ free Editors’ Choice application offers a Times Reader-like interface to a subset of the Gray Lady’s daily content. The completely Web-based but iPad-optimized Times Skimmer site (at www.nytimes.com/timesskimmer) works well too. Even conventional Web sites themselves can be read much like magazines, given the iPad’s ability to zoom in on the text and crop out advertisements on the margins. While the Kindle does have an experimental Web browser, it reminded me a lot of early mobile phone browsers, only in a larger size. For text-heavy sites with simple layout, it works fine. For just about anything else, it becomes more trouble than it’s worth. And given the way magazine articles make me think of things I want to look up online, I think that’s a real liability for the Kindle.   Summing Up What I came to realize is that the Kindle isn’t so much a computer or even an Internet device as it is a printer. While it doesn’t use physical paper, it still renders its content a page at a time, just like a laser printer does, and its output appears strikingly similar. You can read the rendered text, but you can’t interact with it in any way. That’s why the navigation requires a separate cursor display area. And because of the page-oriented rendering behavior, turning pages causes a flash on the display and requires a sometimes long pause before the next page is rendered. The good side of this is that once the page is generated, no battery power is required to display it. That makes for great battery life, optimal viewing under most lighting conditions (as long as there is some light) and low-eyestrain text-centric display of content. The Kindle is highly portable, has an excellent selection in its store and is refreshingly distraction-free. All of this is ideal for reading books. And iPad doesn’t offer any of it. What iPad does offer is versatility, variety, richness and luxury. It’s flush with accoutrements even if it’s low on focused, sustained text display. That makes it inferior to the Kindle for book reading. But that also makes it better than the Kindle for almost everything else. As such, and given that its book reading experience is still decent (even if not superior), I think the iPad will give Kindle a run for its money. True book lovers, and people on a budget, will want the Kindle. People with a robust amount of discretionary income may want both devices. Everyone else who is interested in a slate form factor e-reading device, especially if they also wish to have leisure-friendly Internet access, will likely choose the iPad exclusively. One thing is for sure: iPad has reduced Kindle’s market, and may have shifted its mass market potential to a mere niche play. If Amazon is smart, it will improve its iPad-based Kindle reader app significantly. It can then leverage the iPad channel as a significant market for the Kindle Store. 
After all, selling the eBooks themselves is what Amazon should care most about.


  • Instruments memory leak iphone

    - by dubbeat
    Hi, I posted this problem a few days ago but it was very muddled and my question wasnt very clear so I removed it. I've been digging around and the memory leak is still persiting. Hopefully this attempt will be clearer. First off I've run the static analyzer and it reports no memory leaks. I then ran Instruments and it pointed to a memory leak at this line of code. As far as I can see there is no memory leak. featured=[[UILabel alloc]initWithFrame:CGRectMake(130,15, 200, 15)]; //[featured setFont:[UIFont UIFontboldSystemFontOfSize:20]]; featured.font = [UIFont boldSystemFontOfSize:20]; featured.backgroundColor= [UIColor clearColor]; featured.textColor=[UIColor blackColor]; featured.text= @"Featured Promo"; [self.view addSubview:featured]; [featured release]; featured=nil; If I comment out the above code Instruments reports another memory leak in another block of code where there is no discernible leak. UIButton *populartbutton = [[UIButton buttonWithType:UIButtonTypeRoundedRect]]; populartbutton.frame = CGRectMake(112, 145, 90, 22); // size and position of button [populartbutton setTitle:@"Popular" forState:UIControlStateNormal]; populartbutton.backgroundColor = [UIColor clearColor]; populartbutton.adjustsImageWhenHighlighted = YES; [populartbutton addTarget:self action:@selector(getpopular:) forControlEvents:UIControlEventTouchUpInside]; [self.view addSubview:populartbutton]; Instruments also says Responsible Library = Core Graphics Responsible Frame = open_handle_to_dylib_path This Is the stack trace. 53 Promo start 52 Promo main /Users/..2/main.m:14 51 UIKit UIApplicationMain 50 UIKit -[UIApplication _run] 49 CoreFoundation CFRunLoopRunInMode 48 CoreFoundation CFRunLoopRunSpecific 47 GraphicsServices PurpleEventCallback 46 UIKit _UIApplicationHandleEvent 45 UIKit -[UIApplication sendEvent:] 44 UIKit -[UIApplication handleEvent:withNewEvent:] 43 UIKit -[UIApplication _reportAppLaunchFinished] 42 QuartzCore CA::Transaction::commit() 41 QuartzCore CA::Context::commit_transaction(CA::Transaction*) 40 QuartzCore CALayerLayoutIfNeeded 39 QuartzCore -[CALayer layoutSublayers] 38 UIKit -[UILayoutContainerView layoutSubviews] 37 UIKit -[UINavigationController _startDeferredTransitionIfNeeded] 36 UIKit -[UINavigationController _startTransition:fromViewController:toViewController:] 35 UIKit -[UINavigationController _layoutViewController:] 34 UIKit -[UINavigationController_computeAndApplyScrollContentInsetDeltaForViewController:] 33 UIKit -[UIViewController contentScrollView] 32 UIKit -[UIViewController view] 31 Promo -[FeaturedLevelViewController viewDidLoad] /Users/..s/FeaturedLevelViewController.m:67 // THIS IS MY CLASS WHERE THE CODE SAMPLES ABOVE ARE FROM 30 UIKit -[UILabel initWithFrame:] 29 UIKit -[UILabel _commonInit] 28 UIKit +[UILabel defaultFont] 27 UIKit +[UIFont systemFontOfSize:] 26 GraphicsServices GSFontCreateWithName 25 CoreGraphics CGFontCreateWithName 24 CoreGraphics CGFontCreateWithFontName 23 CoreGraphics CGFontFinderGetDefault 22 CoreGraphics CGFontGetVTable 21 libSystem.B.dylib pthread_once 20 CoreGraphics load_vtable 19 CoreGraphics load_library 18 CoreGraphics CGLibraryLoadFunction 17 CoreGraphics load_function 16 CoreGraphics open_handle_to_dylib_path 15 libSystem.B.dylib dlopen 14 dyld dlopen 13 dyld dyld::link(ImageLoader*, bool, ImageLoader::RPathChain const&) 12 dyld ImageLoader::link(ImageLoader::LinkContext const&, bool, bool, ImageLoader::RPathChain const&) 11 dyld ImageLoader::recursiveLoadLibraries(ImageLoader::LinkContext const&, bool, ImageLoader::RPathChain 
const&) 10 dyld dyld::libraryLocator(char const*, bool, char const*, ImageLoader::RPathChain const*) 9 dyld dyld::load(char const*, dyld::LoadContext const&) 8 dyld dyld::loadPhase0(char const*, dyld::LoadContext const&, std::vector<char const*, std::allocator<char const*> >*) 7 dyld dyld::loadPhase1(char const*, dyld::LoadContext const&, std::vector<char const*, std::allocator<char const*> >*) 6 dyld dyld::loadPhase3(char const*, dyld::LoadContext const&, std::vector<char const*, std::allocator<char const*> >*) 5 dyld dyld::loadPhase4(char const*, dyld::LoadContext const&, std::vector<char const*, std::allocator<char const*> >*) 4 dyld dyld::loadPhase5(char const*, dyld::LoadContext const&, std::vector<char const*, std::allocator<char const*> >*) 3 dyld dyld::mkstringf(char const*, ...) 2 dyld strdup 1 dyld malloc 0 libSystem.B.dylib malloc I'm really not too sure how to use this information to fix the problem so any guidance would be appreciated. Perhaps the answer is in the trace but I just don't know what to look for? EDIT:: The above stack trace is when running on the simulator. The following is from running on a device. This trace does not point to any of my own classes 23 Promo 0x0 22 libSystem.B.dylib _pthread_body 21 Foundation __NSThread__main__ 20 Foundation +[NSThread exit] 19 libSystem.B.dylib _pthread_exit 18 libSystem.B.dylib _pthread_tsd_cleanup 17 QuartzCore CA::Transaction::release_thread(void*) 16 QuartzCore CA::Transaction::commit() 15 QuartzCore CA::Context::commit_transaction(CA::Transaction*) 14 QuartzCore CALayerDisplayIfNeeded 13 QuartzCore -[CALayer display] 12 QuartzCore -[CALayer _display] 11 QuartzCore CABackingStoreUpdate 10 QuartzCore backing_callback(CGContext*, void*) 9 QuartzCore -[CALayer drawInContext:] 8 UIKit -[UIView(CALayerDelegate) drawLayer:inContext:] 7 UIKit -[UILabel drawRect:] 6 UIKit -[UILabel drawTextInRect:] 5 UIKit -[UILabel _drawTextInRect:baselineCalculationOnly:] 4 UIKit -[NSString(UIStringDrawing) drawAtPoint:forWidth:withFont:lineBreakMode:] 3 UIKit -[NSString(UIStringDrawing) drawAtPoint:forWidth:withFont:lineBreakMode:letterSpacing:includeEmoji:] 2 WebCore WKSetCurrentGraphicsContext 1 WebCore CurrentThreadContext() 0 libSystem.B.dylib calloc


  • CodePlex Daily Summary for Tuesday, May 25, 2010

    CodePlex Daily Summary for Tuesday, May 25, 2010New ProjectsBibleNames: BibleNames BibleNames BibleNames BibleNames BibleNamesBing Search for PHP Developers: This is the Bing SDK for PHP, a toolkit that allows you to easily use Bing's API to fetch search results and use in your own application. The Bin...Fading Clock: Fading Clock is a normal clock that has the ability to fade in out. It's developed in C# as a Windows App in .net 2.0.Fuzzy string matching algorithm in C# and LINQ: Fuzzy matching strings. Find out how similar two string is, and find the best fuzzy matching string from a string table. Given a string (strA) and...HexTile editor: Testing hexagonal tile editorhgcheck12: hgcheck12Metaverse Router: Metaverse Router makes it easier for MIIS, ILM, FIM Sync engine administrators to manage multiple provisioning modules, turn on/off provisioning wi...MyVocabulary: Use MyVocabulary to structure and test the words you want to learn in a foreign language. This is a .net 3.5 windows forms application developed in...phpxw: Phpxw 是一个简易的PHP框架。以我自己的姓名命名的。 Phpxw is a simple PHP framework. Take my name named.Plop: Social networking wrappers and projects.PST Data Structure View Tool: PST Data Structure View Tool (PSTViewTool) is a tool supporting the PST file format documentation effort. It allows the user to browse the internal...PST File Format SDK: PST File Format SDK (pstsdk) is a cross platform header only C++ library for reading PST files.QWine: QWine is Queue Machine ApplicationSharePoint Cross Site Collection Security Trimmed Navigation: This SP2010 project will show security trimmed navigation that works across site collections. The project is written for SP2010, but can be easily ...SharePoint List Field Manager: The SharePoint List Field Manager allows users to manage the Boolean properties of a list such as Read Only, Hidden, Show in New Form etc... It sup...Silverlight Toolbar for DataGrid working with RIA Services or local data: DataGridToolbar contains two controls for Silverlight DataGrid designed for RIA Services and local data. It can be used to filter or remove a data,...SilverShader - Silverlight Pixel Shader Demos: SilverShader is an extensible Silverlight application that is used to demonstrate the effect of different pixel shaders. The shaders can be applied...SNCFT Gadget: Ce gadget permet de consulter les horaires des trains et de chercher des informations sur le site de la société nationale des chemins de fer tunisi...Software Transaction Memory: Software Transaction Memory for .NETStreamInsight Samples: This project contains sample code for StreamInsight, Microsoft's platform for complex event processing. The purpose of the samples is to provide a ...StyleAnalizer: A CSS parserSudoku (Multiplayer in RnD): Sudoku project was to practice on C# by making a desktop application using some algorithm Before this, I had worked on http://shaktisaran.tech.o...Tiplican: A small website built for the purpose of learning .Net 4 and MVC 2TPager: Mercurial pager with color support on Windowsunirca: UNIRCA projectWcfTrace: The WcfTrace is a easy way to collect information about WCF-service call order, processing time and etc. 
It's developed in C#.New ReleasesASP.NET TimePicker Control: 12 24 Hour Bug Fix: 12 24 Hour Bug FixASP.NET TimePicker Control: ASP.NET TimePicker Control: This release fixes a focus bug that manifests itself when switching focus between two different timepicker controls on the same page, and make chan...ASP.NET TimePicker Control: ASP.NET TimePicker Control - colon CSS Fix: Fixes ":" seperator placement issues being too hi and too low in IE and FireFox, respectively.ASP.NET TimePicker Control: Release fixes 24 Hour Mode Bug: Release fixes 24 Hour Mode BugBFBC2 PRoCon: PRoCon 0.5.1.1: Visit http://phogue.net/?p=604 for release notes.BFBC2 PRoCon: PRoCon 0.5.1.2: Release notes can be found at http://phogue.net/?p=604BFBC2 PRoCon: PRoCon 0.5.1.4: Ha.. choosing the "stable" option at the moment is a bit of a joke =\ Release notes at http://phogue.net/?p=604BFBC2 PRoCon: PRoCon 0.5.1.5: BWHAHAHA stable.. ha. Actually this ones looking pretty good now. http://phogue.net/?p=604Bojinx: Bojinx Core V4.5.14: Issues fixed in this release: Fixed an issue that caused referencePropertyName when used through a property configuration in the context to not wo...Bojinx: Bojinx Debugger V0.9B: Output trace and filtering that works with the Bojinx logger.CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1.5 and 4.0.1.5 beta3: Fixed fairly serious bug #13290 http://cassinidev.codeplex.com/WorkItem/View.aspx?WorkItemId=13290Content Rendering: Content Rendering API 1.0.0 Revision 46406: Initial releaseDeploy Workflow Manager: Deploy Workflow Manager Web Part v2: Recommend you test in your development environment first BEFORE using in production.dotSpatial: System.Spatial.Projections Zip May 24, 2010: Adds a new spherical projection.eComic: eComic 2010.0.0.2: Quick release to fix a couple of bugs found in the previous version. Version 2010.0.0.2 Fixed crash error when accessing the "Go To Page" dialog ...Exchange 2010 RBAC Editor (RBAC GUI) - updated on 5/24/2010: RBAC Editor 0.9.4.1: Some bugs fixed; support for unscopoedtoplevel (adding script is not working yet) Please use email address in About menu of the tool for any feedb...Facebook Graph Toolkit: Preview 1 Binaries: The first preview release of the toolkit. Not recommended for use in production, but enought to get started developing your app.Fading Clock: Clock.zip: Clock.zip is a zip file that contains the application Clock.exe.hgcheck12: Rel8082: Rel8082hgcheck12: scsc: scasMetaverse Router: Metaverse Router v1.0: Initial stable release (v.1.0.0.0) of Metaverse Router Working with: FIM 2010 Synchronization Engine ILM 2007 MIIS 2003MSTestGlider: MSTestGlider 1.5: What MSTestGlider 1.5.zip unzips to: http://i49.tinypic.com/2lcv4eg.png If you compile MSTestGlider from its source you will need to change the ou...MyVocabulary: Version 1.0: First releaseNLog - Advanced .NET Logging: Nightly Build 2010.05.24.001: Changes since the last build:2010-05-23 20:45:37 Jarek Kowalski fixed whitespace in NLog.wxs 2010-05-23 12:01:48 Jarek Kowalski BufferingTargetWra...NoteExpress User Tools (NEUT) - Do it by ourselves!: NoteExpress User Tools 2.0.0: 测试版本:NoteExpress 2.5.0.1154 +调整了Tab页的排列方式 注:2.0未做大的改动,仅仅是运行环境由原来的.net 3.5升级到4.0。openrs: Beta Release (Revision 1): This is the beta release of the openrs framework. Please make sure you submit issues in the issue tracker tab. As this is beta, extreme, flawless ...openrs: Revision 2: Revision 2 of the framework. 
Basic worker example as well as minor improvements on the auth packet.phpxw: Phpxw: Phpxw 1.0 phpxw 是一个简易的PHP框架。以我自己的姓名命名的。 Phpxw is a simple PHP framework. Take my name named. 支持基本的业务逻辑流程,功能模块化,实现了简易的模板技术,同时也可以支持外接模板引擎。 Support...sELedit: sELedit v1.1a: Fixed: clean file before overwriting Fixed: list57 will only created when eLC.Lists.length > 57sGSHOPedit: sGSHOPedit v1.0a: Fixed: bug with wrong item array re-size when adding items after deleting items Added: link to project page pwdatabase.com version is now selec...SharePoint Cross Site Collection Security Trimmed Navigation: Release 1.0.0.0: If you want just the .wsp, and start using this, you can grab it here. Just stsadm add/deploy to your website, and activate the feature as describ...Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4.0 v1.24 Beta: - Updated the demo and added clipboard cut/copy and paste functionality. - Added delay on hover events for both parent and child menus. - Parent me...Silverlight Toolbar for DataGrid working with RIA Services or local data: DataGridToolBar Beta: For Silverlight 4.0sMAPtool: sMAPtool v0.7d (without Maps): Added: link to project pagesMODfix: sMODfix v1.0a: Added: Support for ECM v52 modelssNPCedit: sNPCedit v0.9a: browse source commits for all changes...SocialScapes: SocialScapes TwitterWidget 1.0.0: The SocialScapes TwitterWidget is a DotNetNuke Widget for displaying Twitter searches. This widget will be used to replace the twitter functionali...SQL Server 2005 and 2008 - Backup, Integrity Check and Index Optimization: 23 May 2010: This is the latest version of my solution for Backup, Integrity Check and Index Optimization in SQL Server 2005, SQL Server 2008 and SQL Server 200...sqwarea: Sqwarea 0.0.280.0 (alpha): This release brings a lot of improvements. We strongly recommend you to upgrade to this version.sTASKedit: sTASKedit v0.7c: Minor Changes in GUI & BehaviourSudoku (Multiplayer in RnD): Sudoku (Multiplayer in RnD) 1.0.0.0 source: Sudoku project was to practice on C# by making a desktop application using some algorithm Idea: The basic idea of algorithm is from http://www.ac...Sudoku (Multiplayer in RnD): Sudoku (Multiplayer in RnD) 1.0.0.1 source: Worked on user-interface, would improve it Sudoku project was to practice on C# by making a desktop application using some algorithm Idea: The b...TFS WorkItem Watcher: TFS WorkItem Watcher Version 1.0: This version contains the following new features: Added support to autodetect whether to start as a service or to start in console mode. The "-c" ...TfsPolicyPack: TfsPolicyPack 0.1: This is the first release of the TfsPolicyPack. This release includes the following policies: CustomRegexPathPolicythinktecture Starter STS (Community Edition): StarterSTS v1.1 CTP: Added ActAs / identity delegation support.TPager: TPager-20100524: TPager 2010-05-24 releaseTrance Layer: TranceLayer Transformer: Transformer is a Beta version 2, morphing from "Digger" to "Transformer" release cycle. It is intended to be used as a demonstration of muscles wh...TweetSharp: TweetSharp v1.0.0.0: Changes in v1.0.0Added 100% public code comments Bug fixes based on feedback from the Release Candidate Changes to handle Twitter schema additi...VCC: Latest build, v2.1.30524.0: Automatic drop of latest buildWCF Client Generator: Version 0.9.2.33468: Version 0.9.2.33468 Fixed: Nested enum types names are not handled correctly. 
Can't close Visual Studio if generated files are open when the code...Word 2007 Redaction Tool: Version 1.2: A minor update to the Word 2007 Redaction Tool. This version can be installed directly over any existing version. Updates to Version 1.2Fixed bugs:...xPollinate - Windows Live Writer Cross Post Plugin: 1.0.0.5 for WLW 14.0.8117.416: This version works with WLW 14.0.8117.416. This release includes a fix to enable publishing posts that have been opened directly from a blog, but ...Yet another developer blog - Examples: jQuery Autocomplete in ASP.NET MVC: This sample application shows how to use jQuery Autocomplete plugin in ASP.NET MVC. This application is accompanied by the following entry on Yet a...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryRawrpatterns & practices: Windows Azure Security GuidanceSqlServerExtensionsGMap.NET - Great Maps for Windows Forms & PresentationMono.AddinsCaliburn: An Application Framework for WPF and SilverlightBlogEngine.NETIonics Isapi Rewrite FilterSQL Server PowerShell Extensions


  • When adding WCF service reference, configuration details are not added to web.config

    - by Mikey Cee
    Hi, I am trying to add a WCF service reference to my web application using VS2010. It seems to add OK, but the web.config is not updated, meaning I get a runtime exception: Could not find default endpoint element that references contract 'CoolService.CoolService' in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this contract could be found in the client element. Obviously, because the service is not defined in my web.config. Steps to reproduce: Right click solution Add New Project ASP.NET Empty Web Application. Right click Service References in the new web app Add Service Reference. Enter address of my service and click Go. My service is visible in the left-hand Services section, and I can see all its operations. Type a namespace for my service. Click OK. The service reference is generated correctly, and I can open the Reference.cs file, and it all looks OK. Open the web.config file. It is still empty! <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> <system.serviceModel> <bindings /> <client /> </system.serviceModel> Why is this happening? It also happens with a console application, or any other project type I try. Any help? Here is the app.config from my WCF service: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" /> </system.web> <!-- When deploying the service library project, the content of the config file must be added to the host's app.config file. System.Configuration does not support config files for libraries. --> <system.serviceModel> <services> <service name="CoolSQL.Server.WCF.CoolService"> <endpoint address="" binding="webHttpBinding" contract="CoolSQL.Server.WCF.CoolService" behaviorConfiguration="SilverlightFaultBehavior"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="http://localhost:8732/Design_Time_Addresses/CoolSQL.Server.WCF/CoolService/" /> </baseAddresses> </host> </service> </services> <behaviors> <endpointBehaviors> <behavior name="webBehavior"> <webHttp /> </behavior> <behavior name="SilverlightFaultBehavior"> <silverlightFaults /> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name=""> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <webHttpBinding> <binding name="DefaultBinding" bypassProxyOnLocal="true" useDefaultWebProxy="false" hostNameComparisonMode="WeakWildcard" sendTimeout="00:05:00" openTimeout="00:05:00" receiveTimeout="00:00:10" maxReceivedMessageSize="2147483647" transferMode="Streamed"> <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" /> </binding> </webHttpBinding> </bindings> <extensions> <behaviorExtensions> <add name="silverlightFaults" type="CoolSQL.Server.WCF.SilverlightFaultBehavior, CoolSQL.Server.WCF" /> </behaviorExtensions> </extensions> <diagnostics> <messageLogging logEntireMessage="true" logMalformedMessages="false" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="false" maxMessagesToLog="3000" maxSizeOfMessageToLog="2000" /> </diagnostics> </system.serviceModel> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" /> </startup> <system.diagnostics> <sources> <source name="System.ServiceModel.MessageLogging" switchValue="Information, 
ActivityTracing"> <listeners> <add name="messages" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\messages.e2e" /> </listeners> </source> </sources> </system.diagnostics> </configuration>

