Search Results

Search found 11991 results on 480 pages for 'cedric copy'.


  • Unmountable boot volume blue screen, what should I do?

    - by Josh
    I was trying to install an update from NVIDIA for my GTX 560, but while it was installing, my computer shut off. After a few minutes, I turned it back on. It got to the Windows boot screen and then had a blue screen error, and if left on it would just keep doing that.

A few details about my PC: I haven't added any new hardware or software, and I've been running Windows XP Professional 32 bit and Windows XP Professional 64 bit on the same hard drive for about 2 years now. I have 2 other hard drives also, but I don't have one large enough to hold everything from my main hard drive, so formatting isn't an option.

Now, as for what I've done so far: I've scanned the RAM with memtest86 v3.4 and it said that it was good. I scanned the hard drive in question with chkdsk /r and it gets to 50% and tells me something along the lines of "the drive has one or more unrepairable problems". I also tried to use chkdsk on the drive I installed the new copy of Windows XP on, and it got to 75%, then jumped back down to 50% and stayed there (I had to reboot the PC). So, after that, I turned off auto reboot and got to read the blue screen error code, and I looked it up only to find that nobody seems to have this problem, just problems close to it. The error code is 0x000000ed, and I've seen a lot of these online but none that matched the detailed part of the code: UNMOUNTABLE_BOOT_VOLUME 0x000000ed (0xfffffadf513c19a0, 0xffffffffc0000006, 0, 0)

So, I installed another copy of Windows XP Professional 32 bit on one of my other hard drives in hopes of accessing the data on the drive in question, and when it booted it asked if I wanted chkdsk to scan the drive in question. This is what it found: file record segments 12740, 12741, 12742 and 12743 were reported unreadable. Then it says "recovering lost files", but it sits there for a few seconds and then just boots to Windows. I can't access the drive in question from Windows as far as I can tell; it just says "drive not accessible", and when I go to properties it says that the drive has 100% free space.

After that failed I didn't give up; I looked for another way to access the drive in question. I used a Ubuntu bootable disk and was able to access the drive in question without any problems. However, I can't access the registry editor because it's a .exe file and that won't load from Ubuntu. I made a copy of the "Windows" folder and put it on one of my other drives, and that's where I'm stuck now.

I'm sure my drive works fine, I know chkdsk can't fix the problem with it, and I know what caused the problem in the first place for the most part, but I don't know what to do about it. I have a laptop that I can use to download and burn disks if needed, and I also have the other copy of Windows XP Professional 32 bit installed on the computer in question that I can use (so I know it's not a hardware issue). I'm pretty sure it's a driver issue, or the update was editing the registry when the machine shut off and left me with a broken registry. I've tried accessing C:\Windows\System32\CONFIG only to find that the Windows XP disk repair option can't even access the files on the drive in question. It seems I'll need to be able to do everything from Ubuntu unless there is something I haven't tried with the Windows XP disk. I didn't install the update on Windows XP 64 bit, yet it also has the same blue screen error (that's where the error code above came from, but I haven't checked to see if they are the same).
They both stopped working at the same time, so I assume it's one problem causing both to not work.
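For what it's worth, here is a minimal sketch of how the damaged volume's registry hives could be copied out from the Ubuntu live session. The device names, mount points and backup destination are hypothetical and need to be adjusted to the actual disk layout:

# identify the XP system partition first, e.g. with: sudo fdisk -l
sudo mkdir -p /mnt/xp /mnt/backup
sudo mount -t ntfs-3g -o ro /dev/sda1 /mnt/xp     # mount the damaged volume read-only
sudo mount /dev/sdb1 /mnt/backup                  # a healthy drive to copy to
# the registry hives (SYSTEM, SOFTWARE, SAM, ...) live here; directory case can vary
sudo cp -a /mnt/xp/WINDOWS/system32/config /mnt/backup/xp-registry-backup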

    Read the article

  • Command-line video editing in Linux (cut, join and preview)

    - by sdaau
    I have rather simple editing needs - I need to cut up some videos, maybe insert some PNGs in between them, and join these videos (don't need transitions, effects, etc.). Basically, pitivi would do what I want - except, I use 640x480 30 fps AVIs from a camera, and as soon as I put in over a couple of minutes of that kind of material, pitivi starts freezing on preview, and thus becomes unusable.

So, I started looking for a command line tool for Linux; I guess only ffmpeg (command line - Using ffmpeg to cut up video - Super User) and mplayer (Sam - Edit video file with mencoder under linux) are candidates so far, but I cannot find examples of the use I have in mind.

Basically, I'd imagine there are encoder and player tools (like ffmpeg vs ffplay, or mencoder vs mplayer), such that, to begin with, the edit sequence could be specified directly on the command line, preferably with frame resolution - pseudocode would look like:

videnctool -compose --file=vid1.avi --start=00:00:30:12 --end=00:01:45:00 --file=vid2.avi --start=00:05:00:00 --end=00:07:12:25 --file=mypicture.png --duration=00:00:02:00 --file=vid3.avi --start=00:02:00:00 --end=00:02:45:10 --output=editedvid.avi

... or, it could have a "playlist" text file, like:

vid1.avi 00:00:30:12 00:01:45:00
vid2.avi 00:05:00:00 00:07:12:25
mypicture.png - 00:00:02:00
vid3.avi 00:02:00:00 00:02:45:10

... so it could be called with:

videnctool -compose --playlist=playlist.txt --output=editedvid.avi

The idea here would be that all of the videos are in the same format - allowing the tool to avoid transcoding, and just do a "raw copy" instead (as in mencoder's copy codec: "-oac copy -ovc copy") - or, failing that, uncompressed audio/video would be OK (although it would eat a bit of space). In the case of the still image, the tool would use the encoding set by the video files.

The thing is, I can so far see that mencoder and ffmpeg can operate on individual files; e.g. cut a single section from a single file, or join files (mencoder also has Edit Decision Lists (EDL), which can be used to do frame-exact cutting - so you can define multiple cut regions, but it's again attributed to a single file). Which implies I have to first cut pieces from the individual files (each of which would demand its own temporary file on disk), and then join them into a final video file.

I would then imagine that there is a corresponding player tool, which can read the same command line option format / playlist file as the encoding tool - except it will not generate an output file, but instead play the video; e.g. in pseudocode:

vidplaytool --playlist=playlist.txt --start=00:01:14 --end=00:03:13

... and, given there's enough memory, it would generate a low-res video preview in RAM, and play it back in a window, while offering some limited interaction (like mplayer's keyboard shortcuts for play, pause, rewind, step frame). Of course, I'd imagine the start and end times to refer to the entire playlist, and to include any file that may end up in that region of the playlist. Thus, the end result of all this would be: command line operation; no temporary files while doing the editing - and also no temporary files (nor transcoding) when rendering the final output... which I myself think would be nice.

So, while I think that all of the above may be a bit of a stretch - does there exist anything that would approximate the workflow described above?
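For reference, a rough sketch of approximating the cut-then-join part of this workflow with ffmpeg alone, assuming all clips share the same codec and parameters (the filenames and times follow the pseudocode above but are illustrative; -c copy cuts on keyframes, so it is not frame-exact, and the still-image step must produce a clip whose codec/audio parameters match the camera footage for the stream-copy join to accept it):

# cut the pieces without re-encoding (start time + duration)
ffmpeg -ss 00:00:30 -i vid1.avi -t 00:01:15 -c copy part1.avi
ffmpeg -ss 00:05:00 -i vid2.avi -t 00:02:13 -c copy part2.avi
ffmpeg -ss 00:02:00 -i vid3.avi -t 00:00:45 -c copy part3.avi
# turn the still image into a short clip matching the camera footage
ffmpeg -loop 1 -i mypicture.png -t 2 -r 30 -s 640x480 -an still.avi
# describe the sequence in a small playlist and concatenate with stream copy
printf "file 'part1.avi'\nfile 'part2.avi'\nfile 'still.avi'\nfile 'part3.avi'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy editedvid.avi
# preview any piece or the final result
ffplay editedvid.avi

This still needs the intermediate files the question is trying to avoid, so it only approximates the desired zero-temporary-file workflow.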

    Read the article

  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

So far in this guide we have an IRM server up and running; however, I skipped over SSL configuration in the previous article because I wanted to cover it in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing.

Contents:
Setting up a one-way, self-signed SSL certificate in WebLogic
Setting up an official SSL certificate in Apache 2.x
Configuring Apache to proxy traffic to the IRM server

There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly with the WebLogic Server running the IRM service. However, in a production environment, and for some proof of concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service, and this article covers the configuration of SSL in WebLogic and also in Apache. As in the past articles, we are going to use two host names in the configuration below: irm.company.com will refer to the public Apache server, and irm.company.internal will refer to the internal WebLogic IRM server.

Setting up a one-way, self-signed SSL certificate in WebLogic

First let's look at creating a simple self-signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment; the downside is that no browsers are going to trust this certificate, and you'll need to manually install it onto any machines communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use, and I explain that in the next section. For now, let's go through creating, installing and testing a self-signed certificate. We use a Java library to create the certificates; open a console and run the following commands. Note you should choose your own secure passwords wherever you see password below.

[oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh
[oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
[oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo"
[oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password
[oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA

We now have two Java keystores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain the keys and certificates which we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin Server is running, log in to the WebLogic Console at http://irm.company.internal:7001/console and do the following:

In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1.
If the IRM server is running, shut it down either by hitting CONTROL + C in the console window it was started from, or by switching to the Control tab, selecting IRM_server1 and then selecting the Shutdown menu and then Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses its own demo identity and trust; we are now going to switch to the self-signed ones we've just created. Select the Change button, switch to Custom Identity and Custom Trust, and hit save. Now we have to complete the resulting fields; the settings I've used on my evaluation server are below.

Identity
Custom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks
Custom Identity Keystore Type: JKS
Custom Identity Keystore Passphrase: password
Confirm Custom Identity Keystore Passphrase: password

Trust
Custom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks
Custom Trust Keystore Type: JKS
Custom Trust Keystore Passphrase: password
Confirm Custom Trust Keystore Passphrase: password

Now click on the SSL tab for IRM_server1 and enter the alias and passphrase; in my demo here the details are:

Identity
Private Key Alias: trustself
Private Key Passphrase: password
Confirm Private Key Passphrase: password

And hit save. Now let's test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server; a quick reminder on how to do this is:

[oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin
[oracle@irm /] ./startManagedWeblogic IRM_server1

Once running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.internal:16101/irm_rights. Note that in the example image on the right the port is 7002, because it's a system that has the IRM services installed on the Admin server; this isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101. Once you open this address you will notice that your browser is going to complain that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and, more annoyingly, from the IRM Desktop. If you plan on always using a self-signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being told this certificate is not trusted. Follow these instructions (which are for Internet Explorer 8; they may vary for your version of IE):

Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.internal:16101/irm_rights. IE will complain about the certificate; click on Continue to this website (not recommended). From the IE Tools menu select Internet Options, and from the resulting dialog select Security, then click on Trusted Sites and then the Sites button. Add to the list of trusted sites a URL which matches the server you are accessing, e.g. https://irm.company.internal/, and select OK. Now refresh the page you were accessing and next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self-signed certificate, and the Install Certificate...
button should be enabled. Click on this to start the wizard. Click next and you'll be asked where you should install the certificate. Change the option to Place all certificates in the following store. Select browse, choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate; answer yes. You also need to import the root signed certificate into the same location, so once again select the red Certificate Error option and this time, when viewing the certificate, switch to the Certification Path tab and you should see a CertGenCAB certificate. Select this, then click on View Certificate and go through the same process as above to import the certificate into the store. Finally close all instances of the IE browser and re-access the IRM server URL; this time you should not receive any errors.

Setting up an official SSL certificate in Apache 2.x

At this point we now have an IRM server that you can communicate with over SSL. However this certificate isn't trusted by any browser, because its path of trust doesn't end in a recognized certificate authority (CA). Also, you are communicating directly to the WebLogic Server over a non-standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy this to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What I'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA.

The first step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service on Oracle Enterprise Linux 5 update 3 with Apache 2.2.3, which came with the OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign, and for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are:

[oracle@irm /] cd /usr/bin
[oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048
Generating RSA private key, 2048 bit long modulus
............................+++
.........+++
e is 65537 (0x10001)
Enter pass phrase for irm-apache-server.key:
Verifying - Enter pass phrase for irm-apache-server.key:
[oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr
Enter pass phrase for irm-apache-server.key:
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:CA
Locality Name (eg, city) [Newbury]:San Francisco
Organization Name (eg, company) [My Company Ltd]:Oracle
Organizational Unit Name (eg, section) []:Security
Common Name (eg, your name or your server's hostname) []:irm.company.com
Email Address []:[email protected]
Please enter the following 'extra' attributes to be sent with your certificate request
A challenge password []:testing
An optional company name []:

You must make sure to remember the pass phrase you used in the initial key generation; you will need it later when configuring Apache. In the /usr/bin directory there are now two new files. The irm-apache-server.csr file contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files: your server certificate, and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file, which is where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located at /etc/httpd/conf.d/ssl.conf, and I've added the following lines.

<VirtualHost irm.company.com>
# Setup SSL for irm.company.com
ServerName irm.company.com
SSLEngine On
SSLCertificateFile /oracle/secure/irm.company.com.crt
SSLCertificateKeyFile /oracle/secure/irm.company.com.key
SSLCertificateChainFile /oracle/secure/gd_bundle.crt
</VirtualHost>

After restarting Apache (apachectl restart) I can now attempt to connect to the Apache server in a web browser, https://irm.company.com/. If all is configured correctly I should now see an Apache test page delivered to me over HTTPS.

Configuring Apache to proxy traffic to the IRM server

The final piece in setting up SSL is to have Apache proxy requests for the IRM server, but do so securely. So the requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self-signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plug-in for Apache, which you can download from Oracle. Download the zip file and extract it onto the server. The extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32-bit on an Oracle Enterprise Linux 64-bit server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see the version and whether it is 32 or 64 bit. Mine is a 32-bit server, so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. Then, from the resulting lib folder, copy the file mod_wl.so into /usr/lib/httpd/modules/. First we want to test that the plug-in will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom.
LoadModule weblogic_module modules/mod_wl.so

<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16100
   WLLogFile /tmp/wl-proxy.log
</IfModule>

<Location /irm_rights>
   SetHandler weblogic-handler
</Location>
<Location /irm_desktop>
   SetHandler weblogic-handler
</Location>
<Location /irm_sealing>
   SetHandler weblogic-handler
</Location>
<Location /irm_services>
   SetHandler weblogic-handler
</Location>

Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server. Note above I have included all four of the locations you might wish to proxy: http://irm.company.internal/irm_rights is the URL to the management website, /irm_desktop is the URL the IRM Desktop uses to communicate, /irm_sealing is for web-services-based document sealing, and /irm_services is for IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing these resources from the public facing Apache server. However, just in case, I've mentioned them above.

Now let's enable SSL communication from Apache to WebLogic. In the ZIP file we extracted were some more modules we need to copy into the Apache folder. Looking back in the lib folder that we extracted, there are some more files. Copy the following into the /usr/lib/httpd/modules/ folder.

libwlssl.so
libnnz11.so
libclntsh.so.11.1

Now, the documentation states that you should only need to do this, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH and point it to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error:

[crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0

So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; in that case simply add this path to it.

LD_LIBRARY_PATH=/usr/lib/httpd/modules/
export LD_LIBRARY_PATH

Now, the WebLogic plug-in uses an Oracle Wallet to store the required certificates. You'll need to copy the self-signed certificate from the IRM server over to the Apache server. Copy the MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates, in my example /oracle/secure. It's worth mentioning these files should ONLY be readable by root (the user Apache runs as). Now let's create an Oracle Wallet and import the self-signed certificate from the IRM server. The file orapki was included in the bin folder of the Apache 1.1 plug-in zip you extracted.

orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only
orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only

Finally, change the httpd.conf to reflect that we want the WebLogic Apache plug-in to use HTTPS/SSL and not just plain HTTP.

<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16101
   SecureProxy ON
   WLSSLWallet /oracle/secure/my-wallet
   WLLogFile /tmp/wl-proxy.log
</IfModule>

Then restart Apache once more and you can go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights.
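A quick way to sanity-check each hop once this is in place; the hostnames, port and keystore path are the ones used above, and these commands (not from the original article) assume keytool, openssl and curl are available on the respective machines:

# confirm the identity keystore contains the trustself entry
keytool -list -keystore /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks -storepass password
# confirm WebLogic presents the self-signed certificate on the managed server's SSL port
echo | openssl s_client -connect irm.company.internal:16101 2>/dev/null | openssl x509 -noout -subject -issuer
# confirm Apache answers over HTTPS and proxies /irm_rights (-k skips trust checks)
curl -kI https://irm.company.com/irm_rights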
At this point you have a fully functional Oracle IRM service; the next step is to create a sealed document and test the entire system.

    Read the article

  • UPDATE FOR BI PUBLISHER ENTERPRISE 10.1.3.4.1 MARCH 2010

    - by Tim Dexter
    Latest roll up patch for 10.1.3.4.1 is now out in the wild. Yep, there are bug fixes but the guys have implemented some great enhancements. I'll be covering some of them over the coming weeks, from collapsing bookmarks in your PDFs to better MS AD support to 'true' Excel templates, yes you read that correctly! Patch is available from Oracle's support site. Just search for patch 9546699. Here's the contents and readme, apologies for the big list but at least you can search against it for a particular fix. This patch contains backports of following bugs for BI Publisher Enterprise 10.1.3.4.0 and 10.1.3.4.1. 6193342 - REG:SAMPLE DATA FILE FOR PDF FORM MAPPING IS NOT VALIDATED 6261875 - ERRONEOUS PRECISION VALIDATION ON ONLINE ANALYZER 6439437 - NULL POINTER EXCEPTION WHEN PROCESSING TABLE OF CONTENT 6460974 - BACS EFT PAYMENT INSTRUCTION OUTPUT FILE IS EMPTY 6939721 - BIP: REPORT BUSTING DELIVERY KEY VALUES CANNOT CONTAIN SEVERAL SPECIAL CHARACTER 6996069 - USING XML DB FOR BI REPOSITORY FAILS WITH RESOURCENOTFOUNDEXCEPTION 7207434 - TIMEZONE:SHOULD NOT DO TIMEZONE CONVERSION AGAINST CANONICAL DATE YYYY-MM-DD 7371531 - SUPPORT FOR CSV OUTPUT FOR STRUCTURED XML AND NON SQL DATA SOURCES 7596148 - ER: LDAP FOR MS AD TO SEARCH FROM AD ROOT 7646139 - WEBSERVICES ERROR 7829516 - BIP STANDALONE FAILS TO BURST USING XSL-FO TEMPLATES 8219848 - PDF TEMPLATE REPORT NOT PERFORMING PAGE BREAK 8232116 - PARAMETER VALUE IS PASSED AS NULL,IF IT CONTAINS 'AND' WITHIN THE STRING 8250690 - NOT ABLE TO UPLOAD TEMPLATE VIA BIP API 8288459 - ER: QUERY BUILDER OPTION TO NOT INCLUDE TABLENAME. PREFIX IN SQL 8289600 - REPORT TITLE AND DESCRIPTION CAN'T SUPPORT MULTIPLE LANGUAGES 8327080 - CAN NOT CONFIGURE ORACLE EBUSINESS SUITE SECURITY MODEL WITH ORACLE RAC 8332164 - AN XDO PROPERTY TO ENABLE DEBUG LOGGING 8333289 - WEB SERVICE JOBS FAIL AFTER BIP STARTED UP 8340239 - HTTP NOTIFY IS MISSING IN SCHEDULEREPORTREQUEST 8360933 - UNABLE TO USE LOGGED IN BI USER AS THE WSSECURITY USERNAME IN A VARIABLE FORMAT 8400744 - ADMINISTRATOR USER DOES NOT HAVE FULL ADMINISTRATOR RIGHTS 8402436 - CRASH CAUSED BY UNDETERMINED ATTRKEY ERROR IN MULTI-THREADED 8403779 - IMPOSSIBLE TO CONFIGURE PARAMETER FOR A REPORT 8412259 - PDF, RTF OUTPUT NOT HANDLING THE TABLE BORDER AND CONTENT OVERFLOWS TO NEXT PAGE 8483919 - DYNAMIC DATASOURCE WEBSERVICE SHOULD WORK WITH SERVERSIDE CONNECTIONS 8444382 - ID ATTRIBUTE IN TITLE-PAGE DOES NOT WORK WITH SELECTACTION PROPERTY 8446681 - UI LANGUAGE IS NOT REFLECTED AT THE FIRST LOG IN 8449884 - PUBLICREPORTSERVICE FAILS ON EMAIL DELIVERY USING BIP 10.1.3.4.0D+ - NPE 8454858 - DB: XMLP_ADMIN CAN SEE ALL THE FOLDERS BUT ONLY HAS VIEW PERMISSIONS 8458818 - PDFBOOKBINDER FAILS WITH OUTOFMEMORY ERROR WHEN TRYING TO BIND > 1500 PDFS 8463992 - INCORRECT IMPLEMENTATION OF XLIFF SPECIFICATION 8468777 - BI PUBLISHER QUERY BUILDER NOT LOADING SCHEMA OBJECTS 8477310 - QUERY BUILDER NOT WORK WITH SSL ON STANDALONE OC4J 8506701 - POSITIVE PAY FILE WITH OPTIONS NOT CREATING FILE CHECKS OVER 2500 8506761 - PERFORMANCE: PDFBOOKBINDER CLASS TAKES 4 HOURS TO BIND 4000 PAGES 8535604 - NPE WHEN CLICKING "ANALYZER FOR EXCEL" BUTTON IN ALL_* REPORTS 8536246 - REMOVE-PDF-FIELDS DOES NOT WORK WITH CHECKBOXES WITH OPT ARRAY 8541792 - NULLPOINTER EXCEPTION WHILE USING SFTP PROTOCOL 8554443 - LOGGING TIME STAMP IN 10G: THE HOUR PART IS WRONG 8558007 - UNABLE TO LOGIN BIP WITH UNPRIVILEGED USER WHEN XDB IS USED FOR REPORSITORY STOR 8565758 - NEED TO CONNECT IMPERSONATION TO DATA SOURCE WITH PL/SQL FUNCTION 8567235 - 
EFTPROCESSOR AND XDO DEBUG ENABLED CAUSES ORG.XML.SAX.SAXPARSEEXCEPTION 8572216 - EFTPROCESSOR NOT THREAD SAFE - CAUSING CORRUPTED REPORTS TO BE GENERATED 8575776 - LANDSCAPE REPORT ORIENTAION NOT SELECTED WHEN REPORT IS PRINTED WITH PS 8588330 - XLIFF GENERATING WITH WRONG MAXWIDTH ATTRIBUTE IN SOME TRANS-UNITS 8584446 - EFTGENERATOR DOES NOT USE XSLT SCALABILITY - JAVA.LANG.OUTOFMEMORY EXCEPTION 8594954 - ENG: BIP NOTIFY MESSAGE BECOMES ENGLISH 8599646 - ER:EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX 8605110 - PDFSIGNATURE API ENCOUNTERS JAVA.LANG.NULLPOINTEREXCEPTION ON PDF WITH WATERMARK 8660915 - BURSTING WITH DATA TEMPLATE NOT WORKING WITH OPTION: VALUE=FALSE 8660920 - ER: EXTRACT XHTML DATA USING XDODTEXE IN XHTML FORMAT 8667150 - PROBLEM WITH 3RD APPLICATION ABOUT PDF GENERATED WITH BI PUBLISHER 8683547 - "CLICK VIEW REPORT BUTTON TO GENERATE THE REPORT" MESSAGE IS DISPLAYED 8713080 - SEARCH" PARAMETER IS NOT SHOWING NON ENGLISH DATA IN INTERNET EXPLORER 8724778 - EXCEL ANALYZER PARAMETERS DO NOT WORK WITH EXCEL 2007 8725450 - UIX 2.3.6.6 UPTAKE FOR 10.1.3.4.1 8728807 - DYNAMIC JDBC DATA SOURCE WITH PRE-PROCESS FUNCTION BASED ON EXISTING DATA SOURCE 8759558 - XDO TEMPLATE SHOWS CURRENCY IN WRONG FORMAT FOR DUNNING 8792894 - EFTPROCESSOR DOES NOT SUPPORT XSL TEMPLATE AS INPUTSTREAM 8793550 - BIP GENERATES CSV REPORTS OUTPUT FORMAT WITH EXTENTION .OUT NOT .CSV IN EMAIL 8819869 - PERIOD CLOSE VALUE SUMMARY REPORT (XML) RUNNING INTO WARNING 8825732 - MY FOLDERS LINK BROKEN WITH USER NAME THAT INCLUDES A SLASH (/) OBIEE SECURITY 8831948 - TRYING TO GENERATE A SCATTER PLOT USING THE CHART WIZARD 8842299 - SEEDED QUERY ALWAYS RETURNS RESULTS BASED ON FIRST COLUMN 8858027 - NODE.GETTEXTCONTEXT() NOT AVAILABLE IN 10G UNDER OC4J 8859957 - REPORT TITLE ALIGNMENT GOES BAD FOR REPORTS WITH XLIFF FILE ATTACHED 8860957 - ER: IMPROVE PERFORMANCE OF ANSWERS PARAMETERS 8891537 - GETREPORTPARAMETERS WEB SERVICE API ISSUES WITH OAAM REPORTS 8891558 - GETTING SQLEXCEPTION IN GENERATEREPORT WEB SERVICE API ON OAAM REPORTS 8927796 - ER: DYANAMIC DATA SOURCE SUPPORT BY DATA SOURCE NAME 8969898 - BI PUBLISHER WEB SERVICE GETREPORTPARAMETERS DOES NOT TRANSLATE PARAMETER LABEL 8998967 - MULTIPLE XSL PREDICATES ELEMENT[A='A'] [B='B'] CAUSES XML-22019 ERROR 9012511 - SCALABLE MODE IS NOT WORKING IN XMLPUBLISHER 10.1.3.4 9016976 - ER: PRINT XSL-T AND FOPROCESSING TIMING INFORMATION 9018580 - WEB SERVICE CALL FAILS WHEN REPORT INCLUDES SEARCH TYPE 9018657 - JOB FAILS WHEN LOV QUERY CONTAINS BIND VARIABLES :XDO_USER_UI_LOCALE 9021224 - PERFORMANCE ISSUE TO VIEW DASHBOARD PAGE WITH BIP REPORT LINKS 9022440 - ER: SUPPORT "COMB OF N CHARACTERS" FEATURE PDF FORM TEXT FIELDS 9026236 - XPATH DOES NOT WORK CORRECTLY IN 10.1.3.4.1 9051652 - FILE EXTENSION OF CSV OUTPUT IS TXT WHEN IT IS EXPORTED FROM REPORT VIEWER 9053770 - WHEN SENDING CSV REPORT OUTPUT BY EMAIL SOMETIMES IT IS SENT WITHOUT EXTENSION 9066483 - PDFBOOKBINDER LEAVE SOME TEMPORARY FILES AFTER MERGING TITLE PAGE OR TOC 9102420 - USE RELATIVE PATHS IN HYPERLINKS 9127185 - CHECKBOX NOT WORK ON SUB TEMPLATE 9149679 - BASE URL IS NOT PASSED CORRECTLY 9149691 - PROVIDE A WAY TO DISABLE THE ABILITY TO CREATE SCHEDULED REPORT JOB "PUBLIC" 9167822 - NOTIFICATION URL BREAKS ON FOLDER NAMES WITH SPACES 9167913 - CHARTS ARE MISSING IN PDF OUTPUTS WHEN THE DEFAULT OUTPUT FORMAT IS NOT A PDF 9217965 - REPORT HISTORY TAKES LONG TIME TO RENDER THE PAGE 9236674 - BI PUBLISHER PARAMETERS DO NOT CASCADE REFRESH AFTER SECOND PARAMETER 9283933 - OPTION 
TO COLLAPSE PDF OUTPUT BOOKMARKS BY DEFAULT 9287245 - SAVE COMPLETED SCHEDULED REPORTS IN ITS REPORT NAME AND NOT IN A GENERIC NAME 9348862 - ADD FEATURE TO DISABLE THE XSLT1.0-COMPATIBILITY IN RTF TEMPLATE 9355897 - ER: NEED A SAFE DIVIDE FUNCTION 9364169 - UIX 2.3.6.6 PATCH UPTAKE FOR 10.1.3.4.1 9365153 - LEADING WHITESPACE CHARACTERS IN A FIELD TRIMMED WHEN RUN VIEW OR EXPORT TO .CSV 9389039 - LONG TEXT IS NOT WRAPPED PROPERLY IN THE AUTOSHAPE ON RTF TEMPLATE 9475697 - ENH: SUB-TEMPLATE:DYNAMIC VARIABLE WITH PARAMETER VALUE IN CALL-TEMPLATE CLAUSE 9484549 - CHANGE DEFAULT FOR "XSLT1.0-COMPATIBILITY" TO FALSE FOR 10G 9508499 - UNABLE TO READ EXCEL FILE IF MORE THAN 1800 ROWS GENERATED 9546078 - EMAIL DELIVERY INFORMATION SHOULD NOT BE SAVED AND AUTO-FED IN JOB SUBMISSION 9546101 - EXCEPTION OCCURS WHEN SFTP/FTP REMOTE FILENAME DOSE NOT CONTAIN A SLASH '/' 9546117 - SFTP REPORT DELIVERY FAILS WITH NO CLASS DEF FOUND EXCEPTION ON WEBLOGIC 9.2 Following bugs are included in 10.1.3.4.1 and they are only applied to 10.1.3.4.0. 4612604 - FROM EDGE ATTRIBUTE OF HEADER AND FOOTER IS NOT PRESERVED 6621006 - PARAMNAMEVALUE ELEMENT DEFINITION SHOULD HAVE PARAMETER TYPE 6811967 - DATE PARAMETER NOT HANDLING DATE OFFSET WHEN PASSED UPPERCASE Z FOR OFFSET 6864451 - WHEN BIP REPORTS TIMEOUT, THE PROCESS TO LOG BACK IN IS NOT USER FRIENDLY 6869887 - FUSION CURRENCY BRD:4.1.4/4.1.6 OVERRIDINDG MASK /W XSLT._XDOCURMASKS /W SYMBOL 6959078 - "TEXT FIELD CONTAINS COMMA-SEPARATED VALUES" DOESN'T WORK IN CASE OF STRING 6994647 - GETTING ERROR MESSAGE SAYING JOB FAILED EVEN THOUGH WORKS OK IN BI PUBLISHER 7133143 - ENABLE USER TO ENTER 'TODAY' AS VALUE TO DATE PARAMETER IN SCHEDULE REPORT UI 7165117 - QA_BIP_FUNC:-CLOSED LIFE TIME REPORT ERROR MESSAGE IN CMD 7167068 - LEADER-LENGTH OR RULE-THICKNESS PROPRTY IS TOO LARGE 7219517 - NEED EXTENSION FUNCTIONS TO URL ENCODE TEXT STRING. 7269228 - TEMPLATEHELPER PRODUCES A GARBLED OUTPUT WHEN INVOKED BY MULTIPLE THREADS 7276813 - GETREPORTPARAMETERSRETURN ELEMENT SHOULD HAVE DEFAULT VALUE 7279046 - SCHDEULER:UNABLE TO DELETE A JOB USING API 7280336 - ER: BI PUBLISHER - SITEMINDER SUPPORT - GENERIC NON-ORACLE SSO SUPPORT 7281468 - MODIFY SQL SERVER PROPERTIES TO USE HYP DATA DIRECT IN JDBCDEFAULTS.XML 7281495 - PLEASE ADD SUPPORTED DBS TO JDBCDEFAULT.XML AND LIST EACH DB VERSION SEPARATELY 7282456 - FUSION CURRENCY BRD 4.1.9.2: CURRENCY AMOUTS SHOULD NOT BE WRAPPED. 
MINUS SIGN 7282507 - FUSION CURRENCY BRD4.1.2.5:DISPLAY CURRENCY AND LOCALE DERIVED CURRENCY SYMBOL 7284780 - FUSION CURRENCY BRD 4.1.12.4 CORRECTLY ALIGN NEGATIVE CURRENCY AMOUNTS 7306874 - OPP ERROR - JAVA.LANG.OUTOFMEMORYERROR: ZIP002:OUTOFMEMORYERROR, MEM_ERROR 7309596 - SIEBELCRM: BIP ENHANCEMENT REQUEST FOR SIEBEL PARAMETERIZATION 7337173 - UI LOCALE IS ALWAYS REWRITTEN TO EN WHEN MOVE FROM DASHBOARD 7338349 - REG:ANALYZER REPORT WITH AVERAGE FUNCTION FAIL TO RUN FOR NON INTERACTIVE FORMAT 7343757 - OUTPUT FORMAT OF TEMPLATES IS NOT SAVING 7345989 - SET XDK REPLACEILLEGALCHARS AND ENHANCE XSLTWRAPPER WARNING 7354775 - UNEXPECTED BEHAVIOR OF LAYOUT TEMPLATE PARAMETER OF RUNREPORT WEBSERVICES API 7354798 - SEQUENCE ORDER OF PARAMETERS FOR THE RUNREPORT WEBSERVICES API 7358973 - PARALLEL SFTP DELIVERY FAILS DUE TO SSHEXCEPTION: CORRUPT MAC ON INPUT 7370110 - REGN:FAIL WHEN USE JNDI TO XMLDB REPORT REPOSITORY 7375859 - NEW WEBSERVICE REQUIRED FOR RUNREPORT 7375892 - REQUIRE NEW WEBSERVICE TO CHECK IF REPORTFOLDER EXISTS 7377686 - TEXT-ALIGN NOT APPLIED IN PDF IN HEBREW LOCALE 7413722 - RUNREPORT API DOES NOT PASS BACK ANY GENERATED EXCEPTIONS TO SCHEDULEREPORT 7435420 - FUSION CURRENCY: SUPPORT MICROSOFT(JAVA) FORMAT MASK WITH CURRENCY 7441486 - ER: ADD PARAMETER FOR SFTP TO BURSTING QUERY 7458169 - SSO WITH OID LDAP COULD NOT FETCH OID ROLES 7461161 - EMAIL DELIVERY FAILS - DELIVERYEXCEPTION: 0 BYTE AVAILABLE IN THE GIVEN INPU 7580715 - INCORRECT FORMATTING OF DATES IN TIMEZONE GMT+13 7582694 - INVALID MAXWIDTH VALUE CAUSES NLS FAILURES 7583693 - JAVA.LANG.NULLPOINTEREXCEPTION RAISED WHEN GENERATING HRMS BENEFITS PDF REPORT 7587998 - NEWLY CREATED USERS IN OID CANT ACCESS REPORTS UNTILL BI PUBLISHER IS RESTARTED 7588317 - TABLE OF CONTENT ALWAYS IN THE SAME FONT 7590084 - REMOVING THE BIP ENTERPRISE BANNER BUT KEEPING THE REPORTS & SCHEDULES TAB 7590112 - SOMEONE NOT PRIVILEGED ACCESS BIP DIRECTLY SHOULD GET A CUSTOM PAGE 7590125 - AUTOMATING CREATION OF USERS AND ROLES 7597902 - TIMEZONE SUPPORT IN RUNREPORT WEBSERVICE API 7599031 - XML PUBLISHER SUM(CURRENT-GROUP()) FAILS 7609178 - ISSUE WITH TAGS EXTRACTED FROM RTF TEMPLATE 7613024 - HEADER/FOOTER SETTINGS OF RTF TEMPLATE ARE NOT RETAINING IN THE RTF OUTPUT 7623988 - ADD XSLT FUNCTION TO PRINT XDO PROPERTIES 7625975 - RETRIEVING PARAMETER LOV FROM RTF TEMPLATE 7629445 - SPELL OUT A NUMBER INTO WORDS 7641827 - ANALYTICS FROZED AFTER PAGE TAB WHICH INCLUDES [BI PUBLISHER REPORT] WERE CLICKE 7645504 - BIP REPORT FROZED AFTER THE SAME DASHBOARD BIP REPORTS WERE CLICKED SIMULTANEOUS 7649561 - RECEIVE 'TO MANY OPEN FILE HANDLES' ERROR CAUSING BI TO CRASH 7654155 - BIP REMOVES THE FIRST FILE SEPARATOR WHEN RE-ENTER REPOSITORY LOCATION IN ADMIN 7656834 - NEED AN OPTION TO NOT APPEND SCHEMA NAME IN GENERATED QUERY 7660292 - ER: XDOPARSER UPGRADE TO XDK 11G 7687862 - BIP DATA EXTRACTING ENHANCEMENT FOR SIEBEL BIP INTEGRATION 7694875 - ADMINISTRATOR IS SUPER USER WHETHER CONFIGURED MANDATORY_USER_ROLE OR NOT 7697592 - BI PUBLISHER STRINGINDEXOUTOFBOUNDSEXCEPTION WHEN PRINTING LABEL FROM SIM 7702372 - ARABIC/ENGLISH NUMBER/DATE PROBLEM, TOTAL PAGE NUMBER NOT RENDERED IN ENGLISH 7707987 - OUTOFMEMORY BURSTING A BI PUBLISHER REPORT BI SERVER DATA SOURCE 7712026 - ER: CHANGE CHART OUTPUT FORMAT TO PNG IN HTML OUTPUT 7833732 - THE 'SEARCH' PARAMETER TYPE CANNOT BE USED IN IE6 UNDER WINDOWS 8214839 - ER: INCREASE COLUMN SIZE IN SCHEDULER TABLE XMLP_SCHED_JOB 8218271 - ISSUES WHILE CONVERTING EXCEL TO XML 8218452 - BI PUBLISHER STANDALONE : GRAPHICS 
WITHOUT COLORS IF MORE THAN 33 PAGES 8250980 - USER WITH XMLP_ADMIN RESPONSIBILITY IS NOT ABLE TO EDIT REPORT IN BIP 8262410 - IMPOSSIBLE TO PRINT PDF CREATED BY BI PUBLISHER VIA 3RD PARTY PDF APPLICATION 8274369 - QA: CANNOT DELETE EMAIL SERVER UNDER DELIVERY CONFIGURATION 8284173 - FO:VISIBILITY="HIDDEN" DOESN'T WORK WITH FO:PAGE-NUMBER-CITATION 8288421 - THE VALUE OF VIEW BY GO BACK TO MY HISTORY IN SCHEDULES TAB 8299212 - REG: THE SPECIFICAL BI USER DIDN'T GET THE CORRECT REPORT HISTORY 8301767 - ORA-01795 ERROR OCCURED AFTER ACCESSING DASHBOARD PAGE WHICH INCLUDES BIP 8304944 - ADD SIEBEL SECURITY MODEL IN BI PUBLISHER 10.1.3.4.1 8312814 - QA:HOT:OBI SERVER JDBC DRIVER BIJDBC14.JAR IN XMLPSERVER.WAR IS INCORRECT 8323679 - BI PUBLISHER SENDS HTML REPORT TO OUTLOOK CLIENT AS ATTACHMENT NOT INLINE 8370794 - HISTORY OF COMPLETED SCHEDULER JOBS STILL SHOW ONE AS RUNNING ON CLUSTER ENV 8390970 - OUT OF MEMORY EXCEPTION RAISED, WHILE SAVING THE DATA 8393681 - CHECKBOX IS SHOWING UP AS CHECKED WHEN DATA IS NOT CHECKED VALUE 8725450 - UIX 2.3.6.6 UPTAKE FOR 10.1.3.4.1

UIX fixes:
6866363 - SUPPORT FOR JAVA DATE FORMAT AS PER JDK 1.4 AND ABOVE
6829124 - DATE PARAMETER NOT HANDLING DATE OFFSET AS PER JAVA STANDARDS

----------------------------
INSTALLATION FOR ENTERPRISE
----------------------------
Upgrade from 10.1.3.4.0d (patch 8284524, 8398280) and 10.1.3.4.1 does not require step 8 and step 9.

1 - Make a backup copy of the xmlp-server-config.xml file located in the <application installation>/WEB-INF/ directory, where your application server unpacked the BI Publisher war or ear file. Example: In an Oracle AS/OC4J 10.1.3 deployment, the location is <ORACLE_HOME>/j2ee/home/applications/xmlpserver/xmlpserver/WEB-INF/xmlp-server-config.xml
2 - Back up all the directories under the BI Publisher repository (for example: {Oracle_Home}/xmlp/XMLP).
3 - If you are using Scheduling, back up your existing BI Publisher Scheduler schema.
4 - Shut down BI Publisher.
5 - Undeploy the BI Publisher application ("xmlpserver") from your J2EE application server. See your application server documentation for instructions on how to undeploy an application.
6 - Deploy the 10.1.3.4 xmlpserver.ear or xmlpserver.war to your application server. See the "Manually Installing BI Publisher to Your J2EE Application Server" section of the BI Publisher Installation Guide for guidelines for your application server type.
7 - Copy the saved backup copy of the xmlp-server-config.xml file from step 1 to the newly created BI Publisher <application installation>/WEB-INF/ directory, where your application server unpacked the BI Publisher war or ear file. Example: In an Oracle AS/OC4J 10.1.3 deployment, the location is <ORACLE_HOME>/j2ee/home/applications/xmlpserver/xmlpserver/WEB-INF/xmlp-server-config.xml
8 - Copy ssodefaults.xml to the following directory, and replace [host]:[port] with your server's information. Default values for other properties can be updated depending on your configuration: <Existing Repository>\XMLP\Admin\Security
9 - Copy database-config.xml to the following directory: <Existing Repository>\XMLP\Admin\Scheduler
10 - Restart the xmlpserver application or the application server.

----------------------------------
IBM WEBSPHERE 6.1 DEPLOYMENT NOTE
----------------------------------
When users fail to log on to BI Publisher with "HTTP 500 Internal Server Error" on WebSphere 6.1, you must change the Class Loader configuration to avoid the error.
(bug 7506253 - XMLPSERVER WON'T START AFTER DEPLOYMENT TO WEBSPHERE 6.1)

SystemErr.log:
java.lang.VerifyError: class loading constraint violated (class: oracle/xml/parser/v2/XMLNode method: xdkSetQxName(Loracle/xml/util/QxName;)V) at pc: 0 ....

Class Loader Configuration Steps:
1 - Log in to the WebSphere Admin console. Click Enterprise Applications under the Applications menu.
2 - Click the xmlpserver application name in the list.
3 - Select "Class loading and update detection".
4 - Update the class loader configuration as follows in Class Loader -> General Properties:
    * Polling interval for updated files: [0] Seconds
    * Class loader order: [x] Classes loaded with application class loader first
    * WAR class loader policy: [x] Single class loader for application
5 - Apply this change and save the new configuration.
6 - Restart the xmlpserver application.

Please refer to the WebSphere 6.1 documentation for more details:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.base.doc/info/aes/ae/trun_classload_entapp.html

-------------------------------------------------------
Oracle WebLogic Server 11g R1 (10.3.1) Deployment NOTE
-------------------------------------------------------
If you are deploying BI Publisher to WebLogic Server 10.3.1, you must add the following setting at startup for the domain that contains the BI Publisher server, in the /weblogic_home/user_projects/domains/base_domain/bin/startWebLogic.sh script:

-Dtoplink.xml.platform=oracle.toplink.platform.xml.jaxp.JAXPPlatform

This setting is required to enable BI Publisher to find the TopLink JAR files to create the Scheduler tables.
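As an illustration only (the domain path is the one quoted above; where exactly the option is set can vary between environments), the property is typically appended to JAVA_OPTIONS near the top of that startWebLogic.sh script:

# /weblogic_home/user_projects/domains/base_domain/bin/startWebLogic.sh
JAVA_OPTIONS="${JAVA_OPTIONS} -Dtoplink.xml.platform=oracle.toplink.platform.xml.jaxp.JAXPPlatform"
export JAVA_OPTIONS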

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself. This post is designed to help beginners to the Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here].

ADF Business Components - triggers, default table values and instead-of views

Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects. This is where unexpected problems can sneak in.

The crime

Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error:

JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x]

...where x is the primary key value of the row at hand. In a production environment with multiple users this error may be legit: one of the other users has updated the row since you queried it. Yet in a development environment this error is just plain confusing. If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users. It must be the phantom ADF developer! [insert dramatic music here]

The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page.

A false conclusion

What can possibly cause this issue if it isn't our phantom ADF developer? Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle tier by a user? How could our phantom ADF developer even take out a lock if this is the case? Maybe ADF has a bug; maybe ADF isn't implementing record locking at all? Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record; why do we see JBO-25014? Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database.

First we need to verify ADF's locking strategy. It is determined by the Application Module's jbo.locking.mode property. The default (as of JDev 11.1.1.4.0, if memory serves me correctly) and recommended value is optimistic, and the other valid value is pessimistic.

Next we need a mechanism to check that ADF is issuing the LOCK statements to the database. We could ask the DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console. At runtime this option turns on instrumentation within the ADF BC layer which, among a lot of extra detail displayed in the log window, will show the actual SQL statements issued to the database, including the LOCK statement we're looking to confirm.
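As a sketch only (not from the original article): the flag is a standard JVM system property, so it can be added to the Java Options of the project's run configuration in JDeveloper, or exported before starting whichever server hosts the ADF BC layer, for example:

# pass the ADF BC instrumentation flag to the JVM at startup
JAVA_OPTIONS="${JAVA_OPTIONS} -Djbo.debugoutput=console"
export JAVA_OPTIONS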
Setting our locking mode to pessimistic, opening the Business Component Browser or a JSF page that allows us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see, among others, the following log entries:

[421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[423] Where binding param 1: 1206

As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database. Later, when we commit the record, we see:

[441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[445] Update binding param 1: N
[446] Where binding param 2: 1206
[447] BookingsView1 notify COMMIT ...
[448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[449] EntityCache close prepared statement

....and as a result the changes are saved to the database, and the lock is released.

Let's see what happens when we use the optimistic locking mode, this time to change the same BOOKINGS record CHARGEABLE column again. As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However, when we save our record by issuing a commit, the following is recorded in the logs:

[509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[513] Where binding param 1: 1205
[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[517] Update binding param 1: Y
[518] Where binding param 2: 1205
[519] BookingsView1 notify COMMIT ...
[520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[521] EntityCache close prepared statement

Again, even though we're seeing the mid-tier delay the LOCK statement until commit time, it is in fact occurring (line 512) and released as part of the commit (line 519). Therefore, with either optimistic or pessimistic locking, a lock is indeed issued.

Our conclusion at this point must be that, unless the LOCK statement somehow never really hits the database (unlikely), or the database has a bug (even less likely), ADF does in fact take out a lock on the record before allowing the current user to update it. So there's no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error.

Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.

Who is the phantom?

At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't understand yet what's causing it.
This leads on to two further questions: how does ADF know another user has changed the row, and what's been changed anyway?

To answer the first question, the Fusion Guide's section 4.10.11, "How to Protect Against Losing Simultaneously Updated Data", which details the Entity Object Change-Indicator property, gives us the answer:

"At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database."

The guide further suggests how to make this solution more efficient:

"You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row. ... To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values."

We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates; rather, it keeps a copy of the original record fetched, separate from the user-changed version of the record, and it compares the original record against the one in the database when the lock is taken out. If the values don't match, be it with the default compare-all-columns behaviour or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error.

This leaves one last question. Now we know the mechanism under which ADF identifies a changed row; what we don't know is what's changed and who changed it.

The real culprit

What's changed? We know the record in the mid-tier has been changed by the user; however, ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed. This leaves us to conclude the database record has changed, but how and by whom? There are three potential causes.

Database triggers

The database trigger, among other uses, can be configured to fire PL/SQL code on a database table insert, update or delete. In particular, in an insert or update the trigger can override the value assigned to a particular column. The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. Why this causes the issue in our ADF case is that when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database. However, the cached record is instantly out of date, as the database trigger has modified the record that was actually written to the database. Thus, when we update for a second time the record we just inserted or updated, ADF compares its original copy of the record to that in the database, and it detects the record has been changed, giving us JBO-25014 (a minimal, purely illustrative sketch of such a trigger appears at the end of this article). This is probably the most common cause of this problem.

Default values

A second reason this issue can occur is another database feature, default column values.
When creating a database table, the schema designer can define default values for specific columns. For example, a CREATED_DATE column could default to SYSDATE, or a flag column to Y or N. Default values are only used by the database when a new record is inserted and no value is supplied for that column; the database in this case will populate the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario.

Instead-of-trigger views

I must admit I haven't double-checked this scenario, but it seems plausible: the Oracle database's instead-of-trigger view (sometimes referred to as an instead-of view). A view in the database is based on a query and, depending on the query's complexity, may support insert, update and delete functionality to a limited degree. In order to support fully insertable, updatable and deletable views, Oracle introduced the instead-of view, which gives the view designer the ability to define not only the view query but also a set of programmatic PL/SQL triggers in which the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer a very powerful feature, it can cause issues for our ADF application. On inserting or updating a record in the instead-of view, the record and its data that go in are not necessarily the data that comes out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data.

Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database. The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion. Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if two users modify the same record.

At this point, under project timeline pressure, the obvious fix for developers is to drop both the database triggers and the default values from the underlying tables. However, we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems. Dropping the database triggers or default values that existing Oracle Forms applications assume and require to be in place could cause unexpected behaviour and bugs in the Forms applications. Proficient software engineers will recognize that such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise, which is not ideal.

Solving the mystery once and for all

Luckily ADF has built-in functionality to deal with this issue, which is no surprise, as Oracle, the author of ADF, also built the database and is fully aware of the Oracle database's feature set. At the Entity Object attribute level there are the Refresh After Insert and Refresh After Update properties.
Simply selecting these instructs ADF BC, after inserting or updating a record to the database, to expect the database to modify the attributes in question, and to read a copy of the changed attributes back into its cached mid-tier record. Thus the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed" is no longer an issue.

[Post edit - as per the comment from Oracle's Steven Davelaar below, as he correctly points out, the above solution will not work for instead-of-trigger views, as it relies on the SQL RETURNING clause, which is incompatible with this type of view]

Alternatively, you can set the Change Indicator on one of the attributes. This will work as long as the relating column for the attribute in the database itself isn't inadvertently updated. In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back off, the original issue will return.
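To make the database-trigger cause above concrete, here is a minimal, purely illustrative sketch (the connect string and trigger are hypothetical; the BOOKINGS columns are the ones shown in the log output earlier). Any row touched by such a trigger will no longer match ADF's cached copy unless Refresh After Insert / Refresh After Update is enabled:

sqlplus scott/tiger@//dbhost:1521/orcl <<'SQL'
-- A trigger that silently rewrites a column on every insert/update.
-- ADF caches the values it wrote and never sees this change, so the next
-- update of the same row from the same session raises JBO-25014.
CREATE OR REPLACE TRIGGER bookings_biu
BEFORE INSERT OR UPDATE ON bookings
FOR EACH ROW
BEGIN
  :NEW.made_by := USER;  -- overrides whatever the mid-tier supplied
END;
/
-- A column DEFAULT has the same effect, but only on the insert-commit-update path:
-- ALTER TABLE bookings MODIFY (status DEFAULT 'Y');
SQL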

    Read the article

  • CodePlex Daily Summary for Friday, October 28, 2011

    CodePlex Daily Summary for Friday, October 28, 2011Popular ReleasesWriteableBitmapEx: WriteableBitmapEx 0.9.8.5: Added a Rotate method for arbitrary angles (RotateFree). Provided by montago. See http://writeablebitmapex.codeplex.com/workitem/15214 Added Nokola's anti-aliased line drawing implementation. http://nokola.com/blog/post/2010/10/14/Anti-aliased-Lines-And-Optimizing-Code-for-Windows-Phone-7e28093First-Look.aspx Updated the Windows Phone project to WP 7.1 Mango. Added an extension file for the Windows Phone specific extensions and added the SaveToMediaLibrary extension including support fo...Duckworth Lewis Professional Edition Calculator: DLcalc 3.0: DLcalc 3.0 can perform Duckworth/Lewis Professional Edition calculations 100% accurately. It also produces over-by-over and ball-by-ball PAR score tables.Media Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Movies Fixed: Fanart and poster scraping issues TV Shows (Re)Added: Rebuild single show Fixed: Issue when shows are moved from original location Ability to handle " for actor nicknames Crash when episode name contains "<" (does not scrape yet) Clears fanart when switch...patterns & practices - Unity: Unity 3.0 for .NET4.5 Preview: The Unity 3.0.1026.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. The major changes include: Unity projects updated to target .NET 4.5. Dynamic build plans modified to use compiled lambda expressions instead of Reflection.Emit Converting reflection to use the new TypeInfo for reflection. Projects updated to work with the Microsoft Visual Studio 2011 Preview Notes/Known Issues: The Microsoft.Practices.Unity.UnityServiceLocator class cannot be use...Catel - WPF, Silverlight and Windows Phone 7 MVVM toolkit: 2.3: Catel history ============= (+) Added (*) Changed (-) Removed (x) Error / bug (fix) For more information about issues or new feature requests, please visit: http://catel.codeplex.com Documentation can be found at: http://catel.catenalogic.com ********************************************************** =========== Version 2.3 =========== Release date: ============= 2011/10/27 Added/fixed: ============ (+) Added new (non-generic) overloads in ServiceLocator for registering types (+) WP7 ...Managed Extensibility Framework: MEF 2 Preview 4: Detailed information on this release is available on the BCL team blog.AcDown????? - Anime&Comic Downloader: AcDown????? v3.6: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6?? ??“????”...SQL Backup Helper: SQL Backup Helper v1.0: Version 1.0 Changes Description added to settings table Automatic LOG files truncation added to BACKUP stored procedure Only database in status ONLINE will be backed upMySemanticSearch Sample: MySemanticSearch Installer (CTP3): Note: This release of the MySemanticSearch Sample works with SQL Server 2012 CTP3. 
Installation InstructionsDownload this self-extracting archive to your computer Execute the self-extracting archive Accept the licensing agreement Choose a target directory on your computer and extract the files Open Windows PowerShell command prompt with elevated priveleges Execute the following command: Set-ExecutionPolicy Unrestricted Close the Windows PowerShell command prompt Run C:\MySema...Path Copy Copy: 8.0: New version that mostly adds lots of requested features: 11340 11339 11338 11337 This version also features a more elaborate Settings UI that has several tabs. I tried to add some notes to better explain the use and purpose of the various options. The Path Copy Copy documentation is also on the way, both to explain how to develop custom plugins and to explain how to pre-configure options if you're a network admin. Stay tuned.MVC Controls Toolkit: Mvc Controls Toolkit 1.5.0: Added: The new Client Blocks feaure of Views A new "move" js method for the TreeViews The NewHtmlCreated js event to the DataGrid Improved the ChoiceList structure that now allows also the selection list of a dropdown to be chosen with a lambda expression Improved the AcceptViewHintAttribute controller filter. Now a client can specify not only the name of a View or Partial View it prefers, but also to receive just the rough data in Json format. Fixed: Issue with partial thrust Cl...Free SharePoint Master Pages: Buried Alive (Halloween) Theme: Release Notes *Created for Halloween, you will find theme file, custom css file and images. *Created by Al Roome @AlstarRoome Features: Custom styling for web part Custom background *Screenshot https://s3.amazonaws.com/kkhipple/post/sharepoint-showcase-halloween.pngDevForce Application Framework: DevForce AF 2.0.3 RTW: PrerequisitesWPF 4.0 Silverlight 4.0 DevForce 2010 6.1.3.1 Download ContentsDebug and Release Assemblies API Documentation Source code License.txt Requirements.txt Release HighlightsNew: EventAggregator event forwarding New: EntityManagerInterceptor<T> to intercept EntityManger events New: IHarnessAware to allow for ViewModel setup when executed inside of the Development Harness New: Improved design time stability New: Support for add-in development New: CoroutineFns.To...NicAudio: NicAudio 2.0.5: Minor change to accept special DTS stereo modes (LtRt, AB,...)Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 – September 2011 Upgraded the tools tools to support the Windows Phone Developer Tools RTW Update SQL Azure only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package) Changed Shared Access Signature service interface to support more operations Refactored Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...xUnit.net Contrib: xunitcontrib-resharper 0.4.4 (dotCover): xunitcontrib release 0.4.4 (ReSharper runner) This release provides a test runner plugin for Resharper 6.0 RTM, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release addresses the following issues:Support for dotCover code coverage 4132 Note that this build work against ALL VERSIONS of xunit. The files are compiled against xunit.dll 1.8 - DO NOT REPLACE THIS FILE. 
Thanks to xunit's version independent runner system, this package can r...Ribbon Editor for Microsoft Dynamics CRM 2011: Ribbon Editor (0.1.2122.266): Added CodePlex and PayPal links New icon Bug fix: can't connect to an IFD deployment when the discovery service url has been customizedSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.921.340): Added CodePlex and PayPal links New iconDotNet.Framework.Common: DotNet.Framework.Common 4.0: ??????????,????????????XML Explorer: XML Explorer 4.0.5: Changes in 4.0.5: Added 'Copy Attribute XPath to Address Bar' feature. Added methods for decoding node text and value from Base64 encoded strings, and copying them to the clipboard. Added 'ChildNodeDefinitions' to the options, which allows for easier navigation of parent-child and ID-IDREF relationships. Discovery happens on-demand, as nodes are expanded and child nodes are added. Nodes can now have 'virtual' child nodes, defined by an xpath to select an identifier (usually relative to ...New Projects#foo REST: HashFoo.REST is a simple message based layer that sits on top of ASP.NET MVC. Allows for service development based on POCO message objects and handlers, while still using the ever improving ASP.NET MVC infrastructure.Activity Tracking Log: The Activity Tracking Log is a pluggable component intended to provide user and system activity tracking functions for ASP.Net/MVC applications. Represents a set of HTTP handlers and modules that expose activity analytic reports and client side API. Easy to configure and use.ACTLAPoC: ACTLAPoCAnalysis of algorithms: This is a collective repository for a few academic projects.Anomaly: Anomaly is an application that can be used for generating one or more passwords, with varying levels of complexity. Archer: A shopping bags app for Windows Phone 7.5 code name "Archer".ASMX WebService Logger: This project provide a library to provide asp.net asmx web service logging mechanism, include when who access which web method, the detailed request/respond soap content. Build Versioning Services: This project is essentially an assembly that contains a WCF Service and a set of accompanying MSBuild tasks. The service provides functionality to maintain version numbers for applications separately from the source code of the application. The MSBuild tasks provide the functionality the WCF service provides to build scripts.Church CRM - Bookstore: The Congregations Relationship management Suite is a suite of tools designed to provide web based services and organizational tools geared toward online Congregation and Church resource management and social interaction. Church CRM - Sermons: The Congregations Relationship management Suite is a suite of tools designed to provide web based services and organizational tools geared toward online Congregation and Church resource management and social interaction. DasLabs: das labsdedu: Dedu is a SNS&Fourm web site that share the material about programming technique Find Me XML: Help people find your project. Write a concise, reader-focused summary. Example: <project name> makes it easier for <target user group> to <activity>. You'll no longer have to <activity>. It's developed in <programming language>.GEMySiteLockDown: This nifty little application generates a batch file that, when run, will set the LOCK state of a chosen Site Collection and sub sites to either NoAdditions or Readonly or NoAccessHEP Linux Access: Tools that make it simpler to access linux computer from Windows. 
Especially geared towards Fermilab and CERN computing.iCycle: Simple cycle application that allows the tracking of exercises and routes.ImportToTS: Import to TSLegal Dashboard: legal dashboardLWB-DOTNET: This project contains ASP.NET Ajax support for the AJAX-Lightweight Binder. Mass Mailing: Mass Mailing is an application to allow for mailing of a single email to a large email list. It is written in c# with a WPF front end. Allows for attachments, multiple smtp servers, and burst control.MineCobalt: MineCobalt is a administration system for the MineCraft server.MLBLDetector: MLBLDetectormokodownloader: this is a simple tool used to download images from websitenetgod: netgod opensource projectNews Feed: News Feed is a Windows Phone App which makes it easier for users to check out the latest news, sports and technology headlines and opinions from various customizable news sources (default CNN, Guardian, Daily Mail, Ta Nea). Uses RSS feeds. Free to download and distribute.P I: P. I.Precious Metals Pricing - nopCommerce Plugin: This is a plugin for the nopCommerce 2.x e-commerce platform. It allows product pricing to be based on the changing market values of precious metals. This is useful for companies that sell items such as coins or jewelry where the sale price fluctuates with market trends.produksi: A production ticket system.Prose: Prose is an playground for an experimental JavaScript like language compiler. Eventually it will implement 0-CFA, CFA2, and a Tracing JITRazor Generator Contrib: This project extends the capabilities of the PrecompiledMvcViewEngine (part of Razor Generator project). It supports precompiled Razor views in multiple assemblies.Rootfus: ROOT Surface. An attempt to make the final step in a histogram based analysis visual and easy. This has been attempted in the field before but has always failed - scripts and text seem to be a more natural way to do this. This project is an experiment to see if it is possible to do it another way with some basic visual programming. Based on the ROOT tool (http://root.cern.ch). This is aimed squarly at people who use ROOT as a final analysis tool.SimpleWebService: SimpleWebServicesmetgbr: Autohandel smetgbrSound Recorder for Windows Phone 7: The Sound Recorder App for WP7 allows you to record, save and play sound on your windows phone device. Co - developped by Dimitris Gkanatsios and Konstantinos Kyriakopoulos. Free to download and distribute. Don't hesitate to send me your comments/questions/angry complaints.SQL Refactor: A tool to aid in the refactoring of large SQL statements. Provides a comparison between the original query and the refactored one as well as maintaining a history of the iterations.stargame: reserch game projectTask Parallel Library Helper: TPLHelper is a helper library for the Task Parallel Library in .NET 4.0. It aims to add the ability to queue tasks with dependancies and have them added to the scheduler once all dependant taks are completed, it will also have some common usage such as time taken.TeamView: Team view is a tool to help the project manager, team members take a better view to the view of progress, quality in the project.View weather forecasts for multiple cities on mobile devices: View current weather temperature, low & high, and icon for weather condition for multiple cities in a single page on mobile devices. 
Uses ASP.NET WebForms, jQuery Mobile.Web Service App.: This program is an simple example web service application.Windows Azure Storage Mapper: a library for azure storage WolfGenerator: Generation code on script-like language with some intresting features.Xaml to Code Converter: This tool converts xaml designer text in normal C# code.XAML Toolkit: This will eventually be a toolkit that supports WPF, WinRT and possibly Silverlight.????: ??,?????、????、????????

    Read the article

  • CodePlex Daily Summary for Thursday, August 01, 2013

    CodePlex Daily Summary for Thursday, August 01, 2013Popular ReleasesmyCollections: Version 2.7.11.0: New in this version : Added Copy To functionality (useful if you have NAS or media player). You can now use you web cam to scan UPC Code or take picture for cover. Added Persian Language Improved Ukrainian Translation Improved Pdf export Improved Cover Flow Improved TMDB Information. Improved Zappiti and Dune player compatibility Improved GameDB provider. Fix IMDB provider. Fix issue with rating BugFixing and performance improvement.wsubi: wsubi-1.6: Enhancements included in this release: Implement issue #6 - Add a 'query' Command for .sql scripts You can now run T-SQL scripts with the application. Results can be out for review to the console or saved to a file. For more details on running queries, check the updated Documentation page.SuperSocket, an extensible socket application framework: SuperSocket 1.6 beta 2: The changes included in this release: introduced ServerManager improved the code about process level isolation added new configuration attribute "storeLocation" for the certificate node added sendTimeOut support for async sending fixed a bug that when you close a session, the data is being sent won't be sent suppressed the socket error 10060 fixed the default clear idle session parameters in the config model class added configuration section detecting and give proper exception ...GoAgent GUI: GoAgent GUI 1.4.0: ??????????: Windows XP SP3 Windows Vista SP1 Windows 7 Windows 8 Windows 8.1 ??Windows 8.1???????????? ????: .Net Framework 4.0 http://59.111.20.19/download/17718/33240761/4/exe/69/54/177181/dotNetFx40Fullx86x64.exe Microsoft Visual C++ 2010 Redistributable Package http://59.111.20.23/download/4894298/8555584/1/exe/28/176/1348305610524944/vcredistx86.exe ???????????????。 ?????Windows XP?Windows 7????,???????????,?issue??????。uygw@outlook.comMVC Generator: MVC Generator Visual Studio Addin: This is the latest build, this includes the MVCGenerator.dll, and Visual Studio Addin file. See the home page of this project for installation instructions.nopCommerce. Open source shopping cart (ASP.NET MVC): nopCommerce 3.10: Highlight features & improvements: • Performance optimization. • New more user-friendly product/product-variant logic. Now we'll have only products (simple and grouped). • Bundle products support added. • Allow a store owner to associate product image for product variant attribute values. To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).ExtJS based ASP.NET Controls: FineUI v3.3.1: ??FineUI ?? ExtJS ??? ASP.NET ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License v2.0 ?:ExtJS ?? GPL v3 ?????(http://www.sencha.com/license)。 ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ FineUI ???? ExtJS ????????,???? ExtJS ?,???????????ExtJS?: 1. ????? FineUI ? ExtJS ?:http://fineui.com/bbs/fo...AutoNLayered - Domain Oriented N-Layered .NET 4.5: AutoNLayered v1.0.5: - Fix Dtos. Abstract collections replaced by concrete (correct serialization WCF). - OrderBy in navigation properties. - Unit Test with Fakes. - Map of entities/dto moved to application services. - Libraries updated. 
Warning using Fakes: http://connect.microsoft.com/VisualStudio/feedback/details/782031/visual-studio-2012-add-fakes-assembly-does-not-add-all-needed-referencesPath Copy Copy: 11.1: Minor release with two new features: Submenu's contextual menu item now has an icon next to it Added reference to JavaScript regular expression format in Settings application Since this release does not have any glaring bug fixes, it is more of an optional update for existing users. It depends on whether you want to be able to spot the Path Copy Copy submenu more easily. I recommend you install it to see if the icon makes sense. As always, please don't hesitate to leave feedback via Discus...Lib.Web.Mvc & Yet another developer blog: Lib.Web.Mvc 6.3.0: Lib.Web.Mvc is a library which contains some helper classes for ASP.NET MVC such as strongly typed jqGrid helper, XSL transformation HtmlHelper/ActionResult, FileResult with range request support, custom attributes and more. Release contains: Lib.Web.Mvc.dll with xml documentation file Standalone documentation in chm file and change log Library source code Sample application for strongly typed jqGrid helper is available here. Sample application for XSL transformation HtmlHelper/ActionRe...Media Companion: Media Companion MC3.574b: Some good bug fixes been going on with the new XBMC-Link function. Thanks to all who were able to do testing and gave feedback. New:* Added some adhoc extra General movie filters, one of which is Plot = Outline (see fixes above). To see the filters, add the following line to your config.xml: <ShowExtraMovieFilters>True</ShowExtraMovieFilters>. The others are: Imdb in folder name, Imdb in not folder name & Imdb not in folder name & year mismatch. * Movie - display <tag> list on browser tab ...OfflineBrowser: Preview Release with Search: I've added search to this release.VG-Ripper & PG-Ripper: VG-Ripper 2.9.46: changes FIXED LoginFIM 2010 GoogleApps MA: GoogleAppsMA1.1.2: Fixed bug during import. - Fixed following bug. - In some condition, 'dn is missing' error occur.Install Verify Tool: Install Verify Tool V 1.0 With Client: Use a windows service to do a remote validation work. QA can use this tool to verify daily build installation.C# Intellisense for Notepad++: 'Namespace resolution' release: Auto-Completion from "empty spot" Add missing "using" statementsOpen Source Job board: Version X3: Full version of job board, didn't have monies to fund it so it's free.DSeX DragonSpeak eXtended Editor: Version 1.0.116.0726: Cleaned up Wizard Interface Added Functionality for RTF UndoRedo IE Inserting Text from Wizard output to the Tabbed Editor Added Sanity Checks to Search/Replace Dialog to prevent crashes Fixed Template and Paste undoredo Fix Undoredo Blank spots Added New_FileTag Const = "(New FIle)" Added Filename to Modified FileClose queries (Thanks Lothus Marque)Math.NET Numerics: Math.NET Numerics v2.6.0: What's New in Math.NET Numerics 2.6 - Announcement, Explanations and Sample Code. New: Linear Curve Fitting Linear least-squares fitting (regression) to lines, polynomials and linear combinations of arbitrary functions. Multi-dimensional fitting. Also works well in F# with the F# extensions. New: Root Finding Brent's method. ~Candy Chiu, Alexander Täschner Bisection method. ~Scott Stephens, Alexander Täschner Broyden's method, for multi-dimensional functions. 
~Alexander Täschner ...mojoPortal: 2.3.9.8: see release notes on mojoportal.com https://www.mojoportal.com/mojoportal-2398-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you to use .NET 4 with either .NET 4 or ideally .NET 4.5 hosting, we will probably drop support for .NET 3.5 in the near future. The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code and are not intended for use in Visual Studio. To downl...New Projects.NET Micro Framework for STM32F4 with GCC support: The project adds GCC compiler support to .NET Micro Framework code for the STM32F4 family of ARM Cortex-based MCUs originally created by Oberon microsystems.AD Group Comparison Tool: This tool scans Active Directory for groups with matching or empty membership lists, identifying redundant groups that can possibly be eliminated.AdvGenWebRSSReader: This project is to build a portal to manage the rss feed.Best Framework: Using This framework most of .net functions that needs a lot of code writing became available almost in 1 line(s) of code. CargoOnLine: ????????cRumble Framework: C# Reporting Framework for support different technologies and a uncoupled reports.DbDataSource: DbDataSource is a ASP.NET WebForms DataSource control for simple use with EntityFramework CodeFirst.Excel Powershell Library: ExcelPSLib is a PowerShell Module that allows easy creation of XLSX file by using the EPPlus 3.1 .Net LibraryGenProj, a tool for automatic maintenance of Visual Studio .csproj files: Creates a .csproj by scanning folders for files and injecting XML into a previously created .csproj template. Uses a configuration file to control behavior.HLSLBuild: fxc integration for MSBuild and Visual Studio.HotelManagerByHuaibao: Here is a hotel management system on developing.ItemMover ..:: I Like SharePoint ::..: The ItemMover offers your users the ability to move list items between any folder from content type “Folder Content Types”. JadeTours: This is the website for JadeToursNaughty Dog Texture Viewer: A simple program that can view textures inside .pak files from games like The Last of Us and Uncharted series.Number Guessing Game: This is my version of the number guessing game. The computer will generate a number between 1-20 and the goal is to guess that number.prakark07312013Git01: *bold* _italics_ +underline+ ! Heading 1 !! Heading 2 * Bullet List ** Bullet List 2 # Number List ## Number List 2 [another wiki page] [url:http://www.example.prakark07312013Hg01: https://ajaxcontroltoolkit.codeplex.com/workitem/list/basicprakark07312013TFS01: *bold* _italics_ +underline+ ! Heading 1 !! 
Heading 2 * Bullet List ** Bullet List 2 # Number List ## Number List 2 [another wiki page] [url:http://www.example.PythonCode: Some python code.S1ToKindle: send s1 post to kindleSAML2: An implementation of the SAML 2 specification for .NET.SAML2.Logging.CommonLogging: A logging provider for SAML2 based on Common.Logging.SAML2.Logging.Log4Net: A logging provider for SAML2 based on Log4Net.SAML2.Profiles.DKSAML20: Extension profile for SAML2 which provides OASIS SAML 2.0 validation for the DK specification.SharePoint 2010 Export User Information to Text file, SQL server: This project will help you to export general information about uses from sharepoint 2010 to text file, which you can export into any other database like SQLtestdd07312013git: ftestdd07312013tfs: hjVarian Developers Forum: This project supports Varian collaborators and customers in their work with Varian Medical Systems public APIs.vb??????: vb??????Vinyl: Vinyl is a way to electronically keep track of your record collection.Visual Basic Web Browser: Web browser in visual basicWorkerHourManager: hour managment system,ASP,.NET,college

    Read the article

  • Connecting SceneBuilder edited FXML to Java code

    - by daniel
    Recently I had to answer several questions regarding how to connect an UI built with the JavaFX SceneBuilder 1.0 Developer Preview to Java Code. So I figured out that a short overview might be helpful. But first, let me state the obvious. What is FXML? To make it short, FXML is an XML based declaration format for JavaFX. JavaFX provides an FXML loader which will parse FXML files and from that construct a graph of Java object. It may sound complex when stated like that but it is actually quite simple. Here is an example of FXML file, which instantiate a StackPane and puts a Button inside it: -- <?xml version="1.0" encoding="UTF-8"?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <?import javafx.scene.paint.*?> <StackPane prefHeight="150.0" prefWidth="200.0" xmlns:fx="http://javafx.com/fxml"> <children> <Button mnemonicParsing="false" text="Button" /> </children> </StackPane> ... and here is the code I would have had to write if I had chosen to do the same thing programatically: import javafx.scene.control.*; import javafx.scene.layout.*; ... final Button button = new Button("Button"); button.setMnemonicParsing(false); final StackPane stackPane = new StackPane(); stackPane.setPrefWidth(200.0); stackPane.setPrefHeight(150.0); stacPane.getChildren().add(button); As you can see - FXML is rather simple to understand - as it is quite close to the JavaFX API. So OK FXML is simple, but why would I use it?Well, there are several answers to that - but my own favorite is: because you can make it with SceneBuilder. What is SceneBuilder? In short SceneBuilder is a layout tool that will let you graphically build JavaFX user interfaces by dragging and dropping JavaFX components from a library, and save it as an FXML file. SceneBuilder can also be used to load and modify JavaFX scenegraphs declared in FXML. Here is how I made the small FXML file above: Start the JavaFX SceneBuilder 1.0 Developer Preview In the Library on the left hand side, click on 'StackPane' and drag it on the content view (the white rectangle) In the Library, select a Button and drag it onto the StackPane on the content view. In the Hierarchy Panel on the left hand side - select the StackPane component, then invoke 'Edit > Trim To Selected' from the menubar That's it - you can now save, and you will obtain the small FXML file shown above. Of course this is only a trivial sample, made for the sake of the example - and SceneBuilder will let you create much more complex UIs. So, I have now an FXML file. But what do I do with it? How do I include it in my program? How do I write my main class? Loading an FXML file with JavaFX Well, that's the easy part - because the piece of code you need to write never changes. You can download and look at the SceneBuilder samples if you need to get convinced, but here is the short version: Create a Java class (let's call it 'Main.java') which extends javafx.application.Application In the same directory copy/save the FXML file you just created using SceneBuilder. Let's name it "simple.fxml" Now here is the Java code for the Main class, which simply loads the FXML file and puts it as root in a stage's scene. /* * Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved. 
*/ package simple; import java.util.logging.Level; import java.util.logging.Logger; import javafx.application.Application; import javafx.fxml.FXMLLoader; import javafx.scene.Scene; import javafx.scene.layout.StackPane; import javafx.stage.Stage; public class Main extends Application { /** * @param args the command line arguments */ public static void main(String[] args) { Application.launch(Main.class, (java.lang.String[])null); } @Override public void start(Stage primaryStage) { try { StackPane page = (StackPane) FXMLLoader.load(Main.class.getResource("simple.fxml")); Scene scene = new Scene(page); primaryStage.setScene(scene); primaryStage.setTitle("FXML is Simple"); primaryStage.show(); } catch (Exception ex) { Logger.getLogger(Main.class.getName()).log(Level.SEVERE, null, ex); } } } Great! Now I only have to use my favorite IDE to compile the class and run it. But... wait... what does it do? Well nothing. It just displays a button in the middle of a window. There's no logic attached to it. So how do we do that? How can I connect this button to my application logic? Here is how: Connection to code First let's define our application logic. Since this post is only intended to give a very brief overview - let's keep things simple. Let's say that the only thing I want to do is print a message on System.out when the user clicks on my button. To do that, I'll need to register an action handler with my button. And to do that, I'll need to somehow get a handle on my button. I'll need some kind of controller logic that will get my button and add my action handler to it. So how do I get a handle to my button and pass it to my controller? Once again - this is easy: I just need to write a controller class for my FXML. With each FXML file, it is possible to associate a controller class defined for that FXML. That controller class will make the link between the UI (the objects defined in the FXML) and the application logic. To each object defined in FXML we can associate an fx:id. The value of the id must be unique within the scope of the FXML, and is the name of an instance variable inside the controller class, in which the object will be injected. Since I want to have access to my button, I will need to add an fx:id to my button in FXML, and declare an @FXML variable in my controller class with the same name. In other words - I will need to add fx:id="myButton" to my button in FXML: -- <Button fx:id="myButton" mnemonicParsing="false" text="Button" /> and declare @FXML private Button myButton in my controller class @FXML private Button myButton; // value will be injected by the FXMLLoader Let's see how to do this. Add an fx:id to the Button object Load "simple.fxml" in SceneBuilder - if not already done In the hierarchy panel (bottom left), or directly on the content view, select the Button object. Open the Properties sections of the inspector (right panel) for the button object At the top of the section, you will see a text field labelled fx:id. Enter myButton in that field and validate. Associate a controller class with the FXML file Still in SceneBuilder, select the top root object (in our case, that's the StackPane), and open the Code section of the inspector (right hand side) At the top of the section you should see a text field labelled Controller Class. In the field, type simple.SimpleController. This is the name of the class we're going to create manually. 
If you save at this point, the FXML will look like this: -- <?xml version="1.0" encoding="UTF-8"?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <?import javafx.scene.paint.*?> <StackPane prefHeight="150.0" prefWidth="200.0" xmlns:fx="http://javafx.com/fxml" fx:controller="simple.SimpleController"> <children> <Button fx:id="myButton" mnemonicParsing="false" text="Button" /> </children> </StackPane> As you can see, the name of the controller class has been added to the root object: fx:controller="simple.SimpleController" Coding the controller class In your favorite IDE, create an empty SimpleController.java class. Now what does a controller class looks like? What should we put inside? Well - SceneBuilder will help you there: it will show you an example of controller skeleton tailored for your FXML. In the menu bar, invoke View > Show Sample Controller Skeleton. A popup appears, displaying a suggestion for the controller skeleton: copy the code displayed there, and paste it into your SimpleController.java: /** * Sample Skeleton for "simple.fxml" Controller Class * Use copy/paste to copy paste this code into your favorite IDE **/ package simple; import java.net.URL; import java.util.ResourceBundle; import javafx.fxml.FXML; import javafx.fxml.Initializable; import javafx.scene.control.Button; public class SimpleController implements Initializable { @FXML // fx:id="myButton" private Button myButton; // Value injected by FXMLLoader @Override // This method is called by the FXMLLoader when initialization is complete public void initialize(URL fxmlFileLocation, ResourceBundle resources) { assert myButton != null : "fx:id=\"myButton\" was not injected: check your FXML file 'simple.fxml'."; // initialize your logic here: all @FXML variables will have been injected } } Note that the code displayed by SceneBuilder is there only for educational purpose: SceneBuilder does not create and does not modify Java files. This is simply a hint of what you can use, given the fx:id present in your FXML file. You are free to copy all or part of the displayed code and paste it into your own Java class. Now at this point, there only remains to add our logic to the controller class. Quite easy: in the initialize method, I will register an action handler with my button: () { @Override public void handle(ActionEvent event) { System.out.println("That was easy, wasn't it?"); } }); ... -- ... // initialize your logic here: all @FXML variables will have been injected myButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { System.out.println("That was easy, wasn't it?"); } }); ... That's it - if you now compile everything in your IDE, and run your application, clicking on the button should print a message on the console! Summary What happens is that in Main.java, the FXMLLoader will load simple.fxml from the jar/classpath, as specified by 'FXMLLoader.load(Main.class.getResource("simple.fxml"))'. When loading simple.fxml, the loader will find the name of the controller class, as specified by 'fx:controller="simple.SimpleController"' in the FXML. Upon finding the name of the controller class, the loader will create an instance of that class, in which it will try to inject all the objects that have an fx:id in the FXML. Thus, after having created '<Button fx:id="myButton" ... 
/>', the FXMLLoader will inject the button instance into the '@FXML private Button myButton;' instance variable found on the controller instance. This is because The instance variable has an @FXML annotation, The name of the variable exactly matches the value of the fx:id Finally, when the whole FXML has been loaded, the FXMLLoader will call the controller's initialize method, and our code that registers an action handler with the button will be executed. For a complete example, take a look at the HelloWorld SceneBuilder sample. Also make sure to follow the SceneBuilder Get Started guide, which will guide you through a much more complete example. Of course, there are more elegant ways to set up an Event Handler using FXML and SceneBuilder. There are also many different ways to work with the FXMLLoader. But since it's starting to be very late here, I think it will have to wait for another post. I hope you have enjoyed the tour! --daniel
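One of those more elegant ways can be sketched right away (the handler name handleButtonAction below is hypothetical, chosen purely for illustration): instead of calling setOnAction in initialize, declare the handler directly in the FXML with an onAction attribute and the FXMLLoader will bind it to a matching @FXML method on the controller.

<!-- in simple.fxml, the Button now names its own handler; the rest of the file is unchanged -->
<Button fx:id="myButton" mnemonicParsing="false" text="Button" onAction="#handleButtonAction" />

// SimpleController.java
package simple;

import javafx.event.ActionEvent;
import javafx.fxml.FXML;
import javafx.scene.control.Button;

public class SimpleController {

    @FXML // fx:id="myButton" - still injected by the FXMLLoader
    private Button myButton;

    @FXML // invoked by the FXMLLoader because the FXML declares onAction="#handleButtonAction"
    private void handleButtonAction(ActionEvent event) {
        System.out.println("That was easy, wasn't it?");
    }
}

With this wiring the controller no longer even needs to implement Initializable for the simple case, since there is nothing left to register by hand.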

    Read the article

  • How To Remove People and Objects From Photographs In Photoshop

    - by Eric Z Goodnight
    You might think that it’s a complicated process to remove objects from photographs. But really Photoshop makes it quite simple, even when removing all traces of a person from digital photographs. Read on to see just how easy it is. Photoshop was originally created to be an image editing program, and it excels at it. With hardly any Photoshop experience, any beginner can begin removing objects or people from their photos. Have some friends that photobombed an otherwise great pic? Tell them to say their farewells, because here’s how to get rid of them with Photoshop! Tools for Removing Objects Removing an object is not really “magical” work. Your goal is basically to cover up the information you don’t want in an image with information you do want. In this sample image, we want to remove the cigar smoking man, and leave the geisha. Here’s a couple of the tools that can be useful to work with when attempting this kind of task. Clone Stamp and Pattern Stamp Tool: Samples parts of your image from your background, and allows you to paint into your image with your mouse or stylus. Eraser and Brush Tools: Paint flat colors and shapes, and erase cloned layers of image information. Basic, down and dirty photo editing tools. Pen, Quick Selection, Lasso, and Crop tools: Select, isolate, and remove parts of your image with these selection tools. All useful in their own way. Some, like the pen tool, are nightmarishly tough on beginners. Remove a Person with the Clone Stamp Tool (Video) The video above uses the Clone Stamp tool to sample and paint with the background texture. It’s a simple tool to use, although it can be confusing, possibly counter-intuitive. Here’s some pointers, in addition to the video above. Select shortcut key to choose the Clone tool stamp from the Tools Panel. Always create a copy of your background layer before doing heavy edits by right clicking on the background in your Layers Panel and selecting “Duplicate.” Hold with the Clone Tool selected, and click anywhere in your image to sample that area. When you’re sampling an area, your cursor is “Aligned” with your sample area. When you paint, your sample area moves. You can turn the “Aligned” setting off by clicking the in the Options Panel at the top of your screen if you want. Change your brush size and hardness as shown in the video by right-clicking in your image. Use your lasso to copy and paste pieces of your image in order to cover up any parts that seem appropriate. Photoshop Magic with the “Content-Aware Fill” One of the hallmark features of CS5 is the “Content-Aware Fill.” Content aware fill can be an excellent shortcut to removing objects and even people in Photoshop, but it is somewhat limited, and can get confused. Here’s a basic rundown on how it works. Select an object using your Lasso tool, shortcut key . The Lasso works fine as this selection can be rough. Navigate to Edit > Fill, and select “Content-Aware,” as illustrated above, from the pull-down menu. It’s surprisingly simple. After some processing, Photoshop has done the work of removing the object for you. It takes a few moments, and it is not perfect, so be prepared to touch it up with some Copy-Paste, or some Clone stamp action. Content Aware Fill Has Its Limits Keep in mind that the Content Aware Fill is meant to be used with other techniques in mind. It doesn’t always perform perfectly, but can give you a great starting point. Take this image for instance. 
It is actually plausible to hide this figure and make this image look like he was never there at all. With a selection made with the Lasso tool, navigate to Edit > Fill and select “Content Aware” again. The result is surprisingly good, but as you can see, worthy of some touch up. With a result like this one, you’ll have to get your hands dirty with copy-paste to create believable lines in the background. With many photographs, Content Aware Fill will simply get confused and give you results you won’t be happy with. Additional Touch Up for Bad Background Textures with the Pattern Stamp Tool For the perfectionist, cleaning up the lumpy looking textures that the Clone Stamp can leave is fairly simple using the Pattern Stamp Tool. Sample a piece of your image with your Marquee Tool, shortcut key . Navigate to Edit > Define Pattern to create a new Pattern from your selection. Click OK to continue. Click and hold down on the Clone Stamp tool in your Tools Panel until you can select the Pattern Stamp Tool. Pick your new pattern from the Options at the top of your screen, in the Options Panel. Then simply right click in your image in order to pick as soft a brush as possible to paint with. Paint into your image until your background is as smooth as you want it to be, making your painted out object more and more invisible. If you get lines from your repeated texture, experiment turning the on and off and paint over them. In addition to this, simple use of the Crop Tool, shortcut , can recompose an image, making it look as if it never had another object in it at all. Combine these techniques to find a method that works best for your images. Have questions or comments concerning Graphics, Photos, Filetypes, or Photoshop? Send your questions to [email protected], and they may be featured in a future How-To Geek Graphics article. Image Credits: Geisha Kyoto Gion by Todd Laracuenta via Wikipedia, used under Creative Commons. Moai Rano raraku by Aurbina, in Public Domain. Chris Young visits Wrigley by TonyTheTiger, via Wikipedia, used under Creative Commons.

    Read the article

  • Silverlight for Everyone!!

    - by subodhnpushpak
    Someone asked me to compare Silverlight / HTML development. I realized that the question can be answered in many ways: Below is the high level comparison between a HTML /JavaScript client and Silverlight client and why silverlight was chosen over HTML / JavaScript client (based on type of users and major functionalities provided): 1. For end users Browser compatibility Silverlight is a plug-in and requires installation first. However, it does provides consistent look and feel across all browsers. For HTML / DHTML, there is a need to tweak JavaScript for each of the browser supported. In fact, tags like <span> and <div> works differently on different browser / version. So, HTML works on most of the systems but also requires lot of efforts coding-wise to adhere to all standards/ browsers / versions. Out of browser support No support in HTML. Third party tools like  Google gears offers some functionalities but there are lots of issues around platform and accessibility. Out of box support for out-of-browser support. provides features like drag and drop onto application surface. Cut and copy paste in HTML HTML is displayed in browser; which, in turn provides facilities for cut copy and paste. Silverlight (specially 4) provides rich features for cut-copy-paste along with full control over what can be cut copy pasted by end users and .advanced features like visual tree printing. Rich user experience HTML can provide some rich experience by use of some JavaScript libraries like JQuery. However, extensive use of JavaScript combined with various versions of browsers and the supported JavaScript makes the solution cumbersome. Silverlight is meant for RIA experience. User data storage on client end In HTML only small amount of data can be stored that too in cookies. In Silverlight large data may be stored, that too in secure way. This increases the response time. Post back In HTML / JavaScript the post back can be stopped by use of AJAX. Extensive use of AJAX can be a bottleneck as browser stack is used for the calls. Both look and feel and data travel over network.                           In Silverlight everything run the client side. Calls are made to server ONLY for data; which also reduces network traffic in long run. 2. For Developers Coding effort HTML / JavaScript can take considerable amount to code if features (requirements) are rich. For AJAX like interfaces; knowledge of third party kits like DOJO / Yahoo UI / JQuery is required which has steep learning curve. ASP .Net coding world revolves mostly along <table> tags for alignments whereas most popular tools provide <div> tags; which requires lots of tweaking. AJAX calls can be a bottlenecks for performance, if the calls are many. In Silverlight; coding is in C#, which is managed code. XAML is also very intuitive and Blend can be used to provide look and feel. Event handling is much clean than in JavaScript. Provides for many clean patterns like MVVM and composable application. Each call to server is asynchronous in silverlight. AJAX is in built into silverlight. Threading can be done at the client side itself to provide for better responsiveness; etc. Debugging Debugging in HTML / JavaScript is difficult. As JavaScript is interpreted; there is NO compile time error handling. Debugging in Silverlight is very helpful. As it is compiled; it provides rich features for both compile time and run time error handling. Multi -targeting browsers HTML / JavaScript have different rendering behaviours in different browsers / and their versions. 
JavaScript have to be written to sublime the differences in browser behaviours. Silverlight works exactly the same in all browsers and works on almost all popular browser. Multi-targeting desktop No support in HTML / JavaScript Silverlight is very close to WPF. Bot the platform may be easily targeted while maintaining the same source code. Rich toolkit HTML /JavaScript have limited toolkit as controls Silverlight provides a rich set of controls including graphs, audio, video, layout, etc. 3. For Architects Design Patterns Silverlight provides for patterns like MVVM (MVC) and rich (fat)  client architecture. This segregates the "separation of concern" very clearly. Client (silverlight) does what it is expected to do and server does what it is expected of. In HTML / JavaScript world most of the processing is done on the server side. Extensibility Silverlight provides great deal of extensibility as custom controls may be made. Extensibility is NOT restricted by browser but by the plug-in silverlight runs in. HTML / JavaScript works in a certain way and extensibility is generally done on the server side rather than client end. Client side is restricted by the limitations of the browser. Performance Silverlight provides localized storage which may be used for cached data storage. this reduces the response time. As processing can be done on client side itself; there is no need for server round trips. this decreases the round about time. Look and feel of the application is downloaded ONLY initially, afterwards ONLY data is fetched form the server. Security Silverlight is compiled code downloaded as .XAP; As compared to HTML / JavaScript, it provides more secure sandboxed approach. Cross - scripting is inherently prohibited in silverlight by default. If proper guidelines are followed silverlight provides much robust security mechanism as against HTML / JavaScript world. For example; knowing server Address in obfuscated JavaScript is easier than a compressed compiled obfuscated silverlight .XAP file. Some of these like (offline and Canvas support) will be available in HTML5. However, the timelines are not encouraging at all. According to Ian Hickson, editor of the HTML5 specification, the specification to reach the W3C Candidate Recommendation stage during 2012, and W3C Recommendation in the year 2022 or later. see http://en.wikipedia.org/wiki/HTML5 for details. The above is MY opinion. I will love to hear yours; do let me know via comments. Technorati Tags: Silverlight
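To make the client-side data storage point in the comparison concrete, here is a minimal sketch assuming the standard Silverlight IsolatedStorageSettings API (the ClientCache class and the key names are illustrative only). It caches a value on the client so a later visit can skip a server round trip:

using System.IO.IsolatedStorage;

public static class ClientCache
{
    // Persist a small piece of data in the Silverlight client's isolated storage.
    public static void Save(string key, string value)
    {
        var settings = IsolatedStorageSettings.ApplicationSettings;
        settings[key] = value;   // add or overwrite the entry
        settings.Save();         // flush to isolated storage
    }

    // Read it back later; returns null when nothing has been cached yet.
    public static string Load(string key)
    {
        string value;
        return IsolatedStorageSettings.ApplicationSettings.TryGetValue(key, out value)
            ? value
            : null;
    }
}

A typical use would be ClientCache.Save("lastQuery", queryText) when the user leaves a screen and ClientCache.Load("lastQuery") when they return, keeping the data on the client rather than fetching it from the server again.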

    Read the article

  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
    We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model and in the process introduced tiled_index and tiled_grid. This is part 2. tile_static memory You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy. Back to C++ AMP, the same rules apply and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view. In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, that threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array // each tile of threads has its own copy of locA, // shared among the threads of the tile tile_static float locA[16][16]; Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors. tile_barrier In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it, from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait. 15: // Given a tiled_index object named t_idx 16: t_idx.barrier.wait(); 17: // more code …in the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code in such a way which meant that there is a chance that not all threads could reach line 16, then the code above would be illegal. Tiled Matrix Multiplication Example – part 2 So now that we added to our understanding the concepts of tile_static and tile_barrier, let me obfuscate rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code. 
01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W) 02: { 03: static const int TS = 16; 04: array_view<const float,2> a(M, W, vA); 05: array_view<const float,2> b(W, N, vB); 06: array_view<writeonly<float>,2> c(M,N,vC); 07: parallel_for_each(c.grid.tile< TS, TS >(), 08: [=] (tiled_index< TS, TS> t_idx) restrict(direct3d) 09: { 10: int row = t_idx.local[0]; int col = t_idx.local[1]; 11: float sum = 0.0f; 12: for (int i = 0; i < W; i += TS) { 13: tile_static float locA[TS][TS], locB[TS][TS]; 14: locA[row][col] = a(t_idx.global[0], col + i); 15: locB[row][col] = b(row + i, t_idx.global[1]); 16: t_idx.barrier.wait(); 17: for (int k = 0; k < TS; k++) 18: sum += locA[row][k] * locB[k][col]; 19: t_idx.barrier.wait(); 20: } 21: c[t_idx.global] = sum; 22: }); 23: } Notice that all the code up to line 9 is the same as per the changes we made in part 1 of tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, and 17, 18, and 21. The difference being that those lines use new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b), to the tile_static vanilla arrays (locA and locB) – we are copying enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in the tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post another time. May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool. Why would I introduce this tiling complexity into my code? Funny you should ask that, I was just about to tell you. There is only one reason we tiled our extent, had to deal with finding a good tile size, ensure the number of threads we schedule are correctly divisible with the tile size, had to use a tiled_index instead of a normal index, and had to understand tile_barrier and to figure out where we need to use it, and double the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory. Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU, e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts. Now, some algorithms will not have performance benefits from tiling (and in fact may deteriorate), e.g. 
algorithms that require you to go only once to global memory will not benefit from tiling, since with tiling you already have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial-version you had, and you do not need to invest to make it 250 times faster. Also algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Also note that in future releases, we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way to early to commit to anything like that, and we certainly don't do any of that today. Comments about this post by Daniel Moth welcome at the original blog.
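For completeness, here is a small driver sketch showing how the MatrixMultiplyTiled function above might be called, assuming the C++ AMP headers and the function itself are available in the same translation unit. The dimensions are deliberately chosen as multiples of the 16x16 tile so the tiled extent divides evenly, and the fill values are arbitrary:

#include <vector>
using std::vector;

int main()
{
    const int M = 64, W = 32, N = 48;   // all multiples of TS = 16

    vector<float> vA(M * W, 1.0f);      // matrix A, M x W, filled with 1s
    vector<float> vB(W * N, 2.0f);      // matrix B, W x N, filled with 2s
    vector<float> vC(M * N);            // result C, M x N, zero-initialized

    MatrixMultiplyTiled(vC, vA, vB, M, N, W);

    // Every element of vC should now hold W * 1.0f * 2.0f = 64.0f.
    return 0;
}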

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Untitled Document table { border: thin solid; } Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users. So, how does this option work: after building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points - even single data cells only.   TIP It is not necessary to highlight and copy the row or column descriptions   Next from the Smart View ribbon select Copy Data Point. Then transfer to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View;from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. =After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.) It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text: Before refresh: After refresh: From now on for any subsequent updates of the data shown in your documents you only need to refresh data by clicking the Refresh button on the Smart View menu, without copying and pasting the context or content again. As you might realize, trying out this feature on your own, there won’t be any Point of View shown in the Office document. Also you have seen in the example, where only a single data cell was copied, that there aren’t any member names or row/column descriptions copied, which are usually required in an ad-hoc report in order to exactly define where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell as shown in the following screen shot (but which is taken from another context).   So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that’s not all: you have the chance now to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection:   By selecting the Manage POV option from the Smart View meny in Word or Power Point…   … the following POV Manager – Queries window opens:   You can now change your selection for each dimension from the original POV by either double-clicking the dimension member in the lower right box under POV: or by selecting the Member Selector icon on the top right hand side of the window. 
After confirming your changes you need to refresh your document again. Be aware that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV. TIP Build your original report in a way that dimensions you might want to change from within Word or Power Point are placed in the POV. And there is another really nice feature I wouldn’t like to leave unmentioned: using Dynamic Data Points in the way described above, you will never miss or need to search again for the original Excel sheet from which values were taken and copied as data points into an Office document, because from even one single data cell Smart View is able to recreate the entire original report content with just a few clicks: Select one of the numbers from within your Word or Power Point document by double-clicking.   Then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary, as well as at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features. That covers embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected]. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • How to Never Use iTunes With Your iPhone, iPad, or iPod Touch

    - by Chris Hoffman
    iTunes isn’t an amazing program on Windows. There was a time when Apple device users had to plug their devices into their PCs or Macs and use iTunes for device activation, updates, and syncing, but iTunes is no longer necessary. Apple still allows you to use iTunes for these things, but you don’t have to. Your iOS device can function independently from iTunes, so you should never be forced to plug it into a PC or Mac. Device Activation When the iPad first came out, it was touted as a device that could replace full PCs and Macs for people who only needed to perform light computing tasks. Yet, to set up a new iPad, users had to plug it into a PC or Mac running iTunes and use iTunes to activate the device. This is no longer necessary. With new iPads, iPhones, and iPod Touches, you can simply go through the setup process after turning on your new device without ever having to plug it into iTunes. Just connect to a Wi-Fi or cellular data network and log in with your Apple ID when prompted. You’ll still see an option that allows you to activate the device via iTunes, but this should only be necessary if you don’t have a wireless Internet connection available for your device. Operating System Updates You no longer have to use Apple’s iTunes software to update to a new version of Apple’s iOS operating system, either. Just open the Settings app on your device, select the General category, and tap Software Update. You’ll be able to update right from your device without ever opening iTunes. Purchased iTunes Media Apple allows you to easily access content you’ve purchased from the iTunes Store on any device. You don’t have to connect your device to your computer and sync via iTunes. For example, you can purchase a movie from the iTunes Store. Then, without any syncing, you can open the iTunes Store app on any of your iOS devices, tap the Purchased section, and see stuff you’ve downloaded. You can download the content right from the store to your device. This also works for apps — apps you purchase from the App Store can be accessed in the Purchased section on the App Store on your device later. You don’t have to sync apps from iTunes to your device, although iTunes still allows you to. You can even set up automatic downloads from the iTunes & App Store settings screen. This would allow you to purchase content on one device and have it automatically download to your other devices without any hassle. Music Apple allows you to re-download purchased music from the iTunes Store in the same way. However, there’s a good chance you have your own music you didn’t purchase from iTunes. Maybe you spent time ripping it all from your old CDs and you’ve been syncing it to your devices via iTunes ever since. Apple’s solution for this is named iTunes Match. This feature isn’t free, but it’s not a bad deal at all. For $25 per year, Apple allows you to upload all your music to your iCloud account. You can then access all your music from any iPhone, IPad, or iPod Touch. You can stream all your music — perfect if you have a huge library and little storage on your device — and choose which songs you want to download to your device for offline use. When you add additional music to your computer, iTunes will notice it and upload it using iTunes Match, making it available for streaming and downloading directly from your iOS devices without any syncing. This feature is named iTunes Match because it doesn’t just upload music — if Apple already has a song you upload, it will “match” your song with Apple’s copy. 
This means you may get higher-quality versions of your songs if you ripped them from CD at a lower bitrate. Podcasts You don’t have to use iTunes to subscribe to podcasts and sync them to your devices. Even if you have a lowly iPod Touch, you can install Apple’s Podcasts app from the App Store. Use it to subscribe to podcasts and configure them to automatically download directly to your device. You can use other podcast apps for this, too. Backups You can continue backing up your device’s data through iTunes, generating local backups that are stored on your computer. However, new iOS devices are configured to automatically back up their data to iCloud. This happens automatically in the background without you even having to think about it, and you can restore such backups when setting up a device simply by logging in with your Apple ID. Personal Data In the days of PalmPilots, people would use desktop programs like iTunes to sync their email, contacts, and calendar events with their mobile devices. You probably shouldn’t have to sync this data from your computer. Just sign into your email account — for example, a Gmail account — on your device and iOS will automatically pull your email, contacts, and calendar events from your associated account. Photos Rather than connecting your iOS device to your computer and syncing photos from it, you can use an app that automatically uploads your photos to a web service. Dropbox, Google+, and even Flickr all have this feature in their apps. You’ll be able to access your photos from any computer and have a backup copy without any syncing required. You may still need to use iTunes if you want to sync local music without paying for iTunes Match or copy local video files to your device. Copying large local files over is the only real scenario where you’d need iTunes. If you don’t need to copy such files over, you can go ahead and uninstall iTunes from your Windows PC if you like. You shouldn’t need it.

    Read the article

  • Top 10 Browser Productivity Tips

    - by Renso
    Originally posted on: http://geekswithblogs.net/renso/archive/2013/10/14/top-10-browser-productivity-tips.aspx You don’t have to be a geek to be a productive browser user. The tips below cover actions users take all the time when navigating a web site, but usually with long-standing keyboard or mouse habits, when there are keyboard shortcuts they could use instead. Since your hands are already on the keyboard it is almost always faster to use a keyboard shortcut to get something done that you usually used the mouse for. For example, right-clicking on something to copy it and then doing the same to paste it is very time consuming; keyboard shortcuts have been created that simplify the task. All it takes are a few memory brain cells to remember them. Here are the tips, in no particular order:   Tip 1 Hold down the spacebar on your keyboard to page towards the end of your web page rather than using your mouse - the mouse is really a slow way of doing it. If you want to page one page at a time, hit the spacebar once, and again to page again. But if you want to jump all the way to the end of the web page simply hit Ctrl+End (that is, hold down the Ctrl key and hit the End key on your keyboard). To get to the top of your web page, simply hit Ctrl+Home to go all the way to the top. Tip 2 Where are my downloads? Some folks run downloads again-and-again because they do not know where the last one went and they do not see the popup, or browser note on their web page in the footer, etc. Simply hit Ctrl+J. Works in most browsers. Tip 3 Selecting a US state from a drop down box. Don’t use the mouse; it takes just way too long to scroll. When you tab to the drop down box or click on it with your mouse, simply hit the first character of the state and it will be selected. For Texas, for example, hit the letter “T” twice on your keyboard to get to it. The same concept can be applied to any drop down box that is alphabetically or numerically sorted. Tip 4 Fixing spelling errors. All modern-day browsers support this now. You see the red wavy lines underscoring a word; yes, it is a spelling error. How do you fix it? Don’t overtype it or try and fix it manually; first right-click on it and a list of suggestions comes up. If it does not show up, like my name “Renso”, and you know how to spell your name as in this example, look further down the list of options (the little window popup that appears when you right click) and you should see an option to “Add to Dictionary”. Be warned, when you add it, it only adds it to the dictionary of the browser you’re using. If you use Google Chrome, Firefox and IE, each one will have its own list. Tip 5 So you have trouble seeing the text on the screen. Or you are looking at a photo, for example in Facebook. You want to zoom in to read better or zoom into a photo a bit more. Hit Ctrl++ (hold down the Ctrl key and hit the plus key – actually it’s the equal key but it is easier to remember that it is plus for bigger). Hit the minus to zoom out. Now you can’t remember what the original size was since you were so excited to hit it 20 times, or was that 21… Simply hit Ctrl+0 (that is zero) and it will reset it to the default. Tip 6 So you closed a couple of tabs in your browser. Suddenly you remember something you wanted to double-check on one of the tabs; you cannot remember the URL and the tab is gone forever, or is it? Simply hit Ctrl+Shift+t and it will bring back your tabs one by one each time you hit the T. 
This has also been a great way for me to quickly close some tabs because I don’t want my boss to see I’m shopping, and then hitting Ctrl+Shift+t to quickly get them back and complete my check-out and purchase. Or, for parents, when you walk into your daughter’s room and you see she quickly clicks and closes a window/tab in her browser. Not to worry my little darling, daddy will Ctrl+Shift+t and see what boys on Facebook you were talking to… Tip 7 The web browser is frozen on your PC/Laptop/Whatever; in this example it may be your Internet Explorer browser. I don’t mention Firefox or Chrome here because it probably never happens in their world. You cannot close it, and it won’t respond to anything you have done so far except for the next step you are about to take, which is throwing your two-day-old coffee on your keyboard. This happens especially on sites that want to force you to complete a purchase order. Hit Ctrl+Alt+Del on your keyboard on any version of Windows and select TASK MANAGER. In the first tab, which is the Process tab, look for the item in question. In this example you should see Internet Explorer. Right-click it and select “End Task”. It will force the thread out of memory and terminate that process. You can of course do this with any program running under your account. Tip 8 This is a personal favorite of mine: selecting words in a paragraph without using the mouse. You don’t want to select one character at a time like when you use Shift+arrows, as it can be very slow if you want to select a lot of text. You also want to select whole words. Simply use Ctrl+Shift+arrow (right or left depending on which direction you want to go). Tip 9 I was a bit reluctant to add this one, but being in the professional services industry I still come across many-a-folk that simply can’t copy-and-paste them-all text or images that reside on them screens, y’all. Ctrl+c to copy and Ctrl+v to paste it. Works a lot faster than using the mouse. You may be asking: “Well why in the devil did they not use Ctrl+p for paste?”… because that is for printing. This is of course not limited to the browser world; it applies to almost any piece of software running on PC or Mac. Go try it on an image in your browser: right-click it and select copy, open a Word document and Ctrl+v to paste the image in there. Please consider copyright laws. Tip 10 Getting rid of annoying ads. Now this only works when you load a web page, meaning when you get back to the same page later you will have to do this again, and you will need to learn a tool to do it, WELL WORTH IT. For example, I use GrooveShark to listen to music but I don’t like the ads they show. Install a tool like Firebug for Firefox or use Ctrl+Shift+I on Chrome to bring up the developer toolbar. It shows at the bottom of the page. With Firefox, once you have installed Firebug as an add-on, a yellow bug should appear on the top right-hand side of your browser; click on it to display the developer toolbar. You will need to learn how to use it, but once you know how to select an item/section on the window (usually just right-click the ad you don’t want to see and select “Inspect Element”, and the developer toolbar will appear if not already there), simply hit delete and it will remove the ad from the screen. If you don’t know HTML you may need to play with it a bit, but once you understand how it works it can open up a whole new world for you on how web pages actually work. 
If you can think of any others that have saved you a ton of time please let me know so I can add them to a top 99 list.

    Read the article

  • Copying one form's values to another form using JQuery

    - by rsturim
    I have a "shipping" form that I want to offer users the ability to copy their input values over to their "billing" form by simply checking a checkbox. I've coded up a solution that works -- but, I'm sort of new to jQuery and wanted some criticism on how I went about achieving this. Is this well done -- any refactorings you'd recommend? Any advice would be much appreciated! The Script <script type="text/javascript"> $(function() { $("#copy").click(function() { if($(this).is(":checked")){ var $allShippingInputs = $(":input:not(input[type=submit])", "form#shipping"); $allShippingInputs.each(function() { var billingInput = "#" + this.name.replace("ship", "bill"); $(billingInput).val($(this).val()); }) //console.log("checked"); } else { $(':input','#billing') .not(':button, :submit, :reset, :hidden') .val('') .removeAttr('checked') .removeAttr('selected'); //console.log("not checked") } }); }); </script> The Form <div> <form action="" method="get" name="shipping" id="shipping"> <fieldset> <legend>Shipping</legend> <ul> <li> <label for="ship_first_name">First Name:</label> <input type="text" name="ship_first_name" id="ship_first_name" value="John" size="" /> </li> <li> <label for="ship_last_name">Last Name:</label> <input type="text" name="ship_last_name" id="ship_last_name" value="Smith" size="" /> </li> <li> <label for="ship_state">State:</label> <select name="ship_state" id="ship_state"> <option value="RI">Rhode Island</option> <option value="VT" selected="selected">Vermont</option> <option value="CT">Connecticut</option> </select> </li> <li> <label for="ship_zip_code">Zip Code</label> <input type="text" name="ship_zip_code" id="ship_zip_code" value="05401" size="8" /> </li> <li> <input type="submit" name="" /> </li> </ul> </fieldset> </form> </div> <div> <form action="" method="get" name="billing" id="billing"> <fieldset> <legend>Billing</legend> <ul> <li> <input type="checkbox" name="copy" id="copy" /> <label for="copy">Same of my shipping</label> </li> <li> <label for="bill_first_name">First Name:</label> <input type="text" name="bill_first_name" id="bill_first_name" value="" size="" /> </li> <li> <label for="bill_last_name">Last Name:</label> <input type="text" name="bill_last_name" id="bill_last_name" value="" size="" /> </li> <li> <label for="bill_state">State:</label> <select name="bill_state" id="bill_state"> <option>-- Choose State --</option> <option value="RI">Rhode Island</option> <option value="VT">Vermont</option> <option value="CT">Connecticut</option> </select> </li> <li> <label for="bill_zip_code">Zip Code</label> <input type="text" name="bill_zip_code" id="bill_zip_code" value="" size="8" /> </li> <li> <input type="submit" name="" /> </li> </ul> </fieldset> </form> </div>

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox

    - by The Old Toxophilist
    Following on from my previous blog entry "Modifying the Base Template" I decided to put together a quick blog to show how to create a small VirtualBox guest that can be used to execute ModifyJeOS and hence edit your templates. One of the main advantages of this is that templates can be created away from the Exalogic environment. For the Guest OS I chose OEL 6u3 and decided to create it as a basic server because I did not require a graphical interface, but it's a simple change to create it with a GUI. Required Software Virtual Box. Oracle Enterprise Linux. Creating the VM I'll assume that the reader is experienced with Virtual Box and installing OEL and hence will make this section brief. Create VirtualBox Guest Create a new VirtualBox Guest and select Oracle Linux 64 bit. Follow through the create process and select Dynamic Disk Size and the default 12GB disk size. The actual image will be a lot smaller than this but the OEL install will fail with insufficient disk space if you attempt a smaller size. Once the guest has been created attach the previously downloaded OEL 6u3 iso to the cd drive and start the guest. Install OEL On starting the guest the system will boot off the associated OEL 6u3 iso and take you through the standard installation process. Select all the appropriate information, but when you reach the installation type select Basic Server because we do not need the additional packages and only need access through the command line interface. Complete the installation and reboot the Guest. At this point we now have a basic OEL server running. Installing Guest Add-ons Before we can easily access the Guest we will need to add the VirtualBox guest add-ons. These will provide better keyboard and mouse integration and allow access to the shared folders on the host machine. Before we can do this we will need to do the following: Enable Networking. Install additional rpms.  To enable the networking (eth0), which appears to be disabled by default, we can execute: ifup eth0 This will start the eth0 connection, but once the Guest is rebooted the network will be down again. To resolve this you will need to edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file and change the ONBOOT parameter to "yes"; a short sketch of this edit follows below. Now that we have enabled the network we will need to install a number of additional rpms.
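    The following is only an illustrative sketch of that ONBOOT change, not part of the original write-up; it assumes the default OEL 6 interface name eth0 and that you are working as root, so adapt the file name if your interface differs:
    # Sketch only: bring eth0 up now and make it start on every boot
    ifup eth0
    sed -i 's/^ONBOOT=.*/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0
    grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-eth0   # should now show ONBOOT=yes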
    First we will need to configure the yum repository as follows:
    [ol6_latest]
    name=Oracle Linux $releasever Latest ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1
    [ol6_ga_base]
    name=Oracle Linux $releasever GA installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0
    [ol6_u1_base]
    name=Oracle Linux $releasever Update 1 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/1/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0
    [ol6_u2_base]
    name=Oracle Linux $releasever Update 2 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0
    [ol6_u3_base]
    name=Oracle Linux $releasever Update 3 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/3/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0
    [ol6_UEK_latest]
    name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1
    [ol6_UEK_base]
    name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0
    Once the repository has been edited we will need to execute the following yum commands:
    yum update
    yum install gcc
    yum install kernel-uek-devel
    yum install kernel-devel
    yum install createrepo
    At this point we now have all the additional packages required to install the VirtualBox Guest Add-ons. So select Devices -> Install Guest Additions on your running guest: This will simply place the VirtualBoxGuestAdditions.iso in the virtual cd, and we will need to execute the following before we can run them.
    mkdir /media/cdrom
    mount -t iso9660 -o ro /dev/cdrom /media/cdrom
    cd /media/cdrom/
    ls
    ./VBoxLinuxAdditions.run
    This will initiate the install and kernel rebuild. What you will notice is that during the installation a "Failed" message will be displayed, but this is simply because we have no graphical components. At this point the installation will also have added the vboxsf group to the system, and to access any shared folders the user we create will need to be a member of this group, so the next stage is to add the root user to this group as follows:
    usermod -G vboxsf root
    cat /etc/group
    cat /etc/passwd
    init 0
    Now simply shut down the guest and add the Shared folder within your guest's settings. Install ModifyJeOS Once the shared folder has been added restart the guest and change directory into the shared folder (/media/sf_<folder name>). For the next step I am assuming the ModifyJeOS rpms are located in the shared folder. 
We can simply execute:
    rpm -ivh ovm-modify-jeos-1.1.0-17.el5.noarch.rpm
    # Test with modifyjeos
    Using ModifyJeOS I have a modified MountSystemImg.sh script that should be copied into the /root/bin directory (you may need to create this), and from here it can be executed from any location: MountSystemImg.sh
    #!/bin/sh
    # The script assumes it's being run from the directory containing the System.img
    # Export for later i.e. during unmount
    export LOOP=`losetup -f`
    export SYSTEMIMG=/mnt/elsystem
    export TEMPLATEDIR=`pwd`
    # Make Temp Mount Directory
    mkdir -p $SYSTEMIMG
    # Create Loop for the System Image
    losetup $LOOP System.img
    kpartx -a $LOOP
    mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG
    # Change Dir into mounted Image
    cd $SYSTEMIMG
    echo "######################################################################"
    echo "###                                                              ###"
    echo "### Starting Bash shell for editing. When completed log out to   ###"
    echo "### Unmount the System.img file.                                 ###"
    echo "###                                                              ###"
    echo "######################################################################"
    echo
    bash
    cd ~
    cd $TEMPLATEDIR
    umount $SYSTEMIMG
    kpartx -d $LOOP
    losetup -d $LOOP
    rm -rf $SYSTEMIMG
    This script will simply create a mount directory, mount the System.img and then start a new shell in the mounted directory. On exiting the shell it will unmount the System.img. It only requires that you execute the script in the directory containing the System.img. These can be created under the mounted shared directory. In the example below I have extracted the Base template within the shared folder and then renamed it OEL_40GB_ROOT before changing into that directory and executing the script.
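    To make that workflow concrete, here is a small usage sketch (not from the original post): it assumes MountSystemImg.sh sits in /root/bin on the PATH, and the shared-folder path shown is only an illustrative example.
    # Sketch only: run the script from the directory containing System.img
    cd /media/sf_templates/OEL_40GB_ROOT    # hypothetical path to the extracted, renamed template
    ls                                      # should list System.img among the template files
    MountSystemImg.sh                       # mounts System.img and opens a shell inside it
    # make your edits inside the mounted image, then type 'exit' to unmount it again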

    Read the article

  • Oracle Certification and virtualization Solutions.

    - by scoter
    As stated in official MOS ( My Oracle Support ) document 249212.1 support for Oracle products on non-Oracle VM platforms follow exactly the same stance as support for VMware and, so, the only x86 virtualization software solution certified for any Oracle product is "Oracle VM". Based on the fact that: Oracle VM is totally free ( you have the option to buy Oracle-Support ) Certified is pretty different from supported ( OracleVM is certified, others could be supported ) With Oracle VM you may not require to reproduce your issue(s) on physical server Oracle VM is the only x86 software solution that allows hard-partitioning *** *** see details to these Oracle public links: http://www.oracle.com/technetwork/server-storage/vm/ovm-hardpart-168217.pdf http://www.oracle.com/us/corporate/pricing/partitioning-070609.pdf people started asking to migrate from third party virtualization software (ex. RH KVM, VMWare) to Oracle VM. Migrating RH KVM guest to Oracle VM. OracleVM has a built-in P2V utility ( Official Documentation ) but in some cases we can't use it, due to : network inaccessibility between hypervisors ( KVM and OVM ) network slowness between hypervisors (KVM and OVM) size of the guest virtual-disks Here you'll find a step-by-step guide to "manually" migrate a guest machine from KVM to OVM. 1. Verify source guest characteristics. Using KVM web console you can verify characteristics of the guest you need to migrate, such as: CPU Cores details Defined Memory ( RAM ) Name of your guest Guest operating system Disks details ( number and size ) Network details ( number of NICs and network configuration ) 2. Export your guest in OVF / OVA format.  The export from Redhat KVM ( kernel virtual machine ) will create a structured export of your guest: [root@ovmserver1 mnt]# lltotal 12drwxrwx--- 5 36 36 4096 Oct 19 2012 b8296fca-13c4-4841-a50f-773b5139fcee b8296fca-13c4-4841-a50f-773b5139fcee is the ID of the guest exported from RH-KVM [root@ovmserver1 mnt]# cd b8296fca-13c4-4841-a50f-773b5139fcee/[root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# ls -ltrtotal 12drwxr-x--- 4 36 36 4096 Oct 19  2012 masterdrwxrwx--- 2 36 36 4096 Oct 29  2012 dom_mddrwxrwx--- 4 36 36 4096 Oct 31  2012 images images contains your virtual-disks exported [root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# cd images/[root@ovmserver1 images]# ls -ltratotal 16drwxrwx--- 5 36 36 4096 Oct 19  2012 ..drwxrwx--- 2 36 36 4096 Oct 31  2012 d4ef928d-6dc6-4743-b20d-568b424728a5drwxrwx--- 2 36 36 4096 Oct 31  2012 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1drwxrwx--- 4 36 36 4096 Oct 31  2012 .[root@ovmserver1 images]# cd d4ef928d-6dc6-4743-b20d-568b424728a5/[root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# ls -ltotal 5169092-rwxr----- 1 36 36 187904819200 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1-rw-rw---- 1 36 36          341 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1.meta[root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# file 4c03b1cf-67cc-4af0-ad1e-529fd665dac14c03b1cf-67cc-4af0-ad1e-529fd665dac1: LVM2 (Linux Logical Volume Manager) , UUID: sZL1Ttpy0vNqykaPahEo3hK3lGhwspv 4c03b1cf-67cc-4af0-ad1e-529fd665dac1 is the first exported disk ( physical volume ) [root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# cd ../4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/[root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# ls -ltotal 5568076-rwxr----- 1 36 36 107374182400 Oct 31  2012 9020f2e1-7b8a-4641-8f80-749768cc237a-rw-rw---- 1 36 36          341 Oct 31  2012 
9020f2e1-7b8a-4641-8f80-749768cc237a.meta[root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# file 9020f2e1-7b8a-4641-8f80-749768cc237a9020f2e1-7b8a-4641-8f80-749768cc237a: x86 boot sector; partition 1: ID=0x83, active, starthead 1, startsector 63, 401562 sectors; partition 2: ID=0x82, starthead 0, startsector 401625, 65529135 sectors; startsector 63, 401562 sectors; partition 2: ID=0x82, starthead 0, startsector 401625, 65529135 sectors; partition 3: ID=0x83, starthead 254, startsector 65930760, 8385930 sectors; partition 4: ID=0x5, starthead 254, startsector 74316690, 135395820 sectors, code offset 0x48 9020f2e1-7b8a-4641-8f80-749768cc237a is the second exported disk, with partition 1 bootable 3. Prepare the new guest on Oracle VM. By Ovm-Manager we can prepare the guest where we will move the exported virtual-disks; under the Tab "Servers and VMs": click on  and create your guest with parameters collected before (point 1): - add NICs on different networks: - add virtual-disks; in this case we add two disks of 1.0 GB each one; we will extend the virtual disk copying the source KVM virtual-disk ( see next steps ) - verify virtual-disks created ( under Repositories tab ) 4. Verify OVM virtual-disks names. [root@ovmserver1 VirtualMachines]# grep -r hyptest_rdbms * 0004fb0000060000a906b423f44da98e/vm.cfg:OVM_simple_name = 'hyptest_rdbms' [root@ovmserver1 VirtualMachines]# cd 0004fb0000060000a906b423f44da98e [root@ovmserver1 0004fb0000060000a906b423f44da98e]# more vm.cfgvif = ['mac=00:21:f6:0f:3f:85,bridge=0004fb001089128', 'mac=00:21:f6:0f:3f:8e,bridge=0004fb00101971d'] OVM_simple_name = 'hyptest_rdbms' vnclisten = '127.0.0.1' disk = ['file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/ VirtualDisks/0004fb000012000097c1bfea9834b17d.img,xvda,w', 'file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/ 0004fb0000120000cde6a11c3cb1d0be.img,xvdb,w'] vncunused = '1' uuid = '0004fb00-0006-0000-a906-b423f44da98e' on_reboot = 'restart' cpu_weight = 27500 memory = 32768 cpu_cap = 0 maxvcpus = 8 OVM_high_availability = True maxmem = 32768 vnc = '1' OVM_description = '' on_poweroff = 'destroy' on_crash = 'restart' name = '0004fb0000060000a906b423f44da98e' guest_os_type = 'linux' builder = 'hvm' vcpus = 8 keymap = 'en-us' OVM_os_type = 'Oracle Linux 5' OVM_cpu_compat_group = '' OVM_domain_type = 'xen_hvm' disk2 ovm ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/ 0004fb0000120000cde6a11c3cb1d0be.img disk1 ovm ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/ 0004fb000012000097c1bfea9834b17d.img Summarizing disk1 --source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a disk1 --dest ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/ 0004fb000012000097c1bfea9834b17d.img disk2 --source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1 disk2 --dest ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/ 0004fb0000120000cde6a11c3cb1d0be.img 5. Copy KVM exported virtual-disks to OVM virtual-disks. Keeping your Oracle VM guest stopped you can copy KVM exported virtual-disks to OVM virtual-disks; what I did is only to locally mount the filesystem containing the exported virtual-disk ( by an usb device ) on my OVS; the copy automatically resize OVM virtual-disks ( previously created with a size of 1GB ) . 
nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img &
nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1 /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img &
6. When the copy is completed, refresh the repository to acknowledge the new disk sizes. 7. After the "refresh repository" is completed, start the guest machine from Oracle VM Manager. After the first start of your guest: - verify that you can see all disks and partitions - verify that your guest is network reachable ( the MAC Address of your NICs changed ) You can also consider converting your guest to PVM ( Paravirtualized Virtual Machine ) following the official Oracle documentation. Ciao Simon COTER PS: next time I'd like to post an article describing how to manually migrate Virtual-Iron guests to OracleVM. Comments and corrections are welcome. 

    Read the article

  • MEF + Plug-In not updating

    - by mybrokengnome
    I asked this on the MEF Codeplex forum already, but I haven't gotten a response yet, so I figured I'd try StackOverflow. Here's the original post if anyone's interested (this is just a copy from it): MEF Codeplex "Let me first say that I'm completely new to MEF (just discovered it today) and am very happy with it so far. However, I've ran in to a problem that is very frustrating. I'm creating an app that will have a plugin architecture and the plugins will only be stored in a single DLL file (or coded into the main app). The DLL file needs to be able to be recompiled during run-time and the app should recognize this and re-load the plugins (I know this is difficult, but it's a requirement). To accomplish this I took the approach covered http://blog.maartenballiauw.be/category/MEF.aspx there (look for WebServerDirectoryCatalog). Basically the idea is to "monitor the plugins folder, copy the new/modified assemblies to the web application’s /bin folder and instruct MEF to load its exports from there." This is my code, which is probably not the correct way to do it but it's what I found in some samples around the net: main()... string myExecName = Assembly.GetExecutingAssembly().Location; string myPath = System.IO.Path.GetDirectoryName(myExecName); catalog = new AggregateCatalog(); pluginCatalog = new MyDirectoryCatalog(myPath + @"/Plugins"); catalog.Catalogs.Add(pluginCatalog); exportContainer = new CompositionContainer(catalog); CompositionBatch compBatch = new CompositionBatch(); compBatch.AddPart(this); compBatch.AddPart(catalog); exportContainer.Compose(compBatch); and private FileSystemWatcher fileSystemWatcher; public DirectoryCatalog directoryCatalog; private string path; private string extension; public MyDirectoryCatalog(string path) { Initialize(path, "*.dll", "*.dll"); } private void Initialize(string path, string extension, string modulePattern) { this.path = path; this.extension = extension; fileSystemWatcher = new FileSystemWatcher(path, modulePattern); fileSystemWatcher.Changed += new FileSystemEventHandler(fileSystemWatcher_Changed); fileSystemWatcher.Created += new FileSystemEventHandler(fileSystemWatcher_Created); fileSystemWatcher.Deleted += new FileSystemEventHandler(fileSystemWatcher_Deleted); fileSystemWatcher.Renamed += new RenamedEventHandler(fileSystemWatcher_Renamed); fileSystemWatcher.IncludeSubdirectories = false; fileSystemWatcher.EnableRaisingEvents = true; Refresh(); } void fileSystemWatcher_Renamed(object sender, RenamedEventArgs e) { RemoveFromBin(e.OldName); Refresh(); } void fileSystemWatcher_Deleted(object sender, FileSystemEventArgs e) { RemoveFromBin(e.Name); Refresh(); } void fileSystemWatcher_Created(object sender, FileSystemEventArgs e) { Refresh(); } void fileSystemWatcher_Changed(object sender, FileSystemEventArgs e) { Refresh(); } private void Refresh() { // Determine /bin path string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Plugins"); string newPath = ""; // Copy files to /bin foreach (string file in Directory.GetFiles(path, extension, SearchOption.TopDirectoryOnly)) { try { DirectoryInfo dInfo = new DirectoryInfo(binPath); DirectoryInfo[] dirs = dInfo.GetDirectories(); int count = dirs.Count() + 1; newPath = binPath + "/" + count; DirectoryInfo dInfo2 = new DirectoryInfo(newPath); if (!dInfo2.Exists) dInfo2.Create(); File.Copy(file, System.IO.Path.Combine(newPath, System.IO.Path.GetFileName(file)), true); } catch { // Not that big deal... 
Blog readers will probably kill me for this bit of code :-) } } // Create new directory catalog directoryCatalog = new DirectoryCatalog(newPath, extension); directoryCatalog.Refresh(); } public override IQueryable<ComposablePartDefinition> Parts { get { return directoryCatalog.Parts; } } private void RemoveFromBin(string name) { string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, ""); File.Delete(Path.Combine(binPath, name)); } So all this actually works, and after the end of the code in main my IEnumerable variable is actually filled with all the plugins in the DLL (which if you follow the code is located in Plugins/1 so that I can modify the dll in the plugins folder). So now at this point I should be able to re-compile the plugins DLL, drop it in to the Plugins folder, my FileWatcher detect that it's changed, and then copy it into folder "2" and directoryCatalog should point to the new folder. All this actually works! The problem is, even though it seems like every thing is pointed to the right place, my IEnumerable variable is never updated with the new plugins. So close, but yet so far! Any suggestions? I know the downsides of doing it this way, that no dll is actually getting unloaded and causing a memory leak, but it's a Windows App and will probably be started at least once a day, and the plugins are un-likely to change that often, but it's still a requirement from the client that it does this without re-loading the app. Thanks! Thanks for any help you all can provide, it's driving me crazy not being able to figure this out."

    Read the article

  • Building Visual Studio Setup Projects with TFS 2010 Team Build

    - by Jakob Ehn
    One of the most common complaints from people starting to use Team Build is that it doesn’t support building Microsoft’s own Setup and Deployment project (*.vdproj). When creating a default build definition that compiles a solution containing a setup project, you’ll get the following warning: The project file "MyProject.vdproj" is not supported by MSBuild and cannot be built.   This is what the problem is all about. MSBuild, which is used for compiling your projects, does not understand the proprietary vdproj format defined by Microsoft quite some time ago. Unfortunately there is no sign that this will change in the near future; in fact the setup projects have barely changed at all since they were introduced. VS 2010 brings no new features or improvements when it comes to the setup projects. VS 2010 does include a limited version of InstallShield which promises to be more MSBuild friendly and with more or less the same features as VS setup projects. I hope to get a closer look at this installer project type soon. But, how do we go about building a Visual Studio setup project and producing an MSI as part of a Team Build process? Well, since only one application known to man understands the vdproj projects, we will have to install a copy of Visual Studio on the build server. Sad but true. After doing this, we use the Visual Studio command line interface (devenv) to perform the build. In this post I will show how to do this by using the InvokeProcess activity directly in a build workflow template. You’ll want to build your setup projects after you have successfully compiled the projects.   Install Visual Studio 2010 on the build server(s)   Open your build process template (remember to branch or copy the xaml file before modifying it!)   Locate the Try to Compile the Project activity   Drop an instance of the InvokeProcess activity from the toolbox onto the designer, after the Run MSBuild for Project activity   Drop an instance of the WriteBuildMessage activity inside the Handle Standard Output section. Set the Importance property to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High (NB: This is necessary if you want the output from devenv to show up in the build log when running the build with the default verbosity) Set the Message property to stdOutput   Drop an instance of the WriteBuildError activity into the Handle Error Output section Set the Message property to errOutput   Select the InvokeProcess activity and set the values of the parameters to:     The finished workflow should look like this:     This will generate the MSI files, but they won’t be copied to the drop location. This is because we are using devenv and not MSBuild, so we have to do this explicitly.   Drop a Sequence activity somewhere after the Copy to Drop location activity.   Create a variable in the Sequence activity of type IEnumerable<String> and call it GeneratedInstallers   Drop a FindMatchingFiles activity in the sequence activity and set the properties to:     Drop a ForEach<String> activity after the FindMatchingFiles activity. Set the Value property to GeneratedInstallers   Drop an InvokeProcess activity inside the ForEach activity.  FileName: “xcopy.exe” Arguments: String.Format("""{0}"" ""{1}""", item, BuildDetail.DropLocation) The Sequence activity should look like this:     Save the build process template and check it in.   Run the build and verify that the MSIs are built and copied to the drop location.   
Note 1: One of the drawbacks of using devenv like this in a team build is that since all the output from the default compilations is placed in the Binaries folder, the outputs are not available when devenv is invoked, which causes the whole solution to rebuild again. In TFS 2008, this was pretty simple to fix by using the CustomizableOutDir property. In TFS 2010, the same feature is not available. Jim Lamb blogged about this recently; have a look at it if you have a problem with this: http://blogs.msdn.com/jimlamb/archive/2010/04/13/customizableoutdir-in-tfs-2010.aspx   Note 2: Although the above solution works, a better approach is to wrap this in a custom activity that you can use in your builds. I will come back to this in a future post.

    Read the article

  • SOA 10g Developing a Simple Hello World Process

    - by [email protected]
    Softwares & Hardware Needed Intel Pentium D CPU 3 GHz, 2 GB RAM, Windows XP System ( Thats what i am using ) You could as well use Linux , but please choose High End RAM 10G SOA Suite from Oracle(TM) , Read Installation documents at www.Oracle.com J Developer 10.1.3.3 Official Documents at http://www.oracle.com/technology/products/ias/bpel/index.html java -version Java HotSpot(TM) Client VM (build 1.5.0_06-b05, mixed mode)BPEL Introduction - Developing a Simple Hello World Process  Synchronous BPEL Process      This Exercise focuses on developing a Synchronous Process, which mean you give input to the BPEL Process you get output immediately no waiting at all. The Objective of this exercise is to give input as name and it greets with Hello Appended by that name example, if I give input as "James" the BPEL process returns "Hello James". 1. Open the Oracle JDeveloper click on File -> New Application give the name "JamesApp" you can give your own name if it pleases you. Select the folder where you want to place the application. Click "OK" 2. Right Click on the "JamesApp" in the Application Navigator, Select New Menu. 3. Select "Projects" under "General" and "BPEL Process Project", click "OK" these steps remain same for all BPEL Projects 4. Project Setting Wizard Appears, Give the "Process Name" as "MyBPELProc" and Namespace as http://xmlns.james.com/ MyBPELProc, Select Template as "Synchronous BPEL Process click "Next" 5. Accept the input and output schema names as it is, click "Finish" 6. You would see the BPEL Process Designer, some of the folders such as Integration content and Resources are created and few more files 7. Assign Activity : Allows Assigning values to variables or copying values of one variable to another and also do some string manipulation or mathematical operations In the component palette at extreme right, select Process Activities from the drop down, and drag and drop "Assign" between "receive Input" and "replyOutput" 8. You can right click and edit the Assign activity and give any suitable name "AssignHello", 9. Select "Copy Operation" Tab create "Copy Operation" 10. In the From variables click on expression builder, select input under "input variable", Click on insert into expression bar, complete the concat syntax, Note to use "Ctrl+space bar" inside expression window to Auto Populate the expression as shown in the figure below. What we are actually doing here is concatenating the String "Hello ", with the variable value received through the variable named "input" 11. Observe that once an expression is completed the "To Variable" is assigned to a variable by name "result" 12. Finally the copy variable looks as below 13. It's the time to deploy, start the SOA Suite 14. Establish connection to the Server from JDeveloper, this can be done adding a New Application Server under Connection, give the server name, username and password and test connection. 15. Deploy the "MyBPELProc" to the "default domain" 16. http://localhost:8080/ allows connecting to SOA Suite web portal, click on "BPEL Control" , login with the username "oc4jadmin" password what ever you gave during installation 17. "MyBPELProc" is visisble under "Deployed BPEL Processes" in the "Dashboard" Tab, click on the it 18. Initiate tab open to accept input, enter data such as input is "James" click on "Post XML Button" 19. Click on Visual Flow 20. Click on receive Input , it shows "James" as input received 21. Click on reply Output, it shows "Hello James" so the BPEL process is successfully executed. 22. 
It may be worth seeing all the instances created every time a BPEL process is executed with some inputs. The Purge All button allows you to delete all the unwanted previous instances of the BPEL process; don't worry, it won't delete the BPEL process itself :-) 23. It may also be of some importance to understand the XSD file which holds the input & output variable names & data types. 24. You could drag and drop variables as elements over the sequence in the designer or directly edit the XML source file. 

    Read the article

  • How to use crontab, .netrc, and git push?

    - by Jon
    Hi all, I am in the process of automating the backups from various servers to a central point then pushing those config changes into a git repo so i can track any changes over time. The rest of the scripts are working well, I can copy / rsync the files across the network to a central point. The last script is to get the config files to be put into / updated in repository. The script is as follows: #!/bin/bash clear SERVERNAME="betty" SCRIPTDIR="/home/jon" GITROOT="/tmp/git" TEMPROOT="/tmp/backups" BACKUPROOTDIR="/mnt/backups" echo " - running as user: $UID" echo "backingup git config on $SERVERNAME" echo "" # check to see if root backup folder exists, otherwise create it. if [ -d $GITROOT ]; then rm -rf $GITROOT fi mkdir $GITROOT cd $GITROOT echo " - testing if home is where I think it should be!" echo $HOME echo " - testing if it can see netrc" tail $HOME/.netrc git clone http://192.168.10.97:8000/repositories/HOH-config-backups.git cd HOH-config-backups echo " - copy Configuration Folders across" cp -r $BACKUPROOTDIR/Configuration/* $GITROOT/HOH-config-backups/ cp -r $BACKUPROOTDIR/scripts $GITROOT/HOH-config-backups/ git add . git commit -a -m "committing any new configuration changes!" git push origin master echo "" echo "Git repo updated" echo "" echo " - backing up this script" FIREWIGSCRIPTLOC="$BACKUPROOTDIR/scripts/$SERVERNAME" if [ ! -d $FIREWIGSCRIPTLOC ]; then mkdir $FIREWIGSCRIPTLOC fi cp /home/jon/gitConfig.sh $FIREWIGSCRIPTLOC The git repo is on a different machine in the network using Apache and HTTP-backend.exe (smart HTTP protocol). If I run this script as me "jon" it works. If I run it in crontab it fails. git uses the /home/jon/.netrc file for authentication: machine 192.168.10.97 login gitconfig password 1234579 The log from crontab is: TERM environment variable not set. - running as user: 1000 backingup git config on betty - testing if home is where I think it should be! 
/home/jon - testing if it can see netrc machine 192.168.10.97 login gitconfig password 1234579 got 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd walk 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd got be880f2d306778a538d592e7a02eb19f416612f7 got bd387e8def9f77aafa798bf53e80d949aba443e8 got 1bc1a59e12775841d4c59d77c63b8a73823138c2 walk bd387e8def9f77aafa798bf53e80d949aba443e8 Getting alternates list for http://192.168.10.97:8000/repositories/HOH-config-backups.git got 030512237bca72faf211e0e8ec2906164eac34f6 got 9bc2f575240bc1f61ff7d69777ce1a165d06b184 got b8400f7f01429104a9d4786a6bb1a16d293e37c1 got 2403b5bf611010e0b401f776f0e23b09ce744838 got 1a27944c48269ef3608a8f2466e43402d06faac0 got b686f45b7d57af4fa8ca0d528bb85216d6247e19 Getting pack list for http://192.168.10.97:8000/repositories/HOH-config-backups.git Getting index for pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff Getting pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff which contains ff84d6d48e9326066438d167a10251218d612b3d walk b686f45b7d57af4fa8ca0d528bb85216d6247e19 got 364e30daec17814073e668f490bb84af891fe1f7 got 23f6497e7f9b80e0d90adad73bd0407a0e5ac6ce got 9e77c47574b5e23ea669afe0c23ab235e4917ee1 got 6654e0d328a216b3783e98c47206cb2d01b3353d got 28821ffd437d2689ffb82c6e4b9c3f5372c95c4b got 8c384a24f645389e4d4b08013c79e9e73a658342 got d203be0123736ee025ce20c081f1489098648dfc got 1852603bf7709e71417d8ccec02390279d533642 got fb753a26b20b04694419fce8ecdaa8dbec105cf1 got 736028997cd84dd1c135f57e9d246674b9cd0b9d got 7af836249e20096d0476a548d5be702a071cdd4b got 240dc39d9db50df63073fc7927b2d002dfa0f54c got 93abd36e3935a01011eb753b635a1a0e984bf31e got c6269e28fecf4d8d0d98b9358aecb3acff02df44 got b0aa29432f73e64032682a351d436c24b14078ab walk 240dc39d9db50df63073fc7927b2d002dfa0f54c got 58fb66d9f35f8a5e32ff4683309c5f0c2a3a03c5 got 0da2def4de0565483cdbe6b87418ee2beb122e58 got 0f6a86c6f87ed52ad2ed01e5c6edd661d364930c got 437a93d27b5bb89c739a0564a34a616e832c3ebe got fe0385abe5c0acd8462268dac330bae00e934f1b got 24259f8f5c5c9ee974a75fe3d1e07c02e3e20fe9 got d29f624bf1a5eceedaa86c10fee35f62747c7d04 got 0154e4c987132585ea7a92b77d02dba285512d6b got eda8bf526567c25ee70addb2ad3c3c6aa57eac77 got 9f3d9d7262d66f9fa4f6a13b7c86199953f4bc4e got 8e20881e19667aa22245d0598646991067455a4d got abb1123145689b35eb19519952c71253ee45fa98 got dfeff593c79b4156ce2ce1adf043d0e80356488c got e20c5b48b1d360e0bcf34189e3f3d2bbf23e92cc got b13eb81cc274780322ecf786372320343926bec9 walk 8de83868b3fac748b0a55eba16c8f668ec852abb got b5961421bbc42afe7a07cc1c8b615aba26ba74d7 got 2650ba819019df4193b482733e29ca79b29f3f2c got b3111e1be8103e91803a97a817ed81f28025aca1 got b060be934d709684f5eb5dad3c03932a3589e864 got cf70d2043f081d7a4438e9d5a290a9f986c84060 got 80bf0f1cc836feab86d6935bb7968d8555a8d531 got da318d167920e34bc6573e4fc236249ccbbee316 got d82ac853d387b760149599e6e1ab96403f6ec672 got 0005f691d1f46550fdb4e56025f52e30a5b18cc2 Initialized empty Git repository in /tmp/git/HOH-config-backups/.git/ - copy Configuration Folders across Created commit 424df2f: committing any new configuration changes! 
3 files changed, 55 insertions(+), 1 deletions(-) create mode 100755 scripts/betty/gitConfig.sh error: Cannot access URL http://192.168.10.97:8000/repositories/HOH-config-backups.git/, return code 22 error: failed to push some refs to 'http://192.168.10.97:8000/repositories/HOH-config-backups.git' Git repo updated - backing up this script cp: cannot create regular file `/mnt/backups/scripts/betty/gitConfig.sh': Permission denied my crontab is: # m h dom mon dow command 04 * * * * /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1 I open it by doing: $crontab -e i.e. not as root. I am a bit confused as to why it is not running as my user (or what user id 1000 is). Not sure what I need to do to get the push with git to work within crontab. edit: found out about the userid: jon@betty:~$ id uid=1000(jon) gid=1000(jon) groups=4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),114(lpadmin),115(admin),1000(jon) here is my $HOME/.gitconfig file: [user] name = Jon Hawkins email = [email protected] Thanks

    Read the article

  • Preview Before You Paste with Live Preview in Office 2010

    - by DigitalGeekery
    Do you often find yourself frustrated that content you just copied and pasted didn’t turn out the way you expected? With the new Live Preview in Office 2010, you can preview how copied content will look when it’s pasted, even between Office applications. Not every paste preview option will be available in every circumstance. The available options will be based on the applications being used and what content is copied. Copy your content like normal by right-clicking and selecting Copy, pressing Ctrl + C, or selecting Copy from the Home tab. Next, select your location to paste the content. Now you can access the Paste Preview buttons either by selecting the Paste dropdown list from the Home tab…   …Or by right-clicking. As you hover your cursor over each of the Paste Options buttons, you will see a preview of what it will look like if you paste using that option. Click the corresponding button when you find the paste option you like. The “Paste” option will paste all the content and formatting as you can see below. Values will paste values only, no formatting.   Formatting will paste only the formatting, no values. Hover over Paste Special to reveal any additional paste options. The process is similar in other Office applications. As you can see in the Word document below, Keep Text Only will paste the text, but not the orange color format from the original text.   Even after you’ve pasted, there is still time to change your mind. After you paste content you’ll see a Paste Option button near your content. If you don’t, you can pull it up by pressing the Ctrl key. Note: This is also available after using Ctrl + V to paste. Click to enable the dropdown and select one of the available options.   Using Live Paste Preview between multiple applications is just as easy. If we preview pasting the content from our Word document into PowerPoint by using the Keep Source Formatting option, we’ll see that the outcome looks awful. Selecting the Use Destination Theme will merge the text into the theme of the PowerPoint document and looks a lot better on our slide.   Live Paste Preview is a nice addition to Office 2010 and is sure to save time spent undoing the unexpected consequences of pasting content. Looking for more Office 2010 tips? Check out some of our other Office 2010 posts like how to create a customized tab on the Office 2010 ribbon, and how to use the streamlined printing features in Office 2010.

    Read the article

  • SQLAuthority News – Follow up on – Replace a Column Name in Multiple Stored Procedure all together

    - by pinaldave
    Last month I had a fantastic time with lots of puzzles and brain teasers, and the amount of participation I received on the blog is indeed inspiring and makes me want to write more. One of the blog posts was about how to replace a column name in all the stored procedures. The article generated a very interesting conversation as a follow up. Please read the original article, Replace a Column Name in Multiple Stored Procedure all together, before reading this blog further, as they are connected. Let us start with a few of the interesting comments. SQL Server Expert Imran Mohammed left a wonderful and excellent first note; I suggest all of you read it. Imran stresses data modeling and logical as well as physical design: developers must create a logical design and get approval on naming conventions, data types, references, constraints, indexes, etc. He further suggested that one should not cut steps but must follow all the industry standards and guidelines. He extended my blog post with the following note – “Extending Pinal’s answer, what you can do is go to database properties, all tasks, script objects; in the scripting wizard select all the stored procedures for which you want to change the column name, export the query to a new window, and then do find and replace, all in one window, and execute the script. But make sure you check what you are replacing, because sometimes column names are also used in table names, for example Table Name: Product and Column Name: ProductId, ProductName”. Thanks Imran, great points! Gatej Alexandru suggested that it is not a good idea to DROP or CREATE, but rather to use ALTER, since quite possibly there may be permission issues as well. Very good point; let me see if I can write a blog post about it. Vinay Kumar and SQLStudent144 proposed another method to achieve the same result. I am combining their solutions and writing them here. Step 1: Press Ctrl+T or switch to “Results to Text” mode. Step 2: Execute the command below, where schema.objectname is the object or table you are searching for: SELECT 'EXEC sp_helptext [' + referencing_schema_name + '.' + referencing_entity_name + ']' FROM sys.dm_sql_referencing_entities('schema.objectname','OBJECT') Step 3: Copy the result and paste it into a new window; again press Ctrl+T or switch to “Results to Text” mode. Step 4: Execute the query; this produces the text of each referencing stored procedure. Step 5: Copy the result and paste it into a new window. Step 6: Find your search text in the script, make the necessary changes, and execute the script. Do not forget to remove any lines in the generated result set that are not relevant to the T-SQL script. Digitqr suggests we can do this for other objects besides stored procedures as well. Iosif suggests using the SQL Search tool from RedGate. I guess this sums it up well. We have an alternative perspective on our original issue of replacing a column name in multiple stored procedures. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
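    The same script-and-replace workflow can also be automated outside Management Studio. The sketch below is purely illustrative and not part of the original discussion: it assumes pyodbc, a placeholder connection string, and placeholder table and column names; it pulls the text of each referencing module through sys.dm_sql_referencing_entities and OBJECT_DEFINITION and prints ALTER PROCEDURE statements for manual review, in line with Gatej Alexandru's preference for ALTER over DROP/CREATE:

        # Illustrative sketch only: connection string, table, and column names are placeholders.
        import pyodbc

        OLD_COLUMN = "ProductId"      # placeholder: column name being replaced
        NEW_COLUMN = "ProductNumber"  # placeholder: new column name
        TABLE = "dbo.Product"         # placeholder: table whose references we search for

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=SampleDb;Trusted_Connection=yes;")
        cur = conn.cursor()

        # List every object that references the table (same DMV as in Step 2 above).
        cur.execute(
            "SELECT referencing_schema_name, referencing_entity_name "
            "FROM sys.dm_sql_referencing_entities(?, 'OBJECT')", TABLE)

        for schema, name in cur.fetchall():
            # OBJECT_DEFINITION returns the full CREATE text of the module.
            cur.execute("SELECT OBJECT_DEFINITION(OBJECT_ID(?))", f"{schema}.{name}")
            definition = cur.fetchone()[0]
            if definition and OLD_COLUMN in definition:
                # Assumes the referencing modules are stored procedures: swap CREATE
                # for ALTER, replace the column name, and print for manual review
                # instead of executing blindly.
                script = definition.replace("CREATE PROCEDURE", "ALTER PROCEDURE", 1)
                script = script.replace(OLD_COLUMN, NEW_COLUMN)
                print(f"-- {schema}.{name}")
                print(script)
                print("GO")

    Printing instead of executing keeps Imran's warning in force: review every generated statement before running it, because the old column name may also appear inside table names or comments.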

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days, data integration is often a key aspect of business success. For business analysts, it is very important to get integrated data from various sources, such as relational databases and cloud CRMs, in order to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them – Skyvia. Skyvia is a cloud data integration service that integrates data across cloud CRMs and different relational databases. It is a completely online solution and does not require anything except a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server, other databases, and cloud CRMs. You can use the Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports cloud CRMs such as Salesforce and Microsoft Dynamics CRM, and databases such as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports importing CSV files, either uploaded manually or stored on cloud file storage services such as Dropbox, Box, and Google Drive, or on FTP servers. When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After the first synchronization, Skyvia tracks data changes in the synchronized data stores. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers, which allows it to synchronize only the changed data. Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without adding any additional columns or changing the data structure. The only requirement for synchronization is that primary keys must be autogenerated. With Skyvia, data does not need to have the same structure in the integrated data stores. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures, including complex mathematical and string expressions, lookups, and so on. You may use data splitting – loading data from a single CSV file or source table into multiple related target tables – or load data from several source CSV files or tables into several related target tables. In each case Skyvia preserves data relations and builds the corresponding relations between the target data automatically. If you often work with cloud CRM data, the native CRM reporting and analysis tools may not be enough for you, while a vast set of professional data analysis and reporting tools is available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to a SQL Server database and apply those SQL Server tools to the data. In this case you can use Skyvia's data replication tools, which let you quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping; you just need to specify the columns to copy data from, and the target database tables are created automatically. Skyvia offers powerful filtering settings to replicate only the records you need. Skyvia also provides the capability to export data from SQL Server (including SQL Azure), other databases, and cloud CRMs to CSV files. These files can either be downloaded manually or loaded to cloud file storage services or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization – just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently, registration and use of Skyvia are completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Cloud Computing

    Read the article

  • Getting UPK data into Excel

    - by maria.cozzolino(at)oracle.com
    Did you ever want someone to review your UPK outline outside of the Developer? You can send your outline to an Excel report, which can be distributed through email. Depending on how much additional data you want with your outline, there are two ways you can do this task. Basic data: • You can print a listing of all the items in the outline. • With your outline open, choose File/Print... • Choose the "Save document as" command on the right, and choose Excel (or xlsx). • HINT: If you have not expanded your entire outline, it's faster to use the commands in the Developer to expand the entire outline. However, you can expand specific sections by clicking on them in the print preview. • NOTE: If you have the Details view displayed rather than the Player view, you can print all the data that appears in that view. Advanced data: If you want a more detailed report, you can use the HP Quality Center publishing style, which also creates an Excel file. This style contains a default set of fields for use with Quality Center, but any of the metadata fields can be added to the report, and it can be used for more than just importing into HP Quality Center. To add additional columns to the HP Quality Center publishing style: 1. Make a copy of the publishing style. This process ensures that you have a good copy to revert to if something goes wrong with your customizations, and also allows you to keep your modifications when the software is upgraded. 2. Open the copy of the columnspec.xml file in your favorite XML editor - I use Notepad. (This file is located in a language-specific folder in the HP Quality Center publishing style.) 3. Scroll down the columnspec file until you find the column to include. All the metadata fields that can be added to the report are listed in the columnspec file - you just need to tell the system to include the columns. 4. You will see a series of column definition sections, one per metadata field. 5. Change the value for "col export" to "yes". This will include the column in the Excel file. 6. If desired, change the value for "Play_ModesColHeader" to whatever name you wish to appear in the Excel column heading. 7. Save the columnspec file. 8. Save the publishing style package. Now, when you publish for HP Quality Center, you will see your newly added columns. You can refer to the section on Customizing HP Quality Center Output in the Content Deployment Guide for additional customization details. Happy customization! I'd be interested in hearing what other uses you have for Excel reporting. Wishing you and yours a happy and healthy New Year! ~~Maria Cozzolino, Manager of Software Requirements and UI
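    If you need to enable many columns, the same edit can be scripted instead of made by hand. The sketch below is purely illustrative: the element and attribute names (col, name, export, and a header child) are assumptions standing in for whatever your columnspec.xml actually contains, so inspect your own copy of the file and adjust the names before adapting anything like this:

        # Illustrative only: enable one column in a copy of columnspec.xml.
        # The element/attribute names are assumptions; check the real file first.
        import xml.etree.ElementTree as ET

        COLUMNSPEC = "columnspec.xml"  # path inside your copied publishing style
        COLUMN_NAME = "Play_Modes"     # hypothetical column identifier
        NEW_HEADER = "Play Modes"      # heading to show in the Excel output

        tree = ET.parse(COLUMNSPEC)
        root = tree.getroot()

        for col in root.iter("col"):            # assumed element name
            if col.get("name") == COLUMN_NAME:  # assumed attribute name
                col.set("export", "yes")        # step 5: include the column
                header = col.find("header")     # assumed child holding the heading
                if header is not None:
                    header.text = NEW_HEADER    # step 6: optional heading rename
                break

        tree.write(COLUMNSPEC, encoding="utf-8", xml_declaration=True)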

    Read the article
