Search Results

Search found 21283 results on 852 pages for 'control flow'.


  • Windows Azure Evolution – Preview Developer Portal

    - by Shaun
    With the MEET Windows Azure event on 7th June, there are many new features and updates in the Windows Azure platform. In the coming posts I will try to cover some of them, and in this first post I would like to take a quick walkthrough of the new preview developer portal.

    History of the Developer Portal
    If you have been working with Windows Azure since 2009 or 2010, you will remember the first version of the developer portal. It was built in HTML with very limited features; my impression of that old one is that the layout was not very attractive and the functionality was limited. In November 2010, along with the SDK 1.3 release, the developer portal took a big jump: to offer more usability and features it was rebuilt on Silverlight, so it ran like a desktop application with many windows, lists, commands and context menus. From 2010 until now many features have been added to this portal, such as remote desktop, co-admin, virtual connect, VM role, etc., and the portal itself became more and more complicated. But using Silverlight brought some problems. The first one is browser capability: most mobile and tablet browsers do not allow rich-content plugins such as Flash and Silverlight, which means people cannot open and configure their Azure services from their iPad, iPhone, Windows Phone, etc., even when all they need is to restart a hosted service or view the status of their databases. Another problem is performance: Silverlight provides a rich experience to the users, but it also needs more bandwidth. So in this upgrade the preview developer portal goes back to HTML with JavaScript, as a mobile-friendly, cross-browser, interactive web site.

    Preview Portal vs. Silverlight Portal
    Before I start talking about the new preview portal I should highlight that it is a PREVIEW version. Even though it covers almost all the features of the old portal, plus some cool new features I will mention in the coming posts, some things are still being developed and migrated, so sometimes you need to switch back to the old portal. For example, in the preview portal there is no co-admin management and no remote desktop function, and the SQL database management function takes you back to the old SQL Azure management portal. Microsoft has said these missing features will be moved into the preview portal over the next few months. Since the public URL of the developer portal, https://windows.azure.com/, now points to this preview one, to reach the old portal you need to click the preview button at the top of the page and then click the "Take me to the previous portal" link.

    Overview
    There are four parts in the preview portal. At the top is the header, which shows the account you are currently logged in with. If you click the header it shows the top menu of Windows Azure, from which you can navigate to the Windows Azure home page, the pricing page, community, account, etc. The navigation bar is on the left-hand side, with the following categories:
    ALL ITEMS - All items in your Windows Azure account, including web sites, services, databases, etc.
    WEB SITES - The web sites in your Windows Azure account. Only the web sites you have are shown; their linked resources appear when you drill down into a web site.
    VIRTUAL MACHINES - The virtual machines you have deployed to Azure.
    CLOUD SERVICES - All Windows Azure hosted services in your account.
    SQL DATABASES - All SQL databases (SQL Azure) in your account.
    STORAGE - All Windows Azure storage services in your account.
    NETWORKS - The virtual networks (Windows Azure Connect) you have created.
    The available items are listed in the main part of the page based on which category is currently selected. If there is no item, a quick-create link is shown. At the bottom of the page is the command and information bar. Based on what is selected and what the user is doing, it shows the related information and commands. For example, while I was creating a new web site, the information bar told me that my web site was being provisioned and there were two commands in the command bar; once it was ready, the command bar showed some commands I could apply to my new web site. "Web Sites" is a new feature introduced along with this upgrade. It gives us an easier and quicker way to establish a website from scratch or from an existing library; I will introduce it in more detail in the next post. Also, in the command bar you can create a service by clicking the NEW button, which slides the creation panel up.

    Where's My Hosted Services?
    The Windows Azure Hosted Services have been renamed to Cloud Services. Creating a new service is very easy: just click the NEW button at the bottom of the page and select CLOUD SERVICE and QUICK CREATE. This creates a blank hosted service without a deployment or certificate; it only needs you to specify the service URL and the affinity/region. The service then shows up in the list, and if you click the item all its information is shown in the main part. Since there is no package deployed to this service yet, we cannot see much information about it, but we can upload a package using the command at the bottom. As you can see, we can manage the configuration, instances and certificates, and we can scale up and down (change the VM size) as well as in and out (increase and decrease the instance count) for our service. Assuming I have created an ASP.NET MVC 3 web role project in Visual Studio and built the package, I can click the UPLOAD button on this page to deploy it. In the pop-up window I just specify my deployment name, package file and configuration file. I can also check the box below so that it will NOT warn me if this deployment has only one instance. Once we click the OK button our package is uploaded and provisioned by the platform, and after a while the information bar shows that the service is ready. We can see the basic information about this service and deployment on the dashboard page, for example the usage overview diagram, status, URL, public IP address, etc. On the configure page we can view and change the CSCFG content, such as the monitoring settings, connection strings and OS family. On the scale page we can increase and decrease the instance count, and on the instances page we can view the status of all instances. If your service uses some SQL databases and storage accounts, they are shown as linked resources on the linked resources page, and you can manage the certificates of this service on the certificates page.

    How About My Storage Services?
    The storage services can be managed by clicking the STORAGE link in the navigation bar, and we can create a new storage service from the NEW button. After specifying the storage name and region it will be provisioned by the platform. If you want to copy or manage the storage keys you can just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that you can monitor your storage service by enabling the monitoring configuration: click the storage item in the list and navigate to the configure page. As you can see on that page, you can enable monitoring for blob, table and queue, and you can also enable logging for any requests that come to the storage account. But as the tooltip on the page says, enabling monitoring and logging increases the usage of the storage account, which means it increases your bill, so make sure you enable them appropriately.

    And My SQL Databases (SQL Azure)?
    The last thing I want to quickly introduce is SQL databases, formerly named SQL Azure. You can create a new SQL database server and a new database by clicking the ADD button under the SQL DATABASES navigation item. In the pop-up window just specify the database name, the edition, size, collation and the server. You can select an existing SQL database server if you have one, or create a new one. If you choose to create a new server, there is another step where you specify the server login, password and region. Once it is ready you can manage your databases as well as the servers in the portal; for a particular server you can update the firewall settings on its configure page.

    So, What Else?
    There are some other areas of the preview portal I did not cover, such as virtual machines, virtual networks and web sites. I will talk about virtual machines and web sites in separate future posts. As for the virtual network, it is the Windows Azure Connect we are familiar with. But as I mentioned at the beginning of this post, the preview portal is still under development and some features are not available here. For example, you cannot manage the co-admins of your subscriptions, you cannot open remote desktop on your hosted services, and you cannot navigate directly to Windows Azure Service Bus, Access Control and Caching, which were formerly named Windows Azure AppFabric. In these cases you need to go back to the old portal, so for the coming several months we might need to use both sites.

    Summary
    In this post I quickly introduced the new Windows Azure developer portal. Since things have been rearranged and renamed, I demonstrated some features that exist in the old portal, such as how to create and deploy a hosted service and how to provision a storage service and a SQL database. All features in the old portal have been, are being, or will be migrated into this new portal, but some of them live in a different category or page that we need to figure out.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • ODEE Green Field (Windows) Part 5 - Deployment and Validation

    - by AndyL-Oracle
    And here we are, almost finished with our installation of Oracle Documaker Enterprise Edition ("ODEE") in a Windows green field environment. Let's recap what we've done so far: In part 1, I went over the basic process that I intended to show for installing ODEE on a green field server, and walked you through the basic installation of the Oracle 11g database. In part 2, I covered the installation of the WebLogic application server. In part 3, I showed you how to install SOA Suite for WebLogic. In part 4, we did the first part of the installation of ODEE itself. What remains after all of that is the deployment of the ODEE components onto the database and application server - so let's get to it!

    DATABASE
    First, we'll deploy the schemas to the database. The schemas are created during the ODEE installation according to the responses provided during the install process. To deploy the schemas, you'll need to log in to the database server in your green field environment. Open a command line and CD into ODEE_HOME\documaker\database\oracle11g. Run SQLPLUS as SYSDBA and execute dmkr_admin.sql: sqlplus / as sysdba @dmkr_admin.sql Then execute dmkr_asline.sql and dmkr_admin_correspondence_example.sql. If you require additional languages, run the appropriate SQL scripts (e.g. dmkr_asline_es.sql for Spanish).

    APPLICATION SERVER
    Next, we'll deploy the WebLogic domain and its components - Documaker web services, Documaker Interactive, the Documaker dashboard, and more. To deploy the components, you'll need to log in to the application server in your green field environment.
    1. Open Windows Explorer and navigate to ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts.
    2. Using a text editor such as Notepad++, modify weblogic_installation_properties and set the location of MIDDLEWARE_HOME and ODEE_HOME. If you have used the defaults you'll probably only need to change the E: to C:. Save the changes.
    3. Continuing in the same directory, use your text editor to modify set_middleware_env.cmd and set the drive and path to MIDDLEWARE_HOME. Again, if you have used the defaults you'll probably only need to change E: to C:. Save the changes.
    4. In the same directory, execute wls_create_domain.cmd by double-clicking it. This should run to completion. If it does not, review any errors, correct them, and rerun the script.
    5. In the same directory, execute wls_add_correspondence.cmd by double-clicking it - again, this should run to completion.
    6. Next, we'll start the AdminServer - this is the main WebLogic domain server. To start it, use Windows Explorer and navigate to MIDDLEWARE_HOME\user_projects\domains\idocumaker_domain. Double-click startWebLogic.cmd and the server startup will begin. Once you see output that indicates that the server status changed to RUNNING you may proceed.
    a. Note: if you see database connection errors, you probably didn't make sure your database name and connection type match. You can change this manually in the WebLogic Console. Open a browser and navigate to http://localhost:7001/console (replace localhost with the name of your application server host if you aren't opening the browser on the server), and log in with the weblogic credential you provided in the ODEE installation process.
    b. Once you're logged in, open Services > Data Sources. Select dmkr_admin and click Connection Pool.
    c. The end of the URL should match the connection type you chose. If you chose ServiceName, the URL should be jdbc:oracle:thin:@//<hostname>:1521/<serviceName>, and if you chose SID, the URL should be jdbc:oracle:thin:@//<hostname>:1521/<SIDname>.
    d. An example serviceName is a fully qualified DNS-style name, e.g. "idmaker.us.oracle.com" (it does not need to actually resolve in DNS). An example SID is just a name, e.g. IDMAKER.
    e. Save the change and repeat for the data source dmkr_asline.
    f. You will also need to make the same changes in the ODEE_HOME/documaker/docfactory/config/context/.bindings file - open the file in a text editor, locate the URL lines, make the appropriate change, and save the file.
    7. Back in the ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts directory, execute create_users_groups.cmd.
    8. In the same directory, execute create_users_groups_correspondence_example.cmd.
    9. Open a browser and navigate to http://localhost:7001/jpsquery. Replace localhost with the name of your application server host if you aren't running the browser on the application server, and if you changed the default port for the AdminServer from 7001, use the port you changed it to. You should see output indicating success.
    10. Start the WebLogic managed servers by opening a command prompt and navigating to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/. When you start the servers listed below, you will be prompted to enter the WebLogic credentials. You can prevent this by providing the credentials in the startManagedWebLogic.cmd file as the WLS_USER and WLS_PASS values; note that the credentials will then be stored in cleartext. To start each server, type the command shown.
    a. Start the JMS server: ./startManagedWebLogic.cmd jms_server
    b. Start Dashboard/Documaker Administrator: ./startManagedWebLogic.cmd dmkr_server
    c. Start Documaker Interactive for Correspondence: ./startManagedWebLogic.cmd idm_server

    SOA COMPOSITES
    If you're planning on testing the approval-process components of BPEL that can be used with Documaker Interactive, then use the following steps to deploy the SOA composites. If you're not going to use BPEL, you can skip to the next section.
    1. Stop the servers listed in the previous section (step 10) in the reverse order that they were started.
    2. Run the domain configuration command: navigate to and execute MIDDLEWARE_HOME/wlserver_10.3/common/bin/config.cmd.
    3. Select Extend and click Next.
    4. Select the iDocumaker Domain and click Next.
    5. Select Oracle SOA Suite – 11.1.1.0 (this may automatically select other components, which is OK). Click Next.
    6. View the Configure JDBC resources screen. You should not make any changes. Click Next.
    7. Check both connections and click Test Connections. After a successful test, click Next. If the tests fail, something is broken; go back to Configure JDBC resources and check your service name/SID.
    8. Check all schemas. Set a password (it will be the same for all schemas). Enter the database information (service name, host name, port). Click Next.
    9. The connections should test successfully. If not, go back and fix any errors. Click Next.
    10. Click Next to pass through Optional Configuration.
    11. Click Extend.
    12. Click Done.
    13. Open a terminal window, then navigate to and execute ODEE_HOME/documaker/j2ee/weblogic/oracle11g/bpel/antbuild.cmd.
    14. Start the WebLogic servers - AdminServer, jms_server, dmkr_server, idm_server. If you forgot how to do this, see step 10 in the previous section. Note: if you previously changed the startManagedWebLogic.cmd script for WLS_USER and WLS_PASS you will need to make those changes again.
    15. Start the WebLogic server soa_server1: MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/startManagedWebLogic.cmd soa_server1
    16. Open a browser to http://localhost:7001/console and log in.
    17. Navigate to Services > Data Sources and select DMKR_ASLINE.
    18. Click the Targets tab, check soa_server1, then click Save. Repeat for the DMKR_ADMIN data source.
    19. Open a command prompt, navigate to ODEE_HOME/j2ee/weblogic/oracle11g/scripts, and execute deploy_soa.cmd.
    That's it! (As if that wasn't enough?)

    DOCUMAKER
    Deploy the sample MRL resources by navigating to and executing ODEE_HOME/documaker/mstrres/dmres/deploysamplemrl.bat. You should see approximately 500 resources deployed into the database. Start the Factory services: Start > Run > services.msc, locate the service named "ODDF xxxx", right-click it and select Start. Note that each assembly line has a separate Factory setup, including its own Factory service and Docupresentment service. The services are named for the assembly line and the machine on which they are installed (because you could have multiple machines servicing a single assembly line; this allows for easy scripting to control all the services if you choose to do so). Repeat for the Docupresentment service - again, each assembly line has a separate Docupresentment. Using Windows Explorer, navigate to ODEE_HOME/documaker/mstrres/dmres/input, select one of the XML files, and copy it into ODEE_HOME/documaker/hotdirectory. Note: if you chose a different hot directory during installation, copy the file there instead. Momentarily you should see the XML file disappear! Open a browser and navigate to http://localhost:10001/DocumakerDashboard (previous versions 12.0-12.2 use http://localhost:10001/dashboard) and verify that the job processed successfully (a small endpoint-probe sketch appears at the end of this post). Note that some transactions may fail if you do not have a properly configured email server, and this is OK. You can set up a simple SMTP server (just search the internet for "SMTP developer" and you'll get several to choose from).

    So... that's it? Where are we at this point? You now have a completely functional ODEE installation, from soup to nuts as they say. You can further expand your installation by doing some of the following activities:
    - clustering WebLogic services
    - configuring WebLogic for redundancy
    - configuring Oracle 11g for RAC
    - adding additional Factory servers for redundancy/processing capacity
    - setting up a real MRL (instead of the sample resources)
    - testing Documaker Web Services for job submission
    and more! I certainly hope you've enjoyed this and find it useful. If you find yourself running into trouble, visit the Oracle Community for Documaker - there is plenty of activity there and you can ask questions. For more concentrated assistance, you can engage an Oracle consultant who is a subject matter expert to assist you. Feel free to email me [andy (dot) little (at) oracle (dot) com] and I can connect you with the appropriate resource to get started. Best of luck! -Andy
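    As referenced above, a quick way to re-check the web endpoints from this walkthrough after a restart is a small HTTP probe. The sketch below is not part of ODEE or the Oracle tooling; it simply assumes the defaults used in this post (localhost, AdminServer on port 7001, dashboard on port 10001) and reports whether each URL answers.

        # Minimal smoke test for the endpoints referenced in the walkthrough above.
        # Assumed defaults: localhost, AdminServer on 7001, dashboard on 10001.
        from urllib.request import urlopen
        from urllib.error import URLError

        URLS = [
            "http://localhost:7001/jpsquery",
            "http://localhost:10001/DocumakerDashboard",
        ]

        for url in URLS:
            try:
                with urlopen(url, timeout=10) as resp:
                    print(f"{url} -> HTTP {resp.status}")
            except URLError as err:
                # Covers both connection failures and HTTP error responses.
                print(f"{url} -> not reachable ({err.reason})")

    This does not replace the verification steps in the post; it is only a convenience when rechecking the environment later.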


  • Warning message during boot after installation of kernel 3.3: Kernel needs AppArmor 2.4 compatibility patch

    - by Matus Frisik
    I have Ubuntu Server 11.10, and after installing kernel 3.3 (I just followed the instructions from the site www.upbuntu.com - How To Install Linux 3.3 Kernel In Ubuntu 11.10/12.04) it shows me the following messages during boot: fsck from util-linux 2.19.1 fsck from util-linux 2.19.1 /dev/sda5: clean, 204099/1152816 files, 988854/4608639 blocks /dev/sda6: clean, 2345/1281120 files, 142711/5120710 blocks modem-manager[830]: ModemManager (version 0.5) starting... * Starting mDNS/DNS-SD daemon [ OK ] * Starting CUPS printing spooler/server [ OK ] * Starting Mount network filesystems [ OK ] * Stopping Mount network filesystems [ OK ] * Starting System V initialisation compatibility [ OK ] * Stopping Failsafe Boot Delay [ OK ] Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/bin.ping (/etc/apparmor.d/bin.ping line 28): profile /bin/ping network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/lightdm-guest-session (/etc/apparmor.d/lightdm-guest-session line 71): profile /usr/lib/lightdm/lightdm-guest-session-wrapper network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/sbin.dhclient (/etc/apparmor.d/sbin.dhclient line 73): profile /sbin/dhclient network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/sbin.klogd (/etc/apparmor.d/sbin.klogd line 35): profile /sbin/klogd network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/sbin.syslog-ng (/etc/apparmor.d/sbin.syslog-ng line 52): profile /sbin/syslog-ng network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/sbin.syslogd (/etc/apparmor.d/sbin.syslogd line 40): profile /sbin/syslogd network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.bin.chromium-browser (/etc/apparmor.d/usr.bin.chromium-browser line 165): profile /usr/lib/chromium-browser/chromium-browser network rules not enforced Warning from /etc/apparmor.d/usr.bin.chromium-browser (/etc/apparmor.d/usr.bin.chromium-browser line 165): profile browser_java network rules not enforced Warning from /etc/apparmor.d/usr.bin.chromium-browser (/etc/apparmor.d/usr.bin.chromium-browser line 165): profile browser_openjdk network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) 
Warning from /etc/apparmor.d/usr.bin.evince (/etc/apparmor.d/usr.bin.evince line 142): profile /usr/bin/evince network rules not enforced Warning from /etc/apparmor.d/usr.bin.evince (/etc/apparmor.d/usr.bin.evince line 142): profile /usr/bin/evince-previewer network rules not enforced Warning from /etc/apparmor.d/usr.bin.evince (/etc/apparmor.d/usr.bin.evince line 142): profile /usr/bin/evince-thumbnailer network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.lib.dovecot.deliver (/etc/apparmor.d/usr.lib.dovecot.deliver line 24): profile /usr/lib/dovecot/deliver network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.lib.dovecot.dovecot-auth (/etc/apparmor.d/usr.lib.dovecot.dovecot-auth line 24): profile /usr/lib/dovecot/dovecot-auth network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.lib.dovecot.imap (/etc/apparmor.d/usr.lib.dovecot.imap line 23): profile /usr/lib/dovecot/imap network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.lib.dovecot.imap-login (/etc/apparmor.d/usr.lib.dovecot.imap-login line 22): profile /usr/lib/dovecot/imap-login network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.lib.dovecot.managesieve-login (/etc/apparmor.d/usr.lib.dovecot.managesieve-login line 22): profile /usr/lib/dovecot/managesieve-login network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.lib.dovecot.pop3 (/etc/apparmor.d/usr.lib.dovecot.pop3 line 22): profile /usr/lib/dovecot/pop3 network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.lib.dovecot.pop3-login (/etc/apparmor.d/usr.lib.dovecot.pop3-login line 21): profile /usr/lib/dovecot/pop3-login network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.lib.telepathy (/etc/apparmor.d/usr.lib.telepathy line 86): profile /usr/lib/telepathy/mission-control-5 network rules not enforced Warning from /etc/apparmor.d/usr.lib.telepathy (/etc/apparmor.d/usr.lib.telepathy line 86): profile /usr/lib/telepathy/telepathy-* network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) 
Warning from /etc/apparmor.d/usr.sbin.avahi-daemon (/etc/apparmor.d/usr.sbin.avahi-daemon line 30): profile /usr/sbin/avahi-daemon network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.sbin.cupsd (/etc/apparmor.d/usr.sbin.cupsd line 170): profile /usr/lib/cups/backend/cups-pdf network rules not enforced Warning from /etc/apparmor.d/usr.sbin.cupsd (/etc/apparmor.d/usr.sbin.cupsd line 170): profile /usr/sbin/cupsd network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.sbin.dnsmasq (/etc/apparmor.d/usr.sbin.dnsmasq line 51): profile /usr/sbin/dnsmasq network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.sbin.dovecot (/etc/apparmor.d/usr.sbin.dovecot line 37): profile /usr/sbin/dovecot network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.sbin.identd (/etc/apparmor.d/usr.sbin.identd line 31): profile /usr/sbin/identd network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.sbin.mdnsd (/etc/apparmor.d/usr.sbin.mdnsd line 35): profile /usr/sbin/mdnsd network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.sbin.mysqld (/etc/apparmor.d/usr.sbin.mysqld line 44): profile /usr/sbin/mysqld network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.sbin.nmbd (/etc/apparmor.d/usr.sbin.nmbd line 21): profile /usr/sbin/nmbd network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.sbin.nscd (/etc/apparmor.d/usr.sbin.nscd line 46): profile /usr/sbin/nscd network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.sbin.smbd (/etc/apparmor.d/usr.sbin.smbd line 40): profile /usr/sbin/smbd network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) Warning from /etc/apparmor.d/usr.sbin.tcpdump (/etc/apparmor.d/usr.sbin.tcpdump line 64): profile /usr/sbin/tcpdump network rules not enforced Cache read/write disabled: /sys/kernel/security/apparmor/features interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.) 
Warning from /etc/apparmor.d/usr.sbin.traceroute (/etc/apparmor.d/usr.sbin.traceroute line 26): profile /usr/sbin/traceroute network rules not enforced * Starting AppArmor profiles [ OK ] speech-dispatcher disabled; edit /etc/default/speech-dispatcher Checking for running unattended-upgrades: What do these warnings mean and how can I fix them? Information about my system: response@response:~$ uname -a Linux response 3.3.0-030300-generic #201203182135 SMP Mon Mar 19 01:43:18 UTC 2012 i686 i686 i386 GNU/Linux
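    As an aside (an observation, not a fix): every one of the warnings above complains about the same missing kernel interface, /sys/kernel/security/apparmor/features. The tiny sketch below only reports whether that interface exists on the currently running kernel, which is what the AppArmor userspace tools are checking for.

        # Illustrative check only, not a fix: the boot warnings above all point at
        # the missing /sys/kernel/security/apparmor/features interface.
        import os

        FEATURES = "/sys/kernel/security/apparmor/features"

        if os.path.exists(FEATURES):
            print(f"{FEATURES} exists; userspace can query the kernel's AppArmor features")
        else:
            print(f"{FEATURES} is missing, which matches the 'AppArmor 2.4 compatibility patch' warnings")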


  • Clustering Basics and Challenges

    - by Karoly Vegh
    For the upcoming posts it seemed to be a good idea to dedicate some time to basic cluster concepts and theory. This post omits a lot of details that would explode the article's size; should you have questions, do not hesitate to ask them in the comments. The goal here is to get some concepts straight. I can't promise to give you complete definitions of cluster, cluster agent, quorum, voting, fencing and split-brain condition, so the following is more of an explanation. Here we go.

    -------- Cluster, HA, failover, switchover, scalability --------
    An attempted definition of a cluster: a cluster is a set of two or more server nodes dedicated to keeping application services alive. The nodes communicate with each other through the cluster software/framework, test and probe the health status of the server nodes and services, and, with quorum-based decisions and switchover/failover techniques, keep the application services running on them available. That is, should a node that runs a service unexpectedly lose functionality or connectivity, the other nodes take over and run the services, so that availability is guaranteed. The main goal of a cluster is to provide availability while strictly sticking to a consistent cluster configuration. At this point we have to add that this defines an HA cluster, a High-Availability cluster, where the cluster nodes are planned to run the services in an active-standby, or failover, fashion. An example could be a single-instance database. Some applications can be run in a distributed or scalable fashion; in that case instances of the application run actively on separate cluster nodes, serving service requests simultaneously. An example of this could be a web server that forwards connection requests to many backend servers in a round-robin way, or a database running in an active-active RAC setup.

    -------- Cluster architecture, interconnect, topologies --------
    Now, what is a cluster made of? Servers, right. These servers (the cluster nodes) need to communicate. This of course happens over the network, usually over dedicated network interfaces interconnecting all the cluster nodes. These connections are called interconnects. How many cluster nodes are in a cluster? There are different cluster topologies. The simplest one is a clustered pair topology, involving only two cluster nodes; there are several more topologies, described in the relevant documentation. Also, to answer the question: Solaris Cluster allows you to run up to 16 servers in a cluster. Where should these cluster nodes be placed? A very important question, and the right answer is: it depends on what you plan to achieve with the cluster. Do you only plan to guard against a server outage? Then you can place them right next to each other in the datacenter. Do you need to survive a datacenter outage? In that case you should of course place them at least in different fire zones, or in two geographically distant datacenters to avoid disasters like floods, large-scale fires or power outages. We call this a stretched or campus cluster, with the cluster nodes several kilometers away from each other. To cover really large distances, you probably need to move to a geo cluster, which is a different kind of animal. What is a geo cluster? A geographic cluster in Solaris Cluster terms is actually a metacluster between two separate (locally HA) clusters.

    -------- Cluster resource types, agents, resources, resource groups --------
    So how does the cluster manage my applications? The cluster needs to start, stop and probe your applications. If your application runs, the cluster needs to check regularly whether the application is healthy: does it respond over the network, does it have all its processes running, etc. This is called probing. If the cluster deems the application to be in a faulty state, it can try to restart it locally or decide to switch the service (stop it on node A, start it on node B). Starting, stopping and probing are the three actions that a cluster agent performs. There are many different kinds of agents included in Solaris Cluster, but you can build your own too. Examples are an agent that manages (mounts, moves) ZFS filesystems, the Oracle DB HA agent that takes care of the database, or an agent that moves a floating IP address between nodes. There are lots of other agents included for Apache, Tomcat, MySQL, Oracle DB, Oracle WebLogic, Zones, LDoms, NFS, DNS, etc. We also need to clarify the difference between a cluster resource and a cluster resource group. A cluster resource is something that is managed by a cluster agent. Cluster resource types are included in Solaris Cluster (see above, e.g. HAStoragePlus, HA-Oracle, LogicalHost). You can group cluster resources into cluster resource groups and switch these groups together from one node to another. To stick to the example above, to move an Oracle DB service from one node to another you have to switch the group between nodes, and the agents of the cluster resources in the group will do the following:
    On node A:
    - shut down the DB
    - unconfigure the LogicalHost IP the DB listener listens on
    - unmount the filesystem
    Then, on node B:
    - mount the FS
    - configure the IP
    - start up the DB

    -------- Voting, Quorum, Split Brain Condition, Fencing, Amnesia --------
    How do the cluster nodes agree upon their actions? How do they decide which node runs which services? Another important question. Running a cluster is a strictly democratic thing: every node has votes, and you need the majority of votes to have the deciding power. Now, this is usually no problem; cluster nodes think very much alike. Still, in a production system every action needs to be agreed upon. Agreeing is easy as long as the cluster nodes all behave and talk to each other over the interconnect. But if the interconnect is gone or down, this all gets tricky and confusing. Cluster nodes think like this: "My job is to run these services. The other node does not answer my interconnect communication, it must be down. I'd better take control and run the services!" The problem is, as I have already mentioned, cluster nodes very much think alike. If the interconnect is gone, they all assume the other node is down, and they all want to mount the data backend, enable the IP and run the database. Double IPs, double mounts, double DB instances - now that is trouble. Also, in a 2-node cluster both nodes have only 50% of the votes, that is, neither of them alone is allowed to run a cluster. This is where you need a quorum device. According to Wikipedia, the "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons." The nodes need additional votes to run the cluster, and for this requirement a 2-node cluster needs a quorum device or a quorum server. If the interconnect is gone (this is what we call a split-brain condition), both nodes start to race and try to reserve the quorum device for themselves. They do this because the quorum device carries an additional vote that can ensure a majority (50% + 1). The one that manages to lock the quorum device (e.g. if it is an FC LUN, it SCSI-reserves it) wins the right to build and run the cluster; the other one, realizing it was late, panics/reboots to ensure the cluster configuration stays consistent. Losing the interconnect does not only endanger the availability of services, it also endangers the consistency of the cluster configuration. Just imagine node A being down while the cluster configuration changes; now node B goes down and node A comes up. Node A is not up to date about the changes to the cluster configuration, so it will refuse to start a cluster, since that would lead to cluster amnesia: the cluster had some changes, but would now run with an older state of the cluster configuration repository, as if it had forgotten about the changes. Also, to ensure application data consistency, the cluster node that wins the race makes sure that a server that is not part of the cluster, or cannot currently join it, cannot access the devices. This procedure is called fencing, and it usually happens to storage LUNs via SCSI reservation. Now, another important question: where do I place the quorum disk? Imagine having two sites, two separate datacenters, one in the north of the city and one in the south. You run a stretched cluster in the clustered pair topology. Where do you place the quorum disk or server? If you put it into the north DC and that gets hit by a meteor, you lose one cluster node, which isn't a problem, but you also lose your quorum, and the south cluster node can't keep the cluster running for lack of votes. This problem can't be solved with two sites and a campus cluster. You will need a third site, either to place the quorum server at or for a third cluster node. Otherwise, lacking a majority, if you lose the site that had your quorum, you lose the cluster. (A small sketch of this vote arithmetic follows at the end of this post.) Okay, we covered the very basics. We haven't talked about virtualization support, CCR, cluster filesystems, DID devices, affinities, storage replication, management tools or upgrade procedures; should those be interesting for you, let me know in the comments, along with any other questions. Given enough demand I'd be glad to write a follow-up post too. Now I really want to move on to the second part in the series: cluster installation. Oh, and as for additional sources of information, I recommend the documentation: http://docs.oracle.com/cd/E23623_01/index.html, and the OTN Oracle Solaris Cluster site: http://www.oracle.com/technetwork/server-storage/solaris-cluster/index.html
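    To make the vote arithmetic concrete, here is a toy sketch of the majority rule described above. It is not Solaris Cluster code; it only encodes the simple model from this post: one vote per configured node, one optional vote for a quorum device, and a partition may only form the cluster if it holds a strict majority of the configured votes.

        # Toy illustration of quorum-based majority voting (not Solaris Cluster code).
        # Model from the post: one vote per configured node, plus one vote for an
        # optional quorum device; a partition needs a strict majority to run the cluster.
        def can_form_cluster(nodes_in_partition, total_nodes,
                             holds_quorum_device, has_quorum_device=True):
            total_votes = total_nodes + (1 if has_quorum_device else 0)
            partition_votes = nodes_in_partition + (1 if holds_quorum_device else 0)
            majority = total_votes // 2 + 1   # strict majority: more than half of all votes
            return partition_votes >= majority

        # Two-node cluster with a quorum device, split-brain scenario: the node that
        # wins the race for the quorum device holds 2 of 3 votes, the loser only 1.
        print(can_form_cluster(1, 2, holds_quorum_device=True))    # True  -> may build the cluster
        print(can_form_cluster(1, 2, holds_quorum_device=False))   # False -> panics/reboots

    The same arithmetic shows the campus-cluster problem from the post: if the quorum vote is lost together with one site, the surviving node is left with 1 of 3 votes and cannot reach a majority.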


  • How to mount a blu-ray drive?

    - by Stephan Schielke
    Maybe it is for the best to close this question. This has nothing to do with a Blu-ray drive in general anymore. Probably a hardware defect. I will try to test it with a Windows system and different cables again... Thx so far. I have a Blu-ray/DVD/CD-ROM drive with SATA. Ubuntu won't find it under /dev/sd wodim --devices wodim: Overview of accessible drives (1 found) : ------------------------------------------------------------------------- 0 dev='/dev/sg2' rwrw-- : 'HL-DT-ST' 'BDDVDRW CH08LS10' ------------------------------------------------------------------------- cdrecord -scanbus scsibus2: 2,0,0 200) 'HL-DT-ST' 'BDDVDRW CH08LS10' '2.00' Removable CD-ROM fdisk doesn't even list it. Ubuntu only automounts blank DVDs, but neither CD-ROMs nor Blu-rays. I also changed the SATA slot, the SATA cable and the power cable. The drive works with a Windows system. This happens when I try to mount: sudo mount -t auto /dev/scd0 /media/bluray mount: you must specify the filesystem type I tried all the filesystems there are. I also installed makemkv. It finds the drive but not the disc. Here is my /dev: ls -al /dev total 12 drwxr-xr-x 17 root root 4420 2011-11-25 19:36 . drwxr-xr-x 28 root root 4096 2011-11-25 07:12 .. crw------- 1 root root 10, 235 2011-11-25 19:28 autofs -rw-r--r-- 1 root root 630 2011-11-25 19:28 .blkid.tab -rw-r--r-- 1 root root 630 2011-11-25 19:28 .blkid.tab.old drwxr-xr-x 2 root root 700 2011-11-25 19:27 block drwxr-xr-x 2 root root 100 2011-11-25 19:27 bsg crw------- 1 root root 10, 234 2011-11-25 19:28 btrfs-control drwxr-xr-x 3 root root 60 2011-11-25 19:27 bus drwxr-xr-x 2 root root 3820 2011-11-25 19:28 char crw------- 1 root root 5, 1 2011-11-25 19:28 console lrwxrwxrwx 1 root root 11 2011-11-25 19:28 core -> /proc/kcore drwxr-xr-x 2 root root 60 2011-11-25 19:28 cpu crw------- 1 root root 10, 60 2011-11-25 19:28 cpu_dma_latency drwxr-xr-x 7 root root 140 2011-11-25 19:27 disk crw------- 1 root root 10, 61 2011-11-25 19:28 ecryptfs crw-rw---- 1 root video 29, 0 2011-11-25 19:28 fb0 lrwxrwxrwx 1 root root 13 2011-11-25 19:28 fd -> /proc/self/fd crw-rw-rw- 1 root root 1, 7 2011-11-25 19:28 full crw-rw-rw- 1 root fuse 10, 229 2011-11-25 19:28 fuse crw------- 1 root root 251, 0 2011-11-25 19:28 hidraw0 crw------- 1 root root 251, 1 2011-11-25 19:28 hidraw1 crw------- 1 root root 10, 228 2011-11-25 19:28 hpet lrwxrwxrwx 1 root root 14 2011-11-25 19:27 .initramfs -> /run/initramfs drwxr-xr-x 4 root root 220 2011-11-25 19:28 input crw------- 1 root root 1, 11 2011-11-25 19:28 kmsg srw-rw-rw- 1 root root 0 2011-11-25 19:28 log brw-rw---- 1 root disk 7, 0 2011-11-25 19:28 loop0 brw-rw---- 1 root disk 7, 1 2011-11-25 19:28 loop1 brw-rw---- 1 root disk 7, 2 2011-11-25 19:28 loop2 brw-rw---- 1 root disk 7, 3 2011-11-25 19:28 loop3 brw-rw---- 1 root disk 7, 4 2011-11-25 19:28 loop4 brw-rw---- 1 root disk 7, 5 2011-11-25 19:28 loop5 brw-rw---- 1 root disk 7, 6 2011-11-25 19:28 loop6 brw-rw---- 1 root disk 7, 7 2011-11-25 19:28 loop7 drwxr-xr-x 2 root root 60 2011-11-25 19:27 mapper crw------- 1 root root 10, 227 2011-11-25 19:28 mcelog crw-r----- 1 root kmem 1, 1 2011-11-25 19:28 mem drwxr-xr-x 2 root root 60 2011-11-25 19:27 net crw------- 1 root root 10, 59 2011-11-25 19:28 network_latency crw------- 1 root root 10, 58 2011-11-25 19:28 network_throughput crw-rw-rw- 1 root root 1, 3 2011-11-25 19:28 null crw-rw-rw- 1 root root 195, 0 2011-11-25 19:28 nvidia0 crw-rw-rw- 1 root root 195, 255 2011-11-25 19:28 nvidiactl crw------- 1 root root 1, 12 2011-11-25 19:28 oldmem crw-r----- 1 root kmem 1, 4 
2011-11-25 19:28 port crw------- 1 root root 108, 0 2011-11-25 19:28 ppp crw------- 1 root root 10, 1 2011-11-25 19:28 psaux crw-rw-rw- 1 root tty 5, 2 2011-11-25 20:00 ptmx drwxr-xr-x 2 root root 0 2011-11-25 19:27 pts brw-rw---- 1 root disk 1, 0 2011-11-25 19:28 ram0 brw-rw---- 1 root disk 1, 1 2011-11-25 19:28 ram1 brw-rw---- 1 root disk 1, 10 2011-11-25 19:28 ram10 brw-rw---- 1 root disk 1, 11 2011-11-25 19:28 ram11 brw-rw---- 1 root disk 1, 12 2011-11-25 19:28 ram12 brw-rw---- 1 root disk 1, 13 2011-11-25 19:28 ram13 brw-rw---- 1 root disk 1, 14 2011-11-25 19:28 ram14 brw-rw---- 1 root disk 1, 15 2011-11-25 19:28 ram15 brw-rw---- 1 root disk 1, 2 2011-11-25 19:28 ram2 brw-rw---- 1 root disk 1, 3 2011-11-25 19:28 ram3 brw-rw---- 1 root disk 1, 4 2011-11-25 19:28 ram4 brw-rw---- 1 root disk 1, 5 2011-11-25 19:28 ram5 brw-rw---- 1 root disk 1, 6 2011-11-25 19:28 ram6 brw-rw---- 1 root disk 1, 7 2011-11-25 19:28 ram7 brw-rw---- 1 root disk 1, 8 2011-11-25 19:28 ram8 brw-rw---- 1 root disk 1, 9 2011-11-25 19:28 ram9 crw-rw-rw- 1 root root 1, 8 2011-11-25 19:28 random crw-rw-r--+ 1 root root 10, 62 2011-11-25 19:28 rfkill lrwxrwxrwx 1 root root 4 2011-11-25 19:28 rtc -> rtc0 crw------- 1 root root 254, 0 2011-11-25 19:28 rtc0 lrwxrwxrwx 1 root root 3 2011-11-25 19:38 scd0 -> sr0 brw-rw---- 1 root disk 8, 0 2011-11-25 19:28 sda brw-rw---- 1 root disk 8, 1 2011-11-25 19:28 sda1 brw-rw---- 1 root disk 8, 2 2011-11-25 19:28 sda2 brw-rw---- 1 root disk 8, 3 2011-11-25 19:28 sda3 brw-rw---- 1 root disk 8, 5 2011-11-25 19:28 sda5 brw-rw---- 1 root disk 8, 6 2011-11-25 19:28 sda6 brw-rw---- 1 root disk 8, 16 2011-11-25 19:28 sdb brw-rw---- 1 root disk 8, 17 2011-11-25 19:28 sdb1 crw-rw---- 1 root disk 21, 0 2011-11-25 19:28 sg0 crw-rw---- 1 root disk 21, 1 2011-11-25 19:28 sg1 crw-rw----+ 1 root cdrom 21, 2 2011-11-25 19:28 sg2 lrwxrwxrwx 1 root root 8 2011-11-25 19:28 shm -> /run/shm crw------- 1 root root 10, 231 2011-11-25 19:28 snapshot drwxr-xr-x 4 root root 280 2011-11-25 19:28 snd brw-rw----+ 1 root cdrom 11, 0 2011-11-25 19:38 sr0 lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stderr -> /proc/self/fd/2 lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stdin -> /proc/self/fd/0 lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stdout -> /proc/self/fd/1 crw-rw-rw- 1 root tty 5, 0 2011-11-25 19:35 tty crw--w---- 1 root tty 4, 0 2011-11-25 19:28 tty0 crw------- 1 root root 4, 1 2011-11-25 19:28 tty1 crw--w---- 1 root tty 4, 10 2011-11-25 19:28 tty10 crw--w---- 1 root tty 4, 11 2011-11-25 19:28 tty11 crw--w---- 1 root tty 4, 12 2011-11-25 19:28 tty12 crw--w---- 1 root tty 4, 13 2011-11-25 19:28 tty13 crw--w---- 1 root tty 4, 14 2011-11-25 19:28 tty14 crw--w---- 1 root tty 4, 15 2011-11-25 19:28 tty15 crw--w---- 1 root tty 4, 16 2011-11-25 19:28 tty16 crw--w---- 1 root tty 4, 17 2011-11-25 19:28 tty17 crw--w---- 1 root tty 4, 18 2011-11-25 19:28 tty18 crw--w---- 1 root tty 4, 19 2011-11-25 19:28 tty19 crw------- 1 root root 4, 2 2011-11-25 19:28 tty2 crw--w---- 1 root tty 4, 20 2011-11-25 19:28 tty20 crw--w---- 1 root tty 4, 21 2011-11-25 19:28 tty21 crw--w---- 1 root tty 4, 22 2011-11-25 19:28 tty22 crw--w---- 1 root tty 4, 23 2011-11-25 19:28 tty23 crw--w---- 1 root tty 4, 24 2011-11-25 19:28 tty24 crw--w---- 1 root tty 4, 25 2011-11-25 19:28 tty25 crw--w---- 1 root tty 4, 26 2011-11-25 19:28 tty26 crw--w---- 1 root tty 4, 27 2011-11-25 19:28 tty27 crw--w---- 1 root tty 4, 28 2011-11-25 19:28 tty28 crw--w---- 1 root tty 4, 29 2011-11-25 19:28 tty29 crw------- 1 root root 4, 3 2011-11-25 19:28 tty3 crw--w---- 1 
root tty 4, 30 2011-11-25 19:28 tty30 crw--w---- 1 root tty 4, 31 2011-11-25 19:28 tty31 crw--w---- 1 root tty 4, 32 2011-11-25 19:28 tty32 crw--w---- 1 root tty 4, 33 2011-11-25 19:28 tty33 crw--w---- 1 root tty 4, 34 2011-11-25 19:28 tty34 crw--w---- 1 root tty 4, 35 2011-11-25 19:28 tty35 crw--w---- 1 root tty 4, 36 2011-11-25 19:28 tty36 crw--w---- 1 root tty 4, 37 2011-11-25 19:28 tty37 crw--w---- 1 root tty 4, 38 2011-11-25 19:28 tty38 crw--w---- 1 root tty 4, 39 2011-11-25 19:28 tty39 crw------- 1 root root 4, 4 2011-11-25 19:28 tty4 crw--w---- 1 root tty 4, 40 2011-11-25 19:28 tty40 crw--w---- 1 root tty 4, 41 2011-11-25 19:28 tty41 crw--w---- 1 root tty 4, 42 2011-11-25 19:28 tty42 crw--w---- 1 root tty 4, 43 2011-11-25 19:28 tty43 crw--w---- 1 root tty 4, 44 2011-11-25 19:28 tty44 crw--w---- 1 root tty 4, 45 2011-11-25 19:28 tty45 crw--w---- 1 root tty 4, 46 2011-11-25 19:28 tty46 crw--w---- 1 root tty 4, 47 2011-11-25 19:28 tty47 crw--w---- 1 root tty 4, 48 2011-11-25 19:28 tty48 crw--w---- 1 root tty 4, 49 2011-11-25 19:28 tty49 crw------- 1 root root 4, 5 2011-11-25 19:28 tty5 crw--w---- 1 root tty 4, 50 2011-11-25 19:28 tty50 crw--w---- 1 root tty 4, 51 2011-11-25 19:28 tty51 crw--w---- 1 root tty 4, 52 2011-11-25 19:28 tty52 crw--w---- 1 root tty 4, 53 2011-11-25 19:28 tty53 crw--w---- 1 root tty 4, 54 2011-11-25 19:28 tty54 crw--w---- 1 root tty 4, 55 2011-11-25 19:28 tty55 crw--w---- 1 root tty 4, 56 2011-11-25 19:28 tty56 crw--w---- 1 root tty 4, 57 2011-11-25 19:28 tty57 crw--w---- 1 root tty 4, 58 2011-11-25 19:28 tty58 crw--w---- 1 root tty 4, 59 2011-11-25 19:28 tty59 crw------- 1 root root 4, 6 2011-11-25 19:28 tty6 crw--w---- 1 root tty 4, 60 2011-11-25 19:28 tty60 crw--w---- 1 root tty 4, 61 2011-11-25 19:28 tty61 crw--w---- 1 root tty 4, 62 2011-11-25 19:28 tty62 crw--w---- 1 root tty 4, 63 2011-11-25 19:28 tty63 crw--w---- 1 root tty 4, 7 2011-11-25 19:28 tty7 crw--w---- 1 root tty 4, 8 2011-11-25 19:28 tty8 crw--w---- 1 root tty 4, 9 2011-11-25 19:28 tty9 crw------- 1 root root 5, 3 2011-11-25 19:28 ttyprintk crw-rw---- 1 root dialout 4, 64 2011-11-25 19:28 ttyS0 crw-rw---- 1 root dialout 4, 65 2011-11-25 19:28 ttyS1 crw-rw---- 1 root dialout 4, 74 2011-11-25 19:28 ttyS10 crw-rw---- 1 root dialout 4, 75 2011-11-25 19:28 ttyS11 crw-rw---- 1 root dialout 4, 76 2011-11-25 19:28 ttyS12 crw-rw---- 1 root dialout 4, 77 2011-11-25 19:28 ttyS13 crw-rw---- 1 root dialout 4, 78 2011-11-25 19:28 ttyS14 crw-rw---- 1 root dialout 4, 79 2011-11-25 19:28 ttyS15 crw-rw---- 1 root dialout 4, 80 2011-11-25 19:28 ttyS16 crw-rw---- 1 root dialout 4, 81 2011-11-25 19:28 ttyS17 crw-rw---- 1 root dialout 4, 82 2011-11-25 19:28 ttyS18 crw-rw---- 1 root dialout 4, 83 2011-11-25 19:28 ttyS19 crw-rw---- 1 root dialout 4, 66 2011-11-25 19:28 ttyS2 crw-rw---- 1 root dialout 4, 84 2011-11-25 19:28 ttyS20 crw-rw---- 1 root dialout 4, 85 2011-11-25 19:28 ttyS21 crw-rw---- 1 root dialout 4, 86 2011-11-25 19:28 ttyS22 crw-rw---- 1 root dialout 4, 87 2011-11-25 19:28 ttyS23 crw-rw---- 1 root dialout 4, 88 2011-11-25 19:28 ttyS24 crw-rw---- 1 root dialout 4, 89 2011-11-25 19:28 ttyS25 crw-rw---- 1 root dialout 4, 90 2011-11-25 19:28 ttyS26 crw-rw---- 1 root dialout 4, 91 2011-11-25 19:28 ttyS27 crw-rw---- 1 root dialout 4, 92 2011-11-25 19:28 ttyS28 crw-rw---- 1 root dialout 4, 93 2011-11-25 19:28 ttyS29 crw-rw---- 1 root dialout 4, 67 2011-11-25 19:28 ttyS3 crw-rw---- 1 root dialout 4, 94 2011-11-25 19:28 ttyS30 crw-rw---- 1 root dialout 4, 95 2011-11-25 19:28 ttyS31 crw-rw---- 1 root dialout 4, 
68 2011-11-25 19:28 ttyS4 crw-rw---- 1 root dialout 4, 69 2011-11-25 19:28 ttyS5 crw-rw---- 1 root dialout 4, 70 2011-11-25 19:28 ttyS6 crw-rw---- 1 root dialout 4, 71 2011-11-25 19:28 ttyS7 crw-rw---- 1 root dialout 4, 72 2011-11-25 19:28 ttyS8 crw-rw---- 1 root dialout 4, 73 2011-11-25 19:28 ttyS9 d rwxr-xr-x 3 root root 60 2011-11-25 19:28 .udev crw-r----- 1 root root 10, 223 2011-11-25 19:28 uinput crw-rw-rw- 1 root root 1, 9 2011-11-25 19:28 urandom drwxr-xr-x 2 root root 60 2011-11-25 19:27 usb crw------- 1 root root 252, 0 2011-11-25 19:28 usbmon0 crw------- 1 root root 252, 1 2011-11-25 19:28 usbmon1 crw------- 1 root root 252, 2 2011-11-25 19:28 usbmon2 crw------- 1 root root 252, 3 2011-11-25 19:28 usbmon3 crw------- 1 root root 252, 4 2011-11-25 19:28 usbmon4 crw------- 1 root root 252, 5 2011-11-25 19:28 usbmon5 crw------- 1 root root 252, 6 2011-11-25 19:28 usbmon6 crw------- 1 root root 252, 7 2011-11-25 19:28 usbmon7 crw------- 1 root root 252, 8 2011-11-25 19:28 usbmon8 drwxr-xr-x 4 root root 80 2011-11-25 19:28 v4l crw------- 1 root root 10, 57 2011-11-25 19:28 vboxdrv crw------- 1 root root 10, 56 2011-11-25 19:28 vboxnetctl drwxr-x--- 4 root vboxusers 80 2011-11-25 19:28 vboxusb crw-rw---- 1 root tty 7, 0 2011-11-25 19:28 vcs crw-rw---- 1 root tty 7, 1 2011-11-25 19:28 vcs1 crw-rw---- 1 root tty 7, 2 2011-11-25 19:28 vcs2 crw-rw---- 1 root tty 7, 3 2011-11-25 19:28 vcs3 crw-rw---- 1 root tty 7, 4 2011-11-25 19:28 vcs4 crw-rw---- 1 root tty 7, 5 2011-11-25 19:28 vcs5 crw-rw---- 1 root tty 7, 6 2011-11-25 19:28 vcs6 crw-rw---- 1 root tty 7, 128 2011-11-25 19:28 vcsa crw-rw---- 1 root tty 7, 129 2011-11-25 19:28 vcsa1 crw-rw---- 1 root tty 7, 130 2011-11-25 19:28 vcsa2 crw-rw---- 1 root tty 7, 131 2011-11-25 19:28 vcsa3 crw-rw---- 1 root tty 7, 132 2011-11-25 19:28 vcsa4 crw-rw---- 1 root tty 7, 133 2011-11-25 19:28 vcsa5 crw-rw---- 1 root tty 7, 134 2011-11-25 19:28 vcsa6 crw------- 1 root root 10, 63 2011-11-25 19:28 vga_arbiter crw-rw----+ 1 root video 81, 0 2011-11-25 19:28 video0 crw-rw-rw- 1 root root 1, 5 2011-11-25 19:28 zero sg_scan -i gives me: sudo sg_scan -i /dev/sg0: scsi0 channel=0 id=0 lun=0 [em] ATA ST31000524NS SN12 [rmb=0 cmdq=0 pqual=0 pdev=0x0] /dev/sg1: scsi0 channel=0 id=1 lun=0 [em] ATA WDC WD15EADS-00S 01.0 [rmb=0 cmdq=0 pqual=0 pdev=0x0] /dev/sg2: scsi2 channel=0 id=0 lun=0 [em] HL-DT-ST BDDVDRW CH08LS10 2.00 [rmb=1 cmdq=0 pqual=0 pdev=0x5] sg_map gives me: /dev/sg0 /dev/sda /dev/sg1 /dev/sdb /dev/sg2 /dev/scd0 lsscsi -l gives me: [0:0:0:0] disk ATA ST31000524NS SN12 /dev/sda state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30 [0:0:1:0] disk ATA WDC WD15EADS-00S 01.0 /dev/sdb state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30 [2:0:0:0] cd/dvd HL-DT-ST BDDVDRW CH08LS10 2.00 /dev/sr0 state=running queue_depth=1 scsi_level=6 type=5 device_blocked=0 timeout=30 my udf mod is: filename: /lib/modules/3.0.0-14-generic/kernel/fs/udf/udf.ko license: GPL description: Universal Disk Format Filesystem author: Ben Fennema srcversion: 6ABDE012374D96B9685B8E5 depends: crc-itu-t vermagic: 3.0.0-14-generic SMP mod_unload modversions Do I need special drivers or mods enabled? Do I need to change some BIOS settings? edit: Somehow I am now able to fire the mount command without any filesystem errors, but now I get: mount: no medium found on /dev/sr0
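    Regarding the final "mount: no medium found on /dev/sr0" message, one low-level check sidesteps filesystem guessing entirely: try to read a raw sector from the device node. The sketch below is only illustrative and is not an answer to the question; it assumes the drive is still /dev/sr0 and that it is run with sufficient read permissions.

        # Illustrative only: read one 2048-byte sector straight from the optical
        # device to see whether the kernel reports a readable medium at all,
        # independent of any filesystem type. Assumes the device is /dev/sr0.
        DEVICE = "/dev/sr0"

        try:
            with open(DEVICE, "rb") as dev:
                sector = dev.read(2048)
            print(f"Read {len(sector)} bytes from {DEVICE}: a medium is present and readable")
        except OSError as err:
            # An empty or unreadable drive typically surfaces here, e.g. "No medium found".
            print(f"Could not read from {DEVICE}: {err}")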


  • CodePlex Daily Summary for Saturday, May 12, 2012

    CodePlex Daily Summary for Saturday, May 12, 2012Popular ReleasesKanboxAPI: KanboxAPI beta: ????? Token Info List DownloadMedia Companion: Media Companion 3.502b: It has been a slow week, but this release addresses a couple of recent bugs: Movies Multi-part Movies - Existing .nfo files that differed in name from the first part, were missed and scraped again. Trailers - MC attempted to scrape info for existing trailers. TV Shows Show Scraping - shows available only in the non-default language would not show up in the main browser. The correct language can now be selected using the TV Show Selector for a single show. General Will no longer prompt for ...NewLife XCode ??????: XCode v8.5.2012.0508、XCoder v4.7.2012.0320: X????: 1,????For .Net 4.0?? XCoder????: 1,???????,????X????,?????? XCode????: 1,Insert/Update/Delete???????????????,???SQL???? 2,IEntityOperate?????? 3,????????IEntityTree 4,????????????????? 5,?????????? 6,??????????????dycom: v1.0: DYCom ????????:Silverlight, Windows phone 7.5.NETMF_for_STM32: Beta 1 Release: First public beta release.Google Book Downloader: Google Books Downloader Lite 1.0: Google Books Downloader Lite 1.0Python Tools for Visual Studio: 1.5 Alpha: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: • Supports Cpython, IronPython, Jython and Pypy • Python editor with advanced member, signature intellisense and refactoring • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0 RC1 Refresh 1: JayData 1.0.0 RC1 Refresh 1 JayData is a unified data access API to webSQL, indexedDB, OData, Facebook and YQL. Overview The major feature of this release is related to OData provider, FunctionImport is now generally supported. Now you can consume OData service operations (WebMethods). We extended the JaySvcUtil to generate the necessary metadata. We included many fixes, such as the Visual Studio 2010 IntelliSense optimalization (RC1 was optimized only to VS11). It's recommended to upgrade...AD Gallery: AD Gallery 1.2.7: NewsFixed a bug which caused the current thumbnail not to be highlighted Added a hook to take complete control over how descriptions are handled, take a look under Documentation for more info Added removeAllImages()51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.4.8: One Click Install from NuGet Data ChangesIncludes 42 new browser properties in both the Lite and Premium data sets. Premium Data includes many new devices including Nokia Lumia 900, BlackBerry 9220 and HTC One, the Samsung Galaxy Tab 2 range and Samsung Galaxy S III. Lite data includes devices released in January 2012. Changes to Version 2.1.4.81. The IsFirstTime method of the RedirectModule will now return the same value when called multiple times for the same request. This was prevent...Mugen Injection: Mugen Injection ver 2.2 (WinRT supported): Added NamedParameterAttribute, OptionalParameterAttribute. Added behaviors ICycleDependencyBehavior, IResolveUnregisteredTypeBehavior. Added WinRT support. Added support for NET 4.5. 
Added support for MVC 4.NShape - .Net Diagramming Framework for Industrial Applications: NShape 2.0.1: Changes in 2.0.1:Bugfixes: IRepository.Insert(Shape shape) and IRepository.Insert(IEnumerable<Shape> shapes) no longer insert shape connections. Several context menu items did display although the required permission was not granted Display did not reset the visible and active layers when changing the diagram NullReferenceException when pressing Del key and no shape was selected Changed Behavior: LayerCollection.Find("") no longer throws an exception. Improvements: Display does not rese...AcDown????? - Anime&Comic Downloader: AcDown????? v3.11.6: ?? ●AcDown??????????、??、??????,????1M,????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。??????AcPlay?????,??????、????????????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? ??????????????,??????????: ??"AcDo...sb0t: sb0t 4.64: New commands added: #scribble <url> #adminscribble on #adminscribble offDocument.Editor: 2012.4: Whats new for Document.Editor 2012.4: Improved Template support Improved Options Dialog Minor Bug Fix's, improvements and speed upsJson.NET: Json.NET 4.5 Release 5: New feature - Added ItemIsReference, ItemReferenceLoopHandling, ItemTypeNameHandling, ItemConverterType to JsonPropertyAttribute New feature - Added ItemRequired to JsonObjectAttribute New feature - Added Path to JsonWriterException Change - Improved deserializer call stack memory usage Change - Moved the PDB files out of the NuGet package into a symbols package Fix - Fixed infinite loop from an input error when reading an array and error handling is enabled Fix - Fixed base objec...BlackJumboDog: Ver5.6.1: 2012.05.07 Ver5.6.1 (1)????????????????(Ver5.6.0??)??? (2)HTTP?????SSL????????????(Ver5.6.0??)??? (3)HTTP?????2G??????????????????????????? (4)HTP???? ?????????ExtAspNet: ExtAspNet v3.1.5: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-05-06 v3.1.5 -????????:grid/grid_twogrid.aspx。 +?...SharpDevelop: SharpDevelop 4.2: Please see http://community.sharpdevelop.net/forums/t/15772.aspx for the release announcement.Desktop Google Reader: 1.4.4: Taskbar icon overlay (number of unread items) can now be switched off in preferences (Windows Vista / 7 only) Maximize button now can be toggled to be fullscreen (as befor) or only normal maximize (taskbar stays visible) in preferences List of feeds is now sorted by alphabetNew Projects3D Scene Editor: A generic 3d level editor built using XNA to speed up designing and building game levels.Attribute Based Cache using Unity Interception: Unity interception handler attribute for Caching which allows to apply boiler plate caching pattern to classes, and class members directly, without configuring them in the application configuration file. 
Configure your choice of Cache Provider (ObjectCache, Azure included) in the Unity IoC Container and apply the attribute to the method which you want to cache AutoCompleteBox for WinRT: None yetBAC2 Bachelor's Thesis Source Code: The source code of my Bachelor's ThesisBAMabase: It's a bamabase.BBSProject: BBSProjectBrain 2 - Game Engine: Brain 2 is a Game Engine that runs on multiple platforms.cobra-winldtp: Cobra - Windows version of Linux Desktop Testing Project (WinLDTP) - http://ldtp.freedesktop.org LDTP is a GUI test automation tool works on both Windows and Linux platform Windows GUI test automation tool written in C# and test scripts can be written in Python for now. Ruby API will be added soon.CS322: C# Programski jezikDoxBotPlugin: The DoxBotPlugin is a Plug-In for Ice-Chat9 that allows a user to use Icechat9 as both a normal IRC client and to switch on bot mode in certain channels to have DoxBotPlugin act on their behalf.dycom: DYCom??(DY Communication)???????????,?????????????????.????????????????. ??????????????????????,?????????????????。Eyes Protector: PL: Program pomagajacy w ochronie oczu przed przemeczeniem zwiazanym ze zbyt dluga praca przy komputerze. EN: ---kshell: ????Linux??????Lumia: This is Lumia project.MacroDoc: MacroDoc is an engine written in C# for composing documents from reusable pieces of structure and content.Mssql: This is Sql Server project.NETMF_for_STM32: This is the Codeplex project for NETMF for STM32 (F4 Edition). Ocular - a free, open source WYSIWYG editor for HTML: Ocular is a free C# WYSIWYG HTML editor, similar to Adobe Dreamweaver. We are always looking for contributors, so please help us!PeopleCredit: Prototype web service to maintain all employee credits.Project Server workflow: This workflow creates the project site for Basic project plan EPT when workflow task is approved.(This is the correction done over the Branching workflow provided with Project Server 2010 SDK). Workflow task is created using PSWApprovalTask Content type in Project Server Workflow Task List. Quick Reminder: PL: Program pomagajacy zapisac szybkie przypomnienia podczas pracy przy komputerze. EN: ---Random Projects: My random projects...Recommendation Engine Demo: How does the Amazon recommendation works? This is about visualizing the item to item collaborations filtering mechanism using a item-to-item matrix table. The item-to-item matrix, the vectors and the calculated data values are displayed. There are n different items and the item recommendation can display up to m items. There are implemented different item-to-item neighborhood functions. A simple max count of seen neighbor items, the Cosine Similarity and the Jaccard Index. A t...SaveSeaTurtle: Sea TurtleSharpGpx: SharpGpx implements an object model for reading and writing GPX (GPS eXchange Format).SlimDo: SlimDo is a scripting language coded in C#Spring: This is Spring.Net projectSQL Server Quick Tools Pack: SQL Server Quick Tools Pack for your sql server SuLD framework: Supported Link Discovery framework (SuLD) is a tool to discover links to multiple Linked Data datasets. 
Finding is supported by various features like synonym module or autocomplete.TQuery.Net: .Net??????WAAP - World of warcraft Auction house Analysis Project: A project in which we try to analyse prices and auctioneers on the various World of Warcraft Auction housesxhttp.net: The Xhttp.Net framework is a dotnet implementation of the Extended Hypertext Transfer Protocol (http://www.xhttp.org), with simple service integration, full arguments support including Base64 and DateTime, single and multiple asynchronous requests, data streaming, remote API creation from XHTTP service schemas, and a runtime plugin architecture.

    Read the article

  • SonicOS Enhanced 5.8.1.2 L2TP VPN Authentication Failed

    - by Dean A. Vassallo
    I have a SonicWall TZ 215 running SonicOS Enhanced 5.8.1.2-6o. I have configured the L2TP VPN using the default crypto suite ESP: 3DES/HMAC SHA1 (IKE). Proposals are as follows:
IKE (Phase 1) Proposal: DH Group: Group 2, Encryption: 3DES, Authentication: SHA1, Life Time (seconds): 28800
IPsec (Phase 2) Proposal: Protocol: ESP, Encryption: 3DES, Authentication: SHA1, Enable Perfect Forward Secrecy: DISABLED, Life Time (seconds): 28800
When attempting to connect via my Mac OS X client I get an authentication error. It appears to pass the pre-authentication but fails to complete. I am at a complete loss. I reconfigured from scratch multiple times and used simple usernames and passwords to verify this wasn't a miskeyed password issue. Here are the logs (note: the server IP has been removed for privacy):
7/1/13 8:19:05.174 PM pppd[1268]: setup_security_context server port: 0x1503 7/1/13 8:19:05.190 PM pppd[1268]: publish_entry SCDSet() failed: Success! 7/1/13 8:19:05.191 PM pppd[1268]: publish_entry SCDSet() failed: Success! 7/1/13 8:19:05.191 PM pppd[1268]: pppd 2.4.2 (Apple version 727.1.1) started by dean, uid 501 7/1/13 8:19:05.192 PM pppd[1268]: L2TP connecting to server ‘0.0.0.0’ (0.0.0.0)... 7/1/13 8:19:05.193 PM pppd[1268]: IPSec connection started 7/1/13 8:19:05.208 PM racoon[1269]: accepted connection on vpn control socket. 7/1/13 8:19:05.209 PM racoon[1269]: Connecting. 7/1/13 8:19:05.209 PM racoon[1269]: IPSec Phase 1 started (Initiated by me). 7/1/13 8:19:05.209 PM racoon[1269]: IKE Packet: transmit success. (Initiator, Main-Mode message 1). 7/1/13 8:19:05.209 PM racoon[1269]: >>>>> phase change status = Phase 1 started by us 7/1/13 8:19:05.231 PM racoon[1269]: >>>>> phase change status = Phase 1 started by peer 7/1/13 8:19:05.231 PM racoon[1269]: IKE Packet: receive success. (Initiator, Main-Mode message 2). 7/1/13 8:19:05.234 PM racoon[1269]: IKE Packet: transmit success. (Initiator, Main-Mode message 3). 7/1/13 8:19:05.293 PM racoon[1269]: IKE Packet: receive success. (Initiator, Main-Mode message 4). 7/1/13 8:19:05.295 PM racoon[1269]: IKE Packet: transmit success. (Initiator, Main-Mode message 5). 7/1/13 8:19:05.315 PM racoon[1269]: IKEv1 Phase 1 AUTH: success. (Initiator, Main-Mode Message 6). 7/1/13 8:19:05.315 PM racoon[1269]: IKE Packet: receive success. (Initiator, Main-Mode message 6). 7/1/13 8:19:05.315 PM racoon[1269]: IKEv1 Phase 1 Initiator: success. (Initiator, Main-Mode). 7/1/13 8:19:05.315 PM racoon[1269]: IPSec Phase 1 established (Initiated by me). 7/1/13 8:19:06.307 PM racoon[1269]: IPSec Phase 2 started (Initiated by me). 7/1/13 8:19:06.307 PM racoon[1269]: >>>>> phase change status = Phase 2 started 7/1/13 8:19:06.308 PM racoon[1269]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). 7/1/13 8:19:06.332 PM racoon[1269]: attribute has been modified. 7/1/13 8:19:06.332 PM racoon[1269]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). 7/1/13 8:19:06.332 PM racoon[1269]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). 7/1/13 8:19:06.333 PM racoon[1269]: IKEv1 Phase 2 Initiator: success. (Initiator, Quick-Mode). 7/1/13 8:19:06.333 PM racoon[1269]: IPSec Phase 2 established (Initiated by me). 7/1/13 8:19:06.333 PM racoon[1269]: >>>>> phase change status = Phase 2 established 7/1/13 8:19:06.333 PM pppd[1268]: IPSec connection established 7/1/13 8:19:07.145 PM pppd[1268]: L2TP connection established.
7/1/13 8:19:07.000 PM kernel[0]: ppp0: is now delegating en0 (type 0x6, family 2, sub-family 3) 7/1/13 8:19:07.146 PM pppd[1268]: Connect: ppp0 <--> socket[34:18] 7/1/13 8:19:08.709 PM pppd[1268]: MS-CHAPv2 mutual authentication failed. 7/1/13 8:19:08.710 PM pppd[1268]: Connection terminated. 7/1/13 8:19:08.710 PM pppd[1268]: L2TP disconnecting... 7/1/13 8:19:08.711 PM pppd[1268]: L2TP disconnected 7/1/13 8:19:08.711 PM racoon[1269]: IPSec disconnecting from server 0.0.0.0 7/1/13 8:19:08.711 PM racoon[1269]: IKE Packet: transmit success. (Information message). 7/1/13 8:19:08.712 PM racoon[1269]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA). 7/1/13 8:19:08.712 PM racoon[1269]: IKE Packet: transmit success. (Information message). 7/1/13 8:19:08.712 PM racoon[1269]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA). 7/1/13 8:19:08.713 PM racoon[1269]: glob found no matches for path "/var/run/racoon/*.conf" 7/1/13 8:19:08.714 PM racoon[1269]: pfkey DELETE failed: No such file or directory

    Read the article

  • CodePlex Daily Summary for Sunday, November 11, 2012

    CodePlex Daily Summary for Sunday, November 11, 2012Popular ReleasesZXMAK2: Version 2.7.2.0: show extended rzx error info fix reset lag for PROFI ULA 5.xx fix reset behavior fix PROFI ULA timings (thanks to solegstar) fix #FF port for PROFI ULA add ATM710 memory module add new predefined machine configs: ATM Turbo 2, PROFI 3.XX???????: Monitor 2012-11-11: This is the first releaseVidCoder: 1.4.5 Beta: Removed the old Advanced user interface and moved x264 preset/profile/tune there instead. The functionality is still available through editing the options string. Added ability to specify the H.264 level. Added ability to choose VidCoder's interface language. If you are interested in translating, we can get VidCoder in your language! Updated WPF text rendering to use the better Display mode. Updated HandBrake core to SVN 5045. Removed logic that forced the .m4v extension in certain ...ImageGlass: Version 1.5: http://i1214.photobucket.com/albums/cc483/phapsuxeko/ImageGlass/1.png v1.5.4401.3015 Thumbnail bar: Increase loading speed Thumbnail image with ratio Support personal customization: mouse up, mouse down, mouse hover, selected item... Scroll to show all items Image viewer Zoom by scroll, or selected rectangle Speed up loading Zoom to cursor point New background design and customization and others... v1.5.4430.483 Thumbnail bar: Auto move scroll bar to selected image Show / Hi...Building Windows 8 Apps with C# and XAML: Full Source Chapters 1 - 10 for Windows 8 Fix 002: This is the full source from all chapters of the book, compiled and tested on Windows 8 RTM. Includes: A fix for the Netflix example from Chapter 6 that was missing a service reference A fix for the ImageHelper issue (images were not being saved) - this was due to the buffer being inadequate and required streaming the writeable bitmap to a buffer first before encoding and savingmyCollections: Version 2.3.2.0: New in this version : Added TheGamesDB.net API for Games and NDS Added Support for Windows Media Center Added Support for myMovies Added Support for XBMC Added Support for Dune HD Added Support for Mede8er Added Support for WD HDTV Added Fast search options Added order by Artist/Album for music You can now create covers and background for games You can now update your ID3 tag with the info of myCollections Fixed several provider Performance improvement New Splash ...Draw: Draw 1.0: Drawing PadPlayer Framework by Microsoft: Player Framework for Windows 8 (v1.0): IMPORTANT: List of breaking changes from preview 7 Ability to move control panel or individual elements outside media player. more info... New Entertainment app theme for out of the box support for Windows 8 Entertainment app guidelines. more info... VSIX reference names shortened. Allows seeing plugin name from "Add Reference" dialog without resizing. FreeWheel SmartXML now supports new "Standard" event callback type. Other minor misc fixes and improvements ADDITIONAL DOWNLOADSSmo...WebSearch.Net: WebSearch.Net 3.1: WebSearch.Net is an open-source research platform that provides uniform data source access, data modeling, feature calculation, data mining, etc. It facilitates the experiments of web search researchers due to its high flexibility and extensibility. The platform can be used or extended by any language compatible for .Net 2 framework, from C# (recommended), VB.Net to C++ and Java. 
Thanks to the large coverage of knowledge in web search research, it is necessary to model the techniques and main...Umbraco CMS: Umbraco 4.10.0: NugetNuGet BlogRead the release blog post for 4.10.0. Whats newMVC support New request pipeline Many, many bugfixes (see the issue tracker for a complete list) Read the documentation for the MVC bits. Breaking changesWe have done all we can not to break backwards compatibility, but we had to do some minor breaking changes: Removed graphicHeadlineFormat config setting from umbracoSettings.config (an old relic from the 3.x days) U4-690 DynamicNode ChildrenAsList was fixed, altering it'...SQL Server Partitioned Table Framework: Partitioned Table Framework Release 1.0: SQL Server 2012 ReleaseSharePoint Manager 2013: SharePoint Manager 2013 Release ver 1.0.12.1106: SharePoint Manager 2013 Release (ver: 1.0.12.1106) is now ready for SharePoint 2013. The new version has an expanded view of the SharePoint object model and has been tested on SharePoint 2013 RTM. As a bonus, the new version is also available for SharePoint 2010 as a separate download.D3D9Client: D3D9Client R7: New release for Orbiter 2010-P1 - Added horizon/sun angle for night-lights into the configuration file (default 10deg) - Some runway lights related bugs are fixed - Added more configuration options for runway lightsFiskalizacija za developere: FiskalizacijaDev 1.2: Verzija 1.2. je, prije svega, odgovor na novu verziju Tehnicke specifikacije (v1.1.) koja je objavljena prije nekoliko dana. Pored novosti vezanih uz (sitne) izmjene u spomenutoj novoj verziji Tehnicke dokumentacije, projekt smo prošili sa nekim dodatnim feature-ima od kojih je vecina proizašla iz vaših prijedloga - hvala :) Novosti u v1.2. su: - Neusuglašenost zahtjeva (http://fiskalizacija.codeplex.com/workitem/645) - Sample projekt - iznosi se množe sa 100 (http://fiskalizacija.codeplex.c...MFCMAPI: October 2012 Release: Build: 15.0.0.1036 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeDictationTool: DictationCool-WPF: • Open a media file to start a new dication. • Open a dct file to continue a dictation. • Compare your dictation with original text if exists. • Save your dictation to dct file, and restore it to continue later. • Save the compared result to html file.MCEBuddy 2.x: MCEBuddy 2.3.7: Changelog for 2.3.7 (32bit and 64bit) 1. Improved performance of MP4 Fast and M4V Fast Profiles (no deinterlacing, removed --decomb) 2. Improved priority handling 3. Added support for Pausing and Resume conversions 4. Added support for fallback to source directory if network destination directory is unavailable 5. MCEBuddy now installs ShowAnalyzer during installation 6. 
Added support for long description atom in iTunesDyanamic Reports (RDLC) - SharePoint 2010 Visual WebPart: Initial Release: This is a Initial Release.HTML Renderer: HTML Renderer 1.0.0.0 (3): Major performance improvement (http://theartofdev.wordpress.com/2012/10/25/how-i-optimized-html-renderer-and-fell-in-love-with-vs-profiler/) Minor fixes raised in issue tracker and discussions.Window Manager: Window Manager 1.0: First releaseNew Projectsarteytex: este es una prueba blockworld: An implementation of a goal stack planner.Customer Note: customer note is windows store applicationDraw: ?????????????:??????、CAD??、????。Football Management: Football Management System is web management system for football (soccer) leagues, teams and players. Hijri Converter API: This project is aimed to create a simple RESTful API using VB and ASP.NET to do Hijri-to-Gregorian and Gregorian-to-Hijri conversion.httpclient?????????: httpclient?????????(1)??????????(2)?????????(3)??2012-11-06??,???????。 Imagine Cup 2013: Develop project to Imagine Cup 2013MyAppReji: MyAppN2F Request: The N2F Request object is used to handle interactions between N2F and the global $_REQUEST variable, sanitizing any results which are returned.Orchard Metro Theme: Orchard Metro Theme is a clean and flexible multi-zone theme.Poker Clock And Goodies: poker w8ProjectASPReviewer: Review website for notebooks, tablets and smartphones.Prototype: Its about making an proto type for the final project.Prototype - 7COM0207: 7COM0207 web scripting module, Assignment 2QuickToAD: QuickToAD is a foundational development project for the purpose of jump-starting data-driven application projects.Release Manager: Release Manager is a project to design and develop Windows based Release Management Software.ResW File Code Generator: A Visual Studio 2012 Custom Tool for generating a strongly typed helper class for accessing localized resources from a .ResW file.SEO Tools: This is a website containing some commonly used SEO tools. I have only added a blog ping utility at this time but there is more to come. Thales communicator: A C# library that helps communicate with Thales HSMTrivial: A trivia framework: Trivial is a C# framework that helps you creating custom trivia-like applications.

    Read the article

  • Announcing Windows Azure Mobile Services

    - by ScottGu
    I’m excited to announce a new capability we are adding to Windows Azure today: Windows Azure Mobile Services Windows Azure Mobile Services makes it incredibly easy to connect a scalable cloud backend to your client and mobile applications.  It allows you to easily store structured data in the cloud that can span both devices and users, integrate it with user authentication, as well as send out updates to clients via push notifications. Today’s release enables you to add these capabilities to any Windows 8 app in literally minutes, and provides a super productive way for you to quickly build out your app ideas.  We’ll also be adding support to enable these same scenarios for Windows Phone, iOS, and Android devices soon. Read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services.  Or watch this video of me showing how to do it step by step. Getting Started If you don’t already have a Windows Azure account, you can sign up for a no-obligation Free Trial.  Once you are signed-up, click the “preview features” section under the “account” tab of the www.windowsazure.com website and enable your account to support the “Mobile Services” preview.   Instructions on how to enable this can be found here. Once you have the mobile services preview enabled, log into the Windows Azure Portal, click the “New” button and choose the new “Mobile Services” icon to create your first mobile backend.  Once created, you’ll see a quick-start page like below with instructions on how to connect your mobile service to an existing Windows 8 client app you have already started working on, or how to create and connect a brand-new Windows 8 client app with it: Read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app  that stores data in Windows Azure. Storing Data in the Cloud Storing data in the cloud with Windows Azure Mobile Services is incredibly easy.  When you create a Windows Azure Mobile Service, we automatically associate it with a SQL Database inside Windows Azure.  The Windows Azure Mobile Service backend then provides built-in support for enabling remote apps to securely store and retrieve data from it (using secure REST end-points utilizing a JSON-based ODATA format) – without you having to write or deploy any custom server code.  Built-in management support is provided within the Windows Azure portal for creating new tables, browsing data, setting indexes, and controlling access permissions. This makes it incredibly easy to connect client applications to the cloud, and enables client developers who don’t have a server-code background to be productive from the very beginning.  They can instead focus on building the client app experience, and leverage Windows Azure Mobile Services to provide the cloud backend services they require.  Below is an example of client-side Windows 8 C#/XAML code that could be used to query data from a Windows Azure Mobile Service.  Client-side C# developers can write queries like this using LINQ and strongly typed POCO objects, which are then translated into HTTP REST queries that run against a Windows Azure Mobile Service.   
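A minimal sketch of what such a client-side query might look like, assuming the TodoItem class, service URL, application key and ListView name are placeholders rather than anything from the post, and assuming the Mobile Services client SDK exposes GetTable<T>, Where and ToListAsync as in the getting-started samples (treat the exact member names as assumptions if your SDK version differs):

using System.Linq;
using Microsoft.WindowsAzure.MobileServices;

public class TodoItem
{
    public int Id { get; set; }
    public string Text { get; set; }
    public bool Complete { get; set; }
}

// Inside a Windows 8 page class:
private MobileServiceClient client =
    new MobileServiceClient("https://your-service.azure-mobile.net/", "YOUR-APPLICATION-KEY");

private async void RefreshTodoItems()
{
    // The LINQ predicate is translated by the client SDK into an OData query
    // against the service's secure REST endpoint; no server-side code is required.
    var items = await client.GetTable<TodoItem>()
                            .Where(item => item.Complete == false)
                            .ToListAsync();

    TodoList.ItemsSource = items;   // TodoList is an assumed XAML ListView on the page
}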
Developers don’t have to write or deploy any custom server-side code in order to enable client-side code below to execute and asynchronously populate their client UI: Because Mobile Services is part of Windows Azure, developers can later choose to augment or extend their initial solution and add custom server functionality and more advanced logic if they want.  This provides maximum flexibility, and enables developers to grow and extend their solutions to meet any needs. User Authentication and Push Notifications Windows Azure Mobile Services also make it incredibly easy to integrate user authentication/authorization and push notifications within your applications.  You can use these capabilities to enable authentication and fine grain access control permissions to the data you store in the cloud, as well as to trigger push notifications to users/devices when the data changes.  Windows Azure Mobile Services supports the concept of “server scripts” (small chunks of server-side script that executes in response to actions) that make it really easy to enable these scenarios. Below are some tutorials that walkthrough common authentication/authorization/push scenarios you can do with Windows Azure Mobile Services and Windows 8 apps: Enabling User Authentication Authorizing Users  Get Started with Push Notifications Push Notifications to multiple Users Manage and Monitor your Mobile Service Just like with every other service in Windows Azure, you can monitor usage and metrics of your mobile service backend using the “Dashboard” tab within the Windows Azure Portal. The dashboard tab provides a built-in monitoring view of the API calls, Bandwidth, and server CPU cycles of your Windows Azure Mobile Service.   You can also use the “Logs” tab within the portal to review error messages.  This makes it easy to monitor and track how your application is doing. Scale Up as Your Business Grows Windows Azure Mobile Services now allows every Windows Azure customer to create and run up to 10 Mobile Services in a free, shared/multi-tenant hosting environment (where your mobile backend will be one of multiple apps running on a shared set of server resources).  This provides an easy way to get started on projects at no cost beyond the database you connect your Windows Azure Mobile Service to (note: each Windows Azure free trial account also includes a 1GB SQL Database that you can use with any number of apps or Windows Azure Mobile Services). If your client application becomes popular, you can click the “Scale” tab of your Mobile Service and switch from “Shared” to “Reserved” mode.  Doing so allows you to isolate your apps so that you are the only customer within a virtual machine.  This allows you to elastically scale the amount of resources your apps use – allowing you to scale-up (or scale-down) your capacity as your traffic grows: With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need.  This enables a super flexible model that is ideal for new mobile app scenarios, as well as startups who are just getting going.  Summary I’ve only scratched the surface of what you can do with Windows Azure Mobile Services – there are a lot more features to explore.  With Windows Azure Mobile Services you’ll be able to build mobile app experiences faster than ever, and enable even better user experiences – by connecting your client apps to the cloud. 
Visit the Windows Azure Mobile Services development center to learn more, and build your first Windows 8 app connected with Windows Azure today.  And read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
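Following up on the push notification tutorials listed earlier in the post, here is a hedged sketch of the client-side half: acquiring a Windows 8 push channel and handing its URI to the mobile service. The Channel class and table name are illustrative assumptions, and 'client' stands for a MobileServiceClient configured as in the earlier sketch.

using Windows.Networking.PushNotifications;

public class Channel
{
    public int Id { get; set; }
    public string Uri { get; set; }
}

private async void AcquirePushChannel()
{
    // Ask Windows 8 for this app's push notification channel.
    PushNotificationChannel channel =
        await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();

    // Store the channel URI in a table so a server script can read it later
    // when it needs to send a notification to this device.
    await client.GetTable<Channel>().InsertAsync(new Channel { Uri = channel.Uri });
}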

    Read the article

  • Unity – Part 5: Injecting Values

    - by Ricardo Peres
    Introduction This is the fifth post on Unity. You can find the introductory post here, the second post, on dependency injection here, a third one on Aspect Oriented Programming (AOP) here and the latest so far, on writing custom extensions, here. This time we will talk about injecting simple values. An Inversion of Control (IoC) / Dependency Injector (DI) container like Unity can be used for things other than injecting complex class dependencies. It can also be used for setting property values or method/constructor parameters whenever a class is built. The main difference is that these values do not have a lifetime manager associated with them and do not come from the regular IoC registration store. Unlike, for instance, MEF, Unity won’t let you register as a dependency a string or an integer, so you have to take a different approach, which I will describe in this post. Scenario Let’s imagine we have a base interface that describes a logger – the same as in previous examples: 1: public interface ILogger 2: { 3: void Log(String message); 4: } And a concrete implementation that writes to a file: 1: public class FileLogger : ILogger 2: { 3: public String Filename 4: { 5: get; 6: set; 7: } 8:  9: #region ILogger Members 10:  11: public void Log(String message) 12: { 13: using (Stream file = File.OpenWrite(this.Filename)) 14: { 15: Byte[] data = Encoding.Default.GetBytes(message); 16: 17: file.Write(data, 0, data.Length); 18: } 19: } 20:  21: #endregion 22: } And let’s say we want the Filename property to come from the application settings (appSettings) section on the Web/App.config file. As usual with Unity, there is an extensibility point that allows us to automatically do this, both with code configuration or statically on the configuration file. 
Extending Injection We start by implementing a class that will retrieve a value from the appSettings by inheriting from ValueElement: 1: sealed class AppSettingsParameterValueElement : ValueElement, IDependencyResolverPolicy 2: { 3: #region Private methods 4: private Object CreateInstance(Type parameterType) 5: { 6: Object configurationValue = ConfigurationManager.AppSettings[this.AppSettingsKey]; 7:  8: if (parameterType != typeof(String)) 9: { 10: TypeConverter typeConverter = this.GetTypeConverter(parameterType); 11:  12: configurationValue = typeConverter.ConvertFromInvariantString(configurationValue as String); 13: } 14:  15: return (configurationValue); 16: } 17: #endregion 18:  19: #region Private methods 20: private TypeConverter GetTypeConverter(Type parameterType) 21: { 22: if (String.IsNullOrEmpty(this.TypeConverterTypeName) == false) 23: { 24: return (Activator.CreateInstance(TypeResolver.ResolveType(this.TypeConverterTypeName)) as TypeConverter); 25: } 26: else 27: { 28: return (TypeDescriptor.GetConverter(parameterType)); 29: } 30: } 31: #endregion 32:  33: #region Public override methods 34: public override InjectionParameterValue GetInjectionParameterValue(IUnityContainer container, Type parameterType) 35: { 36: Object value = this.CreateInstance(parameterType); 37: return (new InjectionParameter(parameterType, value)); 38: } 39: #endregion 40:  41: #region IDependencyResolverPolicy Members 42:  43: public Object Resolve(IBuilderContext context) 44: { 45: Type parameterType = null; 46:  47: if (context.CurrentOperation is ResolvingPropertyValueOperation) 48: { 49: ResolvingPropertyValueOperation op = (context.CurrentOperation as ResolvingPropertyValueOperation); 50: PropertyInfo prop = op.TypeBeingConstructed.GetProperty(op.PropertyName); 51: parameterType = prop.PropertyType; 52: } 53: else if (context.CurrentOperation is ConstructorArgumentResolveOperation) 54: { 55: ConstructorArgumentResolveOperation op = (context.CurrentOperation as ConstructorArgumentResolveOperation); 56: String args = op.ConstructorSignature.Split('(')[1].Split(')')[0]; 57: Type[] types = args.Split(',').Select(a => Type.GetType(a.Split(' ')[0])).ToArray(); 58: ConstructorInfo ctor = op.TypeBeingConstructed.GetConstructor(types); 59: parameterType = ctor.GetParameters().Where(p => p.Name == op.ParameterName).Single().ParameterType; 60: } 61: else if (context.CurrentOperation is MethodArgumentResolveOperation) 62: { 63: MethodArgumentResolveOperation op = (context.CurrentOperation as MethodArgumentResolveOperation); 64: String methodName = op.MethodSignature.Split('(')[0].Split(' ')[1]; 65: String args = op.MethodSignature.Split('(')[1].Split(')')[0]; 66: Type[] types = args.Split(',').Select(a => Type.GetType(a.Split(' ')[0])).ToArray(); 67: MethodInfo method = op.TypeBeingConstructed.GetMethod(methodName, types); 68: parameterType = method.GetParameters().Where(p => p.Name == op.ParameterName).Single().ParameterType; 69: } 70:  71: return (this.CreateInstance(parameterType)); 72: } 73:  74: #endregion 75:  76: #region Public properties 77: [ConfigurationProperty("appSettingsKey", IsRequired = true)] 78: public String AppSettingsKey 79: { 80: get 81: { 82: return ((String)base["appSettingsKey"]); 83: } 84:  85: set 86: { 87: base["appSettingsKey"] = value; 88: } 89: } 90: #endregion 91: } As you can see from the implementation of the IDependencyResolverPolicy.Resolve method, this will work in three different scenarios: When it is applied to a property; When it is applied to a constructor parameter; 
When it is applied to an initialization method. The implementation will even try to convert the value to its declared destination, for example, if the destination property is an Int32, it will try to convert the appSettings stored string to an Int32. Injection By Configuration If we want to configure injection by configuration, we need to implement a custom section extension by inheriting from SectionExtension, and registering our custom element with the name “appSettings”: 1: sealed class AppSettingsParameterInjectionElementExtension : SectionExtension 2: { 3: public override void AddExtensions(SectionExtensionContext context) 4: { 5: context.AddElement<AppSettingsParameterValueElement>("appSettings"); 6: } 7: } And on the configuration file, for setting a property, we use it like this: 1: <appSettings> 2: <add key="LoggerFilename" value="Log.txt"/> 3: </appSettings> 4: <unity xmlns="http://schemas.microsoft.com/practices/2010/unity"> 5: <container> 6: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.ConsoleLogger, MyAssembly"/> 7: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.FileLogger, MyAssembly" name="File"> 8: <lifetime type="singleton"/> 9: <property name="Filename"> 10: <appSettings appSettingsKey="LoggerFilename"/> 11: </property> 12: </register> 13: </container> 14: </unity> If we would like to inject the value as a constructor parameter, it would be instead: 1: <unity xmlns="http://schemas.microsoft.com/practices/2010/unity"> 2: <sectionExtension type="MyNamespace.AppSettingsParameterInjectionElementExtension, MyAssembly" /> 3: <container> 4: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.ConsoleLogger, MyAssembly"/> 5: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.FileLogger, MyAssembly" name="File"> 6: <lifetime type="singleton"/> 7: <constructor> 8: <param name="filename" type="System.String"> 9: <appSettings appSettingsKey="LoggerFilename"/> 10: </param> 11: </constructor> 12: </register> 13: </container> 14: </unity> Notice the appSettings section, where we add a LoggerFilename entry, which is the same as the one referred by our AppSettingsParameterInjectionElementExtension extension. For more advanced behavior, you can add a TypeConverterName attribute to the appSettings declaration, where you can pass an assembly qualified name of a class that inherits from TypeConverter. This class will be responsible for converting the appSettings value to a destination type. 
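For illustration, a minimal sketch of the kind of TypeConverter such a declaration could point at; the class name and the comma-separated format are assumptions, not something from the post:

using System;
using System.ComponentModel;
using System.Globalization;

public class CommaSeparatedArrayConverter : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return ((sourceType == typeof(String)) || base.CanConvertFrom(context, sourceType));
    }

    public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
    {
        String text = value as String;

        if (text != null)
        {
            // "a,b,c" becomes new[] { "a", "b", "c" }, which the injector can then
            // assign to a String[] property or parameter.
            return (text.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries));
        }

        return (base.ConvertFrom(context, culture, value));
    }
}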
Injection By Attribute If we would like to use attributes instead, we need to create a custom attribute by inheriting from DependencyResolutionAttribute: 1: [Serializable] 2: [AttributeUsage(AttributeTargets.Parameter | AttributeTargets.Property, AllowMultiple = false, Inherited = true)] 3: public sealed class AppSettingsDependencyResolutionAttribute : DependencyResolutionAttribute 4: { 5: public AppSettingsDependencyResolutionAttribute(String appSettingsKey) 6: { 7: this.AppSettingsKey = appSettingsKey; 8: } 9:  10: public String TypeConverterTypeName 11: { 12: get; 13: set; 14: } 15:  16: public String AppSettingsKey 17: { 18: get; 19: private set; 20: } 21:  22: public override IDependencyResolverPolicy CreateResolver(Type typeToResolve) 23: { 24: return (new AppSettingsParameterValueElement() { AppSettingsKey = this.AppSettingsKey, TypeConverterTypeName = this.TypeConverterTypeName }); 25: } 26: } As for file configuration, there is a mandatory property for setting the appSettings key and an optional TypeConverterName  for setting the name of a TypeConverter. Both the custom attribute and the custom section return an instance of the injector AppSettingsParameterValueElement that we implemented in the first place. Now, the attribute needs to be placed before the injected class’ Filename property: 1: public class FileLogger : ILogger 2: { 3: [AppSettingsDependencyResolution("LoggerFilename")] 4: public String Filename 5: { 6: get; 7: set; 8: } 9:  10: #region ILogger Members 11:  12: public void Log(String message) 13: { 14: using (Stream file = File.OpenWrite(this.Filename)) 15: { 16: Byte[] data = Encoding.Default.GetBytes(message); 17: 18: file.Write(data, 0, data.Length); 19: } 20: } 21:  22: #endregion 23: } Or, if we wanted to use constructor injection: 1: public class FileLogger : ILogger 2: { 3: public String Filename 4: { 5: get; 6: set; 7: } 8:  9: public FileLogger([AppSettingsDependencyResolution("LoggerFilename")] String filename) 10: { 11: this.Filename = filename; 12: } 13:  14: #region ILogger Members 15:  16: public void Log(String message) 17: { 18: using (Stream file = File.OpenWrite(this.Filename)) 19: { 20: Byte[] data = Encoding.Default.GetBytes(message); 21: 22: file.Write(data, 0, data.Length); 23: } 24: } 25:  26: #endregion 27: } Usage Just do: 1: ILogger logger = ServiceLocator.Current.GetInstance<ILogger>("File"); And off you go! A simple way do avoid hardcoded values in component registrations. Of course, this same concept can be applied to registry keys, environment values, XML attributes, etc, etc, just change the implementation of the AppSettingsParameterValueElement class. Next stop: custom lifetime managers.
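If you prefer registering by code instead of by configuration or attributes, a similar effect can be sketched with only Unity's stock API, reading the setting yourself at registration time; the container variable and the "LoggerFilename" key below are illustrative, mirroring the named "File" registration and singleton lifetime used in the configuration example above:

using System.Configuration;
using Microsoft.Practices.Unity;

IUnityContainer container = new UnityContainer();

// Read the appSettings value once and feed it into the Filename property
// of the named FileLogger registration.
container.RegisterType<ILogger, FileLogger>(
    "File",
    new ContainerControlledLifetimeManager(),
    new InjectionProperty("Filename", ConfigurationManager.AppSettings["LoggerFilename"]));

ILogger logger = container.Resolve<ILogger>("File");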

    Read the article

  • Does HTML 5 Make “Rich vs. Reach” a False Choice?

    - by andrewbrust
    The competition between the Web and proprietary rich platforms, including Windows, Mac OS, iPhone/iPad, Adobe’s Flash/AIR and Microsoft’s Silverlight, is not new. But with the emergence of HTML 5 and imminent support for it in the next release of the major Web browsers, the battle is heating up. And with the announcements made Wednesday at Google's I/O conference, it's getting kicked up yet another notch. The impact of this platform battle on companies in the media and advertising world, and the developers who serve them, is significant. The most prominent question is whether video and rich media online will shift towards pure HTML and away from plug-ins like Flash and Silverlight. In fact, certain features in HTML 5 make it suitable for development for line of business applications as well, further threatening those plug-in technologies. So what's the deal? Is this real or hype? To answer that question, I've done my own research into HTML 5's features and talked to several media-focused, New York area developers to get their opinions. I present my findings to you in this post. Before bearing down into HTML 5 specifics and practitioners’ quotes, let's set the context. To understand what HTML 5 can do, take a look at this video of Sports Illustrated’s HTML 5 prototype. This should start to get you bought into the idea that HTML 5 could be a game-changer. Next, if you happen to have installed the beta version of Google's Chrome 5 browser, take a look at the page linked to below, and in that page, click on any of the game thumbnails to see what's possible, without a plug-in, in this brave new world. (Note, although the instructions for each game tell you to press the A key to start, press the Z key instead.). Here's the link: http://www.kesiev.com/akihabara As an adjunct to what's enabled by HTML 5, consider the various transforms that are part of CSS 3. If you're running Safari as your browser, the following link will showcase this live; if not, you'll see a bitmap that will give you an idea of what's possible: http://webkit.org/blog/386/3d-transforms Are you starting to get the picture (literally)? What has up until now required browser plug-ins and other patches to HTML, most typically Flash, will soon be renderable, natively, in all major browsers. Moreover, it's looking likely that developers will be able to deliver such content and experiences in these browsers using one base of markup and script code (using straight JavaScript and/or jQuery), without resorting to browser-specific code and workarounds. If you're skeptical of this, I wouldn't blame you, especially with respect to Microsoft's Internet Explorer. However, i can tell you with confidence that even Microsoft is dedicated to full-on HTML 5 support in version 9 of that browser, which is currently under development. So what’s new in HTML 5, specifically, that makes sites like this possible?  The specification documents go into deep detail, and there’s no sense in rehashing them here, but a summary is probably in order.   Here is a non-authoritative, but useful, list of the major new feature areas in HTML 5: 2D drawing capabilities and 3D transforms. 2D drawing instructions can be embedded statically into a Web page; application interactivity and animation can be achieved through script.  As mentioned above, 3D transforms are technically part of version 3 of the CSS (Cascading Style Sheets) spec, rather than HTML 5, but they can nonetheless be thought of as part of the bundle.  
They allow for rendering of 3D images and animations that, together with 2D drawing, make HTML-based games much more feasible than they are presently, as the links above demonstrate. Embedded audio and video. A media player can appear directly in a rendered Web page, using HTML markup and no plug-ins. Alternately, player controls can be hidden and the content can play automatically. Major enhancements to form-based input. This includes such things as specification of required fields, embedding of text “hints” into a control, limiting valid input on a field to dates, email addresses or a list of values.  There’s more to this, but the gist is that line-of-business applications, with complicated input and data validation, are supported directly Offline caching, local storage and client-side SQL database. These facilities allow Web applications to function more like native apps, even if no internet connection is available. User-defined data. Data (or metadata – data about data) can easily be embedded statically and/or retrieved and updated with Javascript code. This avoids having to embed that data in a separate file, or within script code. Taken together, these features position HTML to compete with, and perhaps overtake, Adobe’s Flash/AIR (and Microsoft’s Silverlight) as a viable Web platform for media, RIAs (rich internet applications – apps that function more like desktop software than Web sites) and interactive Web content, including games. What do players in the media world think about this?  From the embedded video above, we know what Sports Illustrated (and, therefore, Time Warner) think.  Hulu, the major Internet site for broadcast TV content, is on record as saying HTML5 video does not pass muster with them, at least not yet.  YouTube, on the other hand, already has an experimental HTML 5-based version of their site.  TechCrunch has reported that NetFlix is flirting with HTML 5 too, especially as it pertains to embedded browsers in TV-based devices.  And the New York Times’ Web site now embeds some video clips without resorting to Flash.  They have to – otherwise iPhone, iPod Touch and iPad users couldn’t see them in the Mobile Safari browser. What do media-focused developers think about all this?  I talked to several to get their opinions. Michael Pinto is CEO and Founder of Very Memorable Design whose primary focus has been to help marketing directors get traction online.  The firm’s client roster includes the likes Time, Inc., Scholastic and PBS.  Pinto predicts that “More and more microsites that were done entirely in Flash will be done more and more using jQuery. I can also see slideshows and video now being done without Flash. However if you needed to create a game or highly interactive activity Flash would still be the way to go for the web.” A dissenting view comes from Jesse Erlbaum, CEO of The Erlbaum Group, LLC, which serves numerous clients in the magazine publishing sector.  When I asked Erlbaum whether he thought HTML 5 and jQuery/JavaScript would steal significant market share from Flash, he responded “Not at all!  In particular, not for media and advertising customers!  These sectors are not generally in the business of making highly functional applications, which is the one place where HTML5/jQuery/etc really shines.” Ironically, Pinto’s firm is a heavy user of Flash for its projects and Erlbaum’s develops atop the “LAMP” (Linux, Apache, MySQL and PHP/Perl) stack.  For whatever reason, each firm seems to see the other’s toolset as a more viable choice.  
But both agree that the developer tool story around HTML 5 is deficient.  Pinto explains “What’s lost with [HTML 5 and Javascript] techniques is that there isn’t a single widely favored easy-to-use tool of choice for authoring. So with Flash you can get up and running right away and not worry about what is different from one browser to the next.”  Erlbaum agrees, saying: “HTML5/Javascript lacks a sophisticated integrated development environment (IDE) which is an essential part of Flash.  If what someone is trying to make is primarily animation, it's a waste of time…to do this in Javascript.  It can be done much more easily in Flash, and with greater cross-browser compatibility and consistency due to the ubiquity of Flash.” Adobe (maker of Flash since its 2005 acquisition of Macromedia) likely agrees.  And for better or worse, they’ve decided to address this shortcoming of HTML 5, even at risk of diminishing their Flash platform. Yesterday Adobe announced that their hugely popular Dreamweaver Web design authoring tool would directly support HTML 5 and CSS 3 development.  In fact, the Adobe Dreamweaver CS5 HTML5 Pack is downloadable now from Adobe Labs. Maybe Adobe is bowing to pressure from ardent Web professionals like Scott Kellum, Lead Designer at Channel V Media, a digital and offline branding firm serving the media and marketing sectors, among others.  Kellum told me that HTML 5 “…will definitely move people away from Flash. It has many of the same functionalities with faster load times and better accessibility. HTML5 will help Flash as well: with the new caching methods you can now even run Flash apps offline.” Although all three Web developers I interviewed would agree that Flash is still required for more sophisticated applications, Kellum seems to have put his finger on why HTML 5 may nonetheless dominate.  In his view, much of the Web development out there has little need for high-end capabilities: “Most people want to add a little punch to a navigation bar or some video and now you can get the biggest bang for your buck with HTML5, CSS3 and Javascript.” I’ve already mentioned that Google’s ongoing I/O conference, at the Moscone West center in San Francisco, is driving the HTML 5 news cycle, big time.  And Google made many announcements of their own, including the open sourcing of their VP8 video codec, new enterprise-oriented capabilities for its App Engine cloud offering, and the creation of the Chrome Web Store, which the company says will make it easier to find and “install” Web applications, in a fashion similar to the way users procure native apps on various mobile platforms. HTML 5 looks to be disruptive, especially to the media world.  And even if the technology ends up disappointing, the chatter around it alone is causing big changes in the technology world.  If the richness it promises delivers, then magazine publishers and non-text digital advertisers may indeed have a platform for creating compelling content that loads quickly, is standards-based and will render identically in (the newest versions of) all major Web browsers.  Can this development in the digital arena save the titans of the print world?  I can’t predict, but it’s going to be fun to watch, and the competitive innovation from all players in both industries will likely be immense.

    Read the article

  • Dynamic Code for type casting Generic Types 'generically' in C#

    - by Rick Strahl
    C# is a strongly typed language and while that's a fundamental feature of the language there are more and more situations where dynamic types make a lot of sense. I've written quite a bit about how I use dynamic for creating new type extensions: Dynamic Types and DynamicObject References in C# Creating a dynamic, extensible C# Expando Object Creating a dynamic DataReader for dynamic Property Access Today I want to point out an example of a much simpler usage for dynamic that I use occasionally to get around potential static typing issues in C# code especially those concerning generic types. TypeCasting Generics Generic types have been around since .NET 2.0 I've run into a number of situations in the past - especially with generic types that don't implement specific interfaces that can be cast to - where I've been unable to properly cast an object when it's passed to a method or assigned to a property. Granted often this can be a sign of bad design, but in at least some situations the code that needs to be integrated is not under my control so I have to make due with what's available or the parent object is too complex or intermingled to be easily refactored to a new usage scenario. Here's an example that I ran into in my own RazorHosting library - so I have really no excuse, but I also don't see another clean way around it in this case. A Generic Example Imagine I've implemented a generic type like this: public class RazorEngine<TBaseTemplateType> where TBaseTemplateType : RazorTemplateBase, new() You can now happily instantiate new generic versions of this type with custom template bases or even a non-generic version which is implemented like this: public class RazorEngine : RazorEngine<RazorTemplateBase> { public RazorEngine() : base() { } } To instantiate one: var engine = new RazorEngine<MyCustomRazorTemplate>(); Now imagine that the template class receives a reference to the engine when it's instantiated. This code is fired as part of the Engine pipeline when it gets ready to execute the template. It instantiates the template and assigns itself to the template: var template = new TBaseTemplateType() { Engine = this } The problem here is that possibly many variations of RazorEngine<T> can be passed. I can have RazorTemplateBase, RazorFolderHostTemplateBase, CustomRazorTemplateBase etc. as generic parameters and the Engine property has to reflect that somehow. So, how would I cast that? My first inclination was to use an interface on the engine class and then cast to the interface.  Generally that works, but unfortunately here the engine class is generic and has a few members that require the template type in the member signatures. So while I certainly can implement an interface: public interface IRazorEngine<TBaseTemplateType> it doesn't really help for passing this generically templated object to the template class - I still can't cast it if multiple differently typed versions of the generic type could be passed. I have the exact same issue in that I can't specify a 'generic' generic parameter, since there's no underlying base type that's common. In light of this I decided on using object and the following syntax for the property (and the same would be true for a method parameter): public class RazorTemplateBase :MarshalByRefObject,IDisposable { public object Engine {get;set; } } Now because the Engine property is a non-typed object, when I need to do something with this value, I still have no way to cast it explicitly. 
What I really would need is: public RazorEngine<> Engine { get; set; } but that's not possible. Dynamic to the Rescue Luckily with the dynamic type this sort of thing can be mitigated fairly easily. For example here's a method that uses the Engine property and uses the well known class interface by simply casting the plain object reference to dynamic and then firing away on the properties and methods of the base template class that are common to all templates:/// <summary> /// Allows rendering a dynamic template from a string template /// passing in a model. This is like rendering a partial /// but providing the input as a /// </summary> public virtual string RenderTemplate(string template,object model) { if (template == null) return string.Empty; // if there's no template markup if(!template.Contains("@")) return template; // use dynamic to get around generic type casting dynamic engine = Engine; string result = engine.RenderTemplate(template, model); if (result == null) throw new ApplicationException("RenderTemplate failed: " + engine.ErrorMessage); return result; } Prior to .NET 4.0  I would have had to use Reflection for this sort of thing which would have a been a heck of a lot more verbose, but dynamic makes this so much easier and cleaner and in this case at least the overhead is negliable since it's a single dynamic operation on an otherwise very complex operation call. Dynamic as  a Bailout Sometimes this sort of thing often reeks of a design flaw, and I agree that in hindsight this could have been designed differently. But as is often the case this particular scenario wasn't planned for originally and removing the generic signatures from the base type would break a ton of other code in the framework. Given the existing fairly complex engine design, refactoring an interface to remove generic types just to make this particular code work would have been overkill. Instead dynamic provides a nice and simple and relatively clean solution. Now if there were many other places where this occurs I would probably consider reworking the code to make this cleaner but given this isolated instance and relatively low profile operation use of dynamic seems a valid choice for me. This solution really works anywhere where you might end up with an inheritance structure that doesn't have a common base or interface that is sufficient. In the example above I know what I'm getting but there's no common base type that I can cast to. All that said, it's a good idea to think about use of dynamic before you rush in. In many situations there are alternatives that can still work with static typing. Dynamic definitely has some overhead compared to direct static access of objects, so if possible we should definitely stick to static typing. In the example above the application already uses dynamics extensively for dynamic page page templating and passing models around so introducing dynamics here has very little additional overhead. The operation itself also fires of a fairly resource heavy operation where the overhead of a couple of dynamic member accesses are not a performance issue. So, what's your experience with dynamic as a bailout mechanism? 
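For comparison, the post mentions that before .NET 4.0 this kind of late-bound call meant falling back to Reflection; here is a rough sketch of what the same RenderTemplate call might look like that way, assuming the same (string, object) signature used above:

using System.Reflection;

object engine = Engine;

// Late-bound equivalent of: dynamic engine = Engine; engine.RenderTemplate(template, model);
MethodInfo renderMethod = engine.GetType()
                                .GetMethod("RenderTemplate", new[] { typeof(string), typeof(object) });

string result = (string)renderMethod.Invoke(engine, new object[] { template, model });

if (result == null)
{
    // Pull the ErrorMessage property the same way, again without a common base type.
    PropertyInfo errorProperty = engine.GetType().GetProperty("ErrorMessage");
    throw new ApplicationException("RenderTemplate failed: " + errorProperty.GetValue(engine, null));
}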
© Rick Strahl, West Wind Technologies, 2005-2012  Posted in CSharp

    Read the article

  • TFS 2010 SDK: Smart Merge - Programmatically Create your own Merge Tool

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010,TFS SDK,TFS API,TFS Merge Programmatically,TFS Work Items Programmatically,TFS Administration Console,ALM
The information available in the Merge window in Team Foundation Server 2010 is very important in the decision making during the merging process. However, at present the merge window shows very limited information; more often than not you are interested to know the work item, files modified, code reviewer notes, policies overridden, etc. associated with the changeset. Our friends at Microsoft are working hard to change the game again with vNext, but because at present the merge window is a modal window you have to cancel the merge process and go back one after the other to check the additional information you need. If you can relate to what I am saying, you will enjoy this blog post! I will show you how to programmatically create your own merging window using the TFS 2010 API. A few screen shots of the WPF TFS 2010 API – Custom Merging Application that we will be creating programmatically. Excited? Let’s start coding…
1. Get All Team Project Collections for the TFS Server You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically
1: public static ReadOnlyCollection<CatalogNode> GetAllTeamProjectCollections() 2: { 3: TfsConfigurationServer configurationServer = 4: TfsConfigurationServerFactory. 5: GetConfigurationServer(new Uri("http://xxx:8080/tfs/")); 6: 7: CatalogNode catalogNode = configurationServer.CatalogNode; 8: return catalogNode.QueryChildren(new Guid[] 9: { CatalogResourceTypes.ProjectCollection }, 10: false, CatalogQueryOptions.None); 11: }
2. Get All Team Projects for the selected Team Project Collection You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically
1: public static ReadOnlyCollection<CatalogNode> GetTeamProjects(string instanceId) 2: { 3: ReadOnlyCollection<CatalogNode> teamProjects = null; 4: 5: TfsConfigurationServer configurationServer = 6: TfsConfigurationServerFactory.GetConfigurationServer(new Uri("http://xxx:8080/tfs/")); 7: 8: CatalogNode catalogNode = configurationServer.CatalogNode; 9: var teamProjectCollections = catalogNode.QueryChildren(new Guid[] {CatalogResourceTypes.ProjectCollection }, 10: false, CatalogQueryOptions.None); 11: 12: foreach (var teamProjectCollection in teamProjectCollections) 13: { 14: if (string.Compare(teamProjectCollection.Resource.Properties["InstanceId"], instanceId, true) == 0) 15: { 16: teamProjects = teamProjectCollection.QueryChildren(new Guid[] { CatalogResourceTypes.TeamProject }, false, 17: CatalogQueryOptions.None); 18: } 19: } 20: 21: return teamProjects; 22: }
3. Get All Branches within a Team Project programmatically I will be passing the name of the Team Project for which I want to retrieve all the branches. When consuming the ‘Version Control Service’ you have the method QueryRootBranchObjects; you need to pass the recursion type => none, one, full. Full implies you are interested in all branches under that root branch.
1: public static List<BranchObject> GetParentBranch(string projectName) 2: { 3: var branches = new List<BranchObject>(); 4: 5: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<teamProjectName>")); 6: var versionControl = tfs.GetService<VersionControlServer>(); 7: 8: var allBranches = versionControl.QueryRootBranchObjects(RecursionType.Full); 9: 10: foreach (var branchObject in allBranches) 11: { 12: if (branchObject.Properties.RootItem.Item.ToUpper().Contains(projectName.ToUpper())) 13: { 14: branches.Add(branchObject); 15: } 16: } 17: 18: return branches; 19: } 4. Get All Branches associated to the Parent Branch Programmatically Now that we have the parent branch, it is important to retrieve all child branches of that parent branch. Lets see how we can achieve this using the TFS API. 1: public static List<ItemIdentifier> GetChildBranch(string parentBranch) 2: { 3: var branches = new List<ItemIdentifier>(); 4: 5: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>")); 6: var versionControl = tfs.GetService<VersionControlServer>(); 7: 8: var i = new ItemIdentifier(parentBranch); 9: var allBranches = 10: versionControl.QueryBranchObjects(i, RecursionType.None); 11: 12: foreach (var branchObject in allBranches) 13: { 14: foreach (var childBranche in branchObject.ChildBranches) 15: { 16: branches.Add(childBranche); 17: } 18: } 19: 20: return branches; 21: } 5. Get Merge candidates between two branches Programmatically Now that we have the parent and the child branch that we are interested to perform a merge between we will use the method ‘GetMergeCandidates’ in the namespace ‘Microsoft.TeamFoundation.VersionControl.Client’ => http://msdn.microsoft.com/en-us/library/bb138934(v=VS.100).aspx 1: public static MergeCandidate[] GetMergeCandidates(string fromBranch, string toBranch) 2: { 3: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>")); 4: var versionControl = tfs.GetService<VersionControlServer>(); 5: 6: return versionControl.GetMergeCandidates(fromBranch, toBranch, RecursionType.Full); 7: } 6. Get changeset details Programatically Now that we have the changeset id that we are interested in, we can get details of the changeset. The Changeset object contains the properties => http://msdn.microsoft.com/en-us/library/microsoft.teamfoundation.versioncontrol.client.changeset.aspx - Changes: Gets or sets an array of Change objects that comprise this changeset. - CheckinNote: Gets or sets the check-in note of the changeset. - Comment: Gets or sets the comment of the changeset. - PolicyOverride: Gets or sets the policy override information of this changeset. - WorkItems: Gets an array of work items that are associated with this changeset. 1: public static Changeset GetChangeSetDetails(int changeSetId) 2: { 3: var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>")); 4: var versionControl = tfs.GetService<VersionControlServer>(); 5: 6: return versionControl.GetChangeset(changeSetId); 7: } 7. Possibilities In future posts i will try and extend this idea to explore further possibilities, but few features that i am sure will further help during the merge decision making process would be, - View changed files - Compare modified file with current/previous version - Merge Preview - Last Merge date Any other features that you can think of?
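    As a footnote to the walkthrough above, here is a hedged sketch (not part of the original post) that ties steps 5 and 6 together: it lists each merge candidate between two branches along with its comment, associated work items and changed files, which is exactly the kind of information the default merge window hides. The collection URL and branch paths are placeholders; only APIs already shown above (GetMergeCandidates, GetChangeset) are used.

      using System;
      using Microsoft.TeamFoundation.Client;
      using Microsoft.TeamFoundation.VersionControl.Client;

      public static class MergePreview
      {
          public static void PrintMergeCandidates(string fromBranch, string toBranch)
          {
              // Placeholder collection URL - replace with your own server and collection.
              var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                  new Uri("http://<ServerName>:8080/tfs/<CollectionName>"));
              var versionControl = tfs.GetService<VersionControlServer>();

              foreach (MergeCandidate candidate in
                       versionControl.GetMergeCandidates(fromBranch, toBranch, RecursionType.Full))
              {
                  // Re-query the changeset to get the full details: comment, work items, changes.
                  Changeset changeset = versionControl.GetChangeset(candidate.Changeset.ChangesetId);

                  Console.WriteLine("Changeset {0}: {1}", changeset.ChangesetId, changeset.Comment);

                  foreach (var workItem in changeset.WorkItems)
                      Console.WriteLine("  Work item {0}: {1}", workItem.Id, workItem.Title);

                  foreach (Change change in changeset.Changes)
                      Console.WriteLine("  {0} ({1})", change.Item.ServerItem, change.ChangeType);
              }
          }
      }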

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
    Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun! I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers.  I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it! One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?) My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server. So here's a standard template for a logon trigger to address those pesky sa users: CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER WITH ENCRYPTION, EXECUTE AS N'sa' AFTER LOGON AS IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN; -- interesting stuff goes here GO   What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired).  That's a good use for logon triggers, but really not tricky enough for this blog.  Some of my suggestions are below: WAITFOR DELAY '23:59:59';   Or: EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'   Or: EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3; EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME; EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;'; EXEC msdb.dbo.sp_start_job @job_name=N'`';   Really, I don't want to spoil your own exploration, try it yourself!  The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!".  Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up) Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW! 
Here's how RG can do that: USE master; GO CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname WITH SCHEMABINDING, ENCRYPTION AS BEGIN RETURN CASE WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%' THEN N'SA_LOGIN_PRIORITY' ELSE N'default' END END GO CREATE RESOURCE POOL SA_LOGIN_PRIORITY WITH ( MIN_CPU_PERCENT = 0 ,MAX_CPU_PERCENT = 1 ,CAP_CPU_PERCENT = 1 ,AFFINITY SCHEDULER = (0) ,MIN_MEMORY_PERCENT = 0 ,MAX_MEMORY_PERCENT = 1 -- ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014 ); CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY WITH ( IMPORTANCE = LOW ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1 ,REQUEST_MAX_CPU_TIME_SEC = 1 ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1 ,MAX_DOP = 1 ,GROUP_MAX_REQUESTS = 1 ) USING SA_LOGIN_PRIORITY; ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY); ALTER RESOURCE GOVERNOR RECONFIGURE;   From top to bottom: Create a classifier function to determine which pool the session should go to. More info on classifier functions. Create the pool and provide a generous helping of resources for the sa login. Create the workload group and further prioritize those resources for the sa login. Apply the classifier function and reconfigure RG to use it. I have to say this one is a bit sneakier than the logon trigger, least of all you don't get any error messages.  I heartily recommend testing it in Management Studio, and click around the UI a lot, there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included!  You'll notice I made allowances for SQL Agent jobs owned by sa, they'll go into the default workload group.  You can add your own overrides to the classifier function if needed. Some interesting ideas I didn't have time for but expect you to get to before me: Set up different pools/workgroups with different settings and randomize which one the classifier chooses Do the same but base it on time of day (Books Online example covers this)... Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you. And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection: ALTER RESOURCE GOVERNOR DISABLE;   That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit! Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here.  Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server.  These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly.  SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only.  And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications! 
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized.  They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things. So here's the setup: USE msdb; GO CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act WITH ENCRYPTION AS DECLARE @x XML, @message nvarchar(max); RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q; IF @x.value('(//LoginName)[1]','sysname')=N'sa' AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%' BEGIN -- interesting activation procedure stuff goes here END GO CREATE QUEUE SA_LOGIN_PRIORITY_q WITH STATUS=ON, RETENTION=OFF, ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER); CREATE SERVICE SA_LOGIN_PRIORITY_s ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]); CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER WITH FAN_IN FOR AUDIT_LOGIN TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database' GO   From top to bottom: Create activation procedure for event notification queue. Create queue to accept messages from event notification, and activate the procedure to process those messages when received. Create service to send messages to that queue. Create event notification on AUDIT_LOGIN events that fire the service. I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped. So what to put in place for "interesting activation procedure code"?  Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message: SET @[email protected]('(//HostName)[1]','sysname') + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname') + N' as SA at ' + @x.value('(//StartTime)[1]','sysname') + N' using the ' + @x.value('(//ApplicationName)[1]','sysname') + N' program. That''s why you''re getting this message and the attached pornography which' + N' is bloating your inbox and violating company policy, among other things. If you know' + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL ' + @x.value('(//SPID)[1]','sysname') + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.' EXEC msdb.dbo.sp_send_dbmail @recipients=N'[email protected]', @subject=N'SA Login Alert', @query_result_width=32767, @body=@message, @query=N'EXEC sp_readerrorlog;', @attach_query_result_as_file=1, @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg' I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it.  The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned, I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly.  I'm still working on those.

    Read the article

  • ASP.NET MVC 3 Hosting :: Rolling with Razor in MVC v3 Preview

    - by mbridge
    Razor is an alternate view engine for asp.net MVC.  It was introduced in the “WebMatrix” tool and has now been released as part of the asp.net MVC 3 preview 1.  Basically, Razor allows us to replace the clunky <% %> syntax with a much cleaner coding model, which integrates very nicely with HTML.  Additionally, it provides some really nice features for master page type scenarios and you don’t lose access to any of the features you are currently familiar with, such as HTML helper methods. First, download and install the ASP.NET MVC 3 Preview 1.  You can find this at http://www.microsoft.com/downloads/details.aspx?FamilyID=cb42f741-8fb1-4f43-a5fa-812096f8d1e8&displaylang=en. Now, follow these steps to create your first asp.net mvc project using Razor: 1. Open Visual Studio 2010 2. Create a new project.  Select File->New->Project (Ctrl+Shift+N) 3. You will see the list of project types which should look similar to what’s shown:   4. Select “ASP.NET MVC 3 Web Application (Razor).”  Set the application name to RazorTest and the path to c:\projects\RazorTest for this tutorial. If you accidentally select ASPX, you will end up with the standard asp.net view engine and template, which isn’t what you want. 5. For this tutorial, and ONLY for this tutorial, select “No, do not create a unit test project.”  In general, you should create and use a unit test project.  Code without unit tests is kind of like diet ice cream.  It just isn’t very good. Now, once we have this done, our brand new project will be created.    In all likelihood, Visual Studio will leave you looking at the “HomeController.cs” class, as shown below: Immediately, you should notice one difference.  The Index action used to look like: public ActionResult Index() { ViewData[“Message”] = “Welcome to ASP.Net MVC!”; return View(); } While this will still compile and run just fine, ASP.Net MVC 3 has a much nicer way of doing this: public ActionResult Index() { ViewModel.Message = “Welcome to ASP.Net MVC!”; return View(); } Instead of using ViewData we are using the new ViewModel object, which uses the new dynamic typing of .Net 4.0 to allow us to express ourselves much more cleanly.  This isn’t a tutorial on ALL of MVC 3, but the ViewModel concept is one we will need as we dig into Razor. What comes in the box? When we create a project using the ASP.Net MVC 3 Template with Razor, we get a standard project setup, just like we did in ASP.NET MVC 2.0 but with some differences.  Instead of seeing “.aspx” view files and “.ascx” files, we see files with the “.cshtml” extension, which is the default for Razor.  Before we discuss the details of a razor file, one thing to keep in mind is that since this is an extremely early preview, IntelliSense is not currently enabled with the razor view engine.  This is promised as an update before the final release.  Just like with the aspx view engine, the convention of the folder name for a set of views matching the controller name without the word “Controller” still stands.  Similarly, each action in the controller will usually have a corresponding view file in the appropriate view directory.  Remember, in asp.net MVC, convention over configuration is key to successful development! The initial template organizes views in the following folders, located in the project under Views: - Account – The default account management views used by the Account controller.  Each file represents a distinct view. - Home – Views corresponding to the appropriate actions within the home controller. 
- Shared – This contains common view objects used by multiple views.  Within here, master pages are stored, as well as partial page views (user controls).  By convention, these partial views are named “_XXXPartial.cshtml” where XXX is the appropriate name, such as _LogonPartial.cshtml.  Additionally, display templates are stored under here. With this in mind, let us take a look at the index.cshtml file under the home view directory.  When you open up index.cshtml you should see 1:   @inherits System.Web.Mvc.WebViewPage 2:  @{ 3:          View.Title = "Home Page"; 4:       LayoutPage = "~/Views/Shared/_Layout.cshtml"; 5:   } 6:  <h2>@View.Message</h2> 7:  <p> 8:     To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC     9:    Website">http://asp.net/mvc</a>. 10:  </p> So looking through this, we observe the following facts: Line 1 imports the base page that all views (using Razor) are based on, which is System.Web.Mvc.WebViewPage.  Note that this is different than System.Web.MVC.ViewPage which is used by asp.net MVC 2.0 Also note that instead of the <% %> syntax, we use the very simple ‘@’ sign.  The View Engine contains enough context sensitive logic that it can even distinguish between @ in code and @ in an email.  It’s a very clean markup.  Line 2 introduces the idea of a code block in razor.  A code block is a scoping mechanism just like it is in a normal C# class.  It is designated by @{… }  and any C# code can be placed in between.  Note that this is all server side code just like it is when using the aspx engine and <% %>.  Line 3 allows us to set the page title in the client page’s file.  This is a new feature which I’ll talk more about when we get to master pages, but it is another of the nice things razor brings to asp.net mvc development. Line 4 is where we specify our “master” page, but as you can see, you can place it almost anywhere you want, because you tell it where it is located.  A Layout Page is similar to a master page, but it gains a bit when it comes to flexibility.  Again, we’ll come back to this in a later installment.  Line 6 and beyond is where we display the contents of our view.  No more using <%: %> intermixed with code.  Instead, we get to use very clean syntax such as @View.Message.  This is a lot easier to read than <%:@View.Message%> especially when intermixed with html.  For example: <p> My name is @View.Name and I live at @View.Address </p> Compare this to the equivalent using the aspx view engine <p> My name is <%:View.Name %> and I live at <%: View.Address %> </p> While not an earth shaking simplification, it is easier on the eyes.  As  we explore other features, this clean markup will become more and more valuable.
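    To make the ViewModel/View pairing above concrete, here is a minimal, hypothetical controller sketch assuming the MVC 3 Preview 1 API described in this post (the dynamic ViewModel property on the controller, read back in the Razor view via @View.Name and @View.Address); the action name and values are made up for illustration.

      using System.Web.Mvc;

      public class HomeController : Controller
      {
          // Hypothetical action: values set on the dynamic ViewModel property
          // surface in the Razor view as @View.Name and @View.Address.
          public ActionResult About()
          {
              ViewModel.Name = "Joe Developer";
              ViewModel.Address = "1 Razor Way";
              return View();
          }
      }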

    Read the article

  • Silverlight Recruiting Application Part 4 - Navigation and Modules

    After our brief intermission (and the craziness of Q1 2010 release week), we're back on track here and today we get to dive into how we are going to navigate through our applications as well as how to set up our modules. That way, as I start adding the functionality- adding Jobs and Applicants, Interview Scheduling, and finally a handy Dashboard- you'll see how everything is communicating back and forth. This is all leading up to an eventual webinar, in which I'll dive into this process and give a honest look at the current story for MVVM vs. Code-Behind applications. (For a look at the future with SL4 and a little thing called MEF, check out what Ross is doing over at his blog!) Preamble... Before getting into really talking about this app, I've done a little bit of work ahead of time to create a ton of files that I'll need. Since the webinar is going to cover the Dashboard, it's not here, but otherwise this is a look at what the project layout looks like (and remember, this is both projects since they share the .Web): So as you can see, from an architecture perspective, the code-behind app is much smaller and more streamlined- aka a better fit for the one man shop that is me. Each module in the MVVM app has the same setup, which is the Module class and corresponding Views and ViewModels. Since the code-behind app doesn't need a go-between project like Infrastructure, each MVVM module is instead replaced by a single Silverlight UserControl which will contain all the logic for each respective bit of functionality. My Very First Module Navigation is going to be key to my application, so I figured the first thing I would setup is my MenuModule. First step here is creating a Silverlight Class Library named MenuModule, creatingthe View and ViewModel folders, and adding the MenuModule.cs class to handle module loading. The most important thing here is that my MenuModule inherits from IModule, which runs an Initialize on each module as it is created that, in my case, adds the views to the correct regions. Here's the MenuModule.cs code: public class MenuModule : IModule { private readonly IRegionManager regionManager; private readonly IUnityContainer container; public MenuModule(IUnityContainer container, IRegionManager regionmanager) { this.container = container; this.regionManager = regionmanager; } public void Initialize() { var addMenuView = container.Resolve<MenuView>(); regionManager.Regions["MenuRegion"].Add(addMenuView); } } Pretty straightforward here... We inject a container and region manager from Prism/Unity, then upon initialization we grab the view (out of our Views folder) and add it to the region it needs to live in. Simple, right? When the MenuView is created, the only thing in the code-behind is a reference to the set the MenuViewModel as the DataContext. I'd like to achieve MVVM nirvana and have zero code-behind by placing the viewmodel in the XAML, but for the reasons listed further below I can't. Navigation - MVVM Since navigation isn't the biggest concern in putting this whole thing together, I'm using the Button control to handle different options for loading up views/modules. There is another reason for this- out of the box, Prism has command support for buttons, which is one less custom command I had to work up for the functionality I would need. 
This comes from the Microsoft.Practices.Composite.Presentation assembly and looks as follows when put in code: <Button x:Name="xGoToJobs" Style="{StaticResource menuStyle}" Content="Jobs" cal:Click.Command="{Binding GoModule}" cal:Click.CommandParameter="JobPostingsView" /> For quick reference, 'menuStyle' is just taking care of margins and spacing, otherwise it looks, feels, and functions like everyone's favorite Button. What MVVM's this up is that the Click.Command is tying to a DelegateCommand (also coming fromPrism) on the backend. This setup allows you to tie user interaction to a command you setup in your viewmodel, which replaces the standard event-based setup you'd see in the code-behind app. Due to databinding magic, it all just works. When we get looking at the DelegateCommand in code, it ends up like this: public class MenuViewModel : ViewModelBase { private readonly IRegionManager regionManager; public DelegateCommand<object> GoModule { get; set; } public MenuViewModel(IRegionManager regionmanager) { this.regionManager = regionmanager; this.GoModule = new DelegateCommand<object>(this.goToView); } public void goToView(object obj) { MakeMeActive(this.regionManager, "MainRegion", obj.ToString()); } } Another for reference, ViewModelBase takes care of iNotifyPropertyChanged and MakeMeActive, which switches views in the MainRegion based on the parameters. So our public DelegateCommand GoModule ties to our command on the view, that in turn calls goToView, and the parameter on the button is the name of the view (which we pass with obj.ToString()) to activate. And how do the views get the names I can pass as a string? When I called regionManager.Regions[regionname].Add(view), there is an overload that allows for .Add(view, "viewname"), with viewname being what I use to activate views. You'll see that in action next installment, just wanted to clarify how that works. With this setup, I create two more buttons in my MenuView and the MenuModule is good to go. Last step is to make sure my MenuModule loads in my Bootstrapper: protected override IModuleCatalog GetModuleCatalog() { ModuleCatalog catalog = new ModuleCatalog(); // add modules here catalog.AddModule(typeof(MenuModule.MenuModule)); return catalog; } Clean, simple, MVVM-delicious. Navigation - Code-Behind Keeping with the history of significantly shorter code-behind sections of this series, Navigation will be no different. I promise. As I explained in a prior post, due to the one-project setup I don't have to worry about the same concerns so my menu is part of MainPage.xaml. So I can cheese-it a bit, though, since I've already got three buttons all set I'm just copying that code and adding three click-events instead of the command/commandparameter setup: <!-- Menu Region --> <StackPanel Grid.Row="1" Orientation="Vertical"> <Button x:Name="xJobsButton" Content="Jobs" Style="{StaticResource menuStyleCB}" Click="xJobsButton_Click" /> <Button x:Name="xApplicantsButton" Content="Applicants" Style="{StaticResource menuStyleCB}" Click="xApplicantsButton_Click" /> <Button x:Name="xSchedulingModule" Content="Scheduling" Style="{StaticResource menuStyleCB}" Click="xSchedulingModule_Click" /> </StackPanel> Simple, easy to use events, and no extra assemblies required! Since the code for loading each view will be similar, we'll focus on JobsView for now.The code-behind with this setup looks something like... 
private JobsView _jobsView; public MainPage() { InitializeComponent(); } private void xJobsButton_Click(object sender, RoutedEventArgs e) { if (MainRegion.Content.GetType() != typeof(JobsView)) { if (_jobsView == null) _jobsView = new JobsView(); MainRegion.Content = _jobsView; } } What am I doing here? First, for each 'view' I create a private reference which MainPage will hold on to. This allows for a little bit of state-maintenance when switching views. When a button is clicked, first we make sure the 'view' typeisn't active (why load it again if it is already at center stage?), then we check if the view has been created and create if necessary, then load it up. Three steps to switching views and is easy as pie. Part 4 Results The end result of all this is that I now have a menu module (MVVM) and a menu section (code-behind) that load their respective views. Since I'm using the same exact XAML (except with commands/events depending on the project), the end result for both is again exactly the same and I'll show a slightly larger image to show it off: Next time, we add the Jobs Module and wire up RadGridView and a separate edit page to handle adding and editing new jobs. That's when things get fun. And somewhere down the line, I'll make the menu look slicker. :) Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.
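    For completeness, here is a hedged sketch of the ViewModelBase class the MVVM walkthrough refers to but does not show; the real implementation may differ, but the essentials are INotifyPropertyChanged plus the MakeMeActive helper that activates a named view in a region (assuming the Prism v2 region API from Microsoft.Practices.Composite).

      using System.ComponentModel;
      using Microsoft.Practices.Composite.Regions;

      public abstract class ViewModelBase : INotifyPropertyChanged
      {
          public event PropertyChangedEventHandler PropertyChanged;

          // Raises the standard change notification used by Silverlight data binding.
          protected void OnPropertyChanged(string propertyName)
          {
              var handler = PropertyChanged;
              if (handler != null)
                  handler(this, new PropertyChangedEventArgs(propertyName));
          }

          // Looks up the view registered under 'viewName' (via the Add(view, "viewname")
          // overload mentioned above) and makes it the active view in the given region.
          protected void MakeMeActive(IRegionManager regionManager, string regionName, string viewName)
          {
              IRegion region = regionManager.Regions[regionName];
              object view = region.GetView(viewName);
              if (view != null)
              {
                  region.Activate(view);
              }
          }
      }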

    Read the article

  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan. Introduction and Architecture: As more and more enterprises deploy some of their non-critical workload on Amazon Web Services (AWS), it’s becoming critical to monitor those public AWS resources alongside their on-premises resources. The recently announced Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS) allows you to achieve that goal. The on-premises Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources.  By deploying the plug-in within your Cloud Control environment, you gain the following management features: Monitor EBS, EC2 and RDS instances on Amazon Web Services; Gather performance metrics and configuration details for AWS instances; Raise alerts and violations based on thresholds set on monitoring; Generate reports based on the gathered data. Users of this Plug-in can leverage the rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd party ticketing applications, etc. AWS monitoring via this Plug-in is enabled via the Amazon CloudWatch API, and the users of this Plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. This Plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2. The overall architecture covers Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS). Here are a few key features: Rich and exhaustive list of metrics; Metrics can be gathered from an Agent running outside AWS; Critical configuration information; Custom home pages with charts and AWS configuration information; Incidents generated based on thresholds set on monitoring data. Discovery and Monitoring: AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource: EBS Resource uses target type Amazon EBS Service with properties CloudWatch base URI, EC2 base URI, Period, Volume Id, Proxy Server and Port; EC2 Resource uses target type Amazon EC2 Service with properties CloudWatch base URI, EC2 base URI, Period, Instance Id, Proxy Server and Port; RDS Resource uses target type Amazon RDS Service with properties CloudWatch base URI, RDS base URI, Period, Instance Id, Proxy Server and Port. Proxy server and port are optional and are only needed if the agent is within the firewall. Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions to deploy the plug-in and add the AWS instances. 
./emcli add_target \       -name="<target name>" \       -type="AmazonEC2Service" \       -host="<host>" \       -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point periond>"  \     -subseparator=properties="=" ./emcli set_monitoring_credential \                 -set_name="AWSKeyCredentialSet"  \                 -target_name="<target name>"  \                 -target_type="AmazonEC2Service" \                 -cred_type="AWSKeyCredential"  \                 -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>" Emcli utility is found under the ORACLE_HOME of EM12C install. Once the instance is discovered, the target will show up under the ‘All Targets’ list under “Amazon EC2 Service’. Once the instances are added, one can navigate to the custom homepages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances.  By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instance by leveraging the EM12C Config search feature. The following configuration properties and metrics are collected for these Resource types. Resource Type Configuration Properties Metrics EBS Resource Volume Id, Volume Type, Device Name, Size, Availability Zone Response: Status Utilization: QueueLength, IdleTime Volume Statistics: ReadBrandwith, WriteBandwidth, ReadThroughput, WriteThroughput Operation Statistics: ReadSize, WriteSize, ReadLatency, WriteLatency EC2 Resource Instance ID, Owner Id, Root Device type, Instance Type. Availability Zone Response: Status CPU Utilization: CPU Utilization Disk I/O:  DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput Network I/O : NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput RDS Resource Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone Response: Status Disk I/O:  ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput DB Utilization:  BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage Custom Home Pages As mentioned above, we have custom home pages for these target types that include basic configuration information,  last 24 hours availability, top metrics and the incidents generated. Here are few snapshots. EBS Instance Home Page: EC2 Instance Home Page: RDS Instance Home Page: Further Reading: 1)      AWS Plugin download 2)      Installation and  Read Me. 3)      Screenwatch on SlideShare 4)      Extensibility Programmer's Guide 5)      Amazon Web Services
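    Unrelated to the plug-in’s own tooling, but handy for sanity-checking the data it collects: the sketch below (an assumption, not part of the plug-in) pulls the same CPUUtilization metric the EC2 target monitors, using the CloudWatch client from the AWS SDK for .NET. Keys, region and instance id are placeholders, and the exact response shape can vary between SDK versions.

      using System;
      using System.Collections.Generic;
      using Amazon;
      using Amazon.CloudWatch;
      using Amazon.CloudWatch.Model;

      class CloudWatchCheck
      {
          static void Main()
          {
              // Placeholder credentials and region - substitute your own access/secret keys.
              var client = new AmazonCloudWatchClient("<accessKeyId>", "<secretKey>", RegionEndpoint.USEast1);

              var request = new GetMetricStatisticsRequest
              {
                  Namespace = "AWS/EC2",
                  MetricName = "CPUUtilization",
                  Dimensions = new List<Dimension> { new Dimension { Name = "InstanceId", Value = "<EC2 instance Id>" } },
                  StartTime = DateTime.UtcNow.AddHours(-1),
                  EndTime = DateTime.UtcNow,
                  Period = 300,                       // same idea as the Period target property above
                  Statistics = new List<string> { "Average" }
              };

              foreach (var datapoint in client.GetMetricStatistics(request).Datapoints)
              {
                  Console.WriteLine("{0:u}  avg CPU {1:F2}%", datapoint.Timestamp, datapoint.Average);
              }
          }
      }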

    Read the article

  • Windows 8.1 Will Start Encrypting Hard Drives By Default: Everything You Need to Know

    - by Chris Hoffman
    Windows 8.1 will automatically encrypt the storage on modern Windows PCs. This will help protect your files in case someone steals your laptop and tries to get at them, but it has important ramifications for data recovery. Previously, “BitLocker” was available on Professional and Enterprise editions of Windows, while “Device Encryption” was available on Windows RT and Windows Phone. Device encryption is included with all editions of Windows 8.1 — and it’s on by default. When Your Hard Drive Will Be Encrypted Windows 8.1 includes “Pervasive Device Encryption.” This works a bit differently from the standard BitLocker feature that has been included in Professional, Enterprise, and Ultimate editions of Windows for the past few versions. Before Windows 8.1 automatically enables Device Encryption, the following must be true: The Windows device “must support connected standby and meet the Windows Hardware Certification Kit (HCK) requirements for TPM and SecureBoot on ConnectedStandby systems.”  (Source) Older Windows PCs won’t support this feature, while new Windows 8.1 devices you pick up will have this feature enabled by default. When Windows 8.1 installs cleanly and the computer is prepared, device encryption is “initialized” on the system drive and other internal drives. Windows uses a clear key at this point, which is removed later when the recovery key is successfully backed up. The PC’s user must log in with a Microsoft account with administrator privileges or join the PC to a domain. If a Microsoft account is used, a recovery key will be backed up to Microsoft’s servers and encryption will be enabled. If a domain account is used, a recovery key will be backed up to Active Directory Domain Services and encryption will be enabled. If you have an older Windows computer that you’ve upgraded to Windows 8.1, it may not support Device Encryption. If you log in with a local user account, Device Encryption won’t be enabled. If you upgrade your Windows 8 device to Windows 8.1, you’ll need to enable device encryption, as it’s off by default when upgrading. Recovering An Encrypted Hard Drive Device encryption means that a thief can’t just pick up your laptop, insert a Linux live CD or Windows installer disc, and boot the alternate operating system to view your files without knowing your Windows password. It means that no one can just pull the hard drive from your device, connect the hard drive to another computer, and view the files. We’ve previously explained that your Windows password doesn’t actually secure your files. With Windows 8.1, average Windows users will finally be protected with encryption by default. However, there’s a problem — if you forget your password and are unable to log in, you’d also be unable to recover your files. This is likely why encryption is only enabled when a user logs in with a Microsoft account (or connects to a domain). Microsoft holds a recovery key, so you can gain access to your files by going through a recovery process. As long as you’re able to authenticate using your Microsoft account credentials — for example, by receiving an SMS message on the cell phone number connected to your Microsoft account — you’ll be able to recover your encrypted data. With Windows 8.1, it’s more important than ever to configure your Microsoft account’s security settings and recovery methods so you’ll be able to recover your files if you ever get locked out of your Microsoft account. 
Microsoft does hold the recovery key and would be capable of providing it to law enforcement if it was requested, which is certainly a legitimate concern in the age of PRISM. However, this encryption still provides protection from thieves picking up your hard drive and digging through your personal or business files. If you’re worried about a government or a determined thief who’s capable of gaining access to your Microsoft account, you’ll want to encrypt your hard drive with software that doesn’t upload a copy of your recovery key to the Internet, such as TrueCrypt. How to Disable Device Encryption There should be no real reason to disable device encryption. If nothing else, it’s a useful feature that will hopefully protect sensitive data in the real world where people — and even businesses — don’t enable encryption on their own. As encryption is only enabled on devices with the appropriate hardware and will be enabled by default, Microsoft has hopefully ensured that users won’t see noticeable slow-downs in performance. Encryption adds some overhead, but the overhead can hopefully be handled by dedicated hardware. If you’d like to enable a different encryption solution or just disable encryption entirely, you can control this yourself. To do so, open the PC settings app — swipe in from the right edge of the screen or press Windows Key + C, click the Settings icon, and select Change PC settings. Navigate to PC and devices -> PC info. At the bottom of the PC info pane, you’ll see a Device Encryption section. Select Turn Off if you want to disable device encryption, or select Turn On if you want to enable it — users upgrading from Windows 8 will have to enable it manually in this way. Note that Device Encryption can’t be disabled on Windows RT devices, such as Microsoft’s Surface RT and Surface 2. If you don’t see the Device Encryption section in this window, you’re likely using an older device that doesn’t meet the requirements and thus doesn’t support Device Encryption. For example, our Windows 8.1 virtual machine doesn’t offer Device Encryption configuration options. This is the new normal for Windows PCs, tablets, and devices in general. Where files on typical PCs were once ripe for easy access by thieves, Windows PCs are now encrypted by default and recovery keys are sent to Microsoft’s servers for safe keeping. This last part may be a bit creepy, but it’s easy to imagine average users forgetting their passwords — they’d be very upset if they lost all their files because they had to reset their passwords. It’s also an improvement over Windows PCs being completely unprotected by default.     

    Read the article

  • Metro: Understanding Observables

    - by Stephen.Walther
    The goal of this blog entry is to describe how the Observer Pattern is implemented in the WinJS library. You learn how to create observable objects which trigger notifications automatically when their properties are changed. Observables enable you to keep your user interface and your application data in sync. For example, by taking advantage of observables, you can update your user interface automatically whenever the properties of a product change. Observables are the foundation of declarative binding in the WinJS library. The WinJS library is not the first JavaScript library to include support for observables. For example, both the KnockoutJS library and the Microsoft Ajax Library (now part of the Ajax Control Toolkit) support observables. Creating an Observable Imagine that I have created a product object like this: var product = { name: "Milk", description: "Something to drink", price: 12.33 }; Nothing very exciting about this product. It has three properties named name, description, and price. Now, imagine that I want to be notified automatically whenever any of these properties are changed. In that case, I can create an observable product from my product object like this: var observableProduct = WinJS.Binding.as(product); This line of code creates a new JavaScript object named observableProduct from the existing JavaScript object named product. This new object also has a name, description, and price property. However, unlike the properties of the original product object, the properties of the observable product object trigger notifications when the properties are changed. Each of the properties of the new observable product object has been changed into accessor properties which have both a getter and a setter. For example, the observable product price property looks something like this: price: { get: function () { return this.getProperty(“price”); } set: function (value) { this.setProperty(“price”, value); } } When you read the price property then the getProperty() method is called and when you set the price property then the setProperty() method is called. The getProperty() and setProperty() methods are methods of the observable product object. The observable product object supports the following methods and properties: · addProperty(name, value) – Adds a new property to an observable and notifies any listeners. · backingData – An object which represents the value of each property. · bind(name, action) – Enables you to execute a function when a property changes. · getProperty(name) – Returns the value of a property using the string name of the property. · notify(name, newValue, oldValue) – A private method which executes each function in the _listeners array. · removeProperty(name) – Removes a property and notifies any listeners. · setProperty(name, value) – Updates a property and notifies any listeners. · unbind(name, action) – Enables you to stop executing a function in response to a property change. · updateProperty(name, value) – Updates a property and notifies any listeners. So when you create an observable, you get a new object with the same properties as an existing object. However, when you modify the properties of an observable object, then you can notify any listeners of the observable that the value of a particular property has changed automatically. Imagine that you change the value of the price property like this: observableProduct.price = 2.99; In that case, the following sequence of events is triggered: 1. 
The price setter calls the setProperty(“price”, 2.99) method 2. The setProperty() method updates the value of the backingData.price property and calls the notify() method 3. The notify() method executes each function in the collection of listeners associated with the price property Creating Observable Listeners If you want to be notified when a property of an observable object is changed, then you need to register a listener. You register a listener by using the bind() method like this: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Change the price observableProduct.price = 2.99; } }; app.start(); })(); In the code above, the bind() method is used to associate the price property with a function. When the price property is changed, the function logs the new value of the price property to the Visual Studio JavaScript console. The price property is associated with the function using the following line of code: // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); Coalescing Notifications If you make multiple changes to a property – one change immediately following another – then separate notifications won’t be sent. Instead, any listeners are notified only once. The notifications are coalesced into a single notification. For example, in the following code, the product price property is updated three times. However, only one message is written to the JavaScript console. Only the last value assigned to the price property is written to the JavaScript Console window: // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Change the price observableProduct.price = 3.99; observableProduct.price = 2.99; observableProduct.price = 1.99; Only the last value assigned to price, the value 1.99, appears in the console: If there is a time delay between changes to a property then changes result in different notifications. For example, the following code updates the price property every second: // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Add 1 to price every second window.setInterval(function () { observableProduct.price += 1; }, 1000); In this case, separate notification messages are logged to the JavaScript Console window: If you need to prevent multiple notifications from being coalesced into one then you can take advantage of promises. 
I discussed WinJS promises in a previous blog entry: http://stephenwalther.com/blog/archive/2012/02/22/windows-web-applications-promises.aspx Because the updateProperty() method returns a promise, you can create different notifications for each change in a property by using the following code: // Change the price observableProduct.updateProperty("price", 3.99) .then(function () { observableProduct.updateProperty("price", 2.99) .then(function () { observableProduct.updateProperty("price", 1.99); }); }); In this case, even though the price is immediately changed from 3.99 to 2.99 to 1.99, separate notifications for each new value of the price property are sent. Bypassing Notifications Normally, if a property of an observable object has listeners and you change the property then the listeners are notified. However, there are certain situations in which you might want to bypass notification. In other words, you might need to change a property value silently without triggering any functions registered for notification. If you want to change a property without triggering notifications then you should change the property by using the backingData property. The following code illustrates how you can change the price property silently: // Simple product object var product = { name: "Milk", description: "Something to drink", price: 12.33 }; // Create observable product var observableProduct = WinJS.Binding.as(product); // Execute a function when price is changed observableProduct.bind("price", function (newValue) { console.log(newValue); }); // Change the price silently observableProduct.backingData.price = 5.99; console.log(observableProduct.price); // Writes 5.99 The price is changed to the value 5.99 by changing the value of backingData.price. Because the observableProduct.price property is not set directly, any listeners associated with the price property are not notified. When you change the value of a property by using the backingData property, the change in the property happens synchronously. However, when you change the value of an observable property directly, the change is always made asynchronously. Summary The goal of this blog entry was to describe observables. In particular, we discussed how to create observables from existing JavaScript objects and bind functions to observable properties. You also learned how notifications are coalesced (and ways to prevent this coalescing). Finally, we discussed how you can use the backingData property to update an observable property without triggering notifications. In the next blog entry, we’ll see how observables are used with declarative binding to display the values of properties in an HTML document.
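    For readers coming from the .NET side, the pattern described above is not specific to WinJS. The following C# sketch is an analogue, not the WinJS API: a property wrapper that notifies bound listeners with the new and old values, much like notify(name, newValue, oldValue) does, though it notifies synchronously and without the coalescing behaviour WinJS adds.

      using System;
      using System.Collections.Generic;

      // A C# analogue of a single observable property: listeners registered via Bind
      // are invoked with (newValue, oldValue) whenever Value is assigned.
      public class ObservableProperty<T>
      {
          private T currentValue;
          private readonly List<Action<T, T>> listeners = new List<Action<T, T>>();

          public ObservableProperty(T initialValue)
          {
              currentValue = initialValue;
          }

          // Comparable to observableProduct.bind("price", fn) in the WinJS examples.
          public void Bind(Action<T, T> listener)
          {
              listeners.Add(listener);
          }

          public T Value
          {
              get { return currentValue; }
              set
              {
                  T oldValue = currentValue;
                  currentValue = value;
                  foreach (var listener in listeners)
                  {
                      listener(currentValue, oldValue);
                  }
              }
          }
      }

      // Usage:
      //   var price = new ObservableProperty<decimal>(12.33m);
      //   price.Bind((newValue, oldValue) => Console.WriteLine(newValue));
      //   price.Value = 2.99m;   // listener writes 2.99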

    Read the article

  • Craftsmanship is ALL that Matters

    - by Wayne Molina
    Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours.  Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums.  Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same: A Programmer’s Tale A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in).  This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do".  He begins his new job, eager to apply what he's learned on a real-world project only to discover that his new teammates doesn't use any of those concepts and techniques.  They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the “elite” in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst case and truly nightmarish scenario.  Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers. Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) These "best practices" are fade or a joke, and the only thing that matters is shipping software to your customers. I am here today to say that anyone who says this, or anything like it, is not only full of crap but indicative of exactly the type of “developer” that has helped to give our industry a bad name.  Here is why: One Who Knows Nothing, Understands Nothing On one hand, you have the cognoscenti of the .NET development world.  Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery; all well-respected and noted programmers that are pretty much our version of celebrities.  These guys write blogs, books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable and extensible and a joy to work with.  They tout the virtues of the SOLID principles, or of using TDD/BDD, or using a mature ORM like NHibernate, Subsonic or even Entity Framework. On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss.  Joe's been with Initrode for 10 years, starting as the company’s very first programmer and over the years building up a little fiefdom of his own until at the present he’s in charge of all Initrode’s software development.  Joe writes code the same way he always has, without bothering to learn much, if anything.  
He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily homebrew solution has never occurred to him.  He doesn't understand TDD and considers “testing” to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works.  He doesn't really understand SOLID, and he doesn't care to.  He's worked as a programmer for years, and that's all that counts.  Right?  WRONG. Who would you rather trust?  Someone with years of experience and who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as the "proof" of their ability?  Joe Everyman may have years of experience at Initrode as a programmer, and says to do things "his way" but someone like Jeremy Miller or Ayende Rahien have years of experience at companies just like Initrode, THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way". Here's another way of thinking about it: If you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA or Barack Obama?  One is a small-time nobody while the other is very well-known and, as such, would probably have much more accurate and beneficial advice. NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field.  Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice. DIY Considered Harmful I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside it's own little box - no useful skill outside of the small pond.  If you are forced to use some half-baked, homebrew ORM created by your Director of Software, you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy.  However if, like most of us, you want to advance your career outside a very narrow space you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else because you are aware of alternative and, in almost most cases, better tools for the job.  A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/well-known ORM versus some in-house one knows better than a senior developer with 20 years experience who doesn't understand any of that, plain and simple.  Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve. 
In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a “junior” could know more than them; after all, they’ve spent 10 or more years in the same company, doing the same job, cranking out the same shoddy software.  And here comes a newbie who hasn’t spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as “proof” that he’s good, or that he might have to learn something new to improve; in most cases the problem is that Joe Everyman, and by extension Initrode itself, has a mentality of just being “good enough”, and mediocrity is the rule of the day. A Thorn Bush is No Place for a Phoenix My advice is that if you work on a team where they don't use the best practices that some of the most famous developers in our field say are the "right" way to do things (and have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company.  Find a company where they DO care about quality and craftsmanship; otherwise you will never be happy.  There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care for craftsmanship.  In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman because he won't listen; he might even get upset that someone is trying to "upstage" him, fire the newbie, and replace someone with loads of untapped potential with a drone that will just nod affirmatively and grind out the tasks assigned without question. Find a company that has people smart enough to listen to the "best and brightest", and be happy.  Do not, I repeat, DO NOT waste away in a job working for ignorant people.  At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED for any serious professional.  When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity.  Work for a company that uses REAL software engineering techniques and really cares about craftsmanship.  The biggest issue affecting our careers, and the reason software development has never been the respected, white-collar career it was meant to be, is that hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are.  These modern-day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid the shadow of doubt being cast upon them. To sum this up another way: If you surround yourself with learned people, you will learn.  Surround yourself with ignorant people who can't, as the saying goes, see the forest for the trees, and you'll learn nothing of any real value.  
There is more to software development than just writing code, and the end goal should not be just "shipping software"; it should be shipping software that is extensible, maintainable, and above all else software whose creation has broadened your knowledge in some capacity, even if a minor one.  An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line.  This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and fewer Joe Everymans who are unwilling to adapt or foster new ways of thinking. Conclusion - I Cast “Protection from Fire” I am fairly certain this post will spark some controversy and might even invite the flames.  Please keep in mind these are opinions and nothing more.  A little healthy rant and subsequent flamewar can be good for the soul once in a while.  To paraphrase The Godfather: It helps to get rid of the bad blood.

    Read the article

  • New OFM versions released SOA Suite 11.1.1.4 & BPM 11.1.1.4 & JDeveloper 11.1.1.4 WebLogic on JRockit 10.3.4 feedback from the community

    - by Jürgen Kress
    Oracle SOA Suite 11g Installations This is the latest release of the Oracle SOA Suite 11g. Please see the Documentation tab for Release Notes, Installation Guides and other release specific information. Please also see the List of New Features and Samples provided for this release. Release 11gR1 (11.1.1.4.0) Microsoft Windows (32-bit JVM) Linux (32-bit JVM) Generic Oracle JDeveloper 11g Rel 1 (11.1.1.x) (JDeveloper + ADF) Integrated development environment certified on Windows, Linux, and Macintosh. License is free (read the Pricing FAQ). Studio Edition for Windows (1.2 GB) | Studio Edition for Linux (1.3 GB) | See All See Additional Development Tools Oracle WebLogic Server 11g Rel 1 (10.3.4) Installers The WebLogic Server installers include Oracle Coherence and Oracle Enterprise Pack for Eclipse and support development with other Fusion Middleware products. The zip includes WebLogic Server only and is intended for WebLogic Server development only. Linux x86 (1.1 GB) | Windows x86 (1 GB) Zip for Windows x86, Linux x86, Mac OS X (316 MB) | See All Oracle WebLogic Server 11gR1 (10.3.4) on JRockit Virtual Edition Download For additional downloads please visit the Oracle Fusion Middleware Products Update Center Share your feedback with the @soacommunity on twitter SOASimone Simone Geib SOA Suite 11gR1 (11.1.1.4.0) has just been released: http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html gschmutz gschmutz My new blog post: WebLogic Server, JDev, SOA, BPM, OSB and CEP 11.1.1.4 (PS3) available! - http://tinyurl.com/4negnpn simon_haslam Simon Haslam I'm very pleased to see WLS 10.3.4 for JRockit VE launched at the same time as the rest of PS3 http://j.mp/gl1nQm (32bit anyway) lucasjellema Lucas Jellema See http://www.oracle.com/ocom/groups/public/@otn/documents/webcontent/156082.xml for PS3 extension downloads BPM, SOA Editor, WebCenter demed demed List of new features in @OracleSOA 11gR1 PS3: http://bit.ly/fVRwsP is not extremely long but huge release by # of bugs fixed. Go! biemond Edwin Biemond WebLogic 10.3.4 new features http://bit.ly/f7L1Eu Exalogic Elastic Cloud , JPA2 , Maven plugin, OWSM policies on WebLogic SCA applications JDeveloper JDeveloper & ADF JDeveloper and Oracle ADF 11g Release 1 Patch Set 3 (11.1.1.4.0): New Features and Bug Fixes http://bit.ly/feghnY simon_haslam Simon Haslam WebLogic Server 10.3.4 (i.e. 11gR1 PS3) available now too http://bit.ly/eeysZ2 JDeveloper JDeveloper & ADF Share your impressions on the new JDeveloper 11g Patchset 3 release that came out today! 
Download it here: http://bit.ly/dogRN8 VikasAatOracle Vikas Anand SOA Suite 11gR1PS3 is Hotpluggable ...see list of features that @Demed posted..#soa #soacommunity   New versions of Oracle Fusion Middleware 11g R1 (11.1.1.4.x)  include: Oracle WebLogic Server 11g R1 (10.3.4) Oracle SOA Suite 11g R1 (11.1.1.4.0) Oracle Business Process Management 11g R1 (11.1.1.4.0) Oracle Complex Event Processing 11g R1 (11.1.1.4.0) Oracle Application Integration Architecture Foundation Pack 11g R1 (11.1.1.4.0) Oracle Service Bus 11g R1 (11.1.1.4.0) Oracle Enterprise Repository 11g R1 (11.1.1.4.0) Oracle Identity Management 11g R1 (11.1.1.4.0) Oracle Enterprise Content Management 11g R1 (11.1.1.4.0) Oracle WebCenter 11g R1 (11.1.1.4.0) - coming soon Oracle Forms, Reports, Portal & Discoverer 11g R1 (11.1.1.4.0) Oracle Repository Creation Utility 11g R1 (11.1.1.4.0) Oracle JDeveloper & Application Development Runtime 11g R1 (11.1.1.4.0) Resources Download  (OTN) Certification Documentation   New Features in Oracle SOA Suite 11g Release 1 (11.1.1.4.0) Updated: January, 2011 Go to Oracle SOA Suite 11g Doc Introduction Oracle SOA Suite 11gR1 (11.1.1.4.0) includes both bug fixes as well as new features listed below - click on the title of each feature for more details. Downloads, documentation links and more information on the Oracle SOA Suite are available on the SOA Suite OTN page and, as always, we welcome your feedback on the SOA OTN forum. New in Oracle SOA Suite in this release BPEL Component BPEL 2.0 support in JDeveloper The BPEL editor in JDeveloper now generates BPEL 2.0 code and introduces several new activities. Augmented XML variables auto-initialization capabilities The XML variable auto-initialization capabilities have been enhanced to support two additional use cases: to initialize the to-spec node if it doesn't exist during the rule and to initialize array elements. New Assign Activity dialog The new Assign Activity supports the same drag & drop paradigm used for the XSLT mapper, greatly streamlining the task of assigning multiple variables. Mediator Component Time window parameter for the resequencer This new parameter lets users initiate a best-effort resequencing based on a time window rather than a number of messages. Support for attachments in the Mediator assign dialog The Mediator assign dialog now supports attachments, enabling usage of the Mediator to transmit attachments even if source and target schemas are different. Adapters & Bindings ChunkSize property added to the File Adapter header properties The ChunkSize property of the File Adapter is now available as a header property, allowing in-process modification of the value for this property. Improved support for distributed WLS JMS topics through automatic rebalancing of listeners The JMS Adapter has been enhanced to subscribe to administrative events from WLS JMS. Based on these events, it dynamically rebalances listeners when there are changes to the members of a local or remote WLS JMS distributed destination. JDeveloper configuration wizard for custom JCA adapters A new wizard is available in JDeveloper to configure custom-built adapters. Administration & Enterprise Manager Enhanced purging capabilities to manage database growth Historical instance data can now be purged using three different strategies: batch script, scheduled batch script or data partitioning. 
Asynchronous bulk instance deletion in Enterprise Manager Bulk deletion of instances in Enterprise Manager now executes as an asynchronous operation, returning control to the user as soon as the action has been submitted and acknowledged. B2B Ability to schedule partner downtime This feature allows trading partners to notify each other about planned downtime and to delay delivery of messages during that period. Message sequencing B2B now supports both inbound and outbound message sequencing. Simplified BAM integration with B2B B2B ships with various pre-configured artifacts to simplify monitoring in BAM. Instance Message Java API for B2B The new instance message Java API supports programmatic access to B2B instance message data. Oracle Service Bus (OSB) Certification of the File and FTP JCA Adapters The File and FTP JCA adapters are now certified for use with Oracle Service Bus (in addition to the native transports). Security enhancements Oracle Service Bus now supports SAML 2.0 as well as the OWSM authorization policies. Check the Oracle Service Bus 11.1.1.4 Release Notes for a complete list of new features. Installation, Hot-Pluggability & Certifications Ability to run Oracle SOA Suite on IBM WebSphere Application Server Oracle SOA Suite can now be deployed on IBM WebSphere Application Server Network Deployment (ND) 7.0.11 and IBM WebSphere Application Server 7.0.11. Single JVM developer installation template Oracle SOA Suite can now be targeted to the WebLogic admin server - there is no requirement to also have a managed server. This topology is intended to minimize the memory footprint of development environments. This is in addition to the list of supported browsers, operating systems and databases already certified in prior releases. Complex Event Processing (CEP) IDE enhancements This release introduces several enhancements to the development IDE, such as adapter wizards and an event-type repository. CQL enhancements CQL enhancements include JDBC data cartridges and parametrized queries. Tracing and injecting events in the Event Processing Network (EPN) In the development environment you can now trace and inject events. Check the Oracle CEP 11.1.1.4 Release Notes for a complete list of new features. SOA Suite page on OTN For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Suite 11.1.1.4,JDeveloper 11.1.1.4,WebLogic 10.3.4,JRockit 10.3.4,SOA Community,Oracle,OPN,SOA,Simone Geib,Guido Schmutz,Edwin Biemond,Lucas Jellema,Simon Haslam,Demed,Vikas Anand,Jürgen Kress

    Read the article

  • Week in Geek: IPv6 Capable Smartphones Compromise User Privacy Edition

    - by Asian Angel
    This week we learned how to “clone a disk, resize static windows, and create system function shortcuts”, use 45 different services, sites, and apps to help read favorite sites, add MP3 support to Audacity (for saving in MP3 format), install a Wii game loader for easy backups and fast load times, create a Blue Screen of Death in any color, and more. Photo by legofenris. Weekly News Links Photo by The H Security. IPv6: Smartphones compromise users’ privacy Since version 4 of the iOS operating system, Apple’s iPhones, iPads and iPods have been capable of handling IPv6, and most Android devices have been capable since version 2.1. However, the operating systems transfer an ID that discloses information about their users. Dumb phones can be attacked too Much of the discussion of security threats to mobile phones revolves around smartphones, but researchers have found that less advanced “feature phones,” still used by the majority of people around the world, also are vulnerable to attack. SCADA exploit – the dragon awakes The recent publication of an exploit for KingView, a software package for visualising industrial process control systems, appears to be having an effect. Threatpost reports that both the Chinese vendor Wellintech and Chinese CERT (CN-CERT) have now reacted. Sophos: Spam to get more malicious Spam is becoming more malicious in nature as trickery tactics change in line with current user interests, according to a new report released Tuesday by Sophos. Global spam traffic rebounds as Rustock wakes Spam is on the rise after the Rustock botnet awoke from its Christmas slumber, according to Symantec. Cracking WPA keys in the cloud At the forthcoming Black Hat conference, blogger Thomas Roth plans to demonstrate how weak WPA PSKs can be cracked quickly and easily using Amazon’s Elastic Compute Cloud (EC2) service. Microsoft Security Advisory: Vulnerability in Internet Explorer could allow remote code execution Provides a link to more details about the vulnerability and shows a work-around/fix for the problem. Adobe plans to make it easier to delete Flash cookies in web browsers The new API, NPAPI:ClearSiteData, will allow Flash cookies – also known as Local Shared Objects (LSO) – to be deleted directly in the browser’s settings. Firefox beta getting new database standard The ninth beta version of Firefox is set to get support for a standard called IndexedDB that provides a database interface useful for offline data storage and other tasks needing information on a browser’s computer. MetroPCS accused of blocking certain Net content MetroPCS is violating the FCC’s recently approved Net neutrality rules by blocking certain Internet content, say several public interest groups. Server and Tools chief Muglia to leave Microsoft in summer 2011 Microsoft veteran and Server & Tools Business (STB) President Bob Muglia is leaving Microsoft, according to an email that CEO Steve Ballmer sent to employees on January 10. Report: DOJ nearing decision on Google-ITA The U.S. Department of Justice is gearing up for a possible formal antitrust investigation into whether or not Google should be allowed to purchase travel software company ITA Software, according to a report. South Korea says Google Street View broke law Police in South Korea reportedly say Google broke the country’s law when its Street View service captured personal data from unsecure Wi-Fi networks. 
The backlash over Google’s HTML5 video bet Choosing strategies based on what you believe to be long-term benefits is generally a good idea when running a business, but if you manage to alienate the world in the process, the long term may become irrelevant. Google answers critics on HTML5 Web video move Google responded to critics of its decision to drop support for a popular HTML5 video codec by declaring that a royalty-supported standard for Web video will hold the Web hostage. Random TinyHacker Links A Special GiveAway: a Great Book & Great Security Software The team from 7 Tutorials has a special giveaway running during the month of January. Signed copies of their latest book, full 1-year licenses of BitDefender Internet Security 2011 and free 3-month trials for everyone willing to participate. One Click Rooting For Android Phones Here’s a nice tool that helps you root your Android phone effortlessly. New Angry Birds Free version 1.0 Available in the App Store. Google Code University Learn programming at Google Code University. Capture and Share Your Favorite Part Of a YouTube Video SnipSnip.it lets you share only the part of the video that you like. Super User Questions More great questions and answers from this past week’s popular topics at Super User. What are the Windows A: and B: drives used for? Does OS X support linux-like features? What is the easiest way to make a backup of an entire hard disk? Will shifting from Wireless to Wired network result in better performance? Is it legal to install Windows 7 Home Premium Retail inside VMware virtual machine? How-To Geek Weekly Article Recap Enjoy reading through our hottest articles from this past week. The 50 Best Ways to Disable Built-in Windows Features You Don’t Want The Best of CES (Consumer Electronics Show) in 2011 How to Upgrade Windows 7 Easily (And Understand Whether You Should) The Worst of CES (Consumer Electronics Show) in 2011 The How-To Geek Guide to Audio Editing: Basic Noise Removal One Year Ago on How-To Geek More great articles from one year ago filled with helpful geeky goodness for you to enjoy. Share Text & Images the Easy Way with JustPaste.it Start Portable Firefox in Safe Mode Firefox 3.6 Release Candidate Available, Here’s How to Fix Your Incompatible Extensions Protect Your Computer from “Little Hands” with KidSafe Lock Prying Eyes Out of Your Minimized Windows Custom Crocheted Cylon-Cthulhu Hybrid What happens when you let your Cylon Centurion figure and your crocheted Cthulhu spend too many lonely nights together? A Cylon-Cthulhu hybrid, of course! You can get your own from the Cthulhu Chick store over on Etsy. Note: This is not an ad…Ruth is a friend of ours, and this Cylon-Cthulhu hybrid makes the perfect guard for the new MVP trophy in our office. The Geek Note Whether it is a geeky indoor project or just getting outside, we hope that you and your families have a terrific fun-filled weekend! Remember to keep sending those great tips in to us at [email protected]. Photo by qwrrty. 
Latest Features How-To Geek ETC How to Upgrade Windows 7 Easily (And Understand Whether You Should) The How-To Geek Guide to Audio Editing: Basic Noise Removal Install a Wii Game Loader for Easy Backups and Fast Load Times The Best of CES (Consumer Electronics Show) in 2011 The Worst of CES (Consumer Electronics Show) in 2011 HTG Projects: How to Create Your Own Custom Papercraft Toy Firefox 4.0 Beta 9 Available for Download – Get Your Copy Now The Frustrations of a Computer Literate Watching a Newbie Use a Computer [Humorous Video] Season0nPass Jailbreaks Current Gen Apple TVs IBM’s Jeopardy Playing Computer Watson Shows The Pros How It’s Done [Video] Tranquil Juice Drop Abstract Wallpaper Pulse Is a Sleek Newsreader for iOS and Android Devices

    Read the article

  • HPET for x86 BSP (how to build it for WCE8)

    - by Werner Willemsens
    Originally posted on: http://geekswithblogs.net/WernerWillemsens/archive/2014/08/02/157895.aspx
"I needed a timer". That is how, a few blogs ago, we started our series about APIC and ACPI. Well, here it is. HPET (High Precision Event Timer) was introduced by Intel in early 2000 to replace the old-style Intel 8253 (1981!) and 8254 timers and to support more accurate timers that could be used for multimedia purposes. Hence Microsoft and Intel sometimes refer to HPET as Multimedia timers. An HPET chip consists of a 64-bit up-counter (main counter) counting at a frequency of at least 10 MHz, and a set of (at least three, up to 256) comparators. These comparators are 32 or 64 bits wide. The HPET is discoverable via ACPI. The HPET circuit in recent Intel platforms is integrated into the SouthBridge chip (e.g. 82801). All HPET timers should support one-shot interrupt programming, while optionally they can support periodic interrupts. In most Intel SouthBridges I have worked with, there are three HPET timers. TIMER0 supports both one-shot and periodic mode, while TIMER1 and TIMER2 are one-shot only. Each HPET timer can generate interrupts, both in old-style PIC mode and in APIC mode. However, in PIC mode interrupts cannot freely be chosen: typically IRQ11 is available and cannot be shared with any other interrupt, which makes the HPET in PIC mode virtually unusable. In APIC mode, however, more IRQs are available and can be shared with other interrupt generating devices. (Check the datasheet of your SouthBridge.) Because of this higher level of freedom, I created the APIC BSP (see previous posts). The HPET driver code that I present here uses this APIC mode.
Hpet.reg
        [HKEY_LOCAL_MACHINE\Drivers\BuiltIn\Hpet]
           "Dll"="Hpet.dll"
           "Prefix"="HPT"
           "Order"=dword:10
           "IsrDll"="giisr.dll"
           "IsrHandler"="ISRHandler"
           "Priority256"=dword:50
Because HPET does not reside on the PCI bus, but can be found through ACPI as a memory mapped device, you don't need to specify the "Class", "SubClass", "ProgIF" and other PCI related registry keys that you typically find for PCI devices. If a driver needs to run its internal thread(s) at a certain priority level, by convention in Windows CE you add the "Priority256" registry key. Through this key you can easily play with the driver's thread priority for better response and timer accuracy. See later.
Hpet.cpp (Hpet.dll)
This cpp file contains the complete HPET driver code. The file is part of a folder that you typically integrate in your BSP (\src\drivers\Hpet). It is written as sample (example) code; you most likely want to change this code to your specific needs. There are two sets of #define's that I use to control how the driver works. _TRIGGER_EVENT or _TRIGGER_SEMAPHORE: _TRIGGER_EVENT will let your driver trigger a Windows CE Event when the timer expires, while _TRIGGER_SEMAPHORE will trigger a Windows CE counting Semaphore. The latter guarantees that no events get lost in case your application cannot always process the triggers fast enough. _TIMER0 or _TIMER2: both timers will trigger an event or semaphore periodically. _TIMER0 will use a periodic HPET timer interrupt, while _TIMER2 will reprogram a one-shot HPET timer after each interrupt. The one-shot approach is interesting if the frequency you wish to generate is not an even multiple of the HPET main counter frequency. The sample code uses an algorithm to generate a more correct frequency over a longer period (by reducing rounding errors). _TIMER1 is not used in the sample source code. 
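To make the timer arithmetic above concrete, here is a minimal sketch (not part of the BSP package) of how a period requested in microseconds could be converted into HPET main-counter ticks. It assumes, per the HPET specification, that the upper half of the 64-bit General Capabilities and ID register holds the main-counter tick period in femtoseconds; the READDWORD macro and the GenCapIDReg name simply mirror the driver source discussed below, and the offsets should be verified against your chipset's datasheet.

        // Minimal sketch: convert a requested period in microseconds into HPET
        // main-counter ticks. Assumes the DWORD at GenCapIDReg + 4 holds the tick
        // period in femtoseconds (10E-15 s), as described in the HPET specification.
        #include <windows.h>

        #define GenCapIDReg 0x00     // assumed register offset, per the HPET spec
        #define READDWORD(base, off) (*(volatile DWORD*)((PBYTE)(base) + (off)))

        static ULONGLONG MicrosecondsToHpetTicks(PVOID hpetVa, DWORD microseconds)
        {
            // Tick period in femtoseconds, e.g. roughly 69841279 fs for a 14.318 MHz counter
            DWORD fsPerTick = READDWORD(hpetVa, GenCapIDReg + 4);

            // 1 microsecond = 1,000,000,000 femtoseconds; round to the nearest tick
            ULONGLONG fsRequested = (ULONGLONG)microseconds * 1000000000ULL;
            return (fsRequested + fsPerTick / 2) / fsPerTick;
        }

A value computed this way is what ends up in a timer's comparator register (added to the current main counter value in one-shot mode), which is roughly what the driver below does when you configure the timeout in microseconds.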
HPT_Init() will locate the HPET I/O memory space, set up the HPET counter (_TIMER0 or _TIMER2) and install the Interrupt Service Thread (IST). Upon timer expiration, the IST will run and in turn generate a Windows CE Event or Semaphore. In case of _TIMER2 a new one-shot comparator value is calculated and set for the timer. The IRQ of the HPET timers is programmed to IRQ22, but you can typically choose from 20-23. The TIMERn_INT_ROUT_CAP bits in the TIMn_CONF register will tell you what IRQs you can choose from. HPT_IOControl() can be used to set a new HPET counter frequency (actually you configure the counter timeout value in microseconds), start and stop the timer, and request the current HPET counter value. The latter is interesting because the Windows CE QueryPerformanceCounter() and QueryPerformanceFrequency() APIs implement the same functionality, albeit based on other counter implementations. HpetDrvIst() contains the IST code.

DWORD WINAPI HpetDrvIst(LPVOID lpArg)
{
    psHpetDeviceContext pHwContext = (psHpetDeviceContext)lpArg;
    DWORD mainCount = READDWORD(pHwContext->g_hpet_va, GenCapIDReg + 4);  // Main Counter Tick period (femto sec 10E-15)
    DWORD i = 0;
    while (1)
    {
        WaitForSingleObject(pHwContext->g_isrEvent, INFINITE);
#if defined(_TRIGGER_SEMAPHORE)
        LONG p = 0;
        BOOL b = ReleaseSemaphore(pHwContext->g_triggerEvent, 1, &p);
#elif defined(_TRIGGER_EVENT)
        BOOL b = SetEvent(pHwContext->g_triggerEvent);
#else
#pragma error("Unknown TRIGGER")
#endif
#if defined(_TIMER0)
        DWORD currentCount = READDWORD(pHwContext->g_hpet_va, MainCounterReg);
        DWORD comparator = READDWORD(pHwContext->g_hpet_va, Tim0_ComparatorReg + 0);
        SETBIT(pHwContext->g_hpet_va, GenIntStaReg, 0);  // clear interrupt on HPET level
        InterruptDone(pHwContext->g_sysIntr);            // clear interrupt on OS level
        _LOGMSG(ZONE_INTERRUPT, (L"%s: HpetDrvIst 0 %06d %08X %08X", pHwContext->g_id, i++, currentCount, comparator));
#elif defined(_TIMER2)
        DWORD currentCount = READDWORD(pHwContext->g_hpet_va, MainCounterReg);
        DWORD previousComparator = READDWORD(pHwContext->g_hpet_va, Tim2_ComparatorReg + 0);
        pHwContext->g_counter2.QuadPart += pHwContext->g_comparator.QuadPart;  // increment virtual counter (higher accuracy)
        DWORD comparator = (DWORD)(pHwContext->g_counter2.QuadPart >> 8);      // "round" to real value
        WRITEDWORD(pHwContext->g_hpet_va, Tim2_ComparatorReg + 0, comparator);
        SETBIT(pHwContext->g_hpet_va, GenIntStaReg, 2);  // clear interrupt on HPET level
        InterruptDone(pHwContext->g_sysIntr);            // clear interrupt on OS level
        _LOGMSG(ZONE_INTERRUPT, (L"%s: HpetDrvIst 2 %06d %08X %08X (%08X)", pHwContext->g_id, i++, currentCount, comparator, comparator - previousComparator));
#else
#pragma error("Unknown TIMER")
#endif
    }
    return 1;
}

The following figure shows how the HPET hardware interrupt via ISR -> IST is translated into a Windows CE Event or Semaphore by the HPET driver. The Event or Semaphore can be used to trigger a Windows CE application.
HpetTest.cpp (HpetTest.exe)
This cpp file contains sample source showing how to use the HPET driver from an application. The file is part of a separate (smart device) VS2013 solution. It contains code to measure the generated Event/Semaphore times by means of the GetSystemTime(), QueryPerformanceCounter() and QueryPerformanceFrequency() APIs (a minimal sketch of such a measurement loop appears at the end of this post).
HPET evaluation
If you scan the internet for information about HPET, you'll find many remarks about buggy HPET implementations and bad performance. Unfortunately that is true. I tested the HPET driver on an Intel ICH7M SBC (release date 2008). 
When an HPET timer expires on the ICH7M, an interrupt is indeed generated, but right after you clear the interrupt, a few more unwanted interrupts (too soon!) occur as well. I tested and debugged it for a loooong time, but I couldn't get it to work. I concluded that the ICH7M's HPET is buggy Intel hardware. I tested the HPET driver successfully on a more recent NM10 SBC (release date 2013). With the NM10 chipset, however, I am not fully convinced about the timer's frequency accuracy. In the long run - on average - all is fine, but occasionally I experienced delays of up to 20 microseconds (which were immediately compensated for on the next interrupt). Of course, this was all measured by software, but I still experienced the occasional delay when both the HPET driver IST thread and the application thread ran at CeSetThreadPriority(1). If it is not the hardware, only the kernel can cause this delay. But Windows CE is an RTOS, and I have never experienced such long delays with previous versions of Windows CE. I tested and developed this on WCE8; I am not heavily experienced with it yet. Internet forum threads, however, mention inaccurate HPET timer implementations as well. At this moment I haven't figured out what is going on here.
Useful references:
    http://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/software-developers-hpet-spec-1-0a.pdf
    http://en.wikipedia.org/wiki/High_Precision_Event_Timer
    http://wiki.osdev.org/HPET
    Windows CE BSP source file package for HPET in MyBsp
Note that this source code is "As Is". It is still under development and I cannot (and never will) guarantee the correctness of the code. Use it as a guide for your own HPET integration.
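To illustrate the kind of measurement HpetTest.exe performs, here is a minimal sketch of an application-side loop that waits for the driver's trigger and prints the interval between triggers in microseconds. How the test application obtains the trigger handle is not shown in this excerpt, so a named auto-reset event ("HpetTriggerEvent") is assumed here purely for illustration; only APIs already mentioned in this post (WaitForSingleObject, QueryPerformanceCounter, QueryPerformanceFrequency) are used for the timing itself.

        // Sketch of a measurement loop in the spirit of HpetTest.exe: wait for the
        // driver's trigger and print the interval between triggers in microseconds.
        // The event name below is an assumption made for this sketch only.
        #include <windows.h>
        #include <stdio.h>

        void MeasureTriggerJitter(int iterations)
        {
            HANDLE hTrigger = CreateEvent(NULL, FALSE, FALSE, L"HpetTriggerEvent");  // assumed name
            LARGE_INTEGER freq, prev, now;
            QueryPerformanceFrequency(&freq);
            QueryPerformanceCounter(&prev);

            for (int i = 0; i < iterations; i++)
            {
                WaitForSingleObject(hTrigger, INFINITE);   // released by the HPET IST
                QueryPerformanceCounter(&now);
                double us = (double)(now.QuadPart - prev.QuadPart) * 1000000.0 / (double)freq.QuadPart;
                prev = now;
                wprintf(L"interval %d: %.1f us\n", i, us);
            }

            CloseHandle(hTrigger);
        }

A semaphore-based variant would use the same loop against the counting semaphore handle, which is what the _TRIGGER_SEMAPHORE build of the driver is intended for: if the application occasionally falls behind, the semaphore count preserves the missed triggers instead of dropping them.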

    Read the article

  • CodePlex Daily Summary for Monday, May 10, 2010

    CodePlex Daily Summary for Monday, May 10, 2010New ProjectsAzure Publish-Subscribe: Infrastructure implementing the publish-subscribe pattern in a Windows Azure context. The unit of publishing is an XML document with an optional l...Bakalarska prace SCSF: This work dealing with the technology of Microsoft Software Factories which makes possible an efficient development of applications under MS Windows.Begtostudy-Test: NoteExpress User Tool (NEUT) is a opensource project for NoteExpress user developpers to share their tools and ideas using secondary development. N...CodeReview: Code Review is an open source development tool based on the same approach than FxCop - check the compiled assemblies to enforce good practices rule...CommonFilter: CommonFilter is a subset of the CommonData project, containing just the functions and unit tests for filtering user input.Custom SharepointDesigner Actions: These are a couple of custom actions that i use in sharepoint designer to ease workflow creation.Customer Portal Accelerator for Microsoft Dynamics CRM: The Customer Portal accelerator for Microsoft Dynamics CRM provides businesses the ability to deliver portal capabilities to their customers while ...Danzica Asset Management System: Danzica is an asset management system written in C# that can ingest all of your hardware management systems into a single cohesive portal, giving y...Dot2Silverlight: Dot2Silverlight is a project thats enables to render graphs (written in Dot format) in Silverlight. dot2silverlight, dot, silverlight, C#, graphviz...eGrid_Windows7: This project will include a new platform independent version of the eGrid project. The new version will run on windows 7, using WPF 4 and the Surfa...Game project JAMK: game project on jamkHamcast for multi station coordination: Amateur Radio multiple station operation tends to have loggers and operators striving to get particular information from each other, like what IP a...Headspring Labs: Headspring Labs showcases presentations, samples, and starter kits developed by Headspring.Hongrui Software Development Management Platform: Hongrui Software Development Management Platform 鸿瑞软件开发管理平台(HrSDMP) LinkField: 带有链接的多列 Field migre.me plugin for Seesmic Desktop Platform: migre.me is a brazilian service created to reduces the size of URLs and provides tracking data for shortened links. migre.me plugin for Seesmic ...MISAO: MISAO is a presentation tool.Mongodb Management Studio: Mongodb Management Studio makes it easier for mongodb users (including DBA/Developers/Administrators) to use mongodb. It's developed in ASP.NET 4.0...MS Build for DotNetNuke module development: Automate the task of creating DotNetNuke module PA packs easily using MS Build. Create manifest files, include version #'s automatically and more.Rubyish: C# extensions providing a rubyish syntax to C#SharePoint 2007 Web Parts: The goal of this project is to develop a set of web parts for SharePoint 2007.SharePoint 2010 Web Parts: The goal of this project is to develop a set of web parts for SharePoint 2010.Taxomatic: Taxomatic adds the ability to bulk create Content Types and Columns to a SharePoint site collection. 
It also caters for the export of the Content T...trackuda: trackuda - track the motion!Web Camera Shooter: Small tool for taking screenshots from web camera and saving them to disk with few image filters as options.Windows API Code Pack Contrib: Extensions to Windows API Code PackNew ReleasesAlan Platform: Technical Preview 1: В центре данного релиза интерфейс, точнее не сам интерфейс, а принцип, по которому он построен. Используя парочку предоставленных свойств, можно со...Begtostudy-Test: Test: Don't Download.BFBC2 PRoCon: PRoCon 0.3.5.1: Release Notes ComingCBM-Command: 2010-05-09: Release Notes - 2010-05-09New Features FILE COPYING Changes Removed the Swap Panels functionality to make room for file copying. It's still in th...CBM-Command: 2010-05-10: Release Notes - 2010-05-10New Features Launching PRG Files New color schemes to better match the C128 and C64 platforms Function Keys Changes ...CommonFilter: CommonFilter0.3D: This initial release of CommonFilter 0.3D is a subset of the CommonData solution that contains just the filter functions.Custom SharepointDesigner Actions: Custom SPD Actions v1.0: This is the first version of the actions library, it includes: Calling a Webservice. Convert a string to a double. Convert a string to an integ...Customer Portal Accelerator for Microsoft Dynamics CRM: Customer Portal Accelerator for Dynamics CRM: The Customer Portal accelerator for Microsoft Dynamics CRM provides businesses the ability to deliver portal capabilities to their customers while ...EPiMVC - EPiServer CMS with ASP.NET MVC: EPiMVC CTP1: First release that mainly addresses routing. The release is described in greater detail here.Helium Frog Animator: Helium Frog 2_06SW3: This is a first release (not for end users as it is not packaged) of mods intended to make the program run easier on netbooks and customised for us...HouseFly controls: HouseFly controls alpha 0.9.8.0: HouseFly controls release 0.9.8.0 alphaID3Tag.Net: ID3TagLib.Net 1.1: The ID3Tag team is proud to release a new version of the ID3tag lib! New features: Removing of ID3V1 and ID3V2 tags Logging operations ( if enab...IT Tycoon: IT Tycoon 0.2.1: Switched to .NET Framework 4 and implemented some Parallel.ForEach calls.MeaMod Playme: MeaMod Playme 0.9.6.5 Bucking Bunny: MeaMod Playme 0.9.6.5 Bucking Bunny Version 0.9.6.5 | Change Set: 48050 -- Added Playme Store -- Added Buy Album -- Added OCDS/Core system -- Adde...migre.me plugin for Seesmic Desktop Platform: migre.me plugin 0.7.0.1: Initial release. Compatible with Seesmic Desktop Platform 0.7.0.772.MISAO: Ver. 5 Alpha(2010-05-10): Alpha versionMultiwfn: multiwfn1.3.2_source: multiwfn1.3.2_sourcePocket Wiki: Desktop Wiki (in dev): Full screen wiki for the PC - supports the same parsers that Pocket Wiki does. Currently in development but usable. Left side shows listbox of all ...Rubyish: alpha: intial buildSevenZipLib Library: v9.13 beta: New release to match 7-zip 9.13 betaShake - C# Make: Shake v0.1.11: Initial version of Shake's services (API), command line parameters (dynamic) now available via ShakeServices class. Introducing interfaces and bas...SharpNotes: SharpNotes (New): This is the release of SharpNotes.SharpNotes: SharpNotes Source (New): This is the source release of SharpNotes.StackOverflow Desktop Client in C# and WPF: StackOverflow Client 1.0: Improved UI Showing votes/answers/views on popups. 
Bug fixesTaxomatic: Design_0: Design documentThe Movie DB API: TMDB API v1.2: Updated to reflect changes in The Movie DB API.Web Camera Shooter: 1.0.0.0: Initial release. Unstable. Often exception AccessViolation from Touchless SDK.WF Personalplaner: Personalplaner v1.7.29.10127: - Drag und Drop wurde beim Plan und in der Maske unter Plan\Plan-Layout anpassen in die Grids eingefügt - Weitere kleine bugfixesWindows Phone 7 Panorama & Pivot controls: panorama + pivot controls v0.7 (samples included): Panorama and Pivot Controls source code + sample projects. - Phone.Controls.Samples : source code for the PanoramaControl and PivotControl. - Pic...XmlCodeEditor: Release 0.9 Alpha: Release 0.9 AlphaMost Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETPHPExcelMost Active Projectspatterns & practices – Enterprise LibraryRawrThe Information Literacy Education Learning Environment (ILE)Mirror Testing SystemCaliburn: An Application Framework for WPF and SilverlightjQuery Library for SharePoint Web Serviceswhitepatterns & practices - UnityTweetSharpBlogEngine.NET

    Read the article

  • CodePlex Daily Summary for Sunday, May 16, 2010

    CodePlex Daily Summary for Sunday, May 16, 2010New Projects3D Calculator: 3D Calc is a simple calculator application for Windows Phone 7, the purpose of this project is to demo the 3D animations capabilities of WP7 and sh...azaleas: AzaleasBlueset Studio Opensource Projects: Only for Opensource projects form Blueset Studio.Breck: A Phoenix and Jumper Moneky Production: Breck is a first person non-violent shooter developed in C++ and Dark GDK. After the main game is developed we are looking into making a sequel or...Discuz! Forum SDK: This project is use to login in and post or reply topic on discuz forum.Dominion.NET: Evolving Dominion source code originally written in VB6 and posted by "jatill" on Collectible Card Game Headquarters. Migration of the design and s...EkspSys2010-ITR: A mini project for the course Experimental System devolopment in spring 2010Facebook Graph Toolkit: This project is a .Net implementation of the Facebook Graph API. The aim of this project is to be a replacement to the existing Facebook Toolkit (h...iFree: This is a solution for Vietnamese network socialInfoPath Editor for Developer: InfoPath Editor for developer allows user to modify the html text directly inside InfoPath designer or filler and push the change back to InfoPath ...iZeit: Run your own online calendar, with blog integration, recurrence, todo list and categories.machgos dotNet Tests: Just some little test-projects for learningmim: TBAMinePost: MinePost is a game made for the first 48 hour Reddit Game Jam.Mockina: Mockina is a mock framework. Expression tree syntax is used to specify which members to mock, both public and non-public. The code is easy to under...MSBuild Launch Pad (mPad): This is just another shell extension for MSBuild to enable quick execution of MSBuild scripts via Windows Explorer context menu. (C) 2010 Lex LiPeacock: A browser like tabbed applicationPrimeCalculation: PrimeCalculation is a .NET app to calculate primes in a given range. Speed on Core2Duo 2,4GHZ: Found all primes from 0 to 1 billion in 35 seconds (...Slightly Silverlight: A Framework that leverages Silverlight for processing, business logic but standard HTML for the presentation layer.Stopwatch: Stopwatch is a tool for measuring the time. To start and pause stopwatch you only need to press a key on the keyboard. An additional context menu a...YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib is an XML Serialization library which helps you structure freely the XML result, choose among private and public fields to be serialized, an...New ReleasesActivate Your Glutes: v1.0.3.0: This release is a migration to VS2010, .Net 4, MVC2 and Entity Framework 4. The code has also been considerably cleaned up - taking advantage of E...AnyCAD: AnyCAD.Free.ENU.v1.1: http://www.anycad.net Modeling •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extrude, Loft, Chamfer, Sweep, Revol •Boolean: ...Blueset Studio Opensource Projects: 多功能计算器 3.5: 稳定版本。Code for Rapid C# Windows Development eBook: LLBLGen LINQPad Data Context Driver Ver 1.0.0.0: First release of a Static LLBLGen Pro Data Context Driver for LINQPad I recommend LINQPad 4 as it seems more stable with this driver than LINQPad 2.DSQLT - Dynamic SQL Templates: Release 1.2. Some behaviour has changed!!: Attention. Some behaviour has changed! Now its necessary to use WildCards in the pattern-parameter for DSQLT.AllSourceContains DSQLT.Databases DSQ...FDS AutoCAD plug-in: FDS to AutoCAD plug-in: Basic functionality was implemented. 
Some routines like setting fds executable location are still not automated.Feature Builder Guidance Extensions: FBGX 2 - Standalone FX: Background: The Feature Builder Guidance is extensible and displays guidance content supplied by all the Feature Builder Guidance Extensions (FBGX...Floe IRC Client: Floe IRC Client 2010-05 R3: - You can now right click on the input box to get options for toggling bold, underline, colors, etc. - The size of the nickname column is now saved...Floe IRC Client: Floe IRC Client 2010-05 R4: - A user's channel status now appears next to their nick when they talk (e.g. @Nick or +Nick) - Fixed an error where certain kinds of network probl...HD-Trailers.NET Downloader: HD-Trailers.NET Downloader v1.0: Version 1.0 Thanks to Wolfgang for all his help. I let this project languish for too long while focusing on other things, but his involvement has ...InfoPath Editor for Developer: InfoPath Editor Beta 1: Intial Release: Can load InfoPath inner html. Can edit InfoPath inner html. InfoPath 2007 only.LinkSharp: LinkSharp 0.1.0: First release of LinkSharp. Set up iis, and use the sql script to create a new database.PowerAuras: PowerAuras V3.0.0F: This version adds better integration with GTFO New Flags Added PvP flag In 5-Man Instance In Raid Instance In Battleground In ArenaRx Contrib: V1.4: Add the ability to catch internal exception and the ability to publish error by queue adaptersSEO SiteMap: SEO SiteMap RC1: -SevenZipLib Library: v9.13.2: Stable release associated with 7z.dll 9.13 beta. Ability to create and update archives not implemente yet.Silverlight / WPF Controls: Upload, FlipPanel, DeepZoom, Animation, Encryption: Code Camp Demonstration: This code example demonstrates MVVM/MEF with WPF with attached properties,security and custom ICommand class.SQL Data Capture - Black Box Application Testing: SQLDataCapture V1.2: Added Entity Framework Support to CRUD generator (Insert Stored Procedure) and switched to VS 2010 for development.Stopwatch: Stopwatch 0.1: Stopwatch Release 0.1VCC: Latest build, v2.1.30515.0: Automatic drop of latest buildYet another developer blog - Examples: Asynchronous TreeView in ASP.NET MVC: This sample application shows how to use jQuery TreeView plugin for creating an asynchronous TreeView in ASP.NET MVC. This application is accompani...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryRawrPHPExcelBlogEngine.NETMicrosoft Biology FoundationCustomer Portal Accelerator for Microsoft Dynamics CRMWindows Azure Command-line Tools for PHP DevelopersMirror Testing SystemN2 CMSStyleCop

    Read the article
