Search Results

Search found 7085 results on 284 pages for 'termine updates'.


  • Emit Knowledge - social network for knowledge sharing

    - by hajan
    Emit Knowledge, as its name suggests, is a social network for emitting and sharing knowledge, from users to users. Those who can benefit the most from this network are perhaps all of YOU who have something to share with others and contribute to the knowledge world. I've been closely communicating with the core team of this very, very interesting, brand new social network (with a specific purpose!) about the concept, idea and vision they have for their product, and I can say with a lot of confidence that this network has real potential to become something from which we will all benefit. I won't speak much about that and would prefer to give you the link so you can try it yourself - http://www.emitknowledge.com Over the past few months I've been testing this network, and it is getting improved all the time. The user experience is great: you can easily find what you need, and it follows patterns that are common to all social networks. They have some really good ideas and plans that are already under development for the next updates of their product. You can do micro-blogging or regular blogging… it's up to you, and the way it works is seamless. Here is a short Question and Answers (QA) interview I made with the lead of the team, Marijan Nikolovski: 1. Can you please explain briefly what Emit Knowledge is? Emit Knowledge is a brand new knowledge-based social network, delivering quality content from users to users. We believe that people's knowledge, experience and professional thoughts compose quality content, worth sharing among millions around the world. Therefore, we created a platform that matches people's need to share and gain knowledge in the most suitable and comfortable way. Easy to work with, Emit Knowledge lets you smoothly craft and emit knowledge around the globe. 2. How 'old' is Emit Knowledge? In hamster years we are almost a five-year-old start-up :). Just kidding. We released our public beta about three months ago. Our official release date is 27 June 2012. 3. How did you come up with this idea? Everything started from a simple idea to solve a complex problem. We've seen that the social web has become polluted with data and is on the right track to lose its base principles – socialization and common cause. That was our starting point. We gathered the team, drew some sketches and started to mind-map the idea. After several rounds of idea refactoring, Emit Knowledge was born. 4. Is there any competition out there in the market? Currently we don't have any competitors that share the same cause. What makes our platform different is the ideology that our product promotes and the functionalities that our platform offers for easy socialization based on interests and knowledge sharing. 5. What are the main technologies used to build Emit Knowledge? Emit Knowledge was built on a heterogeneous palette of technologies. Currently, we have four layers of separation: UI – built on ASP.NET MVC3 and Knockout.js; messaging infrastructure – built on top of RabbitMQ; background services – our in-house solution for job distribution, orchestration and processing; data storage – built on top of MongoDB.
6. What are the main reasons for choosing ASP.NET MVC? Since all of our team members are .NET engineers, the decision was very natural. ASP.NET MVC is the only Microsoft web stack that sticks to the HTTP behavioral standards. It is easy to work with, has a tiny learning curve, and everyone who is familiar with HTTP will understand its architecture and conventions without any difficulties. 7. Did you use some of the latest Microsoft technologies? If yes, which ones? Yes, we like to rock the cutting-edge tech house. Currently we are using Microsoft's latest technologies like ASP.NET MVC, Web API (work in progress) and, the best for last, we are utilizing Windows Azure IaaS to the bone. 8. Can you please tell us briefly, what would be the benefit for regular bloggers on other blogging platforms to join Emit Knowledge? Well, unless you are one of the smoking ace gurus whose blogs are followed by a large number of users, our platform offers a knowledge-based, segregated community equipped with tools that will enable both current and future users to expand their relations and to self-promote in the community based on their activity and knowledge sharing. 10. I see you are working very intensively and there is already integration with some third-party services to make the process of sharing and emitting knowledge easier. Which services did you integrate until now, and what do you plan to do next? We have "reemit" functionality for internal sharing, and we also support external services like Twitter, LinkedIn and Facebook. For the regular bloggers we have an extra treat: Windows Live Writer support for easy blog post emitting. 11. What should we expect next? Currently, we are working on a new fancy community feature. This means that we are going to support user groups to be formed. So for all existing communities and user groups out there, wait for us a little bit, we are coming to the rescue :). One of the top features they are developing next is the Community feature. It means that if you have your own User Group, Community Group or any other group where you and your users mostly blog or share (emit) knowledge in various ways, Emit Knowledge as a platform will help you have everything you need to promote your group, gain new followers and host all the things you have needed. I would invite you to try the network and start sharing knowledge in a way that will help you gather new followers and spread your knowledge faster, easier and more efficiently! Let's Emit Knowledge!

    Read the article

  • Improving Manageability of Virtual Environments

    - by Jeff Victor
    Boot Environments for Solaris 10 Branded Zones Until recently, Solaris 10 Branded Zones on Solaris 11 suffered one notable regression: Live Upgrade did not work. The individual packaging and patching tools work correctly, but the ability to upgrade Solaris while the production workload continued running did not exist. A recent Solaris 11 SRU (Solaris 11.1 SRU 6.4) restored most of that functionality, although with a slightly different concept, different commands, and without all of the feature details. This new method gives you the ability to create and manage multiple boot environments (BEs) for a Solaris 10 Branded Zone, and modify the active or any inactive BE, and to do so while the production workload continues to run. Background In case you are new to Solaris: Solaris includes a set of features that enables you to create a bootable Solaris image, called a Boot Environment (BE). This newly created image can be modified while the original BE is still running your workload(s). There are many benefits, including improved uptime and the ability to reboot into (or downgrade to) an older BE if a newer one has a problem. In Solaris 10 this set of features was named Live Upgrade. Solaris 11 applies the same basic concepts to the new packaging system (IPS) but there isn't a specific name for the feature set. The features are simply part of IPS. Solaris 11 Boot Environments are not discussed in this blog entry. Although a Solaris 10 system can have multiple BEs, until recently a Solaris 10 Branded Zone (BZ) in a Solaris 11 system did not have this ability. This limitation was addressed recently, and that enhancement is the subject of this blog entry. This new implementation uses two concepts. The first is the use of a ZFS clone for each BE. This makes it very easy to create a BE, or many BEs. This is a distinct advantage over the Live Upgrade feature set in Solaris 10, which had a practical limitation of two BEs on a system, when using UFS. The second new concept is a very simple mechanism to indicate the BE that should be booted: a ZFS property. The new ZFS property is named com.oracle.zones.solaris10:activebe (isn't that creative? ). It's important to note that the property is inherited from the original BE's file system to any BEs you create. In other words, all BEs in one zone have the same value for that property. When the (Solaris 11) global zone boots the Solaris 10 BZ, it boots the BE that has the name that is stored in the activebe property. Here is a quick summary of the actions you can use to manage these BEs: To create a BE: Create a ZFS clone of the zone's root dataset To activate a BE: Set the ZFS property of the root dataset to indicate the BE To add a package or patch to an inactive BE: Mount the inactive BE Add packages or patches to it Unmount the inactive BE To list the available BEs: Use the "zfs list" command. To destroy a BE: Use the "zfs destroy" command. Preparation Before you can use the new features, you will need a Solaris 10 BZ on a Solaris 11 system. You can use these three steps - on a real Solaris 11.1 server or in a VirtualBox guest running Solaris 11.1 - to create a Solaris 10 BZ. The Solaris 11.1 environment must be at SRU 6.4 or newer. Create a flash archive on the Solaris 10 system s10# flarcreate -n s10-system /net/zones/archives/s10-system.flar Configure the Solaris 10 BZ on the Solaris 11 system s11# zonecfg -z s10z Use 'create' to begin configuring a new zone. 
zonecfg:s10z create -t SYSsolaris10 zonecfg:s10z set zonepath=/zones/s10z zonecfg:s10z exit s11# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - s10z configured /zones/s10z solaris10 excl Install the zone from the flash archive s11# zoneadm -z s10z install -a /net/zones/archives/s10-system.flar -p You can find more information about the migration of Solaris 10 environments to Solaris 10 Branded Zones in the documentation. The rest of this blog entry demonstrates the commands you can use to accomplish the aforementioned actions related to BEs. New features in action Note that the demonstration of the commands occurs in the Solaris 10 BZ, as indicated by the shell prompt "s10z# ". Many of these commands can be performed in the global zone instead, if you prefer. If you perform them in the global zone, you must change the ZFS file system names. Create The only complicated action is the creation of a BE. In the Solaris 10 BZ, create a new "boot environment" - a ZFS clone. You can assign any name to the final portion of the clone's name, as long as it meets the requirements for a ZFS file system name. s10z# zfs snapshot rpool/ROOT/zbe-0@snap s10z# zfs clone -o mountpoint=/ -o canmount=noauto rpool/ROOT/zbe-0@snap rpool/ROOT/newBE cannot mount 'rpool/ROOT/newBE' on '/': directory is not empty filesystem successfully created, but not mounted You can safely ignore that message: we already know that / is not empty! We have merely told ZFS that the default mountpoint for the clone is the root directory. List the available BEs and active BE Because each BE is represented by a clone of the rpool/ROOT dataset, listing the BEs is as simple as listing the clones. s10z# zfs list -r rpool/ROOT NAME USED AVAIL REFER MOUNTPOINT rpool/ROOT 3.55G 42.9G 31K legacy rpool/ROOT/zbe-0 1K 42.9G 3.55G / rpool/ROOT/newBE 3.55G 42.9G 3.55G / The output shows that two BEs exist. Their names are "zbe-0" and "newBE". You can tell Solaris that one particular BE should be used when the zone next boots by using a ZFS property. Its name is com.oracle.zones.solaris10:activebe. The value of that property is the name of the clone that contains the BE that should be booted. s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe zbe-0 local Change the active BE When you want to change the BE that will be booted next time, you can just change the activebe property on the rpool/ROOT dataset. s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe zbe-0 local s10z# zfs set com.oracle.zones.solaris10:activebe=newBE rpool/ROOT s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT NAME PROPERTY VALUE SOURCE rpool/ROOT com.oracle.zones.solaris10:activebe newBE local s10z# shutdown -y -g0 -i6 After the zone has rebooted: s10z# zfs get com.oracle.zones.solaris10:activebe rpool/ROOT rpool/ROOT com.oracle.zones.solaris10:activebe newBE local s10z# zfs mount rpool/ROOT/newBE / rpool/export /export rpool/export/home /export/home rpool /rpool Mount the original BE to see that it's still there. s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0 s10z# ls /mnt Desktop export platform Documents export.backup.20130607T214951Z proc S10Flar home rpool TT_DB kernel sbin bin lib system boot lost+found tmp cdrom mnt usr dev net var etc opt Patch an inactive BE At this point, you can modify the original BE. 
If you would prefer to modify the new BE, you can restore the original value to the activebe property and reboot, and then mount the new BE to /mnt (or another empty directory) and modify it. Let's mount the original BE so we can modify it. (The first command is only needed if you haven't already mounted that BE.) s10z# zfs mount -o mountpoint=/mnt rpool/ROOT/zbe-0 s10z# patchadd -R /mnt -M /var/sadm/spool 104945-02 Note that the typical usage will be: Create a BE Mount the new (inactive) BE Use the package and patch tools to update the new BE Unmount the new BE Reboot Delete an inactive BE ZFS clones are children of their parent file systems. In order to destroy the parent, you must first "promote" the child. This reverses the parent-child relationship. (For more information on this, see the documentation.) The original rpool/ROOT file system is the parent of the clones that you create as BEs. In order to destroy an earlier BE that is the parent of other BEs, you must first promote one of the child BEs to be the ZFS parent. Only then can you destroy the original BE. Fortunately, this is easier to do than to explain: s10z# zfs promote rpool/ROOT/newBE s10z# zfs destroy rpool/ROOT/zbe-0 s10z# zfs list -r rpool/ROOT NAME USED AVAIL REFER MOUNTPOINT rpool/ROOT 3.56G 269G 31K legacy rpool/ROOT/newBE 3.56G 269G 3.55G / Documentation This feature is so new, it is not yet described in the Solaris 11 documentation. However, MOS note 1558773.1 offers some details. Conclusion With this new feature, you can add packages and patches to the boot environments of a Solaris 10 Branded Zone. This ability improves the manageability of these zones, and makes their use more practical. It also means that you can use the existing P2V tools with earlier Solaris 10 updates, and modify the environments after they become Solaris 10 Branded Zones.

    Read the article

  • NVIDIA X server - "sudo nvidia-xconfig" does not generate a working 'xorg.conf'

    - by Mike
    I am over 18 hours deep on this challenge. I got to this point and am stuck. very stuck. Maybe you can figure it out? Ubuntu Version 12.04 LTS with all the updates installed. Problem: The default settings in "etc/X11/xorg.conf" that are generated by the "nvidia-xconfig" tool, do not allow the NVIDIA x server to connect to the driver in my "System Settings Additional Driver window". (that's how I understand it. Lots of information below). Symptoms of Problem "System Settings Additional Driver" window has drivers, but the nvidia x server cannot connect/utilize any of the 4 drivers. the drivers are activated, but not in use. When I go to "System Tools Administration NVIDIA x server settings" I get an error that basically tells me to create a default file to initialize the NVIDIA X server (screen shot below). This is the messages the terminal gives after running a "sudo nvidia-xconfig" command for the first time. It seems that the generated file by the tool i just ran is generating a bad/unusable file: If I run the "sudo nvidia-xconfig" command again, I wont get an error the second time. However when I reboot, the default file that is generated (etc/X11/xorg.conf) simply puts the screen resolution at 800 x 600 (or something big like that). When I try to go to NVIDIA x server settings I am greeted with the same screen as the screen shot as in symptom 2 (no option to change the resolution). If I try to go to "system settings display" there are no other resolutions to choose from. At this point I must delete the newly minted "xorg.conf" and reinstate the original in its place. Here are the contents of the "xorg.conf" that is generated first (the one missing required information): # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 304.88 (buildmeister@swio-display-x86-rhel47-06) Wed Mar 27 15:32:58 PDT 2013 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" HorizSync 28.0 - 33.0 VertRefresh 43.0 - 72.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection Hardware: I ran the "lspci|grep VGA". There results are: 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1) More Hardware info: Ram: 16GB CPU: Intel Core i7-2720QM @2.2GHz * 8 Other: 64 bit. This is a triple boot computer and not a VM. Attempts With Not Success on My End: 1) Tried to append the "xorg.conf" with what I perceive is missing information and obviously it didn't fly. 2) All the other stuff I tried got me to this point. 
3) See if this link is helpful to you (I barely get it, but i get enough knowing that a smarter person might find this useful): http://manpages.ubuntu.com/manpages/lucid/man1/nvidia-xconfig.1.html 4) I am completely new to Linux (40 hours over past week), but not to programming. However I am very serious about changing over to Linux. When you respond (I hope someone responds...) please respond in a way that a person new to Linux can understand. 5) By the way, the reason I am in this mess is because I MUST have a second monitor running from my laptop, and "System Settings Display" doesn't recognize my second display. I know it is possible to make the second display work in my system, because when I boot from the install CD, I perform work on the native laptop monitor, but the second monitor shows a purple screen with Ubuntu in the middle, so I know the VGA port is sending a signal out. If this is too much for you to tackle please suggest an alternative method to get a second display. I don't want to go to windows but I cannot have a single display. I am really fudged here. I hope some smart person can help. Thanks in advance. Mike. **********************EDIT #1********************** More Details About Graphics Card I was asked "which brand of nvidia-card do you have exactly?" Here is what I did to provide more info (maybe relevant, maybe not, but here is everything): 1) Took my Lenovo W520 right apart to see if there is an identifier on the actual card. However I realized that if I get deep enough to take a look, the laptop "won't like it". so I put it back together. Figuring out the card this way is not an option for me right now. 2) (My computer is triple boot) I logged into Win7 and ran 'dxdiag' command. here is the screen shot: 3) I tried to look on the lenovo website for more details... but no luck. I took a look at my receipts and here is info form receipt: System Unit: W520 NVIDIA Quadro 1000M 2GB 4) In win7 I went to the NVIDIA website and used the option to have my card 'scanned' by a Java applet to determine the latest update for my card. I tried the same with Ubuntu but I can't get the applet to run. Here is the recommended driver from from the NVIDIA Applet for my card for Win7 (I hope this shines some light on the specifics of the card): Quadro/NVS/Tesla/GRID Desktop Driver Release R319 Version: 320.00 WHQL Release Date: 3.5.2013 5) Also I went on the NVIDIA driver search and looked through every possible combination of product type + product series + product to find all the combinations that yield a 1000M card. My card is: Product Type: Quadro Product Series: Quadro Series (Notebooks) Product: 1000M ***********************EDIT #2******************* Additional Symptoms Another question that generated more symptoms I previously didn't mention was: "After generating xorg.conf by nvidia-xconfig, go to additional drivers, do you see nvidia-304?" 1) I took a screen shot of the "additional drivers" right after generating xorg.conf by nvidia-xconfig. Here it is: 2) Then I did a reboot. Now Ubuntu is 600 x 800 resolution. When I logged in after the computer came up I got an error (which I always get after generating xorg.conf by nvidia-xconfig and rebooting) 3) To finally answer the question - No. There is no "NVIDIA-304" driver. Screen shot of additional drivers after generating xorg.conf by nvidia-xconfig and rebooting : At this point I revert to the original xorg.conf and delete the xorg.conf generated by Nvidia.

    Read the article

  • April Oracle Database Events

    - by Mandy Ho
    April 17-18, 2012 – Moscow, Russia: Oracle Develop Conference. The Oracle database developer conference, Oracle Develop, will visit Moscow, Russia and Hyderabad, India this spring. Oracle Develop includes a database development track, which contains .NET sessions and a hands-on lab led by an Oracle .NET product manager. Register today before the event fills up. http://www.oracle.com/javaone/ru-en/index.html
    April 24 & 26, 2012 – San Diego, CA & San Jose, CA: ISC2 Leadership Regional Event Series. Oracle at (ISC)2 Security Leadership Series: Herding Clouds -- Managing Cloud Security Concerns and Expectations. Join us for this interactive day-long session where industry leaders, including Oracle solution experts, will talk about how to: dispel the top Cloud security myths; minimize "rogue Cloud" implementations; identify potential compliance pitfalls and how to avoid them; develop contract language for your cloud providers; manage users across your fractured datacenter; and leverage existing technologies to protect data as it moves from the enterprise to the cloud. http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=146972&src=7239493&src=7239493&Act=228
    April 22-26, 2012 – Las Vegas, NV: IOUG Collaborate 12. From April 22-26, 2012, Oracle takes Las Vegas. Thousands of Oracle professionals will descend upon the Mandalay Bay Convention Center for a week's worth of education sessions, networking opportunities and more, at the only user-driven and user-run Oracle conference - COLLABORATE 12. Your COLLABORATE 12 - IOUG Forum registration comes complete with a bonus - a full-day Deep Dive education program on Sunday! Choose from numerous hot topics, including Real World Performance, High Availability and more, presented by legendary and seasoned Oracle pros, including Tom Kyte and Craig Shallahamer. http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=143637&src=7360364&src=7360364&Act=5
    Independent Oracle User Group (IOUG) Regional Events:
    April 16, 2012 – Columbus, Ohio: Ohio Oracle Users Group. Higher Performance PL/SQL and Oracle Database 11g PL/SQL New Features – Featuring Steven Feuerstein. http://www.ooug.org/
    April 26, 2012 – Irving, TX: Dallas Oracle Users Group. Oracle Database Forum: 5-7 PM, Oracle Corporation, 6031 Connection Drive, Irving, TX. http://memberservices.membee.com/538/irmevents.aspx?id=64
    April 30, 2012 – Los Angeles, CA: Real World Performance Tour. A full day of real-world database performance with Tom Kyte, author of the ever-popular AskTom blog; Andrew Holdsworth, head of Oracle's Real World Performance Team; and Graham Wood, renowned Oracle Database performance architect. Through discussion, debate and demos, they'll show you how to master performance engineering topics like: best practices for designing hardware architectures and how to spot and fix bad design; how to develop applications that deliver the fastest possible performance without sacrificing accuracy; and, new for 2012, updates on Enterprise Manager, Exadata, and what these technologies mean to your current systems. http://www.ioug.org/Events/ADayofRealWorldPerformance/tabid/194/Default.aspx

    Read the article

  • Nashorn, the rhino in the room

    - by costlow
    Nashorn is a new runtime within JDK 8 that allows developers to run code written in JavaScript and call back and forth with Java. One advantage of the Nashorn scripting engine is that it allows for quick prototyping of functionality or basic shell scripts that use Java libraries. The previous JavaScript runtime, named Rhino, was introduced in JDK 6 (released 2006, end of public updates Feb 2013). Keeping tradition amongst the global developer community, "Nashorn" is the German word for rhino. The Java platform and runtime are an intentional home to many languages beyond the Java language itself. OpenJDK’s Da Vinci Machine helps coordinate work amongst language developers and tool designers and has helped different languages by introducing the invokedynamic instruction in Java 7 (2011), which resulted in two major benefits: speeding up execution of dynamic code, and providing the groundwork for Java 8’s lambda executions. Many of these improvements are discussed at the JVM Language Summit, where language and tool designers get together to discuss experiences and issues related to building these complex components. There are a number of benefits to running JavaScript applications on JDK 8’s Nashorn technology beyond writing scripts quickly: Interoperability with Java and JavaScript libraries. Scripts do not need to be compiled. Fast execution and multi-threading of JavaScript running in Java’s JRE. The ability to remotely debug applications using an IDE like NetBeans, Eclipse, or IntelliJ (instructions on the Nashorn blog). Automatic integration with Java monitoring tools, such as performance, health, and SIEM. In the remainder of this blog post, I will explain how to use Nashorn and how to benefit from those features. Nashorn execution environment The Nashorn scripting engine is included in all versions of Java SE 8, both the JDK and the JRE. Unlike Java code, scripts written for Nashorn are interpreted and do not need to be compiled before execution. Developers and users can access it in two ways: Users running JavaScript applications can call the binary directly: jre8/bin/jjs. This mechanism can also be used in shell scripts by specifying a shebang like #!/usr/bin/jjs. Developers can use the API and obtain a ScriptEngine through: ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn"); When using a ScriptEngine, please understand that script engines execute code. Avoid running untrusted scripts or passing in untrusted/unvalidated inputs. During compilation, consider isolating access to the ScriptEngine and using Type Annotations to only allow @Untainted String arguments. One noteworthy difference between JavaScript executed in or outside of a web browser is that certain objects will not be available. For example, when run outside a browser, there is no access to a document object or DOM tree. Other than that, all syntax, semantics, and capabilities are present. Examples of Java and JavaScript The Nashorn script engine allows developers of all experience levels the ability to write and run code that takes advantage of both languages. The specific dialect is ECMAScript 5.1, as identified by the User Guide and its standards definition through ECMA International. In addition to the example below, Benjamin Winterberg has a very well-written Java 8 Nashorn Tutorial that provides a large number of code samples in both languages.
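    To make the embedding option above concrete, here is a minimal Java sketch (not from the original post) that obtains the Nashorn ScriptEngine through the standard javax.script API, hands a Java list to the script, and then calls a JavaScript function back from Java. The script body and the names used in it are invented purely for illustration.
    import javax.script.Invocable;
    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.ScriptException;
    import java.util.Arrays;
    import java.util.List;

    public class NashornEmbeddingSketch {
        public static void main(String[] args) throws ScriptException, NoSuchMethodException {
            // Obtain the Nashorn engine exactly as described above.
            ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

            // Java -> JavaScript: expose a Java object to the script under a global name.
            List<String> names = Arrays.asList("Duke", "Nashorn", "Rhino");
            engine.put("names", names);

            // Evaluate JavaScript source; the script calls Java methods (size/get) on the list.
            engine.eval("function greet(prefix) {"
                      + "  var out = [];"
                      + "  for (var i = 0; i < names.size(); i++) {"
                      + "    out.push(prefix + ', ' + names.get(i) + '!');"
                      + "  }"
                      + "  return out.join(' ');"
                      + "}");

            // JavaScript -> Java: invoke the JavaScript function from Java code.
            Invocable invocable = (Invocable) engine;
            Object result = invocable.invokeFunction("greet", "Hello");
            System.out.println(result); // Hello, Duke! Hello, Nashorn! Hello, Rhino!
        }
    }
    As the post cautions, anything passed to such an engine is executed as code, so the same care about untrusted scripts and inputs applies to this sketch.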
Basic Operations A basic Hello World application written to run on Nashorn would look like this: #!/usr/bin/jjs print("Hello World"); The first line is a standard script indication, so that Linux or Unix systems can run the script through Nashorn. On Windows where scripts are not as common, you would run the script like: jjs helloWorld.js. Receiving Arguments In order to receive program arguments your jjs invocation needs to use the -scripting flag and a double-dash to separate which arguments are for jjs and which are for the script itself:jjs -scripting print.js -- "This will print" #!/usr/bin/jjs var whatYouSaid = $ARG.length==0 ? "You did not say anything" : $ARG[0] print(whatYouSaid); Interoperability with Java libraries (including 3rd party dependencies) Another goal of Nashorn was to allow for quick scriptable prototypes, allowing access into Java types and any libraries. Resources operate in the context of the script (either in-line with the script or as separate threads) so if you open network sockets and your script terminates, those sockets will be released and available for your next run. Your code can access Java types the same as regular Java classes. The “import statements” are written somewhat differently to accommodate for language. There is a choice of two styles: For standard classes, just name the class: var ServerSocket = java.net.ServerSocket For arrays or other items, use Java.type: var ByteArray = Java.type("byte[]")You could technically do this for all. The same technique will allow your script to use Java types from any library or 3rd party component and quickly prototype items. Building a user interface One major difference between JavaScript inside and outside of a web browser is the availability of a DOM object for rendering views. When run outside of the browser, JavaScript has full control to construct the entire user interface with pre-fabricated UI controls, charts, or components. The example below is a variation from the Nashorn and JavaFX guide to show how items work together. Nashorn has a -fx flag to make the user interface components available. With the example script below, just specify: jjs -fx -scripting fx.js -- "My title" #!/usr/bin/jjs -fx var Button = javafx.scene.control.Button; var StackPane = javafx.scene.layout.StackPane; var Scene = javafx.scene.Scene; var clickCounter=0; $STAGE.title = $ARG.length>0 ? $ARG[0] : "You didn't provide a title"; var button = new Button(); button.text = "Say 'Hello World'"; button.onAction = myFunctionForButtonClicking; var root = new StackPane(); root.children.add(button); $STAGE.scene = new Scene(root, 300, 250); $STAGE.show(); function myFunctionForButtonClicking(){   var text = "Click Counter: " + clickCounter;   button.setText(text);   clickCounter++;   print(text); } For a more advanced post on using Nashorn to build a high-performing UI, see JavaFX with Nashorn Canvas example. Interoperable with frameworks like Node, Backbone, or Facebook React The major benefit of any language is the interoperability gained by people and systems that can read, write, and use it for interactions. Because Nashorn is built for the ECMAScript specification, developers familiar with JavaScript frameworks can write their code and then have system administrators deploy and monitor the applications the same as any other Java application. A number of projects are also running Node applications on Nashorn through Project Avatar and the supported modules. 
In addition to the previously mentioned Nashorn tutorial, Benjamin has also written a post about Using Backbone.js with Nashorn. To show the multi-language power of the Java Runtime, there is another interesting example that unites Facebook React and Clojure on JDK 8’s Nashorn. Summary Nashorn provides a simple and fast way of executing JavaScript applications and bridging between the best of each language. By making the full range of Java libraries available to JavaScript applications, and the quick prototyping style of JavaScript available to Java applications, developers are free to work as they see fit. Software Architects and System Administrators can take advantage of one runtime and leverage any work that they have done to tune, monitor, and certify their systems. Additional information is available within: the Nashorn Users’ Guide, Java Magazine’s article "Next Generation JavaScript Engine for the JVM," and the Nashorn team’s primary blog or a very helpful collection of Nashorn links.

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users. So, how does this option work? After building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points - even single data cells only.   TIP It is not necessary to highlight and copy the row or column descriptions   Next, from the Smart View ribbon select Copy Data Point. Then transfer to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View; from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. After clicking the Refresh icon on the Smart View menu, the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.) It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text: Before refresh: After refresh: From now on, for any subsequent updates of the data shown in your documents you only need to refresh the data by clicking the Refresh button on the Smart View menu, without copying and pasting the content again. As you might realize when trying out this feature on your own, there won’t be any Point of View shown in the Office document. Also you have seen in the example, where only a single data cell was copied, that there aren’t any member names or row/column descriptions copied, which are usually required in an ad-hoc report in order to exactly define where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell, as shown in the following screen shot (which is taken from another context).   So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that’s not all: you now have the chance to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection:   By selecting the Manage POV option from the Smart View menu in Word or Power Point…   … the following POV Manager – Queries window opens:   You can now change your selection for each dimension from the original POV by either double-clicking the dimension member in the lower right box under POV: or by selecting the Member Selector icon on the top right hand side of the window.
After confirming your changes you need to refresh your document again. Be aware, that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV. TIP Build your original report already in a way that dimensions you might want to change from within Word or Power Point are placed in the POV. And there is another really nice feature I wouldn’t like to miss mentioning: Using Dynamic Data Points in the way described above, you will never miss or need to search again for your original Excel sheet from which values were taken and copied as data points into an Office document. Because from even only one single data cell Smart View is able to recreate the entire original report content with just a few clicks: Select one of the numbers from within your Word or Power Point document by double-clicking.   Then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary and at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features. So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section , where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected] . About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.  

    Read the article

  • Oracle Flashback Technologies - Overview

    - by Sridhar_R-Oracle
    Oracle Flashback Technologies - Introduction: In his May 29th 2014 blog, my colleague Joe Meeks introduced Oracle Maximum Availability Architecture (MAA) and discussed both planned and unplanned outages. Let’s take a closer look at unplanned outages. These can be caused by physical failures (e.g., server, storage, network, file deletion, physical corruption, site failures) or by logical failures – cases where all components and files are physically available, but data is incorrect or corrupt. These logical failures are usually caused by human errors or application logic errors. This blog series focuses on these logical errors – what causes them and how to address and recover from them using Oracle Database Flashback. In this introductory blog post, I’ll provide an overview of the Oracle Database Flashback technologies and will discuss the features in detail in future blog posts. Let’s get started. We are all human beings (unless a machine is reading this), and making mistakes is a part of what we do…often what we do best!  We “fat finger”, we spill drinks on keyboards, unplug the wrong cables, etc.  In addition, many of us, in our lives as DBAs or developers, must have observed, caused, or corrected one or more of the following unpleasant events: Accidentally updated a table with wrong values!! Performed a batch update that went wrong - due to logical errors in the code!! Dropped a table!! How do DBAs typically recover from these types of errors? First, data needs to be restored and recovered to the point in time when the error occurred (incomplete or point-in-time recovery).  Moreover, depending on the type of fault, it’s possible that some services – or even the entire database – would have to be taken down during the recovery process. Apart from error conditions, there are other questions that need to be addressed as part of the investigation. For example, what did the data look like in the morning, prior to the error? What were the various changes to the row(s) between two timestamps? Who performed the transaction and how can it be reversed?  Oracle Database includes built-in Flashback technologies, with features that address these challenges and questions, and enable you to perform faster, easier, and convenient recovery from logical corruptions. History: Flashback Query, the first Flashback Technology, was introduced in Oracle 9i. It provides a simple, powerful and completely non-disruptive mechanism for data verification and recovery from logical errors, and enables users to view the state of data at a previous point in time. Flashback Technologies were further enhanced in Oracle 10g, to provide fast, easy recovery at the database, table, row, and even at a transaction level. Oracle Database 11g introduced an innovative method to manage and query long-term historical data with Flashback Data Archive. The 11g release also introduced Flashback Transaction, which provides an easy, one-step operation to back out a transaction. Oracle Database versions 11.2.0.2 and beyond further enhanced the performance of these features. Note that all the features listed here work without requiring any kind of restore operation. In addition, Flashback features are fully supported with the new multi-tenant capabilities introduced with Oracle Database 12c. Flashback Features: Oracle Flashback Database enables point-in-time recovery of the entire database without requiring a traditional restore and recovery operation.
It rewinds the entire database to a specified point in time in the past by undoing all the changes that were made since that time.
    Oracle Flashback Table enables an entire table or a set of tables to be recovered to a point in time in the past.
    Oracle Flashback Drop enables accidentally dropped tables and all dependent objects to be restored.
    Oracle Flashback Query enables data to be viewed at a point in time in the past. This feature can be used to view and reconstruct data that was lost due to unintentional change(s) or deletion(s). This feature can also be used to build self-service error correction into applications, empowering end-users to undo and correct their errors.
    Oracle Flashback Version Query offers the ability to query the historical changes to data between two points in time or system change numbers (SCN).
    Oracle Flashback Transaction Query enables changes to be examined at the transaction level. This capability can be used to diagnose problems, perform analysis, audit transactions, and even revert the transaction by undoing SQL.
    Oracle Flashback Transaction is a procedure used to back out a transaction and its dependent transactions.
    Flashback technologies eliminate the need for a traditional restore and recovery process to fix logical corruptions or make enquiries. Using these technologies, you can recover from the error in the same amount of time it took to generate the error. All the Flashback features can be accessed either via the SQL command line or via Enterprise Manager. Most of the Flashback technologies depend on the available UNDO to retrieve older data. The following table describes the various Flashback technologies: their purpose, example syntax, and the situations where each individual technology can be used.
    Error investigation related (the purpose is to investigate what went wrong and what the values were at certain points in time):
    Flashback Queries (select .. as of SCN | Timestamp) - helps to see the value of a row/set of rows at a point in time.
    Flashback Version Queries (select .. versions between SCN | Timestamp and SCN | Timestamp) - helps determine how the value evolved between certain SCNs or between timestamps.
    Flashback Transaction Queries (select .. XID=) - helps to understand how the transaction caused the changes.
    Error correction related (the purpose is to fix the error and correct the problems):
    Flashback Table (flashback table .. to SCN | Timestamp) - to rewind the table to a particular timestamp or SCN to reverse unwanted updates.
    Flashback Drop (flashback table .. to before drop) - to undrop or undelete a table.
    Flashback Database (flashback database to SCN | Restore Point) - this is the rewind button for Oracle databases. You can revert the entire database to a particular point in time. It is a fast way to perform a PITR (point-in-time recovery).
    Flashback Transaction (DBMS_FLASHBACK.TRANSACTION_BACKOUT(XID..)) - to reverse a transaction and its related transactions.
    Advanced use cases: Flashback technology is integrated into Oracle Recovery Manager (RMAN) and Oracle Data Guard. So, apart from the basic use cases mentioned above, the following use cases are addressed using Oracle Flashback:
    Block Media Recovery by RMAN - to perform block-level recovery.
    Snapshot Standby - where the standby is temporarily converted to a read/write environment for testing, backup, or migration purposes.
    Reinstate an old primary in a Data Guard environment – this avoids the need to restore an old backup and perform a recovery to make it a new standby.
    Guaranteed Restore Points - to bring back the entire database to an older point in time in a guaranteed way; and so on. I hope this introductory overview helps you understand how Flashback features can be used to investigate and recover from logical errors.  As mentioned earlier, I will take a deeper dive into some of the critical Flashback features in my upcoming blogs and address common use cases.
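    To make the Flashback Query syntax listed above concrete, here is a small, hypothetical JDBC sketch that reads a row as it looked one hour ago. The AS OF TIMESTAMP clause is the Flashback Query form described in this post; the connection details, table and column names are invented for illustration, and the session is assumed to have the required flashback privileges and sufficient UNDO retention.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class FlashbackQuerySketch {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details - replace with your own environment.
            String url = "jdbc:oracle:thin:@//dbhost:1521/orcl";

            try (Connection conn = DriverManager.getConnection(url, "app_user", "app_password")) {
                // Flashback Query: view the row as it existed one hour ago,
                // with no restore operation (it relies on available UNDO).
                String sql = "SELECT order_status FROM orders "
                           + "AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR) "
                           + "WHERE order_id = ?";

                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setInt(1, 1001);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println("Status one hour ago: " + rs.getString(1));
                        }
                    }
                }
            }
        }
    }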

    Read the article

  • Java collision detection and player movement: tips

    - by Loris
    I have read a short guide for game develompent (java, without external libraries). I'm facing with collision detection and player (and bullets) movements. Now i put the code. Most of it is taken from the guide (should i link this guide?). I'm just trying to expand and complete it. This is the class that take care of updates movements and firing mechanism (and collision detection): public class ArenaController { private Arena arena; /** selected cell for movement */ private float targetX, targetY; /** true if droid is moving */ private boolean moving = false; /** true if droid is shooting to enemy */ private boolean shooting = false; private DroidController droidController; public ArenaController(Arena arena) { this.arena = arena; this.droidController = new DroidController(arena); } public void update(float delta) { Droid droid = arena.getDroid(); //droid movements if (moving) { droidController.moveDroid(delta, targetX, targetY); //check if arrived if (droid.getX() == targetX && droid.getY() == targetY) moving = false; } //firing mechanism if(shooting) { //stop shot if there aren't bullets if(arena.getBullets().isEmpty()) { shooting = false; } for(int i = 0; i < arena.getBullets().size(); i++) { //current bullet Bullet bullet = arena.getBullets().get(i); System.out.println(bullet.getBounds()); //angle calculation double angle = Math.atan2(bullet.getEnemyY() - bullet.getY(), bullet.getEnemyX() - bullet.getX()); //increments x and y bullet.setX((float) (bullet.getX() + (Math.cos(angle) * bullet.getSpeed() * delta))); bullet.setY((float) (bullet.getY() + (Math.sin(angle) * bullet.getSpeed() * delta))); //collision with obstacles for(int j = 0; j < arena.getObstacles().size(); j++) { Obstacle obs = arena.getObstacles().get(j); if(bullet.getBounds().intersects(obs.getBounds())) { System.out.println("Collision detect!"); arena.removeBullet(bullet); } } //collisions with enemies for(int j = 0; j < arena.getEnemies().size(); j++) { Enemy ene = arena.getEnemies().get(j); if(bullet.getBounds().intersects(ene.getBounds())) { System.out.println("Collision detect!"); arena.removeBullet(bullet); } } } } } public boolean onClick(int x, int y) { //click on empty cell if(arena.getGrid()[(int)(y / Arena.TILE)][(int)(x / Arena.TILE)] == null) { //coordinates targetX = x / Arena.TILE; targetY = y / Arena.TILE; //enables movement moving = true; return true; } //click on enemy: fire if(arena.getGrid()[(int)(y / Arena.TILE)][(int)(x / Arena.TILE)] instanceof Enemy) { //coordinates float enemyX = x / Arena.TILE; float enemyY = y / Arena.TILE; //new bullet Bullet bullet = new Bullet(); //start coordinates bullet.setX(arena.getDroid().getX()); bullet.setY(arena.getDroid().getY()); //end coordinates (enemie) bullet.setEnemyX(enemyX); bullet.setEnemyY(enemyY); //adds bullet to arena arena.addBullet(bullet); //enables shooting shooting = true; return true; } return false; } As you can see for collision detection i'm trying to use Rectangle object. 
Droid example: import java.awt.geom.Rectangle2D; public class Droid { private float x; private float y; private float speed = 20f; private float rotation = 0f; private float damage = 2f; public static final int DIAMETER = 32; private Rectangle2D rectangle; public Droid() { rectangle = new Rectangle2D.Float(x, y, DIAMETER, DIAMETER); } public float getX() { return x; } public void setX(float x) { this.x = x; //rectangle update rectangle.setRect(x, y, DIAMETER, DIAMETER); } public float getY() { return y; } public void setY(float y) { this.y = y; //rectangle update rectangle.setRect(x, y, DIAMETER, DIAMETER); } public float getSpeed() { return speed; } public void setSpeed(float speed) { this.speed = speed; } public float getRotation() { return rotation; } public void setRotation(float rotation) { this.rotation = rotation; } public float getDamage() { return damage; } public void setDamage(float damage) { this.damage = damage; } public Rectangle2D getRectangle() { return rectangle; } } For now, if i start the application and i try to shot to an enemy, is immediately detected a collision and the bullet is removed! Can you help me with this? If the bullet hit an enemy or an obstacle in his way, it must disappear. Ps: i know that the movements of the bullets should be managed in another class. This code is temporary. update I realized what happens, but not why. With those for loops (which checks collisions) the movements of the bullets are instantaneous instead of gradual. In addition to this, if i add the collision detection to the Droid, the method intersects returns true ALWAYS while the droid is moving! public void moveDroid(float delta, float x, float y) { Droid droid = arena.getDroid(); int bearing = 1; if (droid.getX() > x) { bearing = -1; } if (droid.getX() != x) { droid.setX(droid.getX() + bearing * droid.getSpeed() * delta); //obstacles collision detection for(Obstacle obs : arena.getObstacles()) { if(obs.getRectangle().intersects(droid.getRectangle())) { System.out.println("Collision detected"); //ALWAYS HERE } } //controlla se è arrivato if ((droid.getX() < x && bearing == -1) || (droid.getX() > x && bearing == 1)) droid.setX(x); } bearing = 1; if (droid.getY() > y) { bearing = -1; } if (droid.getY() != y) { droid.setY(droid.getY() + bearing * droid.getSpeed() * delta); if ((droid.getY() < y && bearing == -1) || (droid.getY() > y && bearing == 1)) droid.setY(y); } }
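    A minimal, self-contained sketch of the Rectangle2D.intersects technique discussed above, using invented names and values: it keeps every bounding box in the same unit (pixels) and rebuilds the bounds from the current position before each test, two details worth double-checking when the intersection test seems to fire immediately or constantly.
    import java.awt.geom.Rectangle2D;

    // Minimal sketch of rectangle-based collision testing, independent of the game code above.
    public class CollisionSketch {

        // A tiny entity with a position in pixels and a square bounding box.
        static class Entity {
            float x, y;          // top-left corner, in pixels
            final int size;      // width and height, in pixels

            Entity(float x, float y, int size) {
                this.x = x;
                this.y = y;
                this.size = size;
            }

            // Build the bounds on demand so they always match the current position.
            Rectangle2D.Float bounds() {
                return new Rectangle2D.Float(x, y, size, size);
            }
        }

        public static void main(String[] args) {
            Entity bullet = new Entity(0f, 0f, 8);
            Entity enemy  = new Entity(96f, 96f, 32);

            // Move the bullet toward the enemy in small steps and test for overlap each step.
            float speed = 4f;
            double angle = Math.atan2(enemy.y - bullet.y, enemy.x - bullet.x);

            for (int step = 0; step < 100; step++) {
                bullet.x += (float) (Math.cos(angle) * speed);
                bullet.y += (float) (Math.sin(angle) * speed);

                if (bullet.bounds().intersects(enemy.bounds())) {
                    System.out.println("Hit detected at step " + step
                            + " (" + bullet.x + ", " + bullet.y + ")");
                    break;
                }
            }
        }
    }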

    Read the article

  • Announcing: Improvements to the Windows Azure Portal

    - by ScottGu
    Earlier today we released a number of enhancements to the new Windows Azure Management Portal.  These new capabilities include: Service Bus Management and Monitoring Support for Managing Co-administrators Import/Export support for SQL Databases Virtual Machine Experience Enhancements Improved Cloud Service Status Notifications Media Services Monitoring Support Storage Container Creation and Access Control Support All of these improvements are now live in production and available to start using immediately.  Below are more details on them: Service Bus Management and Monitoring The new Windows Azure Management Portal now supports Service Bus management and monitoring. Service Bus provides rich messaging infrastructure that can sit between applications (or between cloud and on-premise environments) and allow them to communicate in a loosely coupled way for improved scale and resiliency. With the new Service Bus experience, you can now create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions. You can also get rich monitoring for Service Bus Queues, Topics and Subscriptions. To create a Service Bus namespace, you can now select the “Service Bus” tab in the Windows Azure portal and then simply select the CREATE command: Doing so will bring up a new “Create a Namespace” dialog that allows you to name and create a new Service Bus Namespace: Once created, you can obtain security credentials associated with the Namespace via the ACCESS KEY command. This gives you the ability to obtain the connection string associated with the service namespace. You can copy and paste these values into any application that requires these credentials: It is also now easy to create Service Bus Queues and Topics via the NEW experience in the portal drawer.  Simply click the NEW command and navigate to the “App Services” category to create a new Service Bus entity: Once you provision a new Queue or Topic it can be managed in the portal.  Clicking on a namespace will display all queues and topics within it: Clicking on an item in the list will allow you to drill down into a dashboard view that allows you to monitor the activity and traffic within it, as well as perform operations on it. For example, below is a view of an “orders” queue – note how we now surface both the incoming and outgoing message flow rate, as well as the total queue length and queue size: To monitor pub/sub subscriptions you can use the ADD METRICS command within a topic and select a specific subscription to monitor. Support for Managing Co-Administrators You can now add co-administrators for your Windows Azure subscription using the new Windows Azure Portal. This allows you to share management of your Windows Azure services with other users. Subscription co-administrators share the same administrative rights and permissions that service administrator have - except a co-administrator cannot change or view billing details about the account, nor remove the service administrator from a subscription. In the SETTINGS section, click on the ADMINISTRATORS tab, and select the ADD button to add a co-administrator to your subscription: To add a co-administrator, you specify the email address for a Microsoft account (formerly Windows Live ID) or an organizational account, and choose the subscription you want to add them to: You can later update the subscriptions that the co-administrator has access to by clicking on the EDIT button, and then select or deselect the subscriptions to which they belong. 
Import/Export Support for SQL Databases The Windows Azure administration portal now supports importing and exporting SQL Databases to/from Blob Storage.  Databases can be imported/exported to blob storage using the same BACPAC file format that is supported with SQL Server 2012.  Among other benefits, this makes it easy to copy and migrate databases between on-premise and cloud environments. SQL Databases now have an EXPORT command in the bottom drawer that when pressed will prompt you to save your database to a Windows Azure storage container: The UI allows you to choose an existing storage account or create a new one, as well as the name of the BACPAC file to persist in blob storage: You can also now import and create a new SQL Database by using the NEW command.  This will prompt you to select the storage container and file to import the database from: The Windows Azure Portal enables you to monitor the progress of import and export operations. If you choose to log out of the portal, you can come back later and check on the status of all of the operations in the new history tab of the SQL Database server – this shows your entire import and export history and the status (success/fail) of each: Enhancements to the Virtual Machine Experience One of the common pain-points we have heard from customers using the preview of our new Virtual Machine support has been the inability to delete the associated VHDs when a VM instance (or VM drive) gets deleted. Prior to today’s release the VHDs would continue to be in your storage account and accumulate storage charges. You can now navigate to the Disks tab within the Virtual Machine extension, select a VM disk to delete, and click the DELETE DISK command: When you click the DELETE DISK button you have the option to delete the disk + associated .VHD file (completely clearing it from storage).  Alternatively you can delete the disk but still retain a .VHD copy of it in storage. Improved Cloud Service Status Notifications The Windows Azure portal now exposes more information of the health status of role instances.  If any of the instances are in a non-running state, the status at the top of the dashboard will summarize the status (and update automatically as the role health changes): Clicking the instance hyperlink within this status summary view will navigate you to a detailed role instance view, and allow you to get more detailed health status of each of the instances.  The portal has been updated to provide more specific status information within this detailed view – giving you better visibility into the health of your app: Monitoring Support for Media Services Windows Azure Media Services allows you to create media processing jobs (for example: encoding media files) in your Windows Azure Media Services account. In the Windows Azure Portal, you can now monitor the number of encoding jobs that are queued up for processing as well as active, failed and queued tasks for encoding jobs. On your media services account dashboard, you can visualize the monitoring data for last 6 hours, 24 hours or 7 days. Storage Container Creation and Access Control Support You can now create Windows Azure Storage storage containers from within the Windows Azure Portal.  
After selecting a storage account, you can navigate to the CONTAINERS tab and click the ADD CONTAINER command: This will display a dialog that lets you name the new container and control access to it: You can also update the access setting as well as container metadata of existing containers by selecting one and then using the new EDIT CONTAINER command: This will then bring up the edit container dialog that allows you to change and save its settings: In addition to creating and editing containers, you can click on them within the portal to drill-in and view blobs within them.  Summary The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. We’ll have even more new features and enhancements coming later this month – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, and support .NET 4.5 with Websites).  Keep an eye out on my blog for details as these new features become available. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
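As a programmatic counterpart to the container creation and access-control support described above, the same two operations can also be scripted. The following is a minimal sketch, not part of the original post, assuming the 1.x Microsoft.WindowsAzure.StorageClient library that shipped with the SDK at the time; the account credentials and container name are placeholders:
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class ContainerSample
    {
        static void Main()
        {
            // Placeholder credentials - substitute your own storage account name and key
            var account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=<key>");

            CloudBlobClient blobClient = account.CreateCloudBlobClient();

            // Create the container if it does not already exist
            CloudBlobContainer container = blobClient.GetContainerReference("images");
            container.CreateIfNotExist(); // named CreateIfNotExists() in the later 2.x storage library

            // Equivalent of the portal's access setting: make blobs publicly readable
            container.SetPermissions(new BlobContainerPermissions
            {
                PublicAccess = BlobContainerPublicAccessType.Blob
            });
        }
    }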

    Read the article

  • Big Visible Charts

    - by Robert May
    An important part of Agile is the concept of transparency and visibility. In proper functioning teams, stakeholders can look at any team at any time in the iteration or release and see how that team is doing by simply looking at what we call Big Visible Charts. If you’ve done Scrum, you’ve seen these charts. However, interpreting these charts can often be an art form. There are several different charts that can be useful. In this newsletter, I’ll focus on the Iteration Burndown and Cumulative Flow charts. I’ve included a copy of the spreadsheet that I used to create the charts, and if you don’t have a tool that creates them for you, you can use this spreadsheet to do so. Our preferred tool for managing Scrum projects is Rally. Rally creates all of these charts for you, saving you quite a bit of time. The Iteration Burndown and Cumulative Flow Charts This is the main chart that teams use. Although less useful to stakeholders, this chart is critical to the team and provides quite a bit of information to the team about how their iteration is going. Most charts are a combination of the charts below, so you may need to combine aspects of each section to understand what is happening in your iterations. Ideal Ah, isn’t that a pretty picture? Unfortunately, it’s also very unrealistic. I’ve seen iterations that come close to ideal, but never that match perfectly. If your iteration matches perfectly, chances are, someone is playing with the numbers. Reality is just too difficult to have a burndown chart that matches this exactly. Late Planning Iteration started, but the team didn’t. You can tell this by the fact that the real number of estimated hours didn’t appear until day two. In the cumulative flow, you can also see that nothing was defined in Day one and two. You want to avoid situations like this. You’ll note that the team had to burn faster than is ideal to meet the iteration because of the late planning. This often results in long weeks and days. Testing Starved Determining whether or not testing is starved is difficult without the cumulative flow. The pattern in the burndown could be nothing more that developers not completing stories early enough or could be caused by stories being too big. With the cumulative flow, however, you see that only small bites are in progress and stories were completed early, but testing didn’t start testing until the end of the iteration, and didn’t complete testing all stories in the iteration. When this happens, question whether or not your testing resources are sufficient for your team and whether or not acceptance is adequately defined. No Testing With this one, both graphs show the same thing; the team needs testers and testing! Without testing, what was completed cannot be verified to make sure that it is acceptable to the business. If you find yourself in this situation, review your testing practices and acceptance testing process and make changes today. Late Development With this situation, both graphs tell a story. In the top graph, you can see that the hours failed to burn down as quickly as the team expected. This could be caused by the team not correctly estimating their hours or the team could have had illness or some other issue that affected them. Often, when teams are tackling something that is more unknown, they’ll run into technical barriers that cause the burn down to happen slower than expected. In the cumulative flow graph, you can see that not much was completed in the first few days. 
This could be because of illness or technical barriers or simply poor estimation. Testing was able to keep up with everything that was completed, however. No Tool Updating When you see graphs that look like this, you can be assured that it’s because the team is not updating the tool that generates the graphs. Review your policy for when they are to update. On the teams that I run, I require that each team member updates the tool at least once daily. You should also check to see how well the team is breaking down stories into tasks. If they’re creating few large tasks, graphs can look similar to this. As a general rule, I never allow tasks, other than Unit Testing and Uncertainty, to be greater than eight hours in duration. Scope Increase I always encourage team members to enter in however much time they think they have left on a task, even if that means increasing the total amount of time left to do. You get a much better and more realistic picture this way. Increasing time remaining could explain the burndown graph, but by looking at the cumulative flow graph, we can see that stories were added to the iteration and scope was increased. Since planning should consume all of the hours in the iteration, this is almost always a bad thing. If the scope change happened late in the iteration and the hours remaining were well below the ideal burn, then increasing scope is probably o.k., but estimation needs to get better. However, with the charts above, that’s clearly not what happened and the team was required to do extra work to make the iteration. If you find this happening, your product owner and ScrumMasters need training. The team also needs to learn to say no. Scope Decrease Scope decreases are just as bad as scope increases. Usually, graphs above show that the team did a poor job of estimating their stories and part way through had to reduce scope to change the iteration. This will happen once in a while, but if you find it’s a pattern on your team, you need to re-evaluate planning. Some teams are hopelessly optimistic. In those cases, I’ll introduce a task I call “Uncertainty.” With Uncertainty, the team estimates how many hours they might need if things don’t go well with the tasks they’ve defined. They try to estimate things that could go poorly and increase the time appropriately. Having an Uncertainty task allows them to have a low and high estimate. Uncertainty should not just be an arbitrary buffer. It must correlate to real uncertainty in the tasks that have been defined. Stories are too Big Often, we see graphs like the ones above. Note that the burndown looks fairly good, other than the chunky acceptance of stories. However, when you look at cumulative flow, you can see that at one point, everything is in progress. This is a bad thing. When you see graphs like this, you’re in one of two states. You may just have a very small team and can only handle one or two stories in your iteration. If you have more than one or two people, then the most likely problem is that your stories are far too big. To combat this, break large high hour stories into smaller pieces that can be completed independently and accepted independently. If you don’t, you’ll likely be requiring your testers to do heroic things to complete testing on the last day of the iteration and you’re much more likely to have the entire iteration fail, because of the limited amount of things that can be completed. Summary There are other charts that can be useful when doing scrum. 
If you don’t have any big visible charts, you really need to evaluate your process and change. These charts can provide the team a wealth of information and help you write better software. If you have any questions about charts that you’re seeing on your team, contact me with a screen capture of the charts and I’ll tell you what I’m seeing in those charts. I always want this information to be useful, so please let me know if you have other questions. Technorati Tags: Agile

    Read the article

  • CodePlex Daily Summary for Tuesday, July 02, 2013

    CodePlex Daily Summary for Tuesday, July 02, 2013Popular ReleasesMastersign.Expressions: Mastersign.Expressions v0.4.2: added support for if(<cond>, <true-part>, <false-part>) fixed multithreading issue with rand() improved demo applicationNB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store v2.3.6 Rel0: v2.3.6 Is now DNN6 and DNN7 compatible Important : During update this install with overwrite the menu.xml setting, if you have changed this then make a backup before you upgrade and reapply your changes after the upgrade. Please view the following documentation if you are installing and configuring this module for the first time System Requirements Skill requirements Downloads and documents Step by step guide to a working store Please ask all questions in the Discussions tab. Document.Editor: 2013.26: What's new for Document.Editor 2013.26: New Insert Chart Improved User Interface Minor Bug Fix's, improvements and speed upsWsus Package Publisher: Release V1.2.1307.01: Fix an issue in the UI, approvals are not shown correctly in the 'Report' tabDirectX Tool Kit: July 2013: July 1, 2013 VS 2013 Preview projects added and updates for DirectXMath 3.05 vectorcall Added use of sRGB WIC metadata for JPEG, PNG, and TIFF SaveToWIC functions updated with new optional setCustomProps parameter and error check with optional targetFormatCore Server 2012 Powershell Script Hyper-v Manager: new_root.zip: Verison 1.0JSON Toolkit: JSON Toolkit 4.1.736: Improved strinfigy performance New serializing feature New anonymous type support in constructorsDotNetNuke® IFrame: IFrame 04.05.00: New DNN6/7 Manifest file and Azure Compatibility.VidCoder: 1.5.2 Beta: Fixed crash on presets with an invalid bitrate.Gardens Point LEX: Gardens Point LEX version 1.2.1: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.2.1 has new facilities for defining and manipulating character classes. These changes make the construction of large Unicode character classes more convenient. The runtime code for performing automaton backup has been re-implemented, and is now faster for scanners that need backup. Source CodeThe distribution contains a complete VS2010 project for the appli...ZXMAK2: Version 2.7.5.7: - fix TZX emulation (Bruce Lee, Zynaps) - fix ATM 16 colors for border - add memory module PROFI 512K; add PROFI V03 rom image; fix PROFI 3.XX configTwitter image Downloader: Twitter Image Downloader 2 with Installer: Application file with Install shield and Dot Net 4.0 redistributableUltimate Music Tagger: Ultimate Music Tagger 1.0.0.0: First release of Ultimate Music TaggerBlackJumboDog: Ver5.9.2: 2013.06.28 Ver5.9.2 (1) ??????????(????SMTP?????)?????????? (2) HTTPS???????????Outlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. 
You will have to uninstall the previous version (add/remove programs) if you had installed it ...Terminals: Version 3.0 - Release: Changes since version 2.0:Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6Python Tools for Visual Studio: 2.0 Beta: We’re pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. 
The server pack is ready for both Windows and Linux, but you might need to compi...New ProjectsALM Rangers DevOps Tooling and Guidance: Practical tooling and guidance that will enable teams to realize a faster deployment based on continuous feedback.Core Server 2012 Powershell Script Hyper-v Manager: Free core Server 2012 powershell scripts and batch files that replace the non-existent hyper-v manager, vmconnect and mstsc.Enhanced Deployment Service (EDS): EDS is a web service based utility designed to extend the deployment capabilities of administrators with the Microsoft Deployment Toolkit.ExtendedDialogBox: Libreria DialogBoxJazdy: This project is here only because we wanted to take advantage of a public git server.Mon Examen: This web interface is meant to make examinationsneet: summaryOrchard Multi-Choice Voting: A multiple choice voting Orchard module.Particle Swarm Optimization Solving Quadratic Assignment Problem: This project is submitted for the solving of QAP using PSO algorithms with addition of some modification Porjects: 23123123PPL Power Pack: PPL Power PackProperty Builder: Visual Studio tool for speeding up process of coding class properties getters and setters.RedRuler for Redline: I tried some on-screen rulers, none of them help me measure the UI element quickly based on the Redline. So I decided to created this handy RedRuler tool. Royale Living: Mahindra Royale Community PortalSearch and booking Hotel or Tours: Ð? án nghiên c?u c?a sinh viên tdt theo mô hình mvc 4SystemBuilder.Show: This tool is a helper after you create your project in visual studio to create the respective objects and interface. TalentDesk: new ptojectTcmplex: The Training Center teaches many different kind of course such as English, French, Computer hardware and computer softwareTFS Reporting Guide: Provides guidance and samples to enable TFS users to generate reports based on WIT data.Umbraco AdaptiveImages: Adaptive Images Package for UmbracoVirtualNet - A ILcode interpreter/emulator written in C++/Assembly: VirtualNet is a interpreter/emulator for running .net code in native without having to install the .Net FrameWorkVisual Blocks: Visual Blocks ????IDE ????? ??????? ????? ????/?? Visual Studio and Cloud Based Mobile Device Testing: Practical guidance enabling field to remove blockers to adoption and to use and extend the Perfecto Mobile Cloud Device testing within the context of VS.Windows 8 Time Picker for Windows Phone: A Windows Phone implementation of the Time Picker control found in the Windows 8.1 Alarms app.???? - SmallBasic?: ?????????

    Read the article

  • The Great Divorce

    - by BlackRabbitCoder
    I have a confession to make: I've been in an abusive relationship for more than 17 years now.  Yes, I am not ashamed to admit it, but I'm finally doing something about it. I met her in college, she was new and sexy and amazingly fast -- and I'd never met anything like her before.  Her style and her power captivated me and I couldn't wait to learn more about her.  I took a chance on her, and though I learned a lot from her -- and will always be grateful for my time with her -- I think it's time to move on. Her name was C++, and she so outshone my previous love, C, that any thoughts of going back evaporated in the heat of this new romance.  She promised me she'd be gentle and not hurt me the way C did.  She promised me she'd clean-up after herself better than C did.  She promised me she'd be less enigmatic and easier to keep happy than C was.  But I was deceived.  Oh sure, as far as truth goes, it wasn't a complete lie.  To some extent she was more fun, more powerful, safer, and easier to maintain.  But it just wasn't good enough -- or at least it's not good enough now. I loved C++, some part of me still does, it's my first-love of programming languages and I recognize its raw power, its blazing speed, and its improvements over its predecessor.  But with today's hardware, at speeds we could only dream to conceive of twenty years ago, that need for speed -- at the cost of all else -- has died, and that has left my feelings for C++ moribund. If I ever need to write an operating system or a device driver, then I might need that speed.  But 99% of the time I don't.  I'm a business-type programmer and chances are 90% of you are too, and even the ones who need speed at all costs may be surprised by how much you sacrifice for that.   That's not to say that I don't want my software to perform, and it's not to say that in the business world we don't care about speed or that our job is somehow less difficult or technical.  There's many times we write programs to handle millions of real-time updates or handle thousands of financial transactions or tracking trading algorithms where every second counts.  But if I choose to write my code in C++ purely for speed chances are I'll never notice the speed increase -- and equally true chances are it will be far more prone to crash and far less easy to maintain.  Nearly without fail, it's the macro-optimizations you need, not the micro-optimizations.  If I choose to write a O(n2) algorithm when I could have used a O(n) algorithm -- that can kill me.  If I choose to go to the database to load a piece of unchanging data every time instead of caching it on first load -- that too can kill me.  And if I cross the network multiple times for pieces of data instead of getting it all at once -- yes that can also kill me.  But choosing an overly powerful and dangerous mid-level language to squeeze out every last drop of performance will realistically not make stock orders process any faster, and more likely than not open up the system to more risk of crashes and resource leaks. And that's when my love for C++ began to die.  When I noticed that I didn't need that speed anymore.  That that speed was really kind of a lie.  Sure, I can be super efficient and pack bits in a byte instead of using separate boolean values.  Sure, I can use an unsigned char instead of an int.  But in the grand scheme of things it doesn't matter as much as you think it does.  The key is maintainability, and that's where C++ failed me.  
I like to tell the other developers I work with that there's two levels of correctness in coding: Is it immediately correct? Will it stay correct? That is, you can hack together any piece of code and make it correct to satisfy a task at hand, but if a new developer can't come in tomorrow and make a fairly significant change to it without jeopardizing that correctness, it won't stay correct. Some people laugh at me when I say I now prefer maintainability over speed.  But that is exactly the point.  If you focus solely on speed you tend to produce code that is much harder to maintain over the long hall, and that's a load of technical debt most shops can't afford to carry and end up completely scrapping code before it's time.  When good code is written well for maintainability, though, it can be correct both now and in the future. And you know the best part is?  My new love is nearly as fast as C++, and in some cases even faster -- and better than that, I know C# will treat me right.  Her creators have poured hundreds of thousands of hours of time into making her the sexy beast she is today.  They made her easy to understand and not an enigmatic mess.  They made her consistent and not moody and amorphous.  And they made her perform as fast as I care to go by optimizing her both at compile time and a run-time. Her code is so elegant and easy on the eyes that I'm not worried where she will run to or what she'll pull behind my back.  She is powerful enough to handle all my tasks, fast enough to execute them with blazing speed, maintainable enough so that I can rely on even fairly new peers to modify my work, and rich enough to allow me to satisfy any need.  C# doesn't ask me to clean up her messes!  She cleans up after herself and she tries to make my life easier for me by taking on most of those optimization tasks C++ asked me to take upon myself.  Now, there are many of you who would say that I am the cause of my own grief, that it was my fault C++ didn't behave because I didn't pay enough attention to her.  That I alone caused the pain she inflicted on me.  And to some extent, you have a point.  But she was so high maintenance, requiring me to know every twist and turn of her vast and unrestrained power that any wrong term or bout of forgetfulness was met with painful reminders that she wasn't going to watch my back when I made a mistake.  But C#, she loves me when I'm good, and she loves me when I'm bad, and together we make beautiful code that is both fast and safe. So that's why I'm leaving C++ behind.  She says she's changing for me, but I have no interest in what C++0x may bring.  Oh, I'll still keep in touch, and maybe I'll see her now and again when she brings her problems to my door and asks for some attention -- for I always have a soft spot for her, you see.  But she's out of my house now.  I have three kids and a dog and a cat, and all require me to clean up after them, why should I have to clean up after my programming language as well?

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes with both software development models, a pattern appears: the Waterfall Model closely resembles the Tortoise in that it is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise can be quoted several times in the story saying "Slow and steady wins the race." This is the perfect mantra for the Waterfall Model, which is often seen as cumbersome and slow moving. Waterfall Model Phases Requirement Analysis & Definition This phase focuses on defining requirements for the project to be developed and determining whether the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output for this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine whether any value will be gained by completing the project. System Design This phase focuses primarily on the actual architectural design of the system and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level, so that actual implementation details are deferred to the implementation phase. However, major environmental decisions, such as hardware and platform choices, are typically made in this phase. Furthermore, the basic goal of this phase is to design the application at the system level, where classes, interfaces, and interactions are defined. Additionally, scalability, distribution and reliability should be considered in all of these decisions. The desired output for this phase is a functional design document that states all of the architectural decisions made for the project, along with diagrams such as sequence and class diagrams. Software Design This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document. Implementation / Coding This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and integrated as the system is put together into a whole. Software Integration & Verification This phase primarily focuses on testing each of the units of code developed, as well as testing the system as a whole. There are two basic types of testing in this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit and ensure that it performs its desired task. Integration testing tests the system as a whole, because it focuses on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected results and actual results. 
System Verification This phase primarily focuses on testing the system as a whole against the list of project requirements and the desired operating environment. Operation & Maintenance This phase primarily focuses on handing the completed project over to the customer so that they can verify that all of their original requirements have been met. This phase also validates the correctness of those requirements and whether any changes need to be made. In addition, any problems not resolved in the previous phase will be handled here. The Waterfall Model's linear and sequential methodology offers a project certain advantages and disadvantages. Advantages of the Waterfall Model: it is simple to implement and execute for individual projects and/or company-wide; it places limited demand on resources; and it puts a large emphasis on documentation. Disadvantages of the Waterfall Model: completed phases cannot be revisited, regardless of whether issues arise within the project; accurate requirements are never gathered prior to the completion of the requirements phase, due to the lack of clarification of the client's desires; small changes or errors that arise in applications may cause additional problems; the client cannot change any requirements once the requirements phase has been completed, leaving them no option to adjust as their desires change; excess documentation; and phases that are cumbersome and slow moving. Learn more about the major processes in the Software Development Life Cycle and the Waterfall Model. Conversely, the Hare shares similar traits with the Prototyping software development model, in which ideas are rapidly converted to basic working examples and subsequent changes are made to quickly align the project with the customers' desires as they are formulated and as the software strays from the customers' vision. The basic concept of prototyping is to eliminate the use of well-defined project requirements. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements become clearer. This process allows customers to feel their way around the application to ensure that exactly what they want is being developed. This model also works well for determining the feasibility of certain approaches to an application. Prototypes allow for quickly developing examples that implement specific functionality based on certain techniques. Advantages of Prototyping: active participation from users and customers; customers are allowed to change their minds while specifying requirements; customers get a better understanding of the system as it is developed; earlier bug/error detection; better communication with customers; the prototype could be used as the final production version; and reduced time to develop applications compared to the Waterfall method. Disadvantages of Prototyping: it promotes constantly redefining project requirements, which causes major system rewrites; there is potential for increased complexity as the scope of the system expands; the customer could mistake the prototype for the working version; and implementation compromises can increase complexity when applying updates and/or application fixes. When companies are trying to decide between the Waterfall model and the Prototyping model, they need to evaluate the benefits and disadvantages of both. 
Smaller companies, or projects with major time constraints, typically head toward a Prototyping approach because it can reduce the time needed to complete the project: the focus is on building the product rather than on defining requirements and scope before the project starts. On the other hand, companies with well-defined requirements and the time to generate proper documentation should steer toward a Waterfall model, because they are in a position to obtain clarified requirements and to design an optimal solution before coding begins.

    Read the article

  • Will IE 9 have a place in the heart of users?

    - by anirudha
    In an advertisement for IE9, Microsoft compares two products: IE9 and Chrome 6. Chrome 6 was no longer the current version, but I won't object too strongly; the ad may have been produced while 6 was current and IE9 was still in beta or RC. On the IE9 test-drive website they show plenty of demos meant to convince users that IE9 performs better than Chrome or Firefox. They avoid comparing against Firefox, which lately has not been in the news and search trends the way it was before its RC release, when many users were googling for it. To be fair, I myself found IE9 performing more smoothly than Chrome. But what does Microsoft do after IE9 ships? Nothing; they wait for IE10 instead of delivering regular updates the way Google Chrome and Firefox do. Does IE9 offer anything genuinely new for developers, however small? Mostly they promote things that matter to them, not to you. I don't hate IE out of prejudice; I simply review it as honestly as I can, based on my own experience with IE, Chrome and Firefox. IE9 still has no plugin ecosystem comparable to Firefox's. Firefox has Firebug, a superb utility for debugging code, and a whole family of extensions built around it: Firepicker for picking colors, Firebug Autocomplete for IntelliSense-like behaviour in the console panel, Pixel Perfect, FireQuery, the SitePoint Reference extension and many more that developers love to use. The IE9 developer tools are decent, but you cannot customize them, and there is no ready-made ecosystem of add-ons built on top of them the way there is for Firebug. Firefox also lets you customize themes and the UI itself; the more users and developers can tailor and contribute to a product, the better it becomes, and that culture of customization is a great strength of both Firefox and Chrome. If you read the MSDN posts about what's new in the IE9 developer tools and then look at what Firefox and Chrome offer, the comparison feels like a joke. In Firefox a single plugin can do a great deal; with IE you are limited to the built-in developer tools, whereas in Firefox you use Firebug and many other utilities that make development easier, faster and better. Mozilla's tagline for Firefox is high performance, easy customization, advanced security. IE can claim only the performance part; there is no comparison on the other two, and those are the things that make people love a product. The third point, security, is the one I care about most. Long ago IE6 was anything but hack-proof and was easily compromised, while Firefox stayed comparatively secure. I have seen many websites silently install software on a client's computer without the user knowing, so that it could track everything; sometimes it hijacked the homepage and set their website as the homepage, and sometimes every attempt to visit another site was redirected through theirs first. This was not ancient history; it was around 2008, when Firefox was already much better than IE6. If you have had a bad experience with any of this kind of software, share it with us; I would like to hear your voice. Even now, whether you are a user or a developer, Firefox is a good option, and I do not know why anyone keeps making the next version of IE. IE still has time to simply retire from the web. 
Firefox is not dismissive the way IE is: Mozilla still believes in user feedback, and Google likewise keeps the door open for feedback on Chrome. What has Microsoft built into IE based on user feedback? Nothing. They still prefer to teach users what they have made rather than think about what users need. Spend a few hours in Firefox and Chrome and you will see what the difference is. Here is what you get from each browser: as a user, IE gives you very little beyond marketing talk, and the next version of IE risks becoming the next IE6 for the web. In Google Chrome you find plugins, add-ons and customization to make the experience better, but in IE9 you cannot customize anything, not even the default themes. Firefox already has a great catalogue of plugins and add-ons to improve your experience of the web, while IE9 has nothing comparable; this suggests IE9 is not really for users, and that Chrome and Firefox give you a much better experience than IE. The next audience after users is developers. Every developer wants a smooth workflow that saves time rather than consumes it. Posts about IE9 list a number of improvements in its developer tools, but is one built-in developer tool enough for web development? Developers need more utilities to solve different kinds of puzzles, and IE9 never provides them, whereas in Firefox you have a utility for almost any task, small or large. Chrome gives you a similar experience, but IE9 ships no plugins or utilities to make our work faster; it even becomes a new headache for developers, because IE does not ship updates as quickly as the others. When a bug is reported in Firefox or Chrome, it is fixed and distributed in the next release very soon, but with IE you wait a long time; IE8 and IE9 had no official update between them at all. My conclusion is that there is no reason to use IE or to adopt version 9. It is really not for developers or users, whether newbies or experts. As a rule I want to warn you about IE, because I feel it is my responsibility to steer things in a good direction as far as I can. And if you are not sure what reason or profit Microsoft expects from IE9, ask why they have forgotten Luna (Windows XP) users: those systems are old, and the point is to push users to spend money on a new version of the OS. That is how the product is really being marketed. Compare the missions: Mozilla's mission is to promote openness, innovation and opportunity on the web, and Chrome's intent is plain every time we use it; IE9, by contrast, is promoted as a trick, because they want to tie something to the next version of Windows, and anyone impressed by the ads or the posts will purchase Windows as soon as they can. You may feel that I am simply opposed to IE9 and in favour of Chrome and Firefox, and you are right: I dislike IE from the heart, not from a pencil. But you will reach the same conclusion when you try the three major products I have described here: Chrome, Firefox and IE. Do not rely on the blogs, posts or articles provided by the vendor's own website. Open your eyes, read what they say, and ask whether it is really true; if you are confused, compare with something else. No one describes a product as honestly as the users who actually use it, rather than the people who build and market its features. Always keep your eyes open, use your own mind, and find the truth. Thanks for reading my post; good bye and take care.

    Read the article

  • BRE (Business Rules Engine) Data Services is out...!!!

    - by Vishal
    A few months ago we at Tellago open-sourced the BizTalk Data Services. We have meanwhile been working on other artifacts that come along with BizTalk Server, like the "Business Rules Engine".  We are happy to announce the first version of BRE Data Services. BRE Data Services follows the same concept we covered with BTS Data Services: it provides a RESTful OData-based API to interact with the Business Rules Engine via HTTP, using the Atom Publishing Protocol or JSON as the encoding mechanism.   In this first release, we mainly focused on browsing, querying and searching BRE artifacts via a RESTful interface. Along with that, we provide the functionality to execute business rules by inserting the Facts for policies via the IUpdatable implementation of WCF Data Services.   The BRE Data Services API provides a lightweight interface for managing Business Rules Engine artifacts such as Policies, Rules, Vocabularies, Conditions, Actions, Facts, etc. The following examples detail some of the features available in the current version of the API.   Basic Querying: Querying BRE Policies http://localhost/BREDataServices/BREMananagementService.svc/Policies Querying BRE Rules http://localhost/BREDataServices/BREMananagementService.svc/Rules Querying BRE Vocabularies http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies   Navigation: The BRE Data Services API also leverages WCF Data Services to enable navigation across related BRE objects. Querying a specific Policy http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName') Querying a specific Rule http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName') Querying all Rules under a Policy http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules Querying all Facts under a Policy http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Facts Querying all Actions for a specific Rule http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions Querying all Conditions for a specific Rule http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Conditions Querying a specific Vocabulary: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies('VocabName')   Implementation: With BRE Data Services, we also provide the ability to execute a particular policy via HTTP. There are a couple of ways you can do that through the API.   The first is through the Service Operations feature of WCF Data Services, in which you can execute the Facts by passing them in the URL itself. This is a very simple way of executing policies, given the current limitations and restrictions of WCF Data Services Service Operations (only primitive types can be passed as input parameters). Below is a code sample.  Below is a traced Request/Response message.  The second is through the IUpdatable interface of WCF Data Services. In this method, you first query the rule you want to execute, then insert Facts for that particular rule, and finally, when you call SaveChanges() on the IUpdatable interface, it executes the policy with the facts you inserted at runtime. Below is a sample of client-side code. 
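A minimal sketch of what such client-side code might look like is shown below. This is an illustration rather than the library's official sample: it assumes a hand-written Fact class exposing the ID, FactType and FactInstance properties that appear in the AtomPub trace that follows, and the service URL and values are taken from the examples in this post:
    using System;
    using System.Data.Services.Client;

    // Hand-written client-side type mirroring the Fact entity exposed by the service
    public class Fact
    {
        public string ID { get; set; }           // policy name
        public string FactType { get; set; }     // schema of the fact instance
        public string FactInstance { get; set; } // serialized XML fact
    }

    class ExecutePolicySample
    {
        static void Main()
        {
            var context = new DataServiceContext(
                new Uri("http://localhost/BREDataServices/BREMananagementService.svc/"));

            var fact = new Fact
            {
                ID = "TestPolicy",
                FactType = "TestSchema",
                FactInstance = "<ns0:LoanStatus xmlns:ns0=\"http://tellago.com\">" +
                               "<Age>10</Age><Status>true</Status></ns0:LoanStatus>"
            };

            // Attach the fact to the Facts entity set and mark it as modified;
            // SaveChanges() issues the MERGE request that triggers the policy execution
            context.AttachTo("Facts", fact);
            context.UpdateObject(fact);
            context.SaveChanges();
        }
    }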
Due to a limitation in the current version of WCF Data Services, there is no way to return updates that happen on the service side back to the client via the SaveChanges() method. Here we are executing the rule by passing serialized XML as Facts, and no changes are made to any data that we could query back to fetch the results. This is overcome by the first way of executing policies, which is to execute them as a Service Operation call.     This actually generates an AtomPub message, shown below:
    POST /Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/$batch HTTP/1.1
    User-Agent: Microsoft ADO.NET Data Services
    DataServiceVersion: 1.0;NetFx
    MaxDataServiceVersion: 2.0;NetFx
    Accept: application/atom+xml,application/xml
    Accept-Charset: UTF-8
    Content-Type: multipart/mixed; boundary=batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
    Host: localhost:8080
    Content-Length: 1481
    Expect: 100-continue

    --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
    Content-Type: multipart/mixed; boundary=changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf

    --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf
    Content-Type: application/http
    Content-Transfer-Encoding: binary

    MERGE http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy') HTTP/1.1
    Content-ID: 4
    Content-Type: application/atom+xml;type=entry
    Content-Length: 927

    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
      <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="Tellago.BRE.REST.Resources.Fact" />
      <title />
      <author>
        <name />
      </author>
      <updated>2011-01-31T20:09:15.0023982Z</updated>
      <id>http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy')</id>
      <content type="application/xml">
        <m:properties>
          <d:FactInstance>&lt;ns0:LoanStatus xmlns:ns0="http://tellago.com"&gt;&lt;Age&gt;10&lt;/Age&gt;&lt;Status&gt;true&lt;/Status&gt;&lt;/ns0:LoanStatus&gt;</d:FactInstance>
          <d:FactType>TestSchema</d:FactType>
          <d:ID>TestPolicy</d:ID>
        </m:properties>
      </content>
    </entry>
    --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf--
    --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7--
Installation: The installation of BRE Data Services is pretty straightforward. · Create a new IIS website, say BREDataServices. · Download the source code from Tellago's CodePlex workspace and copy the content of Tellago.BRE.REST.ServiceHost to the physical location of the website created above. · The appPool account running the website should have admin access to the BizTalkRuleEngineDb database. · Then right-click BREManagementService.svc in the IIS Content View for the website and voilà...     Conclusion: The BRE Data Services API is an experiment intended to bring the capabilities of RESTful/OData-based services to traditional BTS/BRE solutions. Future releases will target technologies like BAM and the ESB Toolkit. This version has been tested with various versions of BizTalk Server, and we have uploaded the source code to Tellago's DevLabs workspace on CodePlex. I hope you guys enjoy this release. Keep an eye on our new releases at Tellago's CodePlex. We are working on various other BizTalk artifacts like BAM and the ESB Toolkit.     Till then, happy BizzRuling...!!!     Thanks,   Vishal Mody

    Read the article

  • The Future of Project Management is Social

    - by Natalia Rachelson
    A guest post by Kazim Isfahani, Director, Product Marketing, Oracle Rapid Ascent. Breakneck Speed. Lightning Fast. Perhaps even overwhelming. No matter which set of adjectives we use to describe it, social media’s rise into the enterprise mainstream has been unprecedented. Indeed, the big 4 social media powerhouses (Facebook, Google+, LinkedIn, and Twitter), have nearly 2 Billion users between them. You may be asking (as you should really) “That’s all well and good for the consumer, but for me at my company, what’s your point? Beyond the fact that I can check and post updates, that is.” Good question, kind sir. Impact of Social and Collaboration on Project Management I’ll dovetail this discussion to the project management realm, since that’s what I’m writing about. Speed is a big challenge for project-driven organizations. Anything that can help speed up project delivery - be it a new product introduction effort or a geographical expansion project - fast is a good thing. So where does this whole social thing fit particularly since there are already a host of tools to help with traditional project execution? The fact is companies have seen improvements in their productivity by deploying departmental collaboration and other social-oriented solutions. McKinsey’s survey on social tools shows we have reached critical scale: 72% of respondents report that their companies use at least one and over 40% say they are using social networks and blogs. We don’t hear as much about the impact of social media technologies at the project and project manager level, but that does not mean there is none. Consider the new hire. The type of individual entering the workforce and executing on projects is a generation of worker expecting visually appealing, easy to use and easy to understand technology meshing hand-in-hand with business processes. Consider the project manager. The social era has enhanced the role that the project manager must play. Today’s project manager must be a supreme communicator, an influencer, a sympathizer, a negotiator, and still manage to keep all stakeholders in the loop on project progress. Social tools play a significant role in this effort. Now consider the impact to the project team. The way that a project team functions has changed, with newer, social oriented technologies making the process of information dissemination and team communications much more fluid. It’s clear that a shift is occurring where “social” is intersecting with project management. The Rise of Social Project Management We refer to the melding of project management and social networking as Social Project Management. 
Social Project Management is based upon the philosophy that the project team is one part of an integrated whole, and that valuable and unique abilities exist within the larger organization. For this reason, Social Project Management systems should be integrated into the collaborative platform(s) of an organization, allowing communication to proceed outside the project boundaries. What makes social project management "social" is an implicit awareness where distributed teams build connected links in ways that were previously restricted to teams that were co-located. Just as critical, Social Project Management embraces the vision of seamless online collaboration within a project team, but also provides for, (and enhances) the use of rigorous project management techniques. Social Project Management acknowledges that projects (particularly large projects) are a social activity - people doing work with people, for other people, with commitments to yet other people. The more people (larger projects), the more interpersonal the interactions, and the more social affects the project. The Epitome of Social - Fusion Project Portfolio Management If I take this one level further to discuss Fusion Project Portfolio Management, the notion of Social Project Management is on full display. With Fusion Project Portfolio Management, project team members have a single place for interaction on projects and access to any other resources working within the Fusion ERP applications. This allows team members the opportunity to be informed with greater participation and provide better information. The application’s the visual appeal, and highly graphical nature makes it easy to navigate information. The project activity stream adds to the intuitive user experience. The goal of productivity is pervasive throughout Fusion Project Portfolio Management. Field research conducted with Oracle customers and partners showed that users needed a way to stay in the context of their core transactions and yet easily access social networking tools. This is manifested in the application so when a user executes a business process, they not only have the transactional application at their fingertips, but also have things like e-mail, SMS, text, instant messaging, chat – all providing a number of different ways to interact with people and/or groups of people, both internal and external to the project and enterprise. But in the end, connecting people is relatively easy. The larger issue is finding a way to serve up relevant, system-generated, actionable information, in real time, which will allow for more streamlined execution on key business processes. Fusion Project Portfolio Management’s design concept enables users to create project communities, establish discussion threads, manage event calendars as well as deliver project based work spaces to organize communications within the context of a project – all within a secure business environment. We’d love to hear from you and get your thoughts and ideas about how Social Project Management is impacting your organization. To learn more about Oracle Fusion Project Portfolio Management, please visit this link

    Read the article

  • Spacewalk 2.0 provided to manage Oracle Linux systems

    - by wcoekaer
    Oracle Linux customers have a few options to manage and provision their servers. We provide a license to use Oracle Enterprise Manager's Linux OS management, monitoring and provisioning features without additional cost for every server that has an Oracle Linux support subscription. So there is no additional pack to license and no additional per server cost, it's all included in our Basic, Premier and Systems support subscriptions. The nice thing with Oracle Enterprise Manager is that you end up with a single management product that can manage all aspects of your software stack. You have complete insight into the applications running, you have roles and responsibilities, you have third party connectors for storage or other products and it makes it very easy and convenient to correlate data and events when something happens. If you use Oracle VM as well, you end up with a complete cloud portal with selfservice, chargeback, etc... Another, much simpler option, is just using yum. It is very easy to take a server and create directories and expose these through apache as repositories. You can have a simple yum config on each server pointing to a few specific repositories. It requires some manual effort in terms of creating directories, downloading packages and creating local repo files but it's easy to do and for many people a preferred solution. There are also a good number of customers that just connect their servers directly to ULN or to our free update server public-yum. Just to re-iterate, our public-yum servers have all the errata and updates available for free. Now we added another option. Many of our customers have switched from a competing Linux vendor and they had familiarity with their management tools. Switching to Oracle for support is very easy since we don't require changes to the installed servers but we also want to make sure there is a very easy and almost transparent switch for the management tools as well. While Oracle Enterprise Manager is our preferred way of managing systems, we now are offering Spacewalk 2.0 to our customers. The community project can be found here. We have made a few changes to ensure easy and complete support for Oracle Linux, tested it with public-yum, etc.. You can find the rpms in our public-yum repos at http://public-yum.oracle.com/repo/OracleLinux/OL6/. There are repositories for spacewalk server and then for each version (OL5,OL6) and architecture (x86 and x86-64) we have the client repositories as well. Spacewalk itself is only made available for OL6 x86-64. Documentation can be found here. I set it up myself and here are some quick steps on how you can get going in just a matter of minutes: Spacewalk Server Installation : 1) Installing an Oracle Database Use an existing Oracle Database or install a new Oracle Database (Standard or Enterprise Edition) [at this time use 11g, we will add support for 12c in the near future]. This database can be installed on the spacewalk server or on a separate remote server. While Oracle XE might work to create a small sample POC, we do not support the use of Oracle XE, spacewalk repositories can become large and create a significant database workload. Customers can use their existing database licenses, they can download the database with a trial licence from http://edelivery.oracle.com or Oracle Linux subscribers (customers) will be allowed to use the Oracle Database as a spacewalk repository as part of their Oracle Linux subscription at no additional cost. 
|NOTE : Spacewalk requires the database to be configured with the UTF8 character set. |Installation will fail if your database does not use UTF8. |To verify that your database is configured correctly, run the following command in sqlplus: | |select value from nls_database_parameters where parameter='NLS_CHARACTERSET'; |This should return 'AL32UTF8' 2) Configure the database schema for Spacewalk Ideally, create a tablespace in the database to hold the Spacewalk schema tables/data: create tablespace spacewalk datafile '/u01/app/oracle/oradata/orcl/spacewalk.dbf' size 10G autoextend on; Create the database user spacewalk (or use some other schema name) in sqlplus. example : create user spacewalk identified by spacewalk; grant connect, resource to spacewalk; grant create table, create trigger, create synonym, create view, alter session to spacewalk; grant unlimited tablespace to spacewalk; alter user spacewalk default tablespace spacewalk; 4) Spacewalk installation and configuration The Spacewalk server requires an Oracle Linux 6 x86-64 system. Clients can be Oracle Linux 5 or 6, both 32- and 64-bit. The server is only supported on OL6/64-bit. The easiest way to get started is to do a 'Minimal' install of Oracle Linux on a server and configure the yum repository to include the Spacewalk repo from public-yum. Once you have a system with a minimal install, modify your yum repo to include the Spacewalk repo. Example : edit /etc/yum.repos.d/public-yum-ol.repo and add the following lines at the end of the file : [spacewalk] name=spacewalk baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/spacewalk20/server/$basearch/ gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 gpgcheck=1 enabled=1 Install the following prerequisite packages on your Spacewalk server : oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64 oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64 rpm -ivh oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64 rpm -ivh oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64 The above RPMs can be found on the Oracle Technology Network website : http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html As the root user, configure the library path to include the Oracle Instant Client libraries : cd /etc/ld.so.conf.d echo /usr/lib/oracle/11.2/client64/lib > oracle-instantclient11.2.conf ldconfig Install Spacewalk : # yum install spacewalk-oracle The above yum command should download and install all required packages to run Spacewalk on your local server. | NOTE : if you did a full, desktop or workstation installation, | you have to remove the JTA package | BEFORE installing spacewalk-oracle (rpm -e --nodeps jta) Once the installation completes, simply run the Spacewalk configuration tool and you are all set. (Make sure to run the command with both arguments.) spacewalk-setup --disconnected --external-db Answer the questions during the setup; ensure you provide the database user (example : spacewalk), password (example : spacewalk) and database server hostname (the standard hostname of the server on which you have deployed the Oracle database). At the end of the setup script, your Spacewalk server should be fully configured and you can log into the web portal. Use your favorite browser to connect to the website : http://[spacewalkserverhostname] The very first action will be to create the main admin account.
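If you want to script the database pre-checks before handing over to spacewalk-setup, a minimal sketch along these lines can help. It assumes the cx_Oracle Python driver on top of the Instant Client installed above, and reuses the example spacewalk/spacewalk credentials; the DSN is a placeholder for your own database host.

import cx_Oracle  # assumes the Oracle Instant Client installed as described above

# Assumed connection details - replace with your own database host, port and service name
dsn = cx_Oracle.makedsn("dbhost.example.com", 1521, service_name="orcl")
conn = cx_Oracle.connect(user="spacewalk", password="spacewalk", dsn=dsn)
cur = conn.cursor()

# Spacewalk requires an AL32UTF8 database; bail out early if that is not the case
cur.execute("select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET'")
charset = cur.fetchone()[0]
if charset != "AL32UTF8":
    raise SystemExit("Database character set is %s, expected AL32UTF8" % charset)

# Confirm the dedicated tablespace is visible to the schema owner before running spacewalk-setup
cur.execute("select tablespace_name from user_tablespaces where tablespace_name = 'SPACEWALK'")
if cur.fetchone() is None:
    print("Warning: no SPACEWALK tablespace visible to this user")
else:
    print("Pre-checks passed - ready for: spacewalk-setup --disconnected --external-db")
conn.close()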

    Read the article

  • Emtel Knowledge Series - Q2/2014

    From Cyber Island to Smart Mauritius Cyber Island? Smart Mauritius? - What is Emtel talking about? "With the majority of the population living in urban environments today, the concept of "Smart Cities" has become an urgent necessity. "Smart Cities" refer to an urban transformation which, by using latest ICT technologies makes cities more efficient. Many Governments are setting out ambitious plans to build the cities of the future based on massive connectivity, high bandwidth communications, intelligent sensors and analysis of huge volumes of data. Various researches have shown four key enablers for smart city success - Government leadership, suitable technology infrastructure, solid public-private partnerships and engaged citizens. It is around these enabling factors that telecoms companies can play a vital role in assisting governments to deliver on the smart city vision." The Emtel Knowledge Series is part of Emtel's 25th anniversary celebrations throughout the year, and the master of ceremonies, Kim Andersen, mentioned that there will be more upcoming events on a quarterly basis. As a representative of the Mauritius Software Craftsmanship Community (MSCC), I had absolutely no hesitation about joining in again. Following my visit to the first Emtel Knowledge Series workshop back in February this year, it was great to have another opportunity to meet and exchange with technology experts. But quite frankly, what is it with those buzzwords... As far as I remember, and as it was mentioned, "Cyber Island" is an old initiative from around 2005/2006 which was refreshed in 2010. It stands for the government's commitment here in Mauritius to Information & Communication Technologies (ICT) as an essential factor of growth. Actually, the first promotional period of Cyber Island brought me here, but that's another story. The venue and its own problems Like last time, the event was organised and held at the Conference Hall at Cyber Tower I in Ebene. As I've been working there for some years, I know about the frustrating situation of finding a proper parking space. So, does Smart Mauritius include better solutions for finding parking spaces? Maybe; let's see whether I will be able to answer that question at the end of the article. Anyway, after circling the tower almost twice, I finally got a decent space to park the car without risking a ticket or any damage. International speakers and their experience Once again, Emtel did a great job of getting international expertise onto the stage to share their experience and vision of this kind of undertaking. Personally, I really appreciated the fact that they were speakers of global reach who could share first-hand knowledge. Johan Gott spoke about the fundamental change that the Swedish government ignited in order to move their society and workers' environment away from heavy industry towards a knowledge-based approach. He also spoke about the effort to transform New York City into a greener and more efficient Smart City. Given modern technology, he also advised that any kind of available Big Data should be opened to the general public - this openness would provide a playground for anyone to come up with new ideas and, most probably, solid solutions that no one else had thought of before. 
Emtel Knowledge Series on moving from Cyber Island to Smart Mauritius Later during the afternoon, that exact statement regarding openness to and transparency of government-owned Big Data was emphasised again by the Danish speaker Kim Andersen and his former colleague Mika Jantunen from Finland. Mika continued to underline the important role of government in providing a solid foundation for a knowledge-based society and mentioned that Finnish citizens have a constitutional right to broadband connectivity. Next to free higher (tertiary) education, Finland has already produced a good number of innovations, among them: first country to grant voting rights to women, free higher education, a constitutional right to broadband connectivity, Nokia, Linux, Angry Birds, sauna and others... General access to the internet via broadband and/or mobile connectivity is surely a key factor towards Smart Cities, or better said Smart Mauritius, given the island's dimensions and population size. CTO Paul Valette gave the audience a brief overview of the essential role that Emtel will play in moving Mauritius forward towards a knowledge-based and innovation-driven environment for its citizens. What I have seen looks really promising, and with recently published information that Mauritius has 127% mobile penetration - meaning more than one mobile phone, smartphone or tablet per person - it will be crucial to have the right infrastructure for these connected devices. How would it be possible to achieve a knowledge-based society? YouTube to the rescue! Seriously, gaining more knowledge will require fast access to educational course material, as explained by Dr Kaviraj Sukon, General Director of the Open University of Mauritius. According to him, a good number of high-profile universities in the world have opened their course libraries to the general public, among them edX, Coursera and Open University. Nowadays, you're actually able to study for and earn a BSc or even an MSc certification at your own pace - no need to attend classes on campus. It was really impressive to see the number of available hours - more than enough for a life-long learning experience! Networking in the name of MSCC As briefly mentioned above, I set out to combine two approaches for this workshop: of course, getting the latest information and updates on available Emtel services, especially for my business here on the west coast of the island, but also meeting and greeting new people for the MSCC. And I think it was very positive on both sides. Let me quickly describe some of the key aspects that happened during the day: Met with Arnaud Meslier and Kellie, both from Microsoft, to swap the latest information on IT events; in the process I got an invite to the Microsoft Windows Phone 8.1 Dev Camp. Got in touch with Arvin Lockee, Emtel, to check our options for meeting the data team, and seized the opportunity to arrange a tour of the Emtel Data Centre. Had a great chat with Avinash Meetoo, Knowledge 7, Kim Andersen and Mika Jantunen about the situation of teaching and learning in general, and specifically in the private sector here in Mauritius. Additionally, a number of other interesting chats... Once again, I'm catching up on a couple of business cards in order to provide more background information about the MSCC and to create better awareness of the MSCC within local IT businesses. There is more to come soon! Resume of the day The number of attendees at this event had doubled or even tripled compared to last time. 
The whole organisation had improved massively, and the combination of presentations and summarising panel discussions was better than at the previous workshop back in February. Overall, once again a well-organised workshop, and I'm already looking forward to joining the next one in Q3. Update: At the end of July we finally managed to visit the Emtel Data Centre in Arsenal. It was an interesting opportunity for some of our MSCC members.

    Read the article

  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
    Background Designing Stored Procedures that are safe for multiple subscribers (to call simultaneously) can be challenging.  For example let’s say that you want multiple worker processes to poll a shared work queue that’s encapsulated as a SQL Table. This is a common scenario and through experience you’ll find that you want to use Table Hints to prevent unwanted locking when performing simultaneous queries on the same table. There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both NOLOCK and READPAST table hints allow you to SELECT from a table without placing a LOCK on that table. However, SELECTs with the READPAST hint will ignore any records that are locked due to being updated/inserted (or otherwise “dirty”), whereas a SELECT with NOLOCK ignores all locks including dirty reads. For the initial update of the flag (that marks the record as available for subscription) I don’t use the NOLOCK Table Hint because I want to be sensitive to the “active” records in the table and I want to exclude them.  I use an Update Lock (UPDLOCK) in conjunction with a WHERE clause that uses a sub-select with a READPAST Table Hint in order to explicitly lock the records I’m updating (UPDLOCK) but not place a lock on the table when selecting the records that I’m going to update (READPAST). UPDATES should be allowed to lock the rows affected because we’re probably changing a flag on a record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use the UPDLOCK to guard against lock escalation. A SELECT to check for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue (SQL table). It is expected that more than one worker process (or server) might try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can because another process has a shared read lock in place. Thus without the UPDLOCK hint the result would be a lock escalation deadlock; however with the UPDLOCK hint this condition is mitigated against. Note that using the READPAST table hint requires that you also set the ISOLATION LEVEL of the transaction to be READ COMMITTED (rather than the default of SERIALIZABLE). Guidance In the Stored Procedure that returns records to the multiple subscribers: Perform the UPDATE first. Change the flag that makes the record available to subscribers.  Additionally, you may want to update a LastUpdated datetime field in order to be able to check for records that “got stuck” in an intermediate state or for other auditing purposes. In the UPDATE statement use the (UPDLOCK) Table Hint on the UPDATE statement to prevent lock escalation. In the UPDATE statement also use a WHERE Clause that uses a sub-select with a (READPAST) Table Hint to select the records that you’re going to update. In the UPDATE statement use the OUTPUT clause in conjunction with a Temporary Table to isolate the record(s) that you’ve just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and to get the records’ identifiers within the same operation. Finally do a set-based SELECT on the main Table (using the Temporary Table to identify the records in the set) with either a READPAST or NOLOCK table hint.  
Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data that you want to return to the multiple subscribers; or use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person’s last name).  NOLOCK is generally the better fit in this part of the scenario. See the following as an example: CREATE PROCEDURE [dbo].[usp_NewCustomersSelect] AS BEGIN -- OVERRIDE THE DEFAULT ISOLATION LEVEL SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- SET NOCOUNT ON SET NOCOUNT ON -- DECLARE TEMP TABLE -- Note that this example uses CustomerId as an identifier; -- you could just use the Identity column Id if that’s all you need. DECLARE @CustomersTempTable TABLE ( CustomerId NVARCHAR(255) ) -- PERFORM UPDATE FIRST -- [Customers] is the name of the table -- [Id] is the Identity Column on the table -- [CustomerId] is the business document key used to identify the -- record globally, i.e. in other systems or across SQL tables -- [Status] is INT or BIT field (if the status is a binary state) -- [LastUpdated] is a datetime field used to record the time of the -- last update UPDATE [Customers] WITH (UPDLOCK) SET [Status] = 1, [LastUpdated] = GETDATE() OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable WHERE ([Id] = (SELECT TOP 100 [Id] FROM [Customers] WITH (READPAST) WHERE ([Status] = 0) ORDER BY [Id] ASC)) -- PERFORM SELECT FROM ENTITY TABLE SELECT [C].[CustomerId], [C].[FirstName], [C].[LastName], [C].[Address1], [C].[Address2], [C].[City], [C].[State], [C].[Zip], [C].[ShippingMethod], [C].[Id] FROM [Customers] AS [C] WITH (NOLOCK), @CustomersTempTable AS [TEMP] WHERE ([C].[CustomerId] = [TEMP].[CustomerId]) END In a system that has been designed to have multiple status values for records that need to be processed in the Work Queue it is necessary to have a “Watch Dog” process by which “stale” records in intermediate states (such as “In Progress”) are detected, i.e. a [Status] of 0 = New or Unprocessed; a [Status] of 1 = In Progress; a [Status] of 2 = Processed; etc.. Thus, if you have a business rule that states that the application should only process new records if all of the old records have been processed successfully (or marked as an error), then it will be necessary to build a monitoring process to detect stalled or stale records in the Work Queue, hence the use of the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled / stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records I would recommend using the same kind of lock semantics as suggested above. 
The example below looks for records that have been in the “In Progress” state ([Status] = 1) for greater than 60 seconds: CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog] AS BEGIN -- TO OVERRIDE THE DEFAULT ISOLATION LEVEL SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- SET NOCOUNT ON SET NOCOUNT ON DECLARE @MaxWait int; SET @MaxWait = 60 IF EXISTS (SELECT 1 FROM [dbo].[Customers] WITH (READPAST) WHERE ([Status] = 1) AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait)) BEGIN SELECT 1 AS [IsWatchDogError] END ELSE BEGIN SELECT 0 AS [IsWatchDogError] END END Downloads The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures and one to populate the sample database with 10,000 sample records.  I am very grateful to Red-Gate software for their excellent SQL Data Generator tool which enabled me to create these sample records in no time at all. References http://msdn.microsoft.com/en-us/library/ms187373.aspx http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492 http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx
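To round off the excerpt, here is a sketch of what a polling subscriber might look like on the client side, using Python and pyodbc. The stored procedure names come from the examples above; the connection string, poll interval and the simple print-based handling are illustrative assumptions to adapt to your own environment.

import time
import pyodbc  # assumes an ODBC driver for SQL Server is installed

# Hypothetical connection string - point it at your own server and database
CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=WorkQueue;Trusted_Connection=yes"

def poll_forever(poll_seconds=5):
    # autocommit so the claim performed inside the proc is not held open by the client
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    cursor = conn.cursor()
    while True:
        # The proc flips the Status flag under UPDLOCK/READPAST and returns the claimed rows
        cursor.execute("EXEC [dbo].[usp_NewCustomersSelect]")
        for row in cursor.fetchall():
            print("Processing customer", row.CustomerId)
        # Separate watchdog check for stale 'In Progress' records
        cursor.execute("EXEC [dbo].[usp_NewCustomersWatchDog]")
        if cursor.fetchone().IsWatchDogError:
            print("Watchdog flagged stale records - alert an operator")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    poll_forever()

Because the claiming and the watchdog are both done server-side, several copies of this worker can run on different machines without coordinating with each other.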

    Read the article

  • CodePlex Daily Summary for Monday, June 02, 2014

    CodePlex Daily Summary for Monday, June 02, 2014Popular ReleasesPortable Class Library for SQLite: Portable Class Library for SQLite - 3.8.4.4: This pull request from mattleibow addresses an issue with custom function creation (define functions in C# code and invoke them from SQLite as id they where regular SQL functions). Impact: Xamarin iOSTweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines- Added all the parameters available from the Timeline Endpoints in Tweetinvi. - This is available for HomeTimeline, UserTimeline, MentionsTimeline // Simple query var tweets = Timeline.GetHomeTimeline(); // Create a parameter for queries with specific parameters var timelineParameter = Timeline.GenerateHomeTimelineRequestParameter(); timelineParameter.ExcludeReplies = true; timelineParameter.TrimUser = true; var tweets = Timeline.GetHomeTimeline(timelineParameter); Tweets- Add mis...Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...Tooltip Web Preview: ToolTip Web Preview: Version 1.0Database Helper: Release 1.0.0.0: First Release of Database HelperCoMaSy: CoMaSy1.0.2: !Contact Management SystemImage View Slider: Image View Slider: This is a .NET component. We create this using VB.NET. Here you can use an Image Viewer with several properties to your application form. We wish somebody to improve freely. Try this out! Author : Steven Renaldo Antony Yustinus Arjuna Purnama Putra Andre Wijaya P Martin Lidau PBK GENAP 2014 - TI UKDWAspose for Apache POI: Missing Features of Apache POI WP - v 1.1: Release contain the Missing Features in Apache POI WP SDK in Comparison with Aspose.Words for dealing with Microsoft Word. What's New ?Following Examples: Insert Picture in Word Document Insert Comments Set Page Borders Mail Merge from XML Data Source Moving the Cursor Feedback and Suggestions Many more examples are yet to come here. Keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.SEToolbox: 01.032.014 Release 1: Added fix when loading game Textures for icons causing 'Unable to read beyond the end of the stream'. Added new Resource Report, that displays all in game resources in a concise report. Added in temp directory cleaner, to keep excess files from building up. Fixed use of colors on the windows, to work better with desktop schemes. Adding base support for multilingual resources. This will allow loading of the Space Engineers resources to show localized names, and display localized date a...ClosedXML - The easy way to OpenXML: ClosedXML 0.71.2: More memory and performance improvements. Fixed an issue with pivot table field order.Vi-AIO SearchBar: Vi – AIO Search Bar: Version 1.0Top Verses ( Ayat Emas ): Binary Top Verses: This one is the bin folder of the component. the .dll component is inside.Traditional Calendar Component: Traditional Calender Converter: Duta Wacana Christian University This file containing Traditional Calendar Component and Demo Aplication that using Traditional Calendar Component. 
This component made with .NET Framework 4 and the programming language is C# .SQLSetupHelper: 1.0.0.0: First Stable Version of SQL SetupComposite Iconote: Composite Iconote: This is a composite has been made by Microsoft Visual Studio 2013. Requirement: To develop this composite or use this component in your application, your computer must have .NET framework 4.5 or newer.Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: - Int/short Set methods of WritablePixelCollection are now unsigned. - The Q16 build no longer uses HDRI, switch to the new Q16-HDRI build if you need HDRI.fnr.exe - Find And Replace Tool: 1.7: Bug fixes Refactored logic for encoding text values to command line to handle common edge cases where find/replace operation works in GUI but not in command line Fix for bug where selection in Encoding drop down was different when generating command line in some cases. It was reported in: https://findandreplace.codeplex.com/workitem/34 Fix for "Backslash inserted before dot in replacement text" reported here: https://findandreplace.codeplex.com/discussions/541024 Fix for finding replacing...VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: changes NEW: Added Support for 'GokoImage.com' links NEW: Added Support for 'ViperII.com' links NEW: Added Support for 'PixxxView.com' links NEW: Added Support for 'ImgRex.com' links NEW: Added Support for 'PixLiv.com' links NEW: Added Support for 'imgsee.me' links NEW: Added Support for 'ImgS.it' linksToolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2014.5.28): XrmToolbox improvement XrmToolBox updates (v1.2014.5.28)Fix connecting to a connection with custom authentication without saved password Tools improvement New tool!Solution Components Mover (v1.2014.5.22) Transfer solution components from one solution to another one Import/Export NN relationships (v1.2014.3.7) Allows you to import and export many to many relationships Tools updatesAttribute Bulk Updater (v1.2014.5.28) Audit Center (v1.2014.5.28) View Layout Replicator (v1.2014.5.28) Scrip...Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.10: Fix for Issue #20875 - echo switch doesn't work for CSS CSS should honor the SASS source-file comments JS should allow multi-line comment directivesNew Projects[ISEN] Rendu de projet Naughty3Dogs - Pong3D: Pong3D est un jeu qui reprend le principe classique du Pong en le portant dans un environnement 3D à l'aide du langage c# et du moteur Unity3DBootstrap for MVC: Bootstrap for MVC.F. A. Q. - Najczesciej zadawane pytania: FAQForuMvc: Technifutur short projecthomework456: no.iStoody: Studies organize solution, available through app for Windows and Windows Phone.liaoliao: ???????????Price Tracker: Allows a user to track prices based on parsed emailsRoslynEval: RoslynRx Hub: Rx Hub provides server side computation which initiate by subscriber requestSharepoint Online AppCache Reset: We are an IT resource company providing Virtual IT services and custom and opensource programs for everyday needs. UnitConversionLib : Smart Unit Conversion Library in C#: Conversion of units, arithmetic operation and parsing quantities with their units on run time. Smart unit converter and conversion lib for physical quantities,

    Read the article

  • Not attending the LUGM mini-meetup - 05. Oct 2013

    Not attending a meeting of the LUGM can be fun, too. It's getting a bit of a habit that Ish is organising small gatherings, aka mini-meetups, of the Linux User Group Mauritius/Meta (LUGM) almost every Saturday. There they mainly discuss and talk about various aspects of using Linux as one's main operating system and the possibilities it opens up. On top of that, of course, some tips & tricks about mastering the command line and initial steps in scripting or even writing HTML. In general, it sounds like a good portion of fun and a great spirit of community. Unfortunately, I'm usually quite busy with private and family matters during the weekend, and so I had already signalled that I wouldn't be around. Well, at least not physically... But this Saturday a couple of things worked out faster than expected and so I was hanging out on my machine. I made virtual contact with one of Pawan's messages over on Facebook... And somehow that kicked off some kind of online fun around the basic configuration of Apache HTTPd 2.2.x and PHP 5.x, and how to improve the overall performance of a newly installed blog based on WordPress. Default configuration files Nitin's website finally came alive, and despite the dark theme and the hidden Apple 'fanboy' advertisement I was more interested in the technical situation. As with any new installation there is usually quite some adjustment to be done, and Nitin's page was no exception. Unfortunately, out-of-the-box installations of Apache httpd and PHP are too verbose and expose too much information under the hood. You might think that this isn't really a problem at all; well, think about it again after reading this article completely. First, I checked the HTTP response headers of Nitin's page - using either Chrome Developer Tools or the Firefox Web Developer extension - and based on that I advised him to lower the noise levels a little bit. It's not really necessary for detailed information about the web server software and scripting language to be published in every response. Quite a number of script kiddies and exploits actually check for version specifics prior to an attack, so removing at least the version details hardens the system a little bit. In particular, I'm talking about these response values: Server X-Powered-By How to achieve that? By tweaking the configuration files... Namely, we are going to look into the following ones: apache2.conf httpd.conf .htaccess php.ini The above list contains some additional files that I talk about in the next paragraphs; anyway, those are the ones involved. Tweaking Apache Open your favourite text editor and start to modify the apache2.conf. Perhaps you might like to have a quick peek at the file first to see whether it is necessary to adjust it or not. The following is a handy combination of commands to get an overview of your active directives: # sudo grep -v '#' /etc/apache2/apache2.conf | grep -v '^$' | less There, keep an eye on these two Apache directives: ServerSignature Off ServerTokens Prod If that's not the case, change them as highlighted above. In order to activate your modifications you have to restart the Apache httpd server. On Debian and Ubuntu you might use apache2ctl for that; on other distributions you might have to use service or run the init scripts again: # sudo apache2ctl configtest Syntax OK # sudo apache2ctl restart Refresh your website and check the HTTP response header. 
Tweaking PHP5 (a little bit) Next, check your php.ini file with the following statement: # sudo grep -v ';' /etc/php5/apache2/php.ini | grep -v '^$' | less And check the value of expose_php = Off Again, if it's not as highlighted, change it... Some more Apache love Okay, back to Apache: it might also be interesting to improve browser caching and remove more obsolete information. When you run your website against the usual performance checks like Google Page Speed and Yahoo YSlow, you might see those checkpoints with bad grades on a standard, default configuration. Well, this can be fixed easily. Configure entity tags (ETags) ETags are only interesting when you run your websites on a farm of multiple web servers. Removing this data for your static resources is very simple in Apache. As we are going to deal with the HTTP response header information, you have to ensure that Apache is capable of manipulating it. First, check your enabled modules: # sudo ls -al /etc/apache2/mods-enabled/ | grep headers In case the 'headers' module is not listed, you have to enable it from the available ones: # sudo a2enmod headers Second, check your httpd.conf file (in case it exists): # sudo grep -v '#' /etc/apache2/httpd.conf | grep -v '^$' | less In newer (or rather, fresh) installations you might have to create a new configuration file below your conf.d folder with your favourite text editor, like so: # sudo nano /etc/apache2/conf.d/headers.conf Then, in order to tweak your HTTP responses, either check for those lines or add them: Header unset ETag FileETag None In case your file doesn't exist or those lines are missing, feel free to create/add them. Afterwards, check your Apache configuration syntax and restart your running instances as already shown above: # sudo apache2ctl configtest Syntax OK # sudo apache2ctl restart Add Expires headers To improve the loading performance of your website, you should take some care over how you leverage the browser's ability to cache certain resources and files. This is done by adding an Expires: value to the HTTP response header. Generally speaking, it is advised that you specify a near-future expiry - read: 1 week or a little bit more - for your static content like JavaScript files or Cascading Style Sheets. One way to adjust this is to put some instructions into the .htaccess file in the root folder of your web site. Of course, this could also be placed into a more generic location of your Apache installation, but honestly, I'd like to keep this at the web site level. The following are some adjustments I'm currently using on this blog site: # Turn on Expires and set default to 0 ExpiresActive On ExpiresDefault A0 # Set up caching on media files for 1 year (forever?) <FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav)$"> ExpiresDefault A29030400 Header append Cache-Control "public" </FilesMatch> # Set up caching on media files for 1 week <FilesMatch "\.(js|css)$"> ExpiresDefault A604800 Header append Cache-Control "public" </FilesMatch> # Set up caching on media files for 31 days <FilesMatch "\.(gif|jpg|jpeg|png|swf)$"> ExpiresDefault A2678400 Header append Cache-Control "public" </FilesMatch> As we are editing the .htaccess file, it is not necessary to restart Apache. 
In case your web site doesn't load anymore, or you're experiencing an error while trying to restart your httpd, check that the 'expires' module is actually enabled: # ls -al /etc/apache2/mods-enabled/ | grep expires # sudo a2enmod expires Of course, the instructions above are not feature-complete, but I hope they provide a better default configuration for your LAMP stack. Resume of the day Within a couple of hours, and while being occupied with an eLearning course on SQL Server 2012, I had some good fun helping and assisting other LUGM members while they were some kilometres away at Bagatelle. According to other blog articles it seems that Nitin had quite some moments of desperation. Just for the record: at no time was it my intention to either kick his butt or pull his leg. I was simply providing some input based on the lessons I've learned over the last couple of years configuring Apache HTTPd and PHP. Check out the other blogs, too: LUGM mini-meetup... Epic! Superb Saturday Linux Meetup And last but not least, the man himself: The end of a new beginning Cheers, and happy community'ing! Updates Due to our weekly Code & Coffee sessions in the MSCC community, I had a chance to talk to Nitin directly and he showed me the problems on his machine. This led me to update this article, hence the paragraphs on enabling the 'headers' and 'expires' modules.
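If you want to verify those header tweaks without opening the browser developer tools every time, a small Python check along these lines does the job; the URL is obviously a placeholder, and the list of headers simply mirrors the ones discussed above.

import urllib.request

URL = "http://www.example.com/"  # replace with the site you just configured

def check_headers(url):
    resp = urllib.request.urlopen(url)
    headers = resp.headers
    # These two should be trimmed down by ServerTokens Prod / expose_php = Off
    for name in ("Server", "X-Powered-By"):
        print(name, "=", headers.get(name, "<not sent>"))
    # ETag should be gone after 'Header unset ETag' and 'FileETag None'
    if headers.get("ETag"):
        print("ETag is still being sent:", headers["ETag"])
    # Expires / Cache-Control should show up once the .htaccess rules apply
    for name in ("Expires", "Cache-Control"):
        print(name, "=", headers.get(name, "<not sent>"))

if __name__ == "__main__":
    check_headers(URL)

Point it at a static resource (a .css or .png URL) rather than the page itself to confirm the Expires rules, since those only match the file extensions listed in the FilesMatch blocks.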

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data) · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog) The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion · Crash Safe Slaves and Binlog · Optimized Row Based Replication · Replication Event Checksums · Time Delayed Replication These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: · one running the replication master · one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: - Hardware: Oracle Sun Fire X4170 M2 Server - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz. - OS: 64-bit Oracle Enterprise Linux 6.1 - Memory: 48 GB Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table: CREATE TABLE `sbtest` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `k` int(10) unsigned NOT NULL DEFAULT '0', `c` char(120) NOT NULL DEFAULT '', `pad` char(60) NOT NULL DEFAULT '', PRIMARY KEY (`id`), KEY `k` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. 
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 number of workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6 are fully available for you to download and evaluate now from the MySQL Developer site (select Development Release tab). You can learn more about MySQL 5.6 from the documentation  Please don’t hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results
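For readers who want to reproduce the timing logic, the skeleton below shows one way to drive the benchmark clock from Python with mysql-connector-python. The connection settings, the binlog file/position and the commented-out worker sweep are assumptions to replace with your own values from the procedure above; the reset between passes (restoring the data directory) still has to be done as described.

import time
import mysql.connector  # assumes mysql-connector-python is installed

# Hypothetical slave connection details - replace with your own
slave = mysql.connector.connect(host="slave-host", user="root", password="secret")
cur = slave.cursor()

def run_one_pass(workers, binlog_file, binlog_pos):
    # Configure the number of worker threads for this pass (slave threads are stopped at this point)
    cur.execute("SET GLOBAL slave_parallel_workers = %s", (workers,))
    cur.execute("START SLAVE IO_THREAD")
    # ... wait here until the relay log contains all 100,000 updates ...
    start = time.time()
    cur.execute("START SLAVE SQL_THREAD")
    # Block until the SQL thread has applied everything up to the noted master position
    cur.execute("SELECT MASTER_POS_WAIT(%s, %s)", (binlog_file, binlog_pos))
    cur.fetchone()
    duration = time.time() - start
    qps = 100000 / duration
    print("workers=%d duration=%.1fs qps=%.0f" % (workers, duration, qps))
    return qps

# Example sweep over 0-24 workers, resetting the slave between passes as described above:
# for w in range(0, 25):
#     run_one_pass(w, "mysql-bin.000003", 120)  # file/position are placeholders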

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation. by Trevor Naidoo, December 2010   Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact. 3. 
Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology. 4. Approximately 50 percent of respondents spend manual efforts cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches and can also be entered as "'' ". These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses we don't reside, and using channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: Customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues. 5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts are stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. 
Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.  Characteristics of Stellar MDM When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics: enterprise-grade MDM performance complete technology that can be rapidly deployed and addresses multiple business issues end-to-end MDM process management with data quality monitoring and assurance pre-built MDM business relevant applications with data stores and workflows These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.  Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle. 
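As a tiny illustration of point 4 above (standardizing and de-duplicating data before it reaches a master data hub), a sketch like the following shows the basic idea; the abbreviation map and the sample records are invented for the example, and a real MDM solution would add fuzzy matching and survivorship rules on top.

# Hypothetical sample records pulled from two different source systems
records = [
    {"name": "Acme Corp", "address": "12 Main Av, Springfield"},
    {"name": "ACME Corporation", "address": "12 Main Avenue, Springfield"},
]

# Map common abbreviations to one canonical form before matching
ABBREVIATIONS = {"av": "avenue", "ave": "avenue", "st": "street", "rd": "road"}

def standardize(value):
    words = value.lower().replace(",", " ").split()
    return " ".join(ABBREVIATIONS.get(w.rstrip("."), w) for w in words)

def dedupe(rows):
    seen = {}
    for row in rows:
        # The match key here is just the standardized address
        key = standardize(row["address"])
        seen.setdefault(key, row)
    return list(seen.values())

print(dedupe(records))  # -> one surviving record instead of two near-duplicates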

    Read the article

  • Oracle Partner Store (OPS) New Enhancements

    - by Kristin Rose
    Effective June 29th, Oracle Partner Store (OPS) will release the enhancements listed below to improve your overall ordering experience. - Online Transactional Oracle Master Agreement (Online TOMA) The Online TOMA enables end users to execute a transactional end user license agreement with Oracle. The new Online TOMA in OPS will replace the need for you to obtain a signed hard copy of the TOMA from the end user. You will now initiate the Online TOMA via OPS. Navigation: OPS Home > Order Tools > Online TOMA Query > Request Online TOMA > End User Contact, click "Select for TOMA" > Select Language > Submit (an automated email is sent immediately to the requestor and the end user). The Online TOMA can also be initiated from the 'My OPS' tab. Under the Online TOMA Query section, partners can track Online TOMA request details submitted to end users. The status of the Online TOMA request and the OMA Key generated (once the Ts&Cs of the Online TOMA are accepted by an end user) are also displayed in this table. There is also the ability to resend pending Online TOMA requests by clicking 'Resend'. Navigation: OPS Home > Order Tools > Online TOMA Query For more details on the Transactional OMA, please click here. - Convert Deals to Carts The partner deal registration system within OPS will now allow you to convert approved deals into carts with a simple click of a button. VADs can use Deal to Cart on all of their partners' registrations, regardless of whether they submitted on their partner's behalf, or the partner submitted themselves. Navigation: Login > Deal Registrations > Deal Registration List > Open the approved deal > Click Deal Reg ID number link to open > Click on 'Create Cart' link You can locate your newly created cart in the Saved Carts section of OPS. Links are also available from within an open deal or from the Deal Registration List. Click on the cart number to proceed. - Partner Opportunity Management: Deal Registration on OPS now allows you to see updated information on your opportunities from Oracle's Fusion CRM opportunity management system. Key fields such as close date, sales stage, products and status can be viewed by clicking the opportunity ID associated with the deal registration. This new feature allows you to see regular updates to your opportunities after registrations are approved. Through ongoing communication with Oracle Channel Managers and Sales Reps, you can ensure that Oracle has the latest information on your active registered deals. - Product Recommendations: When adding products to the Deal Registrations tab, OPS will now show additional products that you can try to include to maximize your sale and rebate. - Advanced Customer Support (ACS) Services Note: This will be available from July 9th. Initiate the purchase of the complete stack (HW/SW/Services) online with one single OPS order. More ACS services are now supported online, with the exception of the Start-Up Pack: · New SW installation services for Standard Configurations & standalone System Software · New Pre-production & Go-live services for Standard & Engineered Systems · New SW configuration & Platinum Pre-Production & Go-Live services for Engineered Systems · New Travel & Expenses Estimate included · New Partner & VAD volume discount supported - Software as a Service (SaaS) for Independent Software Vendors (ISVs): Oracle SaaS ISVs can now use OPS to submit their monthly usage reports to Oracle within 20 days after the end of every month. 
Navigation: OPS Home > Cart > Transaction Type: Partner SaaS for ISV's > Add Eligible Products > Check out - Existing Approvals: In an effort to reduce the processing time of discount approvals, we have added a new section in the Request Approval page for you to communicate pre-existing approvals without having to attach the DAT. Just enter the Approval ID and submit your request. In the case of existing software approvals, you will be required to submit the DAT with the Contact Information section filled out. - Additional data for Shipping Box Labels and Packing Slips OPS now has additional fields in the Shipping Notes section for you to add PO details. This will help you easily identify shipments as they arrive. Partners will have an End User PO field, whereas VADs will have VAR and End User PO fields. - Shipping Notes on OPS Hardware delivery Shipping Notes will now have multiple options to better suit your requirements. - Reminders for Royalty Reporting Partners: If you have not submitted your royalty report online, OPS will now send an automated alert to remind you. - Order Tracker Changes: · Order Tracker will now have a deal reg flag (Yes/No). You can now clearly distinguish between orders that have registered opportunities. · All lines of the order will be visible in the order details list. - Changes in Terminology · You will notice textual changes on some of our labels and messages relating to approval requests. "Discount Requests" has been replaced with "Approval Requests" to cater to some of our other offerings. · The First Line Support (FLS) transaction type has been renamed to Support Provider Partner (SPP). OPS Support For more details on these enhancements, please request training here. For assistance with the Oracle Partner Store, please contact the OPS support team in your region. NAMER: [email protected] LAD: [email protected] EMEA: [email protected] APAC: [email protected] Japan: [email protected] You can even call us on our Hotline! Find your local number here. Thank you, Oracle Partner Store Support Team

    Read the article

  • 5 Ways Android Still Disappoints (Me)

    - by TStewartDev
    Let me make this clear: I'm annoyed with Apple. I don't like their current policies and I don't like where Steve Jobs is taking the company. In general, I don't like it when any one company gets too much control in a market. When that happens, the leading company dictates the game and as consumers, our options all but disappear. That said, I'm still going to buy a new iPhone next week. My Apple-hating friends seem to desperately want me to go Android instead, but frankly, it's not good enough for me, and here are the reasons why. The Modern WinMo One of the reasons that Microsoft has identified for Windows Mobile's rapid decline is the breadth of hardware. They exercised little control over manufacturer's implementations. In theory, that sounds great. We as consumers have lots of choice. In practice, though, it meant among other things that updates to the devices were left up to the manufacturers. As a result, that rarely happened. (I'm still bitter at Toshiba for leaving me hanging back in 2002.) And now, Google is doing the same thing with Android. Case in point: my wife has a Motorola Backflip that we bought in April. It was released in March. Motorola says it will get Android 2.1 "sometime in Q3". Great. Meanwhile, I pull down the latest version of iPhone OS (now iOS) and install it the same day it's released. You may say that I can't judge Android by one lazy manufacturer. Yup, I sure can. With Apple, my original iPhone has been supported perfectly for 3 years. With Android, I will have to wait for upgrades after Google releases them, possibly indefinitely. Not cool. AT&T We signed a new contract with AT&T in April to get my wife's phone. I've had a reasonable experience with them. I don't imagine my experience with Verizon would be any better, and I'm relatively confident that Sprint doesn't have the coverage it takes to work well for us. The fact is, AT&T, for whatever reason, doesn't have jack for Android phones. May not be Android's fault, but it's still a shortcoming that prevents me from having it just like the iPhone's exclusivity keeps some folks on other networks from having it. Innovation? What Innovation? Android has a nice dashboard and a great notification system and… nothing else original. I keep reading about how disappointing the iPhone is nowadays. "It has no innovation," people say. Who does? Android has modeled its behavior after the iPhone. That's fine, but if all you've got is a similar product and I'm invested both skill-wise and app-wise in my current platform, why should I change? Microsoft's new Windows Phone 7 looks somewhat innovative, and I'm pretty excited to see what they'll bring to the table, but that's another six months away, at least. I've got a 3 year old phone that has some annoying issues now (thanks to recent encounters with water). I need a new phone now. Is This Going to Work? There's no shortage of criticism of Apple over its App Store policies, and I've vented my own anger about it. However, I will give them credit: their screening of apps has done a great job of weeding out the crap and gives an excellent indication that the app will work on my device. How about Android? Nope. It might work on your phone. Maybe. You'll have to try it to see. Get burned by it? Well, write a nasty review to try to keep others from making the mistake you did. If you don't mind doing that stuff, then Android is the platform for you. 
Personally, I'd rather have a receptionist screening out the telemarketing and survey calls than hang up on them myself, but that's your call. Slow, Slowing, Slower All this yapping about multitasking. This is an area I've been on Apple's side from the beginning. Sorry folks, but this is the number one reason I hated Windows Mobile: the longer you use it, the slower it gets because it doesn't kill apps. I'm with Steve Jobs on this one: if you see a task manager, we're doing it wrong. I don't want to have to manually kill apps. I hate doing that on Windows let alone on a mobile device. To me, priority one should be keeping the device speedy. Waiting for your device to respond is unacceptable. Bonus! Taken from iPhone Letdown? 8 Things Apple Didn't Announce, here are my responses: 4G Yeah, let me know if your area actually has it. I live in Lincoln, Nebraska. No carrier is going to have 4G here for at least 3 years. Meanwhile, you still get to pay for it. Yay! Cloud iTunes/OTA Sync You got me here. Of course, whether or not your Android device will be able to do it is always a good question. 3G Video Chat You got me here, too. I'm sure you spent countless hours in front of your phone with video chat. Also, I can't wait for the "No Video Chat While Driving" laws. Mobile Hotspot This is a neat feature, but as the author points out, it's left up to the carrier whether to implement it or not. Pretty sure any Android phones that come to AT&T won't have this enabled in the foreseeable future. Is Verizon even allowing this? I just figured Sprint was because they're failing so hard at keeping customers. Free MobileMe I use Google's services with my iPhone. The only people I know who use MobileMe are Apple fanboys and fangirls. If you choose to pay for a service that you can get for free, that's your decision, not Apple's. Voice Input Voice input has been available on phones (even "dumb" phones) for years now. iPhone does have the ability, though limited. Why don't I hear people telling their phones what to do? Maybe because it's still easier to use your fingers than talk to it. Get back to me when this becomes an important feature. Free Navigation Maybe this will be a bigger deal to me now that I'm getting a phone with GPS, but when using my buddy's 3gs, Google maps has worked just fine. Maybe I just don't trust turn-by-turn navigation enough to want it. Dashboard The only legitimate complaint on this list, to me. iPhone's home screen is pathetic, doubly so for the iPad. What a waste of perfectly usable space. I also want to add notifications to this list. Android's notification panel is far superior to the iPhone's. I don't want to hunt all over my screen to find little red dots. Put 'em in one place, Apple.

    Read the article

< Previous Page | 260 261 262 263 264 265 266 267 268 269 270 271  | Next Page >