Search Results

Search found 1124 results on 45 pages for 'avani nature'.


  • Virtualbox - routing subnet to bridge adapters

    - by user42384
    Hello, I have set up a Debian Lenny box with 3 vbox Lenny machines running on eth0 of the host in bridged mode (on VirtualBox 3.1.6). When testing in my local LAN this all worked perfectly well and traffic flowed to and from the IPs of the virtual machines as it should. However, now that it's in its co-lo home, the networking setup is a bit different, and I'm unable to get traffic to flow to the vboxes properly. Specifically, the host has its own primary IP, and I have a separate subnet of 8 (6 usable) IPs routed to the box for use by the vboxes. So, eth0 on the host is:
    Machine IP: 2x.x.x.137
    Gateway IP: 2x.x.x.138
    Subnet Msk: 255.255.255.252
    The subnet for the vboxes is:
    Subnet: 2x.x.x.240/29
    Netmask: 255.255.255.248
    vbox1 is configured to 2x.x.x.241 on eth0 as follows:
    auto eth0
    iface eth0 inet static
        address 2x.x.x.241
        netmask 255.255.255.248
    Setting up a virtual interface (eth0:0) on the host with one of these subnet IPs allows me to ping that address only from vbox1, and it allows me to ping vbox1 from the host. I can also ping that virtual interface perfectly well from outside, so the IPs are definitely landing at my machine. It seems I'm missing some sort of routing instruction, either on the host or on vbox1, to get traffic moving between the subnet and the default gateway, but I can't seem to figure out what it should be, or what glaringly obvious thing I'm missing. Most of my obvious attempts (the gateway of eth0, the IP of eth0) were rejected by the route command with SIOCADDRT: No such device (i.e. it can't find the device). I tried setting vbox1 to bridge on eth0:0, but this was not an acceptable device name and VBoxHeadless refused to start. The physical machine does have an unused physical NIC at eth1 that can be used if necessary. The host machine is running iptables configured by ferm; I have experimented with allowing forwarding for that subnet, but I wouldn't have thought this was necessary given the nature of the VirtualBox devices (nor did it actually work). Clearing out all of these rules for a blank iptables set does not resolve the issue. (You can see the ferm-generated iptables at http://codedumper.com/ojaze) Thanks for any help you can give... Patrick
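    One common pattern for this kind of setup (a routed /29 plus bridged guests) is to give the host an address inside the routed subnet, enable IP forwarding, and point the guests' default gateway at that host address. The snippet below is only a sketch based on the addresses quoted in the question; the host address 2x.x.x.246 and the gateway line are assumptions, not part of the original post.
    # on the host, /etc/network/interfaces
    auto eth0:0
    iface eth0:0 inet static
        address 2x.x.x.246
        netmask 255.255.255.248
    # enable forwarding between the /30 uplink and the /29:
    # echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf && sysctl -p
    # in the guest (vbox1), add a gateway to its stanza
    auto eth0
    iface eth0 inet static
        address 2x.x.x.241
        netmask 255.255.255.248
        gateway 2x.x.x.246
    Whether forwarding on the host is actually needed depends on whether the provider routes the /29 to 2x.x.x.137 or switches it onto the same segment; in the latter case the guests could use the provider's gateway directly.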

    Read the article

  • Should I choose KVM/XEN over OpenVZ or use them together?

    - by Krystian
    I've got a dual Xeon E5504 server with [for now] only 8GB of RAM. Storage isn't impressive either: 3x 146GB SAS in RAID5 + 500GB SATA drives. Currently it works as a development server, but it's over-specced for our needs, and since our development methods have changed over the last 2 years we decided it will work as a production system for some of our applications, plus we would like to have a separate system for testing/research. Our apps are mainly web apps deployed on Tomcats [plural, as some of the apps require older versions] and connected to Postgres. I would like to have a production system where only httpd + Tomcat + the DB are set up and nothing else runs there. A sterile system. Apart from that, I would like a test system where I can play with different JVM settings, deploy my test apps, play with Tomcat/httpd settings and restart them without interfering with the production system. Apart from that, I would like to be able to play with different Linux flavors, with newer kernels, to test how they work etc. I know this is not possible with OpenVZ and I would have to choose KVM for that. I am thinking about merging the two, and setting up KVM to be able to work with different systems [Linux only, to be frank] + use OpenVZ to set up separate machines for my development needs. I would simply go with that, but reading here and there about the performance impact full virtualization has over containers, and looking at the specs of my server, makes me think twice about it. I don't want to lose too much performance, especially because of the nature of my apps [a few JVMs running at the same time]. It will be my first time with virtualization, apart from using desktop VirtualBox/vmserver. Although I am a fast learner I don't want to mess with the main system so much that it will break the production apps or make them crawl. Although they are more or less internal apps and they don't produce much load, they need to be stable. I've read that a KVM host is a normal Linux installation and it allows you to run normal processes on it. If that is so, does it allow running OpenVZ as well? I mean... can I have KVM and OpenVZ running on the same system/kernel? Or do I have to set up another system to run OpenVZ containers? How much performance impact can this have for me? Will my hardware suffice? Oh, and one more thing... unfortunately I'm quite limited with the funds... I'm looking for a free solution only :/
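    As a quick sanity check on the "will my hardware suffice" question: KVM requires hardware virtualization extensions (Intel VT-x shows up as the vmx CPU flag, AMD-V as svm), and the Xeon E5504 generation should expose VT-x as long as it is enabled in the BIOS. A minimal check to run on the host (a generic sketch, not specific to any distribution):
    # number of logical CPUs advertising hardware virtualization support
    egrep -c '(vmx|svm)' /proc/cpuinfo
    # after modprobe kvm_intel (or kvm_amd), this device should exist
    ls -l /dev/kvm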

    Read the article

  • SSAS: Using fake dimension and scopes for dynamic ranges

    - by DigiMortal
    In one of my BI projects I needed to find the count of objects in an income range. The usual solution with a range dimension was useless because the range an object belongs to changes over time. These ranges depend on a calculation that is done over the income measure, so I really had no option to use a classic solution. Thanks to the SSAS forums I got my problem solved and here is the solution. The problem – how to create dynamic ranges? I have two dimensions in the SSAS cube: one for invoices related to object rent and the other for objects. There is a measure that sums invoice totals and two calculations. One of these calculations performs some computations based on object income and some other object attributes. The second calculation uses the first one to define the income range an object belongs to. What I need is a query that returns how many objects there are in each group. I cannot use a dimension for the range because on one date an object may belong to one range and two days later to another income range. For example, if an object is not rented out for two days it makes no money and its income stays the same as before. If the object is rented out after two days it makes some income and this income may move it to another income range. Solution – fake dimension and scopes Thanks to Gerhard Brueckl from pmOne I got everything working fine after some struggling with BI Studio. The original discussion he pointed out can be found in the official SSAS forums thread Create a banding dimension that groups by a calculated measure. The solution was pretty simple by nature – we have to define a fake dimension for our ranges and use scopes to assign values to the object count measure. The object count measure is primitive – it just counts objects and that's it. We will use it to find out how many objects belong to one or another range. We also need a table for the fake ranges and we have to fill it with the ranges used in the ranges calculation. After creating the table and filling it with ranges we can add the fake range dimension to our cube. Let's now see how to solve the problem step by step. Solving the problem Suppose you have the ranges calculation defined like this:
    CASE
        WHEN [Measures].[ComplexCalc] < 0 THEN 'Below 0'
        WHEN [Measures].[ComplexCalc] >= 0 AND [Measures].[ComplexCalc] <= 50 THEN '0 - 50'
        ...
    END
    Let's now create a new table in our analysis database and name it FakeIncomeRange. Here is the definition for the table:
    CREATE TABLE [FakeIncomeRange] (
        [range_id] [int] IDENTITY(1,1) NOT NULL,
        [range_name] [nvarchar](50) NOT NULL,
        CONSTRAINT [pk_fake_income_range] PRIMARY KEY CLUSTERED
        (
            [range_id] ASC
        )
    )
    Don't forget to fill this table with the range labels you are using in the ranges calculation. To use the ranges from the table we have to add this table to our data source view and create a new dimension. We cannot bind this table to other tables; we have to leave it as it is. Our dimension has two attributes: ID and Name. The next thing to create is the calculation that returns the object count. This calculation is also fake because we override its values for all ranges later. The object count measure can be defined as a calculation like this:
    COUNT([Object].[Object].[Object].members)
    Now comes the most crucial part of our solution – defining the scopes. Based on the data used in this posting we have to define a scope for each of our ranges. Here is the example for the first range.
    SCOPE([FakeIncomeRange].[Name].&[Below 0], [Measures].[ObjectCount])
        This = COUNT(
                   FILTER(
                       [Object].[Object].[Object].members,
                       [Measures].[ComplexCalc] < 0
                   )
               )
    END SCOPE
    To get these scopes defined in the cube we need MDX script blocks for each line given here. Take a look at the screenshot to get a better idea of what I mean. This example is taken from SQL Server Books Online to avoid conflicts with NDA. :) From the previous example the lines (MDX scripts) are: the line starting with SCOPE, the block for This =, and the line with END SCOPE. And now it is time to deploy and process our cube. Although you may see examples where there are semicolons at the end of statements, you don't need them. Visual Studio BI tools generate a separate command from each script block, so you don't need to worry about it.
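    The remaining bands follow the same pattern; as a sketch (assuming the member name '0 - 50' matches the label stored in the FakeIncomeRange table exactly), the scope for the second band could look like this:
    SCOPE([FakeIncomeRange].[Name].&[0 - 50], [Measures].[ObjectCount])
        This = COUNT(
                   FILTER(
                       [Object].[Object].[Object].members,
                       [Measures].[ComplexCalc] >= 0 AND [Measures].[ComplexCalc] <= 50
                   )
               )
    END SCOPE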

    Read the article

  • Mobile HCM: It’s not the future, it is right now

    - by Natalia Rachelson
    A guest post by Steve Boese, Director Product Strategy, Oracle I’ll bet you reached for your iPhone or Android or BlackBerry and took a quick look at email or Facebook or last night’s text messages before you even got out of bed this morning. Come on, admit it, it’s ok, you are among friends here. See, feel better now? But seriously, the incredible growth and near-ubiquity of increasingly powerful, capable and, for many of us, essential mobile devices has profoundly changed the way we communicate, consume information, socialize, and more and more, conduct business and get our work done. And if you doubt that profound change has happened, just think for a moment about the last time you misplaced your iPhone. The shivers, the cold sweats, the panic... We have all been there. And indeed your personal experiences with mobile technology echo throughout the world - here are a few data points to consider: Market research firm IDC estimates 1.8 billion mobile phones will be shipped in 2012. A recent Pew study reports 46% of Americans own a smartphone of some kind. And finally in the USA, ownership of tablets like the iPad has doubled from 10% to 19% in the last year. So truly for the Human Resources leader, the question is no longer, ‘Should HR explore ways to exploit mobile devices and their always-on nature to better support and empower the modern workforce?’, but rather ‘How can HR best take advantage of smartphone and tablet capability to provide information, enable transactions, and enhance decision making?’. Because even though moving HCM applications to mobile devices seems inherently logical given today’s fast-moving and mobile workforces, and its promise to deliver incredible value to the organization, HR leaders also have to consider many factors before devising their Mobile HCM strategy and embarking on mobile HR technology projects. Here are just some of the important considerations for HR leaders as you build your strategies and evaluate mobile HCM solutions: Does your organization provide mobile devices to the workforce today, and if so, will the current set of deployed devices have the necessary capability and ecosystems to support your mobile HCM initiatives? Will you allow workers to use or bring their own mobile devices (commonly abbreviated as ‘BYOD’), and if so are your IT and Security organizations in agreement and capable of supporting that strategy? Do you know which workers need access to mobile HCM applications? Often mobile HCM capability flows down in an organization, with executives and other ‘road-warrior’ types having the most immediate needs, followed by field sales staff, project managers, and even potential job candidates.
But just as an organization will have to spend time understanding ‘who’ should have access to mobile HCM technology, the ‘what’ of the way the solutions should be deployed to these groups will also vary. What works and makes sense for the executive, (company-wide dashboards and analytics on an iPad), might not be as relevant for a retail store manager, (employee schedules, location-level sales and inventory data, transaction approvals, etc.). With Oracle Fusion HCM, we are taking an approach to mobile HR that encompasses not just the mobile solution needs for the various types of worker, but also incorporates the fundamental attributes of great mobile applications - the ability to support end-to-end transactions, apps that respond with lightning-fast speed, with functions that are embedded in a worker’s daily activities, and features that can be mashed-up easily with other business areas like Finance and CRM. Finally, and perhaps most importantly for the Oracle Fusion HCM team, delivering mobile experiences that truly enhance, enable, and empower the mobile workforce, and deliver on the design mantras of the best-in-class consumer applications, continues to shape and drive design decisions. Mobile is no longer the future, it is right now, and the cutting-edge HR leader of today will need to consider how mobile fits her HCM technology strategy from here on out. You can learn more about our ideas and plans for Oracle Fusion HCM mobile solutions at https://fusiontap.oracle.com/.

    Read the article

  • 6 Interesting Facts About NASA’s Mars Rover ‘Curiosity’

    - by Gopinath
    Humanity’s quest to explore the surrounding planets, to see whether we can live there or not, is taking new shape today. NASA’s Mars-probing robot, Curiosity, blasted off today on its 9-month journey to reach Mars and explore it for the possibility of life there. Scientists say that Curiosity is the most advanced rover ever launched to probe for life on other planets. Here is the launch video and some analysis by a news reporter. Let’s look at the 6 interesting facts about the mission. 1. It’s as big as a car Curiosity is the biggest rover ever launched by NASA to probe for life on outer planets. It’s as big as a car and almost double the size of its predecessor rover Spirit. The length of Curiosity is around 9 feet 10 inches (3 meters), its width is 9 feet 1 inch (2.8 meters) and its height is 7 feet (2.1 meters). 2. Powered by Plutonium – Lasts 24×7 for 23 months NASA’s earlier missions to explore Mars were powered by solar panels, and that hindered the capabilities of the rovers to move around when the Sun was hiding. Due to this dependency on the Sun, the earlier rovers were not able to traverse places where there is no sunlight. Curiosity, on the other hand, is equipped with a radioisotope power system that generates electricity from the heat emitted by plutonium’s radioactive decay. The plutonium weighs around 10 pounds and can generate the power required for operating the rover for close to 23 months. The best part of the new power system is that Curiosity can roam around in darkness or light, all year round. 3. Rocket-powered backpack for a science fiction style landing Curiosity is so heavy that NASA could not use a parachute and airbags to air-drop the rover on the surface of Mars as in its previous missions. They are trying out a new science-fiction-style air-dropping mechanism that is similar to a sky-crane heavy-lift helicopter. The landing of the rover begins with entry into the Mars atmosphere protected by a heat shield. At about 6 miles above the surface, the heat shield is jettisoned and a parachute is deployed to glide the rover smoothly. When the rover reaches 3 miles above the surface, the parachute is jettisoned and the eight-motor rocket backpack is used for a smooth and impact-free landing as shown in the image. Here is an animation created by NASA on the landing sequence. If you are interested in more detailed information about the landing process, check the landing sequence picture available on the NASA website. 4. Equipped with Star Wars style laser gun Hollywood movie directors and novelists have always imagined aliens coming to Earth with spaceships full of laser guns, blasting the objects that come in their way. With Curiosity the equations are going to change. It has a powerful laser gun equipped on one of its arms to beam a laser at rocks and vaporize them. This is not part of any assault mission Curiosity is expected to carry out; the laser gun will be used to carry out experiments to detect life and understand nature. 5. Most sophisticated laboratory, powered by 10 instruments Around 10 state-of-the-art instruments are part of the Curiosity rover, and these 10 instruments form the most advanced rover-based lab ever built by NASA. There are instruments to cut through rocks to examine them, and other instruments will search for organic compounds. Mounted cameras can study targets from a distance, and arm-mounted instruments can study the targets they touch. A microscopic lens attached to the arm can see and magnify objects as tiny as 12.5 micrometers. 6.
    Rover Carrying 1.24 million names etched on silicon In early June 2009 NASA launched a campaign called “Send Your Name to Mars” and around 1.24 million people registered their names through NASA’s website. All those 1.24 million names are etched on silicon chips mounted onto Curiosity’s deck. If you registered your name in the campaign, maybe your name is going to reach Mars soon. Curiosity On Web If you wish to follow the mission, here are a few links to help you:
    NASA’s Curiosity Web Page
    Follow Curiosity on Facebook
    Follow @MarsCuriosity on Twitter
    Artistic Gallery Image of Mars Rover Curiosity
    A printable sheet of Curiosity Mission [pdf]
    Images credit: NASA
    This article, titled “6 Interesting Facts About NASA’s Mars Rover ‘Curiosity’”, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • What’s Your Tax Strategy? Automate the Tax Transfer Pricing Process!

    - by tobyehatch
    Does your business operate in multiple countries? Well, whether you like it or not, many local and international tax authorities inspect your tax strategy.  Legal, effective tax planning is perceived as a “moral” issue. CEOs are being asked to testify on their process of tax transfer pricing between multinational legal entities.  Marc Seewald, Senior Director of Product Management for EPM Applications specializing in all tax subjects and Product Manager for Oracle Hyperion Tax Provisioning, and Bart Stoehr, Senior Director of Product Strategy for Oracle Hyperion Profitability and Cost Management joined me for a discussion/podcast on this interesting subject.  So what exactly is “tax transfer pricing”? Marc defined it this way. “Tax transfer pricing is a profit allocation methodology required to be used by multinational corporations. Specifically, the ultimate goal of the transfer pricing is to ensure that the global multinational pays their fair share of income tax in each of their local markets. Specifically, it prevents companies from unfairly moving profit from ‘high tax’ countries to ‘low tax’ countries.” According to Marc, in today’s global economy, profitability can be significantly impacted by goods and services exchanged between the related divisions within a single multinational company.  To ensure that these cost allocations are done fairly, there are rules that govern the process. These rules ensure that intercompany allocations fairly represent the actual nature of the businesses activity- as if two divisions were unrelated - and provide a clear audit trail of how the costs have been allocated to prove that allocations fall within reasonable ranges.  What are the repercussions of improper tax transfer pricing? How important is it? Tax transfer pricing allocations can materially impact the amount of overall corporate income taxes paid by a company worldwide, in some cases by hundreds of millions of dollars!  Since so much tax revenue is at stake, revenue agencies like the IRS, and international regulatory bodies like the Organization for Economic Cooperation and Development (OECD) are pushing to reform and clarify reporting for tax transfer pricing. Most recently the OECD announced an “Action Plan for Base Erosion and Profit Shifting”. As Marc explained, the times are changing and companies need to be responsive to this issue. “It feels like every other week there is another company being accused of avoiding taxes,” said Marc. Most recently, Caterpillar was accused of avoiding billions of dollars in taxes. In the last couple of years, Apple, GE, Ikea, and Starbucks, have all been accused of tax avoidance. It’s imperative that companies like these have a clear and auditable tax transfer process that enables them to justify tax transfer pricing allocations and avoid steep penalties and bad publicity. Transparency and efficiency are what is needed when it comes to the tax transfer pricing process. Bart explained that tax transfer pricing is driving a deeper inspection of profit recognition specifically focused on the tax element of profit.  However, allocations needed to support tax profitability are nearly identical in process to allocations taking place in other parts of the finance organization. For example, the methods and processes necessary to arrive at tax profitability by legal entity are no different than those used to arrive at fully loaded profitability for a product line. 
    In fact, there is a great opportunity for alignment across these two different functions. So it seems that tax transfer pricing should be reflected in profitability in general. Bart agreed and told us more about some of the critical sub-processes of an overall tax transfer pricing process within the Oracle solution for tax transfer pricing.  “First, there is a ton of data preparation, enrichment and pre-allocation data analysis that is managed in the Oracle Hyperion solution. This serves as the “data staging” to the next, critical sub-processes.  From here, we leverage the Oracle EPM platform’s ability to re-use dimensions and legal entity driver data and financial data with Oracle Hyperion Profitability and Cost Management (HPCM).  Within HPCM, we manage the driver data, define the legal entity to legal entity allocation rules (like cost plus), and have the option to test out multiple, simultaneous tax transfer pricing what-if scenarios.  Once processed, a tax expert can evaluate the effectiveness of any one scenario result versus another via a variance analysis configured with HPCM’s pre-packaged reporting capability known as Oracle Hyperion SmartView for Office.”  Further, Bart explained that the ability to visibly demonstrate how a cost or revenue has been allocated is really helpful and auditable.  “HPCM’s Traceability Maps are that visual representation of all allocation flows that have been executed and is the tax transfer analyst’s best friend in maintaining clear documentation for tax transfer pricing audits. Simply click and drill as you inspect the chain of allocation definitions and results. Once final, the post-allocated tax data can be compared to the GL to create invoices and journal entries for posting to your GL system of choice.  Of course, there is a framework for overall governance of the journal entries, allocation percentages, and reporting to include necessary approvals.” Lastly, Marc explained that the key value in using the Oracle Hyperion solution for tax transfer pricing is that it keeps everything in alignment in one single place. Specifically, Oracle Hyperion effectively becomes the single book of record for the GAAP, management, and the tax set of books. There are many benefits to having one source of the truth. These include EFFICIENCY, CONTROLS and TRANSPARENCY. So, what’s your tax strategy? Why not automate the tax transfer pricing process! To listen to the entire podcast, click here. To learn more about Oracle Hyperion Profitability and Cost Management (HPCM), click here.

    Read the article

  • ODI 12c's Mapping Designer - Combining Flow Based and Expression Based Mapping

    - by Madhu Nair
    post by David Allan ODI is renowned for its declarative designer and minimal expression based paradigm. The new ODI 12c release has extended this even further to provide an extended declarative mapping designer. The ODI 12c mapper is a fusion of ODI's new declarative designer with the familiar flow based designer while retaining ODI’s key differentiators of: Minimal expression based definition, The ability to incrementally design an interface and to extract/load data from any combination of sources, and most importantly Backed by ODI’s extensible knowledge module framework. The declarative nature of the product has been extended to include an extensible library of common components that can be used to easily build simple to complex data integration solutions. Big usability improvements through consistent interactions of components and concepts all constructed around the familiar knowledge module framework provide the utmost flexibility. Here is a little taster: So what is a mapping? A mapping comprises of a logical design and at least one physical design, it may have many. A mapping can have many targets, of any technology and can be arbitrarily complex. You can build reusable mappings and use them in other mappings or other reusable mappings. In the example below all of the information from an Oracle bonus table and a bonus file are joined with an Oracle employees table before being written to a target. Some things that are cool include the one-click expression cross referencing so you can easily see what's used where within the design. The logical design in a mapping describes what you want to accomplish  (see the animated GIF here illustrating how the above mapping was designed) . The physical design lets you configure how it is to be accomplished. So you could have one logical design that is realized as an initial load in one physical design and as an incremental load in another. In the physical design below we can customize how the mapping is accomplished by picking Knowledge Modules, in ODI 12c you can pick multiple nodes (on logical or physical) and see common properties. This is useful as we can quickly compare property values across objects - below we can see knowledge modules settings on the access points between execution units side by side, in the example one table is retrieved via database links and the other is an external table. In the logical design I had selected an append mode for the integration type, so by default the IKM on the target will choose the most suitable/default IKM - which in this case is an in-built Oracle Insert IKM (see image below). This supports insert and select hints for the Oracle database (the ANSI SQL Insert IKM does not support these), so by default you will get direct path inserts with Oracle on this statement. In ODI 12c, the mapper is just that, a mapper. Design your mapping, write to multiple targets, the targets can be in the same data server, in different data servers or in totally different technologies - it does not matter. ODI 12c will derive and generate a plan that you can use or customize with knowledge modules. Some of the use cases which are greatly simplified include multiple heterogeneous targets, multi target inserts for Oracle and writing of XML. Let's switch it up now and look at a slightly different example to illustrate expression reuse. In ODI you can define reusable expressions using user functions. These can be reused across mappings and the implementations specialized per technology. 
    So you can have common expressions across Oracle, SQL Server, Hive etc., shielding the design from the physical aspects of the generated language. Another way to reuse is within a mapping itself. In ODI 12c expressions can be defined and reused within a mapping. Rather than replicating the expression text in larger expressions you can decompose it into smaller snippets. Below you can see UNIT_TAX_AMOUNT has been defined and is used in two downstream target columns - it's used in the TOTAL_TAX_AMOUNT plus it's used in the UNIT_TAX_AMOUNT (a recording of the calculation).  You can see the columns that the expressions depend on (upstream) and the columns the expression is used in (downstream) highlighted within the mapper. Also, multi-selecting attributes is a convenient way to see what's being used where; below I have selected the TOTAL_TAX_AMOUNT in the target datastore and the UNIT_TAX_AMOUNT in UNIT_CALC. You can now see many expressions at once and understand much more at one time without needlessly clicking around and memorizing information. Our mantra during development was to keep it simple and make the tool more powerful and do even more for the user. The development team was a fusion of many teams from Oracle Warehouse Builder, Sunopsis and BEA Aqualogic, debating and perfecting the mapper in ODI 12c. This was quite a project, from supporting the capabilities of ODI 11g to building the flow based mapping tool to support the future. I hope this was a useful insight; there is so much more to come on this topic, and this is just a preview of what you will see of the mapper in ODI 12c.

    Read the article

  • Configure Oracle SOA JMSAdatper to Work with WLS JMS Topics

    - by fip
    The WebLogic JMS Topics are typically running in a WLS cluster, and so are the SOA composites that receive these Topic messages. In some situations the two clusters are the same, while in others they are separate. The composites in the SOA cluster are subscribers to the JMS Topic in the WebLogic cluster. As a JMS Topic by nature distributes the same copy of each message to all of its subscribers, two questions arise immediately when it comes to load balancing the JMS Topic messages across the SOA composites: How do we assure all of the SOA cluster members receive different messages instead of the same (duplicate) messages, even though the SOA cluster members are all subscribers to the Topic? How do we make sure the messages are evenly distributed (load balanced) across SOA cluster members? Here we will walk through how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite, so that the JMS Topic messages will be evenly distributed to the same composite running on different SOA cluster nodes without causing duplication. 2. The typical configuration In this typical configuration, we achieve the load balancing of JMS Topic messages to JmsAdapters by configuring a partitioned distributed topic along with sharable subscriptions. You can reference the documentation for an explanation of PDT. And this blog posting does a very good job of visually explaining how this combination of configurations achieves message load balancing among clients of JMS Topics. Our job is to apply this configuration in the context of SOA JMS Adapters. To do so involves the following steps: Step A. Configure the JMS Topic to be UDD and PDT, at the WebLogic cluster that houses the JMS Topic. Step B. Configure the JCA Connection Factory with the proper ServerProperties, at the SOA cluster. Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file). Here are more details of each step: Step A. Configure the JMS Topic to be UDD and PDT You do this at the WebLogic cluster that houses the JMS Topic. You can follow the instructions at Administration Console Online Help to create a Uniform Distributed Topic. If you use the WebLogic Console, then at the same administration screen you can specify the "Distribution Type" to be "Uniform" and the Forwarding Policy to be "Partitioned", which makes the JMS Topic a Uniform Distributed Destination and a Partitioned Distributed Topic, respectively. Step B: Configure ServerProperties of the JCA Connection Factory You do this step at the SOA cluster. This step makes the JmsAdapter that connects to the JMS Topic through this JCA Connection Factory a certain type of "client". When you configure the JCA Connection Factory for the JmsAdapter, you define the list of properties in the FactoryProperties field, in a semicolon-separated list:
    ClientID=myClient;ClientIDPolicy=UNRESTRICTED;SubscriptionSharingPolicy=SHARABLE;TopicMessageDistributionAll=false
    You can refer to Chapter 8.4.10 Accessing Distributed Destinations (Queues and Topics) on the WebLogic Server JMS of the Adapter User Guide for the meaning of these properties. Please note: except for ClientID, the other properties such as ClientIDPolicy=UNRESTRICTED, SubscriptionSharingPolicy=SHARABLE and TopicMessageDistributionAll=false are all default settings for the JmsAdapter's connection factory. Therefore you do NOT have to specify them explicitly. All you need to do is specify the ClientID.
    The ClientID is different from the subscriber ID that we will discuss in the later steps. To make it simple, you just need to remember to specify the client ID and make it unique per connection factory. Here is the example setting: Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file) In the following example, the value 'MySubscriberID-1' was given as the value of the property 'DurableSubscriber':
    <adapter-config name="subscribe" adapter="JMS Adapter" wsdlLocation="subscribe.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
      <connection-factory location="eis/wls/MyTestUDDTopic" UIJmsProvider="WLSJMS" UIConnectionName="ateam-hq24b"/>
      <endpoint-activation portType="Consume_Message_ptt" operation="Consume_Message">
        <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
          <property name="DurableSubscriber" value="MySubscriberID-1"/>
          <property name="PayloadType" value="TextMessage"/>
          <property name="UseMessageListener" value="false"/>
          <property name="DestinationName" value="jms/MyTestUDDTopic"/>
        </activation-spec>
      </endpoint-activation>
    </adapter-config>
    You can set the durable subscriber name either in the composite's JmsAdapter wizard, or by directly editing the JmsAdapter's *.jca file within the composite project. 3. The "atypical" configurations For some systems, there may be restrictions that do not allow the aforementioned "typical" configuration to be applied. For example, some deployments may be required to configure the JMS Topic to be a Replicated Distributed Topic rather than a Partitioned Distributed Topic. We would like to discuss those scenarios here: Configuration A: The JMS Topic is NOT a PDT In this case, you need to define the message selector 'NOT JMS_WL_DDForwarded' in the adapter's *.jca file, to filter out those "replicated" messages. Configuration B: The ClientIDPolicy=RESTRICTED In this case, you need separate factories for different composites. More accurately, you need separate factories for each different *.jca file of the JmsAdapter. References:
    Managing Durable Subscription
    WebLogic JMS Partitioned Distributed Topics and Shared Subscriptions
    JMS Troubleshooting: Configuring JMS Message Logging
    Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API
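    For Configuration A, a sketch of how that selector might look in the activation spec is shown below. The property name MessageSelector is the one the JMS adapter generally uses for selectors, but treat this as an illustration to adapt to your own *.jca file rather than a verified configuration:
    <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
      <property name="DurableSubscriber" value="MySubscriberID-1"/>
      <property name="PayloadType" value="TextMessage"/>
      <property name="UseMessageListener" value="false"/>
      <property name="DestinationName" value="jms/MyTestUDDTopic"/>
      <!-- filter out copies forwarded between members of a non-partitioned distributed topic -->
      <property name="MessageSelector" value="NOT JMS_WL_DDForwarded"/>
    </activation-spec>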

    Read the article

  • WordPress SEO Plugins to make your Blog Search Engine Friendly

    - by Vaibhav
    WordPress is the most common blogging system in use today and its use as a CMS is also widespread. With hundreds of millions of sites using WordPress, getting correct SEO for your WordPress-based blog or site is very important. We get regular queries from people who want search engine optimisation for their site or blog made using WordPress. Here is a list of 16 of the best WordPress plug-ins that can help you achieve better rankings: All in one SEO Pack This is the most popular of all SEO plugins for WordPress. It is easy to use and is compatible with most WordPress plugins. It works as a complete SEO package – automatically generating META tags, optimizing your titles for search engines and avoiding duplicate content. You can also include META tags manually (Meta title, Meta description and Meta keywords) for all pages and posts on your website. HeadSpace2 HeadSpace2 is available in different languages; you can manage a wide range of SEO tasks related to metadata, tag your posts, and set custom descriptions and titles. So your pages can build relevancy on search engines, and you can load different settings for different pages. Platinum SEO plugin Automatic 301 redirects for permalink changes, META tag generation, duplicate content avoidance, SEO optimization of post and page titles and lots of other features. TGFI.net SEO WordPress Plugin It's a modified version of the All in one SEO Pack. It has some unique features over the All in one SEO plugin: it generates titles, meta descriptions and meta keywords automatically when overrides are not present. Google XML Sitemaps Sitemaps generated by this tool are supported by Google, Yahoo, Bing, and Ask. We all know sitemaps make indexing of web pages easier for web crawlers. Crawlers can retrieve the complete structure of a site and more information from sitemaps, and the plugin notifies all major search engines every time you create a new post. Sitemap Generator You can generate a highly customizable sitemap for your WordPress site. You can choose what to show and what not to show, and you can list the items in the order of your choice. It supports pages, permalinks and multi-level categories. SEO Slugs This plugin generates more search-engine-friendly URLs for your site. Slugs are the filenames assigned to your posts; this plugin removes common words like 'a', 'the', 'in', 'what' and 'you' from the slug which is assigned automatically to your post. SEO Post Links This is a plugin similar to SEO Slugs; it removes unnecessary keywords from the slug to make it short and SEO friendly, and you can fix the number of characters in your post slug. Automatic SEO links With this tool you can create automatic linking in your posts. You can use it for internal linking or external linking too. Just select your words, anchor text, target URL and nature of links (dofollow/nofollow), and the plugin will replace the matches found in your posts. WP Backlinks A helpful plugin for link exchange: whenever a webmaster submits a link for link exchange, the plugin will spider the webmaster's site for a reciprocal link, and if everything is found to be good, your link will be exchanged. SEO Title Tag You can optimize the title tags of your WordPress blog through this plugin. You can also override the title tag with custom titles; mass editing and title tags for 404 pages are the main features of this plugin. 404 SEO plugin With this plugin you can customize the 404 page of your site; you can give a customized error message and links to relevant pages of your site.
    Redirection A powerful plugin to manage 301 redirections and the logs related to them. With this plugin you can track 404 errors and keep a log of all redirected URLs, and it can redirect a post automatically when the URL for that post changes. AddToAny This plugin helps your readers share, save, email and bookmark your posts and pages. It supports more than a hundred social bookmarking, networking and sharing sites. SEO Friendly Images You can make the images on your site SEO friendly with the help of this tool. It updates images with proper titles and ALT tags. Robots Meta A plugin which prevents search engines from indexing comments on your posts and your login and admin pages. It also allows you to add tags for individual pages.

    Read the article

  • JavaScript Class Patterns

    - by Liam McLennan
    To write object-oriented programs we need objects, and likely lots of them. JavaScript makes it easy to create objects: var liam = { name: "Liam", age: Number.MAX_VALUE }; But JavaScript does not provide an easy way to create similar objects. Most object-oriented languages include the idea of a class, which is a template for creating objects of the same type. From one class many similar objects can be instantiated. Many patterns have been proposed to address the absence of a class concept in JavaScript. This post will compare and contrast the most significant of them. Simple Constructor Functions Classes may be missing but JavaScript does support special constructor functions. By prefixing a call to a constructor function with the ‘new’ keyword we can tell the JavaScript runtime that we want the function to behave like a constructor and instantiate a new object containing the members defined by that function. Within a constructor function the ‘this’ keyword references the new object being created - so a basic constructor function might be: function Person(name, age) { this.name = name; this.age = age; this.toString = function() { return this.name + " is " + age + " years old."; }; } var john = new Person("John Galt", 50); console.log(john.toString()); Note that by convention the name of a constructor function is always written in Pascal Case (the first letter of each word is capital). This is to distinguish between constructor functions and other functions. It is important that constructor functions be called with the ‘new’ keyword and that non-constructor functions are not. There are two problems with the constructor function pattern shown above:
    - It makes inheritance difficult
    - The toString() function is redefined for each new object created by the Person constructor. This is sub-optimal because the function should be shared between all of the instances of the Person type.
    Constructor Functions with a Prototype JavaScript functions have a special property called prototype. When an object is created by calling a JavaScript constructor all of the properties of the constructor’s prototype become available to the new object. In this way many Person objects can be created that can access the same prototype. An improved version of the above example can be written: function Person(name, age) { this.name = name; this.age = age; } Person.prototype = { toString: function() { return this.name + " is " + this.age + " years old."; } }; var john = new Person("John Galt", 50); console.log(john.toString()); In this version a single instance of the toString() function will now be shared between all Person objects. Private Members The short version is: there aren’t any. If a variable is defined, with the var keyword, within the constructor function then its scope is that function. Other functions defined within the constructor function will be able to access the private variable, but anything defined outside the constructor (such as functions on the prototype property) won’t have access to the private variable. Any variables defined on the constructor are automatically public. Some people solve this problem by prefixing properties with an underscore and then not calling those properties by convention.
function Person(name, age) { this.name = name; this.age = age; } Person.prototype = { _getName: function() { return this.name; }, toString: function() { return this._getName() + " is " + this.age + " years old."; } }; var john = new Person("John Galt", 50); console.log(john.toString()); Note that the _getName() function is only private by convention – it is in fact a public function. Functional Object Construction Because of the weirdness involved in using constructor functions some JavaScript developers prefer to eschew them completely. They theorize that it is better to work with JavaScript’s functional nature than to try and force it to behave like a traditional class-oriented language. When using the functional approach objects are created by returning them from a factory function. An excellent side effect of this pattern is that variables defined with the factory function are accessible to the new object (due to closure) but are inaccessible from anywhere else. The Person example implemented using the functional object construction pattern is: var personFactory = function(name, age) { var privateVar = 7; return { toString: function() { return name + " is " + age * privateVar / privateVar + " years old."; } }; }; var john2 = personFactory("John Lennon", 40); console.log(john2.toString()); Note that the ‘new’ keyword is not used for this pattern, and that the toString() function has access to the name, age and privateVar variables because of closure. This pattern can be extended to provide inheritance and, unlike the constructor function pattern, it supports private variables. However, when working with JavaScript code bases you will find that the constructor function is more common – probably because it is a better approximation of mainstream class oriented languages like C# and Java. Inheritance Both of the above patterns can support inheritance but for now, favour composition over inheritance. Summary When JavaScript code exceeds simple browser automation object orientation can provide a powerful paradigm for controlling complexity. Both of the patterns presented in this article work – the choice is a matter of style. Only one question still remains; who is John Galt?
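    Since the article recommends favouring composition over inheritance, here is a small sketch (not from the original post) of what composition can look like with the factory approach shown above:
    var walkerFactory = function(name) {
        return {
            walk: function() { return name + " is walking"; }
        };
    };
    var talkerFactory = function(name) {
        return {
            talk: function() { return name + " is talking"; }
        };
    };
    // compose the two capabilities into one object instead of inheriting
    var composedPersonFactory = function(name, age) {
        var walker = walkerFactory(name),
            talker = talkerFactory(name);
        return {
            toString: function() { return name + " is " + age + " years old."; },
            walk: walker.walk,
            talk: talker.talk
        };
    };
    var john3 = composedPersonFactory("John Galt", 50);
    console.log(john3.walk()); // "John Galt is walking"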

    Read the article

  • Geocaching - World wide treasure hunt

    I'm not quite sure how I came across this topic but actually I find it absolutely interesting, challenging and most of all a great fun for the family and friends. The interesting part is for sure that you can follow other peoples treasures and their preferred locations where a cache might be hidden. Of course, it wont be easy to find a cache after all. Sometimes there are even 'mystery caches' which have either riddles, further instructions or little brain games for you in order to find the actual cache - that's the challenge. And last but not least, those caches are hidden outdoor. A great experience to explore nature either on your own, or your family especially with children, or as a treasure hunting pack with a couple of friends. What is geocaching? It's a high-tech outdoor treasure hunting game that's a great way to explore the world with friends, family or on your own. Participants use GPS-enabled devices to locate hidden containers called geocaches. There are over one million geocaches hidden around the world today, waiting for you to find them. Visit Geocaching.com to search for geocaches near you.(Source: Referral Email of geocaching.com) Checkout the Geocaching 101 for further details and information. They also provide a video channel on YouTube. Which equipment do I need? Any GPS-enabled device is sufficient to go onto the hunt. I'm going to start our geocaching experience equipped with my Samsung Galaxy Tab. Additionally, I installed a geocaching.com client called c:geo that hopefully assists me soon. Combined with a map app like Google Maps and a nice Compass app you should be fully equipped and ready to go. I guess, that even a car navigation system is perfect for that task. Later on, with more experience and demand for technology (or precision) it might be interesting to opt-in for a pure GPS device, like a Garmin or any other brand on the market. {loadposition content_adsense} What is a geocache and what does it contain? In its simplest form, a cache always contains a logbook or logsheet for you to log your find. Larger caches may contain a logbook and any number of items. These items turn the adventure into a true treasure hunt. You never know what the cache owner or visitors to the cache may have left for you to enjoy. Remember, if you take something, leave something of equal or greater value in return. It is recommended that items in a cache be individually packaged in a clear, zipped plastic bag to protect them from the elements. Finding your first geocache Well, first you have to have interest to pick up the challenge. Then you have to check out the Geocache directory on geocaching.com. They have recommendations for beginner's caches but you are free to choose any. Actually, we have a Mystery Cache very close to our base, and I guess that we are going for that one on our first trip. Anyway, there is a very informative guide on the website which should answer all your questions about starting your new outdoor adventure. For sure, it's going to be rewarding. Team up with friends and family Especially as a beginner there might be misunderstandings in handling the GPS coordinates, the compass, or the map, and even finding the container at the documented position isn't easy in the first place. Luckily, there are logbook reports online from other hunters, and most of the time there are even 'spoiler' images available. But also bear in mind, that a geocache might have been removed or is lost due to unconscious people or whatever other reasons. 
    Don't be disappointed in case you can't find anything... there may be nothing there anymore. A general recommendation in this case would be to replace the missing container with a new one, and give feedback to the original owner about the state of that particular location. After all, it's about fun and active participation in a world-wide community. Geocaches in Mauritius? Yes, there are currently about 45 geocaches spread all over the island, and even a single one in Rodrigues - that's gonna be a tough one. Hopefully we will see increasing numbers, as Geocaching.com allows, no, better, even encourages you to hide new containers at the locations of your choice. I think this is going to be real fun for us during the upcoming weeks and months, especially when we are travelling to other countries and transfer so-called trackable items between geocaches. On first impression, Geocaching.com seems to be very mature, open and community-oriented. There are literally hundreds of thousands of geocache 'hunters' all over the world. And usually finding a container remote from your home is very rewarding. I'll keep you updated on these matters during the months to come...

    Read the article

  • Oracle ATG Web Commerce 10 Implementation Developer Boot Camp - Reading (UK) - October 1-12, 2012

    - by Richard Lefebvre
    REGISTER NOW: Oracle ATG Web Commerce 10 Implementation Developer Boot Camp Reading, UK, October 1-12, 2012! OPN invites you to join us for a 10-day implementation bootcamp on Oracle ATG Web Commerce in Reading, UK from October 1-12, 2012.This 10-day boot camp is designed to provide partners with hands-on experience and technical training to successfully build and deploy Oracle ATG Web Commerce 10 Applications. This particular boot camp is focused on helping partners develop the essential skills needed to implement every aspect of an ATG Commerce Application from scratch, (not CRS-based), with a specific goal of enabling experienced Java/J2EE developers with a path towards becoming functional, effective, and contributing members of an ATG implementation team. Built for both new and experienced ATG developers alike, the collaborative nature of this program and its exercises, have proven to be highly effective and extremely valuable in learning the best practices for implementing ATG solutions. Though not required, this bootcamp provides a structured path to earning a Certified Oracle ATG Web Commerce 10 Specialization! What Is Covered: This boot camp is for Application Developers and Software Architects wanting to gain valuable insight into ATG application development best practices, as well as relevant and applicable implementation experience on projects modeled after four of the most common types of applications built on the ATG platform. The following learning objectives are all critical, and are of equal priority in enabling this role to succeed. This learning boot camp will help with: Building a basic functional transaction-ready ATG Web Commerce 10 Application. Utilizing ATG’s platform features such as scenarios, slots, targeters, user profiles and segments, to create a personalized user experience. Building Nucleus components to support and/or extend application functionality. Understanding the intricacies of ATG order checkout and fulfillment. Specifying, designing and implementing new commerce features in ATG 10. Building a functional commerce application modeled after four of the most common types of applications built on the ATG platform, within an agile-based project team environment and under simulated real-world project conditions. Duration: The Oracle ATG Web Commerce 10 Implementation Developer Boot Camp is an instructor-led workshop spanning 10 days. Audience: Application Developers Software Architects Prerequisite Training and Environment Requirements: Programming and Markup Experience with Java J2EE, JavaScript, XML, HTML and CSS Completion of Oracle ATG Web Commerce 10 Implementation Specialist Development Guided Learning Path modules Participants will be required to bring their own laptop that meets the minimum specifications:   64-bit PC and OS (e.g. Windows 7 64-bit) 4GB RAM or more 40GB Hard Disk Space Laptops will require access to the Internet through Remote Desktop via Windows. Agenda Topics: Week 1 – Day 1 through 5 Build a Basic Commerce Application In week one of the boot camp training, we will apply knowledge learned from the ATG Web Commerce 10 Implementation Developer Guided Learning Path modules, towards building a basic transaction-ready commerce application. There will be little to no lectures delivered in this boot camp, as developers will be fully engaged in ATG Application Development activities and best practices. 
    Developers will work independently on the following lab assignments from days 1 through 5:
    Lab Assignments:
    1. Environment Setup
    2. Build a Dynamic Home Page
    3. Site Authentication
    4. Build Customer Registration
    5. Display Top Level Categories
    6. Display Product Sub-Categories
    7. Display Product List Page
    8. Display Product Detail Page
    9. ATG Inventory
    10. Build "Add to Cart" Functionality
    11. Build Shopping Cart
    12. Build Checkout Page
    13. Build Checkout Review Page
    14. Create an Order and Build Order Confirmation Page
    15. Implement Slots and Targeters for Personalization
    16. Implement Pricing and Promotions
    17. Order Fulfillment
    Week 2 – Days 6 through 10: Team-based Case Project. In the second week of the boot camp training, participants will be asked to join a project team that will select a case project for the team to implement. Teams will be able to choose from four of the most common application types developed and deployed on the ATG platform: hard goods with physical fulfillment, soft goods with electronic fulfillment, a service or subscription case example, and a course/event registration case example. Team projects will have approximately 160 hours of use cases/stories for each team to build (40 hours per developer). Each day's use cases/stories build upon the prior day's work and therefore must be fully completed at the end of each day. Please note that this boot camp intends to simulate real-world project conditions, and as such will likely require project teams to work beyond normal business hours. To promote further collaboration and group learning, each team will be asked to present their work and share the methodologies and solutions that they've applied to their cases at the end of each day.
    Location: Oracle Reading CVC, TPC510 Room: Wraysbury, Reading, UK, 9:00 AM – 5:00 PM
    Registration Fee (10 Days): US $3,375
    Please click on the following link to REGISTER or visit the Oracle ATG Web Commerce 10 Implementation Developer Boot Camp page for more information.
    Questions: Patrick Ty, Partner Enablement, Oracle Commerce. Phone: 310.343.7687 Mobile: 310.633.1013 Email: [email protected]

    Read the article

  • Getting to grips with the stack in nasm

    - by MarkPearl
    Today I spent a good part of my day getting to grips with the stack and nasm. After looking at my notes on nasm, I think this is one area the course I am doing could focus on more… So here are some snippets I have put together that have helped me understand a little bit about the stack…
    Simplest example of the stack
    You will probably see examples like the following in circulation… these demonstrate the simplest use of the stack…
    org 0x100
    bits 16
    jmp main
    main:
    push 42h
    push 43h
    push 44h
    mov ah,2h ;set to display characters
    pop dx    ;get the first value
    int 21h   ;and display it
    pop dx    ;get 2nd value
    int 21h   ;and display it
    pop dx    ;get 3rd value
    int 21h   ;and display it
    int 20h
    The output from the above code would be…
    DCB
    Decoupling code using "call" and "ret"
    This is great, but it oversimplifies what I want to use the stack for… I do not know if this goes against the grain of assembly programmers or not, but I want to write loosely coupled assembly code – and I want to use the stack as a mechanism for passing values into my decoupled code. In nasm we have the call and ret instructions, which provide a mechanism for decoupling code; for example, the following could be done…
    org 0x100
    bits 16
    jmp main
    ;----------------------------------------
    displayChar:
    mov ah,2h
    mov dx,41h
    int 21h
    ret
    ;----------------------------------------
    main:
    call displayChar
    int 20h
    This would output the following to the console…
    A
    So, it would seem that call and ret allow us to jump to segments of our code and then return back to the calling position – a form of segmenting the code into what we would call in higher order languages "functions" or "methods". The only issue is, in higher order languages there is a way to pass parameters into the functions and return results. Because of the primitive nature of the call and ret instructions, this does not seem to be obvious. We could of course use the registers to pass values into the subroutine and set values coming out, but the problem with this is we…
    Have a limited number of registers
    Are threading our code with tight coupling (it would be hard to migrate methods outside of their intended use in a particular program to another one)
    With that in mind, I turn to the stack to provide a loosely coupled way of calling subroutines…
    First attempt with the stack
    Initially I thought this would be simple… we could use code that looks as follows to achieve what I want…
    org 0x100
    bits 16
    jmp main
    ;----------------------------------------
    displayChar:
    mov ah,2h
    pop dx
    int 21h
    ret
    ;----------------------------------------
    main:
    push 41h
    call displayChar
    int 20h
    However, running this application does not give the desired result. I want an 'A' to be displayed, and I am getting something totally different (you will too). Reading up on the call and ret instructions, a discovery is made… they are pushing and popping things onto and off the stack as well… When the call instruction is executed, the current value of IP (the address of the instruction to follow) is pushed onto the stack; when ret is called, the last value on the stack is popped off into the IP register. In effect, what the above code does to the stack is as follows…
    push 41h
    push current value of ip
    pop current value of ip to dx
    pop 41h to ip
    This is not what I want. I need to access the 41h that I pushed onto the stack, but the call value (which is necessary) is putting something in my way. So, what to do?
    Remember we have other registers we can use, as well as a thing called indirect addressing… So, after some reading around, I came up with the following approach using indirect addressing…
    org 0x100
    bits 16
    jmp main
    ;----------------------------------------
    displayChar:
    mov bp,sp
    mov ah,2h
    mov dx,[bp+2]
    int 21h
    ret
    ;----------------------------------------
    main:
    push 41h
    call displayChar
    int 20h
    In essence, what I have done here is use a trick with the stack pointer… it goes as follows…
    Push 41h onto the stack
    Make the call to the function, which pushes the IP register onto the stack and then jumps to the displayChar label
    Move the value in the stack pointer to the bp register (sp currently points at the saved IP)
    Move the value two bytes above bp ([bp+2], which is now the value 41h) to dx
    Display it, then execute the ret instruction, which pops the IP value off the stack and goes back to the calling point
    This approach is still very raw; some further reading around shows that I should be pushing the value of bp onto the stack before replacing it with sp, but it is the starting thread to getting loosely coupled subroutines. Let's see if you can work out what the output of the following would be…
    org 0x100
    bits 16
    jmp main
    ;----------------------------------------
    displayChar:
    mov bp,sp
    mov ah,2h
    mov dx,[bp+4]
    int 21h
    mov dx,[bp+2]
    int 21h
    ret
    ;----------------------------------------
    main:
    push 41h
    push 42h
    call displayChar
    int 20h
    The output is…
    AB
    Where to from here? If by any luck some assembly programmer comes along and sees this code and notices that I have made some fundamental flaw in my logic… I would like to know, so please leave a comment… I appreciate any feedback!
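    Following on from the note above about preserving bp, here is a minimal sketch (my own, not from the course material) of the more conventional prologue/epilogue: push bp before borrowing it and restore it before ret, so the caller's bp survives the call. The argument then sits at [bp+4], because the saved bp and the return address sit below it on the stack.
    org 0x100
    bits 16
    jmp main
    ;----------------------------------------
    displayChar:
    push bp          ; preserve the caller's bp
    mov bp,sp        ; [bp] = saved bp, [bp+2] = return IP, [bp+4] = first argument
    mov ah,2h
    mov dx,[bp+4]    ; fetch the character pushed by the caller
    int 21h
    pop bp           ; restore the caller's bp
    ret              ; the caller could discard the argument afterwards with add sp,2 (or use ret 2 here)
    ;----------------------------------------
    main:
    push 41h
    call displayChar
    int 20h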

    Read the article

  • Server Controls in ASP.NET MVC without ViewState

    - by imran_ku07
      Introduction : ASP.NET Web Forms provides a development environment just like a GUI or Windows application and tries to hide the stateless nature of the HTTP protocol. To accomplish this, Web Forms uses ViewState (a hidden field) to bridge the gap between HTTP statelessness and GUI applications. The problem with this technique is that the ViewState grows quickly and also goes back and forth with every request, which degrades application performance. In this article I will try to use existing ASP.NET server controls without ViewState.
      Description : When you add a server control which needs ViewState to a presentation view in an ASP.NET MVC application without a form tag, for example,
        <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
      it will throw the following exception:
        Control 'TextBox1' of type 'TextBox' must be placed inside a form tag with runat=server
      When you place this textbox inside a form tag with runat=server, it will add the following ViewState hidden field even when you disable ViewState with EnableViewState="false":
        <input type="hidden" value="/wEPDwUJMjgzMDgzOTgzZGQ6u9CwikhHEW39ObrHyLTPFSboPA==" id="__VIEWSTATE" name="__VIEWSTATE"/>
      The solution to this problem is to use the RenderControl method of the server control, which simply renders the HTML without any ViewState hidden field.
        <% TextBox txt = new TextBox();
           txt.Text = "abc";
           StringBuilder sb = new StringBuilder();
           System.IO.StringWriter textwriter = new System.IO.StringWriter(sb);
           HtmlTextWriter htmlwriter = new HtmlTextWriter(textwriter);
           txt.RenderControl(htmlwriter); %>
        <%= sb.ToString() %>
      This will render <input type="text" > without any ViewState. This technique becomes very useful when you are using rich server controls like GridView. For example, let's say you have a list of recalls in Model.Recalls; you can then show your tabular data as:
        <% GridView gv = new GridView();
           gv.AutoGenerateColumns = true;
           gv.DataSource = Model.Recalls;
           gv.DataBind();
           StringBuilder sb = new StringBuilder();
           System.IO.StringWriter textwriter = new System.IO.StringWriter(sb);
           HtmlTextWriter htmlwriter = new HtmlTextWriter(textwriter);
           gv.RenderControl(htmlwriter); %>
        <%= sb.ToString() %>
      This code might look odd in your presentation view. A better approach is to create an HTML helper method which contains the above code (a sketch of such a helper appears at the end of this entry).
      Summary : In some cases you might need to use existing ASP.NET Web Forms server controls but also dislike ViewState. In this article I tried to bridge this gap by using the RenderControl method of the Control class. Hopefully you enjoyed it and are ready to create HTML helpers for many of the existing server controls.
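    As a minimal sketch of the HTML helper mentioned above (the class and method names are illustrative, not from the original article), the rendering boilerplate can be wrapped in an extension method on HtmlHelper:
        using System.IO;
        using System.Text;
        using System.Web.Mvc;
        using System.Web.UI;

        public static class ServerControlHelpers
        {
            // Renders any Web Forms control to a string so a view can emit it
            // without a server-side form tag or a ViewState hidden field.
            public static string RenderWebControl(this HtmlHelper html, Control control)
            {
                var sb = new StringBuilder();
                using (var textWriter = new StringWriter(sb))
                using (var htmlWriter = new HtmlTextWriter(textWriter))
                {
                    control.RenderControl(htmlWriter);
                }
                return sb.ToString();
            }
        }
    A view could then call something like <%= Html.RenderWebControl(gv) %> instead of repeating the StringWriter/HtmlTextWriter plumbing inline.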

    Read the article

  • Monitoring almost anything with BizTalk 360

    - by Michael Stephenson
    When you work in an integration environment it is common to find yourself in a situation where you integrate with some unusual applications or have some unusual dependencies. That is the nature of integration. When you work with BizTalk, one of the common problems is that BizTalk is often the place where problems with the applications you integrate with are highlighted, and these external applications may have poor monitoring solutions. Fortunately, if you are working with a customer who uses BizTalk 360 then it contains a feature called the "Web Endpoint Manager". Typically the Web Endpoint Manager is used to monitor web services that you integrate with and will ping them at appropriate times to make sure they return the expected HTTP status code. When you have an unusual situation where you want to monitor something which is key to the success of your solution, but you find yourself having to consider a significant custom solution to monitor the external dependency, then the Web Endpoint Manager could be your friend. The endpoint manager monitors a URL and checks for a certain status code. This means that you can create your own ASPX web page and then make BizTalk 360 monitor this web page. Behind the web page you could write any code you wished. An example of this architecture is shown in the diagram below.
    In the custom web page you would implement some custom code to do whatever it is that you want to monitor. In the code snippet below you can see how the default Page_Load method performs some kind of check and then, depending on the result of the check, returns a certain HTTP status code (a hypothetical sketch of such a check appears at the end of this entry).
    protected void Page_Load(object sender, EventArgs e)
    {
        var result = CheckSomething();

        if (result == "Success")
            Response.StatusCode = 202;
        else if (result == "DatabaseError")
            Response.StatusCode = 510;
        else if (result == "SystemError")
            Response.StatusCode = 512;
        else
            Response.StatusCode = 513;
    }
    In BizTalk 360 you would go into the Monitor and Notify tab and then to BizTalk Environment, which gives you access to the Web Endpoint Manager. You need an alarm set up which configures how the endpoint will be checked. I'm not going to go through the details of creating the alarm as this is already documented in the BizTalk 360 documentation. One point to note is that in this example I set up a threshold alarm, which means that the URL is checked about every minute and, if there is an error that persists for a period of time, the alarm will raise the alert notification. In my example I configured the alarm to fire if the error persisted for 3 minutes. The below picture shows accessing the endpoint manager.
    In the Web Endpoint Manager you would then configure the endpoint to monitor and the HTTP response code which indicates all is working fine. The below picture shows this. I now have my endpoint monitoring set up and BizTalk 360 should be checking my custom endpoint to see that it is available. If I want to manually sanity-check that the endpoints I have registered are working fine, clicking the Refresh button will show whether they are all good or not. If the custom ASP.NET page which is checking my dependency hits a problem, you will see in the endpoint manager that the status code does not match the expected return code; the endpoint will display in red and you can see the problem. The below picture shows this. If I use specific HTTP response codes for the errors the custom ASP.NET page might encounter, I can easily interpret these to know what the problem is.
    Using the alarms and notifications in BizTalk 360, when your endpoint goes into an error state you can easily configure email or SMS notifications to tell you that your endpoint is having problems, and you can use BizTalk 360 to help correlate what the problem is so you can investigate further. Below you can see the email which tells me my endpoint is not working. When everything returns to normal you will see the status is now fixed, and you will see a situation like below where the WebEndpoints are now green and the return code matches what is expected.
    Conclusion
    As you can see, it is really easy to plug your own custom ASP.NET page into the BizTalk 360 web endpoint monitoring feature. This extension gives you the power to extend the monitoring to almost anything you want, as long as you can write some .NET code to check that the dependency is available and working. It would be interesting to hear of any ideas people have around things they would monitor with this extension. More details on the endpoint monitor can be found at the following link: http://www.biztalk360.com/tour/monitoring_notifications
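    As a purely hypothetical sketch of what could sit behind CheckSomething() (none of this is from the original post; the connection string name and status strings are illustrative), a database dependency check might look like this:
        using System;
        using System.Configuration;
        using System.Data.SqlClient;

        public partial class EndpointCheck : System.Web.UI.Page
        {
            // Returns a status string that Page_Load maps onto an HTTP status code.
            private string CheckSomething()
            {
                try
                {
                    var connectionString =
                        ConfigurationManager.ConnectionStrings["ExternalDependency"].ConnectionString;

                    using (var connection = new SqlConnection(connectionString))
                    {
                        connection.Open(); // throws if the database dependency is unreachable
                        return "Success";
                    }
                }
                catch (SqlException)
                {
                    return "DatabaseError";
                }
                catch (Exception)
                {
                    return "SystemError";
                }
            }
        }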

    Read the article

  • Integration with Multiple Versions of BizTalk HL7 Accelerator Schemas

    - by Paul Petrov
    Microsoft BizTalk Accelerator for HL7 comes with multiple versions of the HL7 implementation. One of the typical integration tasks is to receive one format and transmit another. For example, system A works with HL7 v2.4 messages, system B with v2.3, and system C with v2.2. System A exchanges messages with B and C. The logical solution is to create schemas in separate namespaces for each system and assign maps on the send ports. A schematic diagram of the messaging solution is shown below.
    Nothing is complex about that conceptually. At the implementation level things can get nasty, though, because of the elaborate nature of HL7 schemas and the sheer number of message types involved. If you try to implement the maps directly in the BizTalk Map Editor you will quickly get buried by thousands of links between subfields of HL7 segments. Since the task is repetitive – HL7 segments are reused between message types – it is natural to take advantage of this modular structure and reduce the amount of work through reuse. Here is where it makes sense to switch from the visual map editor to plain old XSLT. The implementation is done in three steps.
    First, create XSL templates to map the segments of one version to another. This can be done using the BizTalk Map Editor and then copying and modifying the generated XSL code to create one xsl:template per segment. Group all segments for a format mapping in one XSL file (we call it SegmentTemplates.xsl). Here is how the template for the PID segment (Patient Identification) looks:
    <xsl:template name="PID">
      <PID_PatientIdentification>
        <xsl:if test="PID_PatientIdentification/PID_1_SetIdPatientId">
          <PID_1_SetIdPid>
            <xsl:value-of select="PID_PatientIdentification/PID_1_SetIdPatientId/text()" />
          </PID_1_SetIdPid>
        </xsl:if>
        <xsl:for-each select="PID_PatientIdentification/PID_2_PatientIdExternalId">
          <PID_2_PatientId>
            <xsl:if test="CX_0_Id">
              <CX_0_Id>
                <xsl:value-of select="CX_0_Id/text()" />
              </CX_0_Id>
            </xsl:if>
            <xsl:if test="CX_1_CheckDigit">
              <CX_1_CheckDigitSt>
                <xsl:value-of select="CX_1_CheckDigit/text()" />
              </CX_1_CheckDigitSt>
            </xsl:if>
            <xsl:if test="CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed">
              <CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed>
                <xsl:value-of select="CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed/text()" />
              </CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed>
            <!-- ... skipped for brevity -->
    This is the most tedious and time-consuming part. Templates need to be created only for those segments that are used in the message interchange. Once this is done, the rest goes much more easily. The next step is to create message-type-specific XSL that references (imports) the segment templates XSL file. Inside this file, simply call the segment templates in the appropriate places.
    For example, the beginning of the mapping XSL for the ADT_A01 message would look like this:
    <xsl:import href="SegmentTemplates_23_to_24.xslt" />
    <xsl:output omit-xml-declaration="yes" method="xml" version="1.0" />
    <xsl:template match="/">
      <xsl:apply-templates select="s0:ADT_A01_23_GLO_DEF" />
    </xsl:template>
    <xsl:template match="s0:ADT_A01_23_GLO_DEF">
      <ns0:ADT_A01_24_GLO_DEF>
        <xsl:call-template name="EVN" />
        <xsl:call-template name="PID" />
        <xsl:for-each select="PD1_PatientDemographic">
          <xsl:call-template name="PD1" />
        </xsl:for-each>
        <xsl:call-template name="PV1" />
        <xsl:for-each select="PV2_PatientVisitAdditionalInformation">
          <xsl:call-template name="PV2" />
        </xsl:for-each>
    This code simply calls the segment templates directly for required singular elements, and inside a for-each loop for optional or repeating elements. And lastly, create a BizTalk map (.btm) that references the message-type-specific XSL. It is essentially an empty map with its Custom XSL Path set to the appropriate XSL. In the end, you will have one segment templates file that is referenced by many message-type-specific XSL files, which are in turn used by BizTalk maps. Once all the segment templates are created they are widely reusable, and the rest of the work is very simple and clean.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #006

    - by pinaldave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.
    2006
    This was my very first year of blogging, so I was learning something new every day. As I have said many times, blogging was never the intention. I had really not understood what exactly I was starting when I began blogging in 2006. I never knew that my life was going to change forever once I started blogging. When I look back at that year, I am happy that we are here together.
    2007
    IT Outsourcing to India – Top 10 Reasons Companies Outsource. Outsourcing is about trust, collaboration and success. Helping other countries in need has always been the course of mankind; outsourcing is nothing different than that. With information technology and process improvements increasing the complexity, costs and skills required to accomplish routine tasks as well as challenging complex tasks, companies are outsourcing such tasks to providers who have the expertise to perform them at lower costs, with greater value and quality outcomes.
    UDF – Remove Duplicate Chars From String. This was a very interesting function I wrote early in my career. I am still using this function when I have to remove duplicate chars from strings. I have yet to come across a scenario where it does not work, so I keep on using it to this day. Please leave a comment if there is any better solution to this problem.
    FIX : Error : 3702 Cannot drop database because it is currently in use. This is a very generic error which occurs when the DROP DATABASE command is executed and the database is not dropped. The common mistake is that the user has kept a connection open to the database while trying to drop it. The database cannot be dropped if there is any other connection open to it. It is always a good idea to take the database into single-user mode before dropping it (see the short T-SQL sketch at the end of this entry). Here is the quick tutorial regarding how to bring the database into single-user mode: Using T-SQL | Using SSMS.
    2008
    Install SQL Server 2008 – How to Upgrade to SQL Server 2008 – Installation Tutorial. This was indeed one of the most popular articles for SQL Server 2008. Lots of people wanted to learn how to install SQL Server 2008 but were facing various issues during installation. I built this tutorial, which became a reference point for many.
    Default Collation of SQL Server 2008. What is the collation of a default SQL Server 2008 installation? I often see this question confusing many experienced developers as well. Well, the answer is in the following image.
    Ahmedabad SQL Server User Group Meeting – November 2008. User group meetings are fun; nowadays I am going to user group meetings every week, but there was a time when I was just a beginner on this subject. The community bug caught me years ago when I started to present at the Ahmedabad and Gandhinagar SQL Server User Groups.
    2009
    Validate an XML document in TSQL using XSD. My friend Jacob Sebastian wrote an excellent article on the subject of XML and XSD. Because of the 'eXtensible' nature of XML (eXtensible Markup Language), often there is a requirement to restrict and validate the content of an XML document to a pre-defined structure and values. XSD (XML Schema Definition Language) is the W3C recommended language for describing and validating XML documents.
    SQL Server implements XSD as XML Schema Collections.
    Star Join Query Optimization. At present, when queries are sent to very large databases, millions of rows are returned. Users also endure extended query response times when such queries involve joins across multiple tables. 'Star Join Query Optimization' is a new feature of SQL Server 2008 Enterprise Edition. This mechanism uses bitmap filtering to improve the performance of some types of queries through the effective retrieval of rows from fact tables.
    2010
    These puzzles are very interesting and intriguing – there was lots of interest in this subject. If you have free time this weekend, you may want to try them out.
    SQL SERVER – Challenge – Puzzle – Usage of FAST Hint (Solution)
    SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal (Solution)
    SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists (Open)
    Additionally, I had great fun presenting the SQL Server Performance Tuning seminar at fantastic locations in Hyderabad.
    Installing AdventureWorks Database. This has been the most popular request I have received on my blog. Here is a quick video about how one can install AdventureWorks.
    2011
    Effect of SET NOCOUNT on @@ROWCOUNT. There was an interesting incident once while I was presenting a session. I wrote some code and suddenly 10 hands went up in the air. This was a bit of a surprise to me, as I did not know why they all got alerted. I assumed that there must be something wrong with either the project, the screen or my display. However, the real reason was very interesting – I suggest you read the complete blog post to understand this interesting scenario.
    Error: Deleting Offline Database and Creating the Same Name. This is very interesting because once a user deletes an offline database, the MDF and LDF files still exist, and if the user attempts to create a new database with the same name it will give an error. I found this very interesting and the blog explains the concept very quickly. Have you ever faced a similar situation?
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
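    As a minimal T-SQL sketch of the single-user-mode advice above (the database name SampleDb is illustrative, not taken from the original posts), the pattern looks like this:
        -- Force the database into single-user mode, rolling back any open connections,
        -- so that the subsequent DROP does not fail with error 3702.
        USE master;
        ALTER DATABASE SampleDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE SampleDb;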

    Read the article

  • Surface and the Uphill Battle to Win Over iPad Users (Namely: Me)

    - by D'Arcy Lussier
    I went away this past weekend and decided to bring along the Windows 8 tablet from the Build conference last year – y’know, to give Windows 8 a try in a typical scenario. I also brought our iPad 2 along since I figured my wife would want to use that. I’d love to tell you how I found using my Windows 8 tablet but I can’t – I used the iPad exclusively the entire weekend. It was during this that I realized what Microsoft needs to do to win me over as an iPad user. As you’ll see, I’m left wondering what it is that Surface is meant to compete with: iPad and other tablets, or thin laptops like the MacBook Air or Ultrabooks. Device Size I really like the size of the iPad compared with the Build tablet. It’s not as long and the thinness/weight of the device makes it feel more like you’re holding a magazine than a computer. I’m pleased that Microsoft will be matching the thinness of the iPad with Surface, but I’m suspect as to what that actually means. The iPad’s edges slant inwards where the Surface has a thicker boxish look (similar to the iPhone 4S). So while they may have the same depth at the deepest part of both devices, I bet the iPad will come off feeling thinner. However, its not lost on me the number of external port options the Surface’s design provides over the iPad (Usb, etc.). With that said, I haven’t missed having a USB slot on my iPad. I’m not a fan of lengthening the Surface screen size to almost a full inch over the iPad, mainly because… Vertical Orientation Experience Did you notice at the announce event, in the images of the devices that have been released, and in any marketing for it, that the surface is always displayed in horizontal orientation. This is a huge beef I have with my Build tablet and why I prefer the iPad. Yes the iPad can do the wide-screenish mode, but the iPad is oriented to be vertical by nature. Don’t agree? Look at the button and camera placement – both on the shorter sides of the device. Compare that with the Surface, where the orientation for the button and camera is on the longer sides. To be fair, Blackberry and the horde of Android tablets out there haven’t gotten this either – since most monitors are widescreen nowadays tablets should be too right? Wrong. Widescreen is great for certain things, but tasks such as reading is not one of them – hence why monitor companies like Dell provide stands that allow you to flip your widescreen monitor to a vertical orientation. That Microsoft has chosen a horizontal orientation by default for Windows 8 is disappointing – hopefully hardware manufacturers will be given the option of a default vertical orientation. Fast Startup Time I like that I can turn off/turn on the iPad very quickly. Even from a true “off” mode and not just sleeping, the iPad boots up very quickly. Windows RT needs to have that same quick response. If I start finding that I’m waiting for the device to boot up for more than 30 seconds that could be a show stopper. No Heat I really hate that the Build tablet has fans that kick in to cool the procs, but its basically a slate computer and I get its part of that prototype build. For Surface, it needs to be the same type of experience as the iPad – no heat! I know Surface doesn’t have fans and uses some cool new vent system or something like that, but even then – I want to sit and read a book on my Surface without having to feel any heat coming from the device, which is the experience I have with the iPad now. What About Apps?! I am definitely not the target client when it comes to app stores. 
On my iPad I use: Safari Kindle Reader Twitter App Settlers of Catan TSN’s App And that’s it. So really, while its nice that some version of Office might be available, I’m not planning on utilizing a Surface for creating a PowerPoint or working on a Word document – that’s what my laptop is for. I want my tablet to be for information snacking or as an e-reader and occasionally an entertainment device. Surface vs iPad or Surface vs Air? The more that I read up on Surface, the more I wonder if it won’t be a touch-enabled MacBook Air competitor more than an iPad one. Also, I really question if Microsoft gets tablets – when one of your main selling features is a built-in physical keyboard it speaks more to a traditional laptop experience than a tablet one that’s entirely reliant on touch. Still, I really love the Windows Phone interface – way more than iOS – so I’m still very optimistic that the Metro experience on the tablet will be fantastic. I just worry that Microsoft has interpreted a tablet as a computer with a removable keyboard and a touch screen, and that’s not what tablet computing is about at all.

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones ( Blog | Twitter) – “What the Business Says Is Not What the  Business Wants.” Steve posed the question, “What issues have you had in interacting with the business to get your job done?” My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants because the business doesn't have the words to describe what the business wants accurately enough for IT. Not only do technological-savvy barriers exist, but there are also linguistic barriers between the two worlds. So how do I cope? The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean about these two design options, let's start with a picture: When we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right. When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh. I prefer to take a right-to-left approach. Preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we’re back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it’s a painful project for everyone – not because of me, but because the approach just doesn’t get to what the business wants in the most effective way.) By using a right to left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explaining the decision-making process that relates to these reports. Sometimes I have them explain to me their business processes, or better yet show me their business processes in action because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above. My next step is to start moving leftward. I do this by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube. 
Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!) We can talk about dimensions, hierarchies, levels, members, measures, and so on with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build. So right to left design is not truly moving from the right to the left. But it starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that’s how I get past what the business says to what the business wants.

    Read the article

  • Great event : Microsoft Visual Studio 2010 Launch @ Microsoft TechEd Blore

    - by sathya
    I was really excited to attend day 1 of Microsoft TechEd 2010 in Bangalore. This is the first TechEd that I am attending. The event was really fun, filled with lots of knowledge-sharing sessions and lots of goodies and gifts from the partners.
    Initially the event started with Murthy's session. He spoke about developers in relation to the 5 elements of nature (Pancha Boothaas):
    1. Fire - Passion
    2. Wave (Water) - Catch the right wave, which we need to apply
    3. Earth - Connections and lots of opportunities around the world
    4. Air - It's whatever we breathe. Developers – without them nothing is possible; they are like the air
    5. Sky - Cloud based applications
    Next came the keynote and the announcement of Visual Studio by Somasegar. The list of things he delivered his speech on:
    1. Announcement of Visual Studio 2010
    2. Announcement of .NET 4.0
    3. Announcement of Silverlight later this week
    4. What is the current trend? Microsoft did research with many developers across the globe and got the following feedback from users:
    Get Lost (interrupted) - When we are doing some work and somebody calls or interrupts us some other way, we lose track of what we were doing and have to start over from the beginning.
    Falling Behind - Technology gets updated phenomenally over a period of time, and developers always feel they are not on state-of-the-art technology and always doubt whether they are staying updated.
    Lack of Collaboration - When a manager asks what the team members have done, some work might be done and some might not be, and finally everyone ends up in a state of "we don't know where we are".
    They have addressed these 3 points in VS 2010 with the following features:
    Get Lost - Some cool features which could overcome this: a graphical interface which shows what we have done and where we are, and some zoom features at the code level.
    Falling Behind - Everything is based on the .NET language base. 2010 has been built in such a way that if developers know the native language, that's enough for building good applications.
    Lack of Collaboration - Dashboard features which show exactly where the project is, and a graphical user interface which, on clicking, drills down even to the code level.
    5. An overview of all the new features in VS 2010.
    6. Some good demos of new features in VS 2010 by Polita and another presenter.
    Some of the new features included:
    1. Team Explorer
    2. Zoom in code
    3. Ribbon development
    4. Development on a single platform for Windows Phone, Xbox, Zune, Azure, web-based and Windows-based applications
    5. Sequence diagram generation directly from code
    6. Dashboards to show project status
    7. JavaScript and jQuery IntelliSense
    8. Native support for jQuery
    9. Packaging feature while deploying
    10. Generation of different versions of web.config, like Web.Config.Production, Web.Config.Staging, etc.
    11. IntelliTrace - eliminating the "Not Reproducible" statement
    12. Automated user interface testing
    At last, in the closing of the day, we had a great event called Demo Extravaganza, where lots of cool projects that were launched by Microsoft, as well as projects that are under research, were shown. I got a lot of info about Bing today. BING really rocks!!! It has the following:
    1. Visual Search
    2. Product-based search. For each product, different menu filters were provided to enable an advanced search
    3. BING Maps was awesome!!
    It zoomed in to street level, and we could imagine we were the person walking or running on the road, seeing real objects like buildings moving past us.
    4. Photosynth was used in BING to show all the images taken around the globe in a 3D format
    5. Formula - if we give it some formula, it automatically gives the value for the variable or the derivation of the expression
    There was also some info about some cool touch apps that handle authentication and compute the points TechEd attendees have scored and the sessions they attended. One guy won an XBOX in a lucky draw as a gift. There were lots of partner stalls like Accenture, Intel, Citrix, MicroFocus, Telerik, Infragistics, Sapient etc. Some offers were provided for us, like 50% off certifications, 1 free e-learning course, etc. Stay tuned!! Will update you on other events too…

    Read the article

  • Take Advantage of Oracle's Ongoing Assurance Effort!

    - by eric.maurice
    Hi, this is Eric Maurice again! A few years ago, I posted a blog entry, which discussed the psychology of patching. The point of this blog entry was that a natural tendency existed for systems and database administrators to be reluctant to apply patches, even security patches, because of the fear of "breaking" the system. Unfortunately, this belief in the principle "if it ain't broke, don't fix it!" creates significant risks for organizations. Running systems without applying the proper security patches can greatly compromise the security posture of the organization because the security controls available in the affected system may be compromised as a result of the existence of the unfixed vulnerabilities. As a result, Oracle continues to strongly recommend that customers apply all security fixes as soon as possible. Most recently, I have had a number of conversations with customers who questioned the need to upgrade their highly stable but otherwise unsupported Oracle systems. These customers wanted to know more about the kind of security risks they were exposed to, by running obsolete versions of Oracle software. As per Oracle Support Policies, Critical Patch Updates are produced for currently supported products. In other words, Critical Patch Updates are not created by Oracle for product versions that are no longer covered under the Premier Support or Extended Support phases of the Lifetime Support Policy. One statement used in each Critical Patch Update Advisory is particularly important: "We recommend that customers upgrade to a supported version of Oracle products in order to obtain patches. Unsupported products, releases and versions are not tested for the presence of vulnerabilities addressed by this Critical Patch Update. However, it is likely that earlier versions of affected releases are also affected by these vulnerabilities." The purpose of this warning is to inform Oracle customers that a number of the vulnerabilities fixed in each Critical Patch Update may affect older versions of a specific product line. In other words, each Critical Patch Update provides a number of fixes for currently supported versions of a given product line (this information is listed for each bug in the Risk Matrices of the Critical Patch Update Advisory), but the unsupported versions in the same product line, while they may be affected by the vulnerabilities, will not receive the fixes, and are therefore vulnerable to attacks. The risk assumed by organizations wishing to remain on unsupported versions is amplified by the behavior of malicious hackers, who typically will attempt to, and sometimes succeed in, reverse-engineering the content of vendors' security fixes. As a result, it is not uncommon for exploits to be published soon after Oracle discloses vulnerabilities with the release of a Critical Patch Update or Security Alert. Let's consider now the nature of the vulnerabilities that may exist in obsolete versions of Oracle software. A number of severe vulnerabilities have been fixed by Oracle over the years. While Oracle does not test unsupported products, releases and versions for the presence of vulnerabilities addressed by each Critical Patch Update, it should be assumed that a number of the vulnerabilities fixed with the Critical Patch Update program do exist in unsupported versions (regardless of the product considered). 
The most severe vulnerabilities fixed in past Critical Patch Updates may result in full compromise of the targeted systems, down to the OS level, by remote and unauthenticated users (these vulnerabilities receive a CVSS Base Score of 10.0) or almost as critically, may result in the compromise of the affected systems (without compromising the underlying OS) by a remote and unauthenticated users (these vulnerabilities receive a CVSS Base Score of 7.5). Such vulnerabilities may result in complete takeover of the targeted machine (for the CVSS 10.0), or may result in allowing the attacker the ability to create a denial of service against the affected system or even hijacking or stealing all the data hosted by the compromised system (for the CVSS 7.5). The bottom line is that organizations should assume the worst case: that the most critical vulnerabilities are present in their unsupported version; therefore, it is Oracle's recommendation that all organizations move to supported systems and apply security patches in a timely fashion. Organizations that currently run supported versions but may be late in their security patch release level can quickly catch up because most Critical Patch Updates are cumulative. With a few exceptions noted in Oracle's Critical Patch Update Advisory, the application of the most recent Critical Patch Update will bring these products to current security patch level and provide the organization with the best possible security posture for their patch level. Furthermore, organizations are encouraged to upgrade to most recent versions as this will greatly improve their security posture. At Oracle, our security fixing policies state that security fixes are produced for the main code line first, and as a result, our products benefit from the mistakes made in previous version(s). Our ongoing assurance effort ensures that we work diligently to fix the vulnerabilities we find, and aim at constantly improving the security posture our products provide by default. Patch sets include numerous in-depth fixes in addition to those delivered through the Critical Patch Update and, in certain instances, important security fixes require major architectural changes that can only be included in new product releases (and cannot be backported through the Critical Patch Update program). For More Information: • Mary Ann Davidson is giving a webcast interview on Oracle Software Security Assurance on February 24th. The registration link for attending this webcast is located at http://event.on24.com/r.htm?e=280304&s=1&k=6A7152F62313CA09F77EBCEEA9B6294F&partnerref=EricMblog • A blog entry discussing Oracle's practices for ensuring the quality of Critical patch Updates can be found at http://blogs.oracle.com/security/2009/07/ensuring_critical_patch_update_quality.html • The blog entry "To patch or not to patch" is located at http://blogs.oracle.com/security/2008/01/to_patch_or_not_to_patch.html • Oracle's Support Policies are located at http://www.oracle.com/us/support/policies/index.html • The Critical Patch Update & Security Alert page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html

    Read the article

  • Alternative Grid Layout for Silverlight suggestion

    - by brainbox
    I've proposed a suggestion to create an alternative grid layout for Silverlight. Please vote for it if you have also faced the same problems. As I wrote before, the current Silverlight Grid layout breaks the best practices of the HTML and Adobe Flex grid layouts. The current definition-based approach has the following disadvantages that make XAML coding very hard:
    1. It is very hard to create a new row. In that case you have to rewrite Grid.Row and Grid.Column for all rows inserted below.
    2. Definitions are static by their nature, and because of that it is impossible to use the grid for dynamic forms. Currently, even in the toolkit DataForm, Microsoft is using a StackPanel. But StackPanel is not designed for the multi-column layout that a data form has.
    Here is sample code for the Adobe Flex Grid, which incorporates the best practices of HTML.
    <mx:Grid id="myGrid">
        <!-- Define Row 1. -->
        <mx:GridRow id="row1">
            <!-- Define the first cell of Row 1. -->
            <mx:GridItem>
                <mx:Button label="Button 1"/>
            </mx:GridItem>
            <!-- Define the second cell of Row 1. -->
            <mx:GridItem>
                <mx:Button label="2"/>
            </mx:GridItem>
            <!-- Define the third cell of Row 1. -->
            <mx:GridItem>
                <mx:Button label="Button 3"/>
            </mx:GridItem>
        </mx:GridRow>
        <!-- Define Row 2. -->
        <mx:GridRow id="row2">
            <!-- Define a single cell to span three columns of Row 2. -->
            <mx:GridItem colSpan="3" horizontalAlign="center">
                <mx:Button label="Long-Named Button 4"/>
            </mx:GridItem>
        </mx:GridRow>
        <!-- Define Row 3. -->
        <mx:GridRow id="row3">
            <!-- Define an empty first cell of Row 3. -->
            <mx:GridItem/>
            <!-- Define a cell to span columns 2 and 3 of Row 3. -->
            <mx:GridItem colSpan="2" horizontalAlign="center">
                <mx:Button label="Button 5"/>
            </mx:GridItem>
        </mx:GridRow>
    </mx:Grid>
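    For contrast, here is a minimal sketch (my own illustration, not part of the suggestion) of the first row of the same layout using the current definition-based Silverlight/WPF Grid. Every cell must point back at the row and column definitions via attached properties, which is exactly what makes inserting a new row above existing ones so painful:
    <Grid x:Name="myGrid">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto" />
            <ColumnDefinition Width="Auto" />
            <ColumnDefinition Width="Auto" />
        </Grid.ColumnDefinitions>
        <!-- Each cell names its row/column; adding a row above means renumbering all of these. -->
        <Button Content="Button 1" Grid.Row="0" Grid.Column="0" />
        <Button Content="2" Grid.Row="0" Grid.Column="1" />
        <Button Content="Button 3" Grid.Row="0" Grid.Column="2" />
    </Grid>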

    Read the article

  • What Counts For A DBA: Foresight

    - by drsql
    Of all the valuable attributes of a DBA covered so far in this series, ranging from passion to humility to practicality, perhaps one of the most important attributes may turn out to be the most seemingly-nebulous: foresight. According to the Free Dictionary, foresight is the "perception of the significance and nature of events before they have occurred". Foresight does not come naturally to most people, as the parent of any teenager will attest. No matter how clearly you see their problems coming they won't listen, and have to fail before eventually (hopefully) learning for themselves. Having graduated from the school of hard knocks, the DBA, the naive teenager no longer, acquires the ability to foretell how events will unfold in response to certain actions or attitudes with the unerring accuracy of a doom-laden prophet. Like Simba in the Lion King, after a few blows to the head, we foretell the sore head that will be the inevitable consequence of a swing of Rafiki's stick, and we take evasive action. However, foresight is about more than simply learning when to duck. It's about taking the time to understand and prevent the habits that caused the stick to swing in the first place. And based on this definition, I often think there is a lot less foresight on display in my industry than there ought to be. Most DBAs reading this blog will spot a line such as the following in a piece of "working" code, understand immediately why it is less than optimum, and take evasive action.
    …WHERE CAST (columnName as int) = 1
    (A short sketch of a sargable alternative appears at the end of this entry.) However, the programmers who regularly write this sort of code clearly lack that foresight, and this and numerous other examples of similarly-malodorous code prevail throughout our industry (and provide premium-grade fertilizer for the healthy growth of many a consultant's bank account). Sometimes, perhaps harried by impatient managers and painfully tight deadlines, everyone makes mistakes. Yes, I too occasionally write code that "works", but basically stinks. When the problems manifest, it is sometimes accompanied by a sense of grim recognition that somewhere in me existed the foresight to know that that approach would lead to this problem. However, in the headlong rush, warning signs got overlooked, and lessons learned previously, which could supply the foresight to the current project, were lost and not applied. Of course, the problem often is a simple lack of skills, training and knowledge in the relevant technology and/or business space; programmers and DBAs forced to do their best in the face of inadequate training, or to apply their skills in areas where they lack experience. However, often the problem goes deeper than this; I detect in some DBAs and programmers a certain laziness of attitude. They veer from one project to the next, going with "whatever works", unwilling or unable to take the time to understand where their actions are leading them. Of course, the whole "Agile" mindset is often interpreted to favor flexibility and rapid production over aiming to get things right the first time. The faster you try to travel in the dark, frequently changing direction, the more important it is to have someone who has the foresight to know at least roughly where you are heading. This is doubly true for the data tier which, no matter how you try to deny it, simply cannot be "redone" every month as you learn aspects of the world you are trying to model that, with a little bit of foresight, you would have seen coming.
Sometimes, when as a DBA you can glance briefly at 200 lines of working SQL code and know instinctively why it will cause problems, foresight can feel like magic, but it isn't; it's more like muscle memory. It is acquired as the consequence of good experience, useful communication with those around you, and a willingness to learn continually, through continued education as well as from failure. Foresight can be deployed only by finding time to understand how the lessons learned from other DBAs, and other projects, can help steer the current project in the right direction.   C.S. Lewis once said "The future is something which everyone reaches at the rate of sixty minutes an hour, whatever he does, whoever he is." It cannot be avoided; the quality of what you build now is going to affect you, and others, at some point in the future. Take the time to acquire foresight; it is a love letter to your future self, to say you cared.
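    As a small illustrative T-SQL sketch (mine, not the author's – the table and column names are invented), the kind of foresight described above is the difference between wrapping the column in a function and leaving it bare so an index can be used:
        -- Non-sargable: casting the column prevents an index seek on columnName.
        SELECT OrderId
        FROM dbo.Orders
        WHERE CAST(columnName AS int) = 1;

        -- Sargable alternative: move the conversion to the literal side (or, better, fix the
        -- column's data type). Note the two forms are only equivalent when the column stores
        -- canonical values such as '1' rather than '01'.
        SELECT OrderId
        FROM dbo.Orders
        WHERE columnName = '1';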

    Read the article

  • Windows Azure Recipe: Software as a Service (SaaS)

    - by Clint Edmonson
    The cloud was tailor-made for aspiring companies to create innovative internet-based applications and solutions. Whether you're a garage startup with very little capital or a Fortune 1000 company, the ability to quickly set up, deliver, and iterate on new products is key to capturing market and mind share. And if you can capture that share and go viral, having resiliency and infinite scale at your fingertips is great peace of mind.
    Drivers
    Cost avoidance
    Time to market
    Scalability
    Solution
    Here's a sketch of how a basic Software as a Service solution might be built out:
    Ingredients
    Web Role – this hosts the core web application. Each web role will host an instance of the software and, as the user base grows, additional roles can be spun up to meet demand.
    Access Control – this service is essential to managing user identity. It's backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it's also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model provides extensibility to hook into other industry-specific identity providers as well.
    Databases – nearly every modern SaaS application is backed by a relational database for its core operational data. If the solution is sold to organizations, there's a good chance multi-tenancy will be needed. An emerging best practice for SaaS applications is to stand up separate SQL Azure database instances for each tenant's proprietary data to ensure isolation from other tenants.
    Worker Role – this is the best place to handle autonomous background processing such as data aggregation, billing through external services, and other specialized tasks that can be performed asynchronously. Placing these tasks in a worker role frees the web roles to focus completely on user interaction and data input, and provides finer-grained control over the system's scalability and throughput (a minimal sketch of a worker role appears at the end of this entry).
    Caching (optional) – as web site traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    Blobs (optional) – depending on the nature of the software, users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely.
    Training & Examples
    These links point to online Windows Azure training labs and examples where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
    Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds.
    New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    Developing Applications for the Cloud, 2nd Edition (eBook) – This book demonstrates how you can create from scratch a multi-tenant, Software as a Service (SaaS) application to run in the cloud using the latest versions of the Windows Azure Platform and tools. The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud.
    Fabrikam Shipping (SaaS reference application) – This is a full end-to-end sample scenario which demonstrates how to use the Windows Azure platform for exposing an application as a service. We developed this demo just as you would: we had an existing on-premises sample, Fabrikam Shipping, and we wanted to see what it would take to transform it into a full subscription-based solution. The demo you find here is the result of that investigation.
    See my Windows Azure Resource Guide for more guidance on how to get started, including more links to web portals, training kits, samples, and blogs related to Windows Azure.
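    As a minimal sketch of the worker role ingredient described above (my own illustration of the standard RoleEntryPoint template from the Azure SDK of that era, not code from the recipe; the background-work call is a placeholder):
        using System;
        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class BackgroundWorkerRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // Raise the default connection limit for outbound calls to external services.
                System.Net.ServicePointManager.DefaultConnectionLimit = 12;
                return base.OnStart();
            }

            public override void Run()
            {
                // Loop forever picking up asynchronous work (data aggregation, billing, etc.)
                // so the web roles can stay focused on user interaction.
                while (true)
                {
                    ProcessPendingWork();                    // placeholder for the application's background tasks
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            }

            private void ProcessPendingWork()
            {
                // Hypothetical: read messages from a queue, aggregate data, call a billing service, etc.
            }
        }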

    Read the article

< Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >