Search Results

Search found 8725 results on 349 pages for 'beginning steps'.

Page 51/349 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • SOA+OSB in same JVM

    - by Manoj Neelapu
    Oracle Service Bus 11gR1 (11.1.1.3) supports running in the same JVM as SOA. This tutorial covers how to create a combined SOA+OSB domain that runs in a single JVM. For this tutorial we will use a flavor of the WebLogic installer bundled with both the OEPE and Coherence components (e.g. oepe111150_wls1033_win32.exe). The WebLogic installer bundled with the Coherence and OEPE components can be seen in the screenshot. Oracle Service Bus 11gR1 (11.1.1.3) has built-in caching support for Business Services using Coherence, so we will have to install Coherence before installing OSB. To get SOA and OSB running in the same domain, we have to install SOA and OSB into the above ORACLE_HOME. After installation we should see both the SOA and OSB homes, as highlighted in red. We can also see that the Coherence components, which are mandatory for OSB, and the optional OEPE components are installed. Now we will execute RCU (ofm_rcu_win_11.1.1.3.0_disk1_1of1) to install the schemas for SOA and OSB. The new RCU contains OSB tables (WLI_QS_REPORT_DATA, WLI_QS_REPORT_ATTRIBUTE) that get loaded as part of the SOAINFRA schema. After this step we will have to create the SOA+OSB domain using the configuration wizard, which is located under $WEBLOGIC_HOME\common\bin\config.* (.cmd or .sh as per your platform). While creating the domain we will select the options for SOA Suite and Oracle Service Bus Extension - All Domain Topologies. There is another OSB option, Oracle Service Bus Extension - Single Server Domain Topology, for users who want to run OSB in a single-server configuration. SOA does not currently support the single-server topology, so that option cannot be used with a SOA domain; it is only for standalone OSB installations. We can continue with domain configuration until we reach the screen below. The following steps are mandatory if we want to have SOA and OSB run in the same JVM: we should select Managed Servers, Clusters and Machines as shown below. After this selection you should see a screen with two servers: one managed server for OSB and one managed server for SOA. Since we would like to have both in one managed server (one JVM), we have to do one important step here: delete either of the servers and rename the remaining server with the deleted server's name, e.g. delete osb_server1 and rename soa_server1 to osb_server1, or delete soa_server1 and rename osb_server1 to soa_server1. After this step, proceed as usual. If we inspect the created domain, we see only one managed server, which contains the components for both SOA and OSB ($DOMAIN_HOME/startManagedWebLogic_readme.txt).
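    For reference, launching the configuration wizard from the command line looks like this (a minimal sketch; adjust WEBLOGIC_HOME to your own installation):

        :: Windows
        cd %WEBLOGIC_HOME%\common\bin
        config.cmd

        # Unix/Linux
        cd $WEBLOGIC_HOME/common/bin
        ./config.sh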

    Read the article

  • Big Data Matters with ODI12c

    - by Madhu Nair
    contributed by Mike Eisterer

    On October 17th, 2013, Oracle announced the release of Oracle Data Integrator 12c (ODI12c). This release signifies improvements to Oracle's Data Integration portfolio of solutions, particularly Big Data integration.

    Why Big Data = Big Business

    Organizations are gaining greater insights and actionability through the increased storage, processing and analytical benefits offered by Big Data solutions. New technologies and frameworks like HDFS, NoSQL, Hive and MapReduce support these benefits now. As more data is collected, analytical requirements increase, the complexity of managing transformations and aggregations of data compounds, and organizations need scalable Data Integration solutions. ODI12c provides enterprise solutions for the movement, translation and transformation of information and data, heterogeneously and in Big Data environments, through:

    - The ability for existing ODI and SQL developers to leverage new Big Data technologies.
    - A metadata-focused approach for cataloging, defining and reusing Big Data technologies, mappings and process executions.
    - Integration between many heterogeneous environments and technologies such as HDFS and Hive.
    - Generation of Hive Query Language.

    Working with Big Data using Knowledge Modules

    ODI12c provides developers with the ability to define sources and targets and visually develop mappings to effect the movement and transformation of data. As the mappings are created, ODI12c leverages a rich library of prebuilt integrations, known as Knowledge Modules (KMs). These KMs are contextual to the technologies and platforms to be integrated. Steps and actions needed to manage the data integration are pre-built and configured within the KMs. The Oracle Data Integrator Application Adapter for Hadoop provides a series of KMs specifically designed to integrate with Big Data technologies. The Big Data KMs include:

    - Check Knowledge Module
    - Reverse Engineer Knowledge Module
    - Hive Transform Knowledge Module
    - Hive Control Append Knowledge Module
    - File to Hive (LOAD DATA) Knowledge Module
    - File-Hive to Oracle (OLH-OSCH) Knowledge Module

    Nothing beats an example:

    To demonstrate the use of the KMs which are part of the ODI Application Adapter for Hadoop, a mapping may be defined to move data between files and Hive targets. The mapping is defined by dragging the source and target into the mapping, performing the attribute (column) mapping (see Figure 1) and then selecting the KM which will govern the process. In this mapping example, movie data is being moved from an HDFS source into a Hive table. Some of the attributes, such as "CUSTID to custid", have been mapped over.

    Figure 1  Defining the Mapping

    Before the proper KM can be assigned to define the technology for the mapping, it needs to be added to the ODI project. The Big Data KMs have been made available to the project through the KM import process. Generally, this is done prior to defining the mapping.

    Figure 2  Importing the Big Data Knowledge Modules

    Following the import, the KMs are available in the Designer Navigator.
    Figure 3  The Project View in Designer, Showing Installed IKMs

    Once the KM is imported, it may be assigned to the mapping target. This is done by selecting the Physical View of the mapping and examining the Properties of the Target. In this case MOVIAPP_LOG_STAGE is the target of our mapping.

    Figure 4  Physical View of the Mapping and Assigning the Big Data Knowledge Module to the Target

    Alternative KMs may have been selected as well, providing flexibility and abstracting the logical mapping from the physical implementation. Our mapping may be applied to other technologies as well. The mapping is now complete and is ready to run. We will see more in a future blog about running a mapping to load Hive.

    To complete this quick ODI for Big Data overview, let us take a closer look at what the IKM File to Hive is doing for us. ODI provides differentiated capabilities by defining, inside the KM, the process and steps which normally would have to be manually developed, tested and implemented. As shown in Figure 5, the KM is preparing the Hive session, managing the Hive tables, performing the initial load from HDFS and then performing the insert into Hive. HDFS and Hive options are selected graphically, as shown in the properties in Figure 4.

    Figure 5  Process and Steps Managed by the KM

    What's Next

    Big Data, being the shape-shifting business challenge it is, is fast evolving into the deciding factor between market leaders and the rest. Now that an introduction to ODI and Big Data has been provided, look for additional blogs coming soon using the Knowledge Modules which make up the Oracle Data Integrator Application Adapter for Hadoop:

    - Importing Big Data Metadata into ODI, Testing Data Stores and Loading Hive Targets
    - Generating Transformations using Hive Query Language
    - Loading Oracle from Hadoop Sources

    For more information now, please visit the Oracle Data Integrator Application Adapter for Hadoop web site: http://www.oracle.com/us/products/middleware/data-integration/hadoop/overview/index.html

    Do not forget to tune in to the ODI12c Executive Launch webcast on the 12th to hear more about ODI12c and GG12c.
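    To make the "Generation of Hive Query Language" point concrete: for a mapping like the one above, a KM such as IKM File to Hive (LOAD DATA) ultimately issues plain HiveQL on the developer's behalf. Conceptually it is something like the following - the table and path names here are invented for illustration; the KM generates the real statements from the mapping metadata and the options set in Figure 4:

        -- initial load from HDFS into the staging table
        LOAD DATA INPATH '/user/odi/movie_data'
        OVERWRITE INTO TABLE movieapp_log_stage;

        -- Hive-side transformation driven by the mapping definition
        INSERT OVERWRITE TABLE movie_target
        SELECT custid, movieid, rating
        FROM movieapp_log_stage
        WHERE rating IS NOT NULL;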

    Read the article

  • Advice for moving from custom domain hosted on blogspot to new custom domain on hosting provider [closed]

    - by Chan
    Need some advice :)

    a) Facts:
    1. Currently, we have a blog www.thebigbigsky.net hosted on blogspot.com. The site has been up for around a year and Google has cached some of the posts.
    2. The idea for setting up the blog was to promote/share articles/write-ups about personal development and self-growth, and also to offer counseling services related to them.
    3. There is another subdomain - chinese.thebigbigsky.net - hosting similar blog posts but in Chinese. This is hosted on blogspot as well.

    b) What we want to do:
    1. We did some research on the internet and understood that, to make a website more popular, it is better to have a domain name that is related to the content of the site. Hence, in our case, a domain name containing "personal development" would be more ideal and easier for people to remember.
    2. Hence, we are thinking to move away from blogspot to a meaningful new domain (example: personaldevelopmentwithxyz.com) and move all contents there. The new domain will be hosted on a hosting provider, e.g. hostgator.com

    c) Here are the steps we have in mind:
    1. Register the new domain (example: personaldevelopmentwithxyz.com)
    2. Get a hosting provider, e.g. hostgator.com
    3. Set up a new site based on Wordpress and import all contents from the existing blog to this new site - personaldevelopmentwithxyz.com.
    4. Switch the current blog back to thebigbigsky.blogspot.com
    5. Switch the DNS of www.thebigbigsky.net to point to the new site on hostgator.com
    6. On hostgator.com, set up a permanent 301 redirect of www.thebigbigsky.net to personaldevelopmentwithxyz.com by following the tutorial mentioned here - http://www.blogbloke.com/migrating-redirecting-blogger-wordpress-htaccess-apache-best-method/ (a minimal sketch of this redirect is shown after this post)

    Questions:
    1. Will it have a major impact on the current Google PageRank or SEO of thebigbigsky.net? As the site is not that popular, we assumed that the impact would not be a major one.
    2. Is the domain personaldevelopmentwithxyz.com considered too long? (For SEO etc.)
    3. Steps c.4 and c.5 above - will this work for our case?
    4. How about the subdomain chinese.thebigbigsky.net? We would like to keep it as a separate blog with a domain like chinese.personaldevelopmentwithxyz.com
    5. We added "facebook comments" to the existing posts on www.thebigbigsky.net and would not want to lose them. Will the comments still be there after we move to the new domain?
    6. Any better idea to do it? :)

    Thank you very much!
    regards,
    -chan
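    For step c.6, a minimal .htaccess sketch of the domain-level 301 redirect (assuming Apache with mod_rewrite on the new host; the linked tutorial adds further rules for mapping Blogger-style /YYYY/MM/post.html URLs to WordPress permalinks):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?thebigbigsky\.net$ [NC]
        RewriteRule ^(.*)$ http://www.personaldevelopmentwithxyz.com/$1 [R=301,L]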

    Read the article

  • Internet Explorer Security: How to Configure Settings

    Before jumping into the steps that are needed to configure Internet Explorer's security settings, let us first take a closer look at the four separate security zones that Microsoft has established for the browser. You will be able to tweak the settings of each of these four zones when we get into the configuration part of this tutorial, so it is best that you learn what they represent first. Internet Explorer Security Zones Internet Zone This Internet Explorer security zone refers to websites that are not on your computer or are not designated to your local intranet, which we will discuss in ...

    Read the article

  • Using Open MQ as an Oracle CEP Event Source

    - by seth.white
    I helped an Oracle CEP customer recently who wanted to use Open MQ as an event source for their Oracle CEP application. In this case, the Oracle CEP application was being used to provide monitoring for an electronic commerce website; however, the steps for configuring Open MQ are entirely independent of the application logic. I thought I would list the configuration steps in a blog post in case they might help others in the future. Note that although the Oracle CEP documentation states that only WebLogic and Tibco JMS are "officially" supported, any JMS implementation that provides a Java client should work with Oracle CEP.

    The first step is to add an adapter to the application's EPN. This can be done in the usual way, using the Eclipse IDE. The end result is something like the following bit of configuration in the application's Spring application context. Note that the provider attribute value of 'jms-inbound' specifies that the out-of-the-box JMS adapter is being used.

        <wlevs:adapter id="helloworldAdapter" provider="jms-inbound">
        </wlevs:adapter>

    Next, configure the inbound adapter in the Oracle CEP configuration file (config.xml) so that it can connect to Open MQ. The snippet below provides an example of what this configuration should look like. The exact values specified for the jndi-provider-url, jndi-factory, connection-jndi-name and destination-jndi-name elements will depend on your Open MQ configuration. For example, if the name of your Open MQ topic destination is 'ElectronicCommerceTopic', then you would specify that as the destination-jndi-name. The name of your Open MQ connection factory goes in the connection-jndi-name element. In my simple example, I also specify an event-type element so that the out-of-the-box JMS adapter will attempt to automatically convert incoming messages to events of type HelloWorldEvent. In a more complex application, one would configure a custom converter on the JMS adapter to convert from messages to events. The Oracle CEP 11.1.3 documentation describes how to do this.

        <jms-adapter>
            <name>helloworldAdapter</name>
            <event-type>HelloWorldEvent</event-type>
            <jndi-provider-url>file:///C:/Temp</jndi-provider-url>
            <jndi-factory>com.sun.jndi.fscontext.RefFSContextFactory</jndi-factory>
            <connection-jndi-name>YourJMSConnectionFactoryName</connection-jndi-name>
            <destination-jndi-name>YourJMSDestinationName</destination-jndi-name>
        </jms-adapter>

    Finally, one needs to package the client-side Open MQ jars so that the classes they contain are available to the Oracle CEP runtime. The recommended way of doing this in the Oracle CEP 11.1.3 release is to package the classes as a library module or simply place them in the application bundle. The advantage of deploying the classes as a library module is that they are available to any application that wants to connect to Open MQ. In my case, I packaged the classes in my application bundle. A best practice when you want to include additional jars in your application bundle is to create a 'lib' directory in your Eclipse project and then copy the required jars into that directory. Then use the support that Eclipse provides to add the jars to the bundle classpath (which makes the classes part of your application in the same way that regular application classes are), and export all of the classes from your application bundle so that they are available to the Oracle CEP server runtime. The screenshot below illustrates how this is done in Eclipse.
    The bundle classpath contains two Open MQ jars and all packages in the jars are exported. Finally, import the javax.jms and javax.naming packages into the application module, as these are needed by the Open MQ classes. The screenshot below shows the complete list of package imports for my sample application. Once you have completed these steps, you should be able to build and deploy your application and begin receiving inbound messages from Open MQ. Technorati Tags: CEP,JMS,Adapter,Open MQ,Eclipse
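    As a side note, if you have not yet bound the connection factory and destination into the file-system JNDI store referenced above, Open MQ's imqobjmgr tool can create them. A hedged sketch follows - the object names and Windows store path mirror the example config, but the broker address is an assumption, so check the flags against your Open MQ version:

        imqobjmgr add -t tf -l "YourJMSConnectionFactoryName" ^
            -o "imqAddressList=mq://localhost:7676/jms" ^
            -j "java.naming.factory.initial=com.sun.jndi.fscontext.RefFSContextFactory" ^
            -j "java.naming.provider.url=file:///C:/Temp"

        imqobjmgr add -t t -l "YourJMSDestinationName" ^
            -o "imqDestinationName=ElectronicCommerceTopic" ^
            -j "java.naming.factory.initial=com.sun.jndi.fscontext.RefFSContextFactory" ^
            -j "java.naming.provider.url=file:///C:/Temp"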

    Read the article

  • Understanding Performance Profiling Targets

    In this sample chapter from his upcoming book, Paul Glavich explains performance metrics and walks us through the steps needed to establish meaningful performance targets. He covers many metrics such as "time to first byte" and explains why you should add some contingency into your estimated performance requirements.

    Read the article

  • What strategy to use when starting in a new project with no documentation?

    - by Amir Rezaei
    What is the best way to proceed when there is no documentation? For example, how do you learn the business rules? I have taken the following steps: Since we are using an ORM tool, I have printed a copy of the database schema, where I can see the relations between objects. I have made a list of short names/table names that I will get explained. The project is a client/server enterprise application using the MVVM pattern.

    Read the article

  • Enabling Code Coverage in Visual Studio 2010

    - by Anthony Trudeau
    You'll quickly find out that enabling code coverage in Visual Studio 2010 has changed.  With the new version you enable this functionality through the test settings.  The following steps will enable code coverage:

    1. Open the local.testsettings file, which you can access from Test -> Edit Test Settings -> Local (local.testsettings)
    2. Select Data and Diagnostics from the list
    3. Select the Enabled checkbox on the Code Coverage row
    4. Double-click the Code Coverage row
    5. Select the assemblies you want to instrument
    6. Specify a re-signing key file if your assemblies are strong-named
    7. Click OK
    8. Click Apply
    9. Click Close
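    For orientation, those dialog steps persist into the local.testsettings XML. The fragment below is a rough reconstruction from memory of the VS2010 schema - element and attribute names may differ in detail, so treat it as an assumption and verify against a file saved by the dialog rather than hand-editing:

        <TestSettings name="Local" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
          <Execution>
            <AgentRule name="LocalMachineDefaultRole">
              <DataCollectors>
                <!-- enabling the Code Coverage row adds this collector -->
                <DataCollector uri="datacollector://microsoft/CodeCoverage/1.0"
                               friendlyName="Code Coverage">
                  <Configuration>
                    <CodeCoverage keyFile="MyKey.snk" xmlns="">
                      <Regular>
                        <!-- one entry per assembly selected for instrumentation -->
                        <CodeCoverageItem binaryFile="bin\Debug\MyApp.dll"
                                          pdbFile="bin\Debug\MyApp.pdb" />
                      </Regular>
                    </CodeCoverage>
                  </Configuration>
                </DataCollector>
              </DataCollectors>
            </AgentRule>
          </Execution>
        </TestSettings>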

    Read the article

  • New Whitepaper: Oracle WebLogic Clustering

    - by ACShorten
    A new whitepaper is available that outlines the concepts and steps for implementing web application server clustering using the Oracle Utilities Application Framework and Oracle WebLogic Server. The whitepaper includes the following:

    - A short discussion of the concepts of clustering
    - How to set up a cluster using Oracle WebLogic's utilities
    - How to configure the Oracle Utilities Application Framework to take advantage of clustering
    - How to deploy Oracle Utilities Application based products in a clustered environment
    - Common cluster operations

    The whitepaper is available from My Oracle Support at Doc Id: 1334558.1.

    Read the article

  • Fundamentals of the Website Building Process

    There are five general steps in the website building process. The first step is to plan the website; the second is to choose the domain name; the third, to choose the web host; the fourth, to build the website; and finally, to upload the website to the internet.

    Read the article

  • Libvirt-php Creating Domain

    - by Alee
    Could you tell me how to create a new domain? From the PHP API Reference guide, http://libvirt.org/php/api-reference.html, I have seen these functions:

    (i) libvirt_image_create($conn, $name, $size, $format)
    (ii) libvirt_domain_new($conn, $name, $arch, $memMB, $maxmemMB, $vcpus, $iso_image, $disks, $networks, $flags)
    (iii) libvirt_domain_create($res)
    (iv) libvirt_domain_create_xml($conn, $xml)

    The problem I am facing is that I don't know the steps to create a new domain. Either I have to create a new image first using libvirt_image_create .... or something else.
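    A hedged sketch of one plausible sequence, based purely on the signatures listed above - the connection URI, names, sizes and array formats are illustrative assumptions, not tested against a real libvirt host:

        <?php
        // Connect to the local hypervisor (URI and read-only flag are assumptions).
        $conn = libvirt_connect('qemu:///system', false);

        // 1. Create a disk image for the new guest (name/size/format assumed).
        $image = libvirt_image_create($conn, 'mydomain.img', 8192, 'raw');

        // 2. Define and start the domain; libvirt_domain_new() builds the
        //    domain XML from these arguments (the disk/network array formats
        //    are assumptions - check the libvirt-php sources or examples).
        $dom = libvirt_domain_new($conn, 'mydomain', 'x86_64', 512, 1024, 1,
                                  '/isos/install.iso',
                                  array('mydomain.img'),
                                  array('default'), 0);

        // Alternative: write the domain XML yourself and create it directly.
        // $dom = libvirt_domain_create_xml($conn, $xml);
        ?>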

    Read the article

  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To proceed with the installation of Wordpress on SQL Server and IIS, first of all you need to do the following steps:

    1. Create a database on SQL Server that will be used by Wordpress
    2. Create a login that can access the just-created database and add the user to the db_ddladmin, db_datareader and db_datawriter roles (a T-SQL sketch for these first two steps appears at the end of this post)
    3. Download and unpack the Wordpress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice
    4. Download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from the wordpress.org website

    Now that the basic actions have been done, you can start to set up and configure your Wordpress installation. Unpack and follow the instructions in the README.TXT file to install the Database Abstraction Layer. Mainly you have to:

    1. Upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins. This should be parallel to your regular plugins directory. If the mu-plugins directory does not exist, you must create it.
    2. Put the db.php file from inside the wp-db-abstraction.php directory into wp-content/db.php

    Now you can create an application pool in IIS like the following one, and create a website, using the above application pool, that points to the folder where you unpacked the Wordpress files. Be sure to give the "Write" permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis

    Now you're ready to go. Point your browser to the configured website and the Wordpress installation screen will be there for you. When you're requested to enter information to connect to the MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the MySQL one, and here you can enter the configuration information needed to connect to SQL Server.

    After having finished the installation steps, you should be able to access and navigate your Wordpress site. A final touch, and it's done: just add the needed rewrite rules (http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite) and that's it!

    Well. Not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be quickly fixed:

    Backslash Fix: http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage

    Select Top 0 Fix: make the change to the file ".\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php" suggested by "debettap": http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061

    And now you have a 100% working Wordpress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full Text Search, I've created a very simple Wordpress plugin to set up full-text search and to use it as the website search engine: http://wpfts.codeplex.com/

    Enjoy!
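    For steps 1 and 2 above, a minimal T-SQL sketch - the database, login and password are placeholders; adjust to your own naming and security policy:

        -- 1. Database for Wordpress
        CREATE DATABASE WordpressDB;
        GO

        -- 2. Login plus database user, added to the roles Wordpress needs
        CREATE LOGIN WordpressUser WITH PASSWORD = 'UseAStrongPasswordHere!';
        GO
        USE WordpressDB;
        CREATE USER WordpressUser FOR LOGIN WordpressUser;
        EXEC sp_addrolemember 'db_ddladmin',   'WordpressUser';
        EXEC sp_addrolemember 'db_datareader', 'WordpressUser';
        EXEC sp_addrolemember 'db_datawriter', 'WordpressUser';
        GO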

    Read the article

  • ASP.NET 4.0 and the Entity Framework 4 - Part 5 - Using the GridView and the EntityDataSource

    In this article, Vince demonstrates the usage of the GridView control to view, add, update, and delete records using the Entity Framework 4. After providing a short introduction, he provides the steps required to create a web site, entity data model, web form and template fields with the help of relevant source code and screenshots.

    Read the article

  • USB drives not recognized all of a sudden (usb_storage not loaded lsmod does not report usb_storage)

    - by Siddharth
    I have tried most of the advice on askubuntu and other sites, from enabling usb_storage to fdisk -l, but I am unable to find steps to get it working again.

    sudo lsusb results:

        Bus.... skipped 4 lines
        Bus 004 Device 002: ID 413c:3012 Dell Computer Corp. Optical Wheel Mouse
        Bus 005 Device 002: ID 413c:2105 Dell Computer Corp. Model L100 Keyboard
        Bus 001 Device 005: ID 8564:1000

    sudo dmesg | tail reports:

        [ 69.567948] usb 1-4: USB disconnect, device number 4
        [ 74.084041] usb 1-6: new high-speed USB device number 5 using ehci_hcd
        [ 74.240484] Initializing USB Mass Storage driver...
        [ 74.256033] scsi5 : usb-storage 1-6:1.0
        [ 74.256145] usbcore: registered new interface driver usb-storage
        [ 74.256147] USB Mass Storage support registered.
        [ 74.257290] usbcore: deregistering interface driver usb-storage

    fdisk -l reports:

        Device Boot Start End Blocks Id System
        /dev/sda1 * 2048 972656639 486327296 83 Linux
        /dev/sda2 972658686 976771071 2056193 5 Extended
        /dev/sda5 972658688 976771071 2056192 82 Linux swap / Solaris

    I think I need steps to install and get the usb_storage module working.

    Edit: I tried sudo modprobe -v usb-storage, which reports:

        insmod /lib/modules/3.2.0-48-generic-pae/kernel/drivers/usb/storage/usb-storage.ko

    Edit:

        jsiddharth@siddharth-desktop:~$ sudo udevadm monitor --udev
        monitor will print the received events for:
        UDEV - the event which udev sends out after rule processing
        UDEV [4757.144372] add /module/usb_storage (module)
        UDEV [4757.146558] remove /module/usb_storage (module)
        UDEV [4757.148707] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6 (usb)
        UDEV [4757.149699] add /bus/usb/drivers/usb-storage (drivers)
        UDEV [4757.151214] remove /bus/usb/drivers/usb-storage (drivers)
        UDEV [4757.156873] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0 (usb)
        UDEV [4757.160903] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi)
        UDEV [4757.164672] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host)
        UDEV [4757.165163] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host)
        UDEV [4757.165440] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi)

    Narrowing down more: it seems like I need usb_storage to load as a module.

        jsiddharth@siddharth-desktop:~$ lsmod | grep usb
        usbserial 37201 0
        usbhid 41937 0
        hid 77428 1 usbhid

    Still no usb driver mounted, nor does a device show up in /dev. Any step-by-step process to debug and fix this will be really helpful.

    Read the article

  • Server Migration Checklist II

    - by merrillaldrich
    Easy Breezy Login Audit for your Ol’ 2000 Server In the last post on this topic I put up the preparatory steps I’ve been using for server migrations. Here I am posting some code that has worked well for us to trace who/what is connecting to our older SQL Server 2000 machines. It’s a simple audit of login events, tracing the login name, host name, database, and last login time for connections to the server, and gave us valuable insight into who was really using the machines and which databases might...(read more)
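    The post's actual script is behind the "(read more)" link, but as a rough illustration of the approach it describes, a server-side trace capturing login events on SQL Server 2000 might look something like this - event and column IDs are from the sp_trace documentation (14 = Audit Login; columns 8/11/14/35 = HostName/LoginName/StartTime/DatabaseName); treat it as an untested sketch, not the author's code:

        -- Minimal sketch: server-side trace of Audit Login events
        DECLARE @TraceID int, @maxfilesize bigint, @on bit
        SET @maxfilesize = 50
        SET @on = 1
        EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\login_audit', @maxfilesize

        -- Capture host name, login name, start time and database name
        EXEC sp_trace_setevent @TraceID, 14, 8,  @on
        EXEC sp_trace_setevent @TraceID, 14, 11, @on
        EXEC sp_trace_setevent @TraceID, 14, 14, @on
        EXEC sp_trace_setevent @TraceID, 14, 35, @on

        -- Start the trace
        EXEC sp_trace_setstatus @TraceID, 1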

    Read the article

  • Informed TDD &ndash; Kata &ldquo;To Roman Numerals&rdquo;

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I like to respond with this article to his questions. There's more to say than fits into a commentary.

    Mocks and TDD

    I don't see how TDD avoids or opposes mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks. Because you don't want to constantly run all your tests against the real resource.

    True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion.

    But if you think of mocks as "advanced" or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear.

    If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which not only should hold for software functional units but also for tasks or activities.

    In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions.

    Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking roman numerals from a wikipedia article.
    They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

    Before you code you better have a plan what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic. Each "digit" can be taken at face value. No multiplication with a base required.

    But what about IV, which looks like a contradiction to the above rule? It is not – if you accept that roman "digits" are not limited to single characters. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some "digits" get subtracted. Here's the list of roman "digits" with their values:

    {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-step process:

    1. Find all the factors
    2. Translate the factors found
    3. Compile the roman representation

    Translation is just a look-up.
    Finding, though, needs some calculation:

    1. Find the highest remaining factor fitting in the value
    2. Remember it and subtract it from the value
    3. Repeat with the remaining value and the remaining factors

    Please note: This is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:

    - Test the translation
    - Test the compilation
    - Test finding the factors

    Testing the translation primarily means to check if the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps:

    1. First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M).
    2. Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly.
    3. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]).
    4. Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly.

    I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one.

    I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes. That's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here's the implementation to satisfy the test: it's as simple as possible. Right how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it into two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important.

    Now for the next low hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose…

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on.

    Now for more than a single factor: Interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion, of which I had thought briefly during the problem solving phase. And by the way: the acceptance tests went green too. Mission accomplished. At least functionality-wise.

    Now I have to tidy up things a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it "cleanup".

    First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single "digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That's the leaves of the functional decomposition tree.
    Where things became fuzzy, because the design did not cover any more details (as with Find_factors()), I repeated the process in the small, so to speak: fake some top level, endure red high-level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages:

    - Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them.
    - I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code.

    Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for different reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
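    Since the final listings appear in this excerpt only as screenshots, here is a compact C# sketch of the conversion the article arrives at - my own reconstruction from the algorithm described above, not the author's code:

        using System.Text;

        public static class ArabicRomanConversions
        {
            // Roman "digits" in descending order, with the subtractive pairs
            // (CM, XC, IX, ...) treated as digits of their own, as proposed above.
            private static readonly int[] Values =
                { 1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1 };
            private static readonly string[] Digits =
                { "M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I" };

            public static string ToRoman(int arabic)
            {
                var roman = new StringBuilder();
                for (var i = 0; i < Values.Length; i++)
                {
                    // Find the highest remaining factor fitting in the value,
                    // translate it, subtract it, repeat.
                    while (arabic >= Values[i])
                    {
                        roman.Append(Digits[i]);
                        arabic -= Values[i];
                    }
                }
                return roman.ToString();
            }
        }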

    Read the article

  • What non-programming tools do programmers use?

    - by user828584
    I'm reading Code Complete with the intention of learning how to better structure my code, but I'm also learning how many aspects there are to programming something beyond just writing the code. The book talks a lot about problem definition, determining the requirements, defining the structure, designing the code, etc. What tools are used for these non-writing steps of programming? Is there software that will help me design and plan out what I'm going to write before I do?

    Read the article

  • adding custom SSIS transformation to visual studio toolbox fails

    - by ryangaraygay
    Just very recently I encountered an issue in deploying a custom SSIS component assembly which turned out to be a relative "no-brainer" error, if only the clues had been more straightforward. Basically, after deploying the assembly I could not find my component listed in the "SSIS Data Flow Items" tab list. It turns out that the assembly containing the component was missing or referencing the incorrect assemblies. I have outlined the steps I took that guided me in the right direction in this blog post of mine: adding custom SSIS transformation to visual studio toolbox fails

    Read the article

  • Free E-Book from APress - Platform Embedded Security Technology Revealed

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/23/free-e-book-from-apress---platform-embedded-security-technology-revealed.aspxAt  http://www.apress.com/9781430265719, APress are providing a free E-Book - Platform Embedded Security Technology Revealed. “Platform Embedded Security Technology Revealed is an in-depth introduction to Intel’s security and management engine, with details on the security features and the steps for configuring and invoking them. It's written for security professionals and researchers; embedded-system engineers; and software engineers and vendors.”

    Read the article

  • Google adsense bombing

    - by tereško
    I made a website a few months ago and put Google AdSense on it, but someone kept clicking on the ads with bad intentions - ad click bombing, I guess - and as a result Google disabled my AdSense account. Now I wonder how this all happened and what steps I should take in future to stop such attacks. Are there any tricks or tips to protect your website from invalid-click bombing? What methods do attackers normally use to bomb AdSense on a website?

    Read the article

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop I used the BizTalk Server Best Practice Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs, open SQL Server Management Studio, expand SQL Server Agent > Jobs and double click on the appropriate job. Select Steps and then edit the appropriate entries.

    Backup BizTalk Server (BizTalkMgmtDb)

    This job is comprised of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory.

    BackupFull:

        exec [dbo].[sp_BackupAllFull_Schedule]
            'd' /* Frequency */,
            'BTS' /* Name */,
            '<destination path>' /* location of backup files */

    The frequency here is set/left as daily and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC time). For example:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd', 'BTS', '<destination path>', 0, 22

    MarkAndBackupLog:

        exec [dbo].[sp_MarkAll]
            'BTS' /* Log mark name */,
            '<destination path>' /* location of backup files */

    You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time:

        exec [dbo].[sp_MarkAll] 'BTS', '<destination path>', 1

    ClearBackupHistory:

        exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7

    This will clear out the instances in the MarkLog table older than 7 days.

    DTA Purge and Archive (BizTalkDTADb)

    This job is comprised of a single step, Archive and Purge:

        exec dtasp_BackupAndPurgeTrackingDatabase
            0,    --@nLiveHours tinyint,
            1,    --@nLiveDays tinyint = 0,
            30,   --@nHardDeleteDays tinyint = 0,
            null, --@nvcFolder nvarchar(1024) = null,
            null, --@nvcValidatingServer sysname = null,
            0     --@fForceBackup int = 0

    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than the HardDeleteDays will be deleted - this means that those long running orchestration instances that would otherwise never be purged will at some point have their data cleared down, while allowing the instance to continue, thus preventing the DTA database from growing indefinitely. This should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if this is null the job will not run, failing with the error:

        DTA Purge and Archive (BizTalkDTADb) Job failed
        SQL Server Management Studio, job activity monitor, view history
        The @nvcFolder parameter cannot be null. Archive and Purge step

    How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0

    On a live server you may want to adjust these figures:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

    Read the article

  • CRM Is A Long Term Strategy

    - by ruth.donohue
    With the array of CRM solutions out there, it's sometimes easy to forget that CRM is more than just technology with fancy bells and whistles -- it's a long-term strategy that involves people and processes as well. The Wise Marketer summarizes a Gartner report outlining three key steps necessary to create and execute a successful CRM strategy that is linked with overall corporate strategy.

    Read the article
