Search Results

Search found 11015 results on 441 pages for 'explain plan'.


  • SQL SERVER – Weekly Series – Memory Lane – #031

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across the years. Instead of listing every article, I have picked a few of my favorites and added notes below each one. Let me know which article from memory lane is your favorite.

    2007
    Find Table without Clustered Index – Find Table with no Primary Key
    A clustered index is a very important concept for any table and impacts performance heavily. Here is a quick script to find tables without a clustered index.
    Replace TEXT with VARCHAR(MAX) – Stop using TEXT, NTEXT, IMAGE Data Types
    Question: "Is VARCHAR(MAX) big enough to store the TEXT field?" Answer: "Yes, VARCHAR(MAX) is big enough to accommodate a TEXT field. The TEXT, NTEXT and IMAGE data types of SQL Server 2000 will be deprecated in a future version of SQL Server. SQL Server 2005 provides backward compatibility for these data types, but it is recommended to use the new data types: VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX)."
    Limiting Result Sets by Using TABLESAMPLE – Examples
    Introduced in SQL Server 2005, TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random and are not in any order. The sampling can be based on a percentage or a number of rows. Use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set.
    User Defined Functions (UDF) Limitations
    UDFs have their own advantages and usage, but in this article we look at their limitations: things a UDF cannot do and why stored procedures are considered more flexible than UDFs. This blog post is a good read if you want to know the limitations of UDFs.
    Change Database Compatible Level – Backward Compatibility
    For a long time SQL Server stayed on compatibility level 80, which is that of SQL Server 2000. As soon as SQL Server 2005 was introduced, compatibility became a major issue, and since MS has been releasing a new version every 2-3 years, changing the compatibility level is an ever-popular topic. In this blog post we learn how to do it using T-SQL. The same can also be done using SSMS; here is the blog post for that: Change Database Compatible Level – Backward Compatibility – Part 2 – Management Studio.
    Constraint on VARCHAR(MAX) Field To Limit It Certain Length
    "How can I limit a VARCHAR(MAX) field to a maximum length of 12500 characters only?" The question was valid, as the application allowed 12500 characters. The requirement is a bit strange, but if someone wants to do the same, they can do it as described in this blog post.

    2008
    UNPIVOT Table Example
    Understanding UNPIVOT can be very complicated at times. In this blog post I have attempted to explain the concept in very simple words.
    Create Default Constraint Over Table Column
    A simple, straight-to-script blog post – I still use it quite often for my own reference.
    UDF – Get the Day of the Week Function
    It took me 4 iterations to arrive at this very simple function, which returns the day of the week in a single line.

    2009
    Find Hostname and Current Logged In User Name
    There are two tricks listed in this blog post with which users can find the hostname and the currently logged-in user name immediately and very easily.
    Interesting Observation of Logon Trigger On All Servers
    While working on a project I made an interesting observation: a logon trigger executing multiple times. It was absolutely unexpected. As I was logging in only once, I naturally expected the entry only once; however, it fired multiple times on different threads – indeed an eccentric phenomenon at first sight.
    Difference Between Candidate Keys and Primary Key
    One needs to be very careful in selecting the Primary Key, as an incorrect selection can adversely impact the database architecture and future normalization. For a Candidate Key to qualify as a Primary Key, it should be non-NULL and unique in any domain. I have observed quite often that Primary Keys are seldom changed; I would like to have your feedback on not changing a Primary Key.
    Create Multiple Filegroup For Single Database
    Why should one create multiple filegroups for a database, and what are the advantages? In this blog post I explain this in detail.
    List All Objects Created on All Filegroups in Database
    In this blog post we discuss the essential question: "How can I find which object belongs to which filegroup? Is there any way to know this?"

    2010
    DATE and TIME in SQL Server 2008
    When DATE is converted to DATETIME it adds the time of midnight. When TIME is converted to DATETIME it adds the date of 1900. This is something to consider if you are going to run scripts with CONVERT from SQL Server 2008 against an earlier version.
    Disabled Index and Update Statistics
    If you do not need a nonclustered index, I suggest you drop it, as keeping it disabled is an overhead on your system. Every time statistics are updated for the system, the statistics for disabled indexes are also updated.
    Precision of SMALLDATETIME – A 1 Minute Precision
    The precision of the SMALLDATETIME datatype is 1 minute. It discards the seconds by rounding up or rounding down any seconds greater than zero.

    2011
    Getting Columns Headers without Result Data – SET FMTONLY ON
    SET FMTONLY ON returns only metadata to the client. It can be used to test the format of the response without actually running the query. When this setting is ON, the result set contains only the column headers and no data.
    Copy Database from Instance to Another Instance – Copy Paste in SQL Server
    SQL Server has a feature which copies a database from one instance to another, and it can be automated using SSIS. Make sure SQL Server Agent is turned on, as this feature will create a job.
    Puzzle – SELECT * vs SELECT COUNT(*)
    If you have ever wondered why SELECT * gives an error when executed alone but SELECT COUNT(*) does not, you should read this blog post.
    Creating All New Database with Full Recovery Model
    This blog post is based on a very interesting story where the user wants something to happen by default for every new database created. The model database is a secret weapon which should be used carefully and with proper evaluation. Used carefully, it can be very beneficial when we need every newly created database to behave in a certain fashion.

    2012
    In 2012 I ran two interesting series on the blog. If there is no fun in learning, the learning becomes a burden. For that reason I decided to build a three-part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz.
    Guess the Next Value – Puzzle 1
    Guess the Next Value – Puzzle 2
    Guess the Next Value – Puzzle 3
    Can anyone remember their final day of schooling? This is probably a silly question because – of course you can! Many people mark this as the most exciting, happiest day of their life. It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Read the five-part series on the subject of developer training:
    Developer Training – Importance and Significance – Part 1
    Developer Training – Employee Morals and Ethics – Part 2
    Developer Training – Difficult Questions and Alternative Perspective – Part 3
    Developer Training – Various Options for Developer Training – Part 4
    Developer Training – A Conclusive Summary – Part 5
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Computer Networks UNISA - Chap 10 – In Depth TCP/IP Networking

    - by MarkPearl
    After reading this section you should be able to:
    - Understand methods of network design unique to TCP/IP networks, including subnetting, CIDR, and address translation
    - Explain the differences between public and private TCP/IP networks
    - Describe protocols used between mail clients and mail servers, including SMTP, POP3, and IMAP4
    - Employ multiple TCP/IP utilities for network discovery and troubleshooting

    Designing TCP/IP-Based Networks
    The following sections explain how network and host information in an IPv4 address can be manipulated to subdivide networks into smaller segments.

    Subnetting
    Subnetting separates a network into multiple logically defined segments, or subnets. Networks are commonly subnetted according to geographic locations, departmental boundaries, or technology types. A network administrator might separate traffic to accomplish the following:
    - Enhance security
    - Improve performance
    - Simplify troubleshooting

    The challenges of classful addressing in IPv4 (no subnetting)
    The simplest type of IPv4 addressing is known as classful addressing (the Class A, Class B and Class C network addresses). Classful addressing has the following limitations:
    - Restriction in the number of usable IPv4 addresses (a Class C network is limited to 254 addresses)
    - Difficulty separating traffic from various parts of a network
    Because of these limitations, subnetting was introduced.

    IPv4 Subnet Masks
    Subnetting depends on the use of subnet masks to identify how a network is subdivided. A subnet mask indicates where network information is located in an IPv4 address. A 1 in a subnet mask indicates that the corresponding bit in the IPv4 address contains network information (likewise, a 0 indicates host information). Each network class is associated with a default subnet mask:
    - Class A = 255.0.0.0
    - Class B = 255.255.0.0
    - Class C = 255.255.255.0
    An example of calculating the network ID for a particular device with a subnet mask is shown below (a short code sketch of this calculation appears at the end of this section):
    IP Address = 199.34.89.127
    Subnet Mask = 255.255.255.0
    Resultant Network ID = 199.34.89.0

    IPv4 Subnetting Techniques
    Subnetting breaks the rules of classful IPv4 addressing. Read page 490 for a detailed explanation.
    Calculating IPv4 Subnets
    Read pages 491-494 for an explanation.
    Important: subnetting only applies to the devices internal to your network. Everything external looks at the class of the IP address instead of the subnet network ID. This way, traffic directed to your network externally still knows where to go, and once it has entered your internal network it can be prioritized and segmented.

    CIDR (Classless Interdomain Routing)
    CIDR is also known as classless routing or supernetting. In CIDR, conventional network class distinctions do not exist; a subnet boundary can move to the left, thereby generating more usable IP addresses on your network. A subnet created by moving the subnet boundary to the left is known as a supernet. With CIDR also came new shorthand for denoting the position of subnet boundaries, known as CIDR notation or slash notation. CIDR notation takes the form of the network ID followed by a forward slash (/) followed by the number of bits that are used for the extended network prefix. To take advantage of classless routing, your network's routers must be able to interpret IP addresses that don't adhere to conventional network class parameters. Routers that rely on older routing protocols (e.g. RIP) are not capable of interpreting classless IP addresses.
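    The network ID calculation in the worked example above is just a bitwise AND of the address and the mask. The following minimal C# sketch (not part of the original chapter summary) reproduces that example; the address and mask values are the ones used above.

        using System;
        using System.Net;

        class SubnetDemo
        {
            static void Main()
            {
                // Values from the worked example above
                IPAddress address = IPAddress.Parse("199.34.89.127");
                IPAddress mask    = IPAddress.Parse("255.255.255.0");

                byte[] addressBytes = address.GetAddressBytes();
                byte[] maskBytes    = mask.GetAddressBytes();

                // The network ID is the bitwise AND of the address and the subnet mask
                byte[] networkBytes = new byte[addressBytes.Length];
                for (int i = 0; i < addressBytes.Length; i++)
                    networkBytes[i] = (byte)(addressBytes[i] & maskBytes[i]);

                // Prints 199.34.89.0; in CIDR (slash) notation this network is 199.34.89.0/24,
                // because the mask 255.255.255.0 uses 24 bits for the network prefix.
                Console.WriteLine(new IPAddress(networkBytes));
            }
        }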
    Internet Gateways
    Gateways are a combination of software and hardware that enable two different network segments to exchange data. A gateway facilitates communication between different networks or subnets. Because one device cannot send data directly to a device on another subnet, a gateway must intercede and hand off the information. Every device on a TCP/IP-based network has a default gateway: the gateway that first interprets its outbound requests to other subnets, and then interprets its inbound requests from other subnets. The Internet contains a vast number of routers and gateways. If each gateway had to track addressing information for every other gateway on the Internet, it would be overtaxed. Instead, each handles only a relatively small amount of addressing information, which it uses to forward data to another gateway that knows more about the data's destination. The gateways that make up the Internet backbone are called core gateways.

    Address Translation
    An organization's default gateway can also be used to "hide" the organization's internal IP addresses and keep them from being recognized on a public network. A public network is one that any user may access with little or no restrictions. On private networks, hiding IP addresses allows network managers more flexibility in assigning addresses. Clients behind a gateway may use any IP addressing scheme, regardless of whether it is recognized as legitimate by the Internet authorities, but as soon as those devices need to go on the Internet, they must have legitimate IP addresses to exchange data. When a client's transmission reaches the default gateway, the gateway opens the IP datagram and replaces the client's private IP address with an Internet-recognized IP address. This process is known as NAT (Network Address Translation).

    TCP/IP Mail Services
    All Internet mail services rely on the same principles of mail delivery, storage, and pickup, though they may use different types of software to accomplish these functions. Email servers and clients communicate through special TCP/IP application layer protocols. These protocols, all of which operate on a variety of operating systems, are discussed below.

    SMTP (Simple Mail Transfer Protocol)
    The protocol responsible for moving messages from one mail server to another over TCP/IP-based networks. SMTP belongs to the application layer of the OSI model and relies on TCP as its transport protocol.
    - Operates from port 25 on the SMTP server
    - A simple sub-protocol, incapable of doing anything more than transporting mail or holding it in a queue

    MIME (Multipurpose Internet Mail Extensions)
    The standard message format specified by SMTP allows for lines that contain no more than 1000 ASCII characters, meaning that if you relied solely on SMTP you would have very short messages and nothing like pictures included in an email. MIME is a standard for encoding and interpreting binary files, images, video, and non-ASCII character sets within an email message. MIME identifies each element of a mail message according to content type. MIME does not replace SMTP but works in conjunction with it. Most modern email clients and servers support MIME.
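    As a small illustration of how SMTP and MIME work together, the hedged C# sketch below sends a message with a MIME-typed attachment through an SMTP server on port 25. The server name, addresses, and file name are placeholders, not values from the text.

        using System.Net.Mail;

        class MailDemo
        {
            static void Main()
            {
                // Hypothetical sender, recipient and mail server, for illustration only
                var message = new MailMessage("sender@example.com", "recipient@example.com")
                {
                    Subject = "MIME example",
                    Body = "SMTP moves this message between servers; MIME describes the attachment."
                };

                // The attachment is labelled with a MIME content type so the receiving
                // client knows how to interpret the binary data
                message.Attachments.Add(new Attachment("photo.jpg", "image/jpeg"));

                // SMTP servers listen on TCP port 25 by default
                using (var client = new SmtpClient("mail.example.com", 25))
                {
                    client.Send(message);
                }
            }
        }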
    POP (Post Office Protocol)
    - POP is an application layer protocol used to retrieve messages from a mail server
    - POP3 relies on TCP and operates over port 110
    - With POP3, mail is delivered to and stored on a mail server until it is downloaded by a user
    - A disadvantage of POP3 is that it typically does not allow users to save their messages on the server; because of this, IMAP is sometimes used instead

    IMAP (Internet Message Access Protocol)
    - IMAP is a retrieval protocol that was developed as a more sophisticated alternative to POP3
    - The single biggest advantage IMAP4 has over POP3 is that users can store messages on the mail server, rather than having to continually download them
    - Users can retrieve all or only a portion of any mail message
    - Users can review their messages and delete them while the messages remain on the server
    - Users can create sophisticated methods of organizing messages on the server
    - Users can share a mailbox in a central location
    - Disadvantages of IMAP are typically related to the fact that it requires more storage space on the server

    Additional TCP/IP Utilities
    Nearly all TCP/IP utilities can be accessed from the command prompt on any type of server or client running TCP/IP. The syntax may differ depending on the OS of the client. Below is a list of additional TCP/IP utilities – research their use on your own!
    - Ipconfig (Windows) & Ifconfig (Linux)
    - Netstat
    - Nbtstat
    - Hostname, Host & Nslookup
    - Dig (Linux)
    - Whois (Linux)
    - Traceroute (Tracert)
    - Mtr (my traceroute)
    - Route


  • Combining Shared Secret and Certificates

    - by Michael Stephenson
    As discussed in the introduction article, this walkthrough explains how you can implement WCF security with the Windows Azure Service Bus so that you can protect your endpoint in the cloud with a shared secret, but also combine this with certificates so that you can identify the sender of the message.

    Prerequisites
    As in the previous article, before going into the walkthrough I want to explain a few assumptions about the scenario we are implementing, but to keep the article shorter I am not going to walk through all of the setup steps. In the solution we have a simple console application which represents the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF Service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article; however, if you want to understand more about how to configure WCF and IIS for such a scenario, please refer to the following paper, which goes into a lot of detail about how to configure this: http://tinyurl.com/8s5nwrz

    Setting up the Certificates
    To keep the post and sample simple I am going to use the local computer store for all certificates, but this bit is really just the same as setting up certificates for an example where you are using WCF without the Windows Azure Service Bus. In the sample I have included two batch files which you can use to create the sample certificates or remove them. Basically you will end up with:
    - A certificate called PocServerCert in the personal store for the local computer, which will be used by the WCF Service component
    - A certificate called PocClientCert in the personal store for the local computer, which will be used by the client application
    - A root certificate in the Root store called PocRootCA with its associated revocation list, which is the root from which the client and server certificates were created
    For the sample I'm just using development certificates like you normally would, and you can see exactly how these are configured and placed in the stores from the batch files in the solution using makecert and certmgr.

    The Service Component
    To begin with, let's look at the service component and how it can be configured to listen to the service bus using a shared secret but also accept a certificate-based token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.Cert.Services. It has a single service, based on the Visual Studio template you get when you add a new WCF Service Application, so we have a service called Service1 with its Echo method. Nothing special so far!
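    The original post shows the service only as screenshots, but a service with an Echo operation is roughly the following (a minimal sketch, not the exact code from the sample):

        using System.ServiceModel;

        [ServiceContract]
        public interface IService1
        {
            [OperationContract]
            string Echo(string value);
        }

        public class Service1 : IService1
        {
            // Echo the caller's input back so the round trip through the
            // Azure Service Bus relay is easy to verify.
            public string Echo(string value)
            {
                return "You said: " + value;
            }
        }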
    The next step is to look at the web.config file to see how the WCF service is configured. In the services section of the WCF configuration I have created my service and a local endpoint, which I simply used for a little diagnostics and to check it was working; more importantly, there is the Windows Azure endpoint, which uses the ws2007HttpRelayBinding (this should also work just the same if you are using netTcpRelayBinding). The key points to note are the service behavior called MyServiceBehaviour and the service bus endpoint's behavior called MyEndpointBehaviour. We will go into these in more detail below.

    The Relay Binding
    The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit: the transport security relates to the interaction between the service and the Azure Service Bus it is listening to, and the message credential is where we use our certificate, as specified in the message/clientCredentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus, and messages will not be accepted from any sender who has not been authenticated by ACS.

    The Endpoint Behaviour
    The endpoint behavior is configured to use the shared secret client credential for accessing the service bus, and for diagnostic purposes I have also included the service registry element. If you are familiar with the Windows Azure Service Bus relay feature this should look very familiar; it is a very common setup for this section. There is nothing specific to the certificate implementation here.

    The Service Behaviour
    Now we come to the part with most of the certificate configuration in it. In the service behavior I have included the serviceCredentials element, set it up to use the clientCertificate check, and specified the serviceCertificate with information on how to find the server's certificate in the store. I have also added a serviceAuthorization section where I will plug in my own authorization component to perform additional security checks after the service has validated that the message was signed with a good certificate. I also have the usual serviceSecurityAudit configuration to log access to my service.

    My Authorization Manager
    WCF will eventually hand off the message to my authorization component before it calls the service code. This is where I can perform some logic to check whether the identity is allowed to access resources. In this case I am simply rejecting messages from anyone except the PocClientCertificate.
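    The original article only shows the authorization manager as an image, so its exact logic is not reproducible here; the following is a rough sketch of a ServiceAuthorizationManager along the lines described, with the certificate check simplified to a subject-name match on the PocClientCert name.

        using System.ServiceModel;

        public class PocAuthorizationManager : ServiceAuthorizationManager
        {
            protected override bool CheckAccessCore(OperationContext operationContext)
            {
                // The primary identity is established from the client certificate that
                // signed the message (clientCredentialType="Certificate").
                var identity = operationContext.ServiceSecurityContext.PrimaryIdentity;

                // Reject messages from anyone except the expected client certificate.
                // A production check would validate the thumbprint rather than the name.
                return identity != null && identity.Name.Contains("PocClientCert");
            }
        }

    A class like this is wired up through the serviceAuthorizationManagerType attribute of the serviceAuthorization element mentioned above.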
    The Client
    Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a certificate over to the service component so it can implement additional security checks on-premise. I have a console application, and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. In my WCF client configuration I have set up the details for the Azure Service Bus URL and am using the ws2007HttpRelayBinding.

    Next is the configuration for the relay binding. Security is configured to use TransportWithMessageCredential, so we will flow the token from a certificate with the message, and the relayClientAuthenticationType is RelayAccessToken, which means the component will validate against ACS before being allowed to access the relay endpoint to send a message. After the binding we need to configure the endpoint behavior. This contains the normal transportClientEndpointBehaviour to set up the ACS shared secret configuration, but we have also configured the clientCertificate to look for the PocClientCert.

    Finally, we have the code of the client in the console application which calls the service bus. We create our proxy and then make a normal WCF call in exactly the usual way, but the configuration steps in and ensures that a token representing the client certificate is passed with the message.

    Conclusion
    As you can see from the above walkthrough, it is not too difficult to configure a service to use both a shared secret and a certificate-based token at the same time. This gives you the power and protection offered by the Access Control Service in the cloud, but also the ability to flow additional tokens to the on-premise component so additional security features can be implemented.

    Sample
    The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.Cert.zip


  • JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes

    - by John-Brown.Evans
    This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:
    - JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    - JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    - JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    - JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
    - JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue
    This example leads you through the creation of an Oracle database Advanced Queue and the related WebLogic server objects in order to use AQ JMS in connection with a SOA composite. If you have not already done so, I recommend you look at the previous posts in this series, as they include steps which this example builds upon. The following examples will demonstrate how to write to and read from the queue from a SOA process.

    1. Recap and Prerequisites
    In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we wrote and deployed BPEL composites, which enqueued and dequeued a simple XML payload.
    AQ JMS allows you to interoperate with database Advanced Queueing via JMS in WebLogic Server and therefore take advantage of database features, while maintaining compliance with the JMS architecture. AQ JMS uses the WebLogic JMS Foreign Server framework. A full description of this functionality can be found in the following Oracle documentation: Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server 11g Release 1 (10.3.6), Part Number E13738-06, Chapter 7 "Interoperating with Oracle AQ JMS", http://docs.oracle.com/cd/E23943_01/web.1111/e13738/aq_jms.htm#CJACBCEJ
    For easier reference, this sample uses the same names for the objects as in the above document, except for the name of the database user, as it is possible that this user already exists in your database. We will create the following objects.

    Database objects:
    Name          Type
    AQJMSUSER     Database User
    MyQueueTable  Advanced Queue (AQ) Table
    UserQueue     Advanced Queue

    WebLogic Server objects:
    Name                                  Type                                   JNDI Name
    aqjmsuserDataSource                   Data Source                            jdbc/aqjmsuserDataSource
    AqJmsModule                           JMS System Module
    AqJmsForeignServer                    JMS Foreign Server
    AqJmsForeignServerConnectionFactory   JMS Foreign Server Connection Factory  AqJmsForeignServerConnectionFactory
    AqJmsForeignDestination               AQ JMS Foreign Destination             queue/USERQUEUE
    eis/aqjms/UserQueue                   Connection Pool                        eis/aqjms/UserQueue

    2. Create a Database User and Advanced Queue
    The following steps can be executed in the database client of your choice, e.g. JDeveloper or SQL Developer. The examples below use SQL*Plus. Log in to the database as a DBA user, for example SYSTEM or SYS. Create the AQJMSUSER user and grant the privileges needed to create AQ objects.

    Create the database user and grant AQ privileges:

        sqlplus system/password as SYSDBA
        GRANT connect, resource TO aqjmsuser IDENTIFIED BY aqjmsuser;
        GRANT aq_user_role TO aqjmsuser;
        GRANT execute ON sys.dbms_aqadm TO aqjmsuser;
        GRANT execute ON sys.dbms_aq TO aqjmsuser;
        GRANT execute ON sys.dbms_aqin TO aqjmsuser;
        GRANT execute ON sys.dbms_aqjms TO aqjmsuser;

    Create the queue table and Advanced Queue and start the AQ. The following commands are executed as the aqjmsuser database user.

    Create the queue table:

        connect aqjmsuser/aqjmsuser;
        BEGIN
          dbms_aqadm.create_queue_table (
            queue_table        => 'myQueueTable',
            queue_payload_type => 'sys.aq$_jms_text_message',
            multiple_consumers => false );
        END;
        /

    Create the AQ:

        BEGIN
          dbms_aqadm.create_queue (
            queue_name  => 'userQueue',
            queue_table => 'myQueueTable' );
        END;
        /

    Start the AQ:

        BEGIN
          dbms_aqadm.start_queue ( queue_name => 'userQueue' );
        END;
        /

    The above commands can be executed in a single PL/SQL block, but are shown as separate blocks in this example for ease of reference. You can verify the queue by executing the SQL command

        SELECT object_name, object_type FROM user_objects;

    which should display the following objects:

        OBJECT_NAME                    OBJECT_TYPE
        ------------------------------ -------------------
        SYS_C0056513                   INDEX
        SYS_LOB0000170822C00041$$      LOB
        SYS_LOB0000170822C00040$$      LOB
        SYS_LOB0000170822C00037$$      LOB
        AQ$_MYQUEUETABLE_T             INDEX
        AQ$_MYQUEUETABLE_I             INDEX
        AQ$_MYQUEUETABLE_E             QUEUE
        AQ$_MYQUEUETABLE_F             VIEW
        AQ$MYQUEUETABLE                VIEW
        MYQUEUETABLE                   TABLE
        USERQUEUE                      QUEUE

    Similarly, you can view the objects in JDeveloper via a Database Connection to the AQJMSUSER.

    3. Configure WebLogic Server and Add JMS Objects
    All these steps are executed from the WebLogic Server Administration Console. Log in as the weblogic user.

    Configure a WebLogic Data Source
    The data source is required for the database connection to the AQ created above.
    Navigate to domain > Services > Data Sources and press New, then Generic Data Source. Use the values:
    - Name: aqjmsuserDataSource
    - JNDI Name: jdbc/aqjmsuserDataSource
    - Database type: Oracle
    - Database Driver: Oracle's Driver (Thin XA) for Instance connections; Versions: 9.0.1 and later
    - Connection Properties: enter the connection information for the database containing the AQ created above and enter aqjmsuser for the User Name and Password.
    Press Test Configuration to verify the connection details and press Next. Target the data source to the soa server. The data source will be displayed in the list. It is a good idea to test the data source at this stage: click on aqjmsuserDataSource, select Monitoring > Testing > soa_server1 and press Test Data Source. The result is displayed at the top of the page.

    Configure a JMS System Module
    The JMS system module is required to host the JMS foreign server for AQ resources. Navigate to Services > Messaging > JMS Modules and select New. Use the values:
    - Name: AqJmsModule (leave Descriptor File Name and Location in Domain empty)
    - Target: soa_server1
    Click Finish. The other resources will be created in separate steps. The module will be displayed in the list.

    Configure a JMS Foreign Server
    A foreign server is required in order to reference a 3rd-party JMS provider, in this case the database AQ, within a local WebLogic server JNDI tree. Navigate to Services > Messaging > JMS Modules and select (click on) AqJmsModule to configure it. Under Summary of Resources, select New then Foreign Server.
    - Name: AqJmsForeignServer
    - Targets: the foreign server is targeted automatically to soa_server1, based on the JMS module's target.
    Press Finish to create the foreign server. The foreign server resource will be listed in the Summary of Resources for the AqJmsModule, but it needs additional configuration steps. Click on AqJmsForeignServer and select Configuration > General to complete the configuration:
    - JNDI Initial Context Factory: oracle.jms.AQjmsInitialContextFactory
    - JNDI Connection URL: <empty>
    - JNDI Properties Credential: <empty>
    - Confirm JNDI Properties Credential: <empty>
    - JNDI Properties: datasource=jdbc/aqjmsuserDataSource
      This is an important property. It is the JNDI name of the data source created above, which points to the AQ schema in the database, and it must be entered as a name=value pair, as in this example, including the "datasource=" property name.
    - Default Targeting Enabled: leave this value checked.
    Press Save to save the configuration. At this point it is a good idea to verify that the data source was written correctly to the config file. In a terminal window, navigate to $MIDDLEWARE_HOME/user_projects/domains/soa_domain/config/jms and open the file aqjmsmodule-jms.xml. The foreign server configuration should contain the datasource name-value pair, as follows:

        <foreign-server name="AqJmsForeignServer">
          <default-targeting-enabled>true</default-targeting-enabled>
          <initial-context-factory>oracle.jms.AQjmsInitialContextFactory</initial-context-factory>
          <jndi-property>
            <key>datasource</key>
            <value>jdbc/aqjmsuserDataSource</value>
          </jndi-property>
        </foreign-server>

    Configure a JMS Foreign Server Connection Factory
    When creating the foreign server connection factory, you enter local and remote JNDI names.
    The name of the connection factory itself and the local JNDI name are arbitrary, but the remote JNDI name must match a specific format, depending on the type of queue or topic to be accessed in the database. This is very important: if an incorrect value is used, the connection to the queue will not be established, and the error messages you get will not immediately reflect the cause of the error. The required formats (remote JNDI names for AQ JMS connection factories) are described in the section Configure AQ Destinations of the Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server document mentioned earlier. In this example, the remote JNDI name used is XAQueueConnectionFactory, because it matches the AQ and data source created earlier, i.e. thin with AQ.
    Navigate to JMS Modules > AqJmsModule > AqJmsForeignServer > Connection Factories, then New.
    - Name: AqJmsForeignServerConnectionFactory
    - Local JNDI Name: AqJmsForeignServerConnectionFactory
      Note: this local JNDI name is the JNDI name which your client application, e.g. a later BPEL process, will use to access this connection factory.
    - Remote JNDI Name: XAQueueConnectionFactory
    Press OK to save the configuration.

    Configure an AQ JMS Foreign Server Destination
    A foreign server destination maps the JNDI name on the foreign JNDI provider to the respective local JNDI name, allowing the foreign JNDI name to be accessed via the local server. As with the foreign server connection factory, the local JNDI name is arbitrary (but must be unique), while the remote JNDI name must conform to a specific format defined in the section Configure AQ Destinations of the Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server document mentioned earlier. In our example, the remote JNDI name is Queues/USERQUEUE, because it references a queue (as opposed to a topic) with the name USERQUEUE. We will name the local JNDI name queue/USERQUEUE, which is a little confusing (note the missing "s" in "queue"), but it conforms better to the JNDI nomenclature in our SOA server and also allows us to differentiate between the local and remote names for demonstration purposes.
    Navigate to JMS Modules > AqJmsModule > AqJmsForeignServer > Destinations and select New.
    - Name: AqJmsForeignDestination
    - Local JNDI Name: queue/USERQUEUE
    - Remote JNDI Name: Queues/USERQUEUE
    After saving the foreign destination configuration, this completes the JMS part of the configuration. We still need to configure the JMS adapter in order to be able to access the queue from a BPEL process.

    4. Create a JMS Adapter Connection Pool in WebLogic Server
    Create the Connection Pool
    Access to the AQ JMS queue from a BPEL or other SOA process in our example is done via a JMS adapter. To enable this, the JmsAdapter in WebLogic Server needs to be configured with a connection pool which points to the local connection factory JNDI name created earlier. Navigate to Deployments and select (click on) the JmsAdapter. Select Configuration > Outbound Connection Pools and New. Check the radio button for oracle.tip.adapter.jms.IJmsConnectionFactory and press Next.
    - JNDI Name: eis/aqjms/UserQueue
    Press Finish. Expand oracle.tip.adapter.jms.IJmsConnectionFactory and click on eis/aqjms/UserQueue to configure it. The ConnectionFactoryLocation must point to the foreign server's local connection factory name created earlier; in our example, this is AqJmsForeignServerConnectionFactory.
    As a reminder, this connection factory is located under JMS Modules > AqJmsModule > AqJmsForeignServer > Connection Factories, and the value needed here is under Local JNDI Name. Enter AqJmsForeignServerConnectionFactory into the Property Value field for ConnectionFactoryLocation. You must then press Return/Enter and then Save for the value to be accepted. If your WebLogic server is running in Development mode, you should see the message that the changes have been activated and the deployment plan successfully updated. If not, you will need to manually activate the changes in the WebLogic server console. Although the changes have been activated, the JmsAdapter needs to be redeployed in order for the changes to become effective. This should be confirmed by the message "Remember to update your deployment to reflect the new plan when you are finished with your changes."

    Redeploy the JmsAdapter
    Navigate back to the Deployments screen, either by selecting it in the left-hand navigation tree or by selecting the "Summary of Deployments" link in the breadcrumbs list at the top of the screen. Then select the checkbox next to JmsAdapter and press the Update button. On the Update Application Assistant page, select "Redeploy this application using the following deployment files" and press Finish. After a few seconds you should get the message that the selected deployments were updated. The JMS adapter configuration is complete and it can now be used to access the AQ JMS queue. You can verify that the JNDI name was created correctly by navigating to Environment > Servers > soa_server1 and View JNDI Tree, then scrolling down in the JNDI Tree Structure to eis and selecting aqjms.

    This concludes the sample. In the following post, I will show you how to create a BPEL process which sends a message to this advanced queue via JMS.

    Best regards
    John-Brown Evans
    Oracle Technology Proactive Support Delivery


  • Setting up a new Silverlight 4 Project with WCF RIA Services

    - by Kevin Grossnicklaus
    Many of my clients are actively using Silverlight 4 and RIA Services to build powerful line-of-business applications. Getting things set up correctly is critical to being able to take full advantage of the RIA Services plumbing, and when developers struggle with the setup they tend to shy away from the solution as a whole. I'm a big proponent of RIA Services and wanted to take the opportunity to share some of my experiences in setting up these types of projects. In late 2010 I presented a RIA Services Master Class here in St. Louis, MO through my firm (ArchitectNow), and the information shared in this post was promised during that presentation.
    One other thing I want to mention before diving in is the existence of a number of other great posts on this subject. I've learned a lot from many of them and wanted to call out a few. The purpose of my post is to point out some of the gotchas that people get caught up on in the process, but I would still encourage you to do as much additional research as you can to find the perfect setup for your needs. Here are a few additional blog posts and articles you should check out on the subject:
    http://msdn.microsoft.com/en-us/library/ee707351(VS.91).aspx
    http://adam-thompson.com/post/2010/07/03/Getting-Started-with-WCF-RIA-Services-for-Silverlight-4.aspx

    Technologies
    I don't intend for this post to turn into a full WCF RIA Services tutorial, but I did want to point out what technologies we will be using:
    - Visual Studio .NET 2010
    - Silverlight 4.0
    - WCF RIA Services for Visual Studio 2010
    - Entity Framework 4.0
    I also wanted to point out that the screenshots came from my personal development box, which has a number of additional plug-ins and frameworks loaded, so a few of the screenshots might not match 100% with what you see on your own machines. If you do not have Visual Studio 2010 you can download the Express version from http://www.microsoft.com/express. The Silverlight 4.0 tools and the WCF RIA Services components are installed via the Web Platform Installer (http://www.microsoft.com/web/download). Also, the examples given in this post are done in C#… sorry to you VB folks, but the concepts are 100% identical.

    Setting up a new RIA Services Project
    This section provides a step-by-step walkthrough of setting up a new RIA Services project using a shared DLL for server-side code and a simple Entity Framework model for data access. All projects are created with the consistent ArchitectNow.RIAServices filename prefix and default namespace; this would be modified to match your company's standards.
    First, open Visual Studio and open the new project window via File->New->Project. In the New Project window, select the Silverlight folder in the Installed Templates section on the left and select "Silverlight Application" as your project type. Verify your solution name and location are set appropriately. Note that the project name we specified in the example below ends with .Client; this indicates the name which will be given to our Silverlight project. I consider Silverlight a client-side technology and thus use this name to reflect that. Click OK to continue.
    During the creation of a new Silverlight 4 project you will be prompted with the following dialog to create a new ASP.NET web project to host your Silverlight content. As we are demonstrating the setup of a WCF RIA Services infrastructure, make sure the "Enable WCF RIA Services" option is checked and click OK.
    Obviously, there are some other options here which have an effect on your solution, and you are welcome to look around. For our example we are going to leave the ASP.NET Web Application Project selected. If you are interested in having your Silverlight project hosted in an MVC 2 application or a Web Site project, these options are available as well. Also, whichever web project type you select, the name can be modified here too; note that it defaults to the same name as your Silverlight project with the addition of a .Web suffix.
    At this point, your full Silverlight 4 project and host ASP.NET Web Application should be created and will now display in your Visual Studio solution explorer as part of a single Visual Studio solution as follows.
    Now we want to add our WCF RIA Services projects to this same solution. To do so, right-click on the Solution node in the solution explorer and select Add->New Project. In the New Project dialog again select the Silverlight folder under the Visual C# node on the left and, in the main area of the screen, select the WCF RIA Services Class Library project template as shown below. Make sure your project name is set appropriately as well. For the sample below, we will name the project "ArchitectNow.RIAServices.Server.Entities". The .Server.Entities suffix we use is meant to simply indicate that this particular project will contain our WCF RIA Services entity classes (as you will see below). Click OK to continue.
    Once you have created the WCF RIA Services Class Library specified above, Visual Studio will automatically add TWO projects to your solution. The first will be a project called .Server.Entities (using our naming conventions) and the other will have the same name with a .Web extension. The full solution (with all 4 projects) is shown in the image below. The .Entities project will essentially remain empty and is actually a Silverlight 4 class library that will contain generated RIA Services domain objects. It will be referenced by our front-end Silverlight project and thus allow for simplified sharing of code between the client and the server. The .Entities.Web project is a .NET 4.0 class library into which we will put our data access code (via Entity Framework). This is our server-side code and business logic, and the RIA Services plumbing will maintain a link between this project and the front end. Specific entities such as our domain objects and other code we set to be shared will be copied automatically into the .Entities project to be used in both the front end and the back end.
    At this point, we want to do a little cleanup of the projects in our solution, and we will do so by deleting the "Class1.cs" class from both the .Entities project and the .Entities.Web project. (Has anyone ever intentionally named a class "Class1"?)
    Next, we need to configure a few references to make RIA Services work. THIS IS A KEY STEP THAT CAUSES MANY HEADACHES FOR DEVELOPERS NEW TO THIS INFRASTRUCTURE! Using the Add References dialog in Visual Studio, add a project reference from the *.Client project (our Silverlight 4 client) to the *.Entities project (our RIA Services class library). Next, again using the Add References dialog in Visual Studio, add a project reference from the *.Client.Web project (our ASP.NET host project) to the *.Entities.Web project (our back-end data services DLL).
    To get to the Add References dialog, simply right-click on the project you wish to add a reference to in the Visual Studio solution explorer and select "Add Reference" from the resulting context menu. You will want to make sure these references are added as "Project" references to simplify your future debugging. To reiterate the reference direction using the project names we have utilized in this example thus far: .Client references .Entities, and .Client.Web references .Entities.Web. If you have opted for a different naming convention, then the Silverlight project must reference the RIA Services Silverlight class library, and the ASP.NET host project must reference the server-side class library.
    Next, we are going to add a new Entity Framework data model to our data services project (.Entities.Web). We will do this by right-clicking on this project (ArchitectNow.RIAServices.Server.Entities.Web in the above diagram) and selecting Add->New Item. In the Add New Item dialog we will select ADO.NET Entity Data Model as in the following diagram. For now we will call this simply SampleDataModel.edmx and click OK.
    It is worth pointing out that WCF RIA Services is in no way tied to the Entity Framework as a means of accessing data, and any data access technology is supported (as long as the server-side implementation maps to the RIA Services pattern, which is a topic beyond the scope of this post). We are using EF to quickly demonstrate the RIA Services concepts and setup infrastructure; as such, I am not providing a database schema with this post but am instead connecting to a small sample database on my local machine. The following diagram shows a simple EF Data Model with two tables that I reverse engineered from a local data store. If you are putting together your own solution, feel free to reverse engineer a few tables from any local database to which you have access.
    At this point, once you have an EF data model generated as an EDMX in your .Entities.Web project, YOU MUST BUILD YOUR SOLUTION. I know it seems strange to call that out, but it is important that the solution be built at this point for the next step to be successful. Obviously, if you have any build errors, these must be addressed now.
    Next we will add a RIA Services Domain Service to our .Entities.Web project (our server-side code). Right-click on the .Entities.Web project and select Add->New Item. In the Add New Item dialog, select Domain Service Class and verify the name of your new Domain Service is correct (ours is called SampleService.cs in the image below). Next, click "Add". After clicking "Add" to include the Domain Service Class in the selected project, you will be presented with the following dialog. In it, you can choose which entities from the selected EDMX to include in your services and whether they should be allowed to be edited (i.e. inserted, updated, or deleted) via this service. If the "Available DataContext/ObjectContext classes" dropdown is empty, this indicates you have not yet successfully built your project after adding your EDMX. I would also recommend verifying that the "Generate associated classes for metadata" option is selected. Once you have selected the appropriate options, click "OK".
    Once you have added the domain service class to the .Entities.Web project, the resulting solution should look similar to the following: note that in the solution you now have a SampleDataModel.edmx, which represents your EF mapping to your database, and a SampleService.cs, which will contain a large amount of generated RIA Services code which RIA Services utilizes to access this data from the Silverlight front end. You will put all your server-side data access code and logic into the SampleService.cs class. The SampleService.metadata.cs class is for decorating the generated domain objects with attributes from the System.ComponentModel.DataAnnotations namespace for validation purposes. A short sketch of what these two classes typically look like follows below.
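    As an illustration only (the entity and context names here are hypothetical stand-ins for whatever tables you reverse engineered into your EDMX), the generated domain service and its metadata class look roughly like this:

        using System.ComponentModel.DataAnnotations;
        using System.Linq;
        using System.ServiceModel.DomainServices.EntityFramework;
        using System.ServiceModel.DomainServices.Hosting;

        // SampleService.cs - exposes the EF entities to the Silverlight client.
        [EnableClientAccess]
        public class SampleService : LinqToEntitiesDomainService<SampleEntities>
        {
            // A query method is generated for each entity you selected in the wizard
            public IQueryable<Customer> GetCustomers()
            {
                return this.ObjectContext.Customers;
            }

            // An update method is generated when editing was enabled for the entity
            public void UpdateCustomer(Customer currentCustomer)
            {
                this.ObjectContext.Customers.AttachAsModified(
                    currentCustomer, this.ChangeSet.GetOriginal(currentCustomer));
            }
        }

        // SampleService.metadata.cs - validation attributes applied to the generated entity
        [MetadataType(typeof(Customer.CustomerMetadata))]
        public partial class Customer
        {
            internal sealed class CustomerMetadata
            {
                private CustomerMetadata() { }

                [Required]
                [StringLength(50)]
                public string Name { get; set; }
            }
        }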
    FINAL AND KEY CONFIGURATION STEP! One key step that causes significant headaches for developers configuring RIA Services for the first time is the fact that, when we added the EDMX to the .Entities.Web project for our EF data access, a connection string was generated and placed within a newly generated App.Config file within that project. While we didn't point it out at the time, you can see it in the image above. This connection string is required for the EF data model to successfully locate its data. Also, when we added the Domain Service class to the .Entities.Web project, a number of RIA Services configuration options were added to the same App.Config file. Unfortunately, when we ultimately begin to utilize the RIA Services infrastructure, our Silverlight UI will be making RIA Services calls through the ASP.NET host project (i.e. .Client.Web). This host project has a reference to the .Entities.Web project, which actually contains the code, so all will pass through correctly EXCEPT for the fact that the host project will utilize its own Web.Config for any configuration settings. For this reason we must now merge all the sections of the App.Config file in the .Entities.Web project into the Web.Config file in the .Client.Web project. I know this is a bit tedious and I wish there were a simpler solution, but it is required for our RIA Services Domain Service to be made available to the front-end Silverlight project. Much of this manual merge can be achieved by simply cutting and pasting from App.Config into Web.Config. Unfortunately, the <system.webServer> section will exist in both, and the contents of this section will need to be manually merged. Fortunately, this is a step that needs to be taken only once per solution. As you add additional data structures and Domain Service methods to the server, no additional changes will be necessary to the Web.Config.

    Next Steps
    At this point, we have walked through the basic setup of a simple RIA Services solution. Unfortunately, there is still a lot to know about RIA Services, and we have not even begun to take advantage of the plumbing which we just configured (meaning we haven't even made a single RIA Services call). I plan on posting a few more introductory posts over the next few weeks to take us to this step. If you have any questions on the content in this post, feel free to reach out to me via this blog and I'll gladly point you in (hopefully) the right direction.

    Resources
    Prior to closing out this post, I wanted to share a number of resources to help you get started with RIA Services. While I plan on posting more on the subject, I didn't invent any of this stuff and wanted to give credit to the following areas for helping me put a lot of these pieces into place. The books and online resources below will go a long way toward making you extremely productive with RIA Services in the shortest time possible. The only thing required of you is the dedication to take advantage of the resources available.

    Books
    - Pro Business Applications with Silverlight 4
      http://www.amazon.com/Pro-Business-Applications-Silverlight-4/dp/1430272074/ref=sr_1_2?ie=UTF8&qid=1291048751&sr=8-2
    - Silverlight 4 in Action
      http://www.amazon.com/Silverlight-4-Action-Pete-Brown/dp/1935182374/ref=sr_1_1?ie=UTF8&qid=1291048751&sr=8-1
    - Pro Silverlight for the Enterprise (Books for Professionals by Professionals)
      http://www.amazon.com/Pro-Silverlight-Enterprise-Books-Professionals/dp/1430218673/ref=sr_1_3?ie=UTF8&qid=1291048751&sr=8-3

    Web Content
    RIA Services
    - http://channel9.msdn.com/Blogs/RobBagby/NET-RIA-Services-in-5-Minutes
    - http://silverlight.net/riaservices/
    - http://www.silverlight.net/learn/videos/all/net-ria-services-intro/
    - http://www.silverlight.net/learn/videos/all/ria-services-support-visual-studio-2010/
    - http://channel9.msdn.com/learn/courses/Silverlight4/SL4BusinessModule2/SL4LOB_02_01_RIAServices
    - http://www.myvbprof.com/MainSite/index.aspx#/zSL4_RIA_01
    - http://channel9.msdn.com/blogs/egibson/silverlight-firestarter-ria-services
    - http://msdn.microsoft.com/en-us/library/ee707336%28v=VS.91%29.aspx
    Silverlight
    - www.silverlight.net
    - http://msdn.microsoft.com/en-us/silverlight4trainingcourse.aspx
    - http://channel9.msdn.com/shows/silverlighttv


  • Ops Center 12c - Provisioning Solaris Using a Card-Based NIC

    - by scottdickson
    It's been a long time since I last added something here, but after some conversations this last week I got inspired to update things. I've been spending a lot of time with Ops Center for managing and installing systems these days, so I suspect a number of my upcoming posts will be in that area. Today, I want to look at how to provision Solaris using Ops Center when your network is not connected to one of the built-in NICs. We'll talk about how this can work for both Solaris 10 and Solaris 11, since they are pretty similar. In both cases, WANBoot is a key piece of the story.
    Here's what I want to do: I have a Sun Fire T2000 server with a Quad-GbE nxge card installed. The only network is connected to port 2 on that card rather than the built-in network interfaces. I want to install Solaris on it across the network, either Solaris 10 or Solaris 11. I have met with a lot of customers lately who have a similar architecture; usually, they have T4-4 servers with the network connected via 10GbE connections. Add to this mix the fact that I use Ops Center to manage the systems in my lab, so I really would like to add this to Ops Center. If possible, I would like this to be completely hands free. I can't quite do that yet. Close, but not quite.

    WANBoot or Old-Style NetBoot?
    When a system is installed from the network, it needs some help getting the process rolling. It has to figure out what its network configuration (IP address, gateway, etc.) ought to be. It needs to figure out what server is going to help it boot and install, and it needs the instructions for the installation. There are two different ways to bootstrap an installation of Solaris on SPARC across the network. The old way uses a broadcast of RARP, or more recently DHCP, to obtain the IP configuration and the rest of the information needed. The second is to explicitly configure this information in the OBP and use WANBoot for installation. WANBoot has a number of benefits over broadcast-based installation: it is not restricted to a single subnet; it does not require special DHCP configuration or DHCP helpers; and it uses standard HTTP and HTTPS protocols, which traverse firewalls much more easily than NFS-based package installation. But WANBoot is not available on really old hardware, and WANBoot requires the use of Flash Archives in Solaris 10. Still, for many people, this is a great approach. As it turns out, WANBoot is necessary if you plan to install using a NIC on a card rather than a built-in NIC.

    Identifying Which Network Interface to Use
    One of the trickiest aspects of this process, and the one that actually requires manual intervention to set up, is identifying how the OBP and Solaris refer to the NIC that we want to use to boot. The OBP already has device aliases configured for the built-in NICs called net, net0, net1, net2, net3. The device alias net typically points to net0, so that when you issue the command "boot net -v install", it uses net0 for the boot. Our task is to figure out the network instance for the NIC we want to use. We will need to get to the OBP console of the system we want to install in order to figure out what the network should be called. I will presume you know how to get to the ok prompt. Once there, we have to see what networks the OBP sees and identify which one is associated with our NIC, using the OBP command show-nets.
{4} ok banner Sun Fire T200, No Keyboard Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved. OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548. Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c. {4} ok show-nets a) /pci@7c0/pci@0/pci@2/network@0,1 b) /pci@7c0/pci@0/pci@2/network@0 c) /pci@780/pci@0/pci@8/network@0,3 d) /pci@780/pci@0/pci@8/network@0,2 e) /pci@780/pci@0/pci@8/network@0,1 f) /pci@780/pci@0/pci@8/network@0 g) /pci@780/pci@0/pci@1/network@0,1 h) /pci@780/pci@0/pci@1/network@0 q) NO SELECTION Enter Selection, q to quit: d /pci@780/pci@0/pci@8/network@0,2 has been selected. Type ^Y ( Control-Y ) to insert it in the command line. e.g. ok nvalias mydev ^Y for creating devalias mydev for /pci@780/pci@0/pci@8/network@0,2 {4} ok devalias ... net3 /pci@7c0/pci@0/pci@2/network@0,1 net2 /pci@7c0/pci@0/pci@2/network@0 net1 /pci@780/pci@0/pci@1/network@0,1 net0 /pci@780/pci@0/pci@1/network@0 net /pci@780/pci@0/pci@1/network@0 ... name aliases By looking at the devalias and the show-nets output, we can see that our Quad-GbE card must be the device nodes starting with  /pci@780/pci@0/pci@8/network@0.  The cable for our network is plugged into the 3rd slot, so the device address for our network must be /pci@780/pci@0/pci@8/network@0,2. With that, we can create a device alias for our network interface.  Naming the device alias may take a little bit of trial and error, especially in Solaris 11 where the device alias seems to matter more with the new virtualized network stack. So far in my testing, since this is the "next" network interface to be used, I have found success in naming it net4, even though it's a NIC in the middle of a card that might, by rights, be called net6 (assuming the 0th interface on the card is the next interface identified by Solaris and this is the 3rd interface on the card).  So, we will call it net4.  We need to assign a device alias to it: {4} ok nvalias net4 /pci@780/pci@0/pci@8/network@0,2 {4} ok devalias net4 /pci@780/pci@0/pci@8/network@0,2 ... We also may need to have the MAC for this particular interface, so let's get it, too.  To do this, we go to the device and interrogate its properties. {4} ok cd /pci@780/pci@0/pci@8/network@0,2 {4} ok .properties assigned-addresses 82060210 00000000 03000000 00000000 01000000 82060218 00000000 00320000 00000000 00008000 82060220 00000000 00328000 00000000 00008000 82060230 00000000 00600000 00000000 00100000 local-mac-address 00 21 28 20 42 92 phy-type mif ... From this, we can see that the MAC for this interface is  00:21:28:20:42:92.  We will need this later. This is all we need to do at the OBP.  Now, we can configure Ops Center to use this interface. Network Boot in Solaris 10 Solaris 10 turns out to be a little simpler than Solaris 11 for this sort of a network boot.  Since WANBoot in Solaris 10 fetches a specified In order to install the system using Ops Center, it is necessary to create a OS Provisioning profile and its corresponding plan.  I am going to presume that you already know how to do this within Ops Center 12c and I will just cover the differences between a regular profile and a profile that can use an alternate interface. Create a OS Provisioning profile for Solaris 10 as usual.  However, when you specify the network resources for the primary network, click on the name of the NIC, probably GB_0, and rename it to GB_N/netN, where N is the instance number you used previously in creating the device alias.  This is where the trial and error may come into play.  
You may need to try a few instance numbers before you, the OBP, and Solaris all agree on the instance number.  Mark this as the boot network. For Solaris 10, you ought to be able to then apply the OS Provisioning profile to the server and it should install using that interface.  And if you put your cards in the same slots and plug the networks into the same NICs, this profile is reusable across multiple servers. Why This Works If you watch the console as Solaris boots during the OSP process, Ops Center is going to look for the device alias netN.  Since WANBoot requires a device alias called just net, Ops Center uses the value of your netN device alias and assigns that device to the net alias.  That means that boot net will automatically use this device.  Very cool!  Here's a trace from the console as Ops Center provisions a server: Sun Sun Fire T200, No KeyboardCopyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548.Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c.auto-boot? =            false{0} ok  {0} ok printenv network-boot-argumentsnetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok devalias net net                      /pci@780/pci@0/pci@1/network@0{0} ok devalias net4 net4                     /pci@780/pci@0/pci@8/network@0,2{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cginetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cgi{0} ok {0} ok boot net - installBoot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up<time unavailable> wanboot info: WAN boot messages->console<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2 See what happened?  Ops Center looked for the network device alias called net4 that we specified in the profile, took the value from it, and made it the net device alias for the boot.  Pretty cool! WANBoot and Solaris 11 Solaris 11 requires an additional step since the Automated Installer in Solaris 11 uses the MAC address of the network to figure out which manifest to use for system installation.  In order to make sure this is available, we have to take an extra step to associate the MAC of the NIC on the card with the host.  So, in addition to creating the device alias like we did above, we also have to declare to Ops Center that the host has this new MAC. Declaring the NIC Start out by discovering the hardware as usual.  Once you have discovered it, take a look under the Connectivity tab to see what networks it has discovered.  In the case of this system, it shows the 4 built-in networks, but not the networks on the additional cards.  These are not directly visible to the system controller.  In order to add the additional network interface to the hardware asset, it is necessary to Declare it.  We will declare that we have a server with this additional NIC, but we will also  specify the existing GB_0 network so that Ops Center can associate the right resources together.  
The GB_0 acts as sort of a key to tie our new declaration to the old system already discovered.  Go to the Assets tab, select All Assets, and then in the Actions tab, select Add Asset.  Rather than going through a discovery this time, we will manually declare a new asset. When we declare it, we will give the hostname, IP address, system model that match those that have already been discovered.  Then, we will declare both GB_0 with its existing MAC and the new GB_4 with its MAC.  Remember that we collected the MAC for GB_4 when we created its device alias. After you declare the asset, you will see the new NIC in the connectivity tab for the asset.  You will notice that only the NICs you listed when you declared it are seen now.  If you want Ops Center to see all of the existing NICs as well as the additional one, declare them as well.  Add the other GB_1, GB_2, GB_3 links and their MACs just as you did GB_0 and GB_4.  Installing the OS  Once you have declared the asset, you can create an OS Provisioning profile for Solaris 11 in the same way that you did for Solaris 10.  The only difference from any other provisioning profile you might have created already is the network to use for installation.  Again, use GB_N/netN where N is the interface number you used for your device alias and in your declaration.  And away you go.  When the system boots from the network, the automated installer (AI) is able to see which system manifest to use, based on the new MAC that was associated, and the system gets installed. {0} ok {0} ok printenv network-boot-argumentsnetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok devalias net net                      /pci@780/pci@0/pci@1/network@0{0} ok devalias net4 net4                     /pci@780/pci@0/pci@8/network@0,2{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cginetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok boot net - installBoot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up<time unavailable> wanboot info: WAN boot messages->console<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2...SunOS Release 5.11 Version 11.0 64-bitCopyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.Remounting root read/writeProbing for device nodes ...Preparing network image for useDownloading solaris.zlib--2012-02-17 15:10:17--  http://10.140.204.22:5555/var/js/AI/sparc//solaris.zlibConnecting to 10.140.204.22:5555... connected.HTTP request sent, awaiting response... 200 OKLength: 126752256 (121M) [text/plain]Saving to: `/tmp/solaris.zlib'100%[======================================>] 126,752,256 28.6M/s   in 4.4s    2012-02-17 15:10:21 (27.3 MB/s) - `/tmp/solaris.zlib' saved [126752256/126752256] Conclusion So, why go to all of this trouble?  More and more, I find that customers are wiring their data center to only use higher speed networks - 10GbE only to the hosts.  
Some customers are moving aggressively toward consolidated networks, combining storage and network on CNA NICs.  All of this means that network-based provisioning cannot rely exclusively on the built-in network interfaces.  So, it's important to be able to provision a system using interfaces other than the built-in networks.  It turns out that this is pretty straightforward for both Solaris 10 and Solaris 11 and fits into the Ops Center deployment process quite nicely. Hopefully, you will be able to use this as you build out your own private cloud solutions with Ops Center.

    Read the article

  • The Art of Productivity

    - by dwahlin
    Getting things done has always been a challenge regardless of gender, age, race, skill, or job position. No matter how hard some people try, they end up procrastinating tasks until the last minute. Some people simply focus better when they know they’re out of time and can’t procrastinate any longer. How many times have you put off working on a term paper in school until the very last minute? With only a few hours left, your mental energy and focus seem to kick into high gear, especially as you realize that you either get the paper done now or risk failing. It’s amazing how a little pressure can turn into a motivator and allow our minds to focus on a given task. Some people seem to specialize in procrastinating just about everything they do, while others tend to be the “doers” who get a lot done and ultimately rise up the ladder at work. What’s the difference between these types of people? Is it pure laziness, or are other factors at play? I think that some people are certainly more motivated than others, but I also think a lot of it is based on the process that “doers” tend to follow - whether knowingly or unknowingly. While I’ve certainly fought battles with procrastination, I’ve always had a knack for being able to get a lot done in a relatively short amount of time. I think a lot of my “get it done” attitude goes back to the strong work ethic my parents instilled in me at a young age. I remember my dad saying, “You need to learn to work hard!” when I was around 5 years old. I remember that moment specifically because I was on a tractor with him the first time I heard it, while he was trying to move some large rocks into a pile. The tractor was big but so were the rocks, and my dad had to balance the tractor perfectly so that it didn’t tip forward too far. It was challenging work and somewhat tedious, but my dad finished the task and taught me a few important lessons along the way, including persistence, the importance of having a skill, and getting the job done right without skimping along the way. In this post I’m going to list a few of the techniques and processes I follow that I hope may be beneficial to others. I blogged about the general concept back in 2009 but thought I’d share some updated information and lessons learned since then. Most of the ideas that follow came from learning and refining my daily work process over the years. However, since most of the ideas are common sense (at least in my opinion), I suspect they can be found in other productivity processes that are out there. Let’s start off with one of the most important yet simple tips: Start Each Day with a List. Start Each Day with a List What are you planning to get done today? Do you keep track of everything in your head or rely on your calendar? While most of us think that we’re pretty good at managing “to do” lists strictly in our head, you might be surprised at how effective writing out lists can be. By writing out tasks you’re forced to focus on the most important tasks to accomplish that day, commit yourself to those tasks, and have an easy way to track what was supposed to get done and what actually got done. Start every morning by making a list of specific tasks that you want to accomplish throughout the day. I’ll even go so far as to fill in times when I’d like to work on tasks if I have a lot of meetings or other events tying up my calendar on a given day. 
I’m not a big fan of using paper since I type a lot faster than I write (plus I write like a 3rd grader according to my wife), so I use the Sticky Notes feature available in Windows. Here’s an example of yesterday’s sticky note: What do you add to your list? That’s the subject of the next tip. Focus on Small Tasks It’s no secret that focusing on small, manageable tasks is more effective than trying to focus on large and more vague tasks. When you make your list each morning only add tasks that you can accomplish within a given time period. For example, if I only have 30 minutes blocked out to work on an article I don’t list “Write Article”. If I do that I’ll end up wasting 30 minutes stressing about how I’m going to get the article done in 30 minutes and ultimately get nothing done. Instead, I’ll list something like “Write Introductory Paragraphs for Article”. The next day I may add, “Write first section of article” or something that’s small and manageable – something I’m confident that I can get done. You’ll find that once you’ve knocked out several smaller tasks it’s easy to continue completing others since you want to keep the momentum going. In addition to keeping my tasks focused and small, I also make a conscious effort to limit my list to 4 or 5 tasks initially. I’ve found that if I list more than 5 tasks I feel a bit overwhelmed which hurts my productivity. It’s easy to add additional tasks as you complete others and you get the added benefit of that confidence boost of knowing that you’re being productive and getting things done as you remove tasks and add others. Getting Started is the Hardest (Yet Easiest) Part I’ve always found that getting started is the hardest part and one of the biggest contributors to procrastination. Getting started working on tasks is a lot like getting a large rock pushed to the bottom of a hill. It’s difficult to get the rock rolling at first, but once you manage to get it rocking some it’s really easy to get it rolling on its way to the bottom. As an example, I’ve written 100s of articles for technical magazines over the years and have really struggled with the initial introductory paragraphs. Keep in mind that these are the paragraphs that don’t really add that much value (in my opinion anyway). They introduce the reader to the subject matter and nothing more. What a waste of time for me to sit there stressing about how to start the article. On more than one occasion I’ve spent more than an hour trying to come up with 2-3 paragraphs of text.  Talk about a productivity killer! Whether you’re struggling with a writing task, some code for a project, an email, or other tasks, jumping in without thinking too much is the best way to get started I’ve found. I’m not saying that you shouldn’t have an overall plan when jumping into a task, but on some occasions you’ll find that if you simply jump into the task and stop worrying about doing everything perfectly that things will flow more smoothly. For my introductory paragraph problem I give myself 5 minutes to write out some general concepts about what I know the article will cover and then spend another 10-15 minutes going back and refining that information. That way I actually have some ideas to work with rather than a blank sheet of paper. If I still find myself struggling I’ll write the rest of the article first and then circle back to the introductory paragraphs once I’m done. To sum this tip up: Jump into a task without thinking too hard about it. 
It’s better to get the rock at the top of the hill rocking some than doing nothing at all. You can always go back and refine your work.   Learn a Productivity Technique and Stick to It There are a lot of different productivity programs and seminars out there being sold by companies. I’ve always laughed at how much money people spend on some of these motivational programs/seminars because I think that being productive isn’t that hard if you create a reusable set of steps and processes to follow. That’s not to say that some of these programs/seminars aren’t worth the money, of course, because I know they’ve definitely benefited some people that have a hard time getting things done and staying focused. One of the best productivity techniques I’ve ever learned is called the “Pomodoro Technique” and it’s completely free. This technique is an extremely simple way to manage your time without having to remember a bunch of steps, color coding mechanisms, or other processes. The technique was originally developed by Francesco Cirillo in the 80s and can be implemented with a simple timer. In a nutshell, here’s how the technique works:
1. Pick a task to work on
2. Set the timer to 25 minutes and work on the task
3. Once the timer rings, record your time
4. Take a 5 minute break
5. Repeat the process
Here’s why the technique works well for me: It forces me to focus on a single task for 25 minutes. In the past I had no time goal in mind and just worked aimlessly on a task until I got interrupted or bored. 25 minutes is a small enough chunk of time for me to stay focused. Any distractions that may come up have to wait until after the timer goes off. If the distraction is really important then I stop the timer and record my time up to that point. When the timer is running I act as if I only have 25 minutes total for the task (like you’re down to the last 25 minutes before turning in your term paper….frantically working to get it done) which helps me stay focused and turns into a “beat the clock” type of game. It’s actually kind of fun if you treat it that way and really helps me focus on the task at hand. I automatically know how much time I’m spending on a given task (more on this later) by using this technique. I know that I have 5 minutes after each pomodoro (the 25 minute sprint) to waste on anything I’d like, including visiting a website, stepping away from the computer, etc., which also helps me stay focused when the 25 minute timer is counting down. I use this technique so much that I decided to build a program for Windows 8 called Pomodoro Focus (I plan to blog about how it was built in a later post). It’s a Windows Store application that allows people to track tasks, productive time spent on tasks, interruption time experienced while working on a given task, and the number of pomodoros completed. If a time estimate is given when the task is initially created, Pomodoro Focus will also show the task completion percentage. I like it because it allows me to track my tasks, time spent on tasks (very useful in the consulting world), and even how much time I wasted on tasks (pressing the pause button while working on a task starts the interruption timer). I recently added a new feature that charts productive and interruption time for tasks since I wanted to see how productive I was from week to week and month to month. A few screenshots from the Pomodoro Focus app are shown next; I had a lot of fun building it and use it myself as I work on tasks.   
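To make the cadence above concrete, here is a minimal JavaScript sketch of the work/break loop the technique describes. This is not the Pomodoro Focus app; the function and callback names (startPomodoro, onDone) are purely illustrative, and it simply models one 25-minute sprint followed by a 5-minute break.

// Minimal Pomodoro loop: one 25-minute sprint, then a 5-minute break.
// startPomodoro and onDone are illustrative names, not part of Pomodoro Focus.
function startPomodoro(taskName, onDone) {
    var WORK_MS = 25 * 60 * 1000;   // one pomodoro
    var BREAK_MS = 5 * 60 * 1000;   // short break
    var startedAt = new Date();

    console.log("Working on: " + taskName);
    setTimeout(function () {
        var minutes = Math.round((new Date() - startedAt) / 60000);
        console.log("Pomodoro done - record " + minutes + " minutes against '" + taskName + "'");
        console.log("Take a 5 minute break...");
        setTimeout(function () {
            console.log("Break over - pick the next task.");
            if (onDone) onDone();
        }, BREAK_MS);
    }, WORK_MS);
}

// Example usage: startPomodoro("Write first section of article");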
There are certainly many other productivity techniques and processes out there (and a slew of books describing them), but the Pomodoro Technique has been the simplest and most effective technique I’ve ever come across for staying focused and getting things done.   Persistence is Key Getting things done is great but one of the biggest lessons I’ve learned in life is that persistence is key especially when you’re trying to get something done that at times seems insurmountable. Small tasks ultimately lead to larger tasks getting accomplished, however, it’s not all roses along the way as some of the smaller tasks may come with their own share of bumps and bruises that lead to discouragement about the end goal and whether or not it is worth achieving at all. I’ve been on several long-term projects over my career as a software developer (I have one personal project going right now that fits well here) and found that repeating, “Persistence is the key!” over and over to myself really helps. Not every project turns out to be successful, but if you don’t show persistence through the hard times you’ll never know if you succeeded or not. Likewise, if you don’t persistently stick to the process of creating a daily list, follow a productivity process, etc. then the odds of consistently staying productive aren’t good.   Track Your Time How much time do you actually spend working on various tasks? If you don’t currently track time spent answering emails, on phone calls, and working on various tasks then you might be surprised to find out that a task that you thought was going to take you 30 minutes ultimately ended up taking 2 hours. If you don’t track the time you spend working on tasks how can you expect to learn from your mistakes, optimize your time better, and become more productive? That’s another reason why I like the Pomodoro Technique – it makes it easy to stay focused on tasks while also tracking how much time I’m working on a given task.   Eliminate Distractions I blogged about this final tip several years ago but wanted to bring it up again. If you want to be productive (and ultimately successful at whatever you’re doing) then you can’t waste a lot of time playing games or on Twitter, Facebook, or other time sucking websites. If you see an article you’re interested in that has no relation at all to the tasks you’re trying to accomplish then bookmark it and read it when you have some spare time (such as during a pomodoro break). Fighting the temptation to check your friends’ status updates on Facebook? Resist the urge and realize how much those types of activities are hurting your productivity and taking away from your focus. I’ll admit that eliminating distractions is still tough for me personally and something I have to constantly battle. But, I’ve made a conscious decision to cut back on my visits and updates to Facebook, Twitter, Google+ and other sites. Sure, my Klout score has suffered as a result lately, but does anyone actually care about those types of scores aside from your online “friends” (few of whom you’ve actually met in person)? :-) Ultimately it comes down to self-discipline and how badly you want to be productive and successful in your career, life goals, hobbies, or whatever you’re working on. Rather than having your homepage take you to a time wasting news site, game site, social site, picture site, or others, how about adding something like the following as your homepage? Every time your browser opens you’ll see a personal message which helps keep you on the right track. 
You can download my uber-sophisticated homepage here if interested. Summary Is there a single set of steps that, if followed, can ultimately lead to productivity? I don’t think so, since one size has never fit all. Every person is different, works in their own unique way, and has their own set of motivators, distractions, and more. While I certainly don’t consider myself to be an expert on the subject of productivity, I do think that if you learn what steps work best for you and gradually refine them over time, you can come up with a personal productivity process that can serve you well. Productivity is definitely an “art” that anyone can learn with a little practice and persistence. You’ve seen some of the steps that I personally like to follow and I hope you find some of them useful in boosting your productivity. If you have others you use, please leave a comment. I’m always looking for ways to improve.

    Read the article

  • Building a jQuery Plug-in to make an HTML Table scrollable

    - by Rick Strahl
    Today I got a call from a customer and we were looking over an older application that uses a lot of tables to display financial and other assorted data. The application is mostly meta-data driven, with lots of layout formatting automatically driven through meta data rather than through explicit hand-coded HTML layouts. One of the problems in this app is tables that display a non-fixed amount of data. The users of this app don't want to use paging to see more data, but instead want to display overflow data using a scrollbar. Many of the forms are very densely populated, often with multiple data tables that display a few rows of data in the UI at the most. This sort of layout does not lend itself well to paging, but works much better with scrollable data. Unfortunately scrollable tables are not easily created. HTML tables are mangy beasts, as anybody who's done any sort of Web development knows. Tables are finicky when it comes to styling and layout, and they have many funky quirks, especially when it comes to scrolling both of the table rows themselves or even the child columns. There's no built-in way to make tables scroll and to lock headers while you do, and while you can embed a table (or anything really) into a scrolling div with something like this:
<div style="position:relative; overflow: hidden; overflow-y: scroll; height: 200px; width: 400px;">
  <table id="table" style="width: 100%" class="blackborder">
    <thead>
      <tr class="gridheader">
        <th>Column 1</th>
        <th>Column 2</th>
        <th>Column 3</th>
        <th>Column 4</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td>Column 1 Content</td>
        <td>Column 2 Content</td>
        <td>Column 3 Content</td>
        <td>Column 4 Content</td>
      </tr>
      <tr>
        <td>Column 1 Content</td>
        <td>Column 2 Content</td>
        <td>Column 3 Content</td>
        <td>Column 4 Content</td>
      </tr>
      …
    </tbody>
  </table>
</div>
that won't give a very satisfying visual experience: Both the header and body scroll, which looks odd. You lose context as soon as the header scrolls off the top, and when you reach the bottom of the list the bottom outline of the table shows, which also looks off. The side bar shows all the way down the length of the table - yet another visual miscue. In a pinch this will work, but it's ugly. What's out there? Before we go further here you should know that there are a few capable grid plug-ins out there already. Among them:
Flexigrid (can work off of any table as well as with AJAX data)
jQuery Scrollable Table Plug-in (features similar to what I need, but not quite)
jqGrid (mostly an Ajax grid which is very powerful and works very well)
But in the end none of them fit the bill of what I needed in this situation. All of these require custom CSS and some of them are fairly complex to restyle. Others are AJAX only or work better with AJAX-loaded data. However, I need to actually try (as much as possible) to maintain the original styling of the tables without requiring extensive re-styling. Building the makeTableScrollable() Plug-in Making a table scrollable requires rearranging the table a bit. In the plug-in I built, I create two <div> tags and split the table into two: one for the table header and one for the table body. The bottom <div> tag then contains only the table's row data and can be scrolled while the header stays fixed. Using jQuery the basic idea is pretty simple: you create the divs, copy the original table into the bottom one, then clone the table, clear all its content, append the <thead> section into the new table, and then copy that table into the second header <div>. Easy as pie, right? 
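Stripped to its essence, that basic idea can be sketched in a few lines of jQuery. This is only a rough illustration, assuming jQuery is loaded and the table has proper <thead> and <tbody> sections; the function name naiveScrollableTable is made up for the sketch, and it deliberately ignores the column sizing and border work that the real plug-in below takes care of.

// Bare-bones sketch of the idea (not the full plug-in shown below):
// split one table into a fixed header table plus a scrolling body table.
function naiveScrollableTable($table, height) {
    // clone the table for the header and keep only the <thead>
    var $headerTable = $table.clone().empty();
    $table.find("thead").appendTo($headerTable);

    // wrap the original (now header-less) table in a scrolling div
    var $bodyDiv = $("<div>")
        .css({ overflowY: "scroll", height: height, width: $table.width() });
    $table.before($bodyDiv);
    $bodyDiv.append($table);

    // place the header table in its own div directly above the scrolling body
    $("<div>").append($headerTable).insertBefore($bodyDiv);
}

// e.g. naiveScrollableTable($("#table"), "200px");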
Unfortunately it's a bit more complicated than that as it's tricky to get the width of the table right to account for the scrollbar (by adding a small column) and making sure the borders properly line up for the two tables. A lot of style settings have to be made to ensure the table is a fixed size, to remove and reattach borders, to add extra space to allow for the scrollbar and so forth. The end result of my plug-in is a table with a scrollbar. Using the same table I used earlier the result looks like this: To create it, I use the following jQuery plug-in logic to select my table and run the makeTableScrollable() plug-in against the selector: $("#table").makeTableScrollable( { cssClass:"blackborder"} ); Without much further ado, here's the short code for the plug-in: (function ($) { $.fn.makeTableScrollable = function (options) { return this.each(function () { var $table = $(this); var opt = { // height of the table height: "250px", // right padding added to support the scrollbar rightPadding: "10px", // cssclass used for the wrapper div cssClass: "" } $.extend(opt, options); var $thead = $table.find("thead"); var $ths = $thead.find("th"); var id = $table.attr("id"); var cssClass = $table.attr("class"); if (!id) id = "_table_" + new Date().getMilliseconds().ToString(); $table.width("+=" + opt.rightPadding); $table.css("border-width", 0); // add a column to all rows of the table var first = true; $table.find("tr").each(function () { var row = $(this); if (first) { row.append($("<th>").width(opt.rightPadding)); first = false; } else row.append($("<td>").width(opt.rightPadding)); }); // force full sizing on each of the th elemnts $ths.each(function () { var $th = $(this); $th.css("width", $th.width()); }); // Create the table wrapper div var $tblDiv = $("<div>").css({ position: "relative", overflow: "hidden", overflowY: "scroll" }) .addClass(opt.cssClass); var width = $table.width(); $tblDiv.width(width).height(opt.height) .attr("id", id + "_wrapper") .css("border-top", "none"); // Insert before $tblDiv $tblDiv.insertBefore($table); // then move the table into it $table.appendTo($tblDiv); // Clone the div for header var $hdDiv = $tblDiv.clone(); $hdDiv.empty(); var width = $table.width(); $hdDiv.attr("style", "") .css("border-bottom", "none") .width(width) .attr("id", id + "_wrapper_header"); // create a copy of the table and remove all children var $newTable = $($table).clone(); $newTable.empty() .attr("id", $table.attr("id") + "_header"); $thead.appendTo($newTable); $hdDiv.insertBefore($tblDiv); $newTable.appendTo($hdDiv); $table.css("border-width", 0); }); } })(jQuery); Oh sweet spaghetti code :-) The code starts out by dealing the parameters that can be passed in the options object map: height The height of the full table/structure. The height of the outside wrapper container. Defaults to 200px. rightPadding The padding that is added to the right of the table to account for the scrollbar. Creates a column of this width and injects it into the table. If too small the rightmost column might get truncated. if too large the empty column might show. cssClass The CSS class of the wrapping container that appears to wrap the table. If you want a border around your table this class should probably provide it since the plug-in removes the table border. The rest of the code is obtuse, but pretty straight forward. It starts by creating a new column in the table to accommodate the width of the scrollbar and avoid clipping of text in the rightmost column. 
The width of the columns is explicitly set in the header elements to force the size of the table to be fixed and to provide the same sizing when the THEAD section is moved to a new copied table later. The table wrapper div is created, formatted and the table is moved into it. The new wrapper div is cloned for the header wrapper and configured. Finally the actual table is cloned and cleared of all elements. The original table's THEAD section is then moved into the new table. At last the new table is added to the header <div>, and the header <div> is inserted before the table wrapper <div>. I'm always amazed how easy jQuery makes it to do this sort of re-arranging, and given of what's happening the amount of code is rather small. Disclaimer: Your mileage may vary A word of warning: I make no guarantees about the code above. It's a first cut and I provided this here mainly to demonstrate the concepts of decomposing and reassembling an HTML layout :-) which jQuery makes so nice and easy. I tested this component against the typical scenarios we plan on using it for which are tables that use a few well known styles (or no styling at all). I suspect if you have complex styling on your <table> tag that things might not go so well. If you plan on using this plug-in you might want to minimize your styling of the table tag and defer any border formatting using the class passed in via the cssClass parameter, which ends up on the two wrapper div's that wrap the header and body rows. There's also no explicit support for footers. I rarely if ever use footers (when not using paging that is), so I didn't feel the need to add footer support. However, if you need that it's not difficult to add - the logic is the same as adding the header. The plug-in relies on a well-formatted table that has THEAD and TBODY sections along with TH tags in the header. Note that ASP.NET WebForm DataGrids and GridViews by default do not generate well-formatted table HTML. You can look at my Adding proper THEAD sections to a GridView post for more info on how to get a GridView to render properly. The plug-in has no dependencies other than jQuery. Even with the limitations in mind I hope this might be useful to some of you. I know I've already identified a number of places in my own existing applications where I will be plugging this in almost immediately. Resources Download Sample and Plug-in code Latest version in the West Wind Web & AJAX Toolkit Repository © Rick Strahl, West Wind Technologies, 2005-2011Posted in jQuery  HTML  ASP.NET  

    Read the article

  • eBooks on iPad vs. Kindle: More Debate than Smackdown

    - by andrewbrust
    When the iPad was presented at its San Francisco launch event on January 28th, Steve Jobs spent a significant amount of time explaining how well the device would serve as an eBook reader. He showed the iBooks reader application and iBookstore and laid down the gauntlet before Amazon and its beloved Kindle device. Almost immediately afterwards, criticism came rushing forth that the iPad could never beat the Kindle for book reading. The curious part of that criticism is that virtually no one offering it had actually used the iPad yet. A few weeks later, on April 3rd, the iPad was released for sale in the United States. I bought one on that day and in the few additional weeks that have elapsed, I’ve given quite a workout to most of its capabilities, including its eBook features. I’ve also spent some time with the Kindle, albeit a first-generation model, to see how it actually compares to the iPad. I had some expectations going in, but I came away with conclusions about each device that were more scenario-based than absolute. I present my findings to you here.   Vital Statistics Let’s start with an inventory of each device’s underlying technology. The iPad has a color, backlit LCD screen and an on-screen keyboard. It has a battery which, on a full charge, lasts anywhere from 6-10 hours. The Kindle offers a monochrome, reflective E Ink display, a physical keyboard and a battery that on my first gen loaner unit can go up to a week between charges (Amazon claims the battery on the Kindle 2 can last up to 2 weeks on a single charge). The Kindle connects to Amazon’s Kindle Store using a 3G modem (the technology and network vary depending on the model) that incurs no airtime service charges whatsoever. The iPad units that are on-sale today work over WiFi only. 3G-equipped models will be on sale shortly and will command a $130 premium over their WiFi-only counterparts. 3G service on the iPad, in the U.S. from AT&T, will be fee-based, with a 250MB plan at $14.99 per month and an unlimited plan at $29.99. No contract is required for 3G service. All these tech specs aside, I think a more useful observation is that the iPad is a multi-purpose Internet-connected entertainment device, while the Kindle is a dedicated reading device. The question is whether those differences in design and intended use create a clear-cut winner for reading electronic publications. Let’s take a look at each device, in isolation, now.   Kindle To me, what’s most innovative about the Kindle is its E Ink display. E Ink really looks like ink on a sheet of paper. It requires no backlight, it’s fully visible in direct sunlight and it causes almost none of the eyestrain that LCD-based computer display technology (like that used on the iPad) does. It’s really versatile in an all-around way. Forgive me if this sounds precious, but reading on it is really a joy. In fact, it’s a genuinely relaxing experience. Through the Kindle Store, Amazon allows users to download books (including audio books), magazines, newspapers and blog feeds. Books and magazines can be purchased either on a single-issue basis or as an annual subscription. Books, of course, are purchased singly. Oddly, blogs are not free, but instead carry a monthly subscription fee, typically $1.99. To me this is ludicrous, but I suppose the free 3G service is partially to blame. Books and magazine issues download quickly. Magazine and blog subscriptions cause new issues or posts to be pushed to your device on an automated basis. 
Available blogs include 9000-odd feeds that Amazon offers on the Kindle Store; unless I missed something, arbitrary RSS feeds are not supported (though there are third party workarounds to this limitation). The shopping experience is integrated well, has an huge selection, and offers certain graphical perks. For example, magazine and newspaper logos are displayed in menus, and book cover thumbnails appear as well. A simple search mechanism is provided and text entry through the physical keyboard is relatively painless. It’s very easy and straightforward to enter the store, find something you like and start reading it quickly. If you know what you’re looking for, it’s even faster. Given Kindle’s high portability, very reliable battery, instant-on capability and highly integrated content acquisition, it makes reading on whim, and in random spurts of downtime, very attractive. The Kindle’s home screen lists all of your publications, and easily lets you select one, then start reading it. Once opened, publications display in crisp, attractive text that is adjustable in size. “Turning” pages is achieved through buttons dedicated to the task. Notes can be recorded, bookmarks can be saved and pages can be saved as clippings. I am not an avid book reader, and yet I found the Kindle made it really fun, convenient and soothing to read. There’s something about the easy access to the material and the simplicity of the display that makes the Kindle seduce you into chilling out and reading page after page. On the other hand, the Kindle has an awkward navigation interface. While menus are displayed clearly on the screen, the method of selecting menu items is tricky: alongside the right-hand edge of the main display is a thin column that acts as a second display. It has a white background, and a scrollable silver cursor that is moved up or down through the use of the device’s scrollwheel. Picking a menu item on the main display involves scrolling the silver cursor to a position parallel to that menu item and pushing the scrollwheel in. This navigation technique creates a disconnect, literally. You don’t really click on a selection so much as you gesture toward it. I got used to this technique quickly, but I didn’t love it. It definitely created a kind of anxiety in me, making me feel the need to speed through menus and get to my destination document quickly. Once there, I could calm down and relax. Books are great on the Kindle. Magazines and newspapers much less so. I found the rendering of photographs, and even illustrations, to be unacceptably crude. For this reason, I expect that reading textbooks on the Kindle may leave students wanting. I found that the original flow and layout of any publication was sacrificed on the Kindle. In effect, browsing a magazine or newspaper was almost impossible. Reading the text of individual articles was enjoyable, but having to read this way made the whole experience much more “a la carte” than cohesive and thematic between articles. I imagine that for academic journals this is ideal, but for consumer publications it imposes a stripped-down, low-fidelity experience that evokes a sense of deprivation. In general, the Kindle is great for reading text. For just about anything else, especially activity that involves exploratory browsing, meandering and short-attention-span reading, it presents a real barrier to entry and adoption. Avid book readers will enjoy the Kindle (if they’re not already). It’s a great device for losing oneself in a book over long sittings. 
Multitaskers who are more interested in periodicals, be they online or off, will like it much less, as they will find compromise, and even sacrifice, to be palpable.   iPad The iPad is a very different device from the Kindle. While the Kindle is oriented to pages of text, the iPad orbits around applications and their interfaces. Be it the pinch and zoom experience in the browser, the rich media features that augment content on news and weather sites, or the ability to interact with social networking services like Twitter, the iPad is versatile. While it shares a slate-like form factor with the Kindle, it’s effectively an elegant personal computer. One of its many features is the iBook application and integration of the iBookstore. But it’s a multi-purpose device. That turns out to be good and bad, depending on what you’re reading. The iBookstore is great for browsing. It’s color, rich animation-laden user interface make it possible to shop for books, rather than merely search and acquire them. Unfortunately, its selection is rather sparse at the moment. If you’re looking for a New York Times bestseller, or other popular titles, you should be OK. If you want to read something more specialized, it’s much harder. Unlike the awkward navigation interface of the Kindle, the iPad offers a nearly flawless touch-screen interface that seduces the user into tinkering and kibitzing every bit as much as the Kindle lulls you into a deep, concentrated read. It’s a dynamic and interactive device, whereas the Kindle is static and passive. The iBook reader is slick and fun. Use the iPad in landscape mode and you can read the book in 2-up (left/right 2-page) display; use it in portrait mode and you can read one page at a time. Rather than clicking a hardware button to turn pages, you simply drag and wipe from right-to-left to flip the single or right-hand page. The page actually travels through an animated path as it would in a physical book. The intuitiveness of the interface is uncanny. The reader also accommodates saving of bookmarks, searching of the text, and the ability to highlight a word and look it up in a dictionary. Pages display brightly and clearly. They’re easy to read. But the backlight and the glare made me less comfortable than I was with the Kindle. The knowledge that completely different applications (including the Web and email and Twitter) were just a few taps away made me antsy and very tempted to task-switch. The knowledge that battery life is an issue created subtle discomfort. If the Kindle makes you feel like you’re in a library reading room, then the iPad makes you feel, at best, like you’re under fluorescent lights at a Barnes and Noble or Borders store. If you’re lucky, you’d be on a couch or at a reading table in the store, but you might also be standing up, in the aisles. Clearly, I didn’t find this conducive to focused and sustained reading. But that may have more to do with my own tendency to read periodicals far more than books, and my neurotic . And, truth be known, the book reading experience, when not explicitly compared to Kindle’s, was still pleasant. It is also important to point out that Kindle Store-sourced books can be read on the iPad through a Kindle reader application, from Amazon, specific to the device. This offered a less rich experience than the iBooks reader, but it was completely adequate. Despite the Kindle brand of the reader, however, it offered little in terms of simulating the reading experience on its namesake device. 
When it comes to periodicals, the iPad wins hands down. Magazines, even if merely scanned images of their print editions, read on the iPad in a way that felt similar to reading hard copy. The full color display, touch navigation and even the ability to render advertisements in their full glory makes the iPad a great way to read through any piece of work that is measured in pages, rather than chapters. There are many ways to get magazines and newspapers onto the iPad, including the Zinio reader, and publication-specific applications like the Wall Street Journal’s and Popular Science’s. The New York Times’ free Editors’ Choice application offers a Times Reader-like interface to a subset of the Gray Lady’s daily content. The completely Web-based but iPad-optimized Times Skimmer site (at www.nytimes.com/timesskimmer) works well too. Even conventional Web sites themselves can be read much like magazines, given the iPad’s ability to zoom in on the text and crop out advertisements on the margins. While the Kindle does have an experimental Web browser, it reminded me a lot of early mobile phone browsers, only in a larger size. For text-heavy sites with simple layout, it works fine. For just about anything else, it becomes more trouble than it’s worth. And given the way magazine articles make me think of things I want to look up online, I think that’s a real liability for the Kindle.   Summing Up What I came to realize is that the Kindle isn’t so much a computer or even an Internet device as it is a printer. While it doesn’t use physical paper, it still renders its content a page at a time, just like a laser printer does, and its output appears strikingly similar. You can read the rendered text, but you can’t interact with it in any way. That’s why the navigation requires a separate cursor display area. And because of the page-oriented rendering behavior, turning pages causes a flash on the display and requires a sometimes long pause before the next page is rendered. The good side of this is that once the page is generated, no battery power is required to display it. That makes for great battery life, optimal viewing under most lighting conditions (as long as there is some light) and low-eyestrain text-centric display of content. The Kindle is highly portable, has an excellent selection in its store and is refreshingly distraction-free. All of this is ideal for reading books. And iPad doesn’t offer any of it. What iPad does offer is versatility, variety, richness and luxury. It’s flush with accoutrements even if it’s low on focused, sustained text display. That makes it inferior to the Kindle for book reading. But that also makes it better than the Kindle for almost everything else. As such, and given that its book reading experience is still decent (even if not superior), I think the iPad will give Kindle a run for its money. True book lovers, and people on a budget, will want the Kindle. People with a robust amount of discretionary income may want both devices. Everyone else who is interested in a slate form factor e-reading device, especially if they also wish to have leisure-friendly Internet access, will likely choose the iPad exclusively. One thing is for sure: iPad has reduced Kindle’s market, and may have shifted its mass market potential to a mere niche play. If Amazon is smart, it will improve its iPad-based Kindle reader app significantly. It can then leverage the iPad channel as a significant market for the Kindle Store. 
After all, selling the eBooks themselves is what Amazon should care most about.

    Read the article

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
    Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year’s conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February ( but due to NDA restrictions I could not share them with the developer community at large ) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference. First, let’s travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources for such a relative edge case, but my guess is that they felt it was an important edge case at the time as some of the more vocal .NET evangelists as well as some very high profile start-ups ( ie. Twitter ) had publicly announced their intent to use Rails. Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it.  This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer. In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft’s Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, “If its not MVC, it’s crap”. Now, this is a rather ignorant position to take as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline. So over the past few years some horror stories have begun to bubble to the surface of software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC. 
These large scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision, that Microsoft was promoting MVC as “the future”. These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in a less stable, less scalable and more complicated systems – basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product which has been under active development for more than 18 months and revert back to WebForms. The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel “dirty”. The fragmentation has also made it difficult to attract newcomers as the perceived barrier to entry for learning ASP.NET has become higher. As a result many new software developers entering the market are gravitating to environments where the development model seems more simple and intuitive ( ie. PHP or Ruby ). At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting Sharepoint as a platform for building internal enterprise web applications. Sharepoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, Sharepoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts – both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another direction, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high level plan from Microsoft to ensure consistency and continuity across the different product lines. With the introduction of ASP.NET MVC, it also made some of the third party control vendors scratch their heads, and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers. 
    And even after MVC was introduced these vendors continued to improve their products, offering greater productivity and a superior AJAX-driven user experience compared to what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, the demand remained strong and the third party web control market continued to prosper despite the availability of MVC.
    While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume, whether it’s a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and JavaScript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust JavaScript libraries like jQuery and Knockout for browser-based, client-side techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC.
    In response to these new industry trends, Microsoft did what it always does – it immediately poured some resources into developing a solution which would ensure it remains relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band with a dependency only on ASP.NET 4.0, it means that it can be used immediately in a variety of production scenarios.
    So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the “Future of ASP.NET”. In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward – perhaps the moniker of “ASP.NET Web Stack”, coined a couple years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on CodePlex a few months back, will eventually stick.
    Web API is being positioned as the unification of ASP.NET – the glue that is able to pull this fragmented mess back together again. The “One ASP.NET” strategy will promote the use of all frameworks - WebForms, MVC, and Web API - even within the same web project.
    Basically the message is to utilize the appropriate aspects of each framework to solve your business problems. Instead of navigating developers to a fork in the road, the plan is to educate them that “hybrid” applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and SharePoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again.
    And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward ( MVC4 may in fact be the last significant iteration of this framework ). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn’t, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API.
    And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense.
    So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a “hybrid” model which would preserve compatibility for existing modules while at the same time providing a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Services Framework which is actually built on top of MVC2 ( we chose to leverage MVC because it had the most intuitive, light-weight REST implementation in the .NET stack ). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed.
    But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke ( which is expected to be released in Q4 2012 ) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the “One ASP.NET” strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.
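    To make the convention-based style of Web API described above a little more concrete, here is a minimal sketch of a Web API controller as it ships with the MVC4 bits. The Order type and OrdersController are purely illustrative names ( not from the article above ), and it assumes the default api/{controller}/{id} route registered by the project templates.

    using System.Collections.Generic;
    using System.Net;
    using System.Web.Http;

    // A hypothetical resource used only for illustration.
    public class Order
    {
        public int Id { get; set; }
        public string Customer { get; set; }
    }

    // Convention-based routing: GET /api/orders maps to Get(),
    // GET /api/orders/5 maps to Get(5), and POST /api/orders maps to Post().
    public class OrdersController : ApiController
    {
        private static readonly List<Order> _orders = new List<Order>();

        public IEnumerable<Order> Get()
        {
            return _orders;
        }

        public Order Get(int id)
        {
            var order = _orders.Find(o => o.Id == id);
            if (order == null)
            {
                // Embraces native HTTP semantics instead of returning a view.
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
            return order;
        }

        public void Post(Order order)
        {
            _orders.Add(order);
        }
    }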

    Read the article

  • Tips for XNA WP7 Developers

    - by Michael B. McLaughlin
    There are several things any XNA developer should know/consider when coming to the Windows Phone 7 platform. This post assumes you are familiar with the XNA Framework and with the changes between XNA 3.1 and XNA 4.0. It’s not exhaustive; it’s simply a list of things I’ve gathered over time. I may come back and add to it over time, and I’m happy to add anything anyone else has experienced or learned as well. (A short combined code sketch follows at the end of this list.)
    Display
    · The screen is either 800x480 or 480x800.
    · But you aren’t required to use only those resolutions.
    · The hardware scaler on the phone will scale up from 240x240.
    · One dimension will be capped at 800 and the other at 480; which is which depends on your code, but you cannot have, e.g., an 800x600 back buffer – that will be created as 800x480.
    · The hardware scaler will not normally change aspect ratio, though, so no unintended stretching.
    · Any dimension (width, height, or both) below 240 will be adjusted to 240 (without any aspect ratio adjustment, such that, e.g., 200x240 will be treated as 240x240).
    · Dimensions below 240 will be honored in terms of calculating whether to use portrait or landscape.
    · If dimensions are exactly equal or if height is greater than width then the game will be in portrait.
    · If width is greater than height, the game will be in landscape.
    · Landscape games will automatically flip if the user turns the phone 180°; no code required.
    · Default landscape is top = left. In other words, a user holding a phone who starts a landscape game will see the first image presented so that the “top” of the screen is along the right edge of his/her phone, such that the natural behavior would be to turn the phone 90° so that the top of the phone will be held in the user’s left hand and the bottom would be held in the user’s right hand.
    · The status bar (where the clock, battery power, etc., are found) is hidden when the Game-derived class sets GraphicsDeviceManager.IsFullScreen = true. It is shown when IsFullScreen = false. The default value is false (i.e. the status bar is shown).
    · You should have a good reason for hiding the status bar. Users find it helpful to know what time it is, how much charge their battery has left, and whether or not their phone is in service range. This is especially true for casual games that you expect someone to play for a few minutes at a time, e.g. while waiting for some event to start, for a phone call to come in, or for a train, bus, or subway to arrive.
    · In portrait mode, the status bar occupies 32 pixels of space. This means that a game with a back buffer of 480x800 will be scaled down to occupy approximately 461x768 screen pixels. Setting the back buffer to 480x768 (or some resolution with the same 0.625 aspect ratio) will avoid this scaling.
    · In landscape mode, the status bar occupies 72 pixels of space. This means that a game with a back buffer of 800x480 will be scaled down to occupy approximately 728x437 screen pixels. Setting the back buffer to 728x480 (or some resolution with the same 1.51666667 aspect ratio) will avoid this scaling.
    Input
    · Touch input is scaled with screen size.
    · So if your back buffer is 600x360, a tap in the bottom right corner will come in as (599,359). You don’t need to do anything special to get this automatic scaling of touch behavior.
    · If you do not use the full area of the screen, any touch input outside the area you use will still register as a touch input. For example, if you set a portrait resolution of 240x240, it would be scaled up to occupy a 480x480 area, centered in the screen. If you touch anywhere above this area, you will get a touch input of (X,0) where X is a number from 0 to 239 (in accordance with your 240 pixel wide back buffer). Any touch below this area will give a touch input of (X,239).
    · If you keep the status bar visible, touches within its area will not be passed to your game.
    · In general, a screen measurement is the diagonal. So a 3.5” screen is 3.5” long from the bottom right corner to the top left corner. With an aspect ratio of 0.6 (480/800 = 0.6), this means that a phone with a 3.5” screen is only approximately 1.8” wide by 3” tall. So there are approximately 267 pixels in an inch on a 3.5” screen.
    · Again, this time in metric! 3.5 inches is approximately 8.89 cm. So an 8.89 cm screen is 8.89 cm long from the bottom right corner to the top left corner. With an aspect ratio of 0.6, this means that a phone with an 8.89 cm screen is only approximately 4.57 cm wide by 7.62 cm tall. So there are approximately 105 pixels in a centimeter on an 8.89 cm screen.
    · Think about the size of your fingertip. If you do not have large hands, think about the size of the fingertip of someone with large hands. Consider that when you are sizing your touch input. Especially consider that when you are spacing two touch targets near one another. You need to judge it for yourself, but items that are next to each other and are each 100x100 should be fine when it comes to selecting items individually. Smaller targets than that are OK provided that you leave space between them.
    · You want your users to have a pleasant experience. Making touch controls too small or too close to one another will make them nervous about whether they will touch the right target. Take this into account when you plan out your game initially. If possible, do some quick size mockups on an actual phone using colored rectangles that you position and size where you plan to have your game controls. Adjust as necessary.
    · People do not have transparent hands! Nor are their hands the size of a mouse pointer icon. Consider leaving a dedicated space for input rather than forcing the user to cover up to one-third of the screen with a finger just to play the game.
    · Another benefit of designing your controls to use a dedicated area is that you’re less likely to have players moving their finger(s) so frantically that they accidentally hit the back button, start button, or search button (many phones have one or more of these on the screen itself – it’s easy to hit one by accident and really annoying if you hit, e.g., the search button and then quickly tap back only to find out that the game didn’t save your progress such that you just wasted all the time you spent playing).
    · People do not like doing somersaults in order to move something forward with accelerometer-based controls. Test your accelerometer-based controls extensively and get a lot of feedback. Very well-known games from noted publishers have created really bad accelerometer controls and been virtually unplayable as a result. Also be wary of exceptions and other possible failures that the documentation warns about.
    · When done properly, the accelerometer can add a nice touch to your game (see, e.g., ilomilo, where the accelerometer was used to move the background; it added a nice touch without frustrating the user; I also think CarniVale does direct accelerometer controls very well). However, if done poorly, it will make your game an abomination unto the Marketplace. Days, weeks, perhaps even months of development time that you will never get back. I won’t name names; you can search the marketplace for games with terrible reviews and you’ll find them.
    Graphics
    · The maximum frame rate is 30 frames per second. This was set as a compromise between battery life and quality.
    · At least one model of phone is known to have a screen refresh rate that is between 59 and 60 hertz. Because of this, using a fixed time step with a target frame rate of 30 will cause a slight internal delay to build up as the framework is forced to wait slightly for the next refresh. Eventually the delay will get to the point where a draw is skipped in order to recover from the delay. (See Nick's comment below for clarification.)
    · To deal with that delay, you can either stay with a fixed time step and set the frame rate slightly lower, or else you can go to a variable time step and make sure to adjust all of your update data (e.g. player movement distance) to take into account the elapsed time from the last update. A variable time step makes your update logic slightly more complicated but will avoid frame skips entirely.
    · Currently there are no custom shaders. This might change in the future (there is no hardware limitation preventing it; it simply wasn’t a feature that could be implemented in the time available before launch).
    · There are five built-in shaders. You can create a lot of nice effects with the built-in shaders.
    · There is more power on the CPU than there is on the GPU, so things you might typically off-load to the GPU will instead make sense to do on the CPU side.
    · This is a phone. It is not a PC. It is not an Xbox 360. The emulator runs on a PC and uses the full power of your PC. It is very good for testing your code for bugs and doing early prototyping and layout. You should not use it to measure performance. Use actual phone hardware instead.
    · There are many phone models, each of which has slightly different performance levels for I/O, screen blitting, CPU performance, etc. Do not take your game right to the performance limit on your phone, since for some other phones you might be crossing their limits and leaving players with a bad experience. Leave a cushion to account for hardware differences.
    · Smaller-screened phones will have slightly more dots per inch (dpi). Larger-screened phones will have slightly less. Either way, the dpi will be much higher than the typical 96 found on most computer screens. Make sure that whoever is doing art for your game takes this into account.
    · Screens are only required to have 16-bit color (65,536 colors). This is common among smart phones. Using gradients on a 16-bit display can produce an ugly artifact known as banding. Banding is when, rather than a smooth transition from one color to another, you instead see distinct lines. Be careful to avoid this when possible. Banding can be avoided through careful art creation. Its effects can be minimized and even unnoticeable when the texture in question is always moving. You should be careful not to rely on “looks good on my phone” since some phones do have 32-bit displays, and thus you’ll find yourself wondering why you’re getting bad reviews that complain about the graphics. Avoid gradients; if you can’t, make sure they are 16-bit safe.
    Audio
    · Never rely on sounds as your sole signal to the player that something is happening in the game. They might have the sound off. They might be playing somewhere loud. Etc.
    · You have to provide controls to disable sound & music. These should be separate.
    · On at least one model of phone, the volume control API currently has no effect. Players can adjust sound with their hardware volume buttons, but in-game selectors simply won’t work. As such, it may not be worth the effort of providing anything beyond on/off switches for sound and music.
    · MediaPlayer.GameHasControl will return true when a game is hooked up to a PC running Zune. When Zune is running, any attempts to do anything (beyond checking GameHasControl) with MediaPlayer will cause an exception to be thrown. If this exception is thrown, catch it and disable music. Exceptions take time to propagate; you don’t want one popping up in every single run of your game’s Update method.
    · Remember that players can already be listening to music or using the FM radio. In this case GameHasControl will be false and you should handle this appropriately. You can, alternately, ask the player for permission to stop their current music and play your music instead, but the (current) requirement that you restore their music when done is very hard (if not impossible) to deal with.
    · You can still play sound effects even when the game doesn’t have control of the music, but don’t think this is a backdoor to playing music. Your game will fail certification if your “sound effect” seems to be more like music in scope and length.
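    The following is a minimal, hypothetical sketch (not from the original post) that pulls a few of the points above together: it picks a portrait back buffer sized to avoid the status-bar scaling, keeps a 30 fps fixed time step, checks MediaPlayer.GameHasControl before deciding to play music, and reads the already-scaled touch input.

    using System;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Input.Touch;
    using Microsoft.Xna.Framework.Media;

    public class SampleGame : Game
    {
        private readonly GraphicsDeviceManager _graphics;
        private bool _musicEnabled;

        public SampleGame()
        {
            _graphics = new GraphicsDeviceManager(this);

            // Portrait back buffer that leaves room for the 32-pixel status bar
            // (480x768 keeps the 0.625 aspect ratio, so the scaler is not engaged).
            _graphics.PreferredBackBufferWidth = 480;
            _graphics.PreferredBackBufferHeight = 768;
            _graphics.IsFullScreen = false;

            // 30 fps fixed step, per the platform maximum.
            IsFixedTimeStep = true;
            TargetElapsedTime = TimeSpan.FromSeconds(1.0 / 30.0);
        }

        protected override void Initialize()
        {
            // Only attempt to play our own music if the player's music or radio
            // is not already running.
            _musicEnabled = MediaPlayer.GameHasControl;
            base.Initialize();
        }

        protected override void Update(GameTime gameTime)
        {
            // Touch positions arrive already scaled to the back buffer dimensions.
            TouchCollection touches = TouchPanel.GetState();
            foreach (TouchLocation touch in touches)
            {
                if (touch.State == TouchLocationState.Pressed)
                {
                    // touch.Position is in back-buffer coordinates; hit-test here.
                }
            }
            base.Update(gameTime);
        }
    }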

    Read the article

  • Best way to integrate StyleCop with TFS CI

    - by Slavo
    I've been doing research on how to enable source analysis for the project I'm working on and plan to use StyleCop. The setup I have is a TFS Server for source control, using TFS Continuous Integration. I want to enable source analysis for CI builds and daily builds run on the build machine, and not only for those run on developers' machines. Here's an article from the documentation of StyleCop that I read on the subject: http://blog.newagesolution.net/2008/07/how-to-use-stylecop-and-msbuild-and.html. It basically modifies the csproj file for the purpose. I've also read other opinions about how StyleCop should be integrated with build automation, which advise doing the same thing using build tasks: http://blog.newagesolution.net/2008/07/how-to-use-stylecop-and-msbuild-and.html http://freetodev.spaces.live.com/blog/cns!EC3C8F2028D842D5!400.entry. What are your opinions? Have you had similar projects and done something like this?

    Read the article

  • How to choose an integer linear programming solver ?

    - by Cassie
    Hi all, I am a newbie to integer linear programming. I plan to use an integer linear programming solver to solve my combinatorial optimization problem. I am more familiar with C++/object-oriented programming on an IDE. Right now I am using NetBeans with Cygwin to write my applications most of the time. May I ask if there is an easy-to-use ILP solver for me? Or does it depend on the problem I want to solve? I am trying to do some resource-mapping optimization. Please let me know if any further information is required. Thank you very much, Cassie.

    Read the article

  • Managing subviews in Cocoa Touch

    - by brainfsck
    Hi, I'm developing an iPhone app. I need to create a Quiz application that has different Question views embedded in it (see my similar question). Different types of Question will have different behavior, so I plan to create a controller class for each type of Question. The MultipleChoiceQuestionController would set up a question and 3-4 buttons for the user to select an answer. Similarly, the IdentifyPictureQuestionController would load an image and present a text box to the user. However, the docs say that a UIViewController should only be used for views that take up the entire application window. How else can I create a class to manage events in my subviews? Thanks,

    Read the article

  • Asp.net ADO.NET Entity Framework or ADO.NET

    - by sharru
    I'm starting a new project based on ASP.NET and Windows Server. The application is planned to be pretty big and serve a large number of clients pulling and updating frequently changing data. I have previously created projects with Linq-To-Sql or with Ado.Net. My plan for this project is to use VS2010 and the new EF4 framework. It would be great to hear other programmers' opinions about development with Entity Framework: pros and cons from previous experience. Do you think EF4 is ready for production? Should I take the risk or just stick with plain old ADO.NET?

    Read the article

  • How to rollback in EF4?

    - by Jaxidian
    In my research about rolling back transactions in EF4, it seems everybody refers to this blog post or offers a similar explanation. In my scenario, I want to do this in a unit testing context where I roll back practically everything I do within my unit test to keep from updating the data in the database (yeah, we'll increment counters but that's okay). In order to do this, is it best to follow the plan below? Am I missing some concept or anything else major with this (aside from the fact that my SetupMyTest and PerformMyTest functions won't really exist that way)?

        using (var ts = new TransactionScope())
        {
            // Arrange
            SetupMyTest(context);

            // Act
            PerformMyTest(context);
            var numberOfChanges = context.SaveChanges(false);
            // if there's an issue, chances are that an exception has been thrown by now.

            // Assert
            Assert.IsTrue(numberOfChanges > 0, "Failed to _____");

            // transaction will rollback because we do not ever call Complete on it
        }

    Read the article

  • Serving east/west coasts with Geoipdns and MaxMind GeoLite data

    - by netvope
    I want to serve east (west) coast visitors with my Virginia (California) server. To do so, I plan to use Geoipdns and the IP-to-location mappings from MaxMind. MaxMind provides two datasets for free: GeoLite Country and GeoLite City. However, neither of them has east/west coast regions defined. A possible solution is to write a script to combine all the IP ranges for the east/west coast cities in GeoLite City, but that sounds a little bit stupid. What is the best practice in doing this? Any suggestions or alternatives?
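    For what it's worth, here is a rough sketch of the "combine the IP ranges" script in C#. It assumes the two-file GeoLite City CSV layout ( GeoLiteCity-Location.csv with locId,country,region,... columns and GeoLiteCity-Blocks.csv with startIpNum,endIpNum,locId ), a hand-picked set of east-coast state codes, and naive comma splitting; converting the emitted numeric ranges into whatever input format Geoipdns expects is left out.

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    class EastCoastRanges
    {
        static void Main()
        {
            var eastCoastStates = new HashSet<string>
                { "ME", "NH", "MA", "RI", "CT", "NY", "NJ", "PA", "DE", "MD", "DC", "VA", "NC", "SC", "GA", "FL" };

            // Collect the location ids that fall in east-coast states.
            // Skip(2) assumes a copyright line plus a header line at the top of the file.
            var eastCoastLocIds = new HashSet<string>();
            foreach (var line in File.ReadLines("GeoLiteCity-Location.csv").Skip(2))
            {
                // Naive split; fine here because we only read the first three fields.
                var fields = line.Split(',');
                if (fields.Length > 2 && fields[1].Trim('"') == "US" && eastCoastStates.Contains(fields[2].Trim('"')))
                {
                    eastCoastLocIds.Add(fields[0]);
                }
            }

            // Emit the numeric IP ranges for those locations.
            using (var output = new StreamWriter("east-coast-ranges.txt"))
            {
                foreach (var line in File.ReadLines("GeoLiteCity-Blocks.csv").Skip(2))
                {
                    var fields = line.Split(',').Select(f => f.Trim('"')).ToArray();
                    if (fields.Length == 3 && eastCoastLocIds.Contains(fields[2]))
                    {
                        output.WriteLine("{0}-{1}", fields[0], fields[1]);
                    }
                }
            }
        }
    }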

    Read the article

  • WCF REST vs. ADO.NET Data Services

    - by ray247
    Hi there, Could someone compare and contrast WCF REST services vs. ADO.NET Data Services? What is the difference, and when should you use which? Thanks, Ray. Edit: thanks to the first answer, just to give a bit of background on what I'm looking to do: I have a web app I plan to put in the cloud (someday), and the DAL is built with ADO.NET Entity Framework. I need to figure out which web service data access technology would best fit my case.

    Read the article

  • Comparison between Tigase, Openfire and any other open-source XMPP servers

    - by John
    I've been looking at these two; both seem to provide fully functional XMPP servers in Java. I know Tigase is designed in a very modular way; I haven't looked at Openfire in as much detail yet. My intended use would be to create a custom IM-based app, using XMPP for convenience rather than to open my server up to talk to other XMPP servers. I'm trying to evaluate my needs based on the following, roughly in order of importance:
    - Documentation coverage & community
    - How easy it is to plug in my own functionality
    - Licensing/cost - I don't plan to release my code
    - Maturity and stability

    Read the article

  • Device driver with generic communication media layer

    - by Tom
    Greetings all, I'm trying to implement a driver for an embedded device with a generic communication media layer. I'm not sure what the best way to do it is, so I'm seeking advice from more experienced stackoverflow users :). Basically we've got devices around the country communicating with our head office. The usual form of communication is over TCP/IP, but it could also be over USB, an RF dongle, IR, etc. The plan is to have an object corresponding to each of these devices, handling the proprietary protocol on one side and requests/responses from other internal systems on the other. The thing is how to create something generic in between the media and the handling objects. I had a play around with a TCP dispatcher using boost.asio but trying to create something generic seems like a nightmare :). Anybody tried to do something like that? What is the best way to do it? Cheers, Tom
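    One possible shape for that generic layer, sketched in C# only because that is the language used elsewhere on this page ( the same structure maps directly onto an abstract base class in C++ ); all of the type names are illustrative, not a definitive design.

    using System;
    using System.Net.Sockets;

    // The only thing the device object knows about: a way to send bytes and a
    // callback for bytes that arrive, regardless of the underlying transport.
    public interface IMediaChannel : IDisposable
    {
        void Send(byte[] data);
        event Action<byte[]> DataReceived;
    }

    // One concrete transport; UsbChannel, RfDongleChannel, IrChannel, etc. would
    // implement the same interface.
    public sealed class TcpChannel : IMediaChannel
    {
        private readonly TcpClient _client;
        public event Action<byte[]> DataReceived;

        public TcpChannel(string host, int port)
        {
            _client = new TcpClient(host, port);
            // A real implementation would start an async read loop here and raise
            // DataReceived as frames arrive; omitted for brevity.
        }

        public void Send(byte[] data)
        {
            _client.GetStream().Write(data, 0, data.Length);
        }

        public void Dispose()
        {
            _client.Close();
        }
    }

    // The device object speaks the proprietary protocol and is handed whatever
    // channel is appropriate for that site.
    public sealed class RemoteDevice
    {
        private readonly IMediaChannel _channel;

        public RemoteDevice(IMediaChannel channel)
        {
            _channel = channel;
            _channel.DataReceived += OnFrame;
        }

        private void OnFrame(byte[] frame)
        {
            // Decode the proprietary protocol and forward to internal systems.
        }

        public void Poll()
        {
            _channel.Send(new byte[] { 0x01 }); // hypothetical "poll" opcode
        }
    }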

    Read the article

  • Brainstorming - MIDI over LAN

    - by Hunter Bridges
    I'm planning out a summer coding project for myself. I work with a lot of MIDI and have been researching it a lot. I know it's an old technology, but it works with a lot of music hardware/software so in my eyes, it's still viable. Anyway, I haven't ever worked with writing drivers or anything, so I don't know where I would start with this. So, provided I have MIDI data already being sent over LAN to a server (I know how to do that part), what steps would it take for the server to channel those received messages to an emulated MIDI device, that could then be accessed in music software and so on? Also, what would it take to also send MIDI data back through the LAN to be received by the device that originally received the message? I'm not really looking for too specific a solution. Really I am just trying to come up with a game plan right now and I need to figure out where/what to research.

    Read the article

  • Moving from WCF RIA Beta to RC: best practices?

    - by Duncan Bayne
    I have an existing WCF RIA project built on the Release Candidate; I'm now moving to the Release version & have discovered many changes. David Scruggs made the following comment on his (MSDN) blog: "If you’ve written anything in Silverlight 4 RIA Services, you’ll need to rewrite it. There has been a lot of refactoring and namespace moves." Having made a brief attempt to compile the old solution with the new RIA framework, I'm inclined to agree. My current plan is to:
    - remove the Silverlight Business Application projects from the Solution
    - rebuild the EF4 items from the database
    - create a new Silverlight Business Application project
    - re-add the files (XAML, CS) from the old Silverlight Business Application project
    Does this sound like a reasonable approach? I think it's cleaner than trying to manually alter the existing project.

    Read the article

  • Push DVCS repository to master without needing codebase

    - by Scorchin
    To work on a client's staging environment I have to connect through a VPN which locks all normal network traffic and prevents any connection to the Internet. This would immediately prevent any of the "normal" VCS solutions from being used as it's not possible to gain access to the server. A solution to this would be to create a DVCS repository (git?) locally and then push changes to the master, as and when needed. There is one flaw in this plan. The entire codebase is around 14GB. To download all of this over the internet would take some time, especially when I'm likely to be working on 3 or 4 different machines in each case. This seems silly and overkill for a DVCS. TL;DR Can any DVCS solution allow you to push to a master server/repo without needing the codebase? Bad example: copy the .git folder (not the 14GB codebase) to another directory and push this to the master once disconnected from the VPN.

    Read the article

  • C Programming - Convert an integer to binary

    - by leo
    Hi guys - I was hoping for some tips as opposed to solutions, as this is homework and I want to solve it myself. I am firstly very new to C. In fact I have never done any before, though I have previous Java experience from modules at university. I am trying to write a program that converts a single integer into binary. I am only allowed to use bitwise operations and no library functions. Can anyone possibly suggest some ideas about how I would go about doing this? Obviously I don't want code or anything, just some ideas as to what avenues to explore, as currently I am a little confused and have no plan of attack. Well, make that a lot confused :D Thanks very much

    Read the article

< Previous Page | 192 193 194 195 196 197 198 199 200 201 202 203  | Next Page >