Search Results

Search found 10533 results on 422 pages for 'task organization'.

Page 324 of 422

  • SQL SERVER – LOGBUFFER – Wait Type – Day 18 of 28

    - by pinaldave
    At first, I was not planning to write about this wait type. The reason was simple: I have faced it only once in my lifetime so far, perhaps because it is not one of the top 5 wait types. I am not sure whether it is a common wait type or not, but in the samples I had, it really looked rare to me. From Books Online: LOGBUFFER occurs when a task is waiting for space in the log buffer to store a log record. Consistently high values may indicate that the log devices cannot keep up with the amount of log being generated by the server. LOGBUFFER Explanation: The Books Online definition of LOGBUFFER seems to be very accurate. On the system where I faced this wait type, the log file (LDF) was put on the local disk, and the data files (MDF, NDF) were put on SAN drives. My client at the time was not familiar with how the files were supposed to be distributed. Once we moved the LDF to a faster drive, this wait type disappeared. Reducing LOGBUFFER wait: There are several suggestions to reduce this wait stat: Move the transaction log to a separate disk from the MDF and other files. (Make sure the drive where your LDF sits has no IO bottleneck issues.) Avoid cursor-like coding and overly frequent commit statements. Find the most-active file based on IO stall time, as shown in the script written over here. You can also use fn_virtualfilestats to find IO-related issues using the script mentioned over here. Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details. Read about them over here. As you may have noticed, my suggestions for reducing LOGBUFFER are very similar to those for WRITELOG. Although the procedures for reducing them are alike, I am not suggesting that LOGBUFFER and WRITELOG are the same wait type. From the definitions of the two, you will find their difference. However, they are both related to the log, and both of them can severely degrade performance. Note: The information presented here is from my experience, and I make no claim that it is accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
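The scripts the article points to are not included in this excerpt; as a stand-in, here is a minimal T-SQL sketch (assuming SQL Server 2005 or later, where these DMVs are available) that shows how prominent the LOGBUFFER and WRITELOG waits are and which database files are accumulating the largest IO stalls:

-- How much wait time have LOGBUFFER and WRITELOG accumulated since the last restart?
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('LOGBUFFER', 'WRITELOG')
ORDER BY wait_time_ms DESC;

-- Which files (data or log) show the largest IO stall times?
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       mf.type_desc,                -- ROWS (MDF/NDF) or LOG (LDF)
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms,
       vfs.io_stall                 -- total stall time in milliseconds
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall DESC;

If the log file of a busy database tops the second result set, that supports the article's advice to move the LDF to a faster, dedicated drive.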

    Read the article

  • Is Python Interpreted or Compiled?

    - by crodjer
    This is just something I wondered about while reading about interpreted and compiled languages. Ruby is no doubt an interpreted language, since source code is compiled by an interpreter at the point of execution. On the contrary, C is a compiled language, as one has to compile the source code for the target machine first and then execute it. This results in much faster execution. Now coming to Python: a Python file (somefile.py), when imported, creates a file (somefile.pyc) in the same directory. Let us say the import is done in a Python shell or Django module. After the import I change the code a bit and execute the imported functions again, only to find that it is still running the old code. This suggests that *.pyc files are compiled Python files, similar to the executable created after compilation of a C file, though I can't execute a *.pyc file directly. When the Python file (somefile.py) is executed directly (./somefile.py or python somefile.py) no .pyc file is created and the code is executed as is, indicating interpreted behavior. This suggests that Python code is compiled every time it is imported in a new process to create a .pyc, while it is interpreted when directly executed. So which type of language should I consider it as? Interpreted or compiled? And how does its efficiency compare to interpreted and compiled languages? According to Wikipedia's Interpreted Languages page it is listed as a language compiled to virtual machine code; what is meant by that? Update: Looking at the answers, it seems that there cannot be a perfect answer to my questions. Languages are not only interpreted or only compiled; there is a spectrum of possibilities between interpreting and compiling. From the answers by aufather, mipadi, Lenny222, ykombinator, the comments, and the wiki I found out that in Python's major implementations code is compiled to bytecode, which is a highly compressed and optimized representation and is machine code for a virtual machine, which is implemented not in hardware but in the bytecode interpreter. Also, languages themselves are not interpreted or compiled; rather, language implementations either interpret or compile code. I also found out about just-in-time (JIT) compilation. As far as execution speed is concerned, the various benchmarks cannot be perfect and depend on context and the task being performed. Please tell me if I am wrong in my interpretations.

    Read the article

  • URL Routing in ASP.NET 4.0

    In the .NET Framework 3.5 SP1, Microsoft introduced ASP.NET Routing, which decouples the URL of a resource from the physical file on the web server. With ASP.NET Routing you, the developer, define routing rules that map route patterns to a class that generates the content. For example, you might indicate that the URL Categories/CategoryName maps to a class that takes the CategoryName and generates HTML that lists that category's products in a grid. With such a mapping, users could view products for the Beverages category by visiting www.yoursite.com/Categories/Beverages. In .NET 3.5 SP1, ASP.NET Routing was primarily designed for ASP.NET MVC applications, although as discussed in Using ASP.NET Routing Without ASP.NET MVC it is possible to implement ASP.NET Routing in a Web Forms application, as well. However, implementing ASP.NET Routing in a Web Forms application involves a bit of seemingly excessive legwork. In a Web Forms scenario we typically want to map a routing pattern to an actual ASP.NET page. To do so we need to create a route handler class that is invoked when the routing URL is requested and, in a sense, dispatches the request to the appropriate ASP.NET page. For instance, mapping a route to a physical file - such as mapping Categories/CategoryName to ShowProductsByCategory.aspx - requires three steps: (1) Define the mapping in Global.asax, which maps a route pattern to a route handler class; (2) Create the route handler class, which is responsible for parsing the URL, storing any route parameters into some location that is accessible to the target page (such as HttpContext.Items), and returning an instance of the target page or HTTP Handler that handles the requested route; and (3) Write code in the target page to grab the route parameters and use them in rendering its content. Given how much effort it took to just read the preceding sentence (let alone write it) you can imagine that implementing ASP.NET Routing in a Web Forms application is not necessarily the most straightforward task. The good news is that ASP.NET 4.0 has greatly simplified ASP.NET Routing for Web Forms applications by adding a number of classes and helper methods that can be used to encapsulate the aforementioned complexity. With ASP.NET 4.0 it's easier to define the routing rules and there's no need to create a custom route handling class. This article details these enhancements. Read on to learn more! Read More >

    Read the article

  • Silverlight Cream for May 05, 2010 -- #856

    - by Dave Campbell
    In this Issue: Jeremy Alles(-2-), Kunal Chowdhury, anand iyer, Yochay Kiriaty(-2-, -3-), Max Paulousky, David Kelley, smartyP, Tim Heuer, and Dan Wahlin. Shoutout: Tim Heuer provides links for all the Ways to give feedback on Silverlight From SilverlightCream.com: [WP7] Bug when using NavigationService in Windows Phone 7 Jeremy Alles has blogged about a bug he found using the Navigation service in WP7. He gives the steps to reproduce and a couple possible workarounds. [WP7] Using the camera in the emulator Jeremy Alles is also digging into the camera functionality in the emulator. He has code demonstrating launching a camera task, and a list of other tasks available. Silverlight Tutorials Chapter 3: Introduction to Panels Kunal Chowdhury has Chapter 3 of his Silverlight 4 Tutorial series up and he's talking about Panels this time out. Push Notifications in Windows Phone 7 developer tools CTP April Refresh anand iyer is discussing the Push Notifications, only from a code perspective. Good information and good additional links to follow. Windows Phone Application Life Cycle Yochay Kiriaty talks with Tudor Toma and Jaime Rodriguez about the WP7 application lifecycle on Channel 9. Understanding Microsoft Push Notifications for Windows Phones Yochay Kiriaty has a 2-part post up on WP7 Push Notifications. The first part is explaining what Push Notifications are and why we need them... as a developer and as an end user viewing Toast or Tile notifications. Understanding How Microsoft Push Notification Works – Part 2 In the 2nd part of his Push Notification series, Yochay Kiriaty discusses how the Push Notification works under the covers. To Remember: Deployment of Silverlight Applications With Wcf Ria Services Max Paulousky has a post up for reference on what to look into when you get "Load Operation Failed" in WCF RIA services. Launching a URL from an OOB Silverlight Application David Kelley has a quick post up on launching URLs from an OOB app. If you haven't tried it, you may be surprised as he was at first. Creating a Windows Phone 7 XNA Game in Landscape Orientation smartyP is looking at recreating a landscape WP7 game in XNA and is detailing some of the issues he's been dealing with, and is also sharing a project file. New Silverlight 4 Themes available–get the raw bits Tim Heuer provided 'raw' versions of 3 new themes. Read his post to see exactly what he means by 'raw' ... they're definitely good looking, and are going to get a lot of play. Handling WCF Service Paths in Silverlight 4 – Relative Path Support Dan Wahlin shares his technique for avoiding the pain involved with ServiceReferences.ClientConfig by using Silverlight 4 relative path support. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Oracle Fusion Middleware Innovation Award Winners 2012: ADF & Fusion Development

    - by Dana Singleterry
    Oracle Fusion Middleware Innovation Awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The awards were presented during Oracle OpenWorld 2012, and the following winners are in the category of ADF & Fusion Development. Micros – an OPN Platinum partner – has been working closely with Oracle product management teams in applying industry best practices in the development of their solutions. Their current application suite for the hospitality industry was built on Oracle Forms and the Oracle database running on MS Windows. The next generation of this suite is being developed and released in modules that are now based on Oracle FMW (including ADF) 11g technologies and Oracle Database 11g, all running on Oracle Linux. The primary driver was modernization, which is why Oracle ADF was selected to provide a rich UI for business processes that could be served up through traditional methods or through mobile devices globally. SOA Suite & ADF allowed for loosely-coupled services that could evolve with the needs of the business. Micros's application innovations include the use of business application portlets that have been published from ADF Faces Task Flows generated using WebCenter portlet libraries & Oracle Metadata Services (MDS), with multi-layered customizations using Oracle WebCenter Composer. PCS (Marfin Egnatia Bank of Greece) – PCS Wealth Management is a WM software solution which captures and automates the WM business processes, allowing service providers to devote more time and effort to customer service and investment strategies, under advisory or execution-only services. The product is built upon the latest web technologies and ensures best practices covering all functional expectations, meeting local regulatory requirements and discovering successful opportunities for the WM customers' portfolios. The new unified Wealth Management system offers an unparalleled user interface, taking full advantage of the user-friendly ADF Faces components, all serving private banking purposes. The application offers a true Account Officer Cockpit with shallow navigation, one-click access to informed decisions, and excellent customer service. ADF Grids and Pivots, the Data Visualization Components, as well as the Calendar and Map Components are cleverly used to help users eliminate the need for Excel, Outlook, and other systems.
PCS's application is unique in the way it leverages the ADF Faces data visualization components to create a truly attractive and insightful dashboard for their application. PCS Wealth Management Demo. Qualcomm – Qualcomm, a $17B per year company, designs and sells semiconductor products for wireless telecommunications, mobile and computing markets. In addition, Qualcomm companies provide various hardware and software products to facilitate the design, development and deployment of phones and the applications that run on them. Qualcomm’s challenge has been to not only develop and deploy new business system functions to keep pace with customer demand, but also to provide a customer collaboration capability that is sufficiently robust, easy to use, and flexible to meet emerging and future needs. Qualcomm has taken successful steps in building and deploying the customer engagement platform, leveraging various Oracle technologies including Fusion Middleware (ADF, SOA, OBIEE) and their proven ERP foundation of EBS and 11g databases. The new platform delivers a more unified and “seamless” business solution with a consistent, modern “look and feel” all based on standard business processes which facilitate efficient collaboration with Qualcomm and its customers. The look and feel leverages ADF in innovative ways and includes hover-over navigation, custom pagination components, and skinning. Qualcomm has exposed a services layer that provides significant functionality including order-to-ship, quote-to-order, customer on-boarding and contract validation. Qualcomm's creative designs leverage Oracle's SOA Suite to integrate Oracle EBS and disparate applications, providing a rich user interface through the use of Oracle ADF Faces Rich Client Components and a self-service solution for their customers.

    Read the article

  • SQL SERVER – Simple Example to Configure Resource Governor – Introduction to Resource Governor

    - by pinaldave
    Let us jump right into question and answer mode. What is Resource Governor? Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumed by limiting/governing/throttling workloads on the SQL Server. Why is Resource Governor required? If there are different workloads running on SQL Server and each of the workloads needs different resources, or when workloads are competing with each other for resources and affecting the performance of the whole server, Resource Governor becomes very important. What would be a real-world example of the need for Resource Governor? Here are two simple scenarios where Resource Governor can be very useful. Scenario 1: A server that runs an OLTP workload and various resource-intensive reports at the same time. The ideal situation is to have two servers which are data-synced with each other, where one server runs the OLTP transactions and the second server runs all the resource-intensive reports. However, not everybody has the luxury of setting up this kind of environment. When reports and OLTP transactions run on the same server, limiting the resources available to the reporting workload ensures that the OLTP workload's critical transactions are not throttled. Scenario 2: There are two DBAs in one organization. DBA A runs critical queries for the business and DBA B does maintenance on the database. At any point in time DBA A's work should not be affected, but at the same time DBA B should be allowed to work as well. The ideal situation is that when DBA B starts working he gets some resources, but he cannot get more than the defined limit. Does SQL Server have any default Resource Governor components? Yes, SQL Server has two Resource Governor components created by default. 1) Internal – this is used exclusively by the database engine and users have no control over it. 2) Default – this is used by all workloads that are not assigned to any other group. What are the major components of Resource Governor? Resource pools, workload groups, and classification. In simple words, here is the Resource Governor process: create a resource pool, create a workload group, create a classification function based on the criteria specified, and enable Resource Governor with the classification function. Let me further explain the same with a graphical image. Is it possible to configure Resource Governor with T-SQL? Yes, here is the code for it, with explanation in between. Step 0: Here we are assuming that there are separate login accounts for the reporting workload and the OLTP workload.
/*-----------------------------------------------
Step 0: (Optional and for Demo Purposes)
Create Two User Logins
1) ReportUser, 2) PrimaryUser
Use the ReportUser login for the reports workload
Use the PrimaryUser login for the OLTP workload
-----------------------------------------------*/
Step 1: Creating Resource Pools. We are creating two resource pools: 1) Report Server and 2) Primary OLTP Server. We are giving only a few resources to the Report Server pool because, as described in Scenario 1, the other workload is mission critical, not the reports.
-----------------------------------------------
-- Step 1: Create Resource Pools
-----------------------------------------------
-- Creating Resource Pool for Report Server
CREATE RESOURCE POOL ReportServerPool
WITH (
    MIN_CPU_PERCENT = 0,
    MAX_CPU_PERCENT = 30,
    MIN_MEMORY_PERCENT = 0,
    MAX_MEMORY_PERCENT = 30)
GO
-- Creating Resource Pool for OLTP Primary Server
CREATE RESOURCE POOL PrimaryServerPool
WITH (
    MIN_CPU_PERCENT = 50,
    MAX_CPU_PERCENT = 100,
    MIN_MEMORY_PERCENT = 50,
    MAX_MEMORY_PERCENT = 100)
GO
Step 2: Creating Workload Groups. We are creating two workload groups, each mapped to one of the resource pools we have just created.
-----------------------------------------------
-- Step 2: Create Workload Groups
-----------------------------------------------
-- Creating Workload Group for Report Server
CREATE WORKLOAD GROUP ReportServerGroup
USING ReportServerPool;
GO
-- Creating Workload Group for OLTP Primary Server
CREATE WORKLOAD GROUP PrimaryServerGroup
USING PrimaryServerPool;
GO
Step 3: Creating a user-defined function which routes each session to the appropriate workload group. In this example we are checking SUSER_NAME() and making the workload group selection based on it. We could also use other functions such as HOST_NAME(), APP_NAME(), IS_MEMBER(), etc.
-----------------------------------------------
-- Step 3: Create UDF to Route Workload Group
-----------------------------------------------
CREATE FUNCTION dbo.UDFClassifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @WorkloadGroup AS SYSNAME
    IF (SUSER_NAME() = 'ReportUser')
        SET @WorkloadGroup = 'ReportServerGroup'
    ELSE IF (SUSER_NAME() = 'PrimaryUser') -- the OLTP login created in Step 0
        SET @WorkloadGroup = 'PrimaryServerGroup'
    ELSE
        SET @WorkloadGroup = 'default'
    RETURN @WorkloadGroup
END
GO
Step 4: In this final step we enable Resource Governor with the classifier function created in Step 3.
-----------------------------------------------
-- Step 4: Enable Resource Governor
-- with UDFClassifier
-----------------------------------------------
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.UDFClassifier);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
Step 5: If you are following this demo and want to clean up your example, you should run the following script. Running it will disable Resource Governor and delete all the objects created so far.
-----------------------------------------------
-- Step 5: Clean Up
-- Run only if you want to clean up everything
-----------------------------------------------
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL)
GO
ALTER RESOURCE GOVERNOR DISABLE
GO
DROP FUNCTION dbo.UDFClassifier
GO
DROP WORKLOAD GROUP ReportServerGroup
GO
DROP WORKLOAD GROUP PrimaryServerGroup
GO
DROP RESOURCE POOL ReportServerPool
GO
DROP RESOURCE POOL PrimaryServerPool
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
I hope this introductory example sheds enough light on the subject of Resource Governor. In future posts we will take this same example and learn a few more details. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Resource Governor
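Not part of the original walkthrough, but a handy follow-up once the classifier is enabled: a minimal T-SQL sketch (assuming SQL Server 2008 or later, where Resource Governor and these DMVs exist) that shows which workload group each current session was routed to and how much CPU each pool has consumed:

-- Which workload group and resource pool did each user session land in?
SELECT s.session_id,
       s.login_name,
       wg.name AS workload_group,
       rp.name AS resource_pool
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_resource_governor_workload_groups AS wg
    ON wg.group_id = s.group_id
JOIN sys.dm_resource_governor_resource_pools AS rp
    ON rp.pool_id = wg.pool_id
WHERE s.is_user_process = 1;

-- Cumulative CPU and memory usage per resource pool since statistics were last reset
SELECT name,
       total_cpu_usage_ms,
       used_memory_kb,
       max_memory_kb
FROM sys.dm_resource_governor_resource_pools;

If a ReportUser session shows up in the default group instead of ReportServerGroup, the classifier function or the RECONFIGURE step is the first place to look.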

    Read the article

  • Anticipating JavaOne 2012 – Number 17!

    - by Janice J. Heiss
    As I write this, JavaOne 2012 (September 30-October 4 in San Francisco, CA) is just over a week away -- the seventeenth JavaOne! I’ll resist the impulse to travel in memory back to the early days of JavaOne. But I will say that JavaOne is a little like your birthday or New Year’s in that it invites reflection, evaluation, and comparison. It’s a time when we take the temperature of Java and assess the world of information technology generally. At JavaOne, insight and information flow amongst Java developers like at no other time of the year. This year, the status of Java seems more secure in the eyes of most Java developers, who agree that Oracle is doing an acceptable job of stewarding the platform, and while the story is still in progress, few doubt that Oracle is engaging strongly with the Java community and wants to see Java thrive. From my perspective, the biggest news about Java is the growth of some 250 alternative languages for the JVM – from Groovy to Jython to JRuby to Scala to Clojure and on and on – offering both new opportunities and challenges. The JVM has proven itself to be unusually flexible, resulting in an embarrassment of riches in which, more and more, developers are challenged to find ways to optimally mix together several different languages on projects. To the matter at hand -- I can say with confidence that Oracle is working hard to make each JavaOne better than the last – more interesting, more stimulating, more networking, and more fun! A great deal of thought and attention is being devoted to the task. To free up time for the 475 technical session/Birds-of-a-Feather/Hands-on-Lab slots, the Java Strategy, Partner, and Technical keynotes will be held on Sunday September 30, beginning at 4:00 p.m. Let’s not forget Java Embedded@JavaOne, which is being held Wednesday, Oct. 3rd and Thursday, Oct. 4th at the Hotel Nikko. It will provide business decision makers, technical leaders, and ecosystem partners important information about Java Embedded technologies and new business opportunities. This year's JavaOne theme is “Make the Future Java”. So come to JavaOne and make your future better by: choosing from 475 sessions given by the experts to improve your working knowledge and coding expertise; networking with fellow developers in both casual and formal settings; enjoying world-class entertainment; and delighting in one of the world’s great cities (my home town). Hope to see you there!

    Read the article

  • Verizon Wireless Supports its Mission-Critical Employee Portal with MySQL

    - by Bertrand Matthelié
    Verizon Wireless, the #1 mobile carrier in the United States, operates the nation’s largest 3G and 4G LTE network, with the most subscribers (109 million) and the highest revenue ($70.2 billion in 2011). Verizon Wireless built the first wide-area wireless broadband network and delivered the first wireless consumer 3G multimedia service in the US, and offers global voice and data services in more than 200 destinations around the world. To support 4.2 million daily wireless transactions and 493,000 call and email transactions produced by 94.2 million retail customers, Verizon Wireless employs over 78,000 employees with area headquarters across the United States. The Business Challenge: Seeing the stupendous rise of social media, video streaming, and live broadcasting, which redefined the scope of technology, Verizon Wireless, as a technology-savvy company, wanted to provide a platform where its employees could network socially, view and host microsites, stream live videos, blog, and get the latest news. The IT team at Verizon Wireless had abundant experience with various technology platforms to support the huge number of applications in the company. However, open-source products weren’t yet widely used in the organization and the team had the ambition to adopt such technologies and see if the architecture could meet Verizon Wireless’ rigid requirements. After evaluating a few solutions, the IT team decided to use the LAMP stack for Vzweb, its mission-critical, 24x7 employee portal, with Drupal as the front end and MySQL on Linux as the backend, and to use MySQL for a few other internal websites as well. The MySQL Solution: Verizon Wireless started to support its employee portal, Vzweb, its online streaming website, Vztube, and internal wiki pages, Vzwiki, with MySQL 5.1 in 2010. Vzweb is the main internal communication channel for Verizon Wireless, while Vztube regularly hosts important company-wide webcasts for executive-level announcements, so both channels have to be live and accessible all the time for its 78,000 employees across the United States. However, during the initial deployment of the MySQL-based intranet, the application experienced performance issues. High connection spikes occurred, causing slow user response times, and the IT team applied workarounds to keep the service running. A number of key performance indicators (KPIs) for the infrastructure were identified and the operational framework redesigned to support a more robust website and conform to the 99.985% uptime SLA (Service-Level Agreement).
The MySQL DBA team made a series of upgrades in MySQL:
Step 1: Moved from the MyISAM to the InnoDB storage engine in 2010
Step 2: Upgraded to the latest MySQL 5.1.54 release in 2010
Step 3: Upgraded from MySQL 5.1 to the latest GA release, MySQL 5.5, in 2011, leveraging the MySQL Thread Pool that is part of MySQL Enterprise Edition to scale better
After making those changes, the team saw much better response times during high-concurrency use cases, and achieved an amazing performance improvement of 1400%! In January 2011, Verizon CEO Ivan Seidenberg announced the iPhone launch during the opening keynote at the Consumer Electronics Show (CES) in Las Vegas, and that presentation was streamed live to its 78,000 employees. The event was broadcast flawlessly with MySQL as the database. Later in 2011, Hurricane Irene struck the East Coast of the United States and caused major loss of life and financial damage. During the hurricane, the team directed more traffic to its west coast data center to avoid potential infrastructure damage on the East Coast. The transition was executed smoothly, and even though the geographical distance became longer for the East Coast users, there was no impact on the performance of Vzweb and Vztube, and the SLA goal was achieved. “MySQL is the key component of Verizon Wireless’ mission-critical employee portal application,” said Shivinder Singh, senior DBA at Verizon Wireless. “We achieved 1400% performance improvement by moving from the MyISAM storage engine to InnoDB, upgrading to the latest GA release MySQL 5.5, and using the MySQL Thread Pool to support high concurrent user connections. MySQL has become part of our IT infrastructure, on which potentially more future applications will be built.” To learn more about MySQL Enterprise Edition, get our Product Guide.
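The storage-engine migration in Step 1 is easy to reproduce on any MySQL 5.1 or later server. As a hedged illustration (the vzweb schema and content_node table names below are hypothetical stand-ins, not Verizon's actual objects), the following SQL finds tables still on MyISAM and converts one of them to InnoDB:

-- List user tables still using the MyISAM storage engine
SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema');

-- Convert one table to InnoDB; this rebuilds the table,
-- so schedule it in a maintenance window for large tables
ALTER TABLE vzweb.content_node ENGINE = InnoDB;

-- Confirm the engine change
SHOW TABLE STATUS FROM vzweb LIKE 'content_node';

The Thread Pool mentioned in Step 3 is a MySQL Enterprise Edition server plugin configured at the server level rather than per table, so it is not shown here.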

    Read the article

  • links for 2010-04-20

    - by Bob Rhubart
    smattoon@: Enterprise Architecture for Drupal | DrupalCon San Francisco 2010 Details on today's (4/20/10) Drupalcon presentation by Scott "@smattoon" Mattoon. (tags: oracle sun enterprisearchitecture drupal) Mona Rakibe: Deploying BAM Data Control Application to WLS server "Typically we would test our ADF pages that use BAM Data control using integrated WLS server (ADRS)," writes Mona Rakibe. "If we have to deploy this same application to a standalone WLS we have to make sure we have the BAM server connection created in WLS. Unless we do that we may face runtime errors." (tags: oracle otn weblogic soa adf) George Maggessy: Deploying and Consuming Task Flows as Shared Libraries on WLS "A Java EE library is an easy way to share one or more different types of Java EE modules among multiple Enterprise Applications," says George Maggessy. "A shared Java EE library can be a simple jar file, an EJB module or even a web application module." His post includes a sample. (tags: oracle otn architect java weblogic) Adam Hawley: Oracle VM and JRockit Virtual Edition: Oracle Introduces Java Virtualization Solution for Oracle(R) WebLogic Suite Adam Hawley offers information on "a WebLogic Suite option that permits the Oracle WebLogic Server 11g to run on a Java JVM (JRockit Virtual Edition) that itself runs directly on the Oracle VM Server for x86 / x64 without needing any operating system." (tags: oracle otn weblogic virtualization architect java jrockit) @fteter: Highlights From The Bright Lights - Sunday #c10 "Sunday, the first day of Collaborate 10, was probably the best conference kickoff I've ever experienced," says Oracle ACE Director Floyd Teter. "And that's mostly because 'Oracle Fusion Architecture: Soup To Nuts' absolutely rocked!" (tags: oracle otn oracleace collaborate2010 fusionmiddleware architecture) @ORACLENERD: COLLABORATE: Day 2 Wrap Up Oracle ACE Chet "oraclenerd" Justice's tale of cell phone chargers, beer, and shrimp eyes. (tags: oracle otn oracleace collaborate2010) Registration is Open: Oracle Technology Network Architect Day: Dallas The 2010 series of Oracle Technology Network Architect Days kicks off in Dallas on Wednesday, May 13. Registration is now open for the Dallas event, and will open soon for the events in Anaheim, CA and Redwood Shores, CA. (tags: oracle otn architect entarch community events)

    Read the article

  • Who is Jeremiah Owyang?

    - by Michael Hylton
    Q: What’s your current role and what career path brought you here? J.O.: I'm currently a partner and one of the founding team members at Altimeter Group. I'm currently the Research Director, as well as wearing the hat of Industry Analyst. Prior to joining Altimeter, I was an Industry Analyst at Forrester covering Social Computing, and before that, deployed and managed the social media program at Hitachi Data Systems in Santa Clara. Around that time, I started a career blog called Web Strategy which focused on how companies were using the web to connect with customers --and never looked back. Q: As an industry analyst, what are you focused on these days? J.O.: There are three trends that I'm focusing my research on at this time: 1) The Dynamic Customer Journey: Individuals (both b2c and b2b) are given so many options in their sources of data, channels to choose from and screens to consume them on that we've found that at each given touchpoint there are 75 potential permutations. Companies that can map this, then deliver information to individuals when they need it, will have a competitive advantage and we want to find out who's doing this. 2) One of the sub-themes that supports this trend is Social Performance. Yesterday's social web was disparate engagement of humans, but the next phase will be data driven, and soon new technologies will emerge to help all those that are consuming, publishing, and engaging on the social web to be more efficient with their time through forms of automation. As you might expect, this comes with upsides and downsides. 3) The Sentient World is our research theme that looks out the furthest, as the world around us (even inanimate objects) becomes 'self-aware' and is able to talk back to us via digital devices and beyond. Big data, the internet of things, and mobile devices will all be part of this next set. Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work? J.O.: The lines between our work and personal lives are dissolving, and this leads to a greater upside of being always connected and having deeper relationships with those who are not. It also means a downside of societal expectations that we're always around and available for colleagues, customers, and beyond. In the future, a balance will be sought as we seek to achieve the goals of family, friends, work, and our own personal desires. All of this is, ironically, being written at 4:30 am on a Sunday. Q: How can people keep up with what you’re working on? J.O.: A great question, thanks. There are a few sources of information to find out; I'll lead with the first, which is my blog at web-strategist.com. A few times a week I'll publish my industry insights (hires, trends, forces, funding, M&A, business needs) as well as on Twitter, where I'll point to all the news that's fit to print, @jowyang. As my research reports go live (we publish them for all to read --called Open Research-- at no cost) they'll emerge on my blog, or check out the research tab to find out more now. http://www.web-strategist.com/blog/research/ Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon? J.O.: Absolutely! This coming Thursday, September 13th, I’m doing a webcast with Oracle on “Managing Social Relationships for the Enterprise”.
This is going to be a great discussion with Reggie Bradford, Senior Vice President of Product Development at Oracle, and Christian Finn, Senior Director of Product Management for Oracle WebCenter. I’m looking forward to a great discussion around all those issues that so many companies are struggling with these days as they realize how much social media is impacting their business. It’s changing the way your customers and employees interact with your brand. Today it’s no longer a matter of when to become a social-enabled enterprise, but how to become a successful one. Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peek of what to expect on Thursday’s webcast? J.O.: Below is a 15-minute video which encapsulates Altimeter’s themes on the Dynamic Customer Journey and the Sentient World. I’m really proud to have taken an active role in the first ever LeWeb outside of Paris. This one, which was held in downtown London across the street from Westminster Abbey, was sold out. If you’ve not heard of LeWeb, it is a global Internet conference hosted by Loic and Geraldine Le Meur, a power couple who come from Paris but also live in Silicon Valley; this is one of my favorite conferences for connecting with brands, technology innovators, investors and friends. Altimeter was able to play a minor role in suggesting the theme for the event, “Faster Than Real Time”, which builds on previous LeWebs that focused on the “real-time web”. In this radical state, companies are able to anticipate the needs of their customers by using data, technology, and devices to deliver meaningful experiences before customers even know they need them. I explore two of Altimeter’s three research themes, the Dynamic Customer Journey and the Sentient World, in my speech, but due to time did not focus on the Adaptive Organization.

    Read the article

  • What have my fellow Delphi programmers done to make Eclipse/Java more like Delphi?

    - by Robert Oschler
    I am a veteran Delphi programmer working on my first real Android app. I am using Eclipse and Java as my development environment. The thing I miss the most of course is Delphi's VCL components and the associated IDE tools for design-time editing and code creation. Fortunately I am finding Eclipse to be one hell of an IDE with it's lush context sensitive help, deep auto-complete and code wizard facilities, and other niceties. This is a huge double treat since it is free. However, here is an example of something in the Eclipse/Java environment that will give a Delphi programmer pause. I will use the simple case of adding an "on-click" code stub for an OK button. DELPHI Drop button on a form Double-click button on form and fill in the code that will fire when the button is clicked ECLIPSE Drop button on layout in the graphical XML file editor Add the View.OnClickListener interface to the containing class's "implements" list if not there already. (Command+1 on Macs, Ctrl + 1 on PCs I believe). Use Eclipse to automatically add the code stub for unimplemented methods needed to support the View.OnClickListener interface, thus creating the event handler function stub. Find the stub and fill it in. However, if you have more than one possible click event source then you will need to inspect the View parameter to see which View element triggered the OnClick() event, thus requiring a case statement to handle multiple click event sources. NOTE: I am relatively new to Eclipse/Java so if there is a much easier way of doing this please let me know. Now that work flow isn't all that terrible, but again, that's just the simplest of use cases. Ratchet up the amount of extra work and thinking for a more complex component (aka widget) and the large number of properties/events it might have. It won't be long before you miss dearly the Delphi intelligent property editor and other designers. Eclipse tries to cover this ground by having an extensive list of properties in the menu that pops up when you right-click over a component/widget in the XML graphical layout editor. That's a huge and welcome assist but it's just not even close to the convenience of the Delphi IDE. Let me be very clear. I absolutely am not ranting nor do I want to start a Delphi vs. Java ideology discussion. Android/Eclipse/Java is what it is and there is a lot that impresses me. What I want to know is what other Delphi programmers that made the switch to the Eclipse/Java IDE have done to make things more Delphi like, and not just to make component/widget event code creation easier but any programming task. For example: Clever tips/tricks Eclipse plugins you found other ideas? Any great blog posts or web resources on the topic are appreciated too. -- roschler

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves - June 23-29, 2013

    - by Bob Rhubart
    2,947 people now follow OTN ArchBeat on Facebook. Here are the Top 10 items shared on that page for June 23-29, 2013. Podcast Show Notes: DevOps, Cloud, and Role Creep After some confusion (my bad) all three CORRECT parts of this podcast are now available. The panelists for this discussion are all Oracle ACE Directors: Ron Batra, Basheer Khan, and Cary Millsap. SOA Suite 11g Developers Cookbook Published | Antony Reynolds "The book focuses on areas that we felt we had neglected in the Developers Guide," says co-author Antony Reynolds. "There is more about Java integration and OSB, both of which we see a lot of questions about when working with customers." Using Oracle TimesTen With Oracle BI Applications (Part 2) | Peter Scott Peter Scott follows up an earlier post with a look at some of the OBIA structures and a discussion of some of the features of TimesTen. Linux-Containers — Part 1: Overview | Lenz Grimmer OTN Garage blogger Lenz Grimmer kicks off a series and expands your mind with deep detail on Linux Containers. Slides from my ODTUG Kscope13 Presentation | Zeeshan Baig Oracle ACE Zeeshan Baig shares the slides from his KScope13 presentation, "Build Your Business Services Using ADF Task Flows." Fun with Enterprise Manager | Rene van Wijk Oracle ACE Rene van Wijk shares some background and some tuning and other tech tips for working with Oracle Enterprise Manager. Using VirtualBox to test drive Windows Blue | The Fat Bloke The Fat Bloke shares a tech tip for those interested in giving Windows Blue a try on VirtualBox. Podcast Show Notes: The Fusion Middleware A-Team and the Chronicles of Architecture In this three-part series Oracle Fusion Middleware A-Team members Jennifer Briscoe, Clifford Musante, Mikael Ottosson, and Pardha Reddy talk about the origins and mission of the FMW A-Team and about the great technical content you'll find on the recently launched Oracle A-Team blog. Part one is now available. 5 Best Practices - Laying the Foundation for WebCenter Projects | John Brunswick Oracle WebCenter expert John Brunswick shares best practices that "enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration." Oracle Magazine - July/Aug 2013 The digital edition of the July/August edition of Oracle Magazine is now available. This issue includes my architect community column, "The CX Factor," which features insight from community members on "why and how CX has become a significant factor in enterprise IT."

    Read the article

  • The Work Order Printing Challenge

    - by celine.beck
    One of the biggest concerns we've heard from maintenance practitioners is the ability to print and batch print work order details along with their accompanying attachments. Indeed, maintenance workers traditionally rely on work order packets to complete their job. A standard work order packet can include a variety of information like equipment documentation, operating instructions, checklists, end-of-task feedback forms and the like. Now, the problem is that most Asset Lifecycle Management applications do not provide a simple and efficient solution for process printing with document attachments. Work order forms can be easily printed but attachments are usually left out of the printing process. This sounds like a minor problem, but when you are processing a high volume of work orders on a regular basis, this inconvenience can result in important inefficiencies. In order to print a work order and its related attachments, maintenance personnel need to print the work order details and then go back to the work order and open each individual attachment using the proper authoring application to view and print each document. The printed output is collated into a work order packet. The AutoVue Document Print Service products that were just released in April 2010 aim at helping organizations address the work order printing challenge. Customers and partners can leverage the AutoVue Document Print Services to build a complete printing solution that complements their existing print server solution with AutoVue's document- and platform-agnostic document print services. The idea is to leverage AutoVue's printing services to invoke printing either programmatically or manually directly from within the work order management application, and efficiently process the printing of complete work order packets, including all types of attachments, from office files to more advanced engineering documents like 2D CAD drawings. Oracle partners like MIPRO Consulting, specialists in PeopleSoft implementations, have already expressed interest in the AutoVue Document Print Service products for their ability to offer print services to the PeopleSoft ALM suite, so that customers are able to print packages of documents for maintenance personnel. For more information on the subject, please consult MIPRO Consulting's article entitled Unsung Value: Primavera and AutoVue Integration into PeopleSoft posted on their blog. The blog post entitled Introducing AutoVue Document Print Service provides additional information on how the solution works. We would also love to hear what your thoughts are on the topic, so please do not hesitate to post your comments/feedback on our blog. Related Articles: Introducing AutoVue Document Print Service Print Any Document Type with AutoVue Document Print Services

    Read the article

  • CCNet TFS Migration - Dealing with left over folders

    - by Michael Stephenson
    I'm currently in the process of migrating our many BizTalk projects from MKS source control to TFS. While we will be using TFS for work item tracking and source control etc., we will be continuing to use Cruise Control for continuous integration, although I'm updating this to CCNet 1.5 at the same time. I'll post a few things, as much as a reminder to myself, about some of the problems we come across. Problem: After the first build of our code, the next time a build is triggered an error is encountered by the TFS source control block refreshing the source code. System.IO.IOException: The directory is not empty.    at System.IO.Directory.DeleteHelper(String fullPath, String userPath, Boolean recursive)    at System.IO.Directory.Delete(String fullPath, String userPath, Boolean recursive)    at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.deleteDirectory(String path)    at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.GetSource(IIntegrationResult result)    at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result)    at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request) Project: Bupa.BPI.Documents Date of build: 2011-01-28 14:54:21 Running time: 00:00:05 Integration Request: Build (ForceBuild) triggered from VMOPBZDEV11. Solution: The problem seems to be with a folder called TestLocations which is created by the build process and used along with the file adapter as a way to get messages into BizTalk. For some reason the source control block, when it does a full refresh of the code, does not get rid of this folder and then complains that's a problem and fails the build. Interestingly there are other folders created by the build which are deleted fine. My assumption is that this is something to do with the file adapter polling the directory. However, note that we have not had this problem with other source control blocks in the past. To work around this I have added a prebuild task to the ccnet.config file to delete this folder before the source control block is executed. See below for an example:
<prebuild>
  <exec>
    <executable>cmd.exe</executable>
    <buildArgs>/c "if exist "C:\<MyCode>\TestLocations" rd /s /q "C:\<MyCode>\TestLocations""</buildArgs>
  </exec>
</prebuild>

    Read the article

  • SQL Saturday 43 in Redmond

    - by AjarnMark
    I attended my first SQLSaturday a couple of days ago, SQLSaturday #43 in Redmond (at Microsoft).  I got there really early, primarily because I forgot how fast I can get there from my home when nobody else is on the road.  On a weekday in rush hour traffic, that would have taken two hours to get there.  I gave myself 90 minutes, and actually got there in about 45.  Crazy! I made the mistake of going to the main Microsoft campus, but that’s not where the event was being held.  Instead it was in a big Microsoft conference center on the other side of the highway.  Fortunately, I had the address with me and quickly realized my mistake.  When I got back on track, I noticed that there were bright yellow signs out on the street corner that looked like they said they were for SOL Saturday, which actually was appropriate since it was the sunniest day around here in a long time. Since I was there so early, the registration was just getting setup, so I found Greg Larsen who was coordinating things and offered to help.  He put me to work with a group of people organizing the pre-printed raffle tickets and stuffing swag bags. I had never been to a SQLSaturday before this one, so I wasn’t exactly sure what to expect even though I have read about a few on some blogs.  It makes sense that each one will be a little bit different since they are almost completely volunteer driven, and the whole concept is still in its early stages.  I have been to the PASS Summit for the last several years, and was hoping for a smaller version of that.  Now, it’s not really fair to compare one free day of training run entirely by volunteers with a multi-day, $1,000+ event put on under the direction of a professional event management company.  But there are some parallels. At this SQLSaturday, there was no opening general session, just coffee and pastries in the common area / expo hallway and straight into the first group of sessions.  I don’t know if that was because there was no single room large enough to hold everyone, or for other reasons.  This worked out okay, but the organization guy in me would have preferred to have even a 15 minute welcome message from the organizers with a little overview of the day.  Even something as simple as, “Thanks to persons X, Y, and Z for helping put this together…Sessions will start in 20 minutes and are all in rooms down this hallway…the bathrooms are on the other side of the conference center…lunch today is pizza and we would like to thank sponsor Q for providing it.”  It doesn’t need to be much, certainly not a full-blown Keynote like at the PASS Summit, but something to use as a rallying point to pull everyone together and get the day off to an official start would be nice.  Again, there may have been logistical reasons why that was not feasible here.  I’m just putting out my thoughts for other SQLSaturday coordinators to consider. The event overall was great.  I believe that there were over 300 in attendance, and everything seemed to run smoothly.  At least from an attendee’s point of view where there was plenty of muffins in the morning and pizza in the afternoon, with plenty of pop to drink.  And hey, if you’ve got the food and drink covered, a lot of other stuff could go wrong and people will be very forgiving.  But as I said, everything appeared to run pretty smoothly, at least until Buck Woody showed up in his Oracle shirt.  Other than that, the volunteers did a great job! I was a little surprised by how few people in my own backyard that I know.  
It makes sense if you really think about it, given how many companies must be using SQL Server around here.  I guess I just got spoiled coming into the PASS Summit with a few contacts that I already knew would be there.  Perhaps I have been spending too much time with too few people at the Summits and I need to step out and meet more folks.  Of course, it also is different since the Summit is the big national event and a number of the folks I know are spread out across the country, so the Summit is the only time we’re all in the same place at the same time.  I did make a few new contacts at SQLSaturday, and bumped into a couple of people that I knew (and a couple others that I only knew from Twitter, and didn’t even realize that they were here in the area). Other than the sheer entertainment value of Buck Woody’s session, the one that was probably the greatest value for me was a quick introduction to PowerShell.  I have not done anything with it yet, but I think it will be a good tool to use to implement my plans for automated database recovery testing.  I saw just enough at the session to take away some of the intimidation factor, and I am getting ready to jump in and see what I can put together in the next few weeks.  And that right there made the investment worthwhile.  So I encourage you, if you have the opportunity to go to a SQLSaturday event near you, go for it!

    Read the article

  • User Experience Fundamentals

    - by ultan o'broin
    Understanding what user experience means in the modern work environment is central to building great-looking usable applications on the desktop or mobile devices. What better place to start a series of blog posts on Applications User Experience team enablement for customers and partners than by sharing what the term really means, writes team member Karen Scipi. Applications UX has gained valuable insights into developing a user experience that reflects the experience of today’s worker. We have observed real workers performing real tasks in real work environments, and we have developed a set of new standards of application design that have been scientifically proven to be beneficial to enable today’s workers. We share such expertise to enable our customers and partners to benefit from our insights and to further their return on investment when building Oracle applications. So, What is User Experience? The user interface (UI) is about the on-screen user context provided by the layout of widgets (such as icons, fields, and buttons and more) and the visual impact of colors, typographic choices, and so on. The UI comprises the “look and feel” of the application that users interact with, and reflects, in essence, the most immediate aspects of usability we can now all relate to.  User experience, on the other hand, is about understanding the whole context of the world of work, how workers go about completing tasks, crossing all sorts of boundaries along the way. It is a study of how business processes and workers’ goals coincide, how users work with technology or other tools to get their jobs done, their interactions with other users, and their response to the technical, physical, and cultural environment around them. User experience is all about how users work—their work environments, office layouts, desk tools, types of devices, their working day, and more. Even their job aids, such as sticky notes, offer insight for UX innovation. User experience matters because businesses need to be efficient, work must be productive, and users now demand to be satisfied by the applications they work with. In simple terms, tasks finished quickly and accurately for a business evoke organizational and worker satisfaction, which in turn makes workers feel good and more than willing to use the application again tomorrow. Design Principles for the Enterprise Worker The consumerization of information technology has raised the bar for enterprise applications. Applications must be consistent, simple, intuitive, but above all contextual, reflecting how and when workers work, in the office or on the go. For example, the Google search experience with its type-ahead keyword-prompting feature is how workers expect to be able to discover enterprise information, too. Type-ahead in PeopleSoft 9.1 To build software that enables workers to be productive, our design principles meet modern work requirements about consistency, with well-organized, context-driven information, geared for a working world of discovery and collaboration. Our applications must also behave in a simple, web-like way just like Amazon, Google, and Apple products that workers use at home or on the go. Our user experience must also reflect workers’ needs for flexibility and well-loved enterprise practices such as using popular desktop tools like Microsoft Excel or Outlook as required. Building User Experience Productively The building blocks of Oracle Fusion Applications are the user experience design patterns. 
Based on the Oracle Fusion Middleware technology used to build Oracle Fusion Applications, the patterns are reusable solutions to common usability challenges that ADF developers typically face as they build applications, extensions, and integrations. Developers use the patterns as part of their Oracle toolkits to realize great usability consistently and in a productive way. Our design pattern creation process is informed by user experience research and science, an understanding of our technology’s capabilities, users’ demands for simplification and intuitiveness, and the best of Oracle’s acquisitions strategy (an injection of smart people and smart innovation). The patterns are supported by usage guidelines, tested in our labs, and assembled into a library of proven resources we used to build our own Oracle Fusion Applications and other Oracle applications user experiences. The design patterns library is now available to the ADF community and to our partners and customers, for free. Developers with ADF skills and other technology skills can now offer more than just coding and functionality, and still use the best in enterprise methodologies to ensure that a great user experience is easily applied, scaled, and maintained, whether for SaaS or on-premise deployments of Oracle Fusion Applications, for applications coexistence, or for partner integration scenarios. Oracle partners and customers already using our design patterns to build solutions and win business in smart and productive ways are now sharing their experiences and insights on pattern use to benefit your entire business. Applications UX is going global with the message and the means. Our hands-on user experience enablement through ADF is expanding. So, stay tuned to Misha Vaughan's Voice of User Experience (VOX) blog and follow along on Twitter at @usableapps for news of outreach events and other learning opportunities.

Interested in Learning More?
Oracle Fusion Applications User Experience Patterns and Guidelines Library
Shout-outs for Oracle UX Design Patterns
Oracle Fusion Applications User Experience Design Patterns: Productivity Realized

    Read the article

  • ROA on top of SOA

    - by Vaibhav Pujari
I already have a stable Service Oriented Architecture for my application which exposes services as API calls (the verbs). Now, I need to build a Resource Oriented Architecture to expose a RESTful API to interact with the application objects (the nouns). What are the best practices to reuse the existing services:

- without any persistence inside my new code;
- without putting unnecessary logic into the REST layer, i.e. it should ideally just leverage the services provided by the SOA API. I want this layer to be as thin as possible;
- without modifying the existing SOA API;
- while allowing easy extension of the REST API, i.e. it should be easy to add more resources without changing the (yet to be written) core code. (I want to make resource names and their associated actions configurable so more contributors can easily add resources without needing to understand my module.)

Any advice/suggestions on how to achieve this?

Edit: Adding more info.
My stack: My existing stack is in Java. But since I plan to just use the services, I don't think that should affect the design of the new REST code. I am planning to implement the new REST code in PHP.
How well do the services map to resources? Some services map well, i.e. there are services for creating and updating application objects. But for other application objects, there are no direct services available. More importantly, there are actions beyond just create, update, etc. that apply to application objects, and I would like to provide some way for these actions to be exposed through REST. Since these are verbs, how do I deal with them?
Where exactly do I need help? I would appreciate any help towards a high-level design to accomplish the task, along with making the framework extensible. For instance, if tomorrow some new services are added to my SOA layer, I want to make sure it is easy for a fresh developer to expose a REST call by simply registering a new resource (in a config file/db) and writing code to connect it with SOA calls, just like a plugin.
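
One way to keep the REST layer as thin and extensible as the question asks for is to treat the resource-to-service mapping as data rather than code. The sketch below is only illustrative: the class and method names are invented, it is written in C# rather than the PHP the poster plans to use, and the in-memory dispatch table stands in for whatever config file or database would hold the registrations.

    using System;
    using System.Collections.Generic;

    // Hypothetical facade over the existing SOA API (the "verbs").
    class SoaApi
    {
        public string GetOrder(string id)      { return "order " + id; }
        public string CreateOrder(string body) { return "created: " + body; }
    }

    // Thin REST layer: resource/verb mappings are data, not code, so new resources
    // can be registered (e.g. from a config file or db) without touching the core.
    class RestDispatcher
    {
        private readonly Dictionary<string, Func<string, string>> routes =
            new Dictionary<string, Func<string, string>>(StringComparer.OrdinalIgnoreCase);

        public void Register(string verb, string resource, Func<string, string> handler)
        {
            routes[verb + " /" + resource] = handler;
        }

        public string Handle(string verb, string resource, string payload)
        {
            Func<string, string> handler;
            if (!routes.TryGetValue(verb + " /" + resource, out handler))
                return "404 - no mapping registered for this resource/verb";
            return handler(payload);   // no persistence, no business logic in this layer
        }
    }

    class Demo
    {
        static void Main()
        {
            var soa = new SoaApi();
            var rest = new RestDispatcher();

            // These registrations could just as easily be read from configuration.
            rest.Register("GET",  "orders", id   => soa.GetOrder(id));
            rest.Register("POST", "orders", body => soa.CreateOrder(body));

            Console.WriteLine(rest.Handle("GET",  "orders", "42"));
            Console.WriteLine(rest.Handle("POST", "orders", "{ ... }"));
        }
    }

Non-CRUD actions (the extra verbs mentioned in the edit) can usually be modelled as sub-resources, e.g. POST /orders/42/approval, and registered in exactly the same way.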

    Read the article

  • Code refactoring with Visual Studio 2010 Part-2

    - by Jalpesh P. Vadgama
In the previous post I wrote about the Extract Method code refactoring option. In this post I am going to cover some other code refactoring features of Visual Studio 2010. Renaming variables and methods is one of the most difficult tasks for a developer. Normally we do it like this: first we rename the method or variable, then we find all the references and fix up the remaining usages. This becomes difficult if your variable or method is referenced in many files and many places. But once you use the Refactor menu's Rename option, it becomes much easier. I am going to use the same code which I created in my previous post. I am putting that code here once again for your reference.

    using System;

    namespace CodeRefractoring
    {
        class Program
        {
            static void Main(string[] args)
            {
                string firstName = "Jalpesh";
                string lastName = "Vadgama";
                Print(firstName, lastName);
            }

            private static void Print(string firstName, string lastName)
            {
                Console.WriteLine(string.Format("FirstName:{0}", firstName));
                Console.WriteLine(string.Format("LastName:{0}", lastName));
                Console.ReadLine();
            }
        }
    }

Now I want to rename the Print method in this code. To rename the method, select the method name and then choose Refactor -> Rename. Once I select the Print method and click Rename, a dialog box appears like the following. Now I am renaming this Print method to PrintMyName, like the following. Once you click OK, a dialog appears with a preview of the code. Once you click Apply, your code is changed like the following.

    using System;

    namespace CodeRefractoring
    {
        class Program
        {
            static void Main(string[] args)
            {
                string firstName = "Jalpesh";
                string lastName = "Vadgama";
                PrintMyName(firstName, lastName);
            }

            private static void PrintMyName(string firstName, string lastName)
            {
                Console.WriteLine(string.Format("FirstName:{0}", firstName));
                Console.WriteLine(string.Format("LastName:{0}", lastName));
                Console.ReadLine();
            }
        }
    }

So that's it. This works across multiple files as well. Hope you liked it. Stay tuned for more. Till then, Happy Programming.

    Read the article

  • RTS Movement + Navigation + Destination

    - by Oliver Jones
I'm looking into building my own simple RTS game, and I'm trying to get my head around the movement of single and multi-selected units (developing in Unity). After much research, I now know that it's a bigger task than I thought, so I need to break it down. I already have an A* navigation system with static obstacles taken into account. I don't want to worry about dynamic local avoidance right now. So I guess my first breakdown question would be: how would I go about moving multiple units to the same location? Right now my units move to the location, but because they're all told to go to the same location, they start to 'fight' over one another to get there. I think there are two paths to go down:

1) Give each individual unit a separate destination point that is close to the 'master' destination point, and get the units to move to that.
2) Group my selected units in a flock formation, and move that entire flock towards the destination point.

Questions about each path:

1a) How can I go about finding a suitable destination point that is close to the master destination? What happens if there isn't a suitable destination point?
1b) Would this be more CPU-heavy, since it has to compute a path for each unit (40-unit count)?
2a) Is this a good idea: not giving the units themselves a destination, but instead the flock (which holds the units within)? The units within the flock could then maintain a formation (local avoidance), though again, local avoidance is not an issue at this time.
2b) I'm not sure what results I would get with a flock of 5 units versus a flock of 40 units, as the radius would be greater, which might mess up my A* navigation system. In other words: a flock of 2 units will be able to move down an alleyway, but a flock of 40 won't, and my nav system won't take that into account.

I would appreciate any feedback. Kind regards, Ollie Jones
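
For what it's worth, option 1 can be prototyped with very little code. The sketch below is Unity-flavoured C#; the isWalkable delegate is an assumed hook into the questioner's existing A* grid, not a real Unity API, and the ring layout is just one of many possible spreads.

    using System;
    using System.Collections.Generic;
    using UnityEngine;

    public static class FormationGoals
    {
        // Spreads unitCount goal points in rings around the master destination.
        public static List<Vector3> Spread(Vector3 master, int unitCount, float spacing,
                                           Func<Vector3, bool> isWalkable)
        {
            var goals = new List<Vector3>(unitCount);
            int ring = 0;
            while (goals.Count < unitCount && ring < 32)   // hard cap so a blocked area can't loop forever
            {
                if (ring == 0)
                {
                    if (isWalkable(master)) goals.Add(master);
                    ring++;
                    continue;
                }

                int slots = 8 * ring;                      // more candidate slots on larger rings
                for (int i = 0; i < slots && goals.Count < unitCount; i++)
                {
                    float angle = (Mathf.PI * 2f / slots) * i;
                    Vector3 candidate = master +
                        new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * (spacing * ring);
                    if (isWalkable(candidate)) goals.Add(candidate);   // skip blocked points (question 1a)
                }
                ring++;
            }
            // If the cap was hit, goals.Count may be less than unitCount;
            // leftover units can simply share the last goal.
            return goals;
        }
    }

Each unit then requests its own A* path to its own goal, so the path count equals the unit count (the concern in 1b); for around 40 units that is usually manageable, especially if path requests are spread over several frames.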

    Read the article

  • Is hiring a "chief intern" a good idea?

    - by dukeofgaming
I'm starting an internship program for our software department, and I was wondering about creating a position ("chief intern", intern supervisor, or whatever one should call it) with the following responsibilities:

- Train interns
- Coach interns
- Manage projects and tasks for interns
- Supervise interns' work in terms of rhythm and quality
- Act as a liaison between the main team's needs and the interns' performance/aspirations
- Evaluate and facilitate interns' progress when they want to grab a higher-level, domain-specific task (at this point, a main dev team member can do mentoring)
- Get freely involved in the main team's software development tasks so that he himself can grow, with full mentorship from the main dev team

I'm thinking that an apprentice-level engineer (below Jr., or Jr., but a graduate working full-time) can handle this for a while (he will be trained by the main dev team first), until one of two things happens:

- He/she decides to move on to the main dev team by recommending an appropriate replacement (or I find another one as a new hire)
- He/she keeps leading the interns while still being able to grow to Jr. Eng., Eng., Sr. Eng.

I know the notion of a "chief intern" is common within the medical world, but I don't really know about it in the software world (I was a freelancer for most of my university years). A side intention is also that, if this organically ends up being a higher-rotation position because the intern supervisor wants to join the main dev team, it could help interns who aspire to this position emerge as leaders. My main intention, though, is removing distractions from the main team without making the interns suffer from a lack of attention, which could lead to boredom and poor intern retention. Is this "chief intern" idea common (or at least good)? Are there any obvious risks to it that I might not be seeing?

Edit: I have a draft plan for the kind of work the interns would be doing: Are R&D mini-projects a good activity for interns?
Edit #2: My intention is not to keep them isolated, but to have someone focus on giving them attention when we cannot.
Edit #3: I'm now convinced it is a good idea, but I will take the organic approach to hiring someone into such a position: do it myself until I cannot. This way I'll know better what to expect from a person I hire for this role in the future, as well as what works and what doesn't with interns.

    Read the article

  • Microsoft MVP Award Nomination

    - by Mark A. Wilson
    I am extremely honored to announce that I have been nominated to receive the Microsoft MVP Award for my contributions in C#! Hold on; I have not won the award yet. But to be nominated is really humbling. Thank you very much! For those of you who may not know, here is a high-level summary of the MVP award: The Microsoft Most Valuable Professional (MVP) Program recognizes and thanks outstanding members of technical communities for their community participation and willingness to help others. The program celebrates the most active community members from around the world who provide invaluable online and offline expertise that enriches the community experience and makes a difference in technical communities featuring Microsoft products. MVPs are credible, technology experts from around the world who inspire others to learn and grow through active technical community participation. While MVPs come from many backgrounds and a wide range of technical communities, they share a passion for technology and a demonstrated willingness to help others. MVPs do this through the books and articles they author, the Web sites they manage, the blogs they maintain, the user groups they participate in, the chats they host or contribute to, the events and training sessions where they present, as well as through the questions they answer in technical newsgroups or message boards. - Microsoft MVP Award Nomination Email I guess I should start my nomination acceptance speech by profusely thanking Microsoft as well as everyone who nominated me. Unfortunately, I’m not completely certain who those people are. While I could guess (in no particular order: Bill J., Brian H., Glen G., and/or Rob Z.), I would much rather update this post accordingly after I know for certain who to properly thank. I certainly don’t want to leave anyone out! Please Help My next task is to provide the MVP Award committee with information and descriptions of my contributions during the past 12 months. For someone who has difficulty remembering what they did just last week, trying to remember something that I did 12 months ago is going to be a real challenge. (Yes, I should do a better job blogging about my activities. I’m just so busy!) Since this is an award about community, I invite and encourage you to participate. Please leave a comment below or send me an email. Help jog my memory by listing anything and everything that you can think of that would apply and/or be important to include in my reply back to the committee. I welcome advice on what to say and how to say it from previous award winners. Again, I greatly appreciate the nomination and welcome any assistance you can provide. Thanks for visiting and till next time, Mark A. Wilson      Mark's Geekswithblogs Blog Enterprise Developers Guild Technorati Tags: Community,Way Off Topic

    Read the article

  • VSDB to SSDT part 4 : Redistributable database deployment package with SqlPackage.exe

    - by Etienne Giust
The goal here is to use the SSDT SqlPackage tool to deploy the output of a Visual Studio 2012 Database project, much in the same fashion as was detailed here: http://geekswithblogs.net/80n/archive/2012/09/12/vsdb-to-ssdt-part-3--command-line-deployment-with-sqlpackage.exe.aspx
The difference is that we want to do it in an environment where Visual Studio 2012 and SSDT are not installed. This might be the case on your production server.

Package structure
To get started, create a folder named "DeploymentSSDTRedistributable". This folder will have the following structure. The dacpac and dll files are the outputs of your Visual Studio 2012 Database project. If your database project references another database project, you need to put its dacpac and dll here too, otherwise deployment will not work. The publish.xml file is the publish configuration suitable for your target environment. It holds connection strings, SQLVARS parameters, and deployment options. Review it carefully. The SqlDacRuntime folder (an arbitrarily chosen name) will hold the SqlPackage executable and supporting libraries.

Contents of the SqlDacRuntime folder
You will be able to find the required files in the following locations, on a machine with Visual Studio 2012 Ultimate installed:
C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin : SqlPackage.exe, Microsoft.Data.Tools.Schema.Sql.dll, Microsoft.Data.Tools.Utilities.dll, Microsoft.SqlServer.Dac.dll
C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.SqlServer.TransactSql.ScriptDom\v4.0_11.0.0.0__89845dcd8080cc91 : Microsoft.SqlServer.TransactSql.ScriptDom.dll

Deploying
Now take your DeploymentSSDTRedistributable deployment package to your remote machine. In a standard command window, place yourself inside the DeploymentSSDTRedistributable folder.
You can first perform a check of what will be updated in the target database. The DeployReport action of SqlPackage.exe will help you do that. The following command will output an XML report of the changes:
"SqlDacRuntime/SqlPackage.exe" /Action:DeployReport /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml /OutputPath:./ChangesToDeploy.xml
You might get some warnings about the Log and Data files like I did; you can ignore them. The tool also warns about data loss when removing a column from a table. By default, the publish.xml options will prevent you from deploying when data loss would occur (see the BlockOnPossibleDataLoss option inside the publish.xml file). Before the actual deployment, take time to carefully review the changes to be applied in the ChangesToDeploy.xml file.
When you are satisfied, you can deploy your changes with the following command:
"SqlDacRuntime/SqlPackage.exe" /Action:Publish /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml
Et voilà! Your dacpac file has been deployed to your database. I've been testing this on a SQL Server 2008 instance (not R2), but it should work on 2005, 2008 R2, and 2012 as well.
Many thanks to Anuj Chaudhary for his article on the subject: http://www.anujchaudhary.com/2012/08/sqlpackageexe-automating-ssdt-deployment.html
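
As a side note, the Microsoft.SqlServer.Dac.dll that ships in the SqlDacRuntime folder also exposes a managed API, so the two SqlPackage.exe calls above could be driven from .NET code instead. The following is only a rough sketch under the assumption that the DacFx classes behave as documented; the connection string, database name, and file name are placeholders, and the options object would need to mirror whatever settings your Release.publish.xml contains.

    using System;
    using Microsoft.SqlServer.Dac;   // the same assembly shipped in the SqlDacRuntime folder

    class DeployDacpac
    {
        static void Main()
        {
            // Connection string, database name, and file name are placeholders.
            var services = new DacServices("Data Source=PRODSQL01;Integrated Security=True");
            var options  = new DacDeployOptions { BlockOnPossibleDataLoss = true };

            using (var package = DacPackage.Load(@".\Our.Database.dacpac"))
            {
                // Rough equivalent of /Action:DeployReport - inspect before touching anything.
                string report = services.GenerateDeployReport(package, "OurDatabase", options);
                Console.WriteLine(report);

                // Rough equivalent of /Action:Publish.
                services.Deploy(package, "OurDatabase", upgradeExisting: true, options: options);
            }
        }
    }

The command-line route shown above remains the simpler option when no .NET tooling is involved.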

    Read the article

  • Demystified - BI in SharePoint 2010

    - by Sahil Malik
Frequently, my clients ask me if there is a good guide to deciphering the seemingly daunting choice of products from Microsoft when it comes to business intelligence offerings in a SharePoint 2010 world. These are all described in detail in my book, but here is a one (well, maybe two) page executive overview.

Microsoft Excel: Yes, Microsoft Excel! Your favorite and the most commonly used 'database' in the world. No, it isn't a database by strict technical definition, but you will find many business users craft up very compelling Excel sheets with tonnes of logic inside them. Good for: quick ad-hoc reports; Excel 64-bit allows the possibility of very large datasheets (also see 32-bit vs 64-bit Office, and the PowerPivot Add-In below). Audience: end business users can build such solutions. Related technologies: PowerPivot, Excel Services.

Microsoft Excel with PowerPivot Add-In: The PowerPivot add-in is an extension to Excel that adds support for large-scale data. Think of this as Excel with the ability to deal with very large amounts of data. It has an in-memory data store as an option for Analysis Services. Good for: ad-hoc reporting and logic with very large amounts of data. Audience: end business users can build such solutions. Related technologies: Excel, and Excel Services.

Excel Services: Excel Services is a Microsoft SharePoint Server 2010 shared service that brings the power of Excel to SharePoint Server by providing server-side calculation and browser-based rendering of Excel workbooks. Thus, Excel sheets can be created by end users and published to SharePoint Server, where they are rendered right through the browser in read-only or parameterized read-only modes. They can also be accessed by other software via SOAP or REST based APIs. Good for: sharing Excel sheets with a larger number of people while maintaining control/version control, and sharing logic embedded in Excel sheets with other software across the organization via REST/SOAP interfaces. Audience: end business users can build such solutions once your tech staff has set up Excel Services on a SharePoint Server instance; programmers can write software consuming functionality and complex formulae contained in your sheets. Related technologies: PerformancePoint Services, Excel, and PowerPivot.

Visio Services: Visio Services is a shared service on the Microsoft SharePoint Server 2010 platform that allows users to share and view Visio diagrams that may or may not have data connected to them. Connected data can update these diagrams, allowing a visual/graphical view into the data. The diagrams are viewable through the browser. They are rendered in Silverlight, but will automatically down-convert to .png format. Good for: showing data as diagrams, with live updating; comes with a developer story. Audience: end business users can build such solutions once your tech staff has set up Visio Services on a SharePoint Server instance; developers can enhance the visualizations. Related technologies: Visio Services can be used to render workflow visualizations in SP2010.

Reporting Services: SQL Server Reporting Services can integrate with SharePoint, allowing you to store reports and data sources in SharePoint document libraries, and render these reports and associated functionality, such as subscriptions, through a SharePoint site. In SharePoint 2010, you can also write reports against SharePoint lists (Access Services uses this technique). Good for: showing complex reports running in an industry-standard data store, such as SQL Server. Audience: this is definitely developer land; don't expect end users to craft up reports unless a report model has previously been published. Related technologies: PerformancePoint Services.

PerformancePoint Services: PerformancePoint Services in SharePoint 2010 is now fully integrated with SharePoint, and comes with features that can either be used in the BI Center site definition, or on their own as activated features in existing site collections. PerformancePoint Services allows you to build reports and dashboards that target a variety of back-end data sources, including SQL Server Reporting Services, SQL Server Analysis Services, SharePoint lists, Excel Services, simple tables, and so on. Using these, you have the ability to create dashboards, scorecards/KPIs, and simple reports. You can also create reports targeting hierarchical multidimensional data sources. The visual decomposition tree is a new report type that lets you quickly break down multi-dimensional data. Good for: mostly everything :), except your wallet – it's not free! But this is the most comprehensive offering; if you have SharePoint Server, forget everything else and go with PerformancePoint. Audience: developers need to set up the back-end sources and the manageability story; DBAs need to set up data warehouses with cubes; moderately sophisticated business users, or developers, can craft up reports using Dashboard Designer, which is a ClickOnce app that deploys with PerformancePoint. Related technologies: Excel Services, Reporting Services, etc.

Other relevant technologies to know about:
Business Connectivity Services: Allows for consumption of external data in SharePoint as columns or external lists. This can be paired with one or more of the above BI offerings, allowing insight into such data.
Access Services: Allows the representation/publishing of an Access database as a SharePoint 2010 site, leveraging many SharePoint features. Reporting Services is used by Access Services.
Secure Store Service: The SP2010 Secure Store Service is a replacement for the SP2007 single sign-on feature. It acts as a credential policeman, providing credentials to various applications running with SharePoint. BCS, PerformancePoint Services, Excel Services, and many other apps use the SSS (Secure Store Service) for credential control.
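
To make the point about Excel Services being reachable "via SOAP or REST based APIs" a little more concrete, here is a minimal sketch of pulling a server-calculated range over the REST interface. Treat it as illustrative only: the server, library, workbook, and range names are invented, and the ExcelRest.aspx URL shape is an assumption based on the product documentation.

    using System;
    using System.Net;

    class ExcelServicesRestDemo
    {
        static void Main()
        {
            // Server, library, workbook, and range names are invented; only the general
            // ExcelRest.aspx URL shape is assumed from the product documentation.
            string url = "http://sp2010/_vti_bin/ExcelRest.aspx"
                       + "/Shared%20Documents/SalesSummary.xlsx"
                       + "/Model/Ranges('Sheet1!A1%7CD10')?$format=html";

            using (var client = new WebClient { UseDefaultCredentials = true })
            {
                // Server-side calculated range, rendered as HTML by Excel Services.
                Console.WriteLine(client.DownloadString(url));
            }
        }
    }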

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

The Long Road To Stubs
This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:
4631488 lib/Makefile is too patient: .WAITs should be reduced
This CR encapsulates a number of chronic issues with Solaris builds:

- We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
- Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
- To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right; it's really easy to get wrong and never know it, because things build anyway; and even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

- To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
- The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
- By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above.

The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

- The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
- If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
- In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?
- It ought to be very fast to build stub objects, as there are no input objects to process.
- Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.
- When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization.

We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

- Present the same set of global symbols, with the same ELF versioning, as the real object.
- Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
- Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
- For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
- If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

We imagined the stub library feature working as follows:

- A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
- The extra information needed (function or data, size, and bss details) would be added to the mapfile.
- When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

# DATA(i386) __iob 0x3c0
# DATA(amd64,sparcv9) __iob 0xa00
# DATA(sparc) __iob 0x140
iob;

A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

- A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
- Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface.

By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
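
For illustration only, here is roughly what the glue-generation half of that prototype did, recast in C# (the real tool was a perl script; the stylized comment format is taken from the example above, and everything else is assumed): a data symbol annotated with a size becomes a C definition of exactly that size, so the compiled stub presents the right linking interface.

    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    // Illustration only: the real prototype was a perl script. This reads the stylized
    // "# DATA(arch) symbol size" comments shown above and emits C glue definitions of
    // exactly the right size, from which a stub shared object could then be compiled.
    class StubGlueGenerator
    {
        static readonly Regex DataComment =
            new Regex(@"^#\s*DATA\(([^)]*)\)\s+(\S+)\s+0x([0-9a-fA-F]+)");

        static void Main(string[] args)
        {
            if (args.Length == 0)
            {
                Console.Error.WriteLine("usage: stubglue <mapfile> [arch]");
                return;
            }
            string targetArch = args.Length > 1 ? args[1] : "i386";

            foreach (string line in File.ReadLines(args[0]))
            {
                Match m = DataComment.Match(line.Trim());
                if (!m.Success) continue;

                string[] arches = m.Groups[1].Value.Split(',');
                if (Array.IndexOf(arches, targetArch) < 0) continue;

                string symbol = m.Groups[2].Value;
                long size = Convert.ToInt64(m.Groups[3].Value, 16);

                // Data must be emitted with the real object's size so copy relocations still work.
                Console.WriteLine("char {0}[{1}];", symbol, size);
            }
        }
    }

The emitted definitions were then compiled and linked into the stub shared object, and the generated glue was thrown away.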
Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

- The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern.
- The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.
- A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
- A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:
6916788 ld version 2 mapfile syntax
PSARC/2009/688 Human readable and extensible ld mapfile syntax
In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:
6916796 OSnet mapfiles should use version 2 link-editor syntax
That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of a human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:
We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

% ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
    -ztext -zdefs -Bdirect ...
real        0.019708910
user        0.010101680
sys         0.008528431

In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

Without stubs, the following gives a simplified high-level view of how Solaris is built:

- An initially empty directory known as the proto, referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
- A top-level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
- Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
- Subsequent passes run lint, and do packaging.

Given this structure, the additions to use stub objects are:

- A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
- A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
- A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule.
- The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
- All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

- Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
- It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
- Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with
6993877 ld should produce stub objects
PSARC/2010/397 ELF Stub Objects
followed by the work to convert the ON consolidation in snv_161 (February 2011) with
7009826 OSnet should use stub objects
4631488 lib/Makefile is too patient: .WAITs should be reduced
This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward
Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris Stevens
One of the best things about programming is the abundance of different languages. There are general purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself:

- Can be used interactively - there is some environment where programmers can enter commands one by one
- Requires no more than one file - neither project files nor makefiles are required for running in batch mode
- Can easily split code across multiple files - files can reference each other, or there is some support for modules
- Has good support for data structures - supports structures like arrays, lists, and especially classes
- Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries

Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.

Feature          C#        Python    shell scripting
---------------  --------  --------  ---------------
Interactive      poor      strong    strong
One file         poor      strong    strong
Multiple files   strong    strong    moderate
Data structures  strong    strong    poor
Features         strong    strong    strong

Is there a term that captures this idea? If not, what term should I use? Here are some candidates:

- Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax
- Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures
- Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features

Note: Some of these properties are more correctly described as belonging to a compiler or IDE than to the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

< Previous Page | 320 321 322 323 324 325 326 327 328 329 330 331  | Next Page >