Search Results

Search found 8523 results on 341 pages for 'sun storage 7000 scriptin'.


  • Create a Remote Git Repository from an Existing XCode Repository

    - by codeWithoutFear
    Introduction

    Distributed version control systems (VCSs), like Git, provide a rich set of features for managing source code.  Many development tools, including XCode, provide built-in support for various VCSs.  These tools provide simple configuration with limited customization to get you up and running quickly while still providing the safety net of basic version control.

    I hate losing (and re-doing) work.  I have OCD when it comes to saving and versioning source code.  Save early, save often, and commit to the VCS often.  I also hate merging code.  Smaller and more frequent commits enable me to minimize merge time and effort as well.

    The workflow I prefer, even for personal exploratory projects, is:

    1. Make small local changes to the codebase to create an incrementally improved (and working) system.
    2. Commit these changes to the local repository.  Local repositories are quick to access, function even while offline, and provide the confidence to continue making bold changes to the system.  After all, I can easily recover to a recent working state.
    3. Repeat 1 & 2 until the codebase contains "significant" functionality and I have connectivity to the remote repository.
    4. Push the accumulated changes to the remote repository.  The smaller the change set, the less likely extensive merging will be required.  Smaller is better, IMHO.

    The remote repository typically has a greater degree of fault tolerance and active management dedicated to it.  This can be as simple as a network share that is backed up nightly or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring.

    XCode's out-of-the-box Git integration enables steps 1 and 2 above.  Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage.

    Creating a Remote Repository

    These are the steps I use to enable the full workflow identified above.  For simplicity the "remote" repository is created on the local file system.  This location could easily be on a mounted network volume.

    Create a Test Project

    My project is called HelloGit and is located at /Users/Don/Dev/HelloGit.  Be sure to commit all outstanding changes.  XCode always leaves a single changed file for me after the project is created and the initial commit is submitted.

    Clone the Local Repository

    We want to clone the XCode-created Git repository to the location where the remote repository will reside.  In this case it will be /Users/Don/Dev/RemoteHelloGit.

    1. Open the Terminal application.
    2. Clone the local repository to the remote repository location: git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit

    Convert the Remote Repository to a Bare Repository

    The remote repository only needs to contain the Git database.  It does not need a checked-out branch or local files.

    1. Go to the remote repository folder: cd /Users/Don/Dev/RemoteHelloGit
    2. Indicate the repository is "bare": git config --bool core.bare true
    3. Remove the files, leaving the .git folder: rm -R *
    4. Remove the "origin" remote: git remote rm origin

    Configure the Local Repository

    The local repository should reference the remote repository.  The remote name "origin" is used by convention to indicate the originating repository.  This is set automatically when a repository is cloned.  We will use the "origin" name here to reflect that relationship.

    1. Go to the local repository folder: cd /Users/Don/Dev/HelloGit
    2. Add the remote: git remote add origin /Users/Don/Dev/RemoteHelloGit

    Test Connectivity

    Any changes made to the local Git repository can be pushed to the remote repository, subject to the merging rules Git enforces.

    1. Create a new local file: date > date.txt
    2. Add the new file to the local index: git add date.txt
    3. Commit the change to the local repository: git commit -m "New file: date.txt"
    4. Push the change to the remote repository: git push origin master

    Now you can save, commit, and push/pull to your OCD heart's content!  Code without fear! --Don
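    For reference, the whole sequence above can be collected into a single terminal session.  This is a minimal sketch using the same example paths from this post; substitute your own project and remote locations.  Note that recent Git versions can also produce the bare copy directly with git clone --bare, but the step-by-step conversion below matches the workflow described here.

        # Clone the XCode-created repository to the "remote" location
        git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit

        # Convert the clone into a bare repository (no working files, just the Git database)
        cd /Users/Don/Dev/RemoteHelloGit
        git config --bool core.bare true
        rm -R *                       # removes the checked-out files; the hidden .git folder stays
        git remote rm origin          # the remote repository needs no origin of its own

        # Point the local repository at the new remote and push a test change
        cd /Users/Don/Dev/HelloGit
        git remote add origin /Users/Don/Dev/RemoteHelloGit
        date > date.txt
        git add date.txt
        git commit -m "New file: date.txt"
        git push origin master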


  • Microsoft Build 2012 Day 1 Keynote Summary

    - by Tim Murphy
    So I have finally dried the tears after watching the keynote for Build 2012.  This wasn't because it was an emotional presentation, but because for the second year I missed the goodies.  Each on-site attendee got a Surface RT, a Lumia 920 and a voucher for 100GB of SkyDrive storage.

    The event was opened with the announcement that in the three days since the launch of Windows 8 over 4 million upgrades had been sold.  I don't care who you are, that is an impressive stat.  Ballmer then spent a fair amount of time remaking the case for the Windows and Windows Phone platforms, similar to what we have heard over the last two launch events.

    There were some cool, but non-essential demos.  The one that was the most fun was the Perceptive Pixel 82” slate device.  At first glance I wondered why I would ever want such a device, but then Ballmer explained its possible use for schools and boardrooms.  That actually made sense.

    Then things got strange.  Steve started explaining features that developers could leverage.  Usually this type of information is left to the product leads.  He focused on the integration with the Charms features such as Search and Share.

    Steve “Guggs” Guggenheim showed off an app that would appeal to my kids from Disney called “Agent P”, which is based on Phineas and Ferb.  Then he got to the meat of the presentation.  We found out that you could add a tile that can be used to sell ad space.  In the same vein we also found out that you could use Microsoft’s, PayPal’s, or any commerce engine of your own creation or choosing.

    For those who are interested in sports, and especially in developing sports apps, there was a small presentation from Michael Bayle of ESPN.  He introduced the ESPN app, which has tons of features.  For the developers in the crowd he also mentioned that ESPN has an API available at developer.espn.com.

    During the launch events we were told apps were coming.  In this presentation we were actually shown a scrolling list of logos and told about a couple of them.  Ballmer specifically mentioned Twitter, SAP and DropBox.  These are impressive names, and they were just a couple from an impressive list.

    Steve Ballmer addressed the question of why you should develop for the Windows 8 platform.  He feels that Microsoft has the best commercial terms for developers, a better way to build apps than other platforms, and a variety of form factors.  His key point, though, was the available volume of customers given the current Windows install base and assuming even flat growth of the platform.  This he backed with a promise that Microsoft is going to do better at marketing, and you won’t be able to avoid the ads that they are bringing out.

    The last section of the keynote was presented by Kevin Gallo from the Windows Phone team.  This was the real reason I tuned into the webcast.  He impressed upon those watching that the strength of developing for the Microsoft platform is the common programming model that now exists.  While there are differences between form factor implementations, you can leverage code across them.

    He claimed that 90% of developer requests for Windows Phone 8 had been implemented.  These include:

    - More controls with better performance
    - Better live tiles, including lock screen integration
    - Speech support in custom apps
    - Easier submission to the marketplace
    - App camera integration
    - VOIP and chat support
    - Bluetooth and NFC support
    - Native C++ development
    - Direct3D development

    The quote from Kevin that stood out for me was that “Take your Dramamine and buckle your seatbelt type of games are coming to Windows Phone 8”.  He backed this up by displaying a list of game development frameworks and then having Unity come out and do a demo.

    Ok, almost done … The last two things of note for me were the announcement that the SDK is immediately available at dev.windowsphone.com and that they were reducing the cost of an individual developer account to $8 for the next 8 days.

    Let the development commence.

    del.icio.us Tags: Build 2012,Windows 8,Windows Phone 8,Windows Phone


  • Security and the Mobile Workforce

    - by tobyehatch
    Now that many organizations are moving to the BYOD philosophy (bring your own devices), security for phones and tablets accessing company sensitive information is of paramount importance. I had the pleasure to interview Brian MacDonald, Principal Product Manager for Oracle Business Intelligence (BI) Mobile Products, about this subject, and he shared some wonderful insight about how the Oracle Mobile Security Tool Kit is addressing mobile security and doing some pretty cool things.  With the rapid proliferation of phones and tablets, there is a perception that mobile devices are a security threat to corporate IT, that mobile operating systems are not secure, and that there are simply too many ways to inadvertently provide access to critical analytic data outside the firewall. Every day, I see employees working on mobile devices at the airport, while waiting for their airplanes, and using public WIFI connections at coffee houses and in restaurants. These methods are not typically secure ways to access confidential company data. I asked Brian to explain why. “The native controls for mobile devices and applications are indeed insufficiently secure for corporate deployments of Business Intelligence and most certainly for businesses where data is extremely critical - such as financial services or defense - although it really applies across the board. The traditional approach for accessing data from outside a firewall is using a VPN connection which is not a viable solution for mobile. The problem is that once you open up a VPN connection on your phone or tablet, you are creating an opening for the whole device, for all the software and installed applications. Often the VPN connection by itself provides insufficient encryption – if any – which means that data can be potentially intercepted.” For this reason, most organizations that deploy Business Intelligence data via mobile devices will only do so with some additional level of control. So, how has the industry responded? What are companies doing to address this very real threat? Brian explained that “Mobile Device Management (MDM) and Mobile Application Management (MAM) software vendors have rapidly created solutions for mobile devices that provide a vast array of services for controlling, managing and establishing enterprise mobile usage policies. On the device front, vendors now support full levels of encryption behind the firewall, encrypted local data storage, credential management such as federated single-sign-on as well as remote wipe, geo-fencing and other risk reducing features (should a device be lost or stolen). More importantly, these software vendors have created methods for providing these capabilities on a per application basis, allowing for complete isolation of the application from the mobile operating system. Finally, there are tools which allow the applications themselves to be distributed through enterprise application stores allowing IT organizations to manage who has access to the apps, when updates to the applications will happen, and revoke access after an employee leaves. So even though an employee may be using a personal device, access to company data can be controlled while on or near the company premises. So do the Oracle BI mobile products integrate with the MDM and MAM vendors? Brian explained that our customers use a wide variety of mobile security vendors and may even have more than one in-house. 
Therefore, Oracle is ensuring that users have a choice and a mechanism for linking together Oracle’s BI offering with their chosen vendor’s secure technology. The Oracle BI Mobile Security Toolkit, which is a version of the Oracle BI Mobile HD application, delivered through the Oracle Technology Network (OTN) in its component parts, helps Oracle users to build their own version of the Mobile HD application, sign it with their own enterprise development certificates, link with their security vendor of choice, then deploy the combined application through whichever means they feel most appropriate, including enterprise application stores.  Brian further explained that Oracle currently supports most of the major mobile security vendors, has close relationships with each, and maintains strong partnerships enabling both Oracle and the vendors to test, update and release a cooperating solution in lock-step. Oracle also ensures that as new versions of the Oracle HD application are made available on the Apple iTunes store, the same version is also immediately made available through the Security Toolkit on OTN.  Rest assured that as our workforce continues down the mobile path, company sensitive information can be secured.  To listen to the entire podcast, click here. To learn more about the Oracle BI Mobile HD, click  here To learn more about the BI Mobile Security Toolkit, click here 


  • Easing the Journey to the Private Cloud with Oracle Consulting

    - by MichaelM-Oracle
    By Sanjai Marimadaiah, Senior Director, Strategy & Business Development – Cloud Solutions, Oracle Consulting Services Business leaders are now leading the charge on how their firms can profit from cloud solutions. Agility and innovation are becoming the primary drivers of the business case for the cloud, even more than the anticipated cost savings. Leaders need to find the right strategy and optimize the use of cloud-based applications across their enterprise-computing infrastructure. The Problem – Current State With prevalent IT practices, many organizations find that they run multiple IT solutions serving similar business needs. This has led to the proliferation of technology stacks, for example: Oracle 10g on Sun T4 running Solaris 9; Oracle 11g on Exadata running Linux; or Oracle 12c on commodity x86 servers. This variance has a huge impact on an organization’s agility and expenses, and requires IT professionals with varied skills as well as on-going training for different systems and tools. Fortunately there is a practical business strategy to overcome this unneeded redundancy. Thus begins a journey to the right cloud computing solution. The Solution – Cloud Services from Oracle Consulting Services (OCS) Oracle Consulting Services (OCS ) works closely with our clients as trusted advisors to proactively respond to business needs and IT concerns. OCS understands that making the transition to cloud solutions begins with a strategic conversation, based on its deep expertise for successfully completing private cloud service engagements with several companies. For a journey to the cloud, Oracle Consulting Services leads the client through four phases– standardization, consolidation, service delivery, and enterprise cloud – to achieve optimal returns. Phase 1 - Standardization Oracle Consulting Services (OCS) works with clients to evaluate their business requirements and propose a set of standard solutions stacks for various IT solutions. This is an opportune time to evaluate cloud ready solutions, such as Oracle 12c, Oracle Exadata, and the Oracle Database Appliance (ODA). The OCS consultants, together with the delivery team, then turn to upgrading and migrating existing solution stacks to standardized offerings. OCS has the expertise and tools to complete this stage in a fraction of the time required by other IT services companies. Clients quickly realize cost savings in tools, processes, and type/number of resources required. This standardization also improves agility of the IT organizations and their abilities to respond to the needs of various business units. Phase 2 - Consolidation During the consolidation phase, OCS consultants programmatically consolidate hundreds of databases into a smaller number of servers to improve utilization, reduce floor space, and optimize maintenance costs. Consolidation helps clients realize huge savings in CapEx investments and shrink OpEx costs. The use of engineered systems, such as Oracle Exadata, greatly reduces the client’s risk of moving to a new solution stack. OCS recommends clients to pursue Phase 1 (Standardization) and Phase 2 (Consolidation) simultaneously to reduce the overall time, effort, and expense of the cloud journey. Phase 3 - Service Delivery Once a client is on a path of standardization and consolidation, OCS consultants create Service Catalogues based on the SLAs requirements and the criticality of the solutions. The number and types of Service Catalogues (Platinum, Gold, Silver, Bronze, etc.) vary from client to client. 
OCS consultants also implement a variety of value-added cloud solutions, including monitoring, metering, and charge-back solutions. At this stage, clients are able to achieve a high level of understanding in their cloud journey. Their IT organizations are operating efficiently and are more agile in responding to the needs of business units. Phase 4 - Enterprise Cloud In the final phase of the cloud journey, the economics of the IT organizations change. Business units can request services on-demand; applications can be deployed and consumed on a pay-as-you-go model. OCS has the expertise and capabilities to establish processes, programs, and solutions required for IT organizations to transform how they interact with business units. The Promise of Cloud Solutions Depending the size and complexity of their business model, some clients are able to abbreviate some phases of their cloud journey. Cloud solutions are still evolving and there is rapid pace of innovation to transform how IT organizations operate. The lesson is clear. Cloud solutions hold a lot of promise for business agility. Business leaders can now leverage an additional set of capabilities and services. They can ramp up their pace of innovation. With cloud maturity, they can compete more effectively in their respective markets. But there are certainly challenges ahead. A skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. Oracle Consulting Services has expertise and a portfolio of services to help clients succeed on their journey to the cloud.


  • Oracle Fusion Middleware gives you Choice and Portability for Public and Private Cloud

    - by Michelle Kimihira
    Author: Margaret Lee, Senior Director, Product Management, Oracle Fusion Middleware

    Cloud Computing allows customers to quickly develop and deploy applications in a shared environment.  The environment can span across hardware (IaaS), foundation layer software (PaaS), and end-user software (SaaS).  Cloud Computing provides compelling benefits in terms of business agility and IT cost savings.  However, with complex, existing heterogeneous architectures, and concerns for security and manageability, enterprises are challenged to define their Cloud strategy.  For most enterprises, the solution is a hybrid of private and public cloud.  Fusion Middleware supports customers’ Cloud requirements through choice and portability.

    Fusion Middleware supports a variety of cloud development and deployment models:

    - Oracle [Public] Cloud
    - customer private cloud
    - a hybrid of these two
    - the traditional dedicated, on-premise model

    Customers can develop applications in any of these models and deploy them in another, providing the flexibility and portability they need.

    Oracle Cloud is a public cloud offering.  Within Oracle Cloud, Fusion Middleware provides two key offerings: the Developer Cloud Service and the Java Cloud Service.

    Developer Cloud Service

    - Simplify Development: automated provisioned environment; pre-configured and integrated; web-based administration
    - Deploy Automatically: fully integrated with Oracle Cloud for Java deployment; workflow ensures build & test
    - Collaborate & Manage: fits any size team; integrated team source repository; continuous integration; task/defect tracking
    - Integrated with all major IDEs: Oracle JDeveloper; NetBeans; Eclipse

    Java Cloud Service

    Java Cloud Service provides a flexible Java deployment environment for departmental applications and for development, staging, QA, training, and demo environments.  It also supports customization deployments for SaaS-based Fusion Applications customers.  Some key features of Java Cloud Service include:

    - WebLogic Server on Exalogic: secure, highly available infrastructure
    - Database Service & IDE integration
    - Open, standards-based
    - Deploy web apps, web services, REST services
    - Fully managed and supported by Oracle

    For more information, please visit Oracle Cloud, Oracle Cloud Java Service and Oracle Cloud Developer Service.

    If your enterprise prefers a private cloud, for reasons such as security, control, manageability, and complex integration that prevent your applications from being deployed on a public cloud, Fusion Middleware also provides you with the products and tools you need.  Sometimes called Private PaaS, private clouds have their predecessors in the shared-services arrangements many large companies have been building in the past decade.  The difference, however, is in the scope of the services and the depth of their capabilities.  In terms of vertical stack depth, private clouds not only provide hardware and software infrastructure to run your applications, they also provide services, such as integration and security, that your applications need.  Horizontally, private clouds provide monitoring, management, lifecycle, and charge-back capabilities out of the box that shared-services platforms did not have before.

    Oracle Fusion Middleware includes the complete stack of hardware and software for you to build private clouds:

    - SOA Suite and BPM Suite to support systems integration and process flow between applications deployed on your private cloud and the rest of your organization
    - Identity and Access Management Suite to provide security, provisioning, and access services for applications deployed on your private cloud
    - WebLogic Server to run your applications
    - Enterprise Manager's Cloud Management pack to monitor, manage, and upgrade applications running on your private cloud
    - Exalogic or optimized Oracle-Sun hardware to build out your private cloud

    The most important key differentiator for Oracle's cloud solutions is portability between private and public clouds.  This is unique to Oracle because portability requires the vendor to have product depth and breadth in both public cloud services and private cloud product offerings.  Most public cloud vendors cannot provide the infrastructure and tools customers need to build their own private clouds.  In reverse, traditional software tools vendors typically do not have the product and expertise breadth to build out and offer a public cloud.  Oracle can.  It is important for customers that the products and technologies Oracle uses to build its public cloud are the same set that it sells to customers for them to build private clouds.  Fundamentally, that enables skills reuse, as well as application portability.

    For more information on Oracle PaaS offerings, please visit Oracle's product information page.

    Resources: Follow us on Twitter and Facebook | Subscribe to our regular Fusion Middleware Newsletter


  • Hyper-V Live Migration across Sites!

    - by Ryan Roussel
    One of the great sessions I sat in on at Tech Ed this week was stretching a Windows 2008 R2 Hyper-V  Failover Cluster across sites.  With this ability, you could actually implement a Hyper-V cluster where you could migrate or even Live Migrate VMs across sites.   With this area’s propensity for Hurricanes, this will be a very popular topic for me over the next few months. While this technology is possible today, it’s also very complicated and can be very expensive to implement.    First your WAN connection has to support the ability to trunk your VLAN across both sites in order to Live Migrate.  This means you can’t use a Layer 3 routed connection like MPLS.  It has to be a Metro Ethernet connection or "Dark Fiber”.  Dark Fiber is unused Fiber already in the ground that can be leased from  various providers. Both of these connections would allow you to trunk layer 2 across your WAN.  Cisco does have the ability to trunk layer 2 across a routed connection by muxing the traffic but this is only available in their Nexus product line which has a very steep price tag.   If you are stuck with MPLS or the like and Nexus switching is not a realistic possibility, you will have to implement a multi-subnet cluster in which case Live Migration won’t be possible.  However you can still failover VMs to the remote site with some planning and manual intervention.  The consideration here is that the VMs will be on a different subnet once migrated, so you will have to change the IP addressing of your VMs.  This also has ramifications with DNS and Name resolution to control your down time.  DHCP with Reservations for your VMs is the preferred method to achieve the IP changes as this will automate that part of the process.   Secondly, you will have to have  a mechanism to replicate your storage across both sites.  Many SAN vendors natively support hardware based synchronous and asynchronous replication.  Some even support cluster shared volumes which were introduced in 2008 R2.   If your SANs do not support this natively, there are alternative file based replication products either software based like Double Take or hardware appliance like EMC.  Be sure to check with your vendor on the support of Disk majority if you’re replicating your quorum disk between SANs.   The last consideration is the ability to maintain quorum for your cluster.  If your replication provider does not support Disk Majority through replication, you will have to explore Node Majority with File Share Witness.  This will affect your design as a 3 node cluster with 1 node at the remote site and FSW at the production site would not have the ability to maintain quorum if the production site was lost. MS best practice for this would be to implement an even node cluster with 2 nodes at  each site and the FSW at a third site.   And there you have it.  While some considerations and research goes into implementing this solution, even a multi-subnet solution would be invaluable to organizations in the implementations of “warm” DR sites.


  • Protecting Consolidated Data on Engineered Systems

    - by Steve Enevold
    In this time of reduced budgets and cost-cutting measures in Federal, State and Local governments, the requirement to provide services continues to grow.  Many agencies are looking at consolidating their infrastructure to reduce cost and meet budget goals.  Oracle's engineered systems are ideal platforms for accomplishing these goals.  These systems provide unparalleled performance that is ideal for running applications and databases that traditionally run on separate dedicated environments.  However, putting multiple critical applications and databases in a single architecture makes security more critical.  You are putting a concentrated set of sensitive data on a single system, making it a more tempting target.  The environments were previously separated by iron, so now you need to provide assurance that one group's, department's, or application's information is not visible to other personnel or applications resident in the Exadata system.  Administration of the environments requires formal separation of duties so an administrator of one application environment cannot view or negatively impact others.  Also, these systems need to be in protected environments just like other critical production servers.  They should be in a data center protected by physical controls, network firewalls, intrusion detection and prevention, etc.

    Exadata also provides unique security benefits, including a reduced attack surface achieved by minimizing packages and services to only those required.  In addition to reducing the possible system areas someone may attempt to infiltrate, Exadata has the following features:

    1. InfiniBand, which functions as a secure private backplane
    2. IPTables to perform stateful packet inspection for all nodes (Cellwall implements firewall services on each cell using IPTables)
    3. Hardware-accelerated encryption for data at rest on storage cells

    Oracle is uniquely positioned to provide the security necessary for implementing Exadata because security has been a core focus since the company's beginning.  In addition to the security capabilities inherent in Exadata, Oracle security products are all certified to run in an Exadata environment.

    Database Vault
    Oracle Database Vault helps organizations increase the security of existing applications and address regulatory mandates that call for separation of duties, least privilege and other preventive controls to ensure data integrity and data privacy.  Oracle Database Vault proactively protects application data stored in the Oracle database from being accessed by privileged database users.  A unique feature of Database Vault is the ability to segregate administrative tasks, including when a command can be executed, so that the DBA can manage the health of the database and its objects but may not see the data.

    Advanced Security
    Oracle Advanced Security helps organizations comply with privacy and regulatory mandates by transparently encrypting all application data or specific sensitive columns, such as credit cards, social security numbers, or personally identifiable information (PII).  By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides the most cost-effective solution for comprehensive data protection.

    Label Security
    Oracle Label Security is a powerful and easy-to-use tool for classifying data and mediating access to data based on its classification.  Designed to meet public-sector requirements for multi-level security and mandatory access control, Oracle Label Security provides a flexible framework that both government and commercial entities worldwide can use to manage access to data on a "need to know" basis in order to protect data privacy and achieve regulatory compliance.

    Data Masking
    Data Masking reduces the threat of someone in the development organization taking data that has been copied from production to the development environment for testing, upgrades, etc., by irreversibly replacing the original sensitive data with fictitious data so that production data can be shared safely with IT developers or offshore business partners.

    Audit Vault and Database Firewall
    Oracle Audit Vault and Database Firewall serves as a critical detective and preventive control across multiple operating systems and database platforms to protect against the abuse of legitimate access to databases, which is responsible for almost all data breaches and cyber attacks.

    Consolidation, cost savings, and performance can now be achieved without sacrificing security.  The combination of built-in protection and Oracle's industry-leading data protection solutions makes Exadata an ideal platform for Federal, State, and local governments and agencies.


  • Incomplete upgrade 12.04 to 12.10

    - by David
    Everything was running smoothly. Everything had been downloaded from Internet, packages had been installed and a prompt asked for some obsolete programs/files to be removed or kept. After that the computer crashed and and to manually force a shutdown. I turned it on again and surprise I was on 12.10! Still the upgrade was not finished! How can I properly finish that upgrade? Here's the output I got in the command line after following posted instructions: i astrill - Astrill VPN client software i dayjournal - Simple, minimal, digital journal. i gambas2-gb-form - A gambas native form component i gambas2-gb-gtk - The Gambas gtk component i gambas2-gb-gtk-ext - The Gambas extended gtk GUI component i gambas2-gb-gui - The graphical toolkit selector component i gambas2-gb-qt - The Gambas Qt GUI component i gambas2-gb-settings - Gambas utilities class i A gambas2-runtime - The Gambas runtime i google-chrome-stable - The web browser from Google i google-talkplugin - Google Talk Plugin i indicator-keylock - Indicator for Lock Keys i indicator-ubuntuone - Indicator for Ubuntu One synchronization s i A language-pack-kde-zh-hans - KDE translation updates for language Simpl i language-pack-kde-zh-hans-base - KDE translations for language Simplified C i libapt-inst1.4 - deb package format runtime library idA libattica0.3 - a Qt library that implements the Open Coll idA libbabl-0.0-0 - Dynamic, any to any, pixel format conversi idA libboost-filesystem1.46.1 - filesystem operations (portable paths, ite idA libboost-program-options1.46.1 - program options library for C++ idA libboost-python1.46.1 - Boost.Python Library idA libboost-regex1.46.1 - regular expression library for C++ i libboost-serialization1.46.1 - serialization library for C++ idA libboost-signals1.46.1 - managed signals and slots library for C++ idA libboost-system1.46.1 - Operating system (e.g. 
diagnostics support idA libboost-thread1.46.1 - portable C++ multi-threading i libcamel-1.2-29 - Evolution MIME message handling library i libcmis-0.2-0 - CMIS protocol client library i libcupsdriver1 - Common UNIX Printing System(tm) - Driver l i libdconf0 - simple configuration storage system - runt i libdvdcss2 - Simple foundation for reading DVDs - runti i libebackend-1.2-1 - Utility library for evolution data servers i libecal-1.2-10 - Client library for evolution calendars i libedata-cal-1.2-13 - Backend library for evolution calendars i libedataserver-1.2-15 - Utility library for evolution data servers i libexiv2-11 - EXIF/IPTC metadata manipulation library i libgdu-gtk0 - GTK+ standard dialog library for libgdu i libgdu0 - GObject based Disk Utility Library idA libgegl-0.0-0 - Generic Graphics Library idA libglew1.5 - The OpenGL Extension Wrangler - runtime en i libglew1.6 - OpenGL Extension Wrangler - runtime enviro i libglewmx1.6 - OpenGL Extension Wrangler - runtime enviro i libgnome-bluetooth8 - GNOME Bluetooth tools - support library i libgnomekbd7 - GNOME library to manage keyboard configura idA libgsoap1 - Runtime libraries for gSOAP i libgweather-3-0 - GWeather shared library i libimobiledevice2 - Library for communicating with the iPhone i libkdcraw20 - RAW picture decoding library i libkexiv2-10 - Qt like interface for the libexiv2 library i libkipi8 - library for apps that want to use kipi-plu i libkpathsea5 - TeX Live: path search library for TeX (run i libmagickcore4 - low-level image manipulation library i libmagickwand4 - image manipulation library i libmarblewidget13 - Marble globe widget library idA libmusicbrainz4-3 - Library to access the MusicBrainz.org data i libnepomukdatamanagement4 - Basic Nepomuk data manipulation interface i libnux-2.0-0 - Visual rendering toolkit for real-time app i libnux-2.0-common - Visual rendering toolkit for real-time app i libpoppler19 - PDF rendering library i libqt3-mt - Qt GUI Library (Threaded runtime version), i librhythmbox-core5 - support library for the rhythmbox music pl i libusbmuxd1 - USB multiplexor daemon for iPhone and iPod i libutouch-evemu1 - KernelInput Event Device Emulation Library i libutouch-frame1 - Touch Frame Library i libutouch-geis1 - Gesture engine interface support i libutouch-grail1 - Gesture Recognition And Instantiation Libr idA libx264-120 - x264 video coding library i libyajl1 - Yet Another JSON Library i linux-headers-3.2.0-29 - Header files related to Linux kernel versi i linux-headers-3.2.0-29-generic - Linux kernel headers for version 3.2.0 on i linux-image-3.2.0-29-generic - Linux kernel image for version 3.2.0 on 64 i mplayerthumbs - video thumbnail generator using mplayer i myunity - Unity configurator i A openoffice.org-calc - office productivity suite -- spreadsheet i A openoffice.org-writer - office productivity suite -- word processo i python-brlapi - Python bindings for BrlAPI i python-louis - Python bindings for liblouis i rts-bpp-dkms - rts-bpp driver in DKMS format. i system76-driver - Universal driver for System76 computers. 
i systemconfigurator - Unified Configuration API for Linux Instal i systemimager-client - Utilities for creating an image and upgrad i systemimager-common - Utilities and libraries common to both the i systemimager-initrd-template-am - SystemImager initrd template for amd64 cli i touchpad-indicator - An indicator for the touchpad i ubuntu-tweak - Ubuntu Tweak i A unity-lens-utilities - Unity Utilities lens i A unity-scope-calculator - Calculator engine i unity-scope-cities - Cities engine i unity-scope-rottentomatoes - Unity Scope Rottentomatoes
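    For anyone in the same situation, a few standard package-management commands are usually enough to let an interrupted release upgrade finish configuring itself.  The following is only a minimal sketch of the commonly recommended sequence (it is not specific to this machine); run it from a terminal and check the output of each step before moving on:

        sudo dpkg --configure -a      # finish configuring any half-installed packages
        sudo apt-get -f install       # repair broken or missing dependencies
        sudo apt-get update
        sudo apt-get dist-upgrade     # pull in anything the interrupted upgrade missed
        sudo apt-get autoremove       # optionally remove the obsolete packages listed above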


  • Can not print after upgrading from 12.x to 14.04

    - by user318889
    After upgrading from V12.04 to V14.04 I am not able to print. I am using an HP LaserJet 400 M451dn. The printer troubleshooter told me that there is no solution to the problem. This is the output of the advanced diagnositc output. (Due to limited space I cut the output!) Can anybody tell me what is going wrong. I am using the printer via USB ? Page 1 (Scheduler not running?): {'cups_connection_failure': False} Page 2 (Is local server publishing?): {'local_server_exporting_printers': False} Page 3 (Choose printer): {'cups_dest': , 'cups_instance': None, 'cups_queue': u'HP-LaserJet-400-color-M451dn', 'cups_queue_listed': True} Page 4 (Check printer sanity): {'cups_device_uri_scheme': u'hp', 'cups_printer_dict': {'device-uri': u'hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670', 'printer-info': u'Hewlett-Packard HP LaserJet 400 color M451dn', 'printer-is-shared': True, 'printer-location': u'Pinatubo', 'printer-make-and-model': u'HP LJ 300-400 color M351-M451 Postscript (recommended)', 'printer-state': 4, 'printer-state-message': u'', 'printer-state-reasons': [u'none'], 'printer-type': 8556636, 'printer-uri-supported': u'ipp://localhost:631/printers/HP-LaserJet-400-color-M451dn'}, 'cups_printer_remote': False, 'hplip_output': (['', '\x1b[01mHP Linux Imaging and Printing System (ver. 3.14.6)\x1b[0m', '\x1b[01mDevice Information Utility ver. 5.2\x1b[0m', '', 'Copyright (c) 2001-13 Hewlett-Packard Development Company, LP', 'This software comes with ABSOLUTELY NO WARRANTY.', 'This is free software, and you are welcome to distribute it', 'under certain conditions. See COPYING file for more details.', '', '', '\x1b[01mhp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670\x1b[0m', '', '\x1b[01mDevice Parameters (dynamic data):\x1b[0m', '\x1b[01m Parameter Value(s) \x1b[0m', ' ---------------------------- ----------------------------------------------------------', ' back-end hp ', " cups-printers ['HP-LaserJet-400-color-M451dn'] ", ' cups-uri hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670 ', ' dev-file ', ' device-state -1 ', ' device-uri hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670 ', ' deviceid ', ' error-state 101 ', ' host ', ' is-hp True ', ' panel 0 ', ' panel-line1 ', ' panel-line2 ', ' port 1 ', ' serial CNFF308670 ', ' status-code 5002 ', ' status-desc ', '\x1b[01m', 'Model Parameters (static data):\x1b[0m', '\x1b[01m Parameter Value(s) \x1b[0m', ' ---------------------------- ----------------------------------------------------------', ' align-type 0 ', ' clean-type 0 ', ' color-cal-type 0 ', ' copy-type 0 ', ' embedded-server-type 0 ', ' fax-type 0 ', ' fw-download False ', ' icon hp_color_laserjet_cp2025.png ', ' io-mfp-mode 1 ', ' io-mode 1 ', ' io-support 6 ', ' job-storage 0 ', ' linefeed-cal-type 0 ', ' model HP_LaserJet_400_color_M451dn ', ' model-ui HP LaserJet 400 Color m451dn ', ' model1 HP LaserJet 400 Color M451dn ', ' monitor-type 0 ', ' panel-check-type 0 ', ' pcard-type 0 ', ' plugin 0 ', ' plugin-reason 0 ', ' power-settings 0 ', ' ppd-name lj_300_400_color_m351_m451 ', ' pq-diag-type 0 ', ' r-type 0 ', ' r0-agent1-kind 4 ', ' r0-agent1-sku CE410A/CE410X ', ' r0-agent1-type 1 ', ' r0-agent2-kind 4 ', ' r0-agent2-sku CE411A ', ' r0-agent2-type 4 ', ' r0-agent3-kind 4 ', ' r0-agent3-sku CE413A ', ' r0-agent3-type 5 ', ' r0-agent4-kind 4 ', ' r0-agent4-sku CE412A ', ' r0-agent4-type 6 ', ' scan-src 0 ', ' scan-type 0 ', ' status-battery-check 0 ', ' status-dynamic-counters 0 ', ' status-type 3 ', ' support-released True ', ' support-subtype 2202411 
', ' support-type 2 ', ' support-ver 3.12.2 ', " tech-class ['Postscript'] ", " tech-subclass ['Normal'] ", ' tech-type 4 ', ' usb-pid 3882 ', ' usb-vid 1008 ', ' wifi-config 0 ', '\x1b[01m', 'Status History (most recent first):\x1b[0m', '\x1b[01m Date/Time Code Status Description User Job ID \x1b[0m', ' -------------------- ----- ---------------------------------------- -------- --------', ' 08/21/14 00:07:25 5012 Device communication error richard 0 ', ' 08/20/14 13:42:44 500 Started a print job richard 4214 ', '', '', 'Done.', ''], ['\x1b[35;01mwarning: No display found.\x1b[0m', '\x1b[31;01merror: hp-info -u/--gui requires Qt4 GUI support. Entering interactive mode.\x1b[0m', '\x1b[31;01merror: Unable to communicate with device (code=12): hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670\x1b[0m', '\x1b[31;01merror: Error opening device (Device not found).\x1b[0m', ''], 0), 'is_cups_class': False, 'local_cups_queue_attributes': {'charset-configured': u'utf-8', 'charset-supported': [u'us-ascii', u'utf-8'], 'color-supported': True, 'compression-supported': [u'none', u'gzip'], 'copies-default': 1, 'copies-supported': (1, 9999), 'cups-version': u'1.7.2', 'device-uri': u'hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670', 'document-format-default': u'application/octet-stream', 'document-format-supported': [u'application/octet-stream', u'application/pdf', u'application/postscript', u'application/vnd.adobe-reader-postscript', u'application/vnd.cups-command', u'application/vnd.cups-pdf', u'application/vnd.cups-pdf-banner', u'application/vnd.cups-postscript', u'application/vnd.cups-raw', u'application/vnd.samsung-ps', u'application/x-cshell', u'application/x-csource', u'application/x-perl', u'application/x-shell', u'image/gif', u'image/jpeg', u'image/png', u'image/tiff', u'image/urf', u'image/x-bitmap', u'image/x-photocd', u'image/x-portable-anymap', u'image/x-portable-bitmap', u'image/x-portable-graymap', u'image/x-portable-pixmap', u'image/x-sgi-rgb', u'image/x-sun-raster', u'image/x-xbitmap', u'image/x-xpixmap', u'image/x-xwindowdump', u'text/css', u'text/html', u'text/plain'], 'finishings-default': 3, 'finishings-supported': [3], 'generated-natural-language-supported': [u'en-us'], 'ipp-versions-supported': [u'1.0', u'1.1', u'2.0', u'2.1'], 'ippget-event-life': 15, 'job-creation-attributes-supported': [u'copies', u'finishings', u'ipp-attribute-fidelity', u'job-hold-until', u'job-name', u'job-priority', u'job-sheets', u'media', u'media-col', u'multiple-document-handling', u'number-up', u'output-bin', u'orientation-requested', u'page-ranges', u'print-color-mode', u'print-quality', u'printer-resolution', u'sides'], 'job-hold-until-default': u'no-hold', 'job-hold-until-supported': [u'no-hold', u'indefinite', u'day-time', u'evening', u'night', u'second-shift', u'third-shift', u'weekend'], 'job-ids-supported': True, 'job-k-limit': 0, 'job-k-octets-supported': (0, 470914416), 'job-page-limit': 0, 'job-priority-default': 50, 'job-priority-supported': [100], 'job-quota-period': 0, 'job-settable-attributes-supported': [u'copies', u'finishings', u'job-hold-until', u'job-name', u'job-priority', u'media', u'media-col', u'multiple-document-handling', u'number-up', u'output-bin', u'orientation-requested', u'page-ranges', u'print-color-mode', u'print-quality', u'printer-resolution', u'sides'], 'job-sheets-default': (u'none', u'none'), 'job-sheets-supported': [u'none', u'classified', u'confidential', u'form', u'secret', u'standard', u'topsecret', u'unclassified'], 'jpeg-k-octets-supported': (0, 
470914416), 'jpeg-x-dimension-supported': (0, 65535), 'jpeg-y-dimension-supported': (1, 65535), 'marker-change-time': 0, 'media-bottom-margin-supported': [423], 'media-col-default': u'(unknown IPP value tag 0x34)', 'media-col-supported': [u'media-bottom-margin', u'media-left-margin', u'media-right-margin', u'media-size', u'media-source', u'media-top-margin', u'media-type'], 'media-default': u'iso_a4_210x297mm', 'media-left-margin-supported': [423], 'media-right-margin-supported': [423],


  • Clever memory usage through the years

    - by Ben Emmett
    A friend and I were recently talking about the really clever tricks people have used to get the most out of memory. I thought I’d share my favorites, and would love to hear yours too! Interleaving on drum memory Back in the ye olde days before I’d been born (we’re talking the 50s / 60s here), working memory commonly took the form of rotating magnetic drums. These would spin at a constant speed, and a fixed head would read from memory when the correct part of the drum passed it by, a bit like a primitive platter disk. Because each revolution took a few milliseconds, programmers took to manually arranging information non-sequentially on the drum, timing when an instruction or memory address would need to be accessed, then spacing information accordingly around the edge of the drum, thus reducing the access delay. Similar techniques were still used on hard disks and floppy disks into the 90s, but have become irrelevant with modern disk technologies. The Hashlife algorithm Conway’s Game of Life has attracted numerous implementations over the years, but Bill Gosper’s Hashlife algorithm is particularly impressive. Taking advantage of the repetitive nature of many cellular automata, it uses a quadtree structure to store the hashes of pieces of the overall grid. Over time there are fewer and fewer new structures which need to be evaluated, so it starts to run faster with larger grids, drastically outperforming other algorithms both in terms of speed and the size of grid which can be simulated. The actual amount of memory used is huge, but it’s used in a clever way, so makes the list . Elite’s procedural generation Ok, so this isn’t exactly a memory optimization – more a storage optimization – but it gets an honorable mention anyway. When writing Elite, David Braben and Ian Bell wanted to build a rich world which gamers could explore, but their 22K memory was something of a limitation (for comparison that’s about the size of my avatar picture at the top of this page). They procedurally generated all the characteristics of the 2048 planets in their virtual universe, including the names, which were stitched together using a lookup table of parts of names. In fact the original plans were for 2^52 planets, but it was decided that that was probably too many. Oh, and they did that all in assembly language. Other games of the time used similar techniques too – The Sentinel’s landscape generation algorithm being another example. Modern Garbage Collectors Garbage collection in managed languages like Java and .NET ensures that most of the time, developers stop needing to care about how they use and clean up memory as the garbage collector handles it automatically. Achieving this without killing performance is a near-miraculous feet of software engineering. Much like when learning chemistry, you find that every time you think you understand how the garbage collector works, it turns out to be a mere simplification; that there are yet more complexities and heuristics to help it run efficiently. Of course introducing memory problems is still possible (and there are tools like our memory profiler to help if that happens to you) but they’re much, much rarer. A cautionary note In the examples above, there were good and well understood reasons for the optimizations, but cunningly optimized code has usually had to trade away readability and maintainability to achieve its gains. Trying to optimize memory usage without being pretty confident that there’s actually a problem is doing it wrong. So what have I missed? 
Tell me about the ingenious (or stupid) tricks you’ve seen people use. Ben


  • Seven Random Thoughts on JavaOne

    - by HecklerMark
    As most people reading this blog may know, last week was JavaOne. There are a lot of summary/recap articles popping up now, and while I didn't want to just "add to pile", I did want to share a few observations. Disclaimer: I am an Oracle employee, but most of these observations are either externally verifiable or based upon a collection of opinions from Oracle and non-Oracle attendees alike. Anyway, here are a few take-aways: The Java ecosystem is alive and well, with a breadth and depth that is impossible to adequately describe in a short post...or a long post, for that matter. If there is any one area within the Java language or JVM that you would like to - or need to - know more about, it's well-represented at J1. While there are several IDEs that are used to great effect by the developer community, NetBeans is on a roll. I lost count how many sessions mentioned or used NetBeans, but it was by far the dominant IDE in use at J1. As a recent re-convert to NetBeans, I wasn't surprised others liked it so well, only how many. OpenJDK, OpenJFX, etc. Many developers were understandably concerned with the change of sponsorship/leadership when Java creator and longtime steward Sun Microsystems was acquired by Oracle. The read I got from attendees regarding Oracle's stewardship was almost universally positive, and the push for "openness" is deep and wide within the current Java environs. Few would probably have imagined it to be this good, this soon. Someone observed that "Larry (Ellison) is competitive, and he wants to be the best...so if he wants to have a community, it will be the best community on the planet." Like any company, Oracle is bound to make missteps, but leadership seems to be striking an excellent balance between embracing open efforts and innovating in competitive paid offerings. JavaFX (2.x) isn't perfect or comprehensive, but a great many people (myself included) see great potential, are developing for it, and are really excited about where it is and where it may be headed. This is another part of the Java ecosystem that has impressive depth for being so new (JavaFX 1.x aside). If you haven't kicked the tires yet, give it a try! You'll be surprised at how capable and versatile it is, and you'll probably catch yourself smiling while coding again.  :-) JavaEE is everywhere. Not exactly a newsflash, but there is a lot of buzz around EE still/again/anew. Sessions ranged from updated component specs/technologies to Websockets/HTML5, from frameworks to profiles and application servers. Programming "server-side" Java isn't confined to the server (as you no doubt realize), and if you still consider JavaEE a cumbersome beast, you clearly haven't been using the last couple of versions. Download GlassFish or the WebLogic Zip distro (or another JavaEE 6 implementation) and treat yourself. JavaOne is not inexpensive, but to paraphrase an old saying, "If you think that's expensive, you should try ignorance." :-) I suppose it's possible to attend J1 and learn nothing, but you'd have to really work at it! Attending even a single session is bound to expand your horizons and make you approach your code, your problem domain, differently...even if it's a session about something you already know quite well. The various presenters offer vastly different perspectives and challenge you to re-think your own approach(es). 
And finally, if you think the scheduled sessions are great - and make no mistake, most are clearly outstanding - wait until you see what you pick up from what I like to call the "hallway sessions". Between the presentations, people freely mingle in the hallways, go to lunch and dinner together, and talk. And talk. And talk. Ideas flow freely, sparking other ideas and the "crowdsourcing" of knowledge in a way that is hard to imagine outside of a conference of this magnitude. Consider this the "GO" part of a "BOGO" (Buy One, Get One) offer: you buy the ticket to the "structured" part of JavaOne and get the hallway sessions at no additional charge. They're really that good. If you weren't able to make it to JavaOne this year, you can still watch/listen to the sessions online by visiting the JavaOne course catalog and clicking the media link(s) in the right column - another demonstration of Oracle's commitment to the Java community. But make plans to be there next year to get the full benefit! You'll be glad you did. All the best,Mark P.S. - I didn't mention several other exciting developments in areas like the embedded space and the "internet of things" (M2M), robotics, optimization, and the cloud (among others), but I think you get the idea. JavaOne == brainExpansion;  Hope to see you there next year!


  • Stepping outside Visual Studio IDE [Part 1 of 2] with Eclipse

    - by mbcrump
    “If you're walking down the right path and you're willing to keep walking, eventually you'll make progress." – Barack Obama In my quest to become a better programmer, I’ve decided to start the process of learning Java. I will be primary using the Eclipse Language IDE. I will not bore you with the history just what is needed for a .NET developer to get up and running. I will provide links, screenshots and a few brief code tutorials. Links to documentation. The Official Eclipse FAQ’s Links to binaries. Eclipse IDE for Java EE Developers the Galileo Package (based on Eclipse 3.5 SR2)  Sun Developer Network – Java Eclipse officially recommends Java version 5 (also known as 1.5), although many Eclipse users use the newer version 6 (1.6). That's it, nothing more is required except to compile and run java. Installation Unzip the Eclipse IDE for Java EE Developers and double click the file named Eclipse.exe. You will probably want to create a link for it on your desktop. Once, it’s installed and launched you will have to select a workspace. Just accept the defaults and you will see the following: Lets go ahead and write a simple program. To write a "Hello World" program follow these steps: Start Eclipse. Create a new Java Project: File->New->Project. Select "Java" in the category list. Select "Java Project" in the project list. Click "Next". Enter a project name into the Project name field, for example, "HW Project". Click "Finish" Allow it to open the Java perspective Create a new Java class: Click the "Create a Java Class" button in the toolbar. (This is the icon below "Run" and "Window" with a tooltip that says "New Java Class.") Enter "HW" into the Name field. Click the checkbox indicating that you would like Eclipse to create a "public static void main(String[] args)" method. Click "Finish". A Java editor for HW.java will open. In the main method enter the following line.      System.out.println("This is my first java program and btw Hello World"); Save using ctrl-s. This automatically compiles HW.java. Click the "Run" button in the toolbar (looks like a VCR play button). You will be prompted to create a Launch configuration. Select "Java Application" and click "New". Click "Run" to run the Hello World program. The console will open and display "This is my first java program and btw Hello World". You now have your first java program, lets go ahead and make an applet. Since you already have the HW.java open, click inside the window and remove all code. Now copy/paste the following code snippet. Java Code Snippet for an applet. 1: import java.applet.Applet; 2: import java.awt.Graphics; 3: import java.awt.Color; 4:  5: @SuppressWarnings("serial") 6: public class HelloWorld extends Applet{ 7:  8: String text = "I'm a simple applet"; 9:  10: public void init() { 11: text = "I'm a simple applet"; 12: setBackground(Color.GREEN); 13: } 14:  15: public void start() { 16: System.out.println("starting..."); 17: } 18:  19: public void stop() { 20: System.out.println("stopping..."); 21: } 22:  23: public void destroy() { 24: System.out.println("preparing to unload..."); 25: } 26:  27: public void paint(Graphics g){ 28: System.out.println("Paint"); 29: g.setColor(Color.blue); 30: g.drawRect(0, 0, 31: getSize().width -1, 32: getSize().height -1); 33: g.setColor(Color.black); 34: g.drawString(text, 15, 25); 35: } 36: } The Eclipse IDE should look like Click "Run" to run the Hello World applet. Now, lets test our new java applet. 
So, navigate over to your workspace, for example “C:\Users\mbcrump\workspace\HW Project\bin”, and you should see two files: HW.class and java.policy.applet. Create an HTML page with the following code:

    <HTML>
    <BODY>
    <APPLET CODE=HW.class WIDTH=200 HEIGHT=100>
    </APPLET>
    </BODY>
    </HTML>

Open the HTML page in Firefox or IE and you will see your applet running. I hope this brief look at the Eclipse IDE helps someone get acquainted with Java development. Even if your full-time gig is with .NET, it will not hurt to have another language in your tool belt. As always, I welcome any suggestions or comments.
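For reference, here is roughly what the finished HW.java from the Hello World steps looks like before it is replaced by the applet code; this is a sketch assuming the default package and the class name used in the walkthrough above:

    public class HW {

        public static void main(String[] args) {
            // prints the greeting to the Eclipse console
            System.out.println("This is my first java program and btw Hello World");
        }
    }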

    Read the article

  • Future Of F# At Jazoon 2011

    - by Alois Kraus
    I was at the Jazoon 2011 in Zurich (Switzerland). It was a really cool event and it had many top notch speaker not only from the Microsoft universe. One of the most interesting talks was from Don Syme with the title: F# Today/F# Tomorrow. He did show how to use F# scripting to browse through open databases/, OData Web Services, Sharepoint, …interactively. It looked really easy with the help of F# Type Providers which is the next big language feature in a future F# version. The object returned by a Type Provider is used to access the data like in usual strongly typed object model. No guessing how the property of an object is called. Intellisense will show it just as you expect. There exists a range of Type Providers for various data sources where the schema of the stored data can somehow be dynamically extracted. Lets use e.g. a free database it would be then let data = DbProvider(http://.....); data the object which contains all data from e.g. a chemical database. It has an elements collection which contains an element which has the properties: Name, AtomicMass, Picture, …. You can browse the object returned by the Type Provider with full Intellisense because the returned object is strongly typed which makes this happen. The same can be achieved of course with code generators that use an input the schema of the input data (OData Web Service, database, Sharepoint, JSON serialized data, …) and spit out the necessary strongly typed objects as an assembly. This does work but has the downside that if the schema of your data source is huge you will quickly run against a wall with traditional code generators since the generated “deserialization” assembly could easily become several hundred MB. *** The following part contains guessing how this exactly work by asking Don two questions **** Q: Can I use Type Providers within C#? D: No. Q: F# is after all a library. I can reference the F# assemblies and use the contained Type Providers? D: F# does annotate the generated types in a special way at runtime which is not a static type that C# could use. The F# type providers seem to use a hybrid approach. At compilation time the Type Provider is instantiated with the url of your input data. The obtained schema information is used by the compiler to generate static types as usual but only for a small subset (the top level classes up to certain nesting level would make sense to me). To make this work you need to access the actual data source at compile time which could be a problem if you want to keep the actual url in a config file. Ok so this explains why it does work at all. But in the demo we did see full intellisense support down to the deepest object level. It looks like if you navigate deeper into the object hierarchy the type provider is instantiated in the background and attach to a true static type the properties determined at run time while you were typing. So this type is not really static at all. It is static if you define as a static type that its properties shows up in intellisense. But since this type information is determined while you are typing and it is not used to generate a true static type and you cannot use these “intellistatic” types from C#. Nonetheless this is a very cool language feature. With the plotting libraries you can generate expressive charts from any datasource within seconds to get quickly an overview of any structured data storage. My favorite programming language C# will not get such features in the near future there is hope. 
If you restrict yourself to OData sources, you can use LINQPad to query any OData-enabled data source with LINQ with ease. There you can query Stack Overflow directly, and the output is also nicely rendered, which makes it a very good tool for exploring OData sources today.

    Read the article

  • SharePoint: Numeric/Integer Site Column (Field) Types

    - by CharlesLee
    What field type should you use when creating number based site columns as part of a SharePoint feature? Windows SharePoint Services 3.0 provides you with an extensible and flexible method of developing and deploying Site Columns and Content Types (both of which are required for most SharePoint projects requiring list or library based data storage) via the feature framework (more on this in my next full article.) However there is an interesting behaviour when working with a column or field which is required to hold a number, which I thought I would blog about today. When creating Site Columns in the browser you get a nice rich UI in order to choose the properties of this field: However when you are recreating this as a feature defined in CAML (Collaborative Application Mark-up Language), which is a type of XML (more on this in my article) then you do not get such a rich experience.  You would need to add something like this to the element manifest defined in your feature: <Field SourceID="http://schemas.microsoft.com/sharepoint/3.0"        ID="{C272E927-3748-48db-8FC0-6C7B72A6D220}"        Group="My Site Columns"        Name="MyNumber"        DisplayName="My Number"        Type="Numeric"        Commas="FALSE"        Decimals="0"        Required="FALSE"        ReadOnly="FALSE"        Sealed="FALSE"        Hidden="FALSE" /> OK, its not as nice as the browser UI but I can deal with this. Hang on. Commas="FALSE" and yet for my number 1234 I get 1,234.  That is not what I wanted or expected.  What gives? The answer lies in the difference between a type of "Numeric" which is an implementation of the SPFieldNumber class and "Integer" which does not correspond to a given SPField class but rather represents a positive or negative integer.  The numeric type does not respect the settings of Commas or NegativeFormat (which defines how to display negative numbers.)  So we can set the Type to Integer and we are good to go.  Yes? Sadly no! You will notice at this point that if you deploy your site column into SharePoint something has gone wrong.  Your site column is not listed in the Site Column Gallery.  The deployment must have failed then?  But no, a quick look at the site columns via the API reveals that the column is there.  What new evil is this?  Unfortunately the base type for integer fields has this lovely attribute set on it: UserCreatable = FALSE So WSS 3.0 accordingly hides your field in the gallery as you cannot create fields of this type. However! You can use them in content types just like any other field (except not in the browser UI), and if you add them to the content type as part of your feature then they will show up in the UI as a field on that content type.  Most of the time you are not going to be too concerned that your site columns are not listed in the gallery as you will know that they are there and that they are still useable. So not as bad as you thought after all.  Just a little quirky.  But that is SharePoint for you.
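    For reference, a sketch of the same field element switched to the Integer type; the GUID, group and names are just the illustrative values from the example above, and the Commas and Decimals attributes are dropped on the assumption that they do not apply to an Integer field:

    <Field SourceID="http://schemas.microsoft.com/sharepoint/3.0"
           ID="{C272E927-3748-48db-8FC0-6C7B72A6D220}"
           Group="My Site Columns"
           Name="MyNumber"
           DisplayName="My Number"
           Type="Integer"
           Required="FALSE"
           ReadOnly="FALSE"
           Sealed="FALSE"
           Hidden="FALSE" />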

    Read the article

  • Five Key Trends in Enterprise 2.0 for 2011

    - by kellsey.ruppel(at)oracle.com
    We recently sat down with Andy MacMillan, an industry veteran and vice president of product management for Enterprise 2.0 at Oracle, to get his take on the year ahead in Enterprise 2.0 (E2.0). He offered us his five predictions about the ways he believes E2.0 technologies will transform business in 2011. 1. Forward-thinking organizations will achieve an unprecedented level of organizational awareness. Enterprise 2.0 and Web 2.0 technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. But this year we are anticipating that organizations will go to the next step and integrate social activities with business applications to deliver rich contextual "activity streams." Activity streams are a new way for enterprise users to get relevant information as quickly as it happens, by navigating to that information in context directly from their portal. We don't mean syndicating social activities limited to a single application. Instead, we believe back-office systems will be combined with social media tools to drive how users make informed business decisions in brand new ways. For example, an account manager might log into the company portal and automatically receive notification that colleagues are closing business around a certain product in his market segment. With a single click, he can reach out instantly to these colleagues via social media and learn from their successes to drive new business opportunities in his own area. 2. Online customer engagement will become a high priority for CMOs. A growing number of chief marketing officers (CMOs) have created a new direct report called "head of online"--a senior marketing executive responsible for all engagements with customers and prospects via the Web, mobile, and social media. This new field has been dubbed "Web experience management" or "online customer engagement" by firms and analyst organizations. It is likely to rapidly increase demand for a host of new business objectives and metrics from Web content management solutions. As companies interface with customers more and more over the Web, Web experience management solutions will help deliver more targeted interactions to ensure increased customer loyalty while meeting sales and business objectives. 3. Real composite applications will be widely adopted. We expect organizations to move from the concept of a single "uber-portal" that encompasses all the necessary features to a more modular, component-based concept for composite applications. This approach is now possible as IT and power users are empowered to assemble new, purpose-built composite applications quickly from existing components. 4. Records management will drive ECM consolidation. We continue to see a significant shift in the approach to records management. Several years ago initiatives were focused on overlaying records management across a set of electronic repositories and physical storage locations. We believe federated records management will continue, but we also expect to see records management driving conversations around single-platform content management consolidation. 5. Organizations will demand ECM at extreme scale. We have already seen a trend within IT organizations to provide a common, highly scalable infrastructure to consolidate and support content and information needs. But as data sizes grow exponentially, ECM at an extreme scale is likely to spread at unprecedented speeds this year. 
This makes sense as regulations and transparency requirements rise. The model in which ECM and lightweight CMS systems provide basic content services such as check-in, update, delete, and search has converged around a set of industry best practices and has even been codified in new industry standards such as Content Management Interoperability Services (CMIS). As these services converge and the demand for them accelerates, organizations are beginning to rationalize investments into a single, highly scalable infrastructure. Is your organization ready for Enterprise 2.0 in 2011? Learn more.

    Read the article

  • Yes, I did it - Skydiving in Mauritius

    Finally, I did it or better said we did it. Already back in November last year I saw the big billboard advertisement of Skydive Austral Mauritius near Caudan Waterfront in Port Louis and decided for myself that this is going to be the perfect birthday gift for my wife. Simply out of curiosity I would join her tandem jump with a second instructor. Due to her pregnancy of our son I had to be patient... But then finally, her birthday had arrived and on our midnight celebration session I showed her her netbook with the website preloaded. Actually, it was the "perfect" timing... Recovery from her cesarean is fine, local weather conditions are gorgious and the children were under surveillance of my mum - spending her annual holidays on the island. So, after late wake-up in the morning, we packed our stuff and off we went. According to Google Maps direction indication we had to drive for roughly 50km (only) but traffic here in Mauritius is always challenging. The dropzone is at the Zone Industrielle Mon Loisir Sugar Estate near Riviere du Rempart at the northern east coast. Anyways, we were not in a hurry and arrived there shortly after noon. The access road to the airfield are just small down-driven paths through sugar cane fields and according to our daughter "it's bumpy!". True true true... The facilities at Skydive Austral Mauritius are complete except for food. Enough space for parking, easy handling at the reception and a lot to see for the kids. There's even a big terrace with several sets of tables and chairs, small bar for soft drinks, strictly non-alcoholic. The team over there is all welcoming and warm-hearthy! Having the kids with us was no issue at all. Quite the opposite, our daugther was allowed to discover a lot of things than we adults did. Even visiting the small air plane was on the menu for her. Really great stuff! While waiting for our turn we enjoyed watching other people getting ready in the jump gear, taking off with the Cessna, and finally coming back down on the tandem parachute. Actually, the different expressions on their faces was one of the best parts while waiting. Great mental preparation as my wife was getting more anxious about her first jump... {loadposition content_adsense} First, we got some information about the procedures on the plane about how to get seated, tight up with our instructors and how to get ready for the jump off the plane as soon as we arrive the height of 10.000 ft. All well explained and easy to understand after all.Next, we met with our jumpers Chris and Lee aka "Rasta" to get dressed and ready for take-off. Those guys are really cool and relaxed for their job. From that point on, the DVD session / recording for my wife's birthday started and we really had a lot of fun... The difference between that small Cessna and a commercial flight with an Airbus or a Boeing is astronomic! The climb up to 10.000 ft took us roughly 25 minutes and we enjoyed the magnificent view over the turquoise lagunes near Poste de Flacq, Lafayette and Isle d'Ambre on the north-east coast. After flying through the clouds we sun-bathed and looked over "iced-sugar covered" Mauritius. You might have a look at the picture gallery of Skydive Mauritius for better imagination. The moment of truth, or better said, point of no return came after approximately 25 minutes. The door opens, moving into position on the side on top of the wheel and... out! Back flip and free fall! Slight turns and Wooooohooooo! through the clouds... It so amazing and breath-taking! 
So indescribable! You have to experience this yourself! A few seconds later the parachute opened and we glided smoothly, with some turns and spins, back down to the dropzone. The rest of the family could soon hear and see us, and the landing was easy going. We never had any doubts or fears about our instructors. They did a great job and we are looking forward to booking our next jump. I might even consider taking skydiving classes and earning a license. By the way, feel free to get in touch with Skydive Austral Mauritius, either via the contact details on their website or by tweeting a little bit with them. Follow the tweets of Chris and fellows on SkydiveAustral.

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. Afterall, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost.  Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain. Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart. Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart. Gold raises the game substantially for business critical applications that can’t accept vulnerability to single points-of-failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times. Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement. 
But they deliver substantial value for your most critical applications where downtime and data loss are not an option. The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection – the Not One Size Fits All title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial-in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another – whether it’s your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at: Oracle Maximum Availability Architecture - Webcast Maximize Availability with Oracle Database 12c - Technical White Paper
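    To make the tiers concrete with a hypothetical example: a departmental reporting database that can tolerate an overnight restore from backup maps naturally to Bronze; a shared application database that mainly needs to ride out server failures and rolling maintenance maps to Silver; and the order-capture database behind a customer-facing storefront, where neither downtime nor data loss is acceptable, belongs in Gold, or in Platinum once in-flight transactions must also be masked from users.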

    Read the article

  • Solaris 11.2: Functional Deprecation

    - by alanc
In Solaris 11.1, I updated the system headers to enable use of several attributes on functions, including noreturn and printf format, to give compilers and static analyzers more information about how they are used, so they can give better warnings when building code. In Solaris 11.2, I've gone back in and added one more attribute to a number of functions in the system headers: __attribute__((__deprecated__)). This is used to warn people building software that they’re using function calls we recommend no longer be used.

While in many cases the Solaris Binary Compatibility Guarantee means we won't ever remove these functions from the system libraries, we still want to discourage their use. I made passes through both the POSIX and C standards, and some of the Solaris architecture review cases, to come up with an initial list which the Solaris architecture review committee accepted to start with. This set is by no means a complete list of obsolete function interfaces, but should be a reasonable start at functions that are well documented as deprecated and seem useful to warn developers away from. More functions may be flagged in the future as they get deprecated, or if further passes are made through our existing deprecated functions to flag more of them.

For each header, the list below gives the deprecated interface, what deprecated it, the recommended alternative, and where the change is documented:

<door.h>: door_cred(3C), deprecated by PSARC/2002/188; alternative: door_ucred(3C); documented in door_cred(3C).
<kvm.h>: kvm_read(3KVM) and kvm_write(3KVM), deprecated by PSARC/1995/186; alternative: the functions on the kvm_kread(3KVM) man page; documented in kvm_read(3KVM).
<stdio.h>: gets(3C), deprecated by ISO C99 TC3 (removed in ISO C11) and POSIX:2008/XPG7/Unix08; alternative: fgets(3C); documented in the gets(3C) man page, and just about every gets(3C) reference online from the past 25 years, since the Morris worm proved bad things happen when it’s used.
<unistd.h>: vfork(2), deprecated by PSARC/2004/760 and POSIX:2001/XPG6/Unix03 (removed in POSIX:2008/XPG7/Unix08); alternative: posix_spawn(3C); documented in the vfork(2) man page.
<utmp.h>: all functions from the getutent(3C) man page, deprecated by PSARC/1999/103; alternative: the utmpx functions from the getutentx(3C) man page; documented in the getutent(3C) man page.
<varargs.h>: the varargs.h version of the va_list typedef, deprecated by the ANSI/ISO C89 standard; alternative: <stdarg.h>; documented in varargs(3EXT).
<volmgt.h>: all functions, deprecated by PSARC/2005/672; alternative: the hal(5) API; documented in volmgt_check(3VOLMGT), etc.
<sys/nvpair.h>: nvlist_add_boolean(3NVPAIR) and nvlist_lookup_boolean(3NVPAIR), deprecated by PSARC/2003/587; alternatives: nvlist_add_boolean_value and nvlist_lookup_boolean_value; documented in nvlist_add_boolean(3NVPAIR) & (9F) and nvlist_lookup_boolean(3NVPAIR) & (9F).
<sys/processor.h>: gethomelgroup(3C), deprecated by PSARC/2003/034; alternative: lgrp_home(3LGRP); documented in gethomelgroup(3C).
<sys/stat_impl.h>: _fxstat, _xstat, _lxstat and _xmknod, deprecated by PSARC/2009/657; alternative: stat(2); the old functions are undocumented remains of SVR3/COFF compatibility support.

To See or Not To See

To see these warnings, you will need to be building with either gcc (versions 3.4, 4.5, 4.7, & 4.8 are available in the 11.2 package repo), or with Oracle Solaris Studio 12.4 or later (which, like Solaris 11.2, is currently in beta testing).
For instance, take this oversimplified (and obviously buggy) implementation of the cat command:

    #include <stdio.h>

    int main(int argc, char **argv) {
        char buf[80];
        while (gets(buf) != NULL)
            puts(buf);
        return 0;
    }

Compiling it with the Studio 12.4 beta compiler will produce warnings such as:

    % cc -V
    cc: Sun C 5.13 SunOS_i386 Beta 2014/03/11
    % cc gets_test.c
    "gets_test.c", line 6: warning: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221

The exact warning given varies by compiler, and the compilers also have a variety of flags to either raise the warnings to errors or silence them. Of course, the exact form of the output is Not An Interface that can be relied on for automated parsing, just shown for example. gets(3C) is actually a special case: as noted above, it is no longer part of the C Standard Library in the C11 standard, so when compiling in C11 mode (i.e. when __STDC_VERSION__ >= 201112L), the <stdio.h> header will not provide a prototype for it, causing the compiler to complain it is unknown:

    % gcc -std=c11 gets_test.c
    gets_test.c: In function ‘main’:
    gets_test.c:6:5: warning: implicit declaration of function ‘gets’ [-Wimplicit-function-declaration]
         while (gets(buf) != NULL)
         ^

The gets(3C) function of course is still in libc, so if you ignore the error or provide your own prototype, you can still build code that calls it; you just have to acknowledge you’re taking on the risk of doing so yourself.

Solaris Studio 12.4 Beta

    % cc gets_test.c
    "gets_test.c", line 6: warning: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221
    % cc -errwarn=E_DEPRECATED_ATT gets_test.c
    "gets_test.c", line 6: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221
    cc: acomp failed for gets_test.c

This warning is silenced in the 12.4 beta by cc -erroff=E_DEPRECATED_ATT. No warning is currently issued by Studio 12.3 and earlier releases.

gcc 3.4.3

    % /usr/sfw/bin/gcc gets_test.c
    gets_test.c: In function `main':
    gets_test.c:6: warning: `gets' is deprecated (declared at /usr/include/iso/stdio_iso.h:221)

The warning is completely silenced with gcc -Wno-deprecated-declarations.

gcc 4.7.3

    % /usr/gcc/4.7/bin/gcc gets_test.c
    gets_test.c: In function ‘main’:
    gets_test.c:6:5: warning: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Wdeprecated-declarations]
    % /usr/gcc/4.7/bin/gcc -Werror=deprecated-declarations gets_test.c
    gets_test.c: In function ‘main’:
    gets_test.c:6:5: error: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Werror=deprecated-declarations]
    cc1: some warnings being treated as errors

The warning is completely silenced with gcc -Wno-deprecated-declarations.

gcc 4.8.2

    % /usr/bin/gcc gets_test.c
    gets_test.c: In function ‘main’:
    gets_test.c:6:5: warning: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Wdeprecated-declarations]
         while (gets(buf) != NULL)
         ^
    % /usr/bin/gcc -Werror=deprecated-declarations gets_test.c
    gets_test.c: In function ‘main’:
    gets_test.c:6:5: error: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Werror=deprecated-declarations]
         while (gets(buf) != NULL)
         ^
    cc1: some warnings being treated as errors

The warning is completely silenced with gcc -Wno-deprecated-declarations.

    Read the article

  • Nodemanager Init.d Script

    - by john.graves(at)oracle.com
    I’ve seen many of these floating around.  This is my favourite on an Ubuntu based machine. Just throw it into the /etc/init.d directory and update the following lines: export MW_HOME=/opt/app/wls10.3.4 user='weblogic' Then run: update-rc.d nodemanager default Everything else should be ok for 10.3.4. #!/bin/sh # ### BEGIN INIT INFO # Provides: nodemanager # Required-Start: # Required-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: WebLogic Nodemanager ### END INIT INFO # nodemgr Oracle Weblogic NodeManager service # # chkconfig: 345 85 15 # description: Oracle Weblogic NodeManager service # ### BEGIN INIT INFO # Provides: nodemgr # Required-Start: $network $local_fs # Required-Stop: # Should-Start: # Should-Stop: # Default-Start: 3 4 5 # Default-Stop: 0 1 2 6 # Short-Description: Oracle Weblogic NodeManager service. # Description: Starts and stops Oracle Weblogic NodeManager. ### END INIT INFO # Source function library. . /lib/lsb/init-functions # set Weblogic environment defining CLASSPATH and LD_LIBRARY_PATH # to start/stop various components. export MW_HOME=/opt/app/wls10.3.4 # # Note: # The setWLSEnv.sh not only does a good job of setting the environment, # but also advertises the fact explicitly in the console! Silence it. # . $MW_HOME/wlserver_10.3/server/bin/setWLSEnv.sh > /dev/null # set NodeManager environment export NodeManagerHome=$WL_HOME/common/nodemanager NodeManagerLockFile=$NodeManagerHome/nodemanager.log.lck # check JAVA_HOME if [ -z ${JAVA_HOME:-} ]; then export JAVA_HOME=/opt/sun/products/java/jdk1.6.0_18 fi exec=$MW_HOME/wlserver_10.3/server/bin/startNodeManager.sh prog='nodemanager' user='weblogic' is_nodemgr_running() { local nodemgr_cnt=`ps -ef | \ grep -i 'java ' | \ grep -i ' weblogic.NodeManager ' | \ grep -v grep | \ wc -l` echo $nodemgr_cnt } get_nodemgr_pid() { nodemgr_pid=0 if [ `is_nodemgr_running` -eq 1 ]; then nodemgr_pid=`ps -ef | \ grep -i 'java ' | \ grep -i ' weblogic.NodeManager ' | \ grep -v grep | \ tr -s ' ' | \ cut -d' ' -f2` fi echo $nodemgr_pid } check_nodemgr_status () { local retval=0 local nodemgr_cnt=`is_nodemgr_running` if [ $nodemgr_cnt -eq 0 ]; then if [ -f $NodeManagerLockFile ]; then retval=2 else retval=3 fi elif [ $nodemgr_cnt -gt 1 ]; then retval=4 else retval=0 fi echo $retval } start() { ulimit -n 65535 [ -x $exec ] || exit 5 echo -n $"Starting $prog: " su $user -c "$exec &" retval=$? echo return $retval } stop() { echo -n $"Stopping $prog: " kill -s 9 `get_nodemgr_pid` &> /dev/null retval=$? echo [ $retval -eq 0 ] && rm -f $NodeManagerLockFile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { local retval=`check_nodemgr_status` if [ $retval -eq 0 ]; then echo "$prog (pid:`get_nodemgr_pid`) is running..." elif [ $retval -eq 4 ]; then echo "Multiple instances of $prog are running..." else echo "$prog is stopped" fi return $retval } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo -n "Usage: $0 {" echo -n "start|" echo -n "stop|" echo -n "status|" echo -n "restart|" echo -n "condrestart|" echo -n "try-restart|" echo -n "reload|" echo -n "force-reload" echo "}" exit 2 esac exit $? 
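    In other words, assuming you save the script as /etc/init.d/nodemanager: make it executable with chmod +x /etc/init.d/nodemanager, register it with update-rc.d nodemanager defaults (the keyword is defaults, plural), and from then on you can control the node manager with sudo service nodemanager start, stop, status or restart.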

    Read the article

  • My collection of favourite TFS utilities

    - by Aaron Kowall
    So, you’re in charge of your company or team’s Team Foundation Server.  Wish it was easier to manage, administer, extend?  Well, here are a few utilities that I highly recommend looking at. I’ve recently had need to rebuild my laptop and upgrade my local TFS environment to TFS 2012 Update 1.  This gave me cause to enumerate some of the utilities I like to have on hand. One of the reasons I love to use TFS on projects is that it’s basically a complete ALM toolkit.  Everything from Task Management, Version Control, Build Management, Test Management, Metrics and Reporting are all there ‘in the box’.  However, no matter how complete a product set it, there are always ways to make it better.  Here are a list of utilities and libraries that are pretty generally useful.  this is not intended to be an exhaustive list of TFS extensions but rather a set that I recommend you look at.  There are many more out there that may be applicable in one scenario or another.  This set of tools should work with TFS 2012 or 2010 if you grab the right version. Most of these tools (and more) are available from the Visual Studio Gallery or CodePlex. General TFS Power Tools – This is ‘the’ collection of utilities and extensions delivered by the Product Group.  Highly recommended from here are the Best Practice Analyzer for ensuring your TFS implementation is healthy and the Team Foundation Server Backups to ensure your TFS databases are backed up correctly. TFS Administrators Toolkit – helps make updates to work item types and reports across many team projects.  Also provides visibility of disk usage by finding large files in version control or test attachments to assist in managing storage utilization. Version Control Git-TF - a set of cross-platform, command line tools that facilitate sharing of changes between TFS and Git. These tools allow a developer to use a local Git repository, and configure it to share changes with a TFS server.  Great for all Git lovers who must integrate into a TFS repository. Testing TFS 2012 Tester Power Tool – A utility for bulk copying test cases which assists in an approach for managing test cases across multiple releases.  A little plug that this utility was written and maintained by Anna Russo of Imaginet where I also work. Test Scribe - A documentation power tool designed to construct documents directly from the TFS for test plan and test run artifacts for the purpose of discussion, reporting etc. Reporting Community TFS Report Extensions - a single repository of SQL Server Reporting Services report for Team Foundation 2010 (and above).  Check out the Test Plan Status report by Imaginet’s Steve St. Jean.  Very valuable for your test managers. Builds TFS Build Manager – A great utility if you are build manager over a complex build environment with many TFS build definitions. Community TFS Build Extensions – contains many custom build activities.  Current release binaries are for TFS 2010 but many of the activities can be recompiled for use with TFS 2012. While compiling this list, I was surprised by the number of TFS utilities and extensions I no longer use/need in TFS 2012 because of the great work by the TFS team addressing many gaps since the 2010 release. Are there any utilities you depend on that I’ve missed?  I’d love to hear about them in the comments!

    Read the article

  • On Reflector Pricing

    - by Nick Harrison
    I have heard a lot of outrage over Red Gate's decision to charge for Reflector. In the interest of full disclosure, I am a fan of Red Gate. I have worked with them on several usability tests. They also sponsor Simple Talk where I publish articles. They are a good company. I am also a BIG fan of Reflector. I have used it since Lutz originally released it. I have written my own add-ins. I have written code to host reflector and use its object model in my own code. Reflector is a beautiful tool. The care that Lutz took to incorporate extensibility is amazing. I have never had difficulty convincing my fellow developers that it is a wonderful tool. Almost always, once anyone sees it in action, it becomes their favorite tool. This wide spread adoption and usability has made it an icon and pivotal pillar in the DotNet community. Even folks with the attitude that if it did not come out of Redmond then it must not be any good, still love it. It is ironic to hear everyone clamoring for it to be released as open source. Reflector was never open source, it was free, but you never were able to peruse the source code and contribute your own changes. You could not even use Reflector to view the source code. From the very beginning, it was never anyone's intention for just anyone to examine the source code and make their own contributions aside from the add-in model. Lutz chose to hand over the reins to Red Gate because he believed that they would be able to build on his original vision and keep the product viable and effective. He did not choose to make it open source, hoping that the community would be up to the challenge. The simplicity and elegance may well have been lost with the "design by committee" nature of open source. Despite being a wonderful and beloved tool, Reflector cannot be an easy tool to maintain. Maybe because it is so wonderful and beloved, it is even more difficult to maintain. At any rate, we have high expectations. Reflector must continue to be able to reasonably disassemble every language construct that the framework and core languages dream up. We want it to be fast, and we also want it to continue to be simple to use. No small order. Red Gate tried to keep the core product free. Sadly there was not enough interest in the Pro version to subsidize the rest of the expenses. $35 is a reasonable cost, more than reasonable. I have read the blog posts and forum posts complaining about the time associated with getting the expense approved. I have heard people complain about the cost being unreasonable if you are a developer from certain countries. Let's do the math. How much of a productivity boost is Reflector? How many hours do you think it saves you in a typical project? The next question is a little easier if you are a contractor or a consultant, but what is your hourly rate? If you are not a contractor, you can probably figure out an hourly rate. How long does it take to get a return on your investment? The value added proposition is not a difficult one to make. I have read people clamoring that Red Gate sucks and is evil. They complain about broken promises and conflicts of interest. Relax! Red Gate is not evil. The world is not coming to an end. The sun will come up tomorrow. I am sure that Red Gate will come up with options for volume licensing or site licensing for companies that want to get a licensed copy for their entire team. Don't panic, and I am sure that many great improvements are on the horizon. 
Switching the UI to WPF and including a tabbed interface opens up lots of possibilities.
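    To put assumed numbers on that value argument: at a rate of, say, $50 an hour, Reflector only has to save you a single hour to cover the $35 license with change to spare, and saving ten hours over a year works out to roughly a fourteen-fold return.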

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements. In much the same way, a DBA uses baselines to gauge the gauge health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all of its four CPU cores running close 100% and don’t panic anymore. Why? It’s 5PM and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day. Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. However, without baselines, though, it is very difficult to pinpoint what ails a server, just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months. It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows: INSERT into BaseLine.SomeSystemTable (value, captureTime) SELECT Value, SYSDATETIME() FROM SomeSystemTableOrView; Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparison of metrics over different periods. However, to get yourself started and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses! 
Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!

    Read the article

  • Cannot Mount USB 3.0 Hard Disk ?!!

    - by Tenken
    Hi, I have a USB 3.0 external hard disk which I am unable to mount. The entry appears in the "lsusb" command, but I do not exactly understand how to mount it. This is the output for my lsusb command. "ASMedia Technology Inc." is the USB 3.0 device. I would appreciate some help in mounting and accessing the hard disk. This the relevant output of my "lsusb -v" : Bus 009 Device 002: ID 174c:5106 ASMedia Technology Inc. Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x174c ASMedia Technology Inc. idProduct 0x5106 bcdDevice 0.01 iManufacturer 2 ASMedia iProduct 3 AS2105 iSerial 1 00000000000000000000 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 3 bMaxPacketSize0 9 idVendor 0x1d6b Linux Foundation idProduct 0x0003 3.0 root hub bcdDevice 2.06 iManufacturer 3 Linux 2.6.35-28-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:04:00.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection TT think time 8 FS bits bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0503 highspeed power enable connect Port 4: 0000.0503 highspeed power enable connect Device Status: 0x0003 Self Powered Remote Wakeup Enabled This is the error given when I try to mount the hard drive: shinso@shinso-IdeaPad:~$ sudo mount /dev/sdb /mnt [sudo] password for shinso: mount: /dev/sdb: unknown device This the output of "dmesg|tail": [30062.774178] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. 
Plaintext passthrough mode is not enabled; returning -EIO [30535.800977] usb 9-4: USB disconnect, address 3 [30659.237342] Valid eCryptfs headers not found in file header region or xattr region [30659.237351] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO [31259.268310] Valid eCryptfs headers not found in file header region or xattr region [31259.268313] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO [31860.059058] Valid eCryptfs headers not found in file header region or xattr region [31860.059062] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO [32465.220590] Valid eCryptfs headers not found in file header region or xattr region [32465.220593] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO I am using Ubuntu 10.10 (64 bit). Any help is appreciated.

    Read the article

  • How to get faster graphics in KVM? VNC is painfully slow with Haiku OS guest, Spice won't install and SDL doesn't work

    - by Don Quixote
    I've been coming up to speed on the Haiku operating system, an Open Source clone of BeOS 5 Pro. I'm using an Apple MacBook Pro as my development machine. Apple's BootCamp BIOS does not support more than four partitions on the internal hard drive. While I can set up extended and logical partitions, doing so will prevent any of the installed operating systems from booting. To run Haiku directly on the iron, I boot it off a USB stick. Using external storage is also helpful because I am perpetually out of filesystem space. While VirtualBox is documented to allow access to physical drives, I could not actually get it to work. Also VirtualBox can only use one of the host CPU's cores. While VB guests can be configured for more than one CPU, they are only emulated. A full build of the Haiku OS takes 4.5 under VB. I had the hope of reducing build times by using KVM instead, but it's not working nearly as well as VirtualBox did. The Linux Kernel Virtual Machine is broken in all manner of fundamental ways as seen from Haiku. But I'm a coder; maybe I could contribute to fixing some of those problems. The first problem I've got is that Haiku's video in virt-manager is quite painfully slow. When I drag Haiku windows around the desktop, they lag quite far behind where my mouse is. It's quite difficult to move a window to a precise position on the screen. Just imagine that the mouse was connected to the window title bar with a really stretchy spring. Also Haiku's mouse lags quite far behind where I have moved it. I found lots of Personal Package Archives that enable Spice from QEMU / KVM at the Ubuntu Personal Package Arhives. I tried a few of the PPAs but none of them worked; with one of them, the command "add-apt-repository" crashed with a traceback. There is a Wiki page about Spice, but it says that it only works on 64-bit. My Early 2006 MacBook Pro is 32-bit. Its Apple Model Identifier is MacBookPro1,1; these use Core Duos NOT Core 2 Duos. I don't mind building a source deb for 32-bit if I can expect it to work. Is there some reason that Spice should be 64-bit only? Does it need features of the x86_64 Instruction Set Architecture that x86 does not have? When I try using SDL from virt-manager, the configuration for Local SDL Window says "Xauth: /home/mike/.Xauthority". When I try to start my guest, virt-manager emits an error. When I Googled the error message, the usual solution was to make ~/.Xauthority readible. However, .Xauthorty does not exist in my home directory. Instead I have a $XAUTHORITY environment variable. There is no way to configure SDL in virt-manager to use $XAUTHORITY instead of ~/.Xauthority. Neither does it work to copy the value of $XAUTHORITY into the file. I am ready to scream, because I've been five fscking days trying to make KVM work for Haiku development. There is a whole lot more that is broken than the slow video. All I really want to do for now is speed up my full builds of Haiku by using "jam -j2" to use both cores in my CPU. I may try Xen next, but the last time I monkeyed with Xen it was far, far more broken than I am finding KVM to be. Just for now, I would be satisfied if there were some way to use my USB stick as a drive in VirtualBox. VB does allow me to configure /dev/sdb as a drive, but it always causes a fatal error when I try to launch the guest. Thank You For Any Advice You Can Give Me. -

    Read the article

  • Heterogeneous Datacenter Management with Enterprise Manager 12c

    - by Joe Diemer
    The following is a Guest Blog, contributed by Bryce Kaiser, Product Manager at Blue MedoraWhen I envision a perfect datacenter, it would consist of technologies acquired from a single vendor across the entire server, middleware, application, network, and storage stack - Apps to Disk - that meets your organization’s every IT requirement with absolute best-of-breed solutions in every category.   To quote a familiar motto, your datacenter would consist of "Hardware and Software, Engineered to Work Together".  In almost all cases, practical realities dictate something far less than the IT Utopia mentioned above.   You may wish to leverage multiple vendors to keep licensing costs down, a single vendor may not have an offering in the IT category you need, or your preferred vendor may quite simply not have the solution that meets your needs.    In other words, your IT needs dictate a heterogeneous IT environment.  Heterogeneity, however, comes with additional complexity. The following are two pretty typical challenges:1) No End-to-End Visibility into the Enterprise Wide Application Deployment. Each vendor solution which is added to an infrastructure may bring its own tooling creating different consoles for different vendor applications and platforms.2) No Visibility into Performance Bottlenecks. When multiple management tools operate independently, you lose diagnostic capabilities including identifying cross-tier issues with database, hung-requests, slowness, memory leaks and hardware errors/failures causing DB/MW issues. As adoption of Oracle Enterprise Manager (EM) has increased, especially since the release of Enterprise Manager 12c, Oracle has seen an increase in the number of customers who want to leverage their investments in EM to manage non-Oracle workloads.  Enterprise Manager provides a single pane of glass view into their entire datacenter.  By creating a highly extensible framework via the Oracle EM Extensibility Development Kit (EDK), Oracle has provided the tooling for business partners such as my company Blue Medora as well as customers to easily fill gaps in the ecosystem and enhance existing solutions.  As mentioned in the previous post on the Enterprise Manager Extensibility Exchange, customers have access to an assortment of Oracle and Partner provided solutions through this Exchange, which is accessed at http://www.oracle.com/goto/emextensibility.  Currently, there are over 80 Oracle and partner provided plug-ins across the EM 11g and EM 12c versions.  Blue Medora is one of those contributing partners, for which you will find 3 of our solutions including our flagship plugin for VMware.  Let's look at Blue Medora’s VMware plug-in as an example to what I'm trying to convey.  Here is a common situation solved by true visibility into your entire stack:Symptoms•    My database is bogging down, however the database appears okay internally.  Maybe it’s starved for resources?•    My OS tooling is showing everything is “OK”.  Something doesn’t add up. Root cause•    Through the VMware plugin we can see the problem is actually on the virtualization layer Solution•    From within Enterprise Manager  -- the same tool you use for all of your database tuning -- we can overlay the data of the database target, host target, and virtual machine target for a true picture of the true root cause. Here is the console view: Perhaps your monitoring conditions are more specific to your environment.  No worries, Enterprise Manager still has you covered.  
With Metric Extensions you have the “Next Generation” of User-Defined Metrics, which easily bring the power of your existing management scripts into a single console while leveraging the proven Enterprise Manager framework. Simply put, Oracle Enterprise manager boasts a growing ecosystem that provides the single pane of glass for your entire datacenter from the database and beyond.  Bryce can be contacted at [email protected]

    Read the article

< Previous Page | 302 303 304 305 306 307 308 309 310 311 312 313  | Next Page >