Search Results

Search found 6670 results on 267 pages for 'speed dial'.


  • Are You Afraid of Each Other? Study Shows CMO’s/CIO’s Missing Benefits of Collaboration

    - by Mike Stiles
    Remember that person in school you spent months being too scared to talk to? Then when you finally did, it led to a wonderful friendship…if not something more. New research from Oracle, Social Media Today and Leader Networks shows marketing and IT need to get over whatever’s holding them back and start reaping the benefits of collaboration.

    Back in the old days of just a few years ago, marketing could stay on their side of the building, IT could stay on their side of the building, and both could refer to the other as “those guys.” Today, the structure of organizations is shifting from islands to “us,” one integrated body where each part knows what the other parts are doing, and all parts work together in accomplishing job one…a winning customer experience. Ignore that, and you start losing. Give your reluctance to change priority over the benefits of new collaborations, and you start losing. You’re either working together and accelerating forward or getting in the way of each other’s separate agendas and grinding down…much to your competitors’ delight.

    The study reveals a basic current truth: those who are collaborating in marketing and IT report being more effective; however, fewer than one-third report collaborating even “frequently.” In other words, this is obviously a good thing, so we’d better not do it. Smart.

    The white paper, “Socially Driven Collaboration,” set out to explore how today’s always-changing digital, social and mobile landscape is forcing change across the enterprise, whether it’s welcomed or not. Part of what it found is that marketing and IT leaders are not unaware of what’s going on and see their roles evolving. Both know the ability to collaborate more effectively now exists. And of those who are collaborating, over two-thirds say they’re “more effective” professionally because of it. Yet even if you don’t want to take the Oracle study’s word for it, an August 2013 Accenture study of 400 senior marketing and 250 IT executives revealed only 10% think CMO/CIO collaboration is at the right level.

    There’s a lot of room for improvement here, and not just around people. Collaboration is also being called for across processes and technologies. Business benefits of such collaboration cited in the Oracle study include stronger marketing messages, faster speed-to-market, greater product adoption, faster discovery of product and service shortcomings, and reduction in project costs. Those are the benefits you will cheat yourself out of by keeping “those guys” at arm’s length and continuing to try to function in traditional roles while modern business and the consumer are changing around you.

    “Intelligence is the ability to adapt to change.” –Stephen Hawking

    @mikestiles Photo: istockphoto

    Read the article

  • links for 2011-03-15

    - by Bob Rhubart
    - Dr. Frank Munz: Resize AWS EC2 Cloud Instances. Dr. Munz says: "You cannot dynamically resize a running cloud instance. E.g. there is no API call to ask for 2.2 GHz CPU speed instead of 1.8 GHz or to dynamically add another 3.5 GB of RAM." (tags: oracle cloud amazon ec2)
    - Roddy Rodstein: Oracle VM Manager Architecture and Scalability. Rodstein says: "Oracle VM Manager can be installed in an all-in-one configuration using the default Oracle 10g Express Database or in a more traditional two tier architecture with an OC4J web tier and a 10 or 11g database tier." (tags: oracle otn virtualization oraclevm)
    - Mark Nelson: Getting started with Continuous Integration for SOA projects. Nelson says: "I am exploring how to use Maven and Hudson to create a continuous integration capability for SOA and BPM projects. This will be the first post of several on this topic, and today we will look at setting up some simple continuous integration for a single SOA project." (tags: oracle maven hudson soa bpm)
    - 5 New Java Champions (The Java Source). Tori Wieldt shares the big news. Congratulations to new Java Champs Jonas Bonér, James Strachan, Rickard Oberg, Régina ten Bruggencate, and Clara Ko. (tags: oracle java)
    - Alert for Forms customers running Oracle Forms 10g (Grant Ronald's Blog). Ronald says: "While you might have been happily running your Forms 10g applications for about 5 years or so now, the end of premier support is creeping up and you need to start planning for a move to Oracle Forms 11g." (tags: oracle oracleforms)
    - Brenda Michelson: Enterprise Architecture Rant #4,892. "I’m increasingly concerned about the macro-direction of our field, as we continue to suffer ivory tower enterprise architecture punditry, rigid frameworks and endless philosophical waxing." - Brenda Michelson (tags: entarch enterprisearchitecture ivorytower)
    - Amitabh Apte: Enterprise Architecture - Different Perspectives. "Business does not need Enterprise Architecture," says Apte, "it needs value and outcomes from the EA function." (tags: entarch enterprisearchitecture)
    - First Ever MySQL on Windows Online Forum - March 16, 2011 (Oracle's MySQL Blog). Monica Kumar shares the details. (tags: oracle mysql mswindows)
    - Jeff Davies: Running Multiple WebLogic and OSB Domains. "There is a small 'gotcha' if you want to create multiple domains on a development machine," says Jeff Davies. But don't worry - there's a solution. (tags: oracle soa osb weblogic servicebus)
    - The Arup Nanda Blog: Good Engineering. "Engineering is not about being superficially creative," Nanda says, "it's about reliability and trustworthiness." (tags: oracle engineering software technology)
    - Welcome to the SOA & E2.0 Partner Community Forum (SOA Partner Community Blog). (tags: ping.fm)

    Read the article

  • Move Window Buttons Back to the Right in Ubuntu 10.04

    - by Trevor Bekolay
    One of the more controversial changes in the Ubuntu 10.04 beta is the Mac OS-inspired change to have window buttons on the left side. We’ll show you how to move the buttons back to the right.

    Before: While the change may or may not persist through to the April 29 release of Ubuntu 10.04, in the beta version the maximize, minimize, and close buttons appear in the top left of a window.

    How to move the window buttons: The window button locations are dictated by a configuration file. We’ll use the graphical program gconf-editor to change this configuration file. Press Alt+F2 to bring up the Run Application dialog box, enter “gconf-editor” in the text field, and click on Run. The Configuration Editor should pop up. The key that we want to edit is in apps/metacity/general. Click on the + button next to the “apps” folder, then beside “metacity” in the list of folders expanded for apps, and then click on the “general” folder. The button layout can be changed by editing the “button_layout” key. Double-click button_layout to edit it, and change the text in the Value text field to: menu:maximize,minimize,close

    Click OK and the change will occur immediately, changing the location of the window buttons in the Configuration Editor. Note that this ordering of the window buttons is slightly different from the typical order; in previous versions of Ubuntu and in Windows, the minimize button is to the left of the maximize button. You can change the button_layout string to reflect that ordering, but using the default Ubuntu 10.04 theme, it looks a bit strange. If you plan to change the theme, or even just the graphics used for the window buttons, then this ordering may be more natural to you.

    After: After this change, all of your windows will have the maximize, minimize, and close buttons on the right. What do you think of Ubuntu 10.04’s visual change? Let us know in the comments!
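
    For those who prefer a terminal, the same GConf key can be set with gconftool-2, which ships with GNOME and edits the same apps/metacity/general/button_layout value that gconf-editor exposes. This is only a sketch of the idea: the layout string shown restores the conventional minimize/maximize ordering rather than the one given above, and you can check your current value first with --get.

        # Show the current window button layout
        gconftool-2 --get /apps/metacity/general/button_layout

        # Put the buttons back on the right, in the conventional order
        gconftool-2 --set /apps/metacity/general/button_layout \
            --type string "menu:minimize,maximize,close"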

    Read the article

  • CISDI Cloud - Industrial Cloud Computing Platform based on Oracle Products

    - by Wenyu Duan
    In today's era, Cloud Computing is becoming integral to the vision and corporate strategy of leading organizations and is often seen as a key business driver to achieve growth and innovation. Headquartered in Chongqing, China, CISDI Engineering Co., Ltd. is a large state-owned engineering company, offering consulting, engineering design, EPC contracting, and equipment integration services to steel producers all over the world. With over 50 years of experience, CISDI offers quality services for every aspect of production for projects in the metal industry, and the company has evolved into a leading international engineering service group with 18 subsidiaries providing complete lifecycle services for E&C projects.

    A CISDI group delegation led by Mr. Zhaohui Yu, CEO of CISDI Group, Mr. Zhiyou Li, CEO of CISDI Info, Mr. Qing Peng, CTO of CISDI Info, and Mr. Xin Xiao, Head of CISDI Info's R&D, joined Oracle OpenWorld 2012 and presented a very impressive cloud initiative case in their session titled “E&C Industry Solution in CISDI Cloud - An Industrial Cloud Computing Platform Based on Oracle Products”.

    CISDI group plans to expand through three phases in the construction of its cloud computing platform: first, it will relocate its existing technologies to Oracle systems, along with establishing a private cloud for CISDI; secondly, it will gradually provide mixed cloud services for its subsidiaries and partners; and finally it plans to launch an industrial cloud with a highly mature, secure and scalable environment providing cloud services for customers in the engineering construction and steel industries, among others. “CISDI Cloud” will become the growth engine for the organization to expand its global reach through online services and achieve the strategic objective of being the preferred choice of E&C companies worldwide.

    The new cloud computing platform is designed to provide access to a shared computing resource pool in a self-service, dynamic, elastic and measurable way. Its flexible and scalable grid structure can support elastic expansion and sustainable growth, and can bring significant benefits in speed, agility and efficiency. Further, the platform can greatly cut down deployment and maintenance costs. The CISDI delegation highlighted these points as the key reasons why the group decided on a strategic collaboration with Oracle for building this world-class industrial cloud:

    - Oracle’s strategy: Open, Complete and Integrated
    - Oracle as the only company that can provide engineered systems, with a complete product chain of hardware and software
    - Exadata, Exalogic, and EM 12c to provide a solid foundation for "CISDI Cloud"

    The cloud blueprint and advanced architecture for the industrial cloud computing platform presented in the session show how Oracle products and technologies, together with industrial applications from CISDI, can provide an end-to-end portfolio of E&C industry services in the cloud. CISDI group was recognized for business leadership and innovative solutions and was presented with the Engineering and Construction Industry Excellence Award during Oracle OpenWorld.

    Read the article

  • Why does my MySQL remote-connection fail (VLAN)?

    - by Johannes Nielsen
    ubuntu-community! Again I have a problem with my special friend MySQL :D

    I have got two servers - a database server and a web server - which are connected via VLAN. Now I want the web server to have remote access to the database server's MySQL. So I created the user user in mysql.user. user's Host is xxx.yyy.zzz.9, which is the internal IP address of the web server. xxx.yyy.zzz.0 is the network. I also created user with Host %. As long as I use MySQL on the database server logging in as user, everything works fine. But trying to log in as user from xxx.yyy.zzz.9 using mysql -h xxx.yyy.zzz.8 -u user -p (where xxx.yyy.zzz.8 is the database server's internal IP), I get

        ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.yyy.zzz.8' (110)

    So I tried to activate bind-address in the my.cnf file. Well, if I use xxx.yyy.zzz.8, nothing changes. But if I try xxx.yyy.zzz.9 and try to restart MySQL, I get

        mysql stop/waiting
        start: Job failed to start

    I checked the log files and found - nothing. The database server's MySQL doesn't even register that the web server tries to connect remotely. My idea is that maybe I didn't configure the VLAN properly, even though I asked someone who actually knows such stuff and he told me I did everything right. What I wrote into /etc/network/interfaces is:

        # The VLAN
        auto eth1
        iface eth1 inet static
            address xxx.yyy.zzz.8
            netmask 255.255.255.0
            network xxx.yyy.zzz.0
            broadcast xxx.yyy.zzz.255
            mtu 1500

    ifconfig returns

        eth1      Link encap:Ethernet  HWaddr xxxxxxxxxxxxxx
                  inet addr:xxx.yyy.zzz.8  Bcast:xxx.yyy.zzz.255  Mask:255.255.255.0
                  inet6 addr: xxxxxxxxxxxxxxx/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:241146 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:9765 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:17825995 (17.8 MB)  TX bytes:566602 (566.6 KB)
                  Memory:fb900000-fb920000

    for eth1, which is what I configured. (This is for the database server; the web server looks similar.) ethtool eth1 returns:

        Settings for eth1:
            Supported ports: [ TP ]
            Supported link modes:   10baseT/Half 10baseT/Full
                                    100baseT/Half 100baseT/Full
                                    1000baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: Yes
            Advertised link modes:  10baseT/Half 10baseT/Full
                                    100baseT/Half 100baseT/Full
                                    1000baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: Yes
            Speed: 100Mb/s
            Duplex: Full
            Port: Twisted Pair
            PHYAD: 1
            Transceiver: internal
            Auto-negotiation: on
            MDI-X: Unknown
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000003 (3)
                                   drv probe
            Link detected: yes

    (This is for the database server; the web server looks similar.) Actually I think everything is right, but it still doesn't work. Is there someone with an idea?

    EDIT: I commented out bind-address in my.cnf after it didn't work.
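
    The usual checklist for this kind of ERROR 2003 timeout can be run through from the shell. The sketch below assumes MySQL 5.x on Ubuntu and uses the question's placeholder addresses and the literal account name "user"; the schema name mydb and the password are made up for illustration.

        # On the database server (xxx.yyy.zzz.8): create the remote account
        # restricted to the web server's address and grant it access.
        mysql -u root -p -e "CREATE USER 'user'@'xxx.yyy.zzz.9' IDENTIFIED BY 'secret';
                             GRANT ALL PRIVILEGES ON mydb.* TO 'user'@'xxx.yyy.zzz.9';
                             FLUSH PRIVILEGES;"

        # In my.cnf, bind-address must be an address that exists on the server
        # itself (its own VLAN IP, or 0.0.0.0 to listen on all interfaces);
        # binding to the client's address (.9) is why mysqld refused to start.
        #   [mysqld]
        #   bind-address = xxx.yyy.zzz.8

        # Still on the database server: confirm mysqld is listening on the VLAN
        # interface and that no firewall rule drops port 3306.
        sudo netstat -tlnp | grep 3306
        sudo iptables -L -n | grep 3306

        # From the web server (xxx.yyy.zzz.9): test raw TCP reachability first;
        # a hang followed by a timeout points at firewalling or routing, not MySQL.
        telnet xxx.yyy.zzz.8 3306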

    Read the article

  • How To Fix YouTube Re-Buffering On Full Screen Issue

    - by Gopinath
    YouTube has an annoying bug – videos start re-buffering when we switch to full screen mode from normal mode. On a high speed broadband connection the re-buffering issue may not be very annoying, but on a slow broadband connection it annoys the hell out of us. When users reported this problem to YouTube, the engineers at YouTube dubbed it a feature rather than a bug! That is sick, and this behaviour shows that they have started ignoring users and their problems. Anyway, we have solutions to get around this annoying issue.

    Root Cause Of The Issue: The root cause of the bug is YouTube’s resolution switching mechanism. When the video is loaded in normal mode it is buffered and played at 360p, but when the full screen mode is activated the YouTube player switches to 480p and starts re-buffering the video.

    How To Fix The Issue on Google Chrome Browser: Fixing this issue on Google Chrome is very simple. All we need to do is install this Greasemonkey script and it fixes everything for you.

    How To Fix The Issue on Firefox Browser: Fixing this issue on Firefox involves an extra step compared to Chrome. To fix the issue: Step 1: Install the Greasemonkey add-on for Firefox. Step 2: Install this script from userscripts.org. Done. Firefox will handle the full screen switching smoothly.

    How To Fix The Issue on Internet Explorer: Hufff!! Internet Explorer users are poor users not because they are dumb but because they are using a stone age browser. No offense, IE is a pathetic browser and there is no support for Greasemonkey scripts. Anyway, let's look at the solution for fixing the YouTube issue on IE. To fix the YouTube bug you need to follow the official solution provided by Google, and it’s not a friendly one. Step 1: Log in to your YouTube account and select the option “I have a slow connection. Never play higher-quality video“. Step 2 – Repeat Always: Make sure that you are always logged into your YouTube account, as YouTube needs to know your settings before switching the resolution. (Now you know why I called IE a poor browser.)

    Related: Set the start time of a YouTube Video

    This article titled, How To Fix YouTube Re-Buffering On Full Screen Issue, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Manage Your Favorite Social Accounts in Chrome and Iron with Seesmic

    - by Asian Angel
    Are you looking for a way to manage your Twitter, Facebook, Google Buzz, LinkedIn, and Foursquare accounts all in one place? Using the Seesmic Web App for Chrome and Iron you can access your favorite accounts and manage them in a single, simple-to-use interface.

    A feature that we loved from the start was the ability to access Twitter without creating a special Seesmic account. And in these days of multiple accounts, who needs another one to complicate things? All that you need to do is sign in with your user name/e-mail along with your password. You do have to authorize access for Seesmic to connect with your account, but the whole process (login & authorization) is handled in a single window instance.

    Now on to a quick look at some of the UI features… The sidebar allows you to add additional columns to the main interface, set your favorite location for Trends, and tie in additional social services as desired. You can also access additional options and controls in the upper right corner. When you are ready to start tweeting, click in the blank at the top and enter your text, etc. in the convenient drop-down window that appears. Another nice perk is the ability to switch to a black and grey theme if the white is too bright for your needs.

    The Seesmic web app provides a simple-to-use, highly efficient way to manage your Twitter account and other favorite social services in a single tab interface. Seesmic [Chrome Web Store]

    Read the article

  • Wireless Drivers for Broadcom BCM 4321 (14e4:4329) will not stay connected to a wireless network

    - by Eugene
    So, I'm not necessarily new to Linux, I just never took the time to learn it, so please bear with me. I just swapped one of my wireless cards from one computer to another. The wireless card in question is a "Broadcom BCM4321 (14e4:4329)", or actually a "Netgear WN311B Rangemax Next 270 Mbps Wireless PCI Adapter", but that's not important. I've tried (but probably screwed up in the process) installing the "wl", "b43" and "brcmsmac" drivers, or at least I think I did. Currently I have only the following drivers loaded:

        eugene@EugeneS-PCu:~$ lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl"
        b43                   387371  0
        bcma                   52096  1 b43
        mac80211              630653  1 b43
        cfg80211              484040  2 b43,mac80211
        ssb_hcd                12869  0
        ssb                    62379  2 b43,ssb_hcd

    The main issue is that with most of the drivers I've installed, they will find my wireless network, but they will only stay connected for about a minute at abnormally slow speed and then all of a sudden disconnect. Currently, the computer is hooked into another to share its connection so that I can install drivers from the internet instead of loading them onto a flash drive and doing it offline. If anyone has any insight into the problem, that would be awesome. If not, I'll probably just look up how to install the Windows closed source driver.

    Edit 1: Even when I try the method here, as suggested when this was marked as a duplicate, I still can't stay connected to a wireless network.

    Edit 2: After discussing my issue with @Luis, he opened my question back up and told me to include the tests/procedures in the comments. Basically I did this:

    - Read the first answer of the link above (from when this question was marked as a duplicate), which involved removing bcmwl-kernel-source and instead installing firmware-b43-installer and b43-fwcutter. No change in the result, so I contacted Luis in the comments, who then told me to try the second answer, which involved removing my previous mistake and installing bcmwl-kernel-source.
    - Now the Network Manager (this has happened before, but usually I fixed it by using a different driver) doesn't even recognize that WiFi exists (both non-literal and literal).
    - Luis then suggested sudo rfkill unblock all. rfkill unblock all didn't return anything, so I decided to try sudo rfkill list all. It returns nothing (no wonder rfkill unblock all did nothing).
    - I enter lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl" and that returns nothing. I try loading the driver by entering sudo modprobe b43 and try lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl" again. It returns this:

        eugene@Eugenes-uPC:~$ sudo modprobe b43
        eugene@Eugenes-uPC:~$ lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl"
        b43                   387371  0
        bcma                   52096  1 b43
        mac80211              630653  1 b43
        cfg80211              484040  2 b43,mac80211
        ssb_hcd                12869  0
        ssb                    62379  2 b43,ssb_hcd

    So to recap: currently Network Manager doesn't recognize that wireless exists, the b43 drivers are loaded, and I've hardwired a connection from my laptop to the computer that's causing this.
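
    One clean way to try the proprietary Broadcom STA (wl) driver, which supports the BCM4321, is sketched below. The package names are the standard Ubuntu ones already mentioned in the question; the blacklist file name is arbitrary, and this is one plausible sequence rather than a guaranteed fix.

        # Remove the competing driver packages, then reinstall the STA driver.
        sudo apt-get purge firmware-b43-installer b43-fwcutter bcmwl-kernel-source
        sudo apt-get update
        sudo apt-get install bcmwl-kernel-source

        # Keep the open drivers from claiming the card at boot.
        printf "blacklist b43\nblacklist brcmsmac\nblacklist bcma\nblacklist ssb\n" | \
            sudo tee /etc/modprobe.d/blacklist-broadcom.conf

        # Unload the open drivers now and load wl (or simply reboot).
        sudo modprobe -r b43 brcmsmac bcma ssb
        sudo modprobe wl

        # Verify: wl should be loaded, nothing should be soft- or hard-blocked,
        # and the wireless interface should show up.
        lsmod | grep wl
        sudo rfkill list all
        iwconfig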

    Read the article

  • ASP.Net or WPF(C#)?

    - by Rachel
    Our team is divided on this and I wanted to get some third-party opinions. We are building an application and cannot decide if we want to use a .NET WPF desktop application with a WCF server, or an ASP.NET web app using jQuery. I thought I'd ask the question here, with some specs, and see what the pros/cons of using either side would be. I have my own favorite and feel I am biased.

    Ideally we want to build the initial release of the software as fast as we can, then slow down and take time to build in the additional features/components we want later on. Above all we want the software to be fast. Users go through records all day long, and delays in loading records or refreshing screens kill their productivity.

    Application Details: I'm estimating around 100 different screens for the initial version, with plans for a lot of additional screens being added later, after the initial release. We are looking to use two-way communication for reminder and event systems. Currently it has to support around 100 users, although we've been told to allow for growth up to 500 users. We have multiple locations.

    Items to consider (maybe not initially in some cases but in future releases):
    - Room for additional components to be added after the initial release (there are a lot of these… perhaps more work here than in the initial application)
    - Keyboard navigation
    - Performance is a must
    - Production speed to initial version
    - Low maintenance overhead
    - Future support
    - Softphone/scanner integration

    Our Developers: We have 1 programmer who has been learning WPF the past few months and was the one who suggested we use WPF for this. We have a 2nd programmer who is familiar with ASP.NET and who may help with the project in the future, although he will not be working on it much up until the initial release since his time is spent maintaining our current software. There is me, who has worked with both and am comfortable in either. We have an outside company doing the project management, and they are an ASP.NET company. We plan on hiring 1-2 others, however we need to know what direction we are going in first.

    Environment: General users are on Windows Server 2003 with Terminal Services. They connect using WYSE thin clients over an RDP connection. Admin staff have their own PCs with XP or higher. Users are allowed to specify their own resolution, although they are limited to using IE as the web browser. Other locations connect to our network over an MPLS connection.

    Based on that, what would you choose and why? I am asking here instead of SO because I am looking for opinions and not answers.

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman]

    I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data. The architecture might end up like the diagram in the original post. I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus; register on EventBrite if you want to come along.

    Synopsis: Looked at the benefits and drawbacks of app-centric isolated caches, compared to an enterprise-wide shared cache running on dedicated nodes; suggested issues and risks around caching including staleness of data, resource usage, performance and testing; walked through a generic service cache implemented as a WCF behaviour – suitable for IIS- or BizTalk-hosted services - which I'll be releasing on CodePlex shortly; listed common options for cache providers and their offerings.

    Discussion:
    - Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), reduced resource impact – CPU, memory, SQL and cost (e.g. caching results of paid-for services).
    - Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on cache provider, can offer easy scalability and global replication so cache clients always use local nodes. Restriction of AppFabric Caching to Windows Server 2008 not viewed as a concern.
    - Security. Limited security model in most cache providers. Options for securing cache content suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, can ensure cache servers are only accessible to cache clients via IPsec.
    - Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade.
    - Providers. Positive feedback for AppFabric Caching – speed, configurability and richness of the distributed model making it a good enterprise choice. .NET port of memcached well thought of for performance, but lack of replication makes it less suitable for these shared scenarios. Replicated fork – repcached – untried and less active than memcached. NCache also well thought of, but the Express version is too limited for enterprise scenarios, and the commercial versions look costly compared to AppFabric.

    Read the article

  • The Loneliest Road in America and the OTN Garage

    - by rickramsey
    I never told anyone how the image of the OTN Garage on Facebook came to be. I took the Facebook picture on Route 50 in Nevada, USA. I was riding from Colorado to Oracle OpenWorld in San Francisco, so it was probably October of 2010. Route 50 is known as "The Loneliest Road in America." There are roads across Nevada that have even LESS traffic, but Route 50 is still one. desolate. road.

    Although I have seen stranger things while riding along Nevada's Extraterrestrial Highway, I still run across notable oddities every time I ride Route 50. Like the old man with a bandolero of water bottles jogging along the side of the highway in the middle of the day, 50 miles from the closest town. First ultra-marathoner I'd seen in action. He waved at me. Or the dozen Corvettes with California license plates driving toward me, all doing the speed limit in the middle of nowhere because they were being tailed by half a dozen Nevada state troopers. #fail.

    I don't remember which town I was in, but I noticed the building when I stopped at the gas station. While standing there pouring fuel into the Harley, the store caught my eye. So I pulled the bike in front and walked inside. The owner is a little old lady, about 100 years old. Most of the goods she had on the shelves looked like they had been placed there during WWII. She was itty bitty and could barely see over the counter, but she was so happy when I bought a bar of Hershey's chocolate that she gave me a five cent discount. I took a few pictures and, when I got back, Kemer Thomson, who sometimes blogs here, photoshopped the OTN Garage and Oil Change signs onto it.

    The bike is a 2009 Road King Classic with a Bob Dron fairing and a Corbin heated seat. The seat came in handy when I rode home over Tioga Pass. The Road King is a very comfy touring bike with a great Harley rumble. I'm kinda sorry I sold it.

    When I stopped for fuel about 75 miles down the road at the next town, I peeled back the chocolate bar. It had turned into powder. Probably 50 years ago.

    - Rick

    Read the article

  • Book “Team Foundation Server 2012 Starter” published

    - by terje
    During the summer and fall this year, my colleague Jakob Ehn and I have worked together on a book project that has now finally hit the stores! The title of the book is Team Foundation Server 2012 Starter and it is published by Packt Publishing. Get it from http://www.packtpub.com/team-foundation-server-2012-starter/book or from Amazon http://www.amazon.com/dp/1849688389

    The book is part of a concept that Packt has with starter books, intended for people new to Team Foundation Server 2012 who want a quick guide to get it up and working. It covers the fundamentals, from installing and configuring it to using it with source control, work items and builds. It is done as a step-by-step guide, but also includes best-practice advice in the different areas. It covers the use of both the on-premises and the TFS Services version. It also has a list of links and references at the end to the most relevant Visual Studio 2012 ALM sites. Our good friend and fellow ALM MVP Mathias Olausson has done the review of the book; thanks again, Mathias! We hope the book fills the gap between the different online guide sites and the more advanced books that are out.

    Book Description: Your quick start guide to TFS 2012, top features, and best practices with hands-on examples.

    Overview: Install TFS 2012 from scratch. Get up and running with your first project. Streamline release cycles for maximum productivity.

    In Detail: Team Foundation Server 2012 is Microsoft's leading ALM tool, integrating source control, work item and process handling, build automation, and testing. This practical "Team Foundation Server 2012 Starter Guide" will provide you with clear step-by-step exercises covering all major aspects of the product. This is essential reading for anyone wishing to set up, organize, and use TFS server. This hands-on guide looks at the top features in Team Foundation Server 2012, starting with a quick installation guide and then moving into using it for your software development projects. Manage your team projects with Team Explorer, one of the many new features for 2012. Covering all the main features in source control to help you work more efficiently, including tools for branching and merging, we will delve into the Agile Planning Tools for planning your product and sprint backlogs. Learn to set up build automation, allowing your team to become faster, more streamlined, and ultimately more productive with this "Team Foundation Server 2012 Starter Guide".

    What you will learn from this book:
    - Install TFS 2012 on premise
    - Access TFS Services in the cloud
    - Quickly get started with a new project with product backlogs, source control, and build automation
    - Work efficiently with source control using the top features
    - Understand how the tools for branching and merging in TFS 2012 help you isolate work and teams
    - Learn about the existing process templates, such as Visual Studio Scrum 2.0
    - Manage your product and sprint backlogs using the Agile planning tools

    Approach: This Starter guide is a short, sharp introduction to Team Foundation Server 2012, covering everything you need to get up and running.

    Who this book is written for: If you are a developer, project lead, tester, or IT administrator working with Team Foundation Server 2012, this guide will get you up to speed quickly and with minimal effort.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.
    - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language
        that will be applicable to ON, and which will require all sharable
        object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects.

    By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
              -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

    - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
    - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule.
    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld.

    With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build!

    The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult.

    At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
    - It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
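
    As a companion to the description above, here is a small shell sketch of the build flow with stubs. The library names, mapfile names, and proto paths are hypothetical stand-ins; the ld options are the ones shown in the libc timing example, and only -z stub distinguishes a stub build from a real one.

        # Hypothetical paths for the two proto areas.
        ROOT=/ws/proto STUBROOT=/ws/proto-stub

        # 1. Build the stub object for libfoo from its mapfile alone. No
        #    compiled input objects are needed, so every stub in the
        #    workspace can be built this way, in parallel.
        ld -64 -z stub -G -hlibfoo.so.1 -ztext -zdefs -Bdirect \
            -M mapfile-foo -o $STUBROOT/usr/lib/64/libfoo.so.1

        # 2. Build the real libbar against the stub proto. Because every
        #    dependency already exists there as a stub, no ordering between
        #    library builds is required.
        ld -64 -G -hlibbar.so.1 -ztext -zdefs -Bdirect -M mapfile-bar \
            bar.o -L$STUBROOT/usr/lib/64 -lfoo -lc \
            -o $ROOT/usr/lib/64/libbar.so.1

        # 3. Only the objects delivered to $ROOT ship and run; the stub
        #    proto is a build-time artifact that is thrown away afterwards.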

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris Stevens
    One of the best things about programming is the abundance of different languages. There are general purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself: can be used interactively (there is some environment where programmers can enter commands one by one); requires no more than one file (neither project files nor makefiles are required for running in batch mode); can easily split code across multiple files (files can reference each other, or there is some support for modules); has good support for data structures (supports structures like arrays, lists, and especially classes); supports a wide variety of features (features like networking, serialization, XML, and database connectivity are supported by standard libraries). Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.
    Feature          C#      Python  shell scripting
    ---------------  ------  ------  ---------------
    Interactive      poor    strong  strong
    One file         poor    strong  strong
    Multiple files   strong  strong  moderate
    Data structures  strong  strong  poor
    Features         strong  strong  strong
    Is there a term that captures this idea? If not, what term should I use? Here are some candidates: Scalability (already used to describe language performance, so it's not a good idea to overload it in the context of language syntax); Granularity (expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures); Smoothness (expresses the idea of low friction, but doesn't express anything about strength of data structures or features). Note: Some of these properties are more correctly described as belonging to a compiler or IDE than the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT directives were added to the link-editor in order to support stub objects, they are a general purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental material for the stub object PSARC case, and of supplying a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built. 
Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5-element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object. 
% cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) (void) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need: _ET_DYN (shared object), _ET_EXEC (executable object), and _ET_REL (relocatable object). STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object. 
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require these attributes under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BINDING Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are: BITS (the section is not of type SHT_NOBITS) and NOBITS (the section is of type SHT_NOBITS). SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item. 
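For instance, a hypothetical variation of the idx5 example that follows this advice would not export the array at all, only an accessor function (this sketch is an illustration, not part of the original example):

% cat idx5_hidden.c
/* Hypothetical variation of idx5.c: the data is file-scope static and is
 * reached only through a function, so no data symbol is exported. */
static int idx5_data[5] = { 0, 1, 2, 3, 4 };

const int *
idx5_get(void)
{
        return (idx5_data);
}

With an interface like this, the mapfile only needs to list idx5_get (plus the scope-reducing local: *), and no data ASSERT directives are required at all.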
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • ath9k driver does not wake up

    - by shantanu
    I know this is a common question but I found no suitable answer, so I am asking it again. I installed Ubuntu 11.10. I found the bug for ath9k, so I set network boot first in the BIOS menu. That worked. I upgraded to 12.04 yesterday. Now ath9k is causing problems again. Network boot is still set first. ath9k works at start, but fails (connects again and again) after a couple of minutes. dmesg shows an error that it cannot wake up in 500us. So I tried compat-wireless-3.5.1-1, but the result is the same. I have also added the nohwcrypt=1 option in /etc/modprobe.d/ath9k.conf. Still no luck. I tried rmmod and then modprobe: sudo modprobe ath9k nohwcrypt=1 dmesg shows me this error: [ 400.690086] ath9k: Driver unloaded [ 406.214329] ath9k 0000:06:00.0: enabling device (0000 -> 0002) [ 406.214348] ath9k 0000:06:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [ 406.214368] ath9k 0000:06:00.0: setting latency timer to 64 [ 406.428517] ath9k 0000:06:00.0: Failed to initialize device [ 406.428852] ath9k 0000:06:00.0: PCI INT A disabled [ 406.428887] ath9k: probe of 0000:06:00.0 failed with error -5 dmesg error when the driver fails: [ 355.023521] ath: Chip reset failed [ 355.023524] ath: Unable to reset channel, reset status -22 [ 355.023556] ath: Unable to set channel [ 355.088569] ath: Failed to stop TX DMA, queues=0x10f! [ 355.122708] ath: DMA failed to stop in 10 ms AR_CR=0xffffffff AR_DIAG_SW=0xffffffff DMADBG_7=0xffffffff [ 355.122714] ath: Could not stop RX, we could be confusing the DMA engine when we start RX up [ 355.263962] ath: Chip reset failed [ 355.263966] ath: Unable to reset channel (2437 MHz), reset status -22 [ 358.996063] ath: Failed to wakeup in 500us [ 364.004182] ath: Failed to wakeup in 500us I cannot install a fresh Ubuntu because I have lots of applications installed. System: Acer Aspire 4250, AMD dual core 1.6GHz, Atheros Communications Inc. AR9485 Wireless Network Adapter (rev 01) EDITED Now I am in serious trouble. No wifi device is showing in the ifconfig or lshw output; only the ethernet interface shows. I tried (FN + WIFI) several times to enable the device but nothing helps. Now I have installed a fresh Ubuntu 12.04. Please help. lshw -c network: *-network description: Ethernet interface product: 82566DC Gigabit Network Connection vendor: Intel Corporation physical id: 19 bus info: pci@0000:00:19.0 logical name: eth0 version: 02 serial: 00:19:d1:7a:8e:f9 size: 100Mbit/s capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=2.0.0-k duplex=full firmware=1.1-0 ip=192.168.1.114 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:45 memory:90300000-9031ffff memory:90324000-90324fff ioport:20c0(size=32) rfkill command does not show anything, but no error either.
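    A minimal sketch of the option file and reload sequence being described (assuming the stock modprobe.d mechanism; the option value is the one already mentioned above):

    # /etc/modprobe.d/ath9k.conf
    options ath9k nohwcrypt=1

    # reload the module so the option takes effect (run as root)
    rmmod ath9k
    modprobe ath9k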

    Read the article

  • composite-video-to-usb adaptor

    - by sawa
    I bought a composite-video-to-usb adaptor. I want to stream video game in ubuntu. How can I do that? My environment: Monoprice USB Video and Audio Grabber Ubuntu 11.04 The relevant output of lsusb: Bus 001 Device 011: ID 0572:262a Conexant Systems (Rockwell), Inc. The relevant output of sudo lshw: *-usb:0 description: USB Controller product: 82801JI (ICH10 Family) USB UHCI Controller #4 vendor: Intel Corporation physical id: 1a bus info: pci@0000:00:1a.0 version: 00 width: 32 bits clock: 33MHz capabilities: uhci bus_master cap_list configuration: driver=uhci_hcd latency=0 resources: irq:16 ioport:f0e0(size=32) *-usb:1 description: USB Controller product: 82801JI (ICH10 Family) USB UHCI Controller #5 vendor: Intel Corporation physical id: 1a.1 bus info: pci@0000:00:1a.1 version: 00 width: 32 bits clock: 33MHz capabilities: uhci bus_master cap_list configuration: driver=uhci_hcd latency=0 resources: irq:21 ioport:f0c0(size=32) *-usb:2 description: USB Controller product: 82801JI (ICH10 Family) USB UHCI Controller #6 vendor: Intel Corporation physical id: 1a.2 bus info: pci@0000:00:1a.2 version: 00 width: 32 bits clock: 33MHz capabilities: uhci bus_master cap_list configuration: driver=uhci_hcd latency=0 resources: irq:18 ioport:f0a0(size=32) *-usb:3 description: USB Controller product: 82801JI (ICH10 Family) USB2 EHCI Controller #2 vendor: Intel Corporation physical id: 1a.7 bus info: pci@0000:00:1a.7 version: 00 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:18 memory:e0525c00-e0525fff *-multimedia description: Audio device product: 82801JI (ICH10 Family) HD Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=HDA Intel latency=0 resources: irq:43 memory:e0520000-e0523fff *-usb:4 description: USB Controller product: 82801JI (ICH10 Family) USB UHCI Controller #1 vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 00 width: 32 bits clock: 33MHz capabilities: uhci bus_master cap_list configuration: driver=uhci_hcd latency=0 resources: irq:23 ioport:f080(size=32) *-usb:5 description: USB Controller product: 82801JI (ICH10 Family) USB UHCI Controller #2 vendor: Intel Corporation physical id: 1d.1 bus info: pci@0000:00:1d.1 version: 00 width: 32 bits clock: 33MHz capabilities: uhci bus_master cap_list configuration: driver=uhci_hcd latency=0 resources: irq:19 ioport:f060(size=32) *-usb:6 description: USB Controller product: 82801JI (ICH10 Family) USB UHCI Controller #3 vendor: Intel Corporation physical id: 1d.2 bus info: pci@0000:00:1d.2 version: 00 width: 32 bits clock: 33MHz capabilities: uhci bus_master cap_list configuration: driver=uhci_hcd latency=0 resources: irq:18 ioport:f040(size=32) *-usb:7 description: USB Controller product: 82801JI (ICH10 Family) USB2 EHCI Controller #1 vendor: Intel Corporation physical id: 1d.7 bus info: pci@0000:00:1d.7 version: 00 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:23 memory:e0525800-e0525bff The relevant output of dmesg: [18953.220035] usb 1-1: new high speed USB device using ehci_hcd and address 6 [19964.761076] Linux video capture interface: v2.00 [19964.767112] usbcore: registered new interface driver uvcvideo [19964.767115] USB Video Class driver (v1.0.0)

    Read the article

  • Atheros AR9285 / Lenovo G560 wireless not working after installing 13.04

    - by teyi
    I had Ubuntu 12.04 initially installed on my laptop. I upgraded to 12.10 then 13.04. Everything worked fine, including wireless. After adding a new memory card ( I only had 2 gb and one memory slot free) my wireess stopped working. I backed up all my data and reinstallled Ubuntu 13.04. Everything works fine except wireess. I bought this laptop in 2010 from Japan. It has Intel Core i5 CPU M 450 @2.40 Ghz * 4 3,7 Gb RAM os type 64 bit The output of iwconfig: eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off The output of rfkill list all: 0: ideapad_wlan: Wireless LAN Soft blocked: no Hard blocked: no 1: phy0: Wireless LAN Soft blocked: no Hard blocked: no The output of lshw -C network: *-network description: Wireless interface product: AR9285 Wireless Network Adapter (PCI-Express) vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:05:00.0 logical name: wlan0 version: 01 serial: 78:e4:00:7d:fe:fa width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.8.0-19-generic firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:17 memory:d6400000-d640ffff *-network description: Ethernet interface product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:06:00.0 logical name: eth0 version: 02 serial: 88:ae:1d:2b:36:ac size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:2000(size=256) memory:d2410000-d2410fff memory:d2400000-d240ffff memory:d2420000-d243ffff The wi-fi network appears as disconnected ( it's greyed out) Strangely enough I see a wifi network ( not mine) but not mine or the rest. That network doesn't require a password . I click on it, try to connect and i get an error message: failed to connect to xxxxx ... 32) The access point/org/freedesktop/NetworkManager/AccessPoint/0 was not in the scan list. Someone help please

    Read the article

  • 2 Servers 1 Database - Can I use Redis?

    - by Aust
    Ok I have a couple of questions here. First let me give you some background information. I'm starting a project where I have a node.js server running my application and my website running on another normal server. My application will allow multiple users simultaneous connections and updates to the database so Redis seemed like a good fit there because of its speed and atomic functions. For someone to access my application they have to login with an account. To get an account, they have to signup for one through my website. So my website needs a database, but its not important to have a database like Redis here because it doesn't need it. Which leads me to my first question: 1. Can Redis even be used without node.js? It seems like it would be convenient if both of my servers were using the same database to keep track of information. In some cases, they will keep track of the same information (as in user information) and in other cases, they will be keeping track of separate information. So even if the website wouldn't be taking full advantage of all that Redis has to offer it seems like it would be more convenient. So assuming Redis could be used in this situation that leads to my next question: 2. Since Redis is linked with JavaScript, how would I handle the security from my website users? What would be stopping my website users from opening firebug or chrome's inspector and making changes to the database? Maybe if I designed my site with the layout like this: apply.php-update.php-home.php. Where after they submitted their form it would redirect them to the update page where the JavaScript would run and then redirect them after the database updated to the home page. I don't really know I'm just taking shots in the dark at this point. :) Maybe a better alternative would be to have my node.js application access its own Redis database and also have access to another MySQL database that my website also has access to. Or maybe there is another database that would be better suited for this situation other than Redis. Anyways any direction on this matter would be greatly appreciated. :)

    Read the article

  • Character Stats and Power

    - by Stephen Furlani
    I'm making an RPG game system and I'm having a hard time deciding on doing detailed or abstract character statistics. These statistics define the character's natural - not learned - abilities. For example: Mass Effect: 0 (None that I can see) X20 (Xtreme Dungeon Mastery): 1 "STAT" Diablo: 4 "Strength, Magic, Dexterity, Vitality" Pendragon: 5 "SIZ, STR, DEX, CON, APP" Dungeons & Dragons (3.x, 4e): 6 "Str, Dex, Con, Wis, Int, Cha" Fallout 3: 7 "S.P.E.C.I.A.L." RIFTS: 8 "IQ, ME, MA, PS, PP, PE, PB, Spd" Warhammer Fantasy Roleplay (1st ed?): 12-ish "WS, BS, S, T, Ag, Int, WP, Fel, A, Mag, IP, FP" HERO (5th ed): 14 "Str, Dex, Con, Body, Int, Ego, Pre, Com, PD, ED, Spd, Rec, END, STUN" The more stats, the more complex and detailed your character becomes. This comes with a trade-off however, because you usually only have limited resources to describe your character. D&D made this infamous with the whole min/max-ing thing where strong characters were typically not also smart. But also, a character with a high Str typically also has high Con, Defenses, Hit Points/Health. Without high numbers in all those other stats, they might as well not be strong since they wouldn't hold up well in hand-to-hand combat. So things like that force trade-offs within the category of strength. So my original (now rejected) idea was to force players into deciding between offensive and defensive stats: Might / Body Dexterity / Speed Wit / Wisdom Heart Soul But this left some stat's without "opposites" (or opposites that were easily defined). I'm leaning more towards the following: Body (Physical Prowess) Mind (Mental Prowess) Heart (Social Prowess) Soul (Spiritual Prowess) This will define a character with just 4 numbers. Everything else gets based off of these numbers, which means they're pretty important. There won't, however, be ways of describing characters who are fast, but not strong or smart, but absent minded. Instead of defining the character with these numbers, they'll be detailing their character by buying skills and powers like these: Quickness Add a +2 Bonus to Body Rolls when Dodging. for a character that wants to be faster, or the following for a big, tough character Body Building Add a +2 Bonus to Body Rolls when Lifting, Pushing, or Throwing objects. [EDIT - removed subjectiveness] So my actual questions is what are some pitfalls with a small stat list and a large amount of descriptive powers? Is this more difficult to port cross-platform (pen&paper, PC) for example? Are there examples of this being done well/poorly? Thanks,

    Read the article

  • USB drives not recognized all of a sudden (module usb_storage not loading)

    - by Siddharth
    I am very close to the solution, just need to know how to get usb-storage to load I have tried most of the advice on askubuntu and other sites, usb_storage enable to fdisk -l. But I am unable to find steps to get it working again. sudo lsusb results Bus.... skipped 4 lines Bus 004 Device 002: ID 413c:3012 Dell Computer Corp. Optical Wheel Mouse Bus 005 Device 002: ID 413c:2105 Dell Computer Corp. Model L100 Keyboard Bus 001 Device 005: ID 8564:1000 sudo dmseg | tail reports [ 69.567948] usb 1-4: USB disconnect, device number 4 [ 74.084041] usb 1-6: new high-speed USB device number 5 using ehci_hcd [ 74.240484] Initializing USB Mass Storage driver... [ 74.256033] scsi5 : usb-storage 1-6:1.0 [ 74.256145] usbcore: registered new interface driver usb-storage [ 74.256147] USB Mass Storage support registered. [ 74.257290] usbcore: deregistering interface driver usb-storage fdisk -l reports Device Boot Start End Blocks Id System /dev/sda1 * 2048 972656639 486327296 83 Linux /dev/sda2 972658686 976771071 2056193 5 Extended /dev/sda5 972658688 976771071 2056192 82 Linux swap / Solaris I think I need steps to install and get usb_storage module working. Edit : I tried sudo modprobe -v usb-storage reports sudo modprobe -v usb-storage insmod /lib/modules/3.2.0-48-generic-pae/kernel/drivers/usb/storage/usb-storage.ko Edit : jsiddharth@siddharth-desktop:~$ sudo udevadm monitor --udev monitor will print the received events for: UDEV - the event which udev sends out after rule processing UDEV [4757.144372] add /module/usb_storage (module) UDEV [4757.146558] remove /module/usb_storage (module) UDEV [4757.148707] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6 (usb) UDEV [4757.149699] add /bus/usb/drivers/usb-storage (drivers) UDEV [4757.151214] remove /bus/usb/drivers/usb-storage (drivers) UDEV [4757.156873] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0 (usb) UDEV [4757.160903] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi) UDEV [4757.164672] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host) UDEV [4757.165163] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host) UDEV [4757.165440] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi) Narrowing down more : Seems like I need usb_storage to load as a module jsiddharth@siddharth-desktop:~$ lsmod | grep usb usbserial 37201 0 usbhid 41937 0 hid 77428 1 usbhid Still no usb driver mounted. Nor does a device show up in /dev. Any step by step process to debug and fix this will be really helpful.

    Read the article

  • SQL SERVER – Iridium I/O – SQL Server Deduplication that Shrinks Databases and Improves Performance

    - by Pinal Dave
    Database performance is a common problem for SQL Server DBA’s.  It seems like we spend more time on performance than just about anything else.  In many cases, we use scripts or tools that point out performance bottlenecks but we don’t have any way to fix them.  For example, what do you do when you need to speed up a query that is already tuned as well as possible?  Or what do you do when you aren’t allowed to make changes for a database supporting a purchased application? Iridium I/O for SQL Server was originally built at Confio software (makers of Ignite) because DBA’s kept asking for a way to actually fix performance instead of just pointing out performance problems. The technology is certified by Microsoft and was so promising that it was spun out into a separate company that is now run by the Confio Founder/CEO and technology management team. Iridium uses deduplication technology to both shrink the databases as well as boost IO performance.  It is intriguing to see it work.  It will deduplicate a live database as it is running transactions.  You can watch the database get smaller while user queries are running. Iridium is a simple tool to use. After installing the software, you click an “Analyze” button which will spend a minute or two on each database and estimate both your storage and performance savings.  Next, you click an “Activate” button to turn on Iridium I/O for your selected databases.  You don’t need to reboot the operating system or restart the database during any part of the process. As part of my test, I also wanted to see if there would be an impact on my databases when Iridium was removed.  The ‘revert’ process (bringing the files back to their SQL Server native format) was executed by a simple click of a button, and completed while the databases were available for normal processing. I was impressed and enjoyed playing with the software and encourage all of you to try it out.  Here is the link to the website to download Iridium for free. . Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Cannot dual Windows XP and Ubuntu

    - by Fabio Machado
    I am new to Ubuntu and at the moment I am trying to get Ubuntu 12.10 onto one of my machines. The machine is a Pentium 4 @ 3.06, 2GB RAM, 200GB hard drive and an NVidia GeForce 8800 GT. A few days ago, I tried Ubuntu without installing and it worked perfectly. Yesterday, I decided to format the hard drive and divide it into four partitions: 1 for XP, 1 for Ubuntu, 1 for swap and 1 where I will have my documents. Everything went great, I installed XP and then Ubuntu, but I did something wrong on the partition window (the Ubuntu partition window) and I ended up without a boot loader. This morning, I formatted everything again, installed XP, and when I went to install Ubuntu (with the same DVD as before) the problems started. First, I had a black screen with a message written in white text saying something like: unable to find a medium containing a live file system. After I burned another CD and tried again, I got stuck at the red dots (loading screen). I then went online and I read somewhere that it could be the CD, so I checked the integrity of the CD and everything was fine. I also unplugged all USBs connected to the computer and nothing changed. I googled further options to try to solve my problem and some users suggested that people having these types of problems should try the alternate installation, which if I am not wrong is for networks. I then tried to install and yes, the installation process was different from the normal CD, but it did get stuck on a page where it was doing something, like: ...finding ethd0, and it was stuck at 100%. I tried USB installation as well and it also got stuck at the red dots (I do not have USB 3.0 on the computer in question). I have burned 5 different CDs, all at low speed. I checked the integrity and all are fine. I downloaded other distributions as well as other versions of Ubuntu and I still cannot install or even run the Live CD of Ubuntu or any other distribution. What is really annoying me is that everything was working perfectly before, when I first tried to install Ubuntu. Anyway, any help is welcome. Edit: My boot loader is normal, no errors and all the hardware is working fine. I forgot to mention that after the loading screen (red dots) gets stuck, the DVD drive and the hard drive go into an idle state. I also restored the default values of the BIOS and no luck.

    Read the article

  • Back-sliding into Unmanaged Code

    - by Laila
    It is difficult to write about Microsoft's ambivalence to .NET without mentioning clichés about dog food.  In case you've been away a long time, you'll remember that Microsoft surprised everyone with the speed and energy with which it introduced and evangelised the .NET Framework for managed code. There was good reason for this. Once it became obvious to all that it had sleepwalked into third place as a provider of development languages, behind Borland and Sun, it reacted quickly to attract the best talent in the industry to produce a windows version of the Java runtime, with Bounds-checking, Automatic Garbage collection, structures exception handling and common data types. To develop applications for this managed runtime, it produced several excellent languages, and more are being provided. The only thing Microsoft ever got wrong was to give it a stupid name. The logical step for Microsoft would be to base the entire operating system on the .NET framework, and to re-engineer its own applications. In 2002, Bill Gates, then Microsoft Chairman and Chief Software Architect said about their plans for .NET, "This is a long-term approach. These things don't happen overnight." Now, eight years later, we're still waiting for signs of the 'long-term approach'. Microsoft's vision of an entirely managed operating system has subsided since the Vista fiasco, but stays alive yet dormant as Midori, still being developed by Microsoft Research. This is an Internet-centric fork of the singularity operating system, a research project started in 2003 to build a highly-dependable operating system in which the kernel, device drivers, and applications are all written in managed code. Midori is predicated on the prevalence of connected systems, with provisions for distributed concurrency where application components exist 'in the cloud', and supports a programming model that can tolerate cancellation, intermittent connectivity and latency. It features an entirely new security model that sandboxes applications for increased security. So have Microsoft converted its existing applications to the .NET framework? It seems not. What Windows applications can run on Mono? Very few, it seems. We all thought that .NET spelt the end of DLL Hell and the need for COM interop, but it looks as if Bill Gates' idea of 'not overnight' might stretch to a decade or more. The Operating System has shown only minimal signs of migrating to .NET. Even where the use of .NET has come to dominate, when used for server applications with IIS, IIS itself is still entirely developed in unmanaged code. This is an irritation to Microsoft's greatest supporters who committed themselves fully to the NET framework, only to find parts of the Ambivalent Microsoft Empire quietly backsliding into unmanaged code and the awful C++. It is a strategic mistake that the invigorated Apple didn't make with the Mac OS X Architecture. Cheers, Laila

    Read the article

  • Using CTAS & Exchange Partition Replace IAS for Copying Partition on Exadata

    - by Bandari Huang
    Usage Scenario: Copy data & indexes from one partition to another partition in a partitioned table. Solution: Create a partition definition. Copy data from one partition to another partition by 'Insert as select (IAS)'. Create a nonpartitioned table by 'Create table as select (CTAS)'. Convert the nonpartitioned table into a partition of the partitioned table by exchanging their data segments. Rebuild unusable indexes. Exchange Partition Conversion Mutual conversion between a partition (or subpartition) and a nonpartitioned table Mutual conversion between a hash-partitioned table and a partition of a composite *-hash partitioned table Mutual conversion between a [range | list]-partitioned table and a partition of a composite *-[range | list] partitioned table. Exchange Partition Usage Scenario High-speed data loading of new, incremental data into an existing partitioned table in a DW environment Exchanging old data partitions out of a partitioned table, so the data is purged from the partitioned table without actually being deleted and can be archived separately Exchange Partition Syntax ALTER TABLE schema.table EXCHANGE [PARTITION|SUBPARTITION] [partition|subpartition] WITH TABLE schema.table [INCLUDING|EXCLUDING] INDEXES [WITH|WITHOUT] VALIDATION UPDATE [INDEXES|GLOBAL INDEXES] INCLUDING | EXCLUDING INDEXES Specify INCLUDING INDEXES if you want local index partitions or subpartitions to be exchanged with the corresponding table index (for a nonpartitioned table) or local indexes (for a hash-partitioned table). Specify EXCLUDING INDEXES if you want all index partitions or subpartitions corresponding to the partition and all the regular indexes and index partitions on the exchanged table to be marked UNUSABLE. If you omit this clause, then the default is EXCLUDING INDEXES. WITH | WITHOUT VALIDATION Specify WITH VALIDATION if you want Oracle Database to return an error if any rows in the exchanged table do not map into partitions or subpartitions being exchanged. Specify WITHOUT VALIDATION if you do not want Oracle Database to check the proper mapping of rows in the exchanged table. If you omit this clause, then the default is WITH VALIDATION.  UPDATE INDEXES|GLOBAL INDEXES Unless you specify UPDATE INDEXES, the database marks UNUSABLE the global indexes or all global index partitions on the table whose partition is being exchanged. Global indexes or global index partitions on the table being exchanged remain invalidated. (You cannot use UPDATE INDEXES for index-organized tables. Use UPDATE GLOBAL INDEXES instead.) Exchanging Partitions & Subpartitions Notes Both tables involved in the exchange must have the same primary key, and no validated foreign keys can be referencing either of the tables unless the referenced table is empty.  When exchanging partitioned index-organized tables: – The source and target table or partition must have their primary key set on the same columns, in the same order. – If key compression is enabled, then it must be enabled for both the source and the target, and with the same prefix length. – Both the source and target must be index organized. – Both the source and target must have overflow segments, or neither can have overflow segments. Also, both the source and target must have mapping tables, or neither can have a mapping table. – Both the source and target must have identical storage attributes for any LOB columns. 
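    As a concrete illustration of the solution steps above, a hypothetical sketch follows. All object names (sales, sales_2012, sales_2013, sales_stage, sales_local_idx) are invented for illustration; adapt them to your own schema.

    -- 1. Create the receiving partition definition.
    ALTER TABLE sales ADD PARTITION sales_2013 VALUES LESS THAN (DATE '2014-01-01');

    -- 2. Build a nonpartitioned copy of the source partition with CTAS
    --    (this replaces the row-by-row IAS copy).
    CREATE TABLE sales_stage NOLOGGING PARALLEL
      AS SELECT * FROM sales PARTITION (sales_2012);

    -- 3. Swap the staged segment into the target partition by exchanging
    --    data segments. WITHOUT VALIDATION skips the row-mapping check,
    --    which is what allows a straight copy between partitions;
    --    UPDATE GLOBAL INDEXES keeps any global indexes usable.
    ALTER TABLE sales
      EXCHANGE PARTITION sales_2013 WITH TABLE sales_stage
      EXCLUDING INDEXES WITHOUT VALIDATION
      UPDATE GLOBAL INDEXES;

    -- 4. Rebuild the local index partition left UNUSABLE by EXCLUDING INDEXES.
    ALTER INDEX sales_local_idx REBUILD PARTITION sales_2013 NOLOGGING;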

    Read the article

< Previous Page | 194 195 196 197 198 199 200 201 202 203 204 205  | Next Page >