Search Results

Search found 11107 results on 445 pages for 'drive bay'.


  • Looking for an ultra portable laptop for Ubuntu

    - by prule
    Hi, I'm in the market for a new laptop, and portability is important since I really only use it when I'm travelling to and from work - primarily for programming. I've been searching high and low for something like this:
    - less than 2kg
    - Intel i5 hopefully (but negotiable)
    - NO DVD drive - I just don't need it
    - 4GB RAM
    - either a 7200rpm disk or an SSD (SSD preferable)
    - 13 inch screen
    - not too pricey (the MacBook Air is about $1700 AUD)
    - available in Australia
    The Dell Inspiron 13z and Lenovo Edge 13 look close, but I've not found anything that says I'm not going to have a fight with compatibility. The MacBook Air 13 looks like the PERFECT hardware, but I'm afraid it will just be easier to run MacOS than Ubuntu. I want to stay with Ubuntu, but the MacBook Air is only $1700, so I'm in danger of becoming another Apple fanboi if I can't find anything competitive. Going through all the sites looking for stuff has been a huge waste of time:
    - System 76 doesn't deliver to Australia
    - http://www.linux-laptop.net/ and http://www.linlap.com/ are hard work and not confidence inspiring
    - http://www.vgcomputing.com.au/nsintro.html is hard work again: for every laptop they say has excellent compatibility, I have to search the web to find out what its spec is
    - http://zareason.com/shop/Strata-Pro-13.html (at $1345 USD) looks interesting, but I've got no idea how much I'll get stung by customs importing it
    - the Dell Inspiron 13z with i5, 4GB RAM, 320GB 7200rpm disk, ATI Mobility Radeon HD5430 (1GB) and Dell Wireless 1501 802.11b/g/n at $1200 AUD seems like the only competitor, but is it compatible? (Dell support offer no opinion - as far as they are concerned they only have 2 models that are certified for Ubuntu.)
    Am I worrying too much about the compatibility? Should I just go with Dell? Or switch to MacOS? (It would be good to have a searchable database with full machine specs and compatibility - I'm thinking about building something... but I don't have much time right now...) Thanks.
    UPDATE: I went with a MacBook Air. The price/weight/power was just right. Everything else was either too pricey (i5), too heavy, or underpowered (SU7300 1.3GHz). It's a pity, because I didn't really want to leave Ubuntu. I'll still run it on my media center and spare (heavy) laptop.

    Read the article

  • WebCenter Customer Spotlight: Regency Centers Corporation

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary
    Regency Centers Corporation, based in Jacksonville, FL, is a leading national owner, operator, and developer of grocery-anchored and community shopping centers. Regency grew rapidly over much of the last decade. To keep up with the monthly and yearly administrative processes required to manage thousands of tenants, including reconciling yearly pass-through expenses, the customer upgraded to Oracle's JD Edwards EnterpriseOne Version 9.0 and deployed Oracle WebCenter Imaging, Process Management, and Oracle BI Publisher to streamline invoice processing and reporting. Using Oracle WebCenter Imaging, Regency accelerated and improved vendor invoice accuracy, which increases process integrity by identifying potential duplicate bills while enabling rapid approval of electronic invoice documents.
    Company Overview
    Regency Centers Corporation, based in Jacksonville, FL, is a leading national owner, operator, and developer of grocery-anchored and community shopping centers. The company owns 367 centers, totaling nearly 50 million square feet, located in top markets throughout the United States. Founded in 1963 and operating as a fully integrated real estate company, Regency is a qualified real estate investment trust that is self-administered and self-managed, operating from 17 regional offices around the country.
    Business Challenges
    - Ensure continued support of vital business applications that drive the real estate developer's key business processes, including property management and tenant payment processing
    - Streamline year-end expense recognition and calculation, enabling faster tenant billing
    - Move to a Web-based platform to deliver greater mobility and convenience to employees
    - Minimize system customizations to reduce IT management costs and burden moving forward
    Solution Deployed
    Regency Centers Corporation worked with the Oracle Partner ICS to upgrade to Oracle's JD Edwards EnterpriseOne Version 9.0, migrating to a more user-friendly, Web-based platform and realizing numerous new efficiencies in property management and tenant payment processing. They accelerated and improved vendor invoice accuracy with Oracle WebCenter Imaging, which increases process integrity by identifying potential duplicate bills while enabling rapid approval of electronic invoice documents.
    Business Results
    - Enabled faster and more accurate tenant billing for year-end expenses, accelerating collections of millions of dollars in revenue
    - Gained full audit and drill-down capabilities that facilitate understanding various aspects of calculations for expense participation generation
    - Increased process integrity by identifying potential duplicate bills while enabling rapid approval of electronic invoice documents
    - Helped to ensure on-time payments to hundreds of vendors, including contractors and utilities
    "We have realized numerous efficiencies with Oracle's JD Edwards EnterpriseOne 9.0, particularly around tenant billings. It accelerates our year-end expense reconciliation process and enables us to create and process billings more quickly." - James Chiang, Vice President of Real Estate Accounting, Regency Centers Corporation
    Additional Information
    - Regency Centers Corporation Customer Snapshot
    - Oracle WebCenter Imaging
    - JD Edwards EnterpriseOne Financials 9.0
    - JD Edwards EnterpriseOne Project Costing
    - JD Edwards EnterpriseOne Real Estate Management
    - Oracle Business Intelligence Publisher
    - Oracle Essbase

    Read the article

  • Managing Regulated Content in WebCenter: USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM
    Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper)
    Life science customers now have the ability to take advantage of all of the benefits of Oracle's WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999, and in 2011 certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost-effectively validate this world class solution.
    With the Part 11 certification, Oracle's WebCenter now gives regulated life science organizations the ability to manage REGULATORY content in WebCenter, as well as the ability to take advantage of ALL of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. Here are a few screenshot examples of Part 11 functionality included in the product: E-Sign, E-Sign Render, Meta Data History, Audit Trail Report, and Access Reporting.
    Gone are the days when life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet ever-changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality. Oracle has been #1 in life sciences because of its ability to develop cost-effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for its customers. Adding a world class ECM solution to this product portfolio allows life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications.
    USDM provides:
    - Expertise in Life Science ECM Business Processes
    - Prebuilt Life Science Configuration in WebCenter
    - Validation Accelerator Packs for WebCenter
    USDM is very proud to support Oracle's expanding commitment to Life Sciences. For more information please contact: [email protected]
    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices?
    The classic answer: write a web service layer
    Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model, and business logic in the database, writing a custom web service on top of this just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with the SQL Server directly - why should mobile apps be any different?
    The better answer: the iSql SDK
    Working along the lines of "if you do something more than once, make it shared," we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much more suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex configuration, firewall configuration, and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between the SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication, and provides a clean API.
    OSQL in 25 lines of code
    As an example of how to use the API, I put together a very simple app that allowed the user to enter one or more SQL statements, and displayed the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video.
    Beta out now - your chance to give us your suggestions!
    We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no-one wants to spend their time writing boiler-plate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!
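    To make the "generic proxy" architecture concrete, here is a rough server-side sketch in Java. This is not the iSql SDK itself (whose implementation is not shown in the article, and whose client side is Objective-C); the class name, endpoint, and connection string below are all invented for illustration, and a real proxy would add authentication and statement whitelisting. It assumes a SQL Server JDBC driver is on the classpath.

        // Sketch of a generic SQL-over-HTTP proxy: one service forwards SQL
        // text to the database and returns rows, so each app does not need
        // its own bespoke web service layer.
        import com.sun.net.httpserver.HttpServer;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.nio.charset.StandardCharsets;
        import java.sql.*;

        public class GenericSqlProxy {
            public static void main(String[] args) throws Exception {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
                server.createContext("/query", exchange -> {
                    // The request body is the SQL text sent by the client library.
                    String sql = new String(exchange.getRequestBody().readAllBytes(),
                                            StandardCharsets.UTF_8);
                    StringBuilder out = new StringBuilder();
                    // Placeholder connection string; credentials would live in
                    // the proxy's configuration, never in the mobile app.
                    try (Connection c = DriverManager.getConnection(
                             "jdbc:sqlserver://localhost;databaseName=Demo;user=app;password=secret");
                         Statement s = c.createStatement();
                         ResultSet rs = s.executeQuery(sql)) {
                        ResultSetMetaData md = rs.getMetaData();
                        while (rs.next()) {
                            for (int i = 1; i <= md.getColumnCount(); i++) {
                                out.append(rs.getString(i))
                                   .append(i < md.getColumnCount() ? "\t" : "\n");
                            }
                        }
                    } catch (SQLException e) {
                        out.append("ERROR: ").append(e.getMessage());
                    }
                    byte[] body = out.toString().getBytes(StandardCharsets.UTF_8);
                    exchange.sendResponseHeaders(200, body.length);
                    try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
                });
                server.start();
            }
        }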

    Read the article

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained.
    Orchestration or Integration?
    A common thing to see with many people who are beginning either to build a new SOA based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it's important to remember that, although a hammer can be used to drive a screw into wood, that doesn't mean it's the best way to do it.
    Service Integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation - a simple product ordering system, for example, might integrate a stock checking service and a payment processing service.
    Process Orchestration, however, is generally a higher level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases.
    A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.
    BPEL vs BPMN
    For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it's certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line.
    Design for performance: read the complete article here.
    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Technorati Tags: C2B2, SOA best practice, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • As the current draft stands, what is the most significant change the "National Strategy for Trusted Identities in Cyberspace" will provoke?

    - by mfg
    A current draft of the "National Strategy for Trusted Identities in Cyberspace" has been posted by the Department of Homeland Security. This question is not asking about privacy or constitutionality, but about how this act will impact developers' business models and development strategies. When the post was made I was reminded of Jeff's November blog post regarding an internet driver's license. Whether that is a perfect model or not, both approaches are attempting to handle a shared problem (of both developers and end users): how do we establish an online identity?
    The question I ask here is: with respect to the various burdens that would be imposed on developers and users, what are some of the major, foreseeable implementation issues that will arise from the current U.S. Government's proposed solution?
    For a quick primer on the setup, jump to page 12 for infrastructure components; here are two stand-outs:
    "An Identity Provider (IDP) is responsible for the processes associated with enrolling a subject, and establishing and maintaining the digital identity associated with an individual or NPE. These processes include identity vetting and proofing, as well as revocation, suspension, and recovery of the digital identity. The IDP is responsible for issuing a credential, the information object or device used during a transaction to provide evidence of the subject's identity; it may also provide linkage to authority, roles, rights, privileges, and other attributes."
    "The credential can be stored on an identity medium, which is a device or object (physical or virtual) used for storing one or more credentials, claims, or attributes related to a subject. Identity media are widely available in many formats, such as smart cards, security chips embedded in PCs, cell phones, software based certificates, and USB devices. Selection of the appropriate credential is implementation specific and dependent on the risk tolerance of the participating entities."
    Here are the first considered actionable components of the draft:
    - Action 1: Designate a Federal Agency to Lead the Public/Private Sector Efforts Associated with Achieving the Goals of the Strategy
    - Action 2: Develop a Shared, Comprehensive Public/Private Sector Implementation Plan
    - Action 3: Accelerate the Expansion of Federal Services, Pilots, and Policies that Align with the Identity Ecosystem
    - Action 4: Work Among the Public/Private Sectors to Implement Enhanced Privacy Protections
    - Action 5: Coordinate the Development and Refinement of Risk Models and Interoperability Standards
    - Action 6: Address the Liability Concerns of Service Providers and Individuals
    - Action 7: Perform Outreach and Awareness Across all Stakeholders
    - Action 8: Continue Collaborating in International Efforts
    - Action 9: Identify Other Means to Drive Adoption of the Identity Ecosystem across the Nation

    Read the article

  • Play the Microsoft Game “Are You Certifiable?”

    - by Mysticgeek
    Want to know if you have what it takes to be certified by Microsoft? Today we check out an enjoyable way to practice and test your IT knowledge of Microsoft products. There are two modes: one where you log in with your Live account so you can save your progress and play additional levels (if you log in with your Live account, it's obvious that Microsoft wants to sell you some certification courses, so just be aware of that), and Guest Play, where you can only play one episode and scores are not saved.
    Playing the Game
    We'll take a look at Guest Play just so you get a sense of what the game is about. Enter a username and pick an avatar... then read the instructions. We won't go over them all here; there are a lot of options. Points are scored by correct answers and by the amount of time it takes to answer them, and you get vouchers to play a question before the answers are shown, etc.
    Once you start playing, you get certification questions. You can take as much time to read the question as you want, then hit the Answer button when you're ready. Now you have four answers to choose from... notice the time ticking down, so you want to try to answer as quickly as possible. After selecting the answer, you're told whether it is correct or not, then given an answer explanation along with your score.
    You can flag the topic so it comes up again, which is a good way to get repetition of various topics - that really helps when taking the cert tests. If you get an answer wrong, you still get an answer explanation, which is cool, so you can learn and better understand the topic.
    Conclusion
    This game is definitely not for everyone, only those who are curious or want a fun way to practice for Microsoft certifications. If you are interested in a cert from Microsoft, it's a fun way to practice up.
    Play Are You Certifiable?

    Read the article

  • Sharing on Github

    - by Alan
    Over the past couple weeks I have gotten a lot of help from StackOverflow users on a project, and rather than keep the finished product to myself I wanted to share it unencumbered by licenses, but I don't want there to be so much legwork during installation that users shy away from trying it. I am about to post it to Github and am choosing public domain licensing. I would like it to be super simple for users to make use of - just FTP it up and go. That being said, do I need to make sure I remove things like the JQuery file and other GPL / MIT licensed dependencies that I didn't write but that my code depends on? I haven't removed any copyright notices from the other code, and all of it is open source; it would just be nice if users could download everything at once while of course not trying to represent that I am the license holder of the dependencies.
    Inside my files are also some snippets. Do those have to be externalized with installation instructions, or can the project be posted as is? Here is an example: my nav.php file is 115 lines long and I have these at the top:
    <script type="text/javascript" src="./js/ddaccordion.js">
    /***********************************************
    * Accordion Content script - (c) Dynamic Drive DHTML code library (www.dynamicdrive.com)
    * Visit http://www.dynamicDrive.com for hundreds of DHTML scripts
    * This notice must stay intact for legal use
    ***********************************************/
    </script>
    <link href="css/admin.css" rel="stylesheet">
    <script type="text/javascript">
    ddaccordion.init({
      headerclass: "submenuheader", // Shared CSS class name of headers group
      contentclass: "submenu", // Shared CSS class name of contents group
      revealtype: "click", // Reveal content when user clicks or onmouseover the header? Valid value: "click", "clickgo", or "mouseover"
      mouseoverdelay: 200, // if revealtype="mouseover", set delay in milliseconds before header expands onMouseover
      collapseprev: false, // Collapse previous content (so only one open at any time)? true/false
      defaultexpanded: [], // index of content(s) open by default [index1, index2, etc]; [] denotes no content
      onemustopen: false, // Specify whether at least one header should be open always (so never all headers closed)
      animatedefault: false, // Should contents open by default be animated into view?
      persiststate: true, // persist state of opened contents within browser session?
      toggleclass: ["", ""], // Two CSS classes to be applied to the header when it's collapsed and expanded, respectively ["class1", "class2"]
      togglehtml: ["suffix", "<img src='./images/plus.gif' class='statusicon' />", "<img src='./images/minus.gif' class='statusicon' />"], // Additional HTML added to the header when it's collapsed and expanded, respectively ["position", "html1", "html2"] (see docs)
      animatespeed: "fast", // speed of animation: integer in milliseconds (ie: 200), or keywords "fast", "normal", or "slow"
      oninit: function(headers, expandedindices) {
        // custom code to run when headers have initialized - do nothing
      },
      onopenclose: function(header, index, state, isuseractivated) {
        // custom code to run whenever a header is opened or closed - do nothing
      }
    })
    </script>

    Read the article

  • Partner BI Applications 4-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    12th - 15th February 2012, Oracle Reading (UK) - REGISTER NOW
    This training will provide attendees with an in-depth working understanding of the architecture and the technical and functional content of the Oracle Business Intelligence Applications, while also providing an understanding of their installation, configuration and extension. The course will cover the following topics:
    - Overview of Oracle Business Intelligence Applications
    - Oracle BI Applications Fundamentals and Features
    - Configuring BI Applications for Oracle E-Business Suite
    - Understanding BI Applications Architecture
    - Fundamentals of BI Applications Security
    Prerequisites - This training is only for OPN member Partners.
    - Good understanding of basic data warehousing concepts
    - Hands-on experience in Oracle Business Intelligence Enterprise Edition
    - Hands-on experience in Informatica
    - Good understanding of any of the following Oracle EBS modules: General Ledger, Accounts Receivables, Accounts Payables
    - Some understanding of Oracle BI Applications is required (see Sales & Technical Tutorials for OBI, BI-Apps and Hyperion EPM)
    Please note that attendees are required to bring a laptop:
    - 4GB RAM, recognized by 64-bit Windows
    - 80GB free space on the hard drive or an external device
    - CPU Core 2 Duo or higher
    - Operating system: Windows 7, Windows XP or Windows 2003 (NOT ALLOWED with Windows Vista)
    - An administrator user account

    Read the article

  • Criteria for selecting timeout value?

    - by stijn
    Situation: a piece of software reads frames of data from a file in a separate thread and puts them on a queue, which is emptied by another thread. That second thread periodically checks the queue and fails rather gracefully - by showing an error message stating the read timed out - if no data is available within a certain amount of time. Initially this timeout was set to 200 ms. There was no real reasoning behind that constant, but it worked fine. We measured on a couple of machines, and for large data frames, larger than what would be used by customers, a read took about 20 ms with no other load on the machine.
    However, one customer now gets timeout errors now and then (on the second try all is fine; probably the file is in cache or the virus scanner leaves it alone). The programmers are like "well, yeah, but that customer's machine is full of cruft, virus scanners, tons of unneeded background processes, etc." Of course the customer is like "hey, this should just work, shouldn't it?" While the programmers have a point, since the software is heavy enough to justify the need for a dedicated machine, that does not make the customer happy. Increasing the timeout to 2 seconds, for example, solves the problem. But I'd like to make a proper decision now instead of just randomly picking some magic constant that is probably OK in 99% of cases.
    What criteria should be used for that? We could just pick a large number, but that feels wrong (and then we end up with a program that has the horrible behaviour of hanging when trying to read from a disconnected drive, for instance, whereas we'd rather make it show an error right away). Or we could make the timeout value a user setting, but then we need to document it clearly, and even then not all customers are tech-savvy enough to really understand what it does. Or we could wait until another customer reports timeouts and increase the value again. And again. Until we find something OK for 99.99% of the cases. Is there any good practice for this type of situation?
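    One middle ground between a magic constant and a user setting is to derive the timeout from what the software actually observes at runtime. Below is a minimal sketch in Java, assuming the read loop can record how long each read took; the window size, percentile, safety factor, floor and ceiling are illustrative placeholders, not recommendations:

        import java.util.ArrayDeque;
        import java.util.Arrays;
        import java.util.Deque;

        // Self-calibrating timeout: base the limit on the slowest reads actually
        // observed on this machine, with a hard floor and ceiling so a
        // disconnected drive still fails promptly.
        public class AdaptiveTimeout {
            private static final int WINDOW = 200;          // recent samples kept
            private static final long FLOOR_MS = 200;       // never go below this
            private static final long CEILING_MS = 5_000;   // never hang forever
            private static final double SAFETY_FACTOR = 10.0;

            private final Deque<Long> samples = new ArrayDeque<>();

            public synchronized void recordReadMillis(long elapsed) {
                if (samples.size() == WINDOW) samples.removeFirst();
                samples.addLast(elapsed);
            }

            // Timeout = safety factor times the 99th-percentile observed read.
            public synchronized long currentTimeoutMillis() {
                if (samples.isEmpty()) return FLOOR_MS;
                long[] sorted = samples.stream().mapToLong(Long::longValue).toArray();
                Arrays.sort(sorted);
                int idx = (int) Math.min(sorted.length - 1,
                                         Math.ceil(sorted.length * 0.99) - 1);
                long timeout = (long) (sorted[idx] * SAFETY_FACTOR);
                return Math.max(FLOOR_MS, Math.min(CEILING_MS, timeout));
            }
        }

    The design point is that the constant you have to justify shifts from "how slow can a read be?" (unknowable across customer machines) to "how much slower than its own usual worst case may this machine get before we call it an error?", which is a far easier question to defend.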

    Read the article

  • What does Ubuntu do when I signal undocking to a laptop?

    - by Seppo Erviälä
    It seems that Ubuntu runs some script or command when I signal that I want to undock my laptop by pressing the undock button on the dock. The most visible thing that happens is that the resolution on the external display is changed. After preparing for undock, my laptop is still connected to power, VGA output and audio jacks through the dock, but not to any USB devices or the optical drive. I'm running 11.04 on a ThinkPad X61s with an X6 UltraBase. What happens when I signal undocking?
    This is what dmesg says after pressing the undock button:
    [81459.990682] ata1.00: disabled
    [81459.990727] ata1.00: detaching (SCSI 0:0:0:0)
    [81459.991722] ACPI: \_SB_.GDCK - undocking
    [81460.009462] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
    [81460.020252] ehci_hcd 0000:00:1a.7: BAR 0: set to [mem 0xfe226c00-0xfe226fff] (PCI address [0xfe226c00-0xfe226fff])
    [81460.020265] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
    [81460.020281] ehci_hcd 0000:00:1a.7: restoring config space at offset 0xf (was 0x300, writing 0x30b)
    [81460.020309] ehci_hcd 0000:00:1a.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900102)
    [81460.020338] ehci_hcd 0000:00:1a.7: PME# disabled
    [81460.020346] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
    [81460.020352] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D0
    [81460.020363] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 22 (level, low) -> IRQ 22
    [81460.020372] ehci_hcd 0000:00:1a.7: setting latency timer to 64
    [81460.020432] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
    [81460.040071] ehci_hcd 0000:00:1d.7: BAR 0: set to [mem 0xfe227000-0xfe2273ff] (PCI address [0xfe227000-0xfe2273ff])
    [81460.040085] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
    [81460.040104] ehci_hcd 0000:00:1d.7: restoring config space at offset 0xf (was 0x400, writing 0x40b)
    [81460.040133] ehci_hcd 0000:00:1d.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900102)
    [81460.040170] ehci_hcd 0000:00:1d.7: PME# disabled
    [81460.040178] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
    [81460.040184] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D0
    [81460.040195] ehci_hcd 0000:00:1d.7: PCI INT D -> GSI 19 (level, low) -> IRQ 19
    [81460.040204] ehci_hcd 0000:00:1d.7: setting latency timer to 64
    [81460.040503] ehci_hcd 0000:00:1d.7: PCI INT D disabled
    [81460.040552] ehci_hcd 0000:00:1d.7: PME# enabled
    [81460.061657] ehci_hcd 0000:00:1d.7: power state changed by ACPI to D3
    [81460.200414] usb 1-4: USB disconnect, address 14
    [81462.220088] ehci_hcd 0000:00:1a.7: PCI INT C disabled
    [81462.220169] ehci_hcd 0000:00:1a.7: PME# enabled
    [81462.240115] ehci_hcd 0000:00:1a.7: power state changed by ACPI to D3

    Read the article

  • Segmentation and Targeting: Your Tools for Personalizing the Online Customer Experience

    - by Christie Flanagan
    In order to deliver the kind of personalized and engaging online experiences that customers expect today, look to segmentation and targeting. Segmentation is the practice of dividing your site visitors into distinct groups based on shared characteristics or behavior - for example, a segment may consist of site visitors who have visited pages related to a certain product type, or of visitors within the same age group or geographic area. The idea is that those within a segment are more likely to have common needs, problems or interests that can be served by your business. Targeting is the process by which the most relevant content, whether an article, promotion or other piece of content, is delivered to your visitors based on their segment membership.
    Segmentation and targeting are used to drive greater engagement on your web presence by delivering content to your site visitors that is tailored to their interests, behavior or other attributes. You may have a number of different goals for your segmentation and targeting efforts:
    - Up-sell or cross-sell to your customers
    - Conduct A/B testing on your offers and creative
    - Offer discounts, promotions or other incentives for the time and duration that you specify
    - Make it easier to find relevant information about products and services
    - Create a premium content model
    There are two different approaches you can take toward segmentation and targeting for your online customer experience initiatives. The first is more of a manual process, in which marketers manage the process of determining which segments to create and which content to target to those segments. The benefit of this approach is that it gives marketers a high level of control over the whole process, which works well when you have a thorough understanding of your segments and of which content is most likely to serve their needs. Tools for marketer-managed segmentation and targeting are often built right into your WEM platform, as they are with Oracle WebCenter Sites. The downside is that the more segments and content you have, the more time-consuming and complicated it can be to manage manually.
    The second approach relies on predictive intelligence to automate the segmentation and targeting process. This allows optimization of the process to occur in real time. This approach helps reduce the burden of manual segmentation and targeting and can result in new insights into segments that you may never have thought of on your own. It also provides you with the capability to quickly test new offers and promotions on your site. Predictive segmentation and targeting can be achieved by using Oracle WebCenter Sites and Oracle Real-Time Decisions together.
    Get a taste of how Oracle WebCenter Sites and Oracle Real-Time Decisions combine to deliver powerful capabilities for predictive segmentation and targeting by watching this on-demand webcast introducing Oracle WebCenter Sites 11g or by reading IDC's take on the latest release of Oracle's web experience management solution. Be sure to return to the Oracle WebCenter blog on Thursday for a closer look at how to optimize the online customer experience using these two products together.
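    To make the marketer-managed approach concrete, here is a toy sketch in Java of how segment rules and targeted content pair up. This illustrates the general idea only - it is not Oracle WebCenter Sites' actual API - and the visitor fields, segment names and content names are all invented:

        import java.util.List;
        import java.util.Set;
        import java.util.function.Predicate;

        // Each segment is a rule over the visitor profile plus the content to
        // serve; targeting returns the content of the first matching segment.
        public class SegmentTargeting {
            record Visitor(int age, String region, Set<String> pagesViewed) {}
            record Segment(String name, Predicate<Visitor> matches, String content) {}

            static final List<Segment> SEGMENTS = List.of(
                new Segment("camera-shoppers",
                            v -> v.pagesViewed().contains("/products/cameras"),
                            "promo-camera-upsell.html"),
                new Segment("young-uk",
                            v -> v.age() < 30 && v.region().equals("UK"),
                            "promo-student-discount.html"));

            static String targetContent(Visitor v) {
                return SEGMENTS.stream()
                               .filter(s -> s.matches().test(v))
                               .map(Segment::content)
                               .findFirst()
                               .orElse("default-home.html"); // fallback content
            }

            public static void main(String[] args) {
                Visitor v = new Visitor(24, "UK", Set.of("/products/cameras"));
                System.out.println(targetContent(v)); // promo-camera-upsell.html
            }
        }

    The predictive approach described next replaces the hand-written predicates above with rules learned from visitor behavior in real time.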

    Read the article

  • The Internet Key Wave MW833UP is not recognized in Ubuntu

    - by gio900
    I can't use my Onda MW833UP... :( Any advice? Here is something that someone else may understand:
    ~$ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 11.10
    Release: 11.10
    Codename: oneiric
    ~$ lsusb
    Bus 001 Device 005: ID 1ee8:0012
    ~$ dmesg
    [ 22.709475] cdc_acm 1-1:1.0: ttyACM0: USB ACM device
    [ 22.714856] usbcore: registered new interface driver cdc_acm
    [ 22.714866] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
    [ 23.520490] ieee80211 phy0: wl_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)
    [ 24.244530] usbcore: registered new interface driver usbserial
    [ 24.244575] USB Serial support registered for generic
    [ 24.244673] usbcore: registered new interface driver usbserial_generic
    [ 24.244681] usbserial: USB Serial Driver core
    [ 24.265879] USB Serial support registered for GSM modem (1-port)
    [ 24.285680] usbcore: registered new interface driver option
    [ 24.285691] option: v0.7.2:USB Driver for GSM modems
    [ 24.425878] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,commit=600
    [ 24.736540] EXT4-fs (sda8): re-mounted. Opts: commit=600
    [ 35.705796] Easy slow down manager: checking for SABI support.
    [ 35.706002] Easy slow down manager: SABI is supported (f5189)
    [ 36.060099] usbcore: deregistering interface driver uvcvideo
    [ 139.508061] CE: hpet increased min_delta_ns to 20113 nsec
    [ 6798.378917] usb 1-1: USB disconnect, device number 5
    [ 6809.108232] usb 1-1: new high speed USB device number 6 using ehci_hcd
    [ 6809.242692] scsi5 : usb-storage 1-1:1.0
    [ 6810.241257] scsi 5:0:0:0: CD-ROM Onda Datacard CD-ROM 0001 PQ: 0 ANSI: 0
    [ 6810.241841] scsi 5:0:0:1: Direct-Access Onda Storage 0001 PQ: 0 ANSI: 0
    [ 6810.271410] sr0: scsi3-mmc drive: 0x/0x caddy
    [ 6810.272099] sr 5:0:0:0: Attached scsi CD-ROM sr0
    [ 6810.272852] sr 5:0:0:0: Attached scsi generic sg1 type 5
    [ 6810.279954] sd 5:0:0:1: [sdb] Attached SCSI removable disk
    [ 6810.281210] sd 5:0:0:1: Attached scsi generic sg2 type 0
    [ 6810.380591] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00
    [ 6810.380617] sr: Sense Key : Hardware Error [current]
    [ 6810.380625] sr: Add. Sense: No additional sense information
    [ 6810.613937] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00
    [ 6810.613972] sr: Sense Key : Hardware Error [current]
    [ 6810.613984] sr: Add. Sense: No additional sense information
    [ 6810.673716] usb 1-1: USB disconnect, device number 6
    [ 6815.572142] usb 1-1: new high speed USB device number 7 using ehci_hcd
    [ 6815.706828] cdc_acm 1-1:1.0: ttyACM0: USB ACM device
    The last 3 lines are where I inserted the Internet key, then reconnected it.
    ~$ usb-devices
    T: Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 7 Spd=480 MxCh= 0
    D: Ver= 2.00 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1
    P: Vendor=1ee8 ProdID=0012 Rev=00.01
    S: Manufacturer=Onda
    S: Product=MW833UP
    S: SerialNumber=9230B35D870F9CB7AE684EACC5C12BE5EC33B26E
    C: #Ifs= 2 Cfg#= 1 Atr=a0 MxPwr=500mA
    I: If#= 0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=02 Prot=01 Driver=cdc_acm
    I: If#= 1 Alt= 0 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_acm
    Then there is /dev/ttyACM0. When the key is connected to the USB port, everything that will meant...

    Read the article

  • Any ideas on reducing lag in terrain generation?

    - by l5p4ngl312
    Ok so here's the deal. I've written an isometric engine that generates terrain based on camera values using 2D Perlin noise. I planned on doing 3D, but first I need to work out the lag issues I'm having. I will try to explain how I am doing this so that maybe someone can spot where I am going wrong. I know it should not be this laggy.
    There is the abstract class Block, which right now just contains render(). BlockGrass, etc. extend this class, and each has code in the render function to create a textured quad at the given position. Then there is the class Chunk, which has the functions Generate() and setBlocksInArea(). Generate() uses 2D Perlin noise to make a height map and stores the heights in a 2D array. It stores the positions of each block it generates in blockarray[x][y][z]. The chunks are 8x8x128.
    In the main game class there is a 3D array called blocksInArea. The blocks in this array are what gets rendered. When a chunk generates, it adds its blocks to this array at the correct index. It is designed like this so chunks can be saved to the hard drive (even though they aren't yet) while still allowing rendering optimizations that you wouldn't get if you rendered each chunk separately.
    Here's where the laggy part comes in: when the camera moves to a new chunk, a row of chunks generates on the end of the axis that the camera moved on. But it still has to move the other chunks up/down in the blocksInArea (render) array. It does this by calculating the new position in the array and calling Chunk.setBlocksInArea():
    for (int x = 0; x < 8; x++) {
        for (int y = 0; y < 8; y++) {
            int nx = x + (coordX - camCoordX) * 8;
            int ny = y + (coordY - camCoordY) * 8;
            for (int z = 0; z < height[x][y]; z++) {
                blockarray[x][y][z] = Game.blocksInArea[nx][ny][z];
            }
        }
    }
    My reasoning was that this would be much faster than doing the Perlin noise all over again, but there are still little spikes of lag when you move between chunks.
    Edit: Would it be possible to create a 3-dimensional array list so that shifting of chunks within the array would not be necessary?
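    On the closing question: rather than an array list, a common alternative is toroidal (ring-buffer) chunk addressing, where world chunk (cx, cy) always lives at slot (cx mod W, cy mod H). Crossing a chunk boundary then regenerates only the newly exposed row or column and never moves existing chunks. A rough Java sketch with invented names (this is not the poster's engine code):

        // World chunk (cx, cy) always maps to the same slot, so nothing is
        // ever shifted; stale slots are detected by comparing coordinates.
        public class ChunkRing {
            static final int W = 8, H = 8;        // chunks kept around the camera
            final Chunk[][] slots = new Chunk[W][H];

            static int wrap(int v, int m) {       // floor-mod so negatives work
                int r = v % m;
                return r < 0 ? r + m : r;
            }

            Chunk get(int cx, int cy) {
                Chunk c = slots[wrap(cx, W)][wrap(cy, H)];
                // A slot may still hold a chunk from the far side of the
                // window; regenerate only when the coordinates do not match.
                if (c == null || c.cx != cx || c.cy != cy) {
                    c = Chunk.generate(cx, cy);
                    slots[wrap(cx, W)][wrap(cy, H)] = c;
                }
                return c;
            }

            static class Chunk {
                final int cx, cy;
                Chunk(int cx, int cy) { this.cx = cx; this.cy = cy; }
                static Chunk generate(int cx, int cy) {
                    return new Chunk(cx, cy);     // heightmap generation goes here
                }
            }
        }

    The render loop then asks the ring for the chunks around the camera instead of indexing a shifted global array, which removes the per-boundary copy entirely; regeneration of the one new row can also be pushed onto the existing loader thread to smooth out the remaining spike.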

    Read the article

  • Lenovo W520 back usb port not working

    - by jaudette
    The USB port in the back of my laptop (on the right side when viewed from the user's perspective) is not working. Does anybody know if this port can be made to work, and what the port number is? Here is my lsusb, if it can help:
    % lsusb
    Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 003 Device 002: ID 046d:c01e Logitech, Inc. MX518 Optical Mouse
    Bus 003 Device 003: ID 046d:c318 Logitech, Inc. Illuminated Keyboard
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 001 Device 003: ID 0765:5001 X-Rite, Inc. Huey PRO Colorimeter
    Bus 001 Device 004: ID 147e:2016 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor
    Bus 001 Device 005: ID 0a5c:217f Broadcom Corp. Bluetooth Controller
    Bus 001 Device 006: ID 04f2:b217 Chicony Electronics Co., Ltd Lenovo Integrated Camera (0.3MP)
    Bus 002 Device 003: ID 17ef:1003 Lenovo Integrated Smart Card Reader
    I am running 12.10, upgraded from 12.04, but it did not work in 12.04 either. The two USB ports on the left work just fine.
    EDIT: I just updated my BIOS from 1.32 to 1.39; no change in behaviour. The port does not even power up my devices.
    EDIT 2: Booted up Windows, and the port is working. I went into the device manager and looked at the USB settings. I found my USB drive on Port 2, Hub 3; I just don't know how that relates to the bus and device numbers of Linux. In Windows, the smart card reader was on the USB hub located at port 1, hub 2; the fingerprint reader and Bluetooth were on the USB hub located at port 1, hub 1.
    EDIT 3: Went and looked at this SO post. Tried to look at my kern.log file with tail -f /var/log/kern.log. Got some activity when plugging/removing devices in other ports, but nothing happens when connecting a device to that port. It really looks disabled. Looked at /sys/bus/usb/devices/usb1/power/control for usb1-4 and they are all set to auto. As expected, usb4 has version 3.00; the others (usb1-usb3) are 2.00.

    Read the article

  • Is Innovation Dead?

    - by wulfers
    My question is: has innovation died? For large businesses that do not have vibrant and fearless leadership (see Apple under Steve Jobs), I think it has. If you look at the organizational charts of many of the large corporate megaliths, you will see a plethora of middle managers who are so risk averse that innovation (any change involves risk) is choked off, since there are no innovation champions in the middle layers. And innovation driven top down can only happen when you have a visionary in the top ranks, and that is also very rare.
    So where is actual innovation happening? At the bottom layer: the people who live in the trenches... the people who live for a challenge. So how can big business leverage this innovation layer? Remove the middle management layer. Provide an innovation champion who has an R&D budget and is tasked with working with the bottom layer of a company - the engineers, developers and business analysts that live on the edge (where the corporate tires meet the road).
    Here are two innovation failures I will tell you about, both at a company so risk averse it is starting to fail in its primary business ventures:
    This company initiated an innovation process several years ago. The process was driven companywide, with team managers being the central points of collection for innovative ideas. These managers were given no budget to do anything with these ideas, and there was no process or incentive for them to drive it within their teams. This lasted close to a year, and the innovation program slowly slipped into oblivion...
    A second example: this same company failed in an attempt to market a consumer product in a line where there was already a major market leader. This product was under development for several years and needed to provide some major device differentiation from the current market leader. This same company had a large Lead Technologist community made up of real innovators in all areas of technology. Did this same company leverage the skills and experience of this internal community? NO!!!
    So to wrap this up: if large companies really want to survive, then they need to start acting like small companies. Support those innovators and risk takers! Reward them by implementing their innovative ideas. Champion (from the top down) innovation (found at the bottom) in your companies. Remember, if you stand still you are really falling behind. Do it now! Take a risk!

    Read the article

  • Ask the Readers: Which Browser is a Must-Have for You on Linux?

    - by Asian Angel
    Linux systems all come with their own particular set of default browsers, but those browsers may not be the ones you want or need. This week we would like to know which browser (or browsers) you consider "must-have" on your Linux systems.
    As a general rule, many Linux distributions have Firefox and/or Konqueror among the default installation browsers. During this past year the open source browser Chromium has also been gaining a lot of traction as a default install for systems. For most people these browsers are the ones that they like best or feel work well enough to not make any changes. But there are other people who want more than what is available with a default system install. They may favor a particular browser for its extensibility or speed... others prefer a particular browser for its features or minimalist UI. Whatever your preferences may be, there is a browser out there to fit your style. Some people may even prefer to run only bleeding edge nightly releases, or to add them in alongside their current browsers. The important part is that you have choices when it comes to your Linux system.
    What we would like to know this week is which browser or browsers you make sure are always installed on your Linux systems. Does the Linux system you use already have your favorite browser installed as part of the default set? Maybe you are content with using the default set of browsers that come with the system. Or perhaps you prefer to rework the entire browser setup on your system by removing the defaults and adding your favorites. Let us know which browsers you consider "must have" and why in the comments!
    Note: You can make up to two selections on today's poll since most people will likely have more than one browser that they make certain is always installed.

    Read the article

  • Got Samba, Got PyNeighbourhood but still no connection. What else do I need?

    - by Frank A
    I am sure I had already hit post on this before, but then could only find it by backing through the browser. Was it deleted? Is the question too dumb? Sorry that I do not know the right jargon; I am just trying to get answers to my problem. I have reworded stuff a bit.
    This seems to be a number one requirement for lots of people, and 2 months on from setting up my Ubuntu PC, I am still unable to get a lasting connection in either direction. Adding a Windows PC to a network is so easy... just a few clicks and you get on with using it all. Using all-command approaches and modifying configuration files is hardly user friendly. Googling brings up thousands of solutions, but mostly they are too techy or assume the user is fully aware of how to use Linux. I do realise that there must be a lot of flavours for connecting to networks.
    So far I have installed Samba and fiddled with its config file. The day I did all that, it worked from XP to Ubuntu. When I came back two days later to transfer my data over, it would not connect, although the share does show up in Windows (XP) My Network Places. Today I installed PyNeighbourhood, and this shows the Ubuntu box and all of the shares I had created at some point on Ubuntu; it even shows this under the XP workgroup name. But instructions on setting the connection up seem to relate to an earlier version, and nothing seems to work there either. (I unshared most of those test folders, but they still show up here - but that is another question.)
    When I click on mount - I can only click on one on the Ubuntu machine; there is one with no name, which I assume to be my attempt to add one XP shared drive using the IP address - I get errors:
    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    mount error(6): No such device or address
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
    OK, I tried to find the manual referred to... only an old comment that a manual would be produced for future versions. I saw in another thread that Winbind is needed as well - or at least I assume as well? Totally lost again. Please help: what else needs to be installed to connect to Windows PCs on the network?

    Read the article

  • Achieving decoupling in Model classes

    - by Guven
    I am trying to test-drive (or at least write unit tests for) my model classes, but I noticed that my classes end up being too coupled. Since I can't break this coupling, writing unit tests is becoming harder and harder. To be more specific:
    Model classes: these are the classes that hold the data in my application. They pretty much resemble POJOs (plain old Java objects), but they also have some methods. The application is not too big, so I have around 15 model classes.
    Coupling: just to give an example, think of a simple case of Order Header - Order Item. The header knows the item and the item knows the header (it needs some information from the header for performing certain operations). Then, let's say there is the relationship between Order Item - Item Report. The item report needs the item as well. At this point, imagine writing tests for Item Report; you need to have an Order Header to carry out the tests. This is a simple case with 3 classes; things get more complicated with more classes.
    I can come up with decoupled classes when I design algorithms, persistence layers, UI interactions, etc., but with model classes I can't think of a way to separate them. They currently sit as one big chunk of classes that depend on each other. Here are some workarounds that I can think of:
    Data generators: I have a package that generates sample data for my model classes. For example, the OrderHeaderGenerator class creates OrderHeaders with some basic data in them. I use the OrderHeaderGenerator from my ItemReport unit tests so that I get an instance of the OrderHeader class. The problem is that these generators get complicated pretty fast, and then I also need to test the generators themselves - defeating the purpose a little bit.
    Interfaces instead of dependencies: I can come up with interfaces to get rid of the hard dependencies. For example, the OrderItem class would depend on the IOrderHeader interface. So, in my unit tests, I can easily mock the behaviour of an OrderHeader with a FakeOrderHeader class that implements the IOrderHeader interface (see the sketch below). The problem with this approach is the complexity that the model classes would end up having.
    Would you have other ideas on how to break this coupling in the model classes? Or how to make it easier to unit-test the model classes?
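    As an illustration of the second workaround, here is a minimal Java sketch of the interface-plus-fake idea. The single method and the price field are invented for the example; real model classes would carry more state:

        // OrderItem depends on the narrow IOrderHeader interface rather than
        // the concrete OrderHeader, so a test can substitute a trivial fake
        // instead of building the whole object graph.
        interface IOrderHeader {
            String currency();  // hypothetical: the one thing OrderItem needs
        }

        class OrderItem {
            private final IOrderHeader header;
            private final long priceCents;

            OrderItem(IOrderHeader header, long priceCents) {
                this.header = header;
                this.priceCents = priceCents;
            }

            String displayPrice() {
                return priceCents / 100.0 + " " + header.currency();
            }
        }

        // In the unit test, no real OrderHeader (and none of its own
        // dependencies) is needed:
        class FakeOrderHeader implements IOrderHeader {
            public String currency() { return "EUR"; }
        }

        class OrderItemTest {
            public static void main(String[] args) {
                OrderItem item = new OrderItem(new FakeOrderHeader(), 1999);
                System.out.println(item.displayPrice()); // 19.99 EUR
            }
        }

    One way to keep the interface count down is to extract an interface only for the dependencies that actually cross a test boundary, and to keep each interface as small as the consuming class allows, so the model classes gain one narrow contract each rather than a parallel hierarchy.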

    Read the article

  • WebCenter Customer Spotlight: Sberbank of Russia

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary
    Sberbank of Russia is the largest credit institution in Russia and the Commonwealth of Independent States (CIS), accounting for 27% of Russian banking assets and 26% of Russian banking capital. Sberbank of Russia needed to increase business efficiency and employee productivity due to the growth in its corporate clientele from 1.2 million to an estimated 1.6 million. Sberbank of Russia deployed Oracle's Siebel Customer Relationship Management (CRM) applications to create a single client view, optimize client communication, improve efficiency, and automate distressed asset processing. Based on Oracle WebCenter Content, it implemented an enterprise content management system for documents and unstructured content storage and search, which became an indispensable service across the organization and in the board room. Sberbank of Russia consolidated borrower information across the entire organization into a single repository to obtain, for the first time, a single view of the bank's borrowers, and with the implemented solution reduced the amount of bad debt significantly.
    Company Overview
    Sberbank of Russia is the largest credit institution in Russia and the Commonwealth of Independent States (CIS), accounting for 27% of Russian banking assets and 26% of Russian banking capital. In 2010, it ranked 43rd in the world for Tier 1 capital.
    Business Challenges
    Sberbank of Russia needed to increase business efficiency and employee productivity due to the growth in its corporate clientele from 1.2 million to an estimated 1.6 million. It also wanted to automate distressed asset management to reduce the number of corporate clients' bad debts. As part of its business strategy it wanted to drive high-quality, competitive customer services by simplifying client communication processes and enabling personnel to quickly access client information.
    Solution Deployed
    Sberbank of Russia deployed Oracle's Siebel Customer Relationship Management (CRM) applications to create a single client view, optimize client communication, improve efficiency, and automate distressed asset processing. Based on Oracle WebCenter Content, it implemented an enterprise content management system for documents and unstructured content storage and search, which became an indispensable service across the organization and in the board room.
    Business Results
    Sberbank of Russia consolidated borrower information across the entire organization into a single repository to obtain, for the first time, a single view of the bank's borrowers. It monitored 103,000 client transactions and 32,000 bank cards with credit collection issues (100% of Sberbank's bad borrowers), reducing the amount of bad debt significantly.
    "Innovation and client service are the foundation of our business strategy. Oracle's Siebel CRM applications helped advance our objectives by enabling us to deliver faster, more personalized service while managing and tracking distressed assets." - A.B. Sokolov, Head of Center of Business Administration and Customer Relationship Management, Sberbank of Russia
    Additional Information
    - Sberbank of Russia Customer Snapshot
    - Oracle WebCenter Content
    - Siebel Customer Relationship Management 8.1
    - Oracle Business Intelligence, Enterprise Edition 11g

    Read the article

  • Analysing SQLBits Feedback

    - by jamiet
    Earlier this week I received all the feedback that people offered on my session at SQLBits 7 in York - "SSIS Dataflow Performance Tuning" (the video is available online if you wish to see it).
    As you may have gathered from previous posts on this blog and my less-SQLy-focused Wordpress blog, I am a big fan of collecting and tracking both personal and public data, and session feedback lends itself very well to tracking because it is quantitative rather than qualitative; by that I mean attendees are invited to provide marks out of ten rather than (or, in the case of SQLBits, as well as) written comments. The SQLBits feedback is also useful because it uses a consistent format - the same questions are asked each time - and this means it is particularly easy to track whether the scores that people give are trending up or down. I suspect that somewhere the SQLBits organisers have a big Analysis Services cube (ok, perhaps it's an Excel pivot table) that allows them to analyse these scores per conference, speaker, track etc... and there's no reason that we as session speakers cannot do the same thing.
    To that end I have started to store my feedback in an Excel spreadsheet of my own which, in the interests of transparency, is available for public viewing (only a web browser required) on SkyDrive at http://cid-550f681dad532637.office.live.com/view.aspx/Public/Misc/Personal%20SQLBits%20Session%20Feedback.xlsx. I have used a pivot table to aggregate all that feedback and here is a screenshot:
    I am hereby making a public plea to the SQLBits organisers (on the off-chance that they are reading) to please keep the feedback format consistent in the future, and I encourage them to publish all of the feedback in an anonymised form. I would also encourage anyone doing conference speaking to track their conference feedback in the same way that I am doing, so that you get an insight into whether or not you are improving over time. It is not difficult to set up, and maintaining it as you do more sessions takes very little effort.
    Storing feedback data like this leads me to wider thoughts about well-known conventions and data format standardisation. Let's imagine a utopia where there was a standard set of questions for capturing session feedback that was leveraged at every conference regardless of subject matter, location or culture; that would give rise to immense cross-conference and cross-discipline analysis - the data analyst in me goes giddy at the thought of it. It is scenarios like this that drive my interest both in data formats such as iCalendar, microformats and RDF, and in emerging movements such as the semantic web and linked data, all things which I have written about in the past. I don't know whether we will ever reach the stage where every piece of data has structured, descriptive metadata associated with it, but I live in hope.
    @Jamiet

    Read the article

  • PASS Summit 2011: Save Money Now

    - by Bill Graziano
    Register by March 31st and save $200.  On April 1st we increase the price.  On July 1st we increase it again.  We have regular price bumps all the way through to the Summit.  You can save yourself $200 if you register by Thursday. In two years of marketing for PASS and a year of finance I’ve learned a fair bit about our pricing, why we do this and how you react to it.  Let me help you save some money! Price bumps drive registrations.  We see big spikes in the two weeks prior to a price increase.  Having a deadline with a cost attached is a great motivator to get people to take action. Registering early helps you and it helps PASS.  You get the exact same Summit at a cheaper rate.  PASS gets smoother cash flow and a better idea of how many people to expect.  We also get people that are already registered that will tell their friends about the conference. This tiered pricing lets us serve those that are very price conscious.  They can register early and take advantage of these discounts.  I know there are people that pay for this conference out of their own pockets.  This is a great way for those people to reduce the cost of the conference.  (And remember for next year that our cheapest pricing starts right after the Summit and usually goes up around the first of the year.) We also get big price bumps after we announce the program and the pre-conference sessions.  If you wrote down the 50 or so best known speakers in the SQL Server community I’m guessing we’ll have nearly all of them at the conference.  We did last year.  I expect we will this year too.  We’re going to have good sessions.  Why wait?  Register today. If you want to attend a pre-conference session you can always add it to your registration later.  Pre-con prices don’t change.  It’s very easy to update your registration and add a pre-conference session later. I want as many people as possible to attend the Summit.  It’s been a great experience for me and I hope it will be for you.  And if you are going to go, do yourself a favor and save some money.  Register today!

    Read the article

  • RTS Game Style Application [closed]

    - by Daniel Wynand van Wyk
    My question may seem somewhat odd, but I hope that my specifications will clarify EXACTLY what it is that I am after. I need some help choosing the right tooling for a particular endeavour. My background is in desktop application development and large back-end systems. I have worked primarily on the Microsoft stack using C# and the .Net framework.

    My goal is to develop a 2D, RTS style, interactive office simulation. The simulation will model various office spaces, office equipment, employees and their interactions with one another. The idea is to abstract the concept of an office completely. Under the hood the application will do many things that are nothing like a game. This includes P2P networking, VPN tunnelling, streaming video, instant messaging, document collaboration, remote screen sharing, file-sharing, virus scanning, VOIP, document scanning, faxing, emailing, distributed computing, content management and much more! A somewhat similar thing has been attempted by IBM, where they created a virtual office in Second Life. If their attempt was a game, the game-play would be notably horrible, to say the least!

    The users/players will drive and control my application through the various objects modelled in the simulation. A single application capable of performing all of these various tasks would be a nightmare to navigate for even the most expert user. Using the concept of a game, I can easily separate functionality by assigning it to objects that relate one-to-one with their real-world counterparts. This can greatly simplify computing for novice users, with many added benefits in terms of visibility, transparency of process and centralized configuration. My hope is to make complex computing tasks accessible to all kinds of users and to greatly reduce the cognitive load associated with using the many different utilities and applications inside office settings. The complexity is therefore limited to the complexity of the space in which you find yourself.

    I want the application to target as many platforms as possible and run on computers that have no accelerated graphics capabilities. The simulation won't contain any of the fancy eye-candy you find in modern games; to the contrary, my "game" will purposefully be clean and simple. The closest thing I could imagine would be an old game like "Theme Hospital" or the first instalment of "The Sims". All the content will be pre-created and not user-generated like Second Life. New functionality will be added via a plugin system (a minimal sketch of what that contract might look like follows below).

    Given my background and the nature of my "game", I would like to spend most of my time writing code that does not have to do with the simulated office, as the "game" is really just a glorified application menu. I have done much reading about existing engines, frameworks and tools. I need the help of an experienced game developer who has tried and tested various products over the years and who can guide me in the right direction given my very particular needs. I would appreciate any help I can get!
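    To make the plugin idea concrete, here is one minimal shape such a contract could take in C#. Every name here is hypothetical – this is a sketch of the approach, not an existing framework – but it shows how each simulated office object can wrap a real-world capability behind a single interaction point:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical plugin contract: each simulated office object wraps one capability.
public interface IOfficeObjectPlugin
{
    string ObjectName { get; }     // e.g. "Fax Machine", "Scanner"
    void Interact(Employee user);  // called when a player uses the object
}

public class Employee
{
    public string Name { get; set; } = "";
}

// The simulation only knows about the interface, never the capability itself.
public class Simulation
{
    private readonly Dictionary<string, IOfficeObjectPlugin> objects = new();

    public void Register(IOfficeObjectPlugin plugin) => objects[plugin.ObjectName] = plugin;

    public void Use(string objectName, Employee user)
    {
        if (objects.TryGetValue(objectName, out var plugin))
            plugin.Interact(user);
    }
}

// Example plugin: the "fax machine" hides all the real faxing code behind Interact.
public class FaxMachinePlugin : IOfficeObjectPlugin
{
    public string ObjectName => "Fax Machine";
    public void Interact(Employee user) =>
        Console.WriteLine($"{user.Name} opens the fax dialog..."); // real faxing would go here
}
```

    The game layer then stays a thin shell: adding VOIP or virus scanning later means shipping a new plugin, not touching the simulation.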

    Read the article

  • What arguments can I use to "sell" the BDD concept to a team reluctant to adopt it?

    - by S.Robins
    I am a bit of a vocal proponent of the BDD methodology. I've been applying BDD for a couple of years now, and have adopted StoryQ as my framework of choice when developing DotNet applications. Even though I have been unit testing for many years, and had previously shifted to a test-first approach, I've found that I get much more value out of using a BDD framework, because my tests capture the intent of the requirements in relatively clear English within my code, and because my tests can execute multiple assertions without ending the test halfway through - meaning I can see which specific assertions pass/fail at a glance without debugging to prove it.

    This has really been the tip of the iceberg for me, as I've also noticed that I am able to debug both test and implementation code in a more targeted manner, with the result that my productivity has grown significantly, and that I can more easily determine where a failure occurs if a problem happens to make it all the way to the integration build, due to the output that makes its way into the build logs. Further, the StoryQ API has a lovely fluent syntax (sketched below) that is easy to learn and which can be applied in an extraordinary number of ways, requiring no external dependencies in order to use it.

    So with all of these benefits, you would think it an easy task to introduce the concept to the rest of the team. Unfortunately, the other team members are reluctant to even look at StoryQ to evaluate it properly (let alone entertain the idea of applying BDD), and have convinced each other to try to remove a number of StoryQ elements from our own core testing framework, even though they originally supported the use of StoryQ and it doesn't impact on any other part of our testing system. Doing so would end up increasing my workload significantly overall and really goes against the grain, as I am convinced through practical experience that it is a better way to work in a test-first manner in our particular working environment, and can only lead to greater improvements in the quality of our software, given I've found it easier to stick with test first using BDD.

    So the question really comes down to the following:

    1. What arguments can I use to really drive the point home that it would be better to use StoryQ, or at the very least apply the BDD methodology?
    2. Can you point me to any anecdotal evidence that I can use to support my argument to adopt BDD as our standard method of choice?
    3. What counter arguments can you think of that could suggest that my wish to convert the team efforts to BDD might be in error? Yes, I'm happy to be proven wrong provided the argument is a sound one.

    NOTE: I am not advocating that we rewrite our tests in their entirety, but rather to simply start working in a different manner for all future testing work.
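    For anyone on the team willing to spend five minutes evaluating it, a StoryQ specification reads roughly like the sketch below. I am quoting the fluent API from memory, so treat the exact method names and overloads as illustrative; the step methods are placeholders you would implement against the system under test:

```csharp
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using StoryQ; // fluent API quoted from memory; check the StoryQ docs for exact signatures

[TestClass]
public class WithdrawCashSpecs
{
    [TestMethod]
    public void AccountHolderWithdrawsCash()
    {
        // The chain reads as plain English and is reported step by step.
        new Story("Withdraw cash")
            .InOrderTo("get money when the bank is closed")
            .AsA("account holder")
            .IWant("to withdraw cash from an ATM")
            .WithScenario("account has sufficient funds")
                .Given(TheAccountBalanceIs, 100)
                .When(TheAccountHolderRequests, 20)
                .Then(TheAtmShouldDispense, 20)
            .ExecuteWithReport(MethodBase.GetCurrentMethod());
    }

    // Placeholder steps - in a real fixture these would arrange, act and
    // assert against the system under test.
    private void TheAccountBalanceIs(int balance) { }
    private void TheAccountHolderRequests(int amount) { }
    private void TheAtmShouldDispense(int amount) { }
}
```

    The selling point is right there on the page: the test is its own English-language requirements document, and each Then step reports pass/fail individually instead of aborting at the first failed assertion.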

    Read the article

  • Partners - Steer Clear of the Unknown with Oracle Enterprise Manager 12c and Plug-in Extensibility

    - by Get_Specialized!
    Imagine that you just purchased a new car, and as you entered the vehicle to drive it home you found there was no steering wheel. Upon asking the dealer, you were told that it was an option, and that you had a choice, now or later, of a variety of aftermarket steering wheels that fit a wide variety of automobiles. If you expected the car to already have a steering wheel designed to manage your transportation solution, you might wonder why someone would offer an application solution whose management is not offered as an option or does not come as part of the solution.

    Using management designed to support the underlying technology, and that can provide management and support for your own Oracle technology based solution, can benefit your business in a variety of ways: increased customer satisfaction, a reduction in support calls, and margin and revenue growth. Sometimes, when something is not included or recommended, customers take their own path, which may not be optimal when using your solution and can later have a negative impact on their satisfaction or, worse, on their business.

    As an Oracle Partner, you can reduce your research, certification, and time to market by selecting and offering management designed, developed, and supported for Oracle product technology by Oracle with Oracle Enterprise Manager 12c. For partners with solution-specific management needs, or those seeking to differentiate themselves in the market, Enterprise Manager 12c is extensible, giving partners the opportunity to create their own plug-ins, along with a validation program for them. Today a number of examples by partners are available, with more on the way from such partners as NetApp for NetApp storage and Blue Medora for VMware vSphere. To review and consider further for applicability to your solution, visit the Oracle PartnerNetwork KnowledgeZone for Enterprise Manager under the Develop Tab: http://www.oracle.com/partners/goto/enterprisemanager

    Read the article
