Search Results



  • Bowing to User Experience

    As a consumer of geeky news, it is hard to check my Google Reader without running into two or three posts about Apple's iPad, and in particular the changes to the developer guidelines, which seemingly restrict developers to using Apple's Xcode tool and Objective-C language for iPad apps. One of the alternatives to Objective-C affected is MonoTouch, an option with some appeal to me as it is based on the Mono implementation of C#. "Seemingly restricted" is the key phrase here; as far as I can tell, no official announcement has been made about its fate. For more details around MonoTouch for iPhone OS, check out Miguel de Icaza's post: http://tirania.org/blog/archive/2010/Apr-28.html. These restrictions have provoked some outrage, as the perception is that Apple is arrogantly restricting developers' freedom to create applications as they choose, and perhaps unwittingly shortchanging iPhone/iPad users who won't benefit from these now never-to-be-made great applications. Apple's response has mostly been to say that they are concentrating on providing a certain user experience to their customers, and to do this, they insist everyone use the tools they approve. That isn't a surprising line of reasoning, given that Apple already restricts the hardware used and the content of the apps. The vogue term for this approach is "curated," as in a benevolent museum director selecting only the finest artifacts for display, or a wise gardener arranging the plants in a garden just so. If this is what a curated experience is like, it is hard to argue that consumers are not responding. My iPhone is probably the most satisfying piece of technology I own. Coming from the Razr, it really was a revolution in how the form factor, interface and user experience all tied together. While the curated approach reinvented the smartphone genre, it is easy to forget that this is not a new approach for Apple. MacBooks and Macs are Apple hardware that run Apple software. And they've been successful, but not quite in the same way as the iPhone or iPad (based on early indications). Why not? Well, a curated approach can only be wildly successful if the curator a) makes the right choices and b) offers choices that no one else has. Although its advantages are eroding, the iPhone was different from other phones: a unique, focused, touch-centric experience. The iPad is an attempt to define another category of computing. Macs and MacBooks are great devices, but they are not a fundamentally different user experience from a PC; you still have windows, file folders, mouse and keyboard, and similar applications. So the big question for Apple is: can they hold on to their market advantage, continue innovating in user experience, and stay on top? Or are they going to be like Xerox, where the rest of the world says thank you for the windows metaphor, now let me implement that better? It will be exciting to watch, with Android already a viable competitor and Microsoft readying Windows Phone 7. And to close the loop back to the restrictions on developing for iPhone OS: at this point the main target appears to be Adobe and Adobe Flash. Apple's calculation is that a) they don't need those developers, or b) the developers they want will learn Apple's stuff anyway. My guess is that they are correct; as much as I like the idea of developers having more options, I am not going to buy a competitor's product to spite Apple unless that product is just as usable. For a non-technical consumer, I don't know that this conversation even factors into the buying decision.
If it did, we'd be talking about how Microsoft is trying to retake a slice of market share from the behemoth that is Linux.

    Read the article

  • Get More From Your Service Request

    - by Get Proactive Customer Adoption Team
    Leveraging Service Request Best Practices: use best practices to get there faster. In the daily conversations I have with customers, they sometimes express frustration over their Service Requests. They often feel powerless to make needed changes, so their sense of frustration grows. To help you avoid some of the frustration you might feel in dealing with your Service Requests (SR), here are a few pointers that come from our best practice discussions.

    Be proactive. If you can anticipate some of the questions that Support will ask, or the information they may need, try to provide this up front, when you log the SR. This could be output from the Remote Diagnostic Agent (RDA), if this is a database issue, or the output from another diagnostic tool, if you're an EBS customer. Any information you can supply that helps us understand the situation better helps us resolve the issue sooner. As you use some of these tools proactively, you might even find the solution to the problem before you log an SR!

    Be right. Make sure you have the correct severity level. Since you select the initial severity level, it's easy to accept the default without considering how significant this may be. Business impact is the driving factor, so make sure you take a moment to select the severity level that is appropriate to the situation. Also, make sure you ask us to change the severity level, should the situation dictate.

    Be responsive! If this is an important issue to you, quickly follow up on any action plan submitted to you by Oracle Support. The support engineer assigned to your Service Request will be able to move the issue forward more aggressively when they have the needed information. This is crucial in resolving your issues in a timely manner.

    Be thorough. If there are five questions in the action plan, make sure you provide an answer for all five questions in one response, rather than trickling them in one at a time. This will allow the engineer to look at all of the information as a whole, and to avoid multiple trips to your SR, saving valuable time and getting you a resolution sooner.

    Be your own advocate! You know your situation best; make sure Oracle Support understands both how and why this issue is important to you and your company. Use the escalation process if you're concerned that your SR isn't going in the right direction, moving at the right pace, or being handled by the right person. Don't wait until you're frustrated and angry. An escalation is as simple as a quick conversation on the phone and can be amazingly effective in getting your issues back on track. The support manager you speak with is empowered to make any needed changes.

    Be our partner. You can make your support experience better. When your SR has been resolved, you may receive a survey request. This is intended to get your feedback about how your SR went and what we can do to improve your overall support experience.

    Oracle Support is here to help you. Our goal with any Service Request is to provide the best possible solution as quickly as possible. With your help, we'll be able to do this with your Service Request too.

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn't take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and oftentimes it is really important that a DBA understands not just the basics of how to perform a task, but why we do the task, and how it works. As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if no light shines forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car, take them to the local home store, and instruct them to buy a replacement. Confronted with a 40-foot display of light bulbs, how will they decide which of the hundreds of bulbs, of different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight, so they don't have to ask for help the next time. Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own "40-foot display of light bulbs," in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Without depth, we are reduced to guesswork, trying different "bulbs" over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way. If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that "works", but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the "key lookup" operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query. My personal theory is that, as professionals, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or the obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it's essential that I know that choosing an integer over Decimal(8,0) will likely offer performance benefits.
    It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values are kept to the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!
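
    To make that closing example concrete, here is a minimal sketch of the kind of table definition being described, wrapped in ADO.NET so it can be run against a scratch database. The table, column and connection-string names are illustrative, not from the article:

        using System.Data.SqlClient;

        class CheckConstraintDemo
        {
            static void Main()
            {
                // Illustrative connection string; point it at a throwaway database.
                const string connectionString = @"Server=.;Database=Scratch;Integrated Security=true";

                // int is 4 bytes and handled natively by the processor; Decimal(8,0) is wider
                // and costlier to process, even though both can hold 0..99999999.
                // The CHECK constraint then enforces the intended range on top of the type.
                const string ddl = @"
                    CREATE TABLE dbo.Measurement
                    (
                        MeasurementId int NOT NULL PRIMARY KEY,
                        Reading       int NOT NULL
                            CONSTRAINT CHK_Measurement_Reading
                                CHECK (Reading BETWEEN 0 AND 99999999)
                    );";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(ddl, connection))
                {
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }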

    Read the article

  • TechEd 2010 Important Events

    If you'll be attending TechEd in New Orleans in a couple of weeks, make sure the following are all on your calendar:

    Party with Palermo, TechEd 2010 Edition
    Sunday 6 June 2010, 7:30-9:30pm Central Time
    RSVP and see who else is coming here. The party takes place from 7:30pm to 9:30pm Central (local) time, and includes a full meal, free swag, and prizes. The event is being held at Jimmy Buffett's Margaritaville, located at 1104 Decatur Street.

    Developer Practices Session: DPR304 FAIL: Anti-Patterns and Worst Practices
    Monday 7 June 2010, 4:30pm-5:45pm Central Time, Room 276
    Come to my session and hear about what NOT to do on your software project. Hear my own and others' war stories and lessons learned. You'll laugh, you'll cry, you'll realize you're a much better developer than a lot of folks out there. Here's the official description: Everybody likes to talk about best practices, tips, and tricks, but often it is by analyzing failures that we learn from our own and others' mistakes. In this session, Steve describes various anti-patterns and worst practices in software development that he has encountered in his own experience or learned about from other experts in the field, along with advice on recognizing and avoiding them. View DPR304 in the TechEd Session Catalog >>

    Exhibition Hall Reception
    Monday 7 June 2010, 5:45pm-9pm
    Immediately following my session, come meet the show's exhibitors, win prizes, and enjoy plenty of food and drink. Always a good time.

    Party: Geekfest
    Tuesday 8 June, 8pm-11pm Central Time, Pat O'Brien's
    Let's face it, going to a technical conference is good for your career, but it's not a whole lot of fun. You need an outlet. You need to have fun. Cheap beer and lousy pizza (with a New Orleans twist). We are bringing back GeekFest! Join us at Pat O'Brien's for a night of gumbo, beer and hurricanes. There are limited invitations available, so what are you waiting for? If you are attending the TechEd 2010 conference and you are a developer, you are invited. To register, pick up your "duck" ticket (and wristband) in the TechEd Technical Learning Center (TLC) at the Developer Tools & Languages (DEV) information desk. You must have a wristband to get in. Tuesday, June 8th, from 8pm to 11pm, at Pat O'Brien's New Orleans, 624 Bourbon Street, New Orleans, LA 70130.

    Closing Party at Mardi Gras World
    Thursday 10 June, 7:30pm-10pm Central Time
    Join us for the Closing Party and enjoy great food, beverages, and the excitement of New Orleans at Mardi Gras World. The colors, the lights, the music, the joie de vivre: it's all here. Learn more >>

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.

    Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.

    Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization. In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity. The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics: In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks: Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Is Your Company Social on the Inside?

    - by Mike Stiles
    As we talk about the extension of social from an outbound-facing marketing tool to a platform that will reach across the entire enterprise, servicing multiple functions of that enterprise, it might be time to take a look at how social can be effectively employed for internal communications. Remember the printed company newsletter? Yeah, nobody reads it. Remember the emailed company newsletter? Yeah, nobody reads it. Why not? Shouldn't your employees care about the company more than anything else in life and be voraciously hungry for any information related to it? The more realistic prospect is that a company's employees don't behave much differently at work where information is concerned than they do in their personal lives. They "tune in" to information that's immediately relevant to them, that piques their interest, and/or that's presented in a visually engaging way. That currently makes an internal social platform the ideal way to communicate within the organization. It not only facilitates more immediate, more targeted (and thus more relevant) messaging from the company out to employees, it also sets a stage for employees to communicate with each other and efficiently get answers to questions from peers. It's a collaboration tool on steroids. If you build such an internal social portal and you do it right, will employees use it? Considering social media has officially been declared more addictive than cigarettes, booze and sex…probably. But what does it mean to do an internal social platform "right"? The bar has been set pretty high. Your employees are used to Twitter and Facebook, and would roll their eyes at anything less simple or harder to navigate than those. All the Facebook best practices would apply to your internal social platform as well, including the importance of managing posting frequency, using photos and video, moderation & response, etc. And don't worry, you won't be the first to jump in. WPP's global digital agency Possible has its own social network called Colab. Nestle has "The Nest." Red Robin's got one. I myself got an in-depth look at McGraw-Hill's internal social platform at Blogwell NYC. Some of these companies are building their own platforms; others are buying them off the shelf or customizing readymade solutions. But you won't be the last either. Prescient Digital Media and the IABC found that 39% of companies don't offer employees any social tools. Not a social network, not discussion forums, not even IM. And a great many continue to ban the use of Facebook and Twitter on the premises. That's pretty astonishing, since social has become as essential a modern-day communications tool as the telephone. But such holdouts will pay a big price for being mired in fear while competitors exploit social connections unchallenged. Fish where the fish are. If social has become the way people communicate and take in information, let that be the way communication is trafficked in the organization.

    Read the article

  • Setting up your project

    - by ssoolsma
    Before any coding, we first make sure that the project is set up correctly. (Please note that this blog is all about how I do it, and in case I forget, I can return here and read how I used to do it. Maybe you'll come up with some ideas for yourself too.) In this series we will create a minigolf scoring card. Please note that we will eventually create a fully functional application which you cannot use unless you pay me a lot of money! (And I mean a lot!)

    1. Download and install the appropriate tools. Download the following:
    - TestDriven.Net (free version on the bottom of the download page)
    - nUnit
    TestDriven is a Visual Studio plugin for many unit test frameworks, which allows you to run / test code very easily with a right click -> Run Test. nUnit is the test framework of choice; it works seamlessly with TestDriven.

    2. Create your project. Fire up Visual Studio and create your DataAccess project: MidgetWidget.DataAccess is its name. (I chose MidgetWidget as the name for the solution.) Also, make sure that the MidgetWidget.DataAccess project is a C# Class Library. Hit OK to create the solution. (In the above example the checkbox "Create directory for solution" is checked, because I'm pointing the location to the root of c:\development, where I want MidgetWidget to be created.)

    3. Set up the database. You should have thought about a database when you reach this point. Let's assume that you've created a database as follows:
    Table name: LoginKey. Fields: Id (PK), KeyName (uniqueidentifier), StartDate (datetime), EndDate (datetime)
    Table name: Party. Fields: Id (PK), Key (uniqueidentifier), Created (datetime)
    Table name: Person. Fields: Id (PK), PartyId (int), Name (varchar)
    Table name: Score. Fields: Id (PK), TrackId (int), PersonId (int), Strokes (int)
    Table name: Track. Fields: Id (PK), Name (varchar)
    A few things to take note of about the database setup: I've singularized all table names (not "Persons" but "Person"). This is because in a few minutes, when this is in our code, we refer to the database objects as single rows. We retrieve a single Person, not a single "Persons", from the database.

    4. Create the entity framework model. In your solution tree create a new folder and call it "DataModel". Inside this folder: Add New Item -> choose ADO.NET Entity Data Model. Name it "Entities.edmx" and hit "Add". Once the edmx is added, open it (double click), right click the white area and choose "Update model from database…". Now point it to your database (I include sensitive data in the connection string) and select all the tables. After that, hit "Finish" and let the Entity Framework do its code generation. Et voilà: after a few seconds you have set up your entity model. A quick smoke test of the result is sketched below. Next post we will start building the data access! I'm off to the beach.
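
    To check the generated model end to end, a first nUnit test can simply query the context. A minimal sketch, assuming the generated ObjectContext is named Entities and exposes a Person entity set as described above (the test project name and connection details are illustrative):

        using System.Linq;
        using NUnit.Framework;

        namespace MidgetWidget.DataAccess.Tests
        {
            [TestFixture]
            public class EntityModelSmokeTest
            {
                [Test]
                public void CanQueryPersonsWithoutThrowing()
                {
                    // Entities is the ObjectContext generated from Entities.edmx
                    using (var context = new Entities())
                    {
                        // Materializing a few rows proves the mapping and the connection both work.
                        var people = context.Person.Take(5).ToList();
                        Assert.IsNotNull(people);
                    }
                }
            }
        }

    Right-click inside the test and choose Run Test (TestDriven.Net) to execute it.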

    Read the article

  • Using Recursive SQL and XML trick to PIVOT(OK, concat) a "Document Folder Structure Relationship" table, works like MySQL GROUP_CONCAT

    - by Kevin Shyr
    I'm in the process of building out a Data Warehouse and encountered this issue along the way.In the environment, there is a table that stores all the folders with the individual level.  For example, if a document is created here:{App Path}\Level 1\Level 2\Level 3\{document}, then the DocumentFolder table would look like this:IDID_ParentFolderName1NULLLevel 121Level 232Level 3To my understanding, the table was built so that:Each proposal can have multiple documents stored at various locationsDifferent users working on the proposal will have different access level to the folder; if one user is assigned access to a folder level, she/he can see all the sub folders and their content.Now we understand from an application point of view why this table was built this way.  But you can quickly see the pain this causes the report writer to show a document link on the report.  I wasn't surprised to find the report query had 5 self outer joins, which is at the mercy of nobody creating a document that is buried 6 levels deep, and not to mention the degradation in performance.With the help of 2 posts (at the end of this post), I was able to come up with this solution:Use recursive SQL to build out the folder pathUse SQL XML trick to concat the strings.Code (a reminder, I built this code in a stored procedure.  If you copy the syntax into a simple query window and execute, you'll get an incorrect syntax error) Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin-top:0in; mso-para-margin-right:0in; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0in; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} -- Get all folders and group them by the original DocumentFolderID in PTSDocument table;WITH DocFoldersByDocFolderID(PTSDocumentFolderID_Original, PTSDocumentFolderID_Parent, sDocumentFolder, nLevel)AS (-- first member      SELECT 'PTSDocumentFolderID_Original' = d1.PTSDocumentFolderID            , PTSDocumentFolderID_Parent            , 'sDocumentFolder' = sName            , 'nLevel' = CONVERT(INT, 1000000)      FROM (SELECT DISTINCT PTSDocumentFolderID                  FROM dbo.PTSDocument_DY WITH(READPAST)            ) AS d1            INNER JOIN dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)                  ON d1.PTSDocumentFolderID = df1.PTSDocumentFolderID      UNION ALL      -- recursive      SELECT ddf1.PTSDocumentFolderID_Original            , df1.PTSDocumentFolderID_Parent            , 'sDocumentFolder' = df1.sName            , 'nLevel' = ddf1.nLevel - 1      FROM dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)            INNER JOIN DocFoldersByDocFolderID AS ddf1                  ON df1.PTSDocumentFolderID = ddf1.PTSDocumentFolderID_Parent)-- Flatten out folder path, DocFolderSingleByDocFolderID(PTSDocumentFolderID_Original, sDocumentFolder)AS (SELECT dfbdf.PTSDocumentFolderID_Original            , 'sDocumentFolder' = STUFF((SELECT '\' + sDocumentFolder                        
                 FROM DocFoldersByDocFolderID                                         WHERE (PTSDocumentFolderID_Original = dfbdf.PTSDocumentFolderID_Original)                                         ORDER BY PTSDocumentFolderID_Original, nLevel                                         FOR XML PATH ('')),1,1,'')      FROM DocFoldersByDocFolderID AS dfbdf      GROUP BY dfbdf.PTSDocumentFolderID_Original) And voila, I use the second CTE to join back to my original query (which is now a CTE for Source as we can now use MERGE to do INSERT and UPDATE at the same time).Each part of this solution would not solve the problem by itself because:If I don't use recursion, I cannot build out the path properly.  If I use the XML trick only, then I don't have the originating folder ID info that I need to link to the document.If I don't use the XML trick, then I don't have one row per document to show in the report.I could conceivably do this in the report function, but I'd rather not deal with the beginning or ending backslash and how to attach the document name.PIVOT doesn't do strings and UNPIVOT runs into the same problem as the above.I'm excited that each version of SQL server provides us new tools to solve old problems and/or enables us to solve problems in a more elegant wayThe 2 posts that helped me along:Recursive Queries Using Common Table ExpressionHow to use GROUP BY to concatenate strings in SQL server?

    Read the article

  • Cannot get ATI Drivers installed

    - by bittoast67
    I am trying to install the Catalyst driver. The best I can get is a strange resolution problem, and Firefox acts all wonky. The worst I have gotten is low graphics mode, in which case I just reinstall Ubuntu. I have an HP Pavilion dv7 laptop with a Radeon HD 3200. I plan to try again with a fresh install of Ubuntu 12.04.3, as I have heard it's the most compatible. This is what I have done:

    I have tried just the easy way of going to Synaptic and installing the drivers that way: the fglrx package (not the fglrx update). And if memory serves, I think that boots me into low graphics mode. So, fresh install of Ubuntu and tried again.

    I have done everything a couple of times from this site (http://wiki.cchtml.com/index.php/Ubuntu_Precise_Installation_Guide), following every instruction to a T. That gets me something, such as a lowered fan speed and a much cooler computer, but I also lose most of my resolution, and Displays says it's the best resolution I can get. I also have a very screwy Firefox. Using this method I can see AMD Catalyst Control Center in my Dash (two of them really, one administrator and one not), but when I try to open it, it says no AMD driver detected. So again, Ubuntu reinstall.

    I have tried the GUI method with the legacy driver I got from AMD's site. It runs through smoothly, and at the very end, after I exit the installer, it gives me an error.

    I have also tried various other methods using the terminal, as well as various different drivers (the one from AMD's site and the one suggested in the above link for my graphics card), both to no avail. When I try the method in the link above, I get the super low res and screwy Firefox. I type in fglrxinfo and get a BadRequest error. I have yet to type in fglrxinfo and get anything like what I am supposed to.

    UPDATE: I am now currently reinstalling Ubuntu 12.04. I tried the above-mentioned link (thank you very much!) just to see, on the previously failed driver attempt, by following the purge commands. And to no avail: when typing fglrxinfo I still get the BadRequest thing. I will update again after a try with a true fresh install. Thanks again!!

    UPDATE: Alright everyone, still no go. I have done everything word for word in the provided tutorial. I have rebooted my computer, again to a broken resolution, and this is what I get when typing fglrxinfo:

        $ fglrxinfo
        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  153 (GLX)
          Minor opcode of failed request:  19 (X_GLXQueryServerString)
          Serial number of failed request:  12
          Current serial number in output stream:  12

    I would like to add that when installing this file: fglrx_8.970-0ubuntu1_amd64.deb, I got this:

        Building initial module for 3.8.0-29-generic
        Error! Bad return status for module build on kernel: 3.8.0-29-generic (x86_64)
        Consult /var/lib/dkms/fglrx/8.970/build/make.log for more information.
        update-initramfs: deferring update (trigger activated)
        Processing triggers for ureadahead ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.8.0-29-generic
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    Any ideas? Anyone? I can't for the life of me figure out what I am doing wrong.

    Read the article

  • Cloud Adoption Challenges

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2013/11/07/cloud-adoption-challenges.aspx

    While cloud computing makes sense for most organizations and countless projects, I have seen customers significantly struggle with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather, it is an account of personal experiences in the field, some of which may or may not apply to your organization.

    Cloud First, Burst? In the rush to cloud adoption, some companies have made the decision to redesign their core system with a cloud-first approach. However, a cloud-first approach means that the system may not work anymore on-premises after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage with systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first, and modifying only the necessary components to burst into the cloud when needed. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), or additional investments for PaaS applications, or both.

    What's the Problem? Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I heard many companies claim "it's cheaper", or "it allows us to scale", without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as "we have a 500% increase in traffic twice a year, so we need to burst in the cloud to avoid doubling our network and server capacity". Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimum path and timeline to adoption.

    Performance or Scalability? I stopped counting the number of times I heard "the cloud doesn't scale; our database runs faster on a laptop". While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could see great performance with 100 users, but time out with 1,000 users, in which case the system wouldn't scale. However, another system could have average performance with 100 users, but display the exact same performance with 1,000,000 users, in which case the system would scale. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption.

    Uptime? Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven't done so. If you are expecting 99.99% uptime annually, you may be in for a surprise. Depending on the component being used, there may be no associated SLA at all! Other components may be restarted at any time, or services may experience failover conditions weekly (or more) based on the current overall conditions of the cloud service provider, most of which are outside of your control. As a result, for PaaS cloud environments (and to a certain extent some IaaS systems), applications need to assume failure and gracefully retry in order to be successful in the cloud and provide service continuity to end users (a minimal retry sketch follows at the end of this post).

    About Herve Roggero: Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and "PRO SQL Server 2012 Practices" from Apress, a PluralSight author, and runs the Azure Florida Association.
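
    As promised above, here is a minimal sketch of the assume-failure-and-retry pattern. It is purely illustrative: the retry count, delays and the blanket catch are assumptions, and a production system would retry only on transient errors, ideally via an established retry library or policy:

        using System;
        using System.Threading;

        static class TransientRetry
        {
            // Run an operation, retrying with exponential backoff when it fails.
            public static T Execute<T>(Func<T> operation, int maxAttempts = 4)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return operation();
                    }
                    catch (Exception)
                    {
                        if (attempt >= maxAttempts)
                            throw; // out of attempts: surface the failure

                        // Back off before retrying: 200ms, 400ms, 800ms, ...
                        Thread.Sleep(TimeSpan.FromMilliseconds(200 << (attempt - 1)));
                    }
                }
            }
        }

        // Usage: var rows = TransientRetry.Execute(() => QueryCloudDatabase());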

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code are available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com

    Extensions and add-ins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions, and the list of 3rd party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed: Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast, since it is only a matter of extracting the files within the VSIX package to the local extension folder.

    But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world, yet you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012 it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content, of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn't it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom feed XML? Well, search no more; this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WiX templates and our custom TFS check-in policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.

    Installing the Service
    Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward: just select the web site, application pool and (optionally) a virtual directory where you want to install the service. (If you want to run it in the web site root, just leave the application name blank.) Press Next and finish the installer. Then open web.config in a text editor, locate the <applicationSettings> element, and edit the following setting values:
    - FeedTitle: the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
    - BaseURI: when Visual Studio downloads the extension, it will be given this URI + the name of the extension that you selected. This value should be in the following format: http://SERVER/[VDIR]/gallery/extension/
    - VSIXAbsolutePath: the path where you will deploy your extensions. This can be a local folder or a remote share. You just need to make sure that the application pool identity account has read permissions in this folder.
    Save web.config to finish the installation (a sample of the section is sketched at the end of this post). Open a browser and enter the URL to the service. It should show an empty feed page.

    Adding the Private Gallery in Visual Studio 2012
    Now you need to add the gallery in Visual Studio. This is very easy and is done as follows: Go to Tools -> Options and select Environment -> Extensions and Updates. Press Add to add a new gallery. Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step. Press OK to save the settings.

    Deploying an Extension
    This one is easy: just drop the file in the designated folder! :-) If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio gallery. I hope that you will find this service useful; please contact me if you have questions or suggestions for improvements!
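
    For reference, the three settings live in the <applicationSettings> section of web.config. A minimal sketch with illustrative values; the settings group name, server name and paths below are assumptions, so check the installed web.config for the exact names:

        <applicationSettings>
          <!-- The settings group name below is illustrative; use the one in the installed web.config -->
          <Inmeta.VSGalleryService.Properties.Settings>
            <setting name="FeedTitle" serializeAs="String">
              <value>Inmeta Gallery</value>
            </setting>
            <setting name="BaseURI" serializeAs="String">
              <value>http://myserver/vsgallery/gallery/extension/</value>
            </setting>
            <setting name="VSIXAbsolutePath" serializeAs="String">
              <value>\\fileserver\vsix-extensions</value>
            </setting>
          </Inmeta.VSGalleryService.Properties.Settings>
        </applicationSettings>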

    Read the article

  • Custom Text and Binary Payloads using WebSocket (TOTD #186)

    - by arungupta
    TOTD #185 explained how to process text and binary payloads in a WebSocket endpoint. In summary, a text payload may be received as:

        public void receiveTextMessage(String message) {
            . . .
        }

    And a binary payload may be received as:

        public void receiveBinaryMessage(ByteBuffer message) {
            . . .
        }

    As you realize, both of these methods receive the text and binary data in raw format. However, you may like to receive and send the data using a POJO. This marshaling and unmarshaling can be done in the method implementation, but the JSR 356 API provides a cleaner way. For encoding and decoding text payloads into a POJO, the Decoder.Text (for inbound payloads) and Encoder.Text (for outbound payloads) interfaces need to be implemented. The sample implementation below shows how text payloads consisting of JSON structures can be encoded and decoded:

        public class MyMessage implements Decoder.Text<MyMessage>, Encoder.Text<MyMessage> {

            private JsonObject jsonObject;

            @Override
            public MyMessage decode(String string) throws DecodeException {
                this.jsonObject = new JsonReader(new StringReader(string)).readObject();
                return this;
            }

            @Override
            public boolean willDecode(String string) {
                return true;
            }

            @Override
            public String encode(MyMessage myMessage) throws EncodeException {
                return myMessage.jsonObject.toString();
            }

            public JsonObject getObject() {
                return jsonObject;
            }
        }

    In this implementation, the decode method decodes the incoming text payload to MyMessage, the encode method encodes MyMessage for the outgoing text payload, and the willDecode method returns true or false depending on whether the message can be decoded. The encoder and decoder implementation classes need to be specified in the WebSocket endpoint as:

        @WebSocketEndpoint(value="/endpoint",
                           encoders={MyMessage.class},
                           decoders={MyMessage.class})
        public class MyEndpoint {
            public MyMessage receiveMessage(MyMessage message) {
                . . .
            }
        }

    Notice the updated method signature, where the application is working with MyMessage instead of the raw string. Note that the encoder and decoder implementations just illustrate the point and provide no validation or exception handling. Similarly, the Encoder.Binary and Decoder.Binary interfaces need to be implemented for encoding and decoding binary payloads.

    Here are some references for you:
    - JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
    - TOTD #183 - Getting Started with WebSocket in GlassFish
    - TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    - TOTD #185 - Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket

    Subsequent blogs will discuss the following topics (not necessarily in that order):
    - Error handling
    - Interface-driven WebSocket endpoint
    - Java client API
    - Client and Server configuration
    - Security
    - Subprotocols
    - Extensions
    - Other topics from the API

    Read the article

  • Problem with sprite direction and rotation

    - by user2236165
    I have a sprite called Tool that moves with a speed represented as a float and in a direction represented as a Vector2. When I click the mouse on the screen, the sprite changes its direction and starts to move towards the mouse click. In addition to that, I rotate the sprite so that it is facing in the direction it is heading. However, when I add a camera that is supposed to follow the sprite so that the sprite is always centered on the screen, the sprite won't move in the given direction and the rotation isn't accurate anymore. This only happens when I add Camera.View in the spriteBatch.Begin(). I was hoping anyone could maybe shed a light on what I am missing in my code; that would be highly appreciated. Here is the camera class I use:

        public class Camera
        {
            private const float zoomUpperLimit = 1.5f;
            private const float zoomLowerLimit = 0.1f;

            private float _zoom;
            private Vector2 _pos;
            private int ViewportWidth, ViewportHeight;

            #region Properties

            public float Zoom
            {
                get { return _zoom; }
                set
                {
                    _zoom = value;
                    if (_zoom < zoomLowerLimit)
                        _zoom = zoomLowerLimit;
                    if (_zoom > zoomUpperLimit)
                        _zoom = zoomUpperLimit;
                }
            }

            public Rectangle Viewport
            {
                get
                {
                    int width = (int)(ViewportWidth / _zoom);
                    int height = (int)(ViewportHeight / _zoom);
                    return new Rectangle((int)(_pos.X - width / 2), (int)(_pos.Y - height / 2), width, height);
                }
            }

            public void Move(Vector2 amount)
            {
                _pos += amount;
            }

            public Vector2 Position
            {
                get { return _pos; }
                set { _pos = value; }
            }

            public Matrix View
            {
                get
                {
                    return Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) *
                           Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *
                           Matrix.CreateTranslation(new Vector3(ViewportWidth * 0.5f, ViewportHeight * 0.5f, 0));
                }
            }

            #endregion

            public Camera(Viewport viewport, float initialZoom)
            {
                _zoom = initialZoom;
                _pos = Vector2.Zero;
                ViewportWidth = viewport.Width;
                ViewportHeight = viewport.Height;
            }
        }

    And here are my Update and Draw methods:

        protected override void Update(GameTime gameTime)
        {
            float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
            TouchCollection touchCollection = TouchPanel.GetState();

            foreach (TouchLocation tl in touchCollection)
            {
                if (tl.State == TouchLocationState.Pressed || tl.State == TouchLocationState.Moved)
                {
                    // direction the tool shall move towards
                    direction = touchCollection[0].Position - toolPos;
                    if (direction != Vector2.Zero)
                    {
                        direction.Normalize();
                    }

                    // change the direction the tool is moving and find the rotation angle
                    // the texture must rotate by to point in the given direction
                    toolPos += (direction * speed * elapsed);
                    RotationAngle = (float)Math.Atan2(direction.Y, direction.X);
                }
            }

            if (direction != Vector2.Zero)
            {
                direction.Normalize();
            }

            // move tool in given direction
            toolPos += (direction * speed * elapsed);

            // center the camera on the tool's position
            Camera.Position = toolPos;

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            graphics.GraphicsDevice.Clear(Color.Blue);

            spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, null, null, null, Camera.View);
            spriteBatch.Draw(tool, new Vector2(toolPos.X, toolPos.Y), null, Color.White, RotationAngle, originOfToolTexture, 1, SpriteEffects.None, 1);
            spriteBatch.End();

            base.Draw(gameTime);
        }
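
    One thing worth checking in a setup like this: once a view matrix is passed to spriteBatch.Begin, touch positions are still in screen space, while toolPos is in world space, so the touch point must be transformed before computing the direction. A minimal sketch of that conversion, using the names from the question (a suggested check, not part of the original post):

        // Inside the touch loop: map the screen-space touch position into world space
        // by inverting the camera's view matrix, then compute the direction in world space.
        Vector2 worldTouch = Vector2.Transform(tl.Position, Matrix.Invert(Camera.View));

        direction = worldTouch - toolPos;
        if (direction != Vector2.Zero)
        {
            direction.Normalize();
        }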

    Read the article

  • WebLogic How-to Videos: Install, Upgrade, & Patch

    - by Ruma Sanyal
    Here is another great YouTube video by our product manager Monica Riccelli. She talks about installers now being standardized in Oracle for greater consistency -- no more WebLogic native installers. Also, the JDK is no longer a part of the WebLogic install. The various installers she discusses include OUI, ZIP, OEPE, Coherence and more. Monica then takes us through a step-by-step install process. After the install process is complete, the video takes us through the configuration wizard. The ZIP installer is then discussed, with its strengths (it is the smallest downloadable option, easy to use, and very popular with our customers) and its limitations (it is for development only and not to be used in production) highlighted. Monica then covers the configuration wizard, its usage, and when to use WLST scripts. The video then discusses NodeManager and its usage, and how to reconfigure a WebLogic domain on upgrade, either through our GUI tools or through the command line interface. Lastly, it highlights OPatch, a patch application tool used by our customers and standardized across all Oracle products. Really detailed video. Check it out!

    Read the article

  • On a BPM Mission with Process Accelerators. Part 1: BPM as an ATV

    - by Cesare Rotundo
    Part 1: BPM as an ATV. It's always exciting to talk to customers that are in the middle of a BPM transformational journey. Their thirst for new processes to improve with BPM makes them explorers in a landscape of opportunities. They have discovered that with BPM they can "go places" they couldn't reach before. In a way, learning how to generate value with BPM is like adopting a new means of transportation. Apps are like regular cars: very efficient, but to be used on paved roads. The road/process has been traced, and there are fixed paths to follow to get from "opportunity to quote" or from "quote to cash". Getting off the road is risky, and laying down new asphalt is slow and expensive. Custom development is like running: you can go virtually anywhere, following any path you like, yet it's slow, and a lot of sweat. BPM allows you to go "off the beaten path" laid out by packaged apps, yet make fast progress compared to custom development. BPM is therefore more like an All-Terrain Vehicle (ATV): less efficient than a car, but much faster than running, with a powerful enough engine that can get you places. The similarities between BPM and ATVs don't stop here: you must learn to ride it even if you already know how to drive a car; you can reach places, but figuring out the path to your destination is harder. Ultimately, with BPM as with an ATV, you reach places that you thought you could never reach, and you discover new destinations that provide great benefit to you ... and that you didn't even know existed! That's where the sense of accomplishment that we heard from our BPM customers comes from, as well as the desire to share their experience, or even, as in the case of a county, the willingness to contribute their BPM solutions to help other agencies that face the same challenges. The question we wanted to answer is how we can teach organizations to drive the ATV that is BPM, thus leading them to deeper success with BPM, while increasing their awareness of the potential for reaching new targets, and finally equipping them with the right tools. Like with ATVs, getting from point A to point B is more of a work of art than cruising on the highway by car. There is a lot we can do: after all, many sought-after destinations are common, and someone else has been on the same path before. If only you could learn from their experience ...

    Read the article

  • Multichannel Digital Engagement: Find Out How Your Organization Measures Up

    - by Michael Snow
    This article was originally published in the September 2013 Edition of the Oracle Information InDepth Newsletter ORACLE WEBCENTER EDITION Thanks to mobile and social technologies, interactive online experiences are now commonplace. Not only that, they give consumers more choices, influence, and control than ever before. So how can you make your organization stand out? The key building blocks for delivering exceptional cross-channel digital experiences are outlined below. Also, a new assessment tool is available to help you measure your organization's ability to deliver such experiences. A clearly defined digital strategy. The customer journey is growing increasingly complex, encompassing multiple touchpoints and channels. It used to be easy to map marketing efforts to specific offline channels; for example, a direct mail piece with an offer to visit a store for a discounted purchase. Now it is more difficult to cultivate and track such clear cause-and-effect relationships. To deliver an integrated digital experience in this more complex world, organizations need a clearly defined and comprehensive digital marketing strategy that is backed up by an integrated set of software, middleware, and hardware solutions. Strong support for business agility and speed-to-market. As both IT and marketing executives know, speed-to-market and business agility are key to competitive advantage. That means marketers need solutions to support the rapid implementation of online marketing initiatives—plus the flexibility to adapt quickly to a changing marketplace. And IT needs tools with the performance, scalability, and ease of integration to support marketing efforts. Both teams benefit when business users are empowered to implement marketing initiatives on their own, with minimal IT intervention. The ability to deliver relevant, personalized content. Delivering a one-size-fits-all online customer experience is no longer acceptable. Customers expect you to know who they are, including their preferences and past relationship with your brand. That means delivering the most relevant content from the moment a visitor enters your site. To make that happen, you need a powerful rules engine so that marketers and business users can easily define site visitor segments and deliver content accordingly. That includes both implicit targeting that is based on the user’s behavior, and explicit targeting that takes a user’s profile information into account. Ideally, the rules engine can also intelligently weight recommendations when multiple segments apply to a specific customer. Support for social interactivity. With the advent of Facebook and LinkedIn, visitors expect to participate in and contribute to your web presence—and share their experience on their own social networks. That requires easy incorporation of user-generated content such as comments, ratings, reviews, polls, and blogs; seamless integration with third-party social networking sites; and support for social login, which helps to remove barriers to social participation. The ability to deliver connected, multichannel experiences that include powerful, flexible mobile capabilities. By 2015, mobile usage is projected to surpass that of PCs and other wired devices. In other words, mobile is an essential element in delivering exceptional online customer experiences. This requires the creation and management of mobile experiences that are optimized for delivery to the thousands of different devices that are in use today. 
    Just as important, organizations must be able to easily extend their traditional web presence to the mobile channel and deliver highly personalized, relevant multichannel marketing initiatives while minimizing the time and effort required to manage mobile sites. Are you curious how your organization measures up when it comes to delivering an engaging, multichannel digital experience? If so, take this brief, 15-question online assessment and see how your organization scores in the areas of digital strategy, digital agility, relevance and personalization, social interactivity, and multichannel experience.
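    To make the "relevant, personalized content" building block concrete, here is a minimal, hypothetical Java sketch of a segment rules engine of the kind described above. Every name in it (Visitor, Segment, SegmentRulesEngine) is invented for illustration and is not the Oracle WebCenter API: it simply combines an explicit rule (driven by profile data) with an implicit rule (driven by observed behavior), and weights the result when several segments match the same visitor.

      import java.util.*;

      // Hypothetical types for illustration only -- not the WebCenter API.
      record Visitor(Map<String, String> profile, Set<String> pagesViewed) {}

      interface Segment {
          String name();
          boolean matches(Visitor v);   // the targeting rule
          double weight();              // used when several segments apply
      }

      public class SegmentRulesEngine {
          private final List<Segment> segments = new ArrayList<>();
          private final Map<String, String> contentBySegment = new HashMap<>();

          public void register(Segment s, String contentId) {
              segments.add(s);
              contentBySegment.put(s.name(), contentId);
          }

          // Pick content from the highest-weighted matching segment.
          public Optional<String> contentFor(Visitor v) {
              return segments.stream()
                      .filter(s -> s.matches(v))
                      .max(Comparator.comparingDouble(Segment::weight))
                      .map(s -> contentBySegment.get(s.name()));
          }

          public static void main(String[] args) {
              SegmentRulesEngine engine = new SegmentRulesEngine();

              // Explicit targeting: rule driven by profile data the visitor supplied.
              engine.register(new Segment() {
                  public String name() { return "gold-customer"; }
                  public boolean matches(Visitor v) { return "gold".equals(v.profile().get("tier")); }
                  public double weight() { return 2.0; }
              }, "loyalty-offer-page");

              // Implicit targeting: rule driven by observed behavior.
              engine.register(new Segment() {
                  public String name() { return "mobile-shopper"; }
                  public boolean matches(Visitor v) { return v.pagesViewed().contains("/mobile/plans"); }
                  public double weight() { return 1.0; }
              }, "mobile-upsell-banner");

              Visitor v = new Visitor(Map.of("tier", "gold"), Set.of("/mobile/plans"));
              // Both segments match; the higher weight wins.
              System.out.println(engine.contentFor(v).orElse("default-home")); // loyalty-offer-page
          }
      }

    A production rules engine would of course externalize the rules so business users can edit them without code changes; the sketch only shows the matching-and-weighting mechanics.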

    Read the article

  • Understanding the Value of SOA

    - by Mala Narasimharajan
    Written by: Debra Lilley, ACE Director, Fusion Applications

    Again I want to talk from my area of expertise, Fusion Applications, and about their design fundamentals. If you look at the architecture diagram (in the original article) and start at the bottom, Oracle have defined all of the business objects, e.g. accounts, people, customers, invoices, etc., used by Fusion Applications; each of these objects contains all of the information required and can be expanded if necessary. For each of these business objects, Oracle have created every action that is needed by the applications, e.g. all the actions to create a new customer: checking to see if it exists, credit checking with D&B (Dun & Bradstreet, http://www.dnb.co.uk/), creating the record, notifying those required, etc. Each of these actions is a stand-alone web service. Again, you can create new actions or subscribe to an externally provided web service, e.g. the D&B check. The diagram also shows that all development of Fusion Applications is built on Oracle's Fusion Middleware offerings. The Intelligent Business Process layer is then the order in which you run these actions; this is Service Oriented Architecture, SOA.

    Not only is SOA used to orchestrate actions within Fusion Applications, it is also used in the integration of Fusion Applications with the rest of the Oracle stable of applications, such as EBS, PeopleSoft, JDE, and Siebel. Those applications are written with proprietary development tools, so how do they work with SOA? The answer is very simple: with the introduction of the Oracle SOA platform, each process within these applications was made available to be called as a web service. I won't go into how that is done technically, but what's known as a wrapper was added to allow each of them to act in this way.

    Finally, at the top of the diagram are the questions that each Fusion Application process must answer, and this is the 'special' sauce that makes them so good: the User Experience. But that is a topic for another day, or you can read about it in my blog http://debrasoracle.blogspot.co.uk/2014/04/going-on-record-about-fusion-apps-cloud.html or Oracle's own UX blog https://blogs.oracle.com/usableapps/

    The concept behind AppAdvantage is not new; the idea that Oracle technology can add value to your Oracle applications investments is pretty fundamental. Nishit Rao, who is in the AppAdvantage team, provided me and other ACE Directors with demo kits so that we could demonstrate SOA running with the applications. The example I learnt to build was the EBS inventory open interface. The simple concept is that request records can be added to a table, and an import run that creates these as transactions in inventory. What SOA allows you to do is add to the table from any source and then run the import automatically, whereas traditionally you had to run the process at regular intervals because you didn't know whether the table was empty or not. This may just sound like a different way of doing the same thing, but if the process is critical for your business then the interval had to be very small, and the process potentially ran many times unnecessarily. Using SOA, it runs only when necessary, and without any delay.

    So in my post today I've talked about how SOA is used with Fusion Applications and in linking with more traditional applications, but that is only the tip of the iceberg of potential. Your applications are just part of your IT systems, and SOA can orchestrate your data across all of them; that is the beauty of open standards.
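    To make the inventory example concrete, here is a minimal, hypothetical Java sketch of the two integration styles contrasted above. Everything in it (InventoryInterfaceTable, the import logic) is invented for illustration; a real implementation would use the EBS open interface tables and a SOA composite. The point is simply that the poller wakes on a timer whether or not there is work, while the event-driven version runs the import exactly when a record arrives.

      import java.util.Queue;
      import java.util.concurrent.ConcurrentLinkedQueue;
      import java.util.concurrent.Executors;
      import java.util.concurrent.ScheduledExecutorService;
      import java.util.concurrent.TimeUnit;

      // Hypothetical stand-in for the EBS inventory open interface table.
      class InventoryInterfaceTable {
          private final Queue<String> requests = new ConcurrentLinkedQueue<>();
          private Runnable onInsert = () -> {};          // hook for the event-driven style

          void setOnInsert(Runnable callback) { onInsert = callback; }

          void insert(String request) {
              requests.add(request);
              onInsert.run();                            // fire the "new record" event
          }

          boolean isEmpty() { return requests.isEmpty(); }
          String poll()     { return requests.poll(); }
      }

      public class InventoryImportStyles {
          static void runImport(InventoryInterfaceTable table) {
              String rec;
              while ((rec = table.poll()) != null) {
                  System.out.println("Imported inventory transaction: " + rec);
              }
          }

          public static void main(String[] args) throws InterruptedException {
              // Traditional style: poll on a short, fixed interval. The job wakes
              // even when the table is empty, because the scheduler cannot know.
              InventoryInterfaceTable polled = new InventoryInterfaceTable();
              ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
              scheduler.scheduleAtFixedRate(() -> {
                  if (!polled.isEmpty()) runImport(polled);  // usually a wasted wake-up
              }, 0, 1, TimeUnit.SECONDS);

              // SOA style: the insert itself triggers the import, so it runs only
              // when there is work, and with no polling delay.
              InventoryInterfaceTable eventDriven = new InventoryInterfaceTable();
              eventDriven.setOnInsert(() -> runImport(eventDriven));

              polled.insert("REQ-1001");
              eventDriven.insert("REQ-2001");            // imported immediately

              Thread.sleep(1500);                        // let the poller catch REQ-1001
              scheduler.shutdown();
          }
      }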
    Debra Lilley, Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Lilley has 18 years' experience with Oracle Applications, with E-Business Suite since 9.4.1, moving to Business Intelligence Team Lead and Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts.

    Read the article

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring this question up and see what you think, and hopefully get your advice on the matter: let's say you had 30 years of learning and practicing software development in front of you. How would you dedicate your time so that you'd get the biggest bang for your buck? What would you learn and work on to be a world-class software developer who makes a large impact on the industry and leaves behind a legacy?

    I think most great developers end up being both broad generalists and specialists in one or two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. Perhaps one approach would be to break things down by categories and establish a base minimum of "software development" greatness. I'm thinking:

    Operating Systems: completely internalize the core concepts of an OS, and perhaps gain a lot of familiarity with an open source one such as Linux. Anything from memory management to device drivers has to be complete second nature.

    Programming Languages: this is one of those topics that imho has to be fully grokked, even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs, and so on. Programming Language Pragmatics is one of my favorite books, actually; I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there: imperative, declarative, logic, functional, and so on. Anything from assembly to LISP should be at the very least comfortable to write in.

    Contexts: I believe one should have experience working in different contexts to truly appreciate the trade-offs that are being made every day: embedded, web development, mobile development, UX development, distributed, cloud computing, and so on.

    Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel the concepts that will truly matter are slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on.

    Networking: we live in a completely network-dependent era. A good understanding of the OSI model, knowing how the web works, how HTTP works, and so on, is pretty much a prerequisite these days.

    Distributed systems: once again, everything's distributed these days, and it's getting progressively harder to ignore this reality. Slightly related: perhaps add a solid understanding of how browsers work, since the world seems to be moving so much toward interfacing with everything through a browser.

    Tools: have a really broad toolset that you're familiar with, one that continuously expands throughout the years.

    Communication: I think being a great writer, an effective communicator, and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated.

    Software engineering: understanding the process of building software, team dynamics, the requirements of the business side, and all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the light of day.

    This is really just a starting list; I'm confident there's a ton of other material that you need to master.
    As I mentioned, you most likely end up specializing in a handful of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions, or words of wisdom from the grizzled veterans out there? I'd really love to know what you think!

    Read the article

  • Upgrading to 9.2 - Info You Can Use (part 1)

    - by John Webb
    Rebekah Jackson joins our blog with a series of helpful hints on planning your upgrade to PeopleSoft 9.2.

    Find Features & Capabilities

    There are many ways that you might learn about new features and capabilities within our releases, but if you aren't sure where to start or how best to go about it, we recommend:

    - Go to www.peoplesoftinfo.com.
    - Select the product line you are interested in, and go to the 'Release Content' tab.
    - Use the Video Feature Overviews (VFOs) on YouTube and the Cumulative Feature Overview (CFO) tool to find features and functions. The VFOs are brief recordings that summarize some of our most popular capabilities. These recordings are great tools for learning about new features, or for helping others visualize the value they can bring to your organization. The VFOs focus on some of our highest-value and most compelling new capabilities. We also provide summarized 'Why Upgrade to 9.2' VFOs for HCM, Financials, and Supply Chain. The CFO is a spreadsheet-based tool that allows you to select the release you are currently on and compare it to the new release. It will return the list of all new features and capabilities, by product. You can browse the full list and/or highlight areas that look particularly interesting.
    - Once you have a list of features by product, use the Release Value Proposition, Pre-Release Notes, and Release Notes documents to get more details on, and supporting value statements about, why those features will be helpful.
    - Gather additional data and supporting information, including: the Product Data Sheets tab, where the data sheets summarize the capabilities in the product and provide succinct value statements for the product and its capabilities; and the PeopleSoft 9.2 Upgrade page, which has many helpful resources.

    Important Notes:

    - We recommend that you go through the above steps for the application areas of interest, as well as for PeopleTools. There are many areas in PeopleTools 8.53 and the 9.2 application releases that combine technical and functional capabilities to deliver transformative value.
    - We also recommend that you review the Portal Solutions content. With your license to PeopleSoft applications, you have access to many of the most powerful capabilities within the Interaction Hub.
    - If you have recently upgraded to PeopleSoft 9.1 and an immediate upgrade to 9.2 is simply not realistic, you can apply the same approaches described here to find untapped capabilities in your current products. Many of the features in 9.2 were delivered first in our 9.1 Feature Packs. To find the Release Value Proposition, Pre-Release Notes, and Release Notes for these releases, search on 'PeopleSoft 9.1 Documentation Home Page' on My Oracle Support and select your desired product area.

    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you; just hit the '+' sign drop-down for access. Topics include:

    - Domain object caching
    - Service response caching
    - Session state caching
    - JSR-107
    - HotCache and more!

    Further, we had several interesting questions asked by our audience, and we thought we'd share a sampling of those here, just in case you had the same queries yourself. Enjoy!

    What is the largest Coherence deployment out there?
    We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties, with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed.

    How is Coherence different from a NoSQL database like MongoDB?
    Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has a primarily key-value data model but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is lower with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.

    Can I use Coherence for local caching?
    Yes, and you get additional features beyond just a java.util.Map: expiration capabilities, size-limitation capabilities, eventing capabilities, etc.

    Are there APIs available for GoldenGate HotCache?
    It's mostly a black box: you configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior, and there are use cases for that.

    Are Coherence caches updated transactionally?
    Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA / XA distributed transactions, Coherence caches can participate as resources, but nobody does that because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions", which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA / XA distributed transaction semantics.
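    As a companion to the local-caching and EntryProcessor answers above, here is a minimal sketch using the classic public Coherence API (CacheFactory, NamedCache, and an AbstractProcessor). The cache name and the price-update logic are invented for illustration; the calls themselves are standard Coherence, though exact package names vary by release, so check the documentation for your version.

      import com.tangosol.net.CacheFactory;
      import com.tangosol.net.NamedCache;
      import com.tangosol.util.InvocableMap;
      import com.tangosol.util.processor.AbstractProcessor;

      public class CoherenceSketch {
          public static void main(String[] args) {
              // A NamedCache behaves like a java.util.Map, but adds expiration,
              // size limits, eventing, and cluster-wide distribution.
              NamedCache products = CacheFactory.getCache("products"); // cache name is illustrative

              products.put("sku-1", 100.0);

              // EntryProcessor: Coherence guarantees strict ordering of all
              // operations against a single key when done this way, so this
              // read-modify-write is atomic without JTA / XA transactions.
              Object newPrice = products.invoke("sku-1", new AbstractProcessor() {
                  @Override
                  public Object process(InvocableMap.Entry entry) {
                      double price = (Double) entry.getValue();
                      entry.setValue(price * 1.10); // apply a 10% increase in-place
                      return entry.getValue();
                  }
              });

              System.out.println("New price: " + newPrice);
              CacheFactory.shutdown();
          }
      }

    The same invoke-with-a-processor pattern is what makes partition-level atomicity possible: the mutation ships to wherever the entry lives, rather than the entry shipping to the client and back.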

    Read the article

  • Introducing the ORA-4031 Troubleshooting Tool

    - by Takeyoshi Sasaki
    ORA-4031 is raised when a request for a contiguous chunk of memory in the SGA (typically in the shared pool) cannot be satisfied. To help diagnose and resolve these errors, Oracle provides the ORA-4031 Troubleshooting Tool, introduced here.

    What the ORA-4031 Troubleshooting Tool is
    The ORA-4031 Troubleshooting Tool is a web-based tool on My Oracle Support that analyzes the diagnostic data collected around an ORA-4031 error and suggests likely causes and corrective actions, so you can work through an ORA-4031 problem systematically rather than by trial and error.

    Where to find it
    The tool is listed in the Diagnostic Tools Catalog on My Oracle Support; follow the ORA-4031 Troubleshooting Tool link there to launch it.

    How to use it
    You supply the diagnostics gathered when the ORA-4031 error occurred, and the tool analyzes them and presents findings and recommendations. Useful inputs include:

    - The alert log from around the time of the ORA-4031 error
    - [ADR] Incident packages created with the ADR packaging facility
    - [10g and later] An AWR report (or a STATSPACK report)

    Collecting this data is worthwhile in any case, since the same material is needed if you open an SR with Oracle Support about an ORA-4031 problem.

    Example output
    Rather than walking through every screen, here are two examples of the findings the tool can return:

    1) "High Session_Cached_Cursor Setting Causing Excessive Consumption of Shared Pool": the SESSION_CACHED_CURSORS parameter is set high enough that cached cursors consume a significant portion of the shared pool.
    2) "Insufficient SGA Free Memory at Startup": "This issue could occur if in the init.ora parameters of your Alert log, (shared_pool_size + large_pool_size + java_pool_size + db_keep_cache_size + streams_pool_size + db_cache_size) / sga_target is greater than 90%." In other words, the minimum sizes configured for the individual pools add up to more than 90% of sga_target, leaving the automatic memory manager almost no headroom to grow the shared pool on demand. When sga_target (or memory_target) is used, avoid pinning the component minimums so high that automatic resizing cannot work.

    The supporting evidence the tool cites for these findings:

    1) "In your Alert log, look for parameters under 'System parameters with non-default values:'. If session_cached_cursor * 2000 / shared_pool_size is greater than 10%, then session_cached_cursors are consuming significant shared_pool_size."
    2) "In your Alert log, SGA Utilization (Sum of shared_pool_size, large_pool_size, java_pool_size, db_keep_cache_size, streams_pool_size and db_cache_size over sga_target) is 99%, which might be too high."

    And the corresponding recommendations, each linked to supporting My Oracle Support documents:

    1) "Decrease the parameter SESSION_CACHED_CURSORS"
    2) "Reduce the minimum values for the dynamic SGA components to allow memory manager to make changes as needed"

    If you hit an ORA-4031 error, try the ORA-4031 Troubleshooting Tool before anything else: it automates much of the initial analysis and points you at the most likely causes and fixes.
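    Since the two alert-log checks quoted above are just ratio arithmetic, a short sketch makes them concrete. This is not part of the tool; it is a hypothetical Java illustration, and the parameter values are invented stand-ins for numbers you would read out of your own alert log.

      public class Ora4031Checks {
          public static void main(String[] args) {
              // Illustrative values, as they might appear in an alert log's
              // "System parameters with non-default values:" section.
              long sessionCachedCursors = 500;
              long sharedPoolSize  = 600L  * 1024 * 1024;   // bytes
              long largePoolSize   = 64L   * 1024 * 1024;
              long javaPoolSize    = 64L   * 1024 * 1024;
              long dbKeepCacheSize = 0;
              long streamsPoolSize = 32L   * 1024 * 1024;
              long dbCacheSize     = 1200L * 1024 * 1024;
              long sgaTarget       = 2L    * 1024 * 1024 * 1024;

              // Check 1: session_cached_cursors * 2000 / shared_pool_size > 10%
              // means cursor caching is consuming a significant share of the pool.
              double cursorShare = sessionCachedCursors * 2000.0 / sharedPoolSize;
              System.out.printf("Cursor-cache share of shared pool: %.1f%%%n", cursorShare * 100);

              // Check 2: the pool minimums summed over sga_target > 90% leaves the
              // memory manager too little room to grow the shared pool on demand.
              double sgaUtilization = (double) (sharedPoolSize + largePoolSize + javaPoolSize
                      + dbKeepCacheSize + streamsPoolSize + dbCacheSize) / sgaTarget;
              System.out.printf("SGA minimums over sga_target: %.1f%%%n", sgaUtilization * 100);

              if (cursorShare > 0.10)
                  System.out.println("-> Consider decreasing SESSION_CACHED_CURSORS.");
              if (sgaUtilization > 0.90)
                  System.out.println("-> Consider reducing the dynamic SGA component minimums.");
          }
      }

    With these sample numbers the first check passes (about 0.2%) while the second fires (about 96%), matching the second finding above.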

    Read the article

  • How to be Agile when new work keeps affecting completed work?

    - by jdln
    The project I'm working on is to re-skin an existing website. The functionality will stay the same; it's just the styles that are changing. The HTML is not changing; I'm only modifying the CSS files. The site is pretty complex. There are dozens of pages. Users can be logged in and have a number of different roles, and depending on their role, the content of the pages and which pages they are allowed to see varies. We're using Git and GitHub.

    I'm trying to write CSS that works as components, so that when the same form elements, headings, etc. appear on multiple pages they are already styled and consistent. Most of the time this works well. Sadly, the format and class names in the HTML are at times messy and unpredictable: when I fix something on one page it can break another. The job is also harder because no one knows exactly all the variations that are possible due to the user roles, so I'm continuously finding new variations as I go along.

    I'm making headway by putting a lot of comments in my CSS. If I need to remove a CSS rule I'll comment it out, so I can still see it with the Chrome dev tools, and I'll put a comment in the CSS saying why I removed it and for which page that was done. This means that if, on another page, I'm about to add the rule back to fix a different problem, there is a better chance I will see how this would break the first page. That lets me either find a different solution that works for both pages, or make the override page-specific. This has been working quite well for me.

    If I had complete free rein and the only deadline was to finish the project by the end, this method would be fine. However, my manager is trying to mitigate risk by breaking the work into areas to be completed per sprint. This runs counter to how I have been approaching things, since something like my typography styles will affect all other pages on the site. The other issue is that the different stakeholders want to sign off each section as I go along, yet once I've finished a section it may still change if I change CSS that affects both it and a new section I'm working on. I've asked that the stakeholders do a quick, unofficial sign-off in stages (e.g. per sprint) and a final, official sign-off at the end of the project, but this is being met with resistance. I do understand why it would be higher risk to do this, but the only way to guarantee that a signed-off section will not change is to make ALL future changes page-specific.

    In addition, I'm being told that all work I push to the Git repo should be ready to go live, and as such should not contain any code comments. This is risky for me, as I won't know until I've finished the site whether I will ever benefit from those comments or not.

    Has anyone else been in a similar situation and managed to find a compromise between this kind of development approach and the desire of management and stakeholders for a more Agile process? An Agile workflow works great when you can break the work into components and know that once something is done it won't be affected by future work; the nature of this project makes that hard to achieve.

    Read the article

  • Using LDAP with the ZFS Storage Appliance

    - by user13138569
    Let's configure the ZFS Storage Appliance to use OpenLDAP. First we need an LDAP server; here, OpenLDAP running on Solaris 11. Using the slapd.conf and LDIF below, create a user named user01.

    slapd.conf:

      #
      # See slapd.conf(5) for details on configuration options.
      # This file should NOT be world readable.
      #
      include /etc/openldap/schema/core.schema
      include /etc/openldap/schema/cosine.schema
      include /etc/openldap/schema/nis.schema

      # Define global ACLs to disable default read access.
      # Do not enable referrals until AFTER you have a working directory
      # service AND an understanding of referrals.
      #referral ldap://root.openldap.org

      pidfile /var/openldap/run/slapd.pid
      argsfile /var/openldap/run/slapd.args

      # Load dynamic backend modules:
      modulepath /usr/lib/openldap
      moduleload back_bdb.la
      # moduleload back_hdb.la
      # moduleload back_ldap.la

      # Sample security restrictions
      # Require integrity protection (prevent hijacking)
      # Require 112-bit (3DES or better) encryption for updates
      # Require 63-bit encryption for simple bind
      # security ssf=1 update_ssf=112 simple_bind=64

      # Sample access control policy:
      # Root DSE: allow anyone to read it
      # Subschema (sub)entry DSE: allow anyone to read it
      # Other DSEs:
      # Allow self write access
      # Allow authenticated users read access
      # Allow anonymous users to authenticate
      # Directives needed to implement policy:
      # access to dn.base="" by * read
      # access to dn.base="cn=Subschema" by * read
      # access to *
      # by self write
      # by users read
      # by anonymous auth
      #
      # if no access controls are present, the default policy
      # allows anyone and everyone to read anything but restricts
      # updates to rootdn. (e.g., "access to * by * read")
      #
      # rootdn can always read and write EVERYTHING!

      #######################################################################
      # BDB database definitions
      #######################################################################

      database bdb
      suffix "dc=oracle,dc=com"
      rootdn "cn=Manager,dc=oracle,dc=com"
      # Cleartext passwords, especially for the rootdn, should
      # be avoided. See slappasswd(8) and slapd.conf(5) for details.
      # Use of strong authentication encouraged.
      rootpw secret
      # The database directory MUST exist prior to running slapd AND
      # should only be accessible by the slapd and slap tools.
      # Mode 700 recommended.
      directory /var/openldap/openldap-data
      # Indices to maintain
      index objectClass eq

    Next, the LDIF to load:

      dn: dc=oracle,dc=com
      objectClass: dcObject
      objectClass: organization
      dc: oracle
      o: oracle

      dn: cn=Manager,dc=oracle,dc=com
      objectClass: organizationalRole
      cn: Manager

      dn: ou=People,dc=oracle,dc=com
      objectClass: organizationalUnit
      ou: People

      dn: ou=Group,dc=oracle,dc=com
      objectClass: organizationalUnit
      ou: Group

      dn: uid=user01,ou=People,dc=oracle,dc=com
      uid: user01
      objectClass: top
      objectClass: account
      objectClass: posixAccount
      objectClass: shadowAccount
      cn: user01
      uidNumber: 10001
      gidNumber: 10000
      homeDirectory: /home/user01
      userPassword: secret
      loginShell: /bin/bash
      shadowLastChange: 10000
      shadowMin: 0
      shadowMax: 99999
      shadowWarning: 14
      shadowInactive: 99999
      shadowExpire: -1

    With the LDAP server running, point the ZFS Storage Appliance at it: under Configuration > SERVICES > LDAP, set the Base search DN to match the server's suffix. If a lookup of user01 on the appliance reports "Unknown or invalid user", the client-side LDAP configuration needs checking. To confirm that the directory itself is serving the account, configure a Solaris 11 host as an LDAP client and verify with getent:
      # svcadm enable svc:/network/nis/domain:default
      # svcadm enable ldap/client
      # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201
      System successfully configured
      # getent passwd user01
      user01:x:10001:10000::/home/user01:/bin/bash

    The user01 account resolves. Now mount a share from the appliance over NFS and confirm that user01 can use it:

      # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt
      # su user01
      bash-4.1$ cd /mnt
      bash-4.1$ touch aaa
      bash-4.1$ ls -l
      total 1
      -rw-r--r-- 1 user01 10000 0 May 31 04:32 aaa

    The new file is owned by user01, the LDAP user, so the appliance is resolving users through LDAP as expected!

    Read the article

  • Oracle Support Training Schedule (November 2012)

    - by Steve He(???)
    Oracle Support Training: Oracle Support delivers free webcast training to help customers work more effectively with Oracle Support and its tools. Upcoming live webcasts this month:

    - Support Best Practices (formerly WEWS) - November 13, 15:00
    - EBS - Support Diagnostics Tools - November 15, 15:00
    - OSWatcher Black Box: How to improve performance and monitor your system automatically - November 15, 15:00
    - MOS - Configuration Manager - November 20, 15:00
    - Get Proactive Resolve - Answers Generic - November 22, 15:00
    - MOS - Communities - November 27, 15:00

    To convert the listed times to your local time zone, see the world clock. For details on how to register for Oracle Support training, see note 603505.1. Sessions are delivered in Mandarin, and Internet Explorer is recommended for accessing My Oracle Support.

    Recorded sessions available on demand:

    - Creating Customer Value
    - Oracle Support Basics
    - An Introduction to My Oracle Support
    - Service Request Management
    - Customer User Administration
    - Managing Favorite
    - Quick Search
    - Hot Topic Email
    - Patch and Update
    - Site Alert
    - Search and Browse Features in My Oracle Support
    - Why Use Configuration Manager In The My Oracle Support
    - Enterprise Manager 11g and My Oracle Support
    - Oracle Collaborative Support
    - How to Escalate a Service Request within Oracle Support

    For more information, visit the Support Training Community.

    Read the article

  • Oracle Support Training Schedule (July 2012)

    - by user763198
    Oracle Support Training: Oracle Support delivers free webcast training to help customers work more effectively with Oracle Support and its tools. Upcoming live webcasts this month:

    - My Oracle Support Basics - July 12, 15:00
    - Working Effectively with Support - July 19, 15:00
    - EBS - R12 Support Diagnostics Tools (EBS track) - July 26, 15:00

    To convert the listed times to your local time zone, see the world clock. For details on how to register for Oracle Support training, see note 603505.1. Sessions are delivered in Mandarin, and Internet Explorer is recommended for accessing My Oracle Support.

    Recorded sessions available on demand:

    - Creating Customer Value
    - Oracle Support Basics
    - An Introduction to My Oracle Support
    - Service Request Management
    - Customer User Administration
    - Managing Favorite
    - Hot Topic Email
    - Quick Search
    - Patch and Update
    - Site Alert
    - Search and Browse Features in My Oracle Support
    - Why Use Configuration Manager In The My Oracle Support
    - Enterprise Manager 11g and My Oracle Support
    - Oracle Collaborative Support
    - How to Escalate a Service Request within Oracle Support

    For more information, visit the Support Training Community.

    Read the article
