Search Results

Search found 7044 results on 282 pages for 'monitor stand'.


  • How do I fix my resolution after DirectX install through Steam?

    - by Justin
    I'm a bit long-winded, so see the bottom for the quick version and specs.

    Friendly Hello: Hello all on these AskUbuntu pages. I just recently built my own computer and decided to switch to Ubuntu for the extra coolness. I've been learning a lot through all this, and have mostly been trying to figure out issues on my own (read: Google searches). However, I couldn't seem to find others with this problem, so I've come here for help.

    Detailed Recount: So I just used Wine and Winetricks to install Steam. All went well and it worked. Then I went to try a game out. I remembered that Orcs Must Die! worked from http://www.steamgamesonlinux.com/ so I tried that out. After selecting to download it, that's when the problem occurred. The screen suddenly zoomed in!!! I think it's the resolution, right? Half the screen is cut off and I can't see parts of the right side of windows. My theory is that this is due to DirectX being installed through Steam, as Steam automatically installed it when I chose to download the game. It didn't even ask me whether to install DirectX or not ): It all happened so fast. All this being said, the game works fine! It looks a little strange, as if the resolution were off, but it plays just fine.

    What I did so far:
    - Restarted my computer. Didn't work -_-
    - Researched Steam installing DirectX on Ubuntu and then messing up the resolution, and couldn't really find anything.
    - Researched uninstalling DirectX from Ubuntu, but only found instructions for uninstalling DirectX after it had been installed with Wine, not through Steam.
    - Got mad and ate my feelings.
    - Tried "xrandr -s 0" but it didn't do anything.
    - Ran xrandr alone, and the terminal showed this:

      Screen 0: minimum 8 x 8, current 640 x 480, maximum 16384 x 16384
      DVI-I-0 connected 640x480+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
         640x480    59.9*+
         320x240   120.1
      DVI-I-1 disconnected (normal left inverted right x axis y axis)
      HDMI-0 disconnected (normal left inverted right x axis y axis)
      DP-0 disconnected (normal left inverted right x axis y axis)
      DVI-D-0 disconnected (normal left inverted right x axis y axis)
      DP-1 disconnected (normal left inverted right x axis y axis)

    About now I was mad, so I played Odin's Sphere and then took a nap. Back to it! I entered the following:

      xrandr --output DVI-I-0 --mode 1024x768

    But I was met with this message:

      xrandr: cannot find mode 1024x768

    I get the same message for 800x600, 1400x1050, and seemingly any other combination of numbers. I then tried going into System Settings, then Displays, and playing around in there. My resolution is set to 640x480 and there are no other options for me to choose from. Rotation has Normal, Clockwise, Counter Clockwise, and 180 Degrees; it's set to Normal and I haven't messed with that. Launcher Placement has Unknown and All Displays as its two options; it's set to Unknown, but moving it to All Displays doesn't seem to do anything. Finally, when I click Detect Displays, nothing seems to happen.

    Quick Version: Linux noob. Steam installed with Wine and Winetricks. Steam downloaded and installed game + DirectX. Resolution is messed up now (I think; pretty sure). I can't fix it, it's very annoying, and I have no idea what's going on, halp!

    Specs:
    - Ubuntu version: 12.04
    - Wine version: 1.4.1 (have not changed any settings in Wine; using Winetricks)
    - Graphics card: http://www.gigabyte.com/products/pro...px?pid=4361#sp
    - Drivers: proprietary (installing those was a LOT of fun)
    - Note: I have a DVI-to-VGA cord running from my graphics card to my monitor.

    If any more information is needed, I am ready to report.
Thank You: Thanks a lot for your help and all the work you do to support noob ubuntuers like me (:
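
    For what it's worth, the "cannot find mode" error usually means the driver never advertised that resolution, and a mode can be defined by hand with cvt and xrandr. A minimal sketch follows -- generate your own modeline with cvt, since the numbers below are just typical cvt output for 1024x768 at 60 Hz, not values read from this machine:

      # Generate a modeline for the resolution you want (1024x768 at 60 Hz here)
      cvt 1024 768 60
      # cvt prints a line such as:
      #   Modeline "1024x768_60.00"  63.50  1024 1072 1104 1184  768 771 775 798 -hsync +vsync
      # Register that modeline, attach it to the DVI output, then switch to it:
      xrandr --newmode "1024x768_60.00" 63.50 1024 1072 1104 1184 768 771 775 798 -hsync +vsync
      xrandr --addmode DVI-I-0 "1024x768_60.00"
      xrandr --output DVI-I-0 --mode "1024x768_60.00"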


  • Rotate a Video 90 degrees with VLC or Windows Live Movie Maker

    - by DigitalGeekery
    Have you ever captured video with your cell phone or camcorder, only to discover when you play it back on your computer that the video is rotated 90 degrees? Or maybe you shot it that way on purpose because you preferred portrait style to a landscape view? Before you go straining your neck or flipping your monitor on its side to watch your video, we'll show you a few easier methods. If you simply want to rotate the video while you watch it, we'll show you how to accomplish that with VLC Media Player. If you want to convert the video so it is rotated permanently, we'll show you how to do that with Windows Live Movie Maker and output your video as a WMV file.

    Rotate and Watch a Video in VLC

    Download, install, and run VLC Media Player (see the download link below). Open your video file by going to Media > Open File… and browsing for your file, or by just dragging and dropping your video onto the VLC player. Choose Tools from the menu bar and select Effects and Filters. On the Video Effects tab, tick the Transform checkbox and choose your degrees of rotation. The video is rotated counter-clockwise, so to rotate clockwise 90 degrees you'll want to choose Rotate by 270 degrees. Now you can enjoy your video the way it was intended to be viewed.

    Rotate and Convert the Video with Windows Live Movie Maker

    Starting with Windows 7, Windows Movie Maker no longer comes pre-installed with the OS. It's now part of the Windows Live suite that is available as a separate, free download for Windows 7 and Vista (Windows XP is not supported). You can find the link to our detailed instructions on how to install Windows Live at the end of the article. To add your video files to Windows Live Movie Maker, click Add videos and photos on the Home tab, or drag and drop the video into the blank area on the right side of the application. Next, you'll need to rotate the video: staying on the Home tab, click Rotate right 90° or Rotate left 90°. You'll see your video is now oriented properly on the left. To save and convert your video to WMV format, click the Movie Maker tab just to the left of the Home tab, hover your cursor over Save movie, and then select your output settings. You also have the option to burn directly to DVD. Browse for a location to save it, rename the output file if you'd like, and click Save. You'll be notified when the file is complete. Now you'll have your video properly oriented in WMV file format.

    These are two rather easy ways to accomplish rotating your video. Unfortunately, Windows Live Movie Maker doesn't give you a lot of options for output. If you want to output to a file, your only choice is WMV format or DVD. However, previous versions will also allow you to export to AVI.

    How-To Geek's Install Windows Live Essentials In Windows 7 article.
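
    If you prefer the command line, the same transform filter can be enabled when launching VLC. A quick sketch, assuming a reasonably recent VLC build (option names have shifted between VLC versions, so check vlc --help on yours):

      # Play the file rotated 90 degrees clockwise for this session only;
      # the transform filter rotates counter-clockwise, hence type 270:
      vlc --video-filter=transform --transform-type=270 MyVideo.mp4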
    Download Windows Live
    Download VLC Media Player


  • User Experience Highlights in PeopleSoft and PeopleTools: Direct from Jeff Robbins

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience

    This is the fifth in a series of blog posts on the user experience (UX) highlights in various Oracle product families. The last posted interview was with Nadia Bendjedou, Senior Director, Product Strategy, on upcoming Oracle E-Business Suite user experience highlights. You'll see themes around productivity and efficiency, and get an early look at the latest mobile offerings coming through these product lines. Today's post is on the user experience in PeopleSoft and PeopleTools. To learn more about what's ahead, attend PeopleSoft or PeopleTools OpenWorld presentations. This interview is with Jeff Robbins, Senior Director, PeopleSoft Development.

    Q: How would you describe the vision you have for the user experience of PeopleSoft?

    A: Intuitive – Specifically, customers use PeopleSoft to help their employees do their day-to-day work, and the UI (user interface) has been helpful and assistive in that effort. If it's not obvious how to do a task, then the UI isn't working. So the application needs to make it simple for users to find the information they need, complete a task, and do all the things they are responsible for, and it really helps when the UI just makes sense.

    Productive – PeopleSoft is a tool used to support people in doing their work, and a lot of users are measured by how much work they're able to get done per hour, per day, etc. The UI needs to help them be as productive as possible, and can't make them waste time or energy. The UI needs to reflect the type of work necessary for a task: if it's data entry, the UI needs to assist the user in getting information into the system; for analysts, the UI needs to help users assess or analyze information in a particular way.

    Innovative – The concept of the UI being innovative is something we've been working on for years. It's not just that we want to be seen as innovative; the fact is that companies are asking their employees to do more than they've ever asked before. More often, companies want to roll out processes as employee or manager self-service, where an employee is responsible for reviewing and maintaining their own data. So we've had to reinvent, and ask, "How can we modify the ways an employee interacts with our applications so that they can be more productive and efficient – even with tasks that are entirely unfamiliar?" Our focus on innovation has forced us to design new ways for users to interact with the entire application.

    Q: How are the UX features you have delivered so far resonating with customers?

    A: Resonating very well. We're hearing tremendous responses from users, managers, and decision-makers, who are very happy with the improved user experience. Many of the individual features resonate well. Some have really hit home; others are better than they used to be but show us that there's still room for improvement.

    A couple of innovations really stand out: features that have a significant effect on how users interact with PeopleSoft. First, the deployment of PeopleSoft in a way that's more like a consumer website, with the PeopleSoft Home page and Dashboards. This new approach is very web-centric, where users feel they're coming to a website rather than logging into an enterprise application. There's lots of information from all around the organization collected in a way that feels very familiar to users. In order to do your job, you can come to this web site rather than having to learn how to log into an application and figure out a complicated menu.

    Companies can host these really rich web sites for employees that are home pages for accessing critical tasks and information. Incorporating search into the whole navigation process is another hit. Rather than having to log in and choose a task from a menu, users come to the web site and begin a task by simply searching for data: themselves, another employee, a customer record, whatever. The search results include the data along with a set of actions the user might take, completely eliminating the need to hunt through a complicated system menu. Search-centric navigation is really sitting well with customers who are trying to deploy an intuitive set of systems.

    Q: Are any UX highlights more popular than you expected them to be?

    A: We introduced a feature called Pivot Grid in the last release, which is a combination of an interactive grid, like an Excel pivot table, along with a dynamic visual chart that automatically graphs the data. I wasn't certain at first how extensively this would be used. It looked like an innovative tool, but it wasn't clear how it would be incorporated into business process applications. The fact is that everyone who sees Pivot Grids is thrilled with that kind of interactivity. It reflects the amount of analytical thinking customers are asking employees to do. Employees can't just enter data any more. They must interact with it, analyze it, and make decisions. Pivot Grids fit into this way of working.

    Q: What can you tell us about PeopleSoft's mobile offerings?

    A: A lot of customers are finding that mobile is the chief priority in their organization. They tell us they want their employees to be able to access company information from their mobile devices. Of course, not everyone has the same requirements, so we're working to make sure we can help our customers accomplish what they're trying to do. We've already delivered a number of mobile features. For instance, PeopleSoft home pages, dashboards, and workcenters all work well on an iPad, straight out of the box. We've delivered a number of key functions and tasks for mobile workers – those who are responsible for using a mobile device to manage inventory, for example. Customers tell us they also need a holistic strategy, one that allows their employees to access nearly every task from a mobile device. While we don't expect users to do extensive data entry from their smartphone, it makes sense that they have access to company information and systems while away from their desk. That's where our strategy is going now. We plan to unveil a number of new mobile offerings at OpenWorld. Some will be available then, some shortly after.

    Q: What else are you working on now that you think is going to be exciting to customers at Oracle OpenWorld?

    A: Our next release – the big thing is PeopleSoft 9.2, and we'll be talking about the huge amount of work that's gone into the next versions. A new toolset, 8.53, will be coming, and there's a lot to talk about there, as well as the next generation of PeopleSoft 9.2. We have a ton of new stuff coming.

    Q: What do you want PeopleSoft customers to know?

    A: We have been focusing on the user experience in PeopleSoft as a very high priority for the last four years, and it's had interesting effects. One thing is that the application is better and more usable. We've made visible improvements. Another aspect is that in customers' minds, the PeopleSoft brand is being reinvigorated. Customers invested in PeopleSoft years ago, and then they weren't sure where PeopleSoft was going.

    This investment in the UI and overall user experience keeps PeopleSoft current, innovative, and fresh. Customers are able to take advantage of a lot of new features, even on the older applications, simply by upgrading their PeopleTools. The interest in that ability has been tremendous. Knowing they have a lot of these features available right now – that's pretty huge. There's been a tremendous amount of positive response, just on the fact that we're focusing on the user experience.

    Editor's note: For more on PeopleSoft and PeopleTools user experience highlights, visit the Usable Apps web site. To find out more about these enhancements at OpenWorld, be sure to check out these sessions:

    GEN8928  General Session: PeopleSoft Update and Product Roadmap
    CON9183  PeopleSoft PeopleTools Technology Roadmap
    CON8932  New Functional PeopleSoft PeopleTools Capabilities for the Line-of-Business User
    CON9196  PeopleSoft PeopleTools Roadmap: Mobile Applications
    CON9186  Case Study: Delivering a Groundbreaking User Interface with PeopleSoft PeopleTools


  • Video on Architecture and Code Quality using Visual Studio 2012 – interview with Marcel de Vries and Terje Sandstrom by Adam Cogan

    - by terje
    Find the video HERE. Adam Cogan did a great Web TV interview with Marcel de Vries and myself on the topics of architecture and code quality. It was real fun participating in this session. Although we know each other from the MVP ALM community, Marcel, Adam and I haven't worked together before. It was very interesting to see how we agreed on so many terms, and how alike we were thinking. The basics of ensuring you have a good architecture and how you could document it is one thing. We also agreed on the importance of having a high-quality code base, and on how we used the Visual Studio 2012 tools, and some others (NDepend, for example), to measure and ensure that the code quality was where it should be. As the tools, methods and thinking popped up during the interview, it was a lot of "Hey! I do that too!". The tools are not only for "after the fact" work; we use them during the coding. That way the tools become an integrated part of our coding work, and help us to find issues we may have overlooked.

    The video has a bunch of call-outs pinpointing important things to remember. These are also listed on the corresponding web page. I haven't seen that touch before, but really liked this way of doing it – it makes it much easier to spot the highlights. Titus Maclaren and Raj Dhatt from SSW have done a terrific job producing this video. And thanks to Lei Xu for doing the camera and recording job. Thanks guys!

    Also, if you are at TechEd Amsterdam 2012, go and listen to Adam Cogan's session on "A modern architecture review: Using the new code review tools", Friday 29th, 10.15-11.30, and Marcel de Vries's session on "Intellitrace, what is it and how can I use it to my benefit", Wednesday 27th, 5-6.15.

    The highlights point out some important practices. I'll elaborate on a few of them here:

    Add instructions on how to compile the solution. You do this by adding a text file with instructions to the solution, and keeping it under source control. These instructions should contain what is needed on top of a standard install of Visual Studio. I do a lot of code reviews, and more often than not, I am not even able to compile the program, because they have used some tool or library that needs to be installed. The same applies to any new developer who enters the team, so do this to increase your productivity when the team changes, or a team member switches computer. Don't forget to document what you have to configure on the computer, IIS being a common one. The more automatic you can make this, the better. Use NuGet to pull down libraries. When the text document gets to more than, say, half a page, with a bunch of different things to do, convert it into a PowerShell script instead.

    The metrics warning levels. These are very conservatively set by Microsoft. You rarely see anything but green, and besides, you should have color scales for each of the metrics. I have a blog post describing a more appropriate set of levels, based on both research work and industry "best practices". The essential limits are:

    Cyclomatic complexity and coupling (higher numbers are worse), on method level:
    - Green: from 0 to 10.
    - Yellow: from 10 to 20 (some say 15). Acceptable, but have a look to see if there is something unneeded here.
    - Red: from 20 to 40. Action required; get these down.
    - Bleeding red: above 40. This is the real red alert. Immediate action! (My invention, as people have asked what I do when I have a cyclomatic complexity of 150. The only answer I could think of was: RUN!)

    Maintainability index (lower numbers are worse; scale from 0 to 100), on method level:
    - Green: 60 to 100.
    - Yellow: 40 to 60. You will always have methods here too; accept the higher ones, and take a look at those that are down toward the lower limit. Check up against the other metrics.
    - Red: 20 to 40. Action required; fix these.
    - Bleeding red: below 20. Immediate action required.

    When doing metrics analysis, you should leave the generated code out. You do this by adding attributes; unfortunately Microsoft has "forgotten" to add these to all their stuff, so you might have to add them to some of the code. In most cases it can be done so that it is not overwritten by a new round of code generation. Take a look at my blog post here for details on how to do that.

    Class-level metrics might also be useful, at least for coupling and maintenance, but it is much more difficult to set any fixed limits on those. Any metric aggregations on a higher level tend to be pretty useless, as the number of methods varies quite a bit, and there is little science on what number of methods can be regarded as good or bad. NDepend has a recommendation, but they say it may vary too. And in these days of data binding, the number might be pretty high, as properties count as methods. However, if you take the worst-case situations, classes with more than 20 methods are suspicious, and coupling and cyclomatic complexity go red above 20, so any classes with more than 20x20 = 400 for these measures should be checked over.

    In the video we mention the SOLID principles, coined by "Uncle Bob" (Robert C. Martin). One of them, the Dependency Inversion principle, we discuss in the video. It is important to note that this principle is NOT about whether you should use a Dependency Inversion Container or not; it is about how you design the interfaces and interactions between your classes. The Dependency Inversion Container is just one technique based on this principle, whose main purpose is to isolate things you would like to change at runtime, for example if you implement a plug-in architecture. Overuse of a Dependency Inversion Container is, however, NOT a good thing. It should be used for a purpose, and not as a general DI solution. The general DI solution and thinking, however, is useful far beyond the DIC. You should always "program to an abstraction", and not to the concreteness.

    We also talk a bit about the GRASP patterns, a term coined by Craig Larman in his book Applying UML and Patterns. GRASP stands for General Responsibility Assignment Software Patterns; these describe fundamental principles of object design and responsibility assignment. What I find great with these patterns is that they are another way to focus on the responsibility of a class. One of the things I have most often found broken in software designs is that classes lack responsibility, and as a result there are a lot of classes mucking around in the internals of other classes.

    We also discuss the term "Code Smells". This term was invented by Kent Beck and Martin Fowler when they worked on Fowler's "Refactoring" book. A code smell is a set of "bad" coding practices, which are the drivers behind a corresponding set of refactorings. Here is a good list of the smells and their corresponding refactoring patterns. See also this.


  • Add the Vista Style Sidebar Back to Windows 7

    - by Mysticgeek
    If you are moving from Vista to Windows 7, you might miss the Sidebar that was introduced in Vista. Today we take a look at a couple of options for getting a sidebar back in Windows 7.

    Copy Files from Vista

    Note: In this example we are using 32-bit versions of Vista and Windows 7. Make sure you are logged in with Administrator credentials. If you have a Vista machine running, we can copy the Windows Sidebar files over to the Windows 7 machine. On the Vista machine, navigate to C:\Program Files and copy the Windows Sidebar folder and all of its contents over to a flash drive or network location. On the Windows 7 machine, go to C:\Program Files and rename the Windows Sidebar folder to something like Windows Sidebar_old. Now copy the Vista Windows Sidebar folder into C:\Program Files… you will then have both folders, Windows Sidebar and Windows Sidebar_old, in your C:\Program Files folder. Right-click on the desktop and select Gadgets. There you are… the original Vista Sidebar is back and will act as it did in Vista.

    Move Sidebar Gadgets

    As another workaround, if you don't have a copy of Vista you can simply move the Desktop Gadgets you want over to the right side of the screen, and they will stay there… no dock needed. Type gadgets into the Search box in the Windows Start Menu and click on Desktop Gadgets. Then drag the included gadgets you want over to the right side of the screen, or click on the link to Get more gadgets online to find more. Once you have them where you want them, each time you reboot they will still be in the same location. This holds true no matter where you place them on your desktop as well.

    Install Desktop Sidebar

    If you want an enhanced sidebar that includes a lot of different features, and don't have a copy of Vista, you might want to check out Desktop Sidebar Beta (link below). This is a freeware application that works with Windows XP, Vista, and Windows 7. After installation you can access it from the Start Menu… Here is how it will look after you launch it… It includes several pre-installed panels, including a clock, media player, search bar, slideshow, messenger, Outlook inbox, tasks, quick launch, performance… and a lot more. It is highly customizable and allows you to change skins, add various levels of transparency, and a lot more. One caveat with going with Desktop Sidebar is that we didn't find a way to add Windows Gadgets to it (though there might be a plugin for it that we're not aware of). But there are so many options, you may not mind. However, you can still use the desktop gadgets as you normally would in Windows 7.

    Believe it or not, some people actually prefer the Vista-style Sidebar and would like it back in Windows 7. With these options you can get the Vista Sidebar back if you have a copy of Vista, place the gadgets on the desktop, or go the freeware route.
    Download Desktop Sidebar (freeware)


  • Kernel Mode Rootkit

    - by Pajarito
    On the other three computers in my family, I believe that we have a kernel-mode rootkit for Windows. It appears that the same rootkit is on all of them. We think. We changed all the important passwords from my computer, which is running Linux right now. All of the infected computers have Symantec Endpoint Protection, because it's free from the university where my mom and dad work. In my opinion Symantec is a piece of crap, seeing as it didn't even manage to delete the tracking cookies it found when I tried it on my own computer.

    The computers and their set-ups:
    - Computer A: Vista Business; Symantec antivirus; runs as admin, no password; IE8 with default security settings; no other security software other than what comes with Windows.
    - Computer B: XP Home Premium; Symantec antivirus; runs as normal user, no password; admin account with weak password; Spybot; uses IE8 with default settings, sometimes Firefox.
    - Computer C: XP Home Premium; Symantec antivirus; runs as normal user, no password; admin account with weak password; uses IE8 with default settings; no other security programs except what came with Windows.

    This is what's happening, cut and pasted from my dad's forum post:

    -- When I scanned my laptop (Dell XPS M1330 with Windows Vista Small Business), Symantec Endpoint Protection hangs for a while, perhaps 10 seconds or so, on some of the following files: 9129837.exe, hide_evr2.sys, VirusRemoval.vbs, NewVirusRemoval.vbs, dll.dll, alsmt.ext, and _epnt.sys. It does this if I run a scan that I set up to run on a new thumb drive, and it does this even if the thumb drive is not plugged in. It doesn't seem to do this if I scan only the C: drive. I've checked for problems with Symantec Endpoint Protection and also with Microsoft Security Essentials and Malwarebytes Anti-Malware. They found nothing, and I can't find anything by searching for hidden files. Next I tried Microsoft's RootkitRevealer. It finds 279660 (or so) discrepancies, and the interface is so glitchy after that I can't really figure out what is going on. The screen is squirrely. RootkitRevealer pulls up many files in the folder \programdata\applicationdata, and there are numerous appended \applicationdata on the end of that as well. --

    As you can see, what we did was install MSE and MBAM and scan with both of them. Nothing but a tracking cookie. Then I took over and ran rootkitrevealer.exe from Microsoft from a flash drive. It found a bunch of discrepancies, but only about 20 or so were security related, the rest being files that you just couldn't see from Windows Explorer. I couldn't see whether or not the files listed above, the ones that the scan was hanging on, were in the list. The other thing is, I have no idea what to do about the things the scan comes up with. Then we checked the other computers, and they do the same thing when you scan with Symantec. The people at the university seem to think that dad might not have a virus, but two of the computers slowed down noticeably AND IE8 started acting all funny. None of my family is very computer oriented, and two of the possible causes for the rootkit are:

    - My dad bought a new flash drive, which shipped with a data security executable on it.
    - My dad has to download lots of articles for his work.

    Those are the only things that stand out, but it could have been anything. We are currently backing up our data, and I'll post again after trying IceSword 1.22. I just looked at my dad's forum topic, and someone recommended GMER. I'll try that too.


  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
    Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone, and that a focused solution will cost less in the long run than a vendor-supplied general-purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem. But this approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic, enterprise-wide solution, with the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind.

    It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible, and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available, and scalable database system on the market. With its Real Application Clusters (RAC), it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integrator (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration, with its BPEL Process Manager to orchestrate MDM connections to business processes, Identity Management for managing users, WS Manager for managing web services, Business Intelligence Enterprise Edition for analytics, and JDeveloper for creating or extending the MDM management application. Oracle utilizes these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas. But even with a full stack of open, flexible MDM technologies, creating a robust MDM application can be a daunting task.

    For example, a basic MDM solution will need:
    - a set of data access methods that support master data as a service, as well as direct real-time access, batch loads, and extracts;
    - a data migration service for initial loads and periodic updates;
    - a metadata management capability for items such as business entity matrixed relationships and hierarchies;
    - a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements;
    - a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship;
    - a set of data quality functions that can manage structured and unstructured data;
    - a data quality interface to assist with preventing new errors from entering the system, even when data entry happens outside the MDM application itself;
    - a continuing data cleansing function to keep the data up to date;
    - an internal triggering mechanism to create and deploy change information to all connected systems;
    - a comprehensive role-based data security system to control and monitor data access, update rights, and maintain change history;
    - a flexible business rules engine for managing master data processes such as privacy and data movement;
    - a user interface to support casual users and data stewards;
    - a business intelligence structure to support profiling, compliance, and business performance indicators; and
    - an analytical foundation for directly analyzing master data.

    Oracle's pre-built MDM Hub solutions are full-featured, 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years. Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack, in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture that it enables.


  • ATG Live Webcast: Advanced E-Business Suite Architectures

    - by BillSawyer
    I am pleased to announce the ATG Live Webcast event for Dec. 8th, 2011: Advanced E-Business Suite Architectures.

    Join Elke Phelps, Senior Principal Product Manager, and Sriram Veeraraghavan, Senior Principal Software Engineer, as they discuss advanced E-Business Suite architectures that can help you improve performance, scalability, business continuity, utilization, provisioning, and security. This one-hour webcast provides an overview of advanced architectures with Q&A. The session will cover the latest advanced architectural options, including the use of Oracle database high-availability features and functions such as Real Application Clusters, ASM, Active Data Guard, clouds, virtualization, Oracle VM, high-availability and load-balancing architectures, WebLogic Server, and more. It will also cover the latest updates to systems management tools like AutoConfig, and may include sneak previews of upcoming functionality. This event is targeted to architects, system administrators, DBAs, developers, and implementers.

    The agenda for the Advanced E-Business Suite Architectures webcast includes the following topics:
    - Advanced Oracle E-Business Suite Architectures
    - Optional External Integrations
    - Oracle E-Business Suite 12.2
    - Improving Performance and Scalability
    - Providing Business Continuity
    - Improving Utilization and Provisioning
    - Improving Security

    Date: Thursday, December 8, 2011
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Elke Phelps, Senior Principal Product Manager; Sriram Veeraraghavan, Senior Principal Software Engineer

    Webcast Registration Link (Preregistration is optional but encouraged)

    To hear the audio feed:
    - Domestic Participant Dial-In Number: 877-697-8128
    - International Participant Dial-In Number: 706-634-9568
    - Additional International Dial-In Numbers Link
    - Dial-In Passcode: 98514

    To see the presentation, the direct access web conference details are:
    - Website URL: https://ouweb.webex.com
    - Meeting Number: 273291684

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training at http://blogs.oracle.com/stevenChan/entry/e_business_suite_technology_learning

    If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.


  • MySQL is running VERY slow

    - by user1032531
    I have two servers: a VPS and a laptop. I recently re-built both of them, and MySQL is running about 20 times slower on the laptop. Both servers used to run CentOS 5.8 and, I think, MySQL 5.1, and the laptop used to do great, so I do not think it is the hardware. For the VPS, my provider installed CentOS 6.4, and then I installed MySQL 5.1.69 using yum with the CentOS repo. For the laptop, I installed CentOS 6.4 basic server and then installed MySQL 5.1.69 using yum with the CentOS repo. my.cnf for both servers is identical and is shown below. For both servers, I've also included below the output from SHOW VARIABLES, as well as output from sysbench, file system information, and CPU information. I have tried adding skip-name-resolve, but it didn't help. The matrix below shows the SHOW VARIABLES output that differs between the two servers. Again, MySQL was installed the same way, so I do not know why it is different, but it is, and I think this might be why the laptop is executing MySQL so slowly. Why is the laptop running MySQL slowly, and how do I fix it?

    Differences between SHOW VARIABLES on both servers

    +---------------------------+-----------------------+-------------------------+
    | Variable                  | Value-VPS             | Value-Laptop            |
    +---------------------------+-----------------------+-------------------------+
    | hostname                  | vps.site1.com         | laptop.site2.com        |
    | max_binlog_cache_size     | 4294963200            | 18446744073709500000    |
    | max_seeks_for_key         | 4294967295            | 18446744073709500000    |
    | max_write_lock_count      | 4294967295            | 18446744073709500000    |
    | myisam_max_sort_file_size | 2146435072            | 9223372036853720000     |
    | myisam_mmap_size          | 4294967295            | 18446744073709500000    |
    | plugin_dir                | /usr/lib/mysql/plugin | /usr/lib64/mysql/plugin |
    | pseudo_thread_id          | 7568                  | 2                       |
    | system_time_zone          | EST                   | PDT                     |
    | thread_stack              | 196608                | 262144                  |
    | timestamp                 | 1372252112            | 1372252046              |
    | version_compile_machine   | i386                  | x86_64                  |
    +---------------------------+-----------------------+-------------------------+

    my.cnf for both servers

    [root@server1 ~]# cat /etc/my.cnf
    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    user=mysql
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0

    [mysqld_safe]
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid
    innodb_strict_mode=on
    sql_mode=TRADITIONAL
    # sql_mode=STRICT_TRANS_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE
    character-set-server=utf8
    collation-server=utf8_general_ci
    log=/var/log/mysqld_all.log
    [root@server1 ~]#

    VPS SHOW VARIABLES Info

    Same as Laptop shown below, but with the changes per the above matrix (removed to keep me under the 30000 characters required by ServerFault).

    Laptop SHOW VARIABLES Info

    auto_increment_increment 1 auto_increment_offset 1 autocommit ON automatic_sp_privileges ON back_log 50 basedir /usr/ big_tables OFF binlog_cache_size 32768 binlog_direct_non_transactional_updates OFF binlog_format STATEMENT bulk_insert_buffer_size 8388608 character_set_client utf8 character_set_connection utf8 character_set_database latin1 character_set_filesystem binary character_set_results utf8 character_set_server latin1 character_set_system utf8 character_sets_dir /usr/share/mysql/charsets/ collation_connection utf8_general_ci collation_database latin1_swedish_ci collation_server latin1_swedish_ci completion_type 0 concurrent_insert 1 connect_timeout 10 datadir /var/lib/mysql/ date_format %Y-%m-%d datetime_format %Y-%m-%d %H:%i:%s default_week_format 0 delay_key_write ON delayed_insert_limit 100 delayed_insert_timeout 300
delayed_queue_size 1000 div_precision_increment 4 engine_condition_pushdown ON error_count 0 event_scheduler OFF expire_logs_days 0 flush OFF flush_time 0 foreign_key_checks ON ft_boolean_syntax + -><()~*:""&| ft_max_word_len 84 ft_min_word_len 4 ft_query_expansion_limit 20 ft_stopword_file (built-in) general_log OFF general_log_file /var/run/mysqld/mysqld.log group_concat_max_len 1024 have_community_features YES have_compress YES have_crypt YES have_csv YES have_dynamic_loading YES have_geometry YES have_innodb YES have_ndbcluster NO have_openssl DISABLED have_partitioning YES have_query_cache YES have_rtree_keys YES have_ssl DISABLED have_symlink DISABLED hostname server1.site2.com identity 0 ignore_builtin_innodb OFF init_connect init_file init_slave innodb_adaptive_hash_index ON innodb_additional_mem_pool_size 1048576 innodb_autoextend_increment 8 innodb_autoinc_lock_mode 1 innodb_buffer_pool_size 8388608 innodb_checksums ON innodb_commit_concurrency 0 innodb_concurrency_tickets 500 innodb_data_file_path ibdata1:10M:autoextend innodb_data_home_dir innodb_doublewrite ON innodb_fast_shutdown 1 innodb_file_io_threads 4 innodb_file_per_table OFF innodb_flush_log_at_trx_commit 1 innodb_flush_method innodb_force_recovery 0 innodb_lock_wait_timeout 50 innodb_locks_unsafe_for_binlog OFF innodb_log_buffer_size 1048576 innodb_log_file_size 5242880 innodb_log_files_in_group 2 innodb_log_group_home_dir ./ innodb_max_dirty_pages_pct 90 innodb_max_purge_lag 0 innodb_mirrored_log_groups 1 innodb_open_files 300 innodb_rollback_on_timeout OFF innodb_stats_method nulls_equal innodb_stats_on_metadata ON innodb_support_xa ON innodb_sync_spin_loops 20 innodb_table_locks ON innodb_thread_concurrency 8 innodb_thread_sleep_delay 10000 innodb_use_legacy_cardinality_algorithm ON insert_id 0 interactive_timeout 28800 join_buffer_size 131072 keep_files_on_create OFF key_buffer_size 8384512 key_cache_age_threshold 300 key_cache_block_size 1024 key_cache_division_limit 100 language /usr/share/mysql/english/ large_files_support ON large_page_size 0 large_pages OFF last_insert_id 0 lc_time_names en_US license GPL local_infile ON locked_in_memory OFF log OFF log_bin OFF log_bin_trust_function_creators OFF log_bin_trust_routine_creators OFF log_error /var/log/mysqld.log log_output FILE log_queries_not_using_indexes OFF log_slave_updates OFF log_slow_queries OFF log_warnings 1 long_query_time 10.000000 low_priority_updates OFF lower_case_file_system OFF lower_case_table_names 0 max_allowed_packet 1048576 max_binlog_cache_size 18446744073709547520 max_binlog_size 1073741824 max_connect_errors 10 max_connections 151 max_delayed_threads 20 max_error_count 64 max_heap_table_size 16777216 max_insert_delayed_threads 20 max_join_size 18446744073709551615 max_length_for_sort_data 1024 max_long_data_size 1048576 max_prepared_stmt_count 16382 max_relay_log_size 0 max_seeks_for_key 18446744073709551615 max_sort_length 1024 max_sp_recursion_depth 0 max_tmp_tables 32 max_user_connections 0 max_write_lock_count 18446744073709551615 min_examined_row_limit 0 multi_range_count 256 myisam_data_pointer_size 6 myisam_max_sort_file_size 9223372036853727232 myisam_mmap_size 18446744073709551615 myisam_recover_options OFF myisam_repair_threads 1 myisam_sort_buffer_size 8388608 myisam_stats_method nulls_unequal myisam_use_mmap OFF net_buffer_length 16384 net_read_timeout 30 net_retry_count 10 net_write_timeout 60 new OFF old OFF old_alter_table OFF old_passwords OFF open_files_limit 1024 optimizer_prune_level 1 optimizer_search_depth 62 
optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on pid_file /var/run/mysqld/mysqld.pid plugin_dir /usr/lib64/mysql/plugin port 3306 preload_buffer_size 32768 profiling OFF profiling_history_size 15 protocol_version 10 pseudo_thread_id 3 query_alloc_block_size 8192 query_cache_limit 1048576 query_cache_min_res_unit 4096 query_cache_size 0 query_cache_type ON query_cache_wlock_invalidate OFF query_prealloc_size 8192 rand_seed1 rand_seed2 range_alloc_block_size 4096 read_buffer_size 131072 read_only OFF read_rnd_buffer_size 262144 relay_log relay_log_index relay_log_info_file relay-log.info relay_log_purge ON relay_log_space_limit 0 report_host report_password report_port 3306 report_user rpl_recovery_rank 0 secure_auth OFF secure_file_priv server_id 0 skip_external_locking ON skip_name_resolve OFF skip_networking OFF skip_show_database OFF slave_compressed_protocol OFF slave_exec_mode STRICT slave_load_tmpdir /tmp slave_max_allowed_packet 1073741824 slave_net_timeout 3600 slave_skip_errors OFF slave_transaction_retries 10 slow_launch_time 2 slow_query_log OFF slow_query_log_file /var/run/mysqld/mysqld-slow.log socket /var/lib/mysql/mysql.sock sort_buffer_size 2097144 sql_auto_is_null ON sql_big_selects ON sql_big_tables OFF sql_buffer_result OFF sql_log_bin ON sql_log_off OFF sql_log_update ON sql_low_priority_updates OFF sql_max_join_size 18446744073709551615 sql_mode sql_notes ON sql_quote_show_create ON sql_safe_updates OFF sql_select_limit 18446744073709551615 sql_slave_skip_counter sql_warnings OFF ssl_ca ssl_capath ssl_cert ssl_cipher ssl_key storage_engine MyISAM sync_binlog 0 sync_frm ON system_time_zone PDT table_definition_cache 256 table_lock_wait_timeout 50 table_open_cache 64 table_type MyISAM thread_cache_size 0 thread_handling one-thread-per-connection thread_stack 262144 time_format %H:%i:%s time_zone SYSTEM timed_mutexes OFF timestamp 1372254399 tmp_table_size 16777216 tmpdir /tmp transaction_alloc_block_size 8192 transaction_prealloc_size 4096 tx_isolation REPEATABLE-READ unique_checks ON updatable_views_with_limit YES version 5.1.69 version_comment Source distribution version_compile_machine x86_64 version_compile_os redhat-linux-gnu wait_timeout 28800 warning_count 0 VPS Sysbench Info [root@vps ~]# cat sysbench.txt sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 8 Doing OLTP test. Running mixed OLTP test Doing read-only test Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases) Using "BEGIN" for starting transactions Using auto_inc on the id column Threads started! Time limit exceeded, exiting... (last message repeated 7 times) Done. OLTP test statistics: queries performed: read: 1449966 write: 0 other: 207138 total: 1657104 transactions: 103569 (1726.01 per sec.) deadlocks: 0 (0.00 per sec.) read/write requests: 1449966 (24164.08 per sec.) other operations: 207138 (3452.01 per sec.) Test execution summary: total time: 60.0050s total number of events: 103569 total time taken by event execution: 479.1544 per-request statistics: min: 1.98ms avg: 4.63ms max: 330.73ms approx. 95 percentile: 8.26ms Threads fairness: events (avg/stddev): 12946.1250/381.09 execution time (avg/stddev): 59.8943/0.00 [root@vps ~]# Laptop Sysbench Info [root@server1 ~]# cat sysbench.txt sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 8 Doing OLTP test. 
Running mixed OLTP test Doing read-only test Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases) Using "BEGIN" for starting transactions Using auto_inc on the id column Threads started! Time limit exceeded, exiting... (last message repeated 7 times) Done. OLTP test statistics: queries performed: read: 634718 write: 0 other: 90674 total: 725392 transactions: 45337 (755.56 per sec.) deadlocks: 0 (0.00 per sec.) read/write requests: 634718 (10577.78 per sec.) other operations: 90674 (1511.11 per sec.) Test execution summary: total time: 60.0048s total number of events: 45337 total time taken by event execution: 479.4912 per-request statistics: min: 2.04ms avg: 10.58ms max: 85.56ms approx. 95 percentile: 19.70ms Threads fairness: events (avg/stddev): 5667.1250/42.18 execution time (avg/stddev): 59.9364/0.00 [root@server1 ~]# VPS File Info [root@vps ~]# df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/simfs simfs 20971520 16187440 4784080 78% / none tmpfs 6224432 4 6224428 1% /dev none tmpfs 6224432 0 6224432 0% /dev/shm [root@vps ~]# Laptop File Info [root@server1 ~]# df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/mapper/vg_server1-lv_root ext4 72383800 4243964 64462860 7% / tmpfs tmpfs 956352 0 956352 0% /dev/shm /dev/sdb1 ext4 495844 60948 409296 13% /boot [root@server1 ~]# VPS CPU Info Removed to stay under the 30000 character limit required by ServerFault Laptop CPU Info [root@server1 ~]# cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz stepping : 13 cpu MHz : 800.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 3591.39 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz stepping : 13 cpu MHz : 800.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 3591.39 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: [root@server1 ~]# EDIT New Info requested by shakalandy [root@localhost ~]# cat /proc/meminfo MemTotal: 2044804 kB MemFree: 761464 kB Buffers: 68868 kB Cached: 369708 kB SwapCached: 0 kB Active: 881080 kB Inactive: 246016 kB Active(anon): 688312 kB Inactive(anon): 4416 kB Active(file): 192768 kB Inactive(file): 241600 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 4095992 kB SwapFree: 4095992 kB Dirty: 0 kB Writeback: 0 kB AnonPages: 688428 kB Mapped: 65156 kB Shmem: 4216 kB Slab: 92428 kB SReclaimable: 31260 kB SUnreclaim: 61168 kB KernelStack: 2392 kB 
PageTables: 28356 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 5118392 kB Committed_AS: 1530212 kB VmallocTotal: 34359738367 kB VmallocUsed: 343604 kB VmallocChunk: 34359372920 kB HardwareCorrupted: 0 kB AnonHugePages: 520192 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 8556 kB DirectMap2M: 2078720 kB [root@localhost ~]# ps aux | grep mysql root 2227 0.0 0.0 108332 1504 ? S 07:36 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.badobe.com.pid mysql 2319 0.1 24.5 1470068 501360 ? Sl 07:36 0:57 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/localhost.badobe.com.err --pid-file=/var/lib/mysql/localhost.badobe.com.pid root 3579 0.0 0.1 201840 3028 pts/0 S+ 07:40 0:00 mysql -u root -p root 13887 0.0 0.1 201840 3036 pts/3 S+ 18:08 0:00 mysql -uroot -px xxxxxxxxxx root 14449 0.0 0.0 103248 840 pts/2 S+ 18:16 0:00 grep mysql [root@localhost ~]# ps aux | grep mysql root 2227 0.0 0.0 108332 1504 ? S 07:36 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.badobe.com.pid mysql 2319 0.1 24.5 1470068 501356 ? Sl 07:36 0:57 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/localhost.badobe.com.err --pid-file=/var/lib/mysql/localhost.badobe.com.pid root 3579 0.0 0.1 201840 3028 pts/0 S+ 07:40 0:00 mysql -u root -p root 13887 0.0 0.1 201840 3048 pts/3 S+ 18:08 0:00 mysql -uroot -px xxxxxxxxxx root 14470 0.0 0.0 103248 840 pts/2 S+ 18:16 0:00 grep mysql [root@localhost ~]# vmstat 1 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 742172 76376 371064 0 0 6 6 78 202 2 1 97 1 0 0 0 0 742164 76380 371060 0 0 0 16 191 467 2 1 93 5 0 0 0 0 742164 76380 371064 0 0 0 0 148 388 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 159 418 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 145 380 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 166 429 2 1 97 0 0 1 0 0 742164 76380 371064 0 0 0 0 148 373 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 149 382 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 168 408 2 0 97 0 0 0 0 0 742164 76380 371064 0 0 0 0 165 394 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 159 354 2 1 98 0 0 0 0 0 742164 76388 371060 0 0 0 16 180 447 2 0 91 6 0 0 0 0 742164 76388 371064 0 0 0 0 143 344 2 1 98 0 0 0 1 0 742784 76416 370044 0 0 28 580 360 678 3 1 74 23 0 1 0 0 744768 76496 367772 0 0 40 1036 437 865 3 1 53 43 0 0 1 0 747248 76596 365412 0 0 48 1224 561 923 3 2 53 43 0 0 1 0 749232 76696 363092 0 0 32 1132 512 883 3 2 52 44 0 0 1 0 751340 76772 361020 0 0 32 1008 472 872 2 1 52 45 0 0 1 0 753448 76840 358540 0 0 36 1088 512 860 2 1 51 46 0 0 1 0 755060 76936 357636 0 0 28 1012 481 922 2 2 52 45 0 0 1 0 755060 77064 357988 0 0 12 896 444 902 2 1 53 45 0 0 1 0 754688 77148 358448 0 0 16 1096 506 1007 1 1 56 42 0 0 2 0 754192 77268 358932 0 0 12 1060 481 957 1 2 53 44 0 0 1 0 753696 77380 359392 0 0 12 1052 512 1025 2 1 55 42 0 0 1 0 751028 77480 359828 0 0 8 984 423 909 2 2 52 45 0 0 1 0 750524 77620 360200 0 0 8 788 367 869 1 2 54 44 0 0 1 0 749904 77700 360664 0 0 8 928 439 924 2 2 55 43 0 0 1 0 749408 77796 361084 0 0 12 976 468 967 1 1 56 43 0 0 1 0 748788 77896 361464 0 0 12 992 453 944 1 2 54 43 0 1 1 0 748416 77992 361996 0 0 12 784 392 868 2 1 52 46 0 0 1 0 747920 
78092 362336 0 0 4 896 382 874 1 1 52 46 0 0 1 0 745252 78172 362780 0 0 12 1040 444 923 1 1 56 42 0 0 1 0 744764 78288 363220 0 0 8 1024 448 934 2 1 55 43 0 0 1 0 744144 78408 363668 0 0 8 1000 461 982 2 1 53 44 0 0 1 0 743648 78488 364148 0 0 8 872 443 888 2 1 54 43 0 0 1 0 743152 78548 364468 0 0 16 1020 511 995 2 1 55 43 0 0 1 0 742656 78632 365024 0 0 12 928 431 913 1 2 53 44 0 0 1 0 742160 78728 365468 0 0 12 996 470 955 2 2 54 44 0 1 1 0 739492 78840 365896 0 0 8 988 447 939 1 2 52 46 0 0 1 0 738872 78996 366352 0 0 12 972 442 928 1 1 55 44 0 1 1 0 738244 79148 366812 0 0 8 948 549 1126 2 2 54 43 0 0 1 0 737624 79312 367188 0 0 12 996 456 953 2 2 54 43 0 0 1 0 736880 79456 367660 0 0 12 960 444 918 1 1 53 46 0 0 1 0 736260 79584 368124 0 0 8 884 414 921 1 1 54 44 0 0 1 0 735648 79716 368488 0 0 12 976 450 955 2 1 56 41 0 0 1 0 733104 79840 368988 0 0 12 932 453 918 1 2 55 43 0 0 1 0 732608 79996 369356 0 0 16 916 444 889 1 2 54 43 0 1 1 0 731476 80128 369800 0 0 16 852 514 978 2 2 54 43 0 0 1 0 731244 80252 370200 0 0 8 904 398 870 2 1 55 43 0 1 1 0 730624 80384 370612 0 0 12 1032 447 977 1 2 57 41 0 0 1 0 730004 80524 371096 0 0 12 984 469 941 2 2 52 45 0 0 1 0 729508 80636 371544 0 0 12 928 438 922 2 1 52 46 0 0 1 0 728888 80756 371948 0 0 16 972 439 943 2 1 55 43 0 0 1 0 726468 80900 372272 0 0 8 960 545 1024 2 1 54 43 0 1 1 0 726344 81024 372272 0 0 8 464 490 1057 1 2 53 44 0 0 1 0 726096 81148 372276 0 0 4 328 441 1063 2 1 53 45 0 1 1 0 726096 81256 372292 0 0 0 296 387 975 1 1 53 45 0 0 1 0 725848 81380 372284 0 0 4 332 425 1034 2 1 54 44 0 1 1 0 725848 81496 372300 0 0 4 308 386 992 2 1 54 43 0 0 1 0 725600 81616 372296 0 0 4 328 404 1060 1 1 54 44 0 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 1 0 725600 81732 372296 0 0 4 328 439 1011 1 1 53 44 0 0 1 0 725476 81848 372308 0 0 0 316 441 1023 2 2 52 46 0 1 1 0 725352 81972 372300 0 0 4 344 451 1021 1 1 55 43 0 2 1 0 725228 82088 372320 0 0 0 328 427 1058 1 1 54 44 0 1 1 0 724980 82220 372300 0 0 4 336 419 999 2 1 54 44 0 1 1 0 724980 82328 372320 0 0 4 320 430 1019 1 1 54 44 0 1 1 0 724732 82436 372328 0 0 0 388 363 942 2 1 54 44 0 1 1 0 724608 82560 372312 0 0 4 308 419 993 1 2 54 44 0 1 0 0 724360 82684 372320 0 0 0 304 421 1028 2 1 55 42 0 1 0 0 724360 82684 372388 0 0 0 0 158 416 2 1 98 0 0 1 1 0 724236 82720 372360 0 0 0 6464 243 855 3 2 84 12 0 1 0 0 724112 82748 372360 0 0 0 5356 266 895 3 1 84 12 0 2 1 0 724112 82764 372380 0 0 0 3052 221 511 2 2 93 4 0 1 0 0 724112 82796 372372 0 0 0 4548 325 1067 2 2 81 16 0 1 0 0 724112 82816 372368 0 0 0 3240 259 829 3 1 90 6 0 1 0 0 724112 82836 372380 0 0 0 3260 309 822 3 2 88 8 0 1 1 0 724112 82876 372364 0 0 0 4680 326 978 3 1 77 19 0 1 0 0 724112 82884 372380 0 0 0 512 207 508 2 1 95 2 0 1 0 0 724112 82884 372388 0 0 0 0 138 361 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 158 397 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 146 395 2 1 98 0 0 2 0 0 724112 82884 372388 0 0 0 0 160 395 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 163 382 1 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 176 422 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 134 351 2 1 98 0 0 0 0 0 724112 82884 372388 0 0 0 0 190 429 2 1 97 0 0 0 0 0 724104 82884 372392 0 0 0 0 139 358 2 1 98 0 0 0 0 0 724848 82884 372392 0 0 0 4 211 432 2 1 97 0 0 1 0 0 724980 82884 372392 0 0 0 0 166 370 2 1 98 0 0 0 0 0 724980 82884 372392 0 0 0 0 164 397 2 1 98 0 0 ^C [root@localhost ~]#
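
    For reference, a variable-by-variable comparison like the matrix at the top of this post can be generated directly from both servers -- a minimal sketch, with placeholder host names and a password prompt assumed:

      # Dump all runtime variables from each server, then diff the sorted lists.
      mysql -h vps.example.com    -u root -p -N -B -e 'SHOW GLOBAL VARIABLES' | sort > vps_vars.txt
      mysql -h laptop.example.com -u root -p -N -B -e 'SHOW GLOBAL VARIABLES' | sort > laptop_vars.txt
      diff vps_vars.txt laptop_vars.txt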

    Read the article

  • SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered a world record TPC-H @3000GB benchmark result for systems with four processors. This result beats eight-processor results from IBM (POWER7) and HP (x86). The SPARC T4-4 server also delivered better performance per core than these eight-processor systems from IBM and HP. Comparisons below are made system to system, highlighting Oracle's complete software and hardware solution. This database world record result used Oracle's Sun Storage 2540-M2 arrays (rotating disk) connected to a SPARC T4-4 server running Oracle Solaris 11 and Oracle Database 11g Release 2, demonstrating the power of Oracle's integrated hardware and software solution. The SPARC T4-4 server based configuration achieved a TPC-H scale factor 3000 world record for four-processor systems of 205,792 QphH@3000GB with price/performance of $4.10/QphH@3000GB. The SPARC T4-4 server with four SPARC T4 processors (total of 32 cores) is 7% faster than the IBM Power 780 server with eight POWER7 processors (total of 32 cores) on the TPC-H @3000GB benchmark. The SPARC T4-4 server is 36% better in price/performance compared to the IBM Power 780 server on the TPC-H @3000GB benchmark. The SPARC T4-4 server is 29% faster than the IBM Power 780 for data loading. The SPARC T4-4 server is up to 3.4 times faster than the IBM Power 780 server for the Refresh Function. The SPARC T4-4 server with four SPARC T4 processors is 27% faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark. The SPARC T4-4 server is 52% faster than the HP ProLiant DL980 G7 server for data loading. The SPARC T4-4 server is up to 3.2 times faster than the HP ProLiant DL980 G7 for the Refresh Function. The SPARC T4-4 server achieved a peak I/O rate from the Oracle database of 17 GB/sec. This rate was independent of the storage used, as demonstrated by the TPC-H @3000GB benchmark, which used twelve Sun Storage 2540-M2 arrays (rotating disk), and the TPC-H @1000GB benchmark, which used four Sun Storage F5100 Flash Array devices (flash storage). The SPARC T4-4 server showed linear scaling from TPC-H @1000GB to TPC-H @3000GB. This demonstrates that the SPARC T4-4 server can handle the increasingly larger databases required of DSS systems. The SPARC T4-4 server benchmark results demonstrate a complete solution of building Decision Support Systems including data loading, business questions, and refreshing data. Each phase usually has a time constraint, and the SPARC T4-4 server shows superior performance during each phase. Note that the TPC believes that comparisons of results published with different scale factors are misleading and discourages such comparisons. Performance Landscape The following tables list the leading TPC-H @3000GB results for non-clustered systems.
TPC-H @3000GB, Non-Clustered Systems

System | Processor | P/C/T – Memory | Composite (QphH) | $/perf ($/QphH) | Power (QppH) | Throughput (QthH) | Database | Available
SPARC Enterprise M9000 | 3.0 GHz SPARC64 VII+ | 64/256/256 – 1024 GB | 386,478.3 | $18.19 | 316,835.8 | 471,428.6 | Oracle 11g R2 | 09/22/11
SPARC T4-4 | 3.0 GHz SPARC T4 | 4/32/256 – 1024 GB | 205,792.0 | $4.10 | 190,325.1 | 222,515.9 | Oracle 11g R2 | 05/31/12
SPARC Enterprise M9000 | 2.88 GHz SPARC64 VII | 32/128/256 – 512 GB | 198,907.5 | $15.27 | 182,350.7 | 216,967.7 | Oracle 11g R2 | 12/09/10
IBM Power 780 | 4.1 GHz POWER7 | 8/32/128 – 1024 GB | 192,001.1 | $6.37 | 210,368.4 | 175,237.4 | Sybase 15.4 | 11/30/11
HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 8/64/128 – 512 GB | 162,601.7 | $2.68 | 185,297.7 | 142,685.6 | SQL Server 2008 | 10/13/10

P/C/T = Processors, Cores, Threads; QphH = the Composite Metric (bigger is better); $/QphH = the Price/Performance metric in USD (smaller is better); QppH = the Power Numerical Quantity; QthH = the Throughput Numerical Quantity

The following table lists data load times and refresh function times during the power run.

TPC-H @3000GB, Non-Clustered Systems, Database Load & Database Refresh

System | Processor | Data Loading (h:m:s) | T4 Advan | RF1 (sec) | T4 Advan | RF2 (sec) | T4 Advan
SPARC T4-4 | 3.0 GHz SPARC T4 | 04:08:29 | 1.0x | 67.1 | 1.0x | 39.5 | 1.0x
IBM Power 780 | 4.1 GHz POWER7 | 05:51:50 | 1.5x | 147.3 | 2.2x | 133.2 | 3.4x
HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 08:35:17 | 2.1x | 173.0 | 2.6x | 126.3 | 3.2x

Data Loading = database load time; RF1 = power test first refresh transaction; RF2 = power test second refresh transaction; T4 Advan = the ratio of time to T4 time. Complete benchmark results can be found at the TPC benchmark website, http://www.tpc.org.

Configuration Summary and Results
Hardware Configuration:
SPARC T4-4 server
4 x SPARC T4 3.0 GHz processors (total of 32 cores, 128 threads)
1024 GB memory
8 x internal SAS (8 x 300 GB) disk drives
External Storage:
12 x Sun Storage 2540-M2 array storage, each with 12 x 15K RPM 300 GB drives, 2 controllers, 2 GB cache
Software Configuration:
Oracle Solaris 11 11/11
Oracle Database 11g Release 2 Enterprise Edition
Audited Results:
Database Size: 3000 GB (Scale Factor 3000)
TPC-H Composite: 205,792.0 QphH@3000GB
Price/performance: $4.10/QphH@3000GB
Available: 05/31/2012
Total 3 year Cost: $843,656
TPC-H Power: 190,325.1
TPC-H Throughput: 222,515.9
Database Load Time: 4:08:29

Benchmark Description The TPC-H benchmark is a performance benchmark established by the Transaction Processing Performance Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. Its queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC. TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system. The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor).
QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years of maintenance to the QphH. A secondary metric is storage efficiency, which is the ratio of total configured disk space in GB to the scale factor. Key Points and Best Practices Twelve Sun Storage 2540-M2 arrays were used for the benchmark. Each Sun Storage 2540-M2 array contains 12 15K RPM drives and is connected to a single dual-port 8Gb FC HBA using 2 ports. Each Sun Storage 2540-M2 array showed 1.5 GB/sec for sequential read operations and showed linear scaling, achieving 18 GB/sec with twelve Sun Storage 2540-M2 arrays. These were stand-alone I/O tests. The peak I/O rate measured from the Oracle database was 17 GB/sec. Oracle Solaris 11 11/11 required very little system tuning. Some vendors try to make the point that storage ratios are of customer concern. However, storage ratio size has more to do with disk layout and the increasing capacities of disks, so this is not an important metric by which to compare systems. The SPARC T4-4 server and Oracle Solaris efficiently managed the system load of over one thousand Oracle Database parallel processes. Six Sun Storage 2540-M2 arrays were mirrored to another six Sun Storage 2540-M2 arrays, on which all of the Oracle database files were placed. I/O performance was high and balanced across all the arrays. The TPC-H Refresh Function (RF) simulates the periodic refresh portion of a data warehouse by adding new sales data and deleting old sales data. Parallel DML (parallel insert and delete in this case) and database log performance are key for this function, and the SPARC T4-4 server outperformed both the IBM POWER7 server and the HP ProLiant DL980 G7 server. (See the RF columns above.) See Also Transaction Processing Performance Council (TPC) Home Page Ideas International Benchmark Page SPARC T4-4 Server oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Sun Storage 2540-M2 Array oracle.com OTN Disclosure Statement TPC-H, QphH, $/QphH are trademarks of the Transaction Processing Performance Council (TPC). For more information, see www.tpc.org. SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads.
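A quick arithmetic check of the composite metric from the published figures (this derivation is mine, not part of the official disclosure): TPC-H defines QphH as the geometric mean of the Power and Throughput metrics, and the SPARC T4-4 numbers above are consistent with that definition:

    QphH@3000GB = \sqrt{QppH \times QthH} = \sqrt{190325.1 \times 222515.9} \approx 205792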

    Read the article

  • Closer look at the SOA 12c Feature: Oracle Managed File Transfer

    - by Tshepo Madigage-Oracle
    The rapid growth of cloud-based applications in the enterprise, combined with organizations' desire to integrate applications with mobile technologies, is dramatically increasing application integration complexity. To meet this challenge, Oracle introduced Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution. With simplified cloud, mobile, on-premises, and Internet of Things (IoT) integration capabilities, all within a single platform, Oracle SOA Suite 12c helps organizations speed time to integration, improve productivity, and lower TCO. To extend its B2B solution capabilities with Oracle SOA Suite 12c, Oracle unveiled Oracle Managed File Transfer, an integrated solution that enables organizations to virtually eliminate file transfer complexities. This allows customers to load data securely into Oracle Cloud applications as well as third-party cloud or partner applications. Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal departments and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can leverage more resources to manage the transfer of files. The extensive reporting capabilities allow you to get the status of a file transfer quickly and resubmit it as required. You can protect data in your DMZ by using the SSH/FTP reverse proxy. Oracle Managed File Transfer can help integrate applications by transferring files between them in complex use case patterns:
    Standalone: transferring files on its own, using the embedded FTP and sFTP servers and the file systems to which it has access.
    SOA Integration: a SOA application can be the source or target of a transfer. A SOA application can also be the common endpoint for the target of one transfer and the source of another.
    B2B Integration: a B2B application can be the source or target of a transfer. A B2B application can also be the common endpoint for the target of one transfer and the source of another.
    Healthcare Integration: a healthcare application can be the source or target of a transfer. A healthcare application can also be the common endpoint for the target of one transfer and the source of another.
    Oracle Service Bus (OSB) Integration: Oracle MFT can integrate with Oracle Service Bus web service interfaces. An OSB interface can be the source or target of a transfer. An Oracle Service Bus interface can also be the common endpoint for the target of one transfer and the source of another.
    Hybrid Integration: Oracle MFT can be one participant in a web of data transfers that includes multiple application types.
    Oracle Managed File Transfer has four user roles: file handlers, designers, monitors, and administrators.
    File Handlers:
    - Copy files to file transfer staging areas, which are called sources.
    - Retrieve files from file transfer destinations, which are called targets.
    Designers:
    - Create, read, update and delete file transfer sources.
    - Create, read, update and delete file transfer targets.
    - Create, read, update and delete transfers, which link sources and targets in complete file delivery flows.
    - Deploy and test transfers.
    Monitors:
    - Use the Dashboard and reports to ensure that transfer instances are successful.
    - Pause and resume lengthy transfers.
    - Troubleshoot errors and resubmit transfers.
    - View artifact deployment details and history.
    - View artifact dependence relationships.
    - Enable and disable sources, targets, and transfers.
    - Undeploy sources, targets, and transfers.
    - Start and stop embedded FTP and sFTP servers.
    Administrators:
    - All file handler tasks
    - All designer tasks
    - All monitor tasks
    - Add other users and determine their roles
    - Configure user directory permissions
    - Configure the Oracle Managed File Transfer server
    - Configure embedded FTP and sFTP servers, including security
    - Configure B2B and Healthcare domains
    - Back up and restore the Oracle Managed File Transfer configuration
    - Purge transferred files and instance data
    - Archive and restore instance data and payloads
    - Import and export metadata
    You will find all the related information about SOA 12.1.3 Oracle Managed File Transfer (MFT) in the documentation: Using Oracle Managed File Transfer. Resources and links: Oracle Unveils Oracle SOA Suite 12c; Oracle Managed File Transfer; Oracle Managed File Transfer SOA 12c White Paper. For further enquiries don't hesitate to contact us at [email protected] and join our Partner Webcast on Oracle SOA Suite 12c.

    Read the article

  • Speed up SQL Server queries with PREFETCH

    - by Akshay Deep Lamba
    Problem The SAN data volume has a throughput capacity of 400 MB/sec; however, my query is still running slowly and is waiting on I/O (PAGEIOLATCH_SH). Windows Performance Monitor shows a data volume speed of 4 MB/sec. Where is the problem and how can I find it? Solution This is another summary of a great article published by R. Meyyappan at www.sqlworkshops.com. In my opinion, this is the first article that highlights and explains with working examples how PREFETCH determines the performance of a Nested Loop join. First of all, I just want to recall that Prefetch is a mechanism with which SQL Server can fire up many I/O requests in parallel for a Nested Loop join. When SQL Server executes a Nested Loop join, it may or may not enable Prefetch according to the number of rows in the outer table. If the number of rows in the outer table is greater than 25, then SQL Server will enable and use Prefetch to speed up query performance, but it will not if there are fewer than 25 rows. In this section we are going to see different scenarios where Prefetch is automatically enabled or disabled. These examples only use two tables, RegionalOrders and Orders. If you want to create the sample tables and sample data, please visit www.sqlworkshops.com. The breakdown of the data in the RegionalOrders table is shown below, and the Orders table contains about 6 million rows. In this first example, I am creating a stored procedure against the two tables and then executing it. Before running the stored procedure, I am going to include the actual execution plan.

    --Example provided by www.sqlworkshops.com
    --Create procedure that pulls orders based on City
    --Do not forget to include the actual execution plan
    CREATE PROC RegionalOrdersProc @City CHAR(20)
    AS
    BEGIN
        DECLARE @OrderID INT, @OrderDetails CHAR(200)
        SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
        FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
        WHERE City = @City
    END
    GO
    SET STATISTICS time ON
    GO

    --Example provided by www.sqlworkshops.com
    --Execute the procedure with parameter SmallCity1
    EXEC RegionalOrdersProc 'SmallCity1'
    GO

After running the stored procedure, if we right-click on the Clustered Index Scan and click Properties, we can see the Estimated Number of Rows is 24. If we right-click on Nested Loops and click Properties, we do not see Prefetch, because it is disabled. This behavior was expected, because the number of rows containing the value 'SmallCity1' in the outer table is less than 25. Now, if I run the same procedure with parameter 'BigCity', will Prefetch be enabled?

    --Example provided by www.sqlworkshops.com
    --Execute the procedure with parameter BigCity
    --We are using the cached plan
    EXEC RegionalOrdersProc 'BigCity'
    GO

As we can see from the screenshot below, Prefetch is not enabled and the query takes around 7 seconds to execute. This is because the query used the cached plan from 'SmallCity1', which had Prefetch disabled. Please note that even though we have 999 rows for 'BigCity', the Estimated Number of Rows is still 24. Finally, let's clear the procedure cache to trigger a new optimization and execute the procedure again.

    DBCC FREEPROCCACHE
    GO
    EXEC RegionalOrdersProc 'BigCity'
    GO

This time, our procedure runs in under a second, Prefetch is enabled, and the Estimated Number of Rows is 999. The RegionalOrdersProc can be optimized by using the example below, where we are using an optimizer hint. I have also shown some other hints that could be used as well.
    --Example provided by www.sqlworkshops.com
    --You can fix the issue by using any of the following hints
    --Create procedure that pulls orders based on City
    DROP PROC RegionalOrdersProc
    GO
    CREATE PROC RegionalOrdersProc @City CHAR(20)
    AS
    BEGIN
        DECLARE @OrderID INT, @OrderDetails CHAR(200)
        SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
        FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
        WHERE City = @City
        --Hinting the optimizer to use SmallCity2 for estimation
        OPTION (OPTIMIZE FOR (@City = 'SmallCity2'))
        --Hinting the optimizer to estimate for the current parameters
        --OPTION (RECOMPILE)
        --Hinting the optimizer not to use the histogram but rather
        --density for estimation (average of all 3 cities)
        --OPTION (OPTIMIZE FOR (@City UNKNOWN))
        --OPTION (OPTIMIZE FOR UNKNOWN)
    END
    GO

In conclusion, this tip was mainly aimed at illustrating how Prefetch can speed up query execution and how different numbers of rows can trigger this.

    Read the article

  • The new Auto Scaling Service in Windows Azure

    - by shiju
    One of the key features of the Cloud is on-demand scalability, which lets cloud application developers scale the number of compute resources hosted on the Cloud up or down. Auto Scaling provides the capability to dynamically scale your compute resources up and down based on user-defined policies, Key Performance Indicators (KPI), health status checks, and schedules, without any manual intervention. Auto Scaling is an important feature to consider when designing and architecting cloud based solutions; it can unleash the real power of the Cloud for your apps by providing truly on-demand scalability, and it can also guard the organizational budget for cloud based application deployment. In the past, you had to leverage the Microsoft Enterprise Library Autoscaling Application Block (WASABi) or a service like MetricsHub to implement automatic scaling for your cloud apps hosted on Windows Azure. WASABi required you to host the auto scaling block in a Windows Azure Worker Role to apply auto scaling behaviour to your Windows Azure apps. The newly announced Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure Compute Services such as Cloud Services, Web Sites and Virtual Machines. Unlike WASABi hosted on a Worker Role, you don't need to host any monitoring service to use the new Auto Scaling service, and the Auto Scaling service is available to individual Windows Azure Compute Services as part of scaling. Configure Auto Scaling for a Windows Azure Cloud Service Currently the Auto Scaling service supports Cloud Services, Web Sites and Virtual Machines. In this demo, I will use a Cloud Services app with a Web Role and a Worker Role. To enable Auto Scaling, select your Windows Azure app in the Windows Azure management portal, and choose the "SCALE" tab. The Scale tab shows all the information regarding Auto Scaling. The image below shows that we have currently disabled the AutoScale service. To enable Auto Scaling, you need to choose either CPU or QUEUE. The QUEUE option is not available for Web Sites. The image below demonstrates how to configure Auto Scaling for a Web Role based on CPU utilization. We have configured the web role app to run with 1 to 5 Virtual Machine instances based on CPU utilization within a range of 50 to 80%. If aggregate utilization rises above 80%, it will scale up instances, and it will scale down instances when utilization falls below 50%. The image below demonstrates how to configure Auto Scaling for a Worker Role app based on the messages added to the Windows Azure storage Queue. We configured the worker role app to run with 1 to 3 Virtual Machine instances based on the messages in the Windows Azure storage Queue. Here we have specified a target of 2000 messages per machine. The image below shows the summary of Auto Scaling for the Cloud Service after configuring the service. Summary Auto Scaling is an extremely important behaviour of Cloud applications for providing on-demand scalability without any manual intervention. Windows Azure provides great support for enabling Auto Scaling for apps deployed on the Windows Azure cloud platform. The new Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure Compute Services such as Cloud Services, Web Sites and Virtual Machines.
With the new Auto Scaling service, you don't have to host any monitoring service as you had to with the WASABi block. The Auto Scaling service is an excellent alternative to manually hosting the WASABi block in a Worker Role app.
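To make the queue-based rule concrete, here is a minimal sketch of the arithmetic those portal settings describe (all names are illustrative; Azure evaluates this rule internally, not in your code):

    // Target instance count = ceil(queue length / messages per machine),
    // clamped to the configured instance range.
    interface QueueScaleRule {
      minInstances: number;       // e.g. 1
      maxInstances: number;       // e.g. 3
      messagesPerMachine: number; // e.g. 2000, the target per instance
    }

    function targetInstanceCount(queueLength: number, rule: QueueScaleRule): number {
      const desired = Math.ceil(queueLength / rule.messagesPerMachine);
      return Math.min(rule.maxInstances, Math.max(rule.minInstances, desired));
    }

    // Example: 4500 queued messages with the demo's settings -> 3 instances.
    console.log(targetInstanceCount(4500, { minInstances: 1, maxInstances: 3, messagesPerMachine: 2000 }));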

    Read the article

  • Tips to Make Your Website Cell Phone Friendly

    - by Aditi
    Working on a new website design? Or redesigning your website? There is a lot more to consider nowadays than just user experience, clean code, CSS, etc. One important attribute you must not miss is making it mobile friendly! With the growing use of handhelds and unlimited data plans, people browse on their cellphones, and those devices all come in different sizes. It is tough to make a website that looks great not just on a high resolution widescreen monitor/LCD, but also looks equally impressive at the low resolutions of cellphones. Today we are going to discuss factors that can help you make a website cellphone friendly. Fluid Width Layouts Most people speak of fluid width layouts as a vital step in making your website mobile friendly. Fluid width allows the width of your website to stretch or shrink depending on the browser size. However, while having a layout which flows with the width of the screen's resolution is certainly convenient, more often than not the website was originally laid out with a desktop in mind. Compressing a fluid layout to 320 pixels can do some serious damage to the layout; thus some people strongly believe it is far better to have a mobile style sheet, lay out the content specifically for that screen, and keep more control over the display. The best thing to do is to detect the type of platform that is connected to your website and disable or change some tools and effects to make it look better, if not perfect. Keep Your Web Pages Short You must avoid long pages on your website; a lot of scrolling makes it very unfriendly for users, and on mobile devices this is a huge drawback because of the longer load time it takes to download the page. Everyone likes crisp and concise content; such pages are easier to load and browse. This makes your website accessible across all platforms. Also try to keep URLs short; if users have to type them, save them that much work, especially since typing on a cellphone with no QWERTY keyboard can be tough. Usable Navigation & Search Unlike on desktops, your website's navigation won't work as well on a cellphone. Keep in mind the user experience for cellphone users as you design your navigation. Try to keep your content centered, as cellphone users do have difficulty reading the page. I always look up to Google and their mobile pages as a great example. Keeping a functional and very visible search bar helps mobile users navigate by searching. Understanding Clean Website Code: Evolved for Mobile Clean code is important when you consider the diversity out there in handheld devices. Some cell phones may only understand WAP. More capable phones may understand WAP2, which allows rendering websites with XHTML and CSS. Most mobiles won't display tables, floats, frames, JavaScript, and dynamic menus. Most cellphones will not support cookies. Devices at the high end of the mobile market such as BlackBerry, Palm, or the upcoming iPhone are highly capable and support nearly as much as a standard computer, but the masses still do not have such phones. You can use specific emulators to test your website on mobile devices. Make sure your color combinations provide good contrast between foreground and background colors, particularly for devices with fewer color options.
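As a rough illustration of the platform detection mentioned above (a hedged sketch; the regular expression and class name are my own illustrative choices, not from the article):

    // A minimal client-side sketch of detecting a handheld browser.
    // User-agent sniffing is approximate; treat this as illustrative only.
    function isLikelyMobile(userAgent: string = navigator.userAgent): boolean {
      return /Mobile|Android|BlackBerry|iPhone|iPod|Opera Mini|IEMobile/i.test(userAgent);
    }

    if (isLikelyMobile()) {
      // e.g. swap in a lighter stylesheet or disable heavy effects
      document.documentElement.classList.add("mobile");
    }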

    Read the article

  • Install Oracle Configuration Manager's Standalone Collector

    - by Get Proactive Customer Adoption Team
    The Why and the How If you have heard of Oracle Configuration Manager (OCM), but haven't installed it, I'm guessing this is for one of two reasons. Either you don't know how it helps you or you don't know how to install it. I'll address both of those reasons today. First, let's take a quick look at how My Oracle Support and Oracle Configuration Manager work together, to gain a good understanding of what their differences and roles are before we tackle the install. Oracle Configuration Manager is the tool that actually performs the data collection task. You deploy this lightweight piece of software into your system to collect configuration information about the system, and OCM uploads that data to Oracle's customer configuration repository. Oracle Support Engineers then have the configuration data available when you file a service request. You can also view the data through My Oracle Support. The real value is that the data Oracle Configuration Manager collects can help you avoid problems and get your Service Requests solved more quickly. When you view the information in My Oracle Support's user interface to OCM, it may help you avoid situations that create problems. The proactive tools included in Oracle Configuration Manager help you avoid issues before they occur. You also save time because you didn't need to open a service request. For example, you can use this capability when you need to compare your system configuration at two points in time, or monitor the system's health. If you make the configuration data available to Oracle Support Engineers, when you need to open a Service Request the data helps them diagnose and resolve your critical system issues more quickly, which means you get answers more quickly too. Quick Installation Process Overview Before we dive into the step-by-step details, let me provide a quick overview. For some of you, this will be all you need. Log in to My Oracle Support and download the data collector from the Collector tab. If you don't see the Collector tab, click the More tab to gain access. On the Collector tab, you will find a drop-down list showing which platforms are available. You can also see more ways the Collector can help you if you click through the carousel of benefits. After you download the software for your platform, use FTP to move that file (.zip) from your PC to the server that hosts the Oracle software. Once you have that file on the server, locate the $ORACLE_HOME directory, and unzip the file within that directory. You can then use the command line tool to start the installation process. The installation process requires your My Oracle Support credential (Support Identifier, username, and password) and a proxy specification (host IP address, port number, username, and password). Installation Step-by-Step
    1. Download the collector zip file from My Oracle Support and place it into your $ORACLE_HOME.
    2. Unzip the zip file you downloaded from My Oracle Support; this will create a directory named CCR with several subdirectories.
    3. Using the command line, go to "$ORACLE_HOME/CCR/bin" and run the command "setupCCR".
    4. Provide your My Oracle Support credential: login, password, and Support Identifier.
    5. The installer will start deploying the collector application.
    You have installed the Collector. Post Installation Now that you have installed it successfully, the scheduler is ready to collect configuration information for the software available in your Oracle Home. By default, the first collection will take place the day after the installation.
If you want to run an instrumentation script to start the configuration collection of your Oracle Database server, E-Business Suite, or Enterprise Manager, you will find more details on that in the Installation and Administration Guide for My Oracle Support Configuration Manager. Related documents available on My Oracle Support:
Oracle Configuration Manager Installation and Administration Guide [ID 728989.5]
Oracle Configuration Manager Prerequisites [ID 728473.5]
Oracle Configuration Manager Network Connectivity Test [ID 728970.5]
Oracle Configuration Manager Collection Overview [ID 728985.5]
Oracle Configuration Manager Security Overview [ID 728982.5]
Oracle Software Configuration Manager: Disconnected Mode Collection [ID 453412.1]

    Read the article

  • SQL SERVER – Another lesser known feature of SQL Server Management Studio 2012 – Guest Post by Balmukund Lakhani

    - by Pinal Dave
    This is a fantastic blog post from my dear friend Balmukund (blog | twitter | facebook). He presented a fantastic session at our last UG, and there were lots of requests from attendees for him to blog about it. Well, here is the blog post about that very popular UG session. Let us read the entire blog post in the voice of Balmukund himself. In one of my previous guest blogs on SQL Authority, I wrote about the "Additional Connection Parameter" tab of the login screen in SQL Server Management Studio (a.k.a. SSMS). Along similar lines, this blog is going to show a lesser-known new feature of the main login screen ("Connect to Server") of SSMS 2012. You might have seen the screen below countless times, and you might wonder what there is to blog about in this simple screen. Well, continue reading and you will get the answer. Many times, a DBA has to log in to a production server from a non-regular machine, maybe a developer's workstation. You log in to SQL, do your work, and close Management Studio. Did you know that your server name is saved by Management Studio? Of course, it is a very useful feature, because you may not like to type the server name/IP address every time. Whichever servers you have connected to are stored by Management Studio. But sometimes it's annoying! What would you do if you wanted SQL Server Management Studio to forget "all" the servers listed in the Server name drop-down? To do that, you need to know how and where the list is stored. You can use one of my favorite tools from Sysinternals, called Process Monitor (also known as ProcMon), and easily figure out that it is stored in a file under your Windows user profile. Below is the file in SQL 2008 R2 Management Studio: %appdata%\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin For SQL Server 2012, here is what we can see in ProcMon. So, the path is %appdata%\Microsoft\Microsoft SQL Server\110\Tools\Shell\SqlStudio.bin So far, you might wonder, where is the new feature? I have been asked by many users how to delete entries from the SSMS "Connect to Server" server name list. Well, unofficially, you can directly delete the file we found via ProcMon. Note that deleting the file to get rid of the server list is not officially supported by Microsoft. A better way to achieve this is provided in SSMS 2012. To delete servers from the list, highlight the name you want to delete (via keyboard or mouse) and then press the Delete key. Multi-select is not possible; entries have to be deleted one by one. We can delete as many entries as we want. I have deleted a few from the first screenshot taken, and here is the modified version. This is not available in SQL 2008 R2 and its previous versions. It came from feedback given to the SQL Server product group. Hope you have learned something new today! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • It was worth the wait… Welcome Oracle GoldenGate 11g Release 2

    - by Irem Radzik
    It certainly was worth the wait to meet Oracle GoldenGate 11gR2, because it is full of new features on multiple fronts. In fact, this release has the longest and strongest list of new features in Oracle GoldenGate's history. The new release brings GoldenGate closer to the Oracle Database while expanding the support for global implementations and heterogeneous systems. It is more secure, more flexible, and faster. We announced the availability of Oracle GoldenGate 11gR2 via a press release. If you haven't seen it yet, please check it out. As covered in this announcement, there are a variety of improvements in the product:
    Integrated Capture for Oracle Database: brings Oracle GoldenGate's Capture process closer to the Oracle Database engine and enables support for Advanced Compression, among other benefits.
    Enhanced Conflict Detection & Resolution: speeds and simplifies the conflict detection and resolution process for Active-Active deployments.
    Globalization: Oracle GoldenGate can be deployed for databases that use multi-byte/Unicode character sets.
    Security and Performance Improvements: includes support for the Federal Information Processing Standard (FIPS).
    Increased Extensibility: kick off actions based on an event record in the transaction log or in the Trail file.
    Integration with Oracle Enterprise Manager 12c: in addition to the Oracle GoldenGate Monitor product.
    Expanded Heterogeneity: including capture from IBM DB2 for i on iSeries (AS/400) and delivery to Postgres.
    We will explain these new features in more detail at our upcoming launch webcast: Harness the Power of the New Release of Oracle GoldenGate 11g (Sept 12, 8am/10am PT). In addition to learning more about these new features, the webcast will allow you to ask your questions of product management via the live Q&A section. So, I hope you will not miss this opportunity to explore the new release of Oracle GoldenGate 11g and see how it can deliver enterprise-class real-time data integration solutions. I look forward to a great webcast to unveil GoldenGate's new capabilities.

    Read the article

  • C#/.NET Little Wonders: Using ‘default’ to Get Default Values

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Today's little wonder is another of those small items that can help a lot in certain situations, especially when writing generics. In particular, it is useful in determining what the default value of a given type would be. The Problem: what's the default value for a generic type? There comes a time when you're writing generic code where you may want to set an item of a given generic type. Seems simple enough, right? Well, let's see! Let's say we want to query a Dictionary<TKey, TValue> for a given key and get back the value, but if the key doesn't exist, we'd like a default value instead of throwing an exception. So, for example, we might have the following dictionary defined:

    var lookup = new Dictionary<int, string>
    {
        { 1, "Apple" },
        { 2, "Orange" },
        { 3, "Banana" },
        { 4, "Pear" },
        { 9, "Peach" }
    };

And using those definitions, perhaps we want to do something like this:

    // assume a default
    string value = "Unknown";

    // if the item exists in the dictionary, get its value
    if (lookup.ContainsKey(5))
    {
        value = lookup[5];
    }

But that's inefficient, because then we're double-hashing (once for ContainsKey() and once for the indexer). Well, to avoid the double-hashing, we could use TryGetValue() instead:

    string value;

    // if the key exists, the result will be put in value; if not, default it
    if (!lookup.TryGetValue(5, out value))
    {
        value = "Unknown";
    }

But the "flow" of using TryGetValue() can get clunky at times when you just want to assign either the value or a default to a variable. Essentially it's 3-ish lines (depending on formatting) for 1 assignment. So perhaps instead we'd like to write an extension method to support a cleaner interface that will return a default if the item isn't found:

    public static class DictionaryExtensions
    {
        public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
            TKey key, TValue defaultIfNotFound)
        {
            TValue value;

            // value will be the result or the default for TValue
            if (!dict.TryGetValue(key, out value))
            {
                value = defaultIfNotFound;
            }

            return value;
        }
    }

So this creates an extension method on Dictionary<TKey, TValue> that will attempt to get a value using the given key, and will return the defaultIfNotFound as a stand-in if the key does not exist. This code compiles fine, but what if we would like to go one step further and allow callers to specify a default if not found, or accept the default for the type? Obviously, we could overload the method to take the default or not, but that would be duplicated code and a bit heavy for just specifying a default. It seems reasonable that we could set the not-found value to be either the default for the type, or the specified value. So what if we defaulted the type to null?

    public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
        TKey key, TValue defaultIfNotFound = null) // ...

No, this won't work, because only reference types (and Nullable<T>-wrapped types, due to syntactic sugar) can be assigned null. So what about calling a parameterless constructor?
    public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
        TKey key, TValue defaultIfNotFound = new TValue()) // ...

No, this won't work either, for several reasons. First, we'd expect a reference type to return null, not an "empty" instance. Secondly, not all reference types have a parameterless constructor (string, for example, does not). And finally, a constructor cannot be evaluated at compile-time, while default values can. The Solution: default(T) – returns the default value for type T Many of us know the default keyword for its use in switch statements as the default case. But it has another use as well: it can return the default value for a given type. And since it generates the same defaults that default field initialization uses, it can be determined at compile-time as well. For example:

    var x = default(int);      // x is 0
    var y = default(bool);     // y is false
    var z = default(string);   // z is null
    var t = default(TimeSpan); // t is a TimeSpan with Ticks == 0
    var n = default(int?);     // n is a Nullable<int> with HasValue == false

Notice that for numeric types the default is 0, and for reference types the default is null. In addition, for struct types, the value is a default-constructed struct – which simply means a struct where every field has its default value (hence 0 Ticks for TimeSpan, etc.). So using this, we could modify our code to this:

    public static class DictionaryExtensions
    {
        public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
            TKey key, TValue defaultIfNotFound = default(TValue))
        {
            TValue value;

            // value will be the result or the default for TValue
            if (!dict.TryGetValue(key, out value))
            {
                value = defaultIfNotFound;
            }

            return value;
        }
    }

Now, if defaultIfNotFound is unspecified, it will use default(TValue), which will be the default value for whatever value type the dictionary holds. So let's consider how we could use this:

    lookup.GetValueOrDefault(1);            // returns "Apple"
    lookup.GetValueOrDefault(5);            // returns null
    lookup.GetValueOrDefault(5, "Unknown"); // returns "Unknown"

Again, do not confuse a parameterless constructor with the default value for a type. Remember that the default value for any type is the compile-time default for any instance of that type (0 for numeric types, false for bool, null for reference types, and for a struct, a struct with all default fields). Consider the difference:

    // both zero
    int i1 = default(int);
    int i2 = new int();

    // both "zeroed" structs
    var dt1 = default(DateTime);
    var dt2 = new DateTime();

    // sb1 is null, sb2 is an "empty" string builder
    var sb1 = default(StringBuilder);
    var sb2 = new StringBuilder();

So in the above code, notice that the value types all resolve the same whether using default or parameterless construction. This is because a value type is never null (even Nullable<T>-wrapped types are never "null" in a reference sense); they will just by default contain fields with all default values. However, for reference types, the default is null and not a constructed instance. Also it should be noted that not all classes have parameterless constructors (string, for instance, doesn't have one – and doesn't need one). Summary Whenever you need to get the default value for a type, especially a generic type, consider using the default keyword.
This handy word will give you the default value for the given type at compile-time, which can then be used for initialization, optional parameters, etc. Technorati Tags: C#,CSharp,.NET,Little Wonders,default

    Read the article

  • Measuring ASP.NET and SharePoint output cache

    - by DigiMortal
    During ASP.NET output caching week on my local blog I wrote about how to measure the ASP.NET output cache. As my posting was based on real work and real-life results, I thought this posting might be interesting to you too. So here you can read what I did, how I did it, and what the results were. Introduction Caching is not effective without measuring it. As MVP Henn Sarv said in one of his sessions, you will get what you measure. And right he is. Lately I measured caching on a local Microsoft community portal to make sure that our caching strategy is good enough for the environment where this system lives. In this posting I will show you how to start measuring the cache of your web applications. Although the application measured is built on the SharePoint Server publishing infrastructure, all of these counters have the same meaning as the similar counters under pure ASP.NET applications. Measured counters I used Performance Monitor and the following performance counters (their names are similar on ASP.NET and SharePoint WCMS):
    Total number of objects added – how many objects were added to the output cache.
    Total object discards – how many objects were deleted from the output cache.
    Cache hit count – how many times requests were served from the cache.
    Cache hit ratio – the percentage of requests served from the cache.
    The first three counters are cumulative, while the last one is a ratio. You can also use other counters to measure the full effect of caching (memory, processor, disk I/O, network load, etc. before and after caching). Measuring process The measuring I describe here started from a freshly restarted web server. I measured the application over 12 hours, covering the time ranges when users are most active. The time range does not include late evening hours and night because there is nothing to measure during those hours. During measuring we performed no maintenance or administrative tasks on the server. All tasks performed were related to usual daily content management and content monitoring. Also, we had no advertisement campaigns or other promotions running at the same time. The results You can see the results in the following graphics. [Graphs: Total number of objects added; Total object discards; Cache hit count; Cache hit ratio.] You can see that adds and discards are growing at the same tempo. This is good, because the cache expires and less popular items are not kept in memory. If some content were more popular, these lines might show a bigger distance between them. Cache hit count grows faster, and this shows that more and more content is served from the cache. In the current case it shows that the cache is filled optimally, and we can do even better if we tune the caches more. The site also contains pages that are discarded when some subsite changes (a page was added/modified/deleted), and one modification may affect about four or five pages. This may also decrease the cache hit count, because during the day the site gets about 5-10 new pages. The cache hit ratio is currently extremely good. The suggested minimum is about 85%, but after some tuning and measuring I achieved 98.7% as a result. This is due to the fact that new pages are the most often requested, and after new pages are added, the older ones are requested only occasionally. So they get discarded from the cache, and only some of them return to the cache sometimes. Although this may also indicate the need for additional SEO work, the result is very good in technical terms. Conclusion Measuring the ASP.NET output cache is not a complex thing to do; you can start by measuring the performance of the cache itself.
Later you can move on and measure the caching effect on other counters such as disk I/O, network, and processor usage. What you have to achieve is an optimal cache that is not full of items requested only a couple of times per day (you can avoid this by not using overly long cache durations). After some tuning you should be able to boost the cache hit ratio to at least 85%.
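As a side note, the hit ratio counter is effectively hits divided by total lookups. A tiny sketch of the arithmetic (in TypeScript, with made-up numbers rather than the measurements above):

    // Cache hit ratio = hits / (hits + misses).
    function cacheHitRatio(hits: number, misses: number): number {
      const total = hits + misses;
      return total === 0 ? 0 : hits / total;
    }

    // e.g. 9870 hits and 130 misses -> 0.987, i.e. a 98.7% hit ratio
    console.log(cacheHitRatio(9870, 130));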

    Read the article

  • 6 Prominent Features of New GMail User Interface

    - by Gopinath
    GMail’s user interface got a big makeover today, and the new user interface is available to everyone. You can switch to it by clicking the "Switch to the new look" link available at the bottom right of GMail (if you are on IE 6 or a similarly bad browser, you will not see the option!). I switched to the new user interface as soon as I noticed the link and played with it for some time. In this post I want to share the prominent features of the all-new GMail interface. 1. All New Conversations Interface GMail's threaded conversations were a game-changing feature when Google first introduced them. For a long time we did not see many updates to the threaded conversation view. In the new GMail interface, threaded conversations sport a great new look: conversations are always visible in a horizontal fashion, as opposed to the stacked interface of the earlier version. When you open a conversation, you get a quick glance at each individual message without expanding the thread. Readability is improved a lot now. Check the image after the break. 2. Sender Profile Photos In Email Threads Did you notice the profile images of the participants in the screenshot of the conversations view above? Identifying the people in a thread is much easier. 3. Advanced Search Box Search is the heart of Google's business and their flagship technology. GMail's search interface is enhanced to let you quickly find the e-mails you need. You can also create mail filters from the search box without leaving the screen or opening a popup. 4. GMail Automatically Resizes To Fit Multiple Devices There is no doubt that this is the post-PC era, where people use tablets and big-screen smartphones more than ever. The new GMail user interface automatically resizes itself to fit the size of the screen seamlessly. 5. HD Images For Your Themes, Sourced from iStockphoto Are you bored with the minimalistic GMail interface and the few flashy themes? Here come GMail HD themes backed by stock photographs sourced from the iStockPhoto website. If you have a widescreen HD monitor, decorate your inbox with beautiful themes. 6. Resizable Labels & Chat Panels There is now a splitter between the Labels and Chat panels that lets you resize their heights as you prefer. Also, the Labels panel auto-expands its height when you mouse over it to show any hidden labels. Video – overview of new GMail features This article titled, 6 Prominent Features of New GMail User Interface, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Detecting Process Shutdown/Startup Events through ActivationAgents

    - by Ramkumar Menon
    @10g - This post is motivated by one of my close friends and colleagues, who wanted to proactively know when a BPEL process shuts down or re-activates. This typically happens when you have a BPEL process with an inbound polling adapter and the adapter loses connectivity to the source system. Or whatever causes it. One valuable suggestion came in from one of my colleagues: he suggested I write my own ActivationAgent to do the job. Well, it really worked. Here is a sample ActivationAgent that you can use. There are a few methods you need to override from BaseActivationAgent, and you are on your way to receiving notifications (or what not) whenever the shutdown/startup events occur. In the example below, I am retrieving the emailAddress property [that is specified in your bpel.xml activationAgent section] and using it to send out an email notification on activation agent initialization. You could choose to do different things. But the bottom line is that you can use the API below to access the very same properties that you specify in bpel.xml.

    package com.adapter.custom.activation;

    import com.collaxa.cube.activation.BaseActivationAgent;
    import com.collaxa.cube.engine.ICubeContext;
    import com.oracle.bpel.client.BPELProcessId;
    import java.util.Date;
    import java.util.Properties;

    public class LifecycleManagerActivationAgent extends BaseActivationAgent {

        public BPELProcessId getBPELProcessId() {
            return super.getBPELProcessId();
        }

        private void handleInit() throws Exception {
            // Write initialization code here
            System.err.println("Entered initialization code....");
            // e.g. read the emailAddress property configured in bpel.xml
            String emailAddress = getActivationAgentDescriptor().getPropertyValue("emailAddress");
            // send an email (sendEmail is a user-supplied helper, not shown here)
            sendEmail(emailAddress);
        }

        private void handleLoad() throws Exception {
            // Write load code here
            System.err.println("Entered load code....");
        }

        private void handleUnload() throws Exception {
            // Write unload code here
            System.err.println("Entered unload code....");
        }

        private void handleUninit() throws Exception {
            // Write uninitialization code here
            System.err.println("Entered uninitialization code....");
        }

        public void init(ICubeContext icubecontext) throws Exception {
            super.init(icubecontext);
            System.err.println("Initializing LifecycleManager Activation Agent .....");
            handleInit();
        }

        public void unload(ICubeContext icubecontext) throws Exception {
            super.unload(icubecontext);
            System.err.println("Unloading LifecycleManager Activation Agent .....");
            handleUnload();
        }

        public void uninit(ICubeContext icubecontext) throws Exception {
            super.uninit(icubecontext);
            System.err.println("Uninitializing LifecycleManager Activation Agent .....");
            handleUninit();
        }

        public String getName() {
            return "Lifecyclemanageractivationagent";
        }

        public void onStateChanged(int i, ICubeContext iCubeContext) {
        }

        public void onLifeCycleChanged(int i, ICubeContext iCubeContext) {
        }

        public void onUndeployed(ICubeContext iCubeContext) {
        }

        public void onServerShutdown() {
        }
    }

Once you compile this code, generate a jar file and ensure you add it to the server startup classpath. The library is ready for use after the server restarts. To use this activation agent, add an additional activationAgent entry in the bpel.xml of the BPEL process that you wish to monitor. After you deploy the process, the ActivationAgent object will be called back whenever the events behind the overridden methods [init(), load(), unload(), uninit()] are raised, and your custom code is then executed. Here is a sample bpel.xml illustrating the activationAgent definition and property definition.
<?xml version="1.0" encoding="UTF-8"? <BPELSuitcase timestamp="1291943469921" revision="1.0" <BPELProcess wsdlPort="{http://xmlns.oracle.com/BPELTest}BPELTestPort" src="BPELTest.bpel" wsdlService="{http://xmlns.oracle.com/BPELTest}BPELTest" id="BPELTest" <partnerLinkBindings <partnerLinkBinding name="client" <property name="wsdlLocation"BPELTest.wsdl</property </partnerLinkBinding <partnerLinkBinding name="test" <property name="wsdlLocation"test.wsdl</property </partnerLinkBinding </partnerLinkBindings <activationAgents <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="test" <property name="portType"Read_ptt</property </activationAgent <activationAgent className="com.oracle.bpel.activation.LifecycleManagerActivationAgent" partnerLink="test" <property name="emailAddress"[email protected]</property </activationAgent </activationAgents </BPELProcess </BPELSuitcase em

    Read the article

  • Introducing Oracle System Assistant

    - by B.Koch
    by Josh Rosen One of the challenges with today's servers is getting the server up and running and understanding what all of the steps are once you plug the server in for the first time. So many different pieces come into play: installing drivers, updating firmware, configuring RAID, and provisioning the operating system. All of these steps must be done before you can even start using the server. Finding the latest firmware and drivers, making sure you have the right versions, and knowing that all the different software and firmware components work together properly can be a real challenge. If not done correctly, such as if you separately download disk firmware or controller firmware that doesn't match the existing OS drivers, you could experience bugs, performance problems, and incompatibilities. Gone are the days of having to locate the tools and drivers media that shipped with the server, only to find out that newer versions of software and firmware are available on the web. Oracle has solved these challenges in the new X3-2 family of servers by introducing Oracle System Assistant. Oracle System Assistant is an innovative tool that is built in to every new x86 server. It provides step-by-step assistance with configuring the server, updating firmware and drivers, and provisioning the operating system. Once you have completed all of the steps in the Oracle System Assistant tool, the server is ready to use. Oracle System Assistant was designed to be easy and straightforward. Starting it is as simple as pressing F9 when the server is booting. You'll need a keyboard, monitor, and mouse, or you can use the remote console feature of Oracle ILOM (Integrated Lights Out Manager) to access a virtual KVM on the server from any machine. From there Oracle System Assistant will walk you through each of the steps necessary to set up your server. After configuring the network settings for Oracle System Assistant, the next step is to check for any new software or firmware for the server. Oracle System Assistant connects back to Oracle using your My Oracle Support account and downloads any updates that were made available to you for this specific server. This is where you really start to see the innovation that went into Oracle System Assistant. Firmware for Oracle ILOM and BIOS, operating system drivers, and other system firmware (including for option cards and disk drives) come as a single bundle, downloaded as a single unit, that has been engineered and tested to work together by Oracle. Oracle System Assistant figures out the right combination for your server, so you don't have to. Now that the server has the latest firmware, Oracle System Assistant will next walk you through configuring the hardware. From Oracle System Assistant, you can configure many Oracle ILOM settings, including the network settings and initial user accounts. This ensures that ILOM is accessible and ready to use. Oracle System Assistant is where all parts of the server come together. In addition to communicating with Oracle ILOM and interacting with BIOS, Oracle System Assistant understands and can configure the storage subsystem. Before installing the operating system, Oracle System Assistant can detect the storage configuration and configure RAID for all disks in the system. At this point, the server is ready to be provisioned with the host operating system. You can use Oracle System Assistant to provision a supported OS, including Oracle Linux, Oracle VM, RHEL, SuSe Linux, and Windows.
    And by using Oracle System Assistant, you can be sure that the proper OS drivers are installed for each of the installed hardware components. With Oracle System Assistant, initial setup of the server has never been easier. Innovating around problems like these and finding solutions that make our servers easier to manage reduces IT costs and makes managing servers simpler. I think with Oracle System Assistant we have done just that.

    Josh Rosen is a Principal Product Manager at Oracle and previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products, ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • How to future-proof my touch-enabled web application?

    - by Rice Flour Cookies
    I recently went out and purchased a touch-screen monitor with the intention of learning how to program touch-enabled web applications. I had reviewed the MDN documentation about touch events, as well as the W3C specification.

    To get started, I wrote a very short test page with two event handlers: one for the mousedown event and one for the touchstart event. I fired up the web page in IE and touched the document, and found that only the mousedown event fired. I saw the same behavior with Firefox, only to find out later that Firefox can be set to enable the touchstart event using about:config. When touch events are enabled, the touchstart event fires, but not mousedown. Chrome was even stranger: it fired both events when I touched the document, touchstart and mousedown, in that order. Only on my Android phone does it appear to be the case that only the touchstart event fires when I touch the document.

    I did a Google search and ended up on two interesting pages. First, I found the page on CanIUse for touch events: http://caniuse.com/#feat=touch. Can I Use clearly indicates that IE does not support touch events as of this writing, and Firefox only supports touch events if they are manually enabled. Furthermore, all the browsers I mentioned treat a touch in a completely different way. It boils down to this:

    - IE: simulated mouse click
    - Firefox with touch disabled: simulated mouse click
    - Firefox with touch enabled: touch event
    - Chrome: touch event and simulated mouse click
    - Android: touch event

    What is more frustrating is that Google also found a Microsoft page called RethinkIE. RethinkIE brags about touch support in IE; as a matter of fact, one of their slogans is "Touch the Web". It links to a number of touch-based applications. I followed some of these links, and as best I can tell, it's just like CanIUse described: no proper touch support, just simulated mouse clicks.

    The MDN (https://developer.mozilla.org/en-US/docs/Web/API/Touch) and W3C (http://www.w3.org/TR/touch-events/) documentation describe a far richer interface; an interface that doesn't just simulate mouse clicks, but keeps track of multiple touches at once, the contact area, rotation, and force of each touch, and unique identifiers for each touch so that they can be tracked individually. I don't see how simulated mouse clicks can ever match the functionality described above, which, once again, is part of the W3C specification, although it is listed as "non-normative", meaning that a browser can claim to be standards-compliant without implementing it. (Why bother making it part of the standard, then?)

    What motivated my research is that I've written an HTML5 application that doesn't work on Android because Android doesn't fire mouse events. I'm now afraid to try to implement touch for my application because the browsers all behave so differently. I imagine that at some time in the future the browsers might start handling touch similarly, but how can I tell how they might be handled in the future, short of writing code to handle the behavior of each individual browser? Is it possible to write code today that will work with touch-enabled browsers for years to come? If so, how?
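
    One defensive pattern that covers every behavior in the list above is to register handlers for both event families and suppress the simulated mouse event that some browsers synthesize after a touch. Below is a minimal sketch in plain JavaScript; handlePress and the 500 ms window are hypothetical stand-ins, and the preventDefault() call relies on the touch events spec's "should not dispatch mouse events" language, which is a recommendation rather than a guarantee:

        // Hypothetical application logic for a single "press" at (x, y).
        function handlePress(x, y) {
            console.log('press at ' + x + ',' + y);
        }

        var lastTouchTime = 0;

        document.addEventListener('touchstart', function (e) {
            lastTouchTime = Date.now();
            var t = e.changedTouches[0];  // first touch point of this event
            handlePress(t.clientX, t.clientY);
            // Per the W3C touch events spec, preventDefault() here should
            // suppress the follow-up simulated mouse events (it also
            // suppresses scrolling and click for this touch). The guard in
            // the mousedown handler covers browsers that fire them anyway.
            e.preventDefault();
        }, false);

        document.addEventListener('mousedown', function (e) {
            // Ignore mousedown arriving shortly after a touch; it is almost
            // certainly the browser's simulated mouse click. The 500 ms
            // threshold is an arbitrary choice for this sketch.
            if (Date.now() - lastTouchTime < 500) {
                return;
            }
            handlePress(e.clientX, e.clientY);
        }, false);

    The benefit of this shape is that the real interaction logic lives in one code path, so if browsers later converge on a unified model, only the two thin listeners need replacing (for instance, with a single pointerdown handler if the W3C Pointer Events proposal gains traction).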

    Read the article

  • E-Business Suite Plug-in 12.1.0.1 for Enterprise Manager 12c Now Available

    - by Steven Chan (Oracle Development)
    Oracle E-Business Suite Plug-in 12.1.0.1.0 is now available for use with Oracle Enterprise Manager 12c. Oracle E-Business Suite Plug-in 12.1.0.1 is an integral part of Oracle Enterprise Manager 12c Application Management Suite for Oracle E-Business Suite. This latest plug-in extends EM 12c Cloud Control with E-Business Suite specific system management capabilities and features enhanced change management support.

    The Oracle Enterprise Manager 12c Application Management Suite for Oracle E-Business Suite includes:

    - Oracle E-Business Suite Plug-in 12.1.0.1, which combines functionality that was available in the previously-standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite
    - Oracle Real User Experience Insight
    - Oracle Configuration & Compliance capabilities

    Features that were previously available in the standalone management packs are now packaged in the Oracle E-Business Suite Plug-in, which is certified with Oracle Enterprise Manager 12c Cloud Control:

    - Functionality previously available in Application Management Pack (AMP) is now classified as "System Management for Oracle E-Business Suite" within the plug-in.
    - Functionality previously available in Application Change Management Pack (ACMP) is now classified as "Change Management for Oracle E-Business Suite" within the plug-in.
    - The Application Configuration Console and the Configuration Change Console are now native components of Oracle Enterprise Manager 12c.

    System Management Enhancements

    General
    - Oracle Enterprise Manager 12c Base Platform uptake: All components of the management suite are certified with Oracle Enterprise Manager 12c Cloud Control.

    Security
    - Privilege Delegation: The Oracle E-Business Suite Plug-in now extends Enterprise Manager's privilege delegation through Sudo and PowerBroker to Oracle E-Business Suite Plug-in host targets.
    - Privileges and Roles for Managing Oracle E-Business Suite: This release includes new ready-to-use target and resource privileges to monitor, manage, and perform Change Management functionality.

    Cloning
    - Named Credentials Uptake in Cloning: The Clone module transactions now let users leverage the Named Credentials feature introduced in Enterprise Manager 12c, thereby passing all the benefits of Named Credentials in Enterprise Manager on to Oracle E-Business Suite Plug-in users.
    - Smart Clone improvements: In addition to the existing 11i support available in previous releases, the new Oracle E-Business Suite Plug-in widens the coverage, supporting Oracle E-Business Suite releases 12.0.x and 12.1.x. The new and improved Smart Clone UI supports adding "pre and post" custom steps to a copy of the ready-to-use cloning deployment procedure. A user can now pass parameters to the custom steps through the interview screen of the UI, as well as pass ready-to-use parameters to the custom steps. Additional configuration enhancements are included for configuring RAC target databases, such as the ability to customize listener names and the option to configure with Virtual IP or Scan IP.

    Change Management Enhancements

    Customization Manager
    - Support for longer file names: Customization Manager now handles file names up to thirty characters in length.

    Patch Manager
    - Queuing of Patch Manager Runs: This feature allows patch runs to queue up if Patch Manager detects that a specific target is in a blackout state.
    - Multi-node system patching: The patch run interview has been enhanced to allow the Enterprise Manager Administrator to choose which nodes adpatch will run on.
    - New AD Administration Options: The patch run interview has been extended to include the AD Administration Options "Relink Application Programs", "Generate Product Jars Files", "Generate Report Files", and "Generate Form Files".

    Downloads

    Fresh install (for new customers or existing customers wishing to perform a fresh install):
    - Enterprise Manager Store (within Enterprise Manager 12c)
    - Oracle Software Delivery Cloud

    Upgrades (for existing customers wishing to upgrade their AMP 4.0 or AMP 3.1 installations):
    - Oracle Technology Network

    Getting Started with Oracle E-Business Suite Plug-In, Release 12.1.0.1 (Note 1434392.1)

    Prerequisites

    - Enterprise Manager Cloud Control 12c
    - One or more of the following Oracle E-Business Suite releases:
      - Release 11.5.10 CU2 with 11i.ATG_PF.H.RUP6 or higher
      - Release 12.0.4 with R12.ATG_PF.A.delta.6
      - Release 12.1 with R12.ATG_PF.B.delta.3

    Platforms and OS release certification information is available from My Oracle Support via the Certification page. Search for "Oracle Application Management Pack for Oracle E-Business Suite and release 12.1.0.1.0."

    Related Articles
    - Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1)

    Read the article

  • Welcome 2011

    - by WeigeltRo
    Things that happened in 2010

    - MIX10 was absolutely fantastic. Read my report of MIX10 to see why.
    - The dotnet Cologne 2010, the community conference organized by the .NET user group Köln and my own group Bonn-to-Code.Net, became an even bigger success than I dared to dream of.
    - There was a huge discrepancy between the efforts by Microsoft to support .NET user groups in organizing public live streaming events of the PDC keynote (the dotnet Cologne team joined forces with netug Niederrhein to organize the PDCologne) and the actual content of the keynote. The reaction of the audience at our event was "meh", and even worse, I seriously doubt we'll ever get that number of people to such an event (which on top of that suffered from technical difficulties beyond our control).
    - What definitely would have deserved the public live streaming event treatment was the Silverlight Firestarter (aka "Silverlight Damage Control") event. And maybe we would have thought about organizing something if it weren't for the "burned earth" left by the PDC keynote. Anyway, the stuff shown at the Firestarter keynote was the topic of conversations among colleagues days later ("did you see that?" "oh yeah, that was seriously cool").

    Things that I have learned/observed/noticed in 2010

    - In the long run, there's a huge difference between "it works pretty well" and "it just works and I never have to think about it". I had to get rid of my USB graphics adapter powering the third monitor (read about it in this blog post). Various small issues (desktop icons sometimes moving their positions after a reboot for no apparent reason, at least one game I couldn't get to run at all, all three monitors sometimes simply refusing to wake up after standby) finally made me buy a PCIe 1x graphics adapter. If you're interested: the combination of an NVIDIA GTX 460 and a GT 220 has been running in "don't make me think" mode for a couple of months now.
    - PowerPoint 2010 is a seriously cool piece of software. Not only the new hardware-accelerated effects, but also features like built-in background removal and picture processing (which in many cases are simply "good enough" and save a lot of time) or the smart guides.
    - Outlook 2010 crashes on me a lot. I haven't been successful in reproducing these crashes; they just happen every couple of days on different occasions (only thing in common: I clicked something in the main window – yeah, very helpful observation).
    - Visual Studio 2010 reminds me of Visual Studio 2005 before SP1, which is actually not a good thing to say about a piece of software. I think it's telling that Microsoft's message regarding the beta of SP1 has been different from earlier service pack betas (promising an upgrade path from the beta to the RTM sounds to me like "please, please use it NOW!").
    - I have a love/hate relationship with ReSharper. I don't want to develop without it, but at the same time I can't fail to notice that ReSharper is taking a heavy toll in terms of performance and sometimes stability.

    Things I'm looking forward to in 2011

    - Obviously, the dotnet Cologne 2011. We have already been able to score some big-name sponsors (Microsoft, Intel), but we're still looking for more. And be assured that we'll make sure our partners get the most out of their contribution, regardless of how big or small.
    - MIX11, period.
    - Silverlight 5 is going to be great. The only thing I'm a bit nervous about is that I still haven't read anything official on whether the next C# version's async/await will be in it. Leaving that out would be really stupid considering the end-of-2011 release of SL5 (moving the next release way into the future).

    Read the article
