Search Results

Search found 6945 results on 278 pages for 'azure use cases'.


  • How to Disable the Animations on the Windows 8 Start Screen

    - by Usman
    Who doesn't love animations? They make everything look so cool. But in some cases animations are a distraction, and the same is true for Windows 8's Start screen (the "Modern UI"). Fortunately, there's a very simple way to disable all those animations. Keep reading to find out how it's done. The animations are especially noticeable when you switch from the good ol' peaceful desktop to the Start screen by pressing the Win key. I don't know about you, but it feels like I'm getting dizzy watching all those crazy animations over and over again. People have found ways to enhance the Start screen animations, add delays to various elements and stuff like that. But we're going the other way: disabling the animations completely. To do so, log in, and when the Start screen appears, type "Computer" (it will pop up in the search results before you've even finished typing).

    Read the article

  • How do I manage the technical debate over WCF vs. Web API?

    - by Saeed Neamati
    I'm managing a team of about 15 developers now, and we are stuck on a technology choice, with the team split into two opposing camps debating the usage of WCF vs. Web API. Team A, which supports using Web API, brings forward these reasons:
    - Web API is just the modern way of writing services (Wikipedia)
    - WCF is overhead for HTTP; it's a solution for TCP, named pipes and other protocols
    - WCF models are not POCO, because of the [DataContract] and [DataMember] attributes
    - SOAP is not as readable and handy as JSON
    - SOAP is network overhead compared to JSON (transport over HTTP)
    - No method overloading
    Team B, which supports using WCF, says:
    - WCF supports multiple protocols (via configuration)
    - WCF supports distributed transactions
    - Many good examples and success stories exist for WCF (while Web API is still young)
    - Duplex is excellent for two-way communication
    This debate is continuing, and I don't know what to do now. Personally, I think that we should use each tool in its right place. In other words, we'd better use Web API if we want to expose a service over HTTP, but use WCF when it comes to TCP and duplex. Searching the Internet doesn't get us to a solid result. Many posts support WCF, but we also find people complaining about it. I know that the nature of this question might sound arguable, but we need some good hints to decide. We're stuck at a point where choosing a technology by chance might make us regret it later. We want to choose with open eyes. Our usage would be mostly for the web, and we would expose our services over HTTP. In some cases (say 5 to 10 percent) we might need distributed transactions, though. What should I do now? How do I manage this debate in a constructive way?
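    For readers who are less familiar with the two stacks, here is a minimal, hypothetical sketch of the same operation exposed both ways; the type, route and service names are made up for illustration and are not from the question:

        // ASP.NET Web API (illustrative): a plain controller returning JSON over HTTP.
        public class OrdersController : System.Web.Http.ApiController
        {
            // GET api/orders/5
            public Order Get(int id)
            {
                return new Order { Id = id };
            }
        }

        // WCF (illustrative): an explicit contract; the transport (HTTP, TCP, named pipes) is chosen in configuration.
        [System.ServiceModel.ServiceContract]
        public interface IOrderService
        {
            [System.ServiceModel.OperationContract]
            Order GetOrder(int id);
        }

        [System.Runtime.Serialization.DataContract]
        public class Order
        {
            [System.Runtime.Serialization.DataMember]
            public int Id { get; set; }
        }

    The contrast illustrates both camps' points: the Web API version is less ceremony for HTTP plus JSON, while the WCF version keeps the transport decision in configuration and covers the distributed-transaction and duplex scenarios Team B cares about.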

    Read the article

  • Is it reasonable to expect knowing the whole stack bottom up?

    - by Vaibhav Garg
    I am a senior developer/architect/product manager for embedded systems. The systems that I have had experience with have typically been small to medium size codebases - typically close to 25-30K LOC in C, using 8-, 16- and 32-bit low-end microcontrollers. The systems have been entirely bootstrapped by our team - meaning everything from the start-up code to the end application code has either been written by the team or, at the very least, is thoroughly understood and maintained by us. Now, if we were to start developing more complex systems with complex peripherals, such as USB OTG et al. (think low-end cell phones), there are libraries and stacks available commercially and from chip vendors that reduce the task to just calling the right APIs and being able to use those peripherals. Now, from a habit point of view, this does not give me and the team a comfortable feeling, not being able to comprehend the entire code tree, with virtual black boxes at the lower layers. Is it reasonable to devote, and reserve, time for getting into the details of how the APIs are implemented, assuming that this would also entail getting into the details of the relevant standards (again, USB as an example)? Or, alternatively, should a thorough understanding of the top-level usage of the APIs be sufficient? This of course assumes that the source code to all libraries is available, which it is, in almost all cases. Edit: in partial response to @Abhi Beckert, the documentation is refreshingly comprehensive and meticulously maintained, as far as I know and have been able to judge. I have not had long experience with it.

    Read the article

  • Live Oracle AppAdvantage Webcast in APAC: Register Today

    - by Tanu Sood
    How Oracle Applications Customers can Extend the Value of their Investments
    Oracle AppAdvantage is an exciting new initiative for Oracle enterprise application customers, including E-Business Suite, PeopleSoft, JD Edwards, and Siebel. Oracle AppAdvantage provides strategies to help applications customers simplify, differentiate and innovate their investments through a pace-layered architecture that can adjust with business requirements. Whether your organization is extending your applications to mobile devices, building a customer self-service portal, taking applications to the cloud, integrating applications with your other business-critical applications or securely extending them to serve your specific needs, you can take the extension or customization work out of the applications and seamlessly extend with Oracle Fusion Middleware technologies as required. This webcast will discuss:
    - Strategies to help applications customers simplify, differentiate and innovate their investments through a pace-layered architecture
    - How to get started, plus implementation use cases with customer examples
    Register today for this webcast on November 6. Can't wait until the live webcast? You can ask the speaker a question in advance. If you are facing problems with registration or would like further information please email us at [email protected]. For any questions on Oracle, our events and products please call or send us an email. Date: Wednesday, 6th November 2013. Time: Mumbai 10:30 a.m. (GMT +5:30), Singapore 1:00 p.m. (GMT +8:00), Sydney 4:00 p.m. (GMT +11:00). The duration of this webcast is 60 minutes.

    Read the article

  • Fix corrupt NTFS partition without Windows

    - by Capt.Nemo
    My NTFS partition has gotten corrupt somehow (it's a relic from the days when I had Windows installed). I'm putting the debug output of fdisk and blkid here. At the same time, any OS is unable to mount my root partition, which is located next to my NTFS partition. I'm not sure if this has anything to do with it, though. I get the following error while trying to mount my root partition (sda5): mount: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error. In some cases useful info is found in syslog - try dmesg | tail or so.
        ubuntu@ubuntu:~$ dmesg | tail
        [ 1019.726530] Descriptor sense data with sense descriptors (in hex):
        [ 1019.726533] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
        [ 1019.726551] 1a 3e ed 92
        [ 1019.726558] sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        [ 1019.726568] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 1a 3e ed 40 00 01 00 00
        [ 1019.726584] end_request: I/O error, dev sda, sector 440331666
        [ 1019.726602] JBD: Failed to read block at offset 462
        [ 1019.726609] ata1: EH complete
        [ 1019.726612] JBD: recovery failed
        [ 1019.726617] EXT4-fs (sda5): error loading journal
    When I open GParted (using a live CD), I get an exclamation mark next to my NTFS drive. Is there a way to run chkdsk without using Windows? My attempt to run fsck results in the following:
        ubuntu@ubuntu:~$ sudo fsck /dev/sda
        fsck from util-linux-ng 2.17.2
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sda
        The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device>
    Update: I was able to fix the NTFS partition by running chkdsk off HBCD, but it seems that the superblock problem still remains. Update 2: Fixed the superblock issue using e2fsck -c /dev/sda5.
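    To answer the chkdsk question directly, here is a hedged sketch of the usual Linux-side options (it assumes the ntfs-3g/ntfsprogs and e2fsprogs tools available on the Ubuntu live CD; double-check the device names against blkid before running anything, as the partition numbers below are taken loosely from the question):
        # NTFS: ntfsfix does a basic consistency check/repair and schedules a
        # Windows chkdsk for the next boot; it is not a full chkdsk replacement.
        sudo ntfsfix /dev/sda2        # replace sda2 with the actual NTFS partition

        # ext4 root partition: list backup superblocks, then point e2fsck at one.
        sudo mke2fs -n /dev/sda5      # -n only prints what would be done, including backup superblock locations
        sudo e2fsck -b 32768 /dev/sda5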

    Read the article

  • My first encounter with SmartAssembly

    - by Peter Larsson
    Let me start by writing that I am a supreme VB6 programmer, but I have very little experience with VB.Net, so I think I still need some more time learning SmartAssembly. SmartAssembly makes obfuscating and merging dll files a piece of cake! With its simple, straightforward and clean GUI I did make my tests work. With other obfuscators like Xenocode, Salamander etc., which let you (and in some cases force you to) control more advanced settings, you really have to know what you are doing, especially when it comes to protecting code that uses external dependencies. My most annoying experience is that if you start checking radio buttons and activating different obfuscating features in SmartAssembly, you will end up breaking your working code as well, if you, like me, are not that experienced and don't know what you're doing. SmartAssembly has some troubleshooting information on its website which explains why the application will fail in some scenarios. So why not extend these checks into some deeper analysis stage on the dlls? By doing that I think more people could get fully functional dlls out of the box instead of trying different settings and then testing the protected dll to see if it's working or not. //Peter

    Read the article

  • Regular-Expressions.info Thoroughly Updated

    - by Jan Goyvaerts
    RegexBuddy 4 was released earlier this month. This is a major upgrade that significantly improves RegexBuddy's ability to emulate the features and deficiencies of the latest versions of all the popular regex flavors as well as many past versions of these flavors. Along with that, the Regular-Expressions.info website has been thoroughly updated with new content. Both the tutorial and reference sections have been significantly expanded to cover all the features of the latest regular expression flavors. There are also new tutorial and reference subsections that explain the syntax used by replacement strings when searching and replacing with regular expressions. I'm also reviving this blog. In the coming weeks you can expect blog posts that highlight the new topics on the Regular-Expressions.info website. Later on I'll blog about more intricate regex-related issues that RegexBuddy 4 emulates but that the website doesn't talk about or only mentions in passing. RegexBuddy 4.0.0 is aware of 574 different aspects (syntactic and behavioral differences) of 94 regular expression flavors. These numbers are sure to grow with future 4.x.x releases. While RegexBuddy juggles it all with ease, that's far too much detail to cover in a tutorial or reference that any person would want to read. So the tutorial and reference cover the important features and behaviors, while the blog will serve the corner cases as tidbits. Subscribe to the Regex Guru RSS Feed if you don't want to miss any articles.

    Read the article

  • Edge Detection on Screen

    - by user2056745
    I have an edge collision problem with a simple game that I am developing. It's about throwing a coin across the screen. I am using the code below to detect edge collisions so I can make the coin bounce off the edges of the screen. Everything works as I want except one case: when the coin hits the left edge and then goes to the right edge, the system doesn't detect the collision. The rest of the cases work perfectly, like hitting the right edge first and then the left edge. Can someone suggest a solution for it?
        public void onMove(float dx, float dy) {
            coinX += dx;
            coinY += dy;
            if (coinX > rightBorder) {
                coinX = ((rightBorder - coinX) / 3) + rightBorder;
            }
            if (coinX < leftBorder) {
                coinX = -(coinX) / 3;
            }
            if (coinY > bottomBorder) {
                coinY = ((bottomBorder - coinY) / 3) + bottomBorder;
            }
            invalidate();
        }
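    One thing that stands out in the code above is that the two horizontal cases are not symmetric: the right-edge branch reflects the coin around rightBorder, while the left-edge branch reflects it around 0 and ignores leftBorder, which only coincides with a bounce off the left border when leftBorder is 0. Whether that is the actual cause of the missed collision is a guess, but a symmetric helper is easy to try; a hedged Java sketch (variable names follow the question, and topBorder is assumed to exist even though the original code never checks it):
        // Reflect a coordinate back inside [min, max], damping the overshoot by 3
        // like the original code, and keep reflecting until the value is in range.
        private float reflect(float value, float min, float max) {
            while (value < min || value > max) {
                if (value > max) {
                    value = max - (value - max) / 3; // bounce off the far edge
                } else {
                    value = min + (min - value) / 3; // bounce off the near edge
                }
            }
            return value;
        }

        public void onMove(float dx, float dy) {
            coinX = reflect(coinX + dx, leftBorder, rightBorder);
            coinY = reflect(coinY + dy, topBorder, bottomBorder);
            invalidate();
        }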

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called Standard Environment. This allows you to set up a full Build Deploy Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine, TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps, so I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.
    High Level Steps:
    1. Install and configure Visual Studio 2012 Test Controller on target server
    2. Create Standard Environment
    3. Create Test Plan with Test Case
    4. Run Test Case
    5. Create Coded UI Test from Test Case
    6. Associate Coded UI Test with Test Case
    7. Create Build Definition using LabDefaultTemplate
    1. Install and Configure Visual Studio 2012 Test Controller on Target Server
    First of all, note that you do not have to have the Test Controller running on the target server. It can be running on another server, as long as the Test Agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server, etc.), the test controller can be installed either on one of those machines or on a dedicated machine. To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration will be done. You might get some warnings at the end that might or might not cause a problem later. Be sure to read them carefully. For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.
    2. Create Standard Environment
    Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine we will create a Standard Environment. Open MTM and go to Lab Center. Click New to create a new environment. Enter a name for the environment. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case). On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com), and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server.
    You also need to supply an account that is a local administrator on the target server. This is needed in order to automatically install a test agent later on the machine. On the next page, you can add tags to the machine. This is not needed in this scenario, so go to the next page. Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample). If you can't see it, you might have selected a different TFS project collection. Press Next twice and then Verify to verify all the settings, then press Finish. This will now create and prepare the environment, which means that it will remotely install a test agent on the machine. As part of this installation, the remote server will be restarted.
    3-5. Create Test Plan, Run Test Case, Create Coded UI Test
    I will not cover steps 3-5 here; there is plenty of information on how you create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests.
    6. Associate Coded UI Test with Test Case
    OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your coded UI test is already automated, but the meaning of the term here is that you link your coded UI test to an existing Test Case, thereby making the Test Case automated. And the test case should be part of the test suite that we will run during the BDT. Open the solution that contains the coded UI test method. Open the Test Case work item that you want to automate. Go to the Associated Automation tab and click on the "…" button. Select the coded UI test that corresponds to the test case. Press OK and then save the test case. For more information about associating an automated test case with a test case, see How to: Associate an Automated Test with a Test Case.
    7. Create Build Definition using LabDefaultTemplate
    Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to remote deploy your solution. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application. Go to the Builds hub in Team Explorer and select New Build Definition. Give the build definition a meaningful name; here I called it MyApplication.Deploy. Set the trigger to Manual. Define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build, but TFS doesn't allow you to save a build definition without adding at least one mapping. On Build Defaults, select the build controller.
    Since this build actually won't produce any output, you can select the "This build does not copy output files to a drop folder" option. On the Process tab, select the LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the … button on the Lab Process Settings property. First, select the environment that you created before. Then select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it will basically just run through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition. On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If you, for example, have databases, you can use sqlpackage.exe to deploy the database. If you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server. Then you would just execute the deployment batch file here in one single step. The workflow defines some variables that are useful when running the deployments. These variables are:
    - $(BuildLocation): the full path to where your build files are located
    - $(InternalComputerName_<VM Name>): the computer name for a virtual machine in an SCVMM environment
    - $(ComputerName_<VM Name>): the fully qualified domain name of the virtual machine
    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission in this drop location. You can find more information here on Building your Deployment Scripts. On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan. In here I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests. We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary will show the deployment and test steps that were run.
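    As a concrete illustration of the single-deployment-batch-file advice above, here is a hedged sketch of what such a script might look like. The file and database names are hypothetical; the only details taken from the post are that a Web Deploy command file (*.deploy.cmd) is produced by the target build and that $(BuildLocation) points at its drop folder, so the Deploy tab entry would simply be $(BuildLocation)\deploy.cmd.
        @echo off
        rem deploy.cmd - single entry point for the whole deployment (hypothetical layout)
        rem %~dp0 resolves to the folder this script runs from, so it works from the drop folder or locally.

        rem 1. Deploy the web site; /Y makes the generated Web Deploy command file perform the deployment.
        call "%~dp0_PublishedWebsites\MyApplication.Web_Package\MyApplication.Web.deploy.cmd" /Y
        if errorlevel 1 exit /b 1

        rem 2. Deploy the database from a DAC package (names are made up for illustration).
        sqlpackage.exe /Action:Publish /SourceFile:"%~dp0MyApplication.Database.dacpac" /TargetServerName:localhost /TargetDatabaseName:MyApplication
        exit /b %errorlevel%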

    Read the article

  • "wrong fs type, bad option, bad superblock" error while mounting FAT Drives

    - by cshubhamrao
    I am unable to mount any FAT32 or FAT16 formatted USB disks under Ubuntu 13.10. The thing to note here is that it happens only with FAT-formatted disks; NTFS and ext formatted external USB disks work well (I tried formatting the same disk with ext4 and it worked). Mounting via Nautilus fails, and mounting from the terminal gives this error:
        root@shubham-pc:~# mount -t vfat /dev/sdc1 /media/shubham/n
        mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so
    As suggested by the error, output from dmesg | tail:
        root@shubham-pc:~# dmesg | tail
        [ 3545.482598] scsi8 : usb-storage 1-1:1.0
        [ 3546.481530] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.26 PQ: 0 ANSI: 5
        [ 3546.482373] sd 8:0:0:0: Attached scsi generic sg3 type 0
        [ 3546.483758] sd 8:0:0:0: [sdc] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
        [ 3546.485254] sd 8:0:0:0: [sdc] Write Protect is off
        [ 3546.485262] sd 8:0:0:0: [sdc] Mode Sense: 43 00 00 00
        [ 3546.488314] sd 8:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
        [ 3546.499820] sdc: sdc1
        [ 3546.503388] sd 8:0:0:0: [sdc] Attached SCSI removable disk
        [ 3547.273396] FAT-fs (sdc1): IO charset iso8859-1 not found
    Output from fsck.vfat:
        root@shubham-pc:~# fsck.vfat /dev/sdc1
        dosfsck 3.0.16, 01 Mar 2013, FAT32, LFN
        /dev/sdc1: 1 files, 1/1949978 clusters
    All normal. Tried re-creating the whole partition table and then formatting as FAT32, but to no avail, so the possibility of a corrupted drive is ruled out. Tried the same with around 4 disks or so and they all behave the same way.
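    The key hint is the last dmesg line, "FAT-fs (sdc1): IO charset iso8859-1 not found", which points at a missing or unloaded NLS kernel module rather than a broken disk. A hedged sketch of two things worth trying (the module and mount option names are standard, but whether this is the actual root cause on this 13.10 install is an assumption):
        # Load the NLS module the vfat driver is asking for, then retry the mount.
        sudo modprobe nls_iso8859-1
        sudo mount -t vfat /dev/sdc1 /media/shubham/n

        # Alternatively, side-step iso8859-1 by requesting a different IO charset.
        sudo mount -t vfat -o iocharset=utf8 /dev/sdc1 /media/shubham/n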

    Read the article

  • Building gstreamer_ndk_bundle problems

    - by Cipi
    I'm trying to build gstreamer_ndk_bundle under Ubuntu 12.04 and I'm failing miserably! I have installed all "glib-dev" packages (packages that have glib and dev in their name), and I have also tried to compile/install glib 2.33.1 (the latest) from source, but I always get this error:
        /home/marko/gstreamer_ndk_bundle/jni/../glib/gobject/gmarshal.c:149: undefined reference to `g_value_get_schar'
        collect2: ld returned 1 exit status
        make: *** [/home/marko/gstreamer_ndk_bundle/obj/local/armeabi/libgobject-2.0.so] Error 1
    This means that the glib source doesn't have the definition for g_value_get_schar, and since that function was introduced in glib somewhere after version 2.30.0, my guess is that I am not using the proper glib! I tried to force gstreamer_ndk_bundle to build with sources from the folder /home/marko/glib-2.33.1/, which I compiled/installed, by exporting these env vars:
        GLIB_GENMARSHAL=/home/marko/glib-2.33.1/gobject/glib-genmarshal
        GLIB_COMPILE_SCHEMAS=/home/marko/glib-2.33.1/gio/glib-compile-schemas
    I also changed gmarshal.h so it includes gmarshal.h from the installed glib folder:
        #ifndef _marko_glib_loaded
        #define _marko_glib_loaded
        #include "/home/marko/glib-2.33.1/gobject/gmarshal.h"
        #endif
    But it failed in both cases. How can I know which glib is used while compiling gstreamer and install the proper one? How can I force gstreamer_ndk_bundle to use glib sources from the folder I have un-tarred/configured/installed and not the system ones, or whatever ones it uses? I read somewhere that I need the gstreamer-devel package if I keep getting this error while compiling. Where can I find that package?! Can't Google it out... Has anyone EVER built gstreamer_ndk_bundle and lived to tell the tale?

    Read the article

  • When is type testing OK?

    - by svidgen
    Assuming a language with some inherent type safety (e.g., not JavaScript): given a method that accepts a SuperType, we know that in most cases wherein we might be tempted to perform type testing to pick an action:
        public void DoSomethingTo(SuperType o) {
            if (o isa SubTypeA) {
                o.doSomethingA();
            } else {
                o.doSomethingB();
            }
        }
    we should usually, if not always, create a single, overridable method on the SuperType and do this:
        public void DoSomethingTo(SuperType o) {
            o.doSomething();
        }
    ... wherein each subtype is given its own doSomething() implementation. The rest of our application can then be appropriately ignorant of whether any given SuperType is really a SubTypeA or a SubTypeB. Wonderful. But we're still given is-a-like operations (is, instanceof, and the like) in most, if not all, type-safe languages. And that seems to suggest a potential need for explicit type testing. So, in what situations, if any, should we or must we perform explicit type testing? Forgive my absent-mindedness or lack of creativity. I know I've done it before; but it was honestly so long ago I can't remember if what I did was good! And in recent memory, I don't think I've encountered a need to test types outside my cowboy JavaScript.
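    For what it's worth, one commonly cited situation where an explicit type test is considered idiomatic is an equals() override, because the contract forces the method to accept an arbitrary object, so polymorphic dispatch alone cannot pick the right comparison. A small Java sketch (the Point class is made up for illustration):
        public final class Point {
            private final int x;
            private final int y;

            public Point(int x, int y) {
                this.x = x;
                this.y = y;
            }

            @Override
            public boolean equals(Object other) {
                // Explicit type test: equals() must accept any Object.
                if (!(other instanceof Point)) {
                    return false;
                }
                Point p = (Point) other;
                return x == p.x && y == p.y;
            }

            @Override
            public int hashCode() {
                return 31 * x + y;
            }
        }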

    Read the article

  • What causes bad performance in consumer apps?

    - by Crashworks
    My Comcast DVR takes at least three seconds to respond to every remote control keypress, making the simple task of watching television into a frustrating button-mashing experience. My iPhone takes at least fifteen seconds to display text messages and crashes ¼ of the times I try to bring up the iPad app; simply receiving and reading an email often takes well over a minute. Even the navcom in my car has mushy and unresponsive controls, often swallowing successive inputs if I make them less than a few seconds apart. These are all fixed-hardware end-consumer appliances for which usability should be paramount, and yet they all fail at basic responsiveness and latency. Their software is just too slow. What's behind this? Is it a technical problem, or a social one? Who or what is responsible? Is it because these were all written in managed, garbage-collected languages rather than native code? Is it the individual programmers who wrote the software for these devices? In all of these cases the app developers knew exactly what hardware platform they were targeting and what its capabilities were; did they not take it into account? Is it the guy who goes around repeating "optimization is the root of all evil"? Did he lead them astray? Was it a mentality of "oh, it's just an additional 100ms" each time, until all those milliseconds add up to minutes? Is it my fault, for having bought these products in the first place? This is a subjective question, with no single answer, but I'm often frustrated to see so many answers here saying "oh, don't worry about code speed, performance doesn't matter" when clearly at some point it does matter for the end-user who gets stuck with a slow, unresponsive, awful experience. So, at what point did things go wrong for these products? What can we as programmers do to avoid inflicting this pain on our own customers?

    Read the article

  • Welcome to JavaOne!

    - by marius.ciortea
    Welcome to this year's JavaOne conference! We are glad you dropped by. We want to keep you informed of all the happenings around JavaOne: all the events leading up to the conference and all the events during the conference week itself. We'll cover announcements, news, planning (but we won't make you go to any meetings), and snafus (nothing that makes us look too bad, of course). We'll even throw in a contest or two to make sure you are paying attention. We'll post a couple of times a week, and then more frequently as we get closer to September. There's a group of us, and we cover the Java beat, JUGs, Oracle Technology Network, Oracle Solaris, and lots more. What do you want to hear about? Let us know. A group of us from the office went to see the movie Iron Man 2 (it just debuted in the United States) last week and it reminded us of Java, the Java community, and JavaOne. In all three cases, from many disparate (and sometimes seemingly incompatible) parts and people, something comes together that works, is cool, and helps make a better world. Right now, there are hundreds of little islands of planning, all busy answering questions for JavaOne: What sessions get selected? What goes in the Mason Street tent (until a few weeks ago, will there be a tent on Mason Street?)? What do the JUGs need? Which Oracle ACEs will be there? Can we do a surf theme at the OTN party? And, somehow, like an Iron Man suit, they all come together and work to make a great event. At least, we hope it will be great. That's for you to decide. Please don't be shy--give us your comments and suggestions. We'll be listening. P.S. You can attend Stark Expo online at Oracle.com/ironman2, where you can train to become a "Master Cloud Operative." I got my MCO certification. I wish I had a card to put in my wallet.

    Read the article

  • Explaining the difference between OData & RDF by way of analogy

    - by jamiet
    A couple of months back I wrote a blog post entitled Microsoft, OData and RDF where I gave a high-level view of the OData protocol and how it compares to RDF. I talked about linked data, triples and suchlike, which may have been somewhat useful, however jargon-heavy. Earlier today Dr Michael Hausenblas (blog | twitter) offered an analogy which I think is probably more useful, and with Michael's permission I'm re-posting it here: Imagine a Web (a Web of Documents, if you wish), which is not based on HTML and hyperlinks, but on MS Word documents. The documents are all available on the Internet, so you can download them and consume the content. But after you're done with a certain document that talks about a book, how do you learn more about it? For example, reviews about the book or where you can purchase it? Maybe the original document mentions that there is some more related information on another server. So you'd need to go there and look for the related bit of information yourself. You see? That's what the Web is great at – you just click on a hyperlink and it takes you to the document (or section) you're interested in. All the legwork is taken care of for you through HTML, URIs and HTTP. Hm, right, but how is this related to OData? Well, OData feels a bit like the above-mentioned scenario, just concerning data. Of course you – well, actually rather a software program, I guess – can consume it (a single source), but that's it. (from Oh – it is data on the Web by Michael Hausenblas) I believe that OData has loads of use cases but it's important to understand its limitations as well, and I think Michael has done a good job of explaining those limitations. @Jamiet

    Read the article

  • Save Upgrade downtime: Upgrade APEX upfront

    - by Mike Dietrich
    With almost every patch or release upgrade of the Oracle Database a new version of Oracle Application Express (APEX) will be installed. And as APEX is part of the database installation, it will be upgraded as part of the component upgrades after the ORACLE SERVER component has been successfully upgraded to the new release. But the APEX upgrade can take a while (several minutes or even more in some cases). Therefore it is common advice to upgrade APEX upfront, before upgrading the database, as this can be done online while the database is in production (unless your database serves just as an APEX application backend - in this case upgrading APEX upfront won't save you anything). To upgrade Oracle APEX upfront you'll have to follow MOS Note 1088970.1. It explains that you'll have to:
    Determine the installation type by running this query:
        select count(*) from <SCHEMA>.WWV_FLOWS where id = 4000;
    whereas <SCHEMA> can be one of the following:
    - FLOWS_010500: 1.5.X
    - FLOWS_010600: 1.6.X
    - FLOWS_020000: 2.0.X
    - FLOWS_020100: 2.1.X
    - FLOWS_020200: 2.2.X
    - FLOWS_030000: 3.0.X
    - FLOWS_030100: 3.1.X
    - APEX_030200: 3.2.X
    - APEX_040000: 4.0.X
    - APEX_040100: 4.1.X
    - APEX_040200: 4.2.X
    If the query returns 0 then you'll need to run apxrtins.sql. If the query returns 1 then you'll need to execute apexins.sql. Download the newest APEX package and install it. -Mike
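    If you are not sure which APEX/FLOWS schema is present in the first place, here is a hedged sketch of a quick check to run before the query above (it assumes you have access to DBA_USERS, and it may also list auxiliary APEX accounts):
        -- List the APEX-related schemas installed in this database
        select username from dba_users
         where username like 'FLOWS%' or username like 'APEX%';

        -- Then plug the schema name into the note's check, e.g. for APEX_040200:
        select count(*) from APEX_040200.WWV_FLOWS where id = 4000;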

    Read the article

  • Movember 2012

    - by Tim Koekkoek
    If you were lucky enough to visit one of the Oracle Dublin offices during the month of November, you may have noticed a bunch of mustached merchants. If you thought the mustache was the newest hair fashion in Ireland, you were wrong. These guys were the Mo Bros and proud members of MOracle, our Movember 2012 team. The aim of Movember is to raise vital funds and awareness for men's health, especially prostate cancer. To raise these funds, men don't shave their upper lips for a whole month and get sponsored for it by friends, family and colleagues. To highlight the importance of supporting this cause, take a look at these statistics:
    • 1 out of 8 men will be diagnosed with prostate cancer during their life.
    • This year more than 2,000 new cases of the disease will be diagnosed.
    • 1 out of 3 men will be diagnosed with cancer during their life.
    It was a long and heavy month for all the Mo Bros, but in the end the effort paid off. Under the leadership of team captain Jimmy, this team managed to raise over €4,400 and was ranked #34 out of 1142 Irish Movember teams. The team couldn't have done it without the constant support of our colleagues and sponsors. Many thanks to all of you! We are very happy to have raised money and awareness for men's health. On top of that we are also happy to have raised awareness for the most underrated and abandoned piece of man's hair… the mustache. This is just the beginning; soon many men will proudly wear this fashionable look again!

    Read the article

  • Top 5 Developer Enabling Nuggets in MySQL 5.6

    - by Rob Young
    MySQL 5.6 is truly a better MySQL and reflects Oracle's commitment to the evolution of the most popular and widely used open source database on the planet. The feature-complete 5.6 release candidate was announced at MySQL Connect in late September and the production-ready, generally available ("GA") product should be available in early 2013. While the messaging around 5.6 has focused mainly on mass-appeal, advanced topics like performance/scale, high availability, and self-healing replication clusters, MySQL 5.6 also provides many developer-friendly nuggets designed to enable those who are building the next generation of web-based and embedded applications and services. Boiling the 5.6 feature set down to a smaller set of simple, easy-to-use goodies designed with developer agility in mind, these things deserve a quick look:
    Subquery Optimizations
    Using semi-JOINs and late materialization, the MySQL 5.6 Optimizer delivers greatly improved subquery performance. Specifically, the optimizer is now more efficient in handling subqueries in the FROM clause; materialization of subqueries in the FROM clause is now postponed until their contents are needed during execution. Additionally, the optimizer may add an index to derived tables during execution to speed up row retrieval. Internal tests run using the DBT-3 benchmark Query #13, shown below, demonstrate an order of magnitude improvement in execution times (from days to seconds) over previous versions.
        select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
        from customer, orders, lineitem
        where o_orderkey in (
                select l_orderkey
                from lineitem
                group by l_orderkey
                having sum(l_quantity) > 313
              )
          and c_custkey = o_custkey
          and o_orderkey = l_orderkey
        group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice
        order by o_totalprice desc, o_orderdate
        LIMIT 100;
    What does this mean for developers? For starters, simplified subqueries can now be coded instead of complex joins for cross-table lookups:
        SELECT title FROM film WHERE film_id IN (SELECT film_id FROM film_actor GROUP BY film_id HAVING count(*) > 12);
    And even more importantly, subqueries embedded in packaged applications no longer need to be re-written into joins. This is good news for both ISVs and their customers who have access to the underlying queries and who have spent development cycles writing, testing and maintaining their own versions of re-written queries across updated versions of a packaged app. The details are in the MySQL 5.6 docs.
    Online DDL Operations
    Today's web-based applications are designed to rapidly evolve and adapt to meet business and revenue-generation requirements. As a result, development SLAs are now most often measured in minutes vs days or weeks. For example, when an application must quickly support new product lines or new products within existing product lines, the backend database schema must adapt in kind, and most commonly while the application remains available for normal business operations. MySQL 5.6 supports this level of online schema flexibility and agility by providing the following new ALTER TABLE online DDL syntax additions:
    - CREATE INDEX
    - DROP INDEX
    - Change AUTO_INCREMENT value for a column
    - ADD/DROP FOREIGN KEY
    - Rename COLUMN
    - Change ROW_FORMAT, KEY_BLOCK_SIZE for a table
    - Change COLUMN NULL, NOT NULL
    - Add, drop, reorder COLUMN
    Again, the details are in the MySQL 5.6 docs.
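    As a hedged illustration of the online DDL additions just listed (the table and column names below are made up, not from the article), MySQL 5.6 also lets you request the algorithm and lock level explicitly, so the statement fails fast instead of silently blocking writers:
        -- Add a column without blocking concurrent reads and writes on the table.
        ALTER TABLE orders
            ADD COLUMN customer_note VARCHAR(255),
            ALGORITHM=INPLACE, LOCK=NONE;

        -- Build a secondary index online (an in-place operation in 5.6).
        CREATE INDEX idx_orders_customer_note ON orders (customer_note);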
    Key-value access to InnoDB via Memcached API
    Many of the next generation of web, cloud, social and mobile applications require fast operations against simple Key/Value pairs. At the same time, they must retain the ability to run complex queries against the same data, as well as ensure the data is protected with ACID guarantees. With the new NoSQL API for InnoDB, developers have all the benefits of a transactional RDBMS, coupled with the performance capabilities of a Key/Value store. MySQL 5.6 provides simple, key-value interaction with InnoDB data via the familiar Memcached API. Implemented via a new Memcached daemon plug-in to mysqld, the new Memcached protocol is mapped directly to the native InnoDB API and enables developers to use existing Memcached clients to bypass the expense of query parsing and go directly to InnoDB data for lookups and transactionally compliant updates. The API makes it possible to re-use standard Memcached libraries and clients, while extending Memcached functionality by integrating a persistent, crash-safe, transactional database back-end. The implementation is illustrated in the original article. So does this option provide a performance benefit over SQL? Internal performance benchmarks using a customized Java application and test harness show some very promising results, with a 9X improvement in overall throughput for SET/INSERT operations. You can follow the InnoDB team blog for the methodology, implementation and internal test cases that generated these results here. How to get started with the Memcached API to InnoDB is here.
    New Instrumentation in Performance Schema
    The MySQL Performance Schema was introduced in MySQL 5.5 and is designed to provide point-in-time metrics for key performance indicators. MySQL 5.6 improves the Performance Schema in answer to the most common DBA and Developer problems. New instrumentation includes:
    - Statements/Stages: What are my most resource-intensive queries? Where do they spend time?
    - Table/Index I/O, Table Locks: Which application tables/indexes cause the most load or contention?
    - Users/Hosts/Accounts: Which application users, hosts, accounts are consuming the most resources?
    - Network I/O: What is the network load like? How long do sessions idle?
    - Summaries: Aggregated statistics grouped by statement, thread, user, host, account or object.
    The MySQL 5.6 Performance Schema is now enabled by default in the my.cnf file with optimized and auto-tune settings that minimize overhead (< 5%, but mileage will vary), so using the Performance Schema on a production server to monitor the most common application use cases is less of an issue. In addition, new atomic levels of instrumentation enable the capture of granular levels of resource consumption by users, hosts, accounts, applications, etc. for billing and chargeback purposes in cloud computing environments. The MySQL docs are an excellent resource for all that is available and that can be done with the 5.6 Performance Schema.
    Better Condition Handling - GET DIAGNOSTICS
    MySQL 5.6 enables developers to easily check for error conditions and code for exceptions by introducing the new MySQL Diagnostics Area and corresponding GET DIAGNOSTICS interface command.
    The Diagnostics Area can be populated via multiple options and provides two kinds of information:
    - Statement: provides the affected row count and the number of conditions that occurred
    - Condition: provides error codes and messages for all conditions that were returned by a previous operation
    The addressable items for each are listed in the original article. The new GET DIAGNOSTICS command provides a standard interface into the Diagnostics Area and can be used via the CLI or from within application code to easily retrieve and handle the results of the most recent statement execution. An example of how it is used might be:
        mysql> DROP TABLE test.no_such_table;
        ERROR 1051 (42S02): Unknown table 'test.no_such_table'
        mysql> GET DIAGNOSTICS CONDITION 1
            -> @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT;
        mysql> SELECT @p1, @p2;
        +-------+------------------------------------+
        | @p1   | @p2                                |
        +-------+------------------------------------+
        | 42S02 | Unknown table 'test.no_such_table' |
        +-------+------------------------------------+
    Options for leveraging the MySQL Diagnostics Area and GET DIAGNOSTICS are detailed in the MySQL Docs. While the above is a summary of some of the key developer-enabling 5.6 features, it is by no means exhaustive. You can dig deeper into what MySQL 5.6 has to offer by reading this developer zone article or checking out "What's New in MySQL 5.6" in the MySQL docs. BONUS ALERT! If you are developing on Windows or are considering MySQL as an alternative to SQL Server for your next project, application or shipping product, you should check out the MySQL Installer for Windows. The installer includes the MySQL 5.6 RC database, all drivers, Visual Studio and Excel plugins, tray monitor and development tools, all in a single download and GUI installer. So what are your next steps?
    - Register for the Dec. 13 "MySQL 5.6: Building the Next Generation of Web-Based Applications and Services" live web event. Hurry! Seats are limited.
    - Download the MySQL 5.6 Release Candidate (look under the Development Releases tab)
    - Provide feedback (http://bugs.mysql.com/)
    - Join the developer discussion on the MySQL Forums
    - Explore all MySQL Products and Developer Tools
    As always, thanks for your continued support of MySQL!

    Read the article

  • Partner Webcast - Oracle WebCenter: Portal Highlights - 31 Oct 2013

    - by Thanos Terentes Printzios
    Oracle WebCenter is the center of engagement for business. In order to succeed in today's economy, organizations need to engage with information across all channels to ensure customers, partners and employees have access to the right information in the context of the business process in which they are engaged. The latest release of Oracle WebCenter addresses this challenge with updates across its complete portfolio. Nowadays, portals are multi-channel applications that enable the creation, sharing and distribution of personalized content, as well as access to social networking and self-service capabilities. Web 2.0 and social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. The new release of Oracle WebCenter Portal makes it easier and faster for business users to create intuitive portals with integrated application content by:
    - Streamlining development with an integrated set of tools for web and mobile.
    - Providing out-of-the-box templates for common use cases.
    - Expediting the portal creation experience with new development tools that empower business users to build and deploy mobile portals and websites with unprecedented speed, without having to wait for IT, which leads to a shorter time to market and reduced costs.
    Join us to discover a web platform that allows organizations to quickly and easily create intranets, extranets, composite applications, and self-service portals, providing users a more secure and efficient way of consuming information and interacting with applications, processes, and other users: the latest Oracle WebCenter Portal release 11gR1 PS7.
    Agenda:
    - Oracle WebCenter Overview
    - Oracle WebCenter Portal: new and enhanced features to improve the user experience
      - For Knowledge Workers: Simplified Portal Creation, Search Enhancements
      - For Application Specialists: New Portal Builder, Simplified Mobile Development
      - For Developers: Enhanced APIs and ADF Support
      - For Administrators: Lifecycle Enhancements, Search Administration, Impersonation
    - Summary - Q&A
    This is our first webcast of an Oracle WebCenter series for partners, with the support of the Oracle EMEA WebCenter Partner Community. Delivery format: this FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to the start time may not receive confirmation to attend. New invitations will be shared for additional webcasts planned for Oracle WebCenter. Thursday, October 31st, 2013, 10am CET (8am UTC / 11am EEST). Register Now. For any questions please contact us at [email protected]. Stay Connected

    Read the article

  • Rails/Node.js interaction

    - by lpvn
    My co-worker and I are developing a web application with Rails and Node.js, and we can't reach a consensus regarding a particular architectural decision. Our setup is basically a Rails server working with Node.js and Redis: when a client makes an HTTP request to our Rails API, in some cases our Rails application posts the response to a Redis database and then Node.js transmits the response via WebSocket. Our disagreement is over the following point: my co-worker thinks that using Node.js to send data to clients is somewhat business logic and should be inside the model, so in the first code he wrote he used broadcast commands in callbacks and other places in the model; he's convinced that the models are the best place for the interaction between Rails and Node. I, on the other hand, think that using Node.js belongs to the runtime realm; my take is that the broadcast commands and other Node.js interactions should be in the controller and should only be used in a model if passed through a well-defined interface, just like the situation where a model needs to access the current user of a session. At this point we're tired of arguing over this same thing and our discussion consists of us repeating the same opinions to each other over and over. Could anyone, preferably with experience with the same setup, give us an unambiguous answer as to which solution is more appropriate, and why?

    Read the article

  • Does TDD really work for complex projects?

    - by Amir Rezaei
    I'm asking this question regarding problems I have experienced during TDD projects. I have noticed the following challenges when creating unit tests.
    - Generating and maintaining mock data: it's hard and unrealistic to maintain large amounts of mock data. It is even harder when the database structure undergoes changes.
    - Testing the GUI: even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce the GUI scenario.
    - Testing the business logic: in my experience TDD works well if you limit it to simple business logic. However, complex business logic is hard to test since the number of combinations of tests (the test space) is very large.
    - Contradictions in requirements: in reality it's hard to capture all requirements during analysis and design. Many times the noted requirements lead to contradictions because the project is complex, and the contradictions are found late, during the implementation phase. TDD requires that requirements are 100% correct. In such cases one could expect that conflicting requirements would be caught while creating the tests, but the problem is that this isn't the case in complex scenarios.
    I have read this question: Why does TDD work? Does TDD really work for complex enterprise projects, or is it practically limited to certain project types?

    Read the article

  • Do or can robots cause considerable performance issues?

    - by Anicho
    So the question in the title is exactly what I am trying to find out. My case is: at work we are in a discussion with team members who seem to think bots will cause us performance problems when running against our services website. Our setup: let's say I have the site www.mysite.co.uk; this is a shop window to our online services, which sit on www.mysiteonline.co.uk. When people search on Google for mysite they see mysiteonline.co.uk as well as mysite.co.uk. Cases against stopping bots from crawling:
    - We don't store GBs of data publicly available on the web
    - Most friendly bots, if they were going to cause issues, would have done so already
    - In our instance the bots can't crawl the site because it requires a username and password
    - Stopping bots with robots.txt causes an issue with SEO (ref. 1)
    - If it were a malicious bot, it would ignore robots.txt or meta tags anyway
    Ref 1: if we were to block mysiteonline.co.uk from having robots crawl it, this would affect SEO rankings and make it inconvenient for users who actively search for mysite to find mysiteonline, which we can prove is the case for a good portion of our users.
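    For reference when weighing the robots.txt point in the list above, the rule being debated is only a couple of lines served from mysiteonline.co.uk; a hedged sketch of the two variants (the /app/ path in the second one is made up for illustration):
        # https://www.mysiteonline.co.uk/robots.txt - block everything.
        # Well-behaved crawlers stop indexing the site entirely, which is the SEO concern.
        User-agent: *
        Disallow: /

        # Narrower alternative (separate file): only keep crawlers out of the authenticated area.
        User-agent: *
        Disallow: /app/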

    Read the article

  • SQLAuthority News – Scaling Up Your Data Warehouse with SQL Server 2008 R2

    - by pinaldave
    Data warehouses are supposed to contain huge amounts of data from the beginning. However, there are cases when even big is not big enough. Every data warehouse admin will agree that they have faced situations where they needed to scale up their data warehouse. Microsoft has released a white paper discussing exactly that. Here is the abstract from the official Microsoft site: SQL Server 2008 introduced many new functional and performance improvements for data warehousing, and SQL Server 2008 R2 includes all these and more. This paper discusses how to use SQL Server 2008 R2 to get great performance as your data warehouse scales up. We present lessons learned during extensive internal data warehouse testing on a 64-core HP Integrity Superdome during the development of the SQL Server 2008 release, and via production experience with large-scale SQL Server customers. Our testing indicates that many customers can expect their performance to nearly double on the same hardware they are currently using, merely by upgrading to SQL Server 2008 R2 from SQL Server 2005 or earlier, and compressing their fact tables. We cover techniques to improve manageability and performance at high-scale, encompassing data loading (extract, transform, load), query processing, partitioning, index maintenance, indexed view (aggregate) management, and backup and restore. Scaling Up Your Data Warehouse with SQL Server 2008 R2. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed by passing NULL for all five parameters. The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.
        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME
        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()
        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]
    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set shows some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
These are pretty self-explanatory and we can wrap them in some code to make things a little easier to read:

SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] ,
       OBJECT_NAME([ddips].[object_id]) AS [TableName] ,
       …
FROM   [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

which gives us the database and table names in a readable form. Joining to sys.indexes adds the index name as well:

SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] ,
       OBJECT_NAME([ddips].[object_id]) AS [TableName] ,
       [i].[name] AS [IndexName] ,
       …
FROM   [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
       INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                      AND [ddips].[object_id] = [i].[object_id]

These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

SELECT *
FROM   [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

Note: despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the dmv's parameters, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

DECLARE @db_id SMALLINT;
DECLARE @object_id INT;
SET @db_id = DB_ID(N'AdventureWorks_2008');
SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
IF @db_id IS NULL
BEGIN
    PRINT N'Invalid database';
END
ELSE IF @object_id IS NULL
BEGIN
    PRINT N'Invalid object';
END
ELSE
BEGIN
    SELECT *
    FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
END;
GO

In cases where the results of querying this dmv don't feed into any other process (i.e. you are simply viewing the results in the SSMS results area) it will be obvious when the results are not consistent with what you expected, and that is the method I have used for this blog. So, now that we can relate the values in these columns to something we recognise in the database, let's see what those other values in the dmv are all about. The next columns are partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level; we'll skip these as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:
avg_fragmentation_in_percent – the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
fragment_count – the number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
avg_fragment_size_in_pages – the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
page_count – the total number of index or data pages in use.
OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages: an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9), as the short sketch below shows. 
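To see that correlation side by side, a minimal sketch along these lines can be used (a hedged example only: LIMITED mode is assumed so that the fragment columns are populated, and the column list is trimmed for readability – try it on a test database first):

-- Compare the reported average fragment size with page_count / fragment_count
SELECT OBJECT_NAME([ddips].[object_id]) AS [TableName] ,
       [ddips].[index_id] ,
       [ddips].[page_count] ,
       [ddips].[fragment_count] ,
       [ddips].[avg_fragment_size_in_pages] ,
       [ddips].[page_count] * 1.0 / NULLIF([ddips].[fragment_count], 0) AS [CalculatedAvgFragmentSize]
FROM   [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
WHERE  [ddips].[fragment_count] IS NOT NULL;

For the 27 page / 3 fragment example the last two columns should both come out at 9.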
An index in 3 fragments means there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do in order to retrieve the data to satisfy the query increases, and this would start to degrade performance. This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply (a minimal sketch of such a check appears at the end of this excerpt). There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. There is a correlation between several of these columns: page_count is the number of 8KB pages used by the index, avg_page_space_used_in_percent is how full each page is (how much of the 8KB has actual data written on it), record_count is how many records are recorded in the index and avg_record_size_in_bytes is the average size of each record. This approximates to: ((page_count*8) * 1024 * (avg_page_space_used_in_percent/100)) / record_count = avg_record_size_in_bytes* (a quick sanity check of this approximation is sketched below). avg_page_space_used_in_percent is an important column to review as it indicates how much of the disk space given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records fit in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly what their names suggest: details of the smallest and largest records in the index, offered purely as a guide to help the DBA better understand how storage is being used. 
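As a quick, hedged sanity check of the approximation given above (DETAILED mode is assumed so that the record-level columns are populated, and the result will only ever be approximate for the reasons noted in the footnote below):

-- Approximate avg_record_size_in_bytes from page count, page fullness and record count
SELECT OBJECT_NAME([ddips].[object_id]) AS [TableName] ,
       [ddips].[index_id] ,
       [ddips].[avg_record_size_in_bytes] ,
       ( [ddips].[page_count] * 8 * 1024 * ( [ddips].[avg_page_space_used_in_percent] / 100 ) )
           / NULLIF([ddips].[record_count], 0) AS [ApproxRecordSizeInBytes]
FROM   [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
WHERE  [ddips].[index_level] = 0          -- leaf level only
       AND [ddips].[record_count] > 0;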
So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider: adjusting the FILL_FACTOR of the index in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and analysing the columns used in the index so that new records do not need to be inserted in the middle of the index but are instead always added at the end. * – it is approximate because there are many factors, associated with things like the type of data and other database settings, that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and, as such, will have a generally positive disposition towards Red Gate tools whenever they are discussed. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or implied; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
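Finally, here is a minimal, hedged sketch of the kind of threshold-based check described above. The 5 and 30 per cent figures are only the commonly quoted starting points (as in the Books Online example referenced earlier), and the page_count filter and suggested actions are illustrative assumptions to review before acting on them:

-- Suggest a defragmentation action per index based on leaf-level logical fragmentation
-- The 5% / 30% thresholds are illustrative starting points only
SELECT OBJECT_SCHEMA_NAME([ddips].[object_id]) + N'.' + OBJECT_NAME([ddips].[object_id]) AS [TableName] ,
       [i].[name] AS [IndexName] ,
       [ddips].[avg_fragmentation_in_percent] ,
       CASE
           WHEN [ddips].[avg_fragmentation_in_percent] >= 30 THEN N'REBUILD'
           WHEN [ddips].[avg_fragmentation_in_percent] >= 5  THEN N'REORGANIZE'
           ELSE N'No action needed'
       END AS [SuggestedAction]
FROM   [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
       INNER JOIN [sys].[indexes] AS i ON [ddips].[object_id] = [i].[object_id]
                                      AND [ddips].[index_id] = [i].[index_id]
WHERE  [ddips].[index_id] > 0          -- skip heaps
       AND [ddips].[page_count] > 100  -- skip very small indexes
ORDER BY [ddips].[avg_fragmentation_in_percent] DESC;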

    Read the article

  • Adding SSE support in Java EE 8

    - by delabassee
    SSE (Server-Sent Events) is a standard mechanism used to push server notifications to clients over HTTP. SSE is often compared to WebSocket as they are both supported in HTML5 and they both give the server a way to push information to its clients, but they are different too! See here for some of the pros and cons of using one or the other. For REST applications, SSE can be quite complementary as it offers an effective solution for a one-way publish-subscribe model, i.e. a REST client can 'subscribe' to and get SSE-based notifications from a REST endpoint. As a matter of fact, Jersey (the JAX-RS Reference Implementation) has supported SSE for quite some time (see the Jersey documentation for more details). There might also be cases where one wants to use SSE directly from the Servlet API. Sending SSE notifications using the Servlet API is relatively straightforward; to give you an idea, check here for 2 SSE examples based on the Servlet 3.1 API. We are thinking about adding SSE support in Java EE 8, but the question is where, as there are several options in the platform where SSE could potentially be supported: the Servlet API, the WebSocket API, JAX-RS, or even a dedicated SSE API, and thus a dedicated JSR too! Santiago Pericas-Geertsen (JAX-RS Co-Spec Lead) conducted an initial investigation around that question. You can find the arguments for the different options and Santiago's findings here. So at this stage JAX-RS seems to be a good choice for supporting SSE in Java EE. This will obviously be discussed in the respective JCP Expert Groups, but what is your opinion on this question?

    Read the article
