Search Results

Search found 605 results on 25 pages for 'diagnose'.

Page 21/25 | < Previous Page | 17 18 19 20 21 22 23 24 25  | Next Page >

  • Is there anything on a local network or desktop environment that could affect JScript execution?

    - by uku
    I know this sounds odd. The JS on my project functions perfectly, except when the web site is accessed using computers at one specific company. To make things even more difficult, the JS fails only about 50% of the time when run from that company. The JS failure occurs with Firefox, Chrome, and IE. I have tested this myself using FF and Chrome on a thumb drive. The browsers on my thumb drive always display my project site perfectly, except when run from a computer on said company's network, where they fail at the same rate as the installed browsers. My JS is using jQuery and making some Ajax calls. The Ajax calls are where the failure is occurring. To diagnose the problem I created a logging function for my Ajax calls and recorded success and failure. Over a one-month period, there were only a handful of failures (about 1%) from all access points other than this company. Oddly enough, the Ajax calls in the logging function are not failing. There is nothing exotic there - just Win XP SP3. I have never noticed any other unusual behavior from their network. The company is a division of a mega ISP and is on their corporate network. Any other suggestions for troubleshooting would be welcome.

    Read the article

  • Looking for out-of-place directories in an SVN working copy?

    - by jthg
    An annoyance that I sometimes come across with SVN is the working copy getting corrupted by one of the .svn folders getting moved from its original location. It doesn't happen often if you're careful and use the proper tools for all moves and renames, but it still somehow happens from time to time. First, does anyone know if there's a good way to catch the problem before a commit is even done? Cruise control usually catches the problem, but there are plenty of cases it wouldn't catch. Second, is there a quick and easy way to check for out-of-place .svn folders if I suspect that there is one? I can definitely do it manually by deducing what directory is out of place based on the compiler errors or by diffing the working copy with another clean checkout. But this seems like a problem that SVN can diagnose in a second by giving me a list of all directories whose parent directory in the working copy doesn't match its parent directory in the repository. Is there some way to have SVN give me a list like that? Thanks.
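    One way to script that kind of check yourself, outside of SVN, is to ask svn info for each directory's repository URL and flag any directory whose URL is not its parent's URL plus the directory name. A rough sketch in Python (assuming the svn command-line client is on the PATH and a pre-1.7 working copy with a .svn folder per directory; the function names and the naive URL comparison are mine, not part of SVN):

        import os
        import subprocess

        def repo_url(path):
            # Ask the svn client for the repository URL of a versioned path.
            out = subprocess.run(["svn", "info", path], capture_output=True, text=True)
            for line in out.stdout.splitlines():
                if line.startswith("URL: "):
                    return line[len("URL: "):]
            return None   # unversioned or broken path

        def find_out_of_place(root):
            for dirpath, dirnames, _ in os.walk(root):
                if ".svn" in dirnames:
                    dirnames.remove(".svn")   # do not descend into the admin areas
                parent_url = repo_url(dirpath)
                for d in dirnames:
                    child_url = repo_url(os.path.join(dirpath, d))
                    # Naive check: a child's URL should be its parent's URL plus its name.
                    if parent_url and child_url and child_url != parent_url + "/" + d:
                        print("out of place:", os.path.join(dirpath, d))

        find_out_of_place(".")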

    Read the article

  • Convert Markdown text to RTF, using Ruby and Pandoc?

    - by niteshade
    Playing with Ruby and Ruby-Pandoc. Seems like a nice tool, if I can get it to work. I'd like to convert some Markdown text (with embedded lists and other fanciness) to Rich Text. Here's the text I'm converting: Title === This is a paragraph. Hallelujah. Here comes a nested list. --- * List item 1 * List item 1.1 * List item 1.2 * List item 2 * List item 2.1 Here's my Ruby code... require 'pandoc-ruby' input = File.read('test.md') converter = PandocRuby.new(input, from: :markdown, to: :rtf) puts converter.convert ... which (after saving the output to a file) produces a document without anything but a title. Here's the code of the RTF file: {\pard \ql \f0 \sa180 \li0 \fi0 \b \fs36 Title\par} {\pard \ql \f0 \sa180 \li0 \fi0 This is a paragraph. Hallelujah.\par} {\pard \ql \f0 \sa180 \li0 \fi0 \b \fs32 Here comes a nested list.\par} {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1\par} {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1.1\par} {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1.2\par} {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 2\par} {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 2.1\sa180\par} In addition, even if it did show up in my RTF viewer (Mac TextEdit), the RTF code seems to have lost all list nesting. I don't know how to diagnose this: whether I've failed to provide necessary header information, or whether the problem is in Ruby-Pandoc. Thanks in advance!
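    One way to narrow this down is to run the pandoc binary directly on the same input and compare its output with what the gem returns; pandoc's standalone flag (-s) should also emit the full RTF header that viewers such as TextEdit expect, which plain fragment output lacks. A minimal sketch, assuming pandoc is installed and on the PATH and the Markdown is saved in test.md (the file names are illustrative):

        import subprocess

        # Convert test.md to standalone RTF with the pandoc CLI itself, bypassing
        # the Ruby wrapper, so the two outputs can be compared side by side.
        subprocess.run(
            ["pandoc", "-f", "markdown", "-t", "rtf", "-s", "-o", "direct.rtf", "test.md"],
            check=True,
        )
        print("wrote direct.rtf")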

    Read the article

  • A SharePoint Developer’s Toolchest

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). When we develop for SharePoint, we end up using many tools, third party or Microsoft, to facilitate our development. What are some of your favorite tools? Mine are as below - 1. Reflector: When I saw Reflector, I was pretty convinced that a better, more useful tool didn’t exist. Well I was wrong! Redgate took over Reflector and still offers a free version, but there is also a paid version called Reflector Pro. It lets you debug third-party assemblies as if you had the source code. Brilliant! Who needs documentation anymore when you have real code? 2. ULS Viewer: It is no secret, reading ULS logs is a pain in the rear. Well, not so with ULS Viewer, which does work with SharePoint 2007 as well. But it’s just way cooler with SharePoint 2010. You know when you get an error in SharePoint 2010 it shows you an error like the one below: Well, the ULS Viewer will allow you to set filtering criteria, allowing you to immediately zero in on an error, across multiple WFEs even. Also there are numerous other facilities built into the tool, such as advanced filtering, critical error notifications, etc. A must have! You can read the documentation of the ULSViewer here. 3. SPDisposeCheck: Did you know that the MySite object is strange? What is strange about it? That you have to dispose it even if you didn’t create it!? Well who the hell remembers all that! Honestly I do! And you should too. But there is a tool to help you sanitize your code. And that is SPDisposeCheck. You run it against your DLL or EXE, and it will give you suggestions on where you might have missed calling dispose on an object. You still have to use your head, but having this tool helps. 4. DebugView: Debugging for SharePoint can be difficult sometimes. Sometimes your breakpoints don’t get hit. And while you can try and make them hit, it is sometimes easier to just write a bunch of Debug.WriteLines, and catch them from an external application such as DebugView. You simply use your code, and DebugView will catch all the Debug.WriteLine’s in your code like this - 5. BGInfo: One annoying thing about SharePoint projects: it causes the number of servers to multiply like bunnies. As I’m RDP’ing into many computers trying to diagnose a crazy issue, sometimes it becomes hard to remember which machine is which. BGInfo puts all that on the wallpaper, along with a bunch of other useful info. A bit like this - 6. WSPBuilder: SharePoint 2007 only, but I think there may be a version for SP2010 coming later. I think the VS2010 tools for SP2010 development are quite nice, so WSPBuilder, well so far I don’t miss it. But let’s see what WSPBuilder for 2010 brings – I haven’t seen it yet. However, I want to confidently assert that WSPBuilder for SP2007 is simply awesome. 7. SharePoint Manager: The SharePoint Manager 2010 is a SharePoint object model explorer. It enables you to browse every site on the local farm and view every property. It also enables you to change the properties. The VS2010 dev tools now include a server explorer, which shows you a subset of properties in read-only mode. I would LOVE to see SharePoint Manager-like functionality built into VS2010. SharePoint Manager, a total must-have. Comment on the article ....

    Read the article

  • Install Oracle Configuration Manager's Standalone Collector

    - by Get Proactive Customer Adoption Team
    The Why and the How If you have heard of Oracle Configuration Manager (OCM), but haven’t installed it, I’m guessing this is for one of two reasons. Either you don’t know how it helps you or you don’t know how to install it. I’ll address both of those reasons today. First, let’s take a quick look at how My Oracle Support and the Oracle Configuration Manager work together to gain a good understanding of what their differences and roles are before we tackle the install. Oracle Configuration Manager is the tool that actually performs the data collection task. You deploy this lightweight piece of software into your system to collect configuration information about the system, and OCM uploads that data to Oracle’s customer configuration repository. Oracle Support Engineers then have the configuration data available when you file a service request. You can also view the data through My Oracle Support. The real value is that the data Oracle Configuration Manager collects can help you avoid problems and get your Service Requests solved more quickly. When you view the information in My Oracle Support’s user interface to OCM, it may help you avoid situations that create problems. The proactive tools included in Oracle Configuration Manager help you avoid issues before they occur. You also save time because you didn’t need to open a service request. For example, you can use this capability when you need to compare your system configuration at two points in time, or monitor the system health. If you make the configuration data available to Oracle Support Engineers, when you need to open a Service Request the data helps them diagnose and resolve your critical system issues more quickly, which means you get answers more quickly too. Quick Installation Process Overview Before we dive into the step-by-step details, let me provide a quick overview. For some of you, this will be all you need. Log in to My Oracle Support and download the data collector from the Collector tab. If you don’t see the Collector tab, click the More tab to gain access. On the Collector tab, you will find a drop-down list showing which platforms are available. You can also see more ways the Collector can help you if you click through the carousel of benefits. After you download the software for your platform, use FTP to move that file (.zip) from your PC to the server that hosts the Oracle software. Once you have that file on the server, locate the $ORACLE_HOME directory, and unzip the file within that directory. You can then use the command line tool to start the installation process. The installation process requires your My Oracle Support credential (Support Identifier, username, and password) and a proxy specification (host IP address, port number, username, and password). Installation Step-by-Step Download the collector zip file from My Oracle Support and place it into your $ORACLE_HOME. Unzip the zip file you downloaded from My Oracle Support – this will create a directory named CCR with several subdirectories. Using the command line, go to “$ORACLE_HOME/CCR/bin” and run the following command: “setupCCR”. Provide your My Oracle Support credential: login, password, and Support Identifier. The installer will start deploying the collector application. You have installed the Collector. Post Installation Now that you have installed successfully, the scheduler is ready to collect configuration information for the software available in your Oracle Home. By default, the first collection will take place the day after the installation.
If you want to run an instrumentation script to start the configuration collection of your Oracle Database server, E-Business Suite, or Enterprise Manager, you will find more details on that in the Installation and Administration Guide for My Oracle Support Configuration Manager. Related documents available on My Oracle Support Oracle Configuration Manager Installation and Administration Guide [ID 728989.5] Oracle Configuration Manager Prerequisites [ID 728473.5] Oracle Configuration Manager Network Connectivity Test [ID 728970.5] Oracle Configuration Manager Collection Overview [ID 728985.5] Oracle Configuration Manager Security Overview [ID 728982.5] Oracle Software Configuration Manager: Disconnected Mode Collection [ID 453412.1]
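    For reference, the unzip-and-run sequence described above can also be scripted; a minimal sketch, assuming ORACLE_HOME is set in the environment and the downloaded zip has already been transferred to the server (the zip path is illustrative, and setupCCR will still prompt interactively for the My Oracle Support credential and proxy details):

        import os
        import subprocess
        import zipfile

        oracle_home = os.environ["ORACLE_HOME"]
        collector_zip = "/tmp/ocm_collector.zip"   # illustrative path to the downloaded file

        # Unzip the collector into $ORACLE_HOME; this creates the CCR directory tree.
        with zipfile.ZipFile(collector_zip) as z:
            z.extractall(oracle_home)

        # Run the installer; it prompts for login, password, and Support Identifier.
        subprocess.run([os.path.join(oracle_home, "CCR", "bin", "setupCCR")], check=True)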

    Read the article

  • ArchBeat Top 10 for December 2-8, 2012

    - by Bob Rhubart
    The Top 10 most-clicked items shared on the OTN ArchBeat Facebook page for the week of December 2-8, 2012 Configure Oracle SOA JMSAdatper to Work with WLS JMS Topics Another of the four posts published on Dec 4 by the Fusion Middleware A-Team blogger identified as "fip" illlustrates "how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite so that the JMS Topic messages will be evenly distributed to same composite running off different SOA cluster nodes without causing duplication." Web Service Example - Part 3: Asynchronous Part 3 in this series from the Oracle ADF Mobile blog looks at "firing the web service asynchronously and then filling in the UI when it completes." Denis says, "This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data." Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations Oracle SOA & BPM Partner Community blogger Juergen Kress shares a list of 13 SOA presentations delivered or moderated by Oracle SOA Product Management at OOW12 in San Francisco. Oracle WebLogic Server WLS Domain Browser My colleague Jeff Davies, a frequent speaker at OTN Architect Day events and a genuinely nice guy, emailed me last night with this message: "I just came across this app on Google Play. It allows WebLogic administrators to browse WLS 12c domain information. I installed it on my phone and tried it out. Works very fast." I'm an iPhone guy, but I'm perfectly comfortable taking Jeff at his word. The app is called WLS Domain Browser. Follow the link for more info from the Google Play site. Retrieve Performance Data from SOA Infrastructure Database Another of the four blog posts published on Dec 4 by very busy Oracle Fusion Middleware A-Team member "fip," this one offers "examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11G to acquire the performance statistics for a given period of time." How to Achieve OC4J RMI Load Balancing "Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusion in the field [about] how OC4J RMI load balancing works," says the Oracle Fusion Middleware A-Team member known only as "fip." "Hence I decide to dust off an old tech note that I wrote a few years back and share it with the general public." From XaaS to Java EE – Which damn cloud is right for me in 2012? Oracle ACE Director Markus Eisele wrestles with a timely technical issue and shares his observations on several of the alternatives. Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox "One of the main advantages of this is that Templates can be created away from the Exalogic Environment," explains The Old Toxophilist. (BTW: I had to look it up: a toxophilist is one who collects bows and arrows.) ADF Mobile - Implementing Reusable Mobile Architecture "Reusability was always a strong part of ADF," says Oracle ACE Director Andrejus Baranovskis. "The same high reusability level is supported now in ADF Mobile." The objective of this post is "to prove technically that [the] reusable architecture concept works for ADF Mobile." 
Using BPEL Performance Statistics to Diagnose Performance Bottlenecks Someone had a busy day… This post, one of four published on Dec 4 by a member of the Oracle Fusion Middleware A-Team identified only as "fip," offers details on how to "enable, retrieve and interpret the performance statistics, before the future versions provides a more pleasant user experience." Thought for the Day "If you're afraid to change something it is clearly poorly designed." — Martin Fowler Source: SoftwareQuotes.com

    Read the article

  • mythbuntu 12 - lirc device doesn't appear to even exist

    - by FrustratedWithFormsDesigner
    I'm trying to get a new installation of Mythbuntu working. So far, everything is OK except the remote. The sensor for the remote is on my Hauppauge WinTV HVR 1250. First I tried to run irw to see what was being picked up by the sensor: $ irw connect: No such file or directory Then trying to run lircd gives: $ lircd start$ lircd start lircd: can't open or create /var/run/lirc/lircd.pid I look for any lirc devices and find there are none: $ ls /dev/li* ls: cannot access /dev/li*: No such file or directory Just to be sure, I check in /proc/bus/input/devices, which shows me two powerbuttons (not sure why), kbd and mouse dev, and the audio devs. Nothing for the IR receiver on the tuner card (which I thought was strange because shouldn't the tuner show up here?). $ cat /proc/bus/input/devices I: Bus=0019 Vendor=0000 Product=0001 Version=0000 N: Name="Power Button" P: Phys=PNP0C0C/button/input0 S: Sysfs=/devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0 U: Uniq= H: Handlers=kbd event0 B: PROP=0 B: EV=3 B: KEY=10000000000000 0 I: Bus=0019 Vendor=0000 Product=0001 Version=0000 N: Name="Power Button" P: Phys=LNXPWRBN/button/input0 S: Sysfs=/devices/LNXSYSTM:00/LNXPWRBN:00/input/input1 U: Uniq= H: Handlers=kbd event1 B: PROP=0 B: EV=3 B: KEY=10000000000000 0 I: Bus=0003 Vendor=099a Product=7202 Version=0111 N: Name="Wireless Keyboard/Mouse" P: Phys=usb-0000:00:10.1-2/input0 S: Sysfs=/devices/pci0000:00/0000:00:10.1/usb8/8-2/8-2:1.0/input/input2 U: Uniq= H: Handlers=sysrq kbd event2 B: PROP=0 B: EV=120013 B: KEY=1000000000007 ff9f207ac14057ff febeffdfffefffff fffffffffffffffe B: MSC=10 B: LED=7 I: Bus=0003 Vendor=099a Product=7202 Version=0111 N: Name="Wireless Keyboard/Mouse" P: Phys=usb-0000:00:10.1-2/input1 S: Sysfs=/devices/pci0000:00/0000:00:10.1/usb8/8-2/8-2:1.1/input/input3 U: Uniq= H: Handlers=kbd mouse0 event3 B: PROP=0 B: EV=1f B: KEY=4837fff072ff32d bf54444600000000 70001 20c100b17c000 267bfad9415fed 9e168000004400 10000002 B: REL=143 B: ABS=100000000 B: MSC=10 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Line" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input4 U: Uniq= H: Handlers=event4 B: PROP=0 B: EV=21 B: SW=2000 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Front Mic" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input5 U: Uniq= H: Handlers=event5 B: PROP=0 B: EV=21 B: SW=10 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Rear Mic" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input6 U: Uniq= H: Handlers=event6 B: PROP=0 B: EV=21 B: SW=10 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Front Headphone" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input7 U: Uniq= H: Handlers=event7 B: PROP=0 B: EV=21 B: SW=4 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Line-Out" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input8 U: Uniq= H: Handlers=event8 B: PROP=0 B: EV=21 B: SW=40 According to dmesg, the driver was registered, but it doesn't look like any devices was associated with the driver: $ dmesg | grep irc [ 10.631162] lirc_dev: IR Remote Control driver registered, major 249 So far, I've seen a number of forum pages suggesting that I use some trick to create a link between /dev/lirc and some other device that is the REAL IR sensor, like /dev/event5, but those cases assume that the real device is shown from /proc/bus/input/devices, 
and I don't see any such device there. Any suggestions on how to fix or further diagnose this?

    Read the article

  • Windows Azure Emulators On Your Desktop

    - by BuckWoody
    Many people feel they have to set up a full Azure subscription online to try out and develop on Windows Azure. But you don’t have to do that right away. In fact, you can download the Windows Azure Compute Emulator – a “cloud development environment” – right on your desktop. No, it’s not for production use, and no, you won’t have other people using your system as a cloud provider, and yes, there are some differences with Production Windows Azure, but you’ll be able to code, run, test, diagnose, watch, change and configure code without having any connection to the Internet at all. The best thing about this approach is that when you are ready to deploy the code you’ve been testing, a few clicks deploy it to your subscription once you make one. So what deep-magic does it take to run such a thing right on your laptop or even a Virtual PC? Well, it’s actually not all that difficult. You simply download and install the Windows Azure SDK (you can even get a free version of Visual Studio for it to run on – you’re welcome) from here: http://msdn.microsoft.com/en-us/windowsazure/cc974146.aspx This SDK will also install the Windows Azure Compute Emulator and the Windows Azure Storage Emulator – and then you’re all set. Right-click the icon for Visual Studio and select “Run as Administrator”: Now open a new “Cloud” type of project: Add your Web and Worker Roles that you want to code: And when you’re done with your design, press F5 to start the desktop version of Azure: Want to learn more about what’s happening underneath? Right-click the tray icon with the Azure logo, and select the two emulators to see what they are doing: In the configuration files, you’ll see a “Use Development Storage” setting. You can call the BLOB, Table or Queue storage and it will all run on your desktop. When you’re ready to deploy everything to Windows Azure, you simply change the configuration settings and add the storage keys and so on that you need. Want to learn more about all this? Overview of the Windows Azure Compute Emulator: http://msdn.microsoft.com/en-us/library/gg432968.aspx Overview of the Windows Azure Storage Emulator: http://msdn.microsoft.com/en-us/library/gg432983.aspx January 2011 Training Kit: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=413E88F8-5966-4A83-B309-53B7B77EDF78&displaylang=en

    Read the article

  • BizTalk: Suspend shape and Convoy

    - by Leonid Ganeline
    Part 1: BizTalk: Instance Subscription and Convoys: Details This is Part 2. I am discussing the Suspend shape together with Convoys and am going to show that using them together is undesirable. In the previous article we investigated Instance Subscriptions and how they can create situations with dangerous zones in processing. Let's start with the Suspend shape. [See the BizTalk Help] "You can use the Suspend shape to make an orchestration instance stop running until an administrator explicitly intervenes, perhaps to reflect an error condition that requires attention beyond the scope of the orchestration. All of the state information for the orchestration instance is saved, and will be reinstated when the administrator resumes the orchestration instance. When an orchestration instance is suspended, an error is raised. You can specify a message string to accompany the error to help the administrator diagnose the situation." On the Suspend shape the orchestration is stopped in the Suspended (Resumable) state. Next we have two choices: one is to resume and the second is to terminate the orchestration. Is the orchestration stopped or unenlisted? You don't find a note about it anywhere. The fact is the orchestration is stopped and still enlisted. This is very important. So again, the suspended orchestration can be resumed or terminated. The moment when the operator or the operation script resumes or terminates it can be far away. This is also important. Let's go back to the case from the previous article. Make sure you notice the convoy and the dangerous zone after the last Receive shape. Now we have a Suspend shape inside the orchestration. The first orchestration instance is suspended. The next messages start a new orchestration instance and are consumed by it, right? Wrong! The orchestration is stopped on the Suspend shape but still enlisted. Now the dangerous zone, the "zombie zone", is expanded to the interval between the last receive and the moment of termination or end of the orchestration. The new orchestration instance for this convoy will not start till this moment. How fast will an operator find this suspended orchestration? Maybe hours or days. All this time the orchestration is still enlisted and gathering the convoy messages. We can resume the orchestration but we cannot resume these messages together with the orchestration. It seems the orchestration state name Suspended is misleading. The orchestration can be in the Started (and Enlisted)/Stopped (and Enlisted)/Unenlisted state. The Suspend shape switches the orchestration exactly to the Stopped state. The Stop name would describe the shape clearly and unambiguously, and the Stopped state would describe the orchestration. Imagine we could change BizTalk. The Orchestration Editor could detect these situations and return a compile error. In a similar case the Orchestration Editor forces us to use only an ordered delivery port with convoys. The run-time core could force the orchestration with a convoy to be suspended in an Unresumable state, meaning the run-time unenlists the orchestration instance subscriptions. The Suspend shape name should be changed. The "Suspend" name is misleading. The "Stop" name is clear and unambiguous. The same goes for the orchestration state: it should be “Stopped”, not “Suspended (Resumable)”. Conclusion: Using a Suspend shape together with convoy orchestrations is not recommended.

    Read the article

  • Resolve SRs Faster Using RDA - Find the Right Profile

    - by Daniel Mortimer
    Introduction Remote Diagnostic Agent (RDA) is an excellent command-line data collection tool that can aid troubleshooting / problem solving. The tool covers the majority of Oracle's vast product range, and its data collection capability is comprehensive. RDA collects data about the operating system and environment, including environment variables, kernel settings, network, O/S performance, O/S patches, and much more, as well as the Oracle products installed, including patches, logs and debug metrics, configuration, and much more. In effect, RDA can obtain a snapshot of an Oracle product and its environment. Oracle Support encourages the use of RDA because it greatly reduces service request resolution time by minimizing the number of requests from Oracle Support for more information. RDA is designed to be as unobtrusive as possible; it does not modify systems in any way. It collects useful data for Oracle Support only, and a security filter is provided if required. Find and Use the Right RDA Profile One problem of any tool / utility which covers a large range of products is knowing how to target it against only the products you wish to troubleshoot. RDA does not have a GUI. Nor does RDA have an intelligent mechanism for detecting and automatically collecting data only for those Oracle products installed. Instead, you have to tell RDA what to do. There is a mind-bogglingly large number of RDA data collection modules which you can configure RDA to use. It is easier, however, to set up RDA to use a "Profile". A profile consists of a list of data collection modules and predefined settings. As such, profiles can be used to diagnose a problem with a particular product or combination of products. How to run RDA with a profile? ( <rda> represents the command you selected to run RDA (for example, rda.pl, rda.cmd, rda.sh, and perl rda.pl).) 1. Use the embedded spreadsheet to find the RDA profile which is appropriate for your problem / chosen Oracle Fusion Middleware products. 2. Use the following command to perform the setup: <rda> -S -p <profile_name> 3. Run the data collection: <rda> If you want to perform setup and run in one go, then use a command such as the following: <rda> -vnSCRP -p <profile name> For more information, refer to: Remote Diagnostic Agent (RDA) 4 - Profile Manual Pages [ID 391983.1] Additional Hints / Tips: 1. Be careful! Profile names are case sensitive. 2. When profiles are not used, RDA considers all existing modules by default. For example, if you have downloaded RDA for the first time and run the command <rda> -S you will see prompts for every RDA collection module, many of which will be of no interest to you. Also, you may, in your haste to work through all the questions, forget to say "Yes" to the collection of data that is pertinent to your particular problem or product. Profiles avoid such tedium and help ensure the right data is collected at the first time of asking.

    Read the article

  • Must go through Windows Boot Loader to get to Grub

    - by Zach
    I just installed a fresh copy of Precise alongside Windows 7. I have two separate 750GB hard drives; /dev/sda holds the Windows partitions and /dev/sdb holds the Ubuntu partitions. Other than that, these are fresh installs of both Windows 7 and Ubuntu 12.04. Whenever I boot, GRUB doesn't load; instead it goes to a black screen with a single blinking (horizontal bar) cursor in the top right corner. However, if I boot, hit escape right as the BIOS/POST screen finishes up, see the Windows Boot Loader, and hit escape to make it go back to the BIOS screen, then after the BIOS screen GRUB shows up and everything functions normally; I can boot into Ubuntu or Win7. I don't want to have to do the Escape, Escape, Wait, Boot trick every time. I have no idea what would be wrong or what information I could give you guys to help diagnose. I have run a sudo update-grub and it found everything normally. I tried adding the nomodeset flag in the /etc/default/grub line GRUB_CMDLINE_LINUX_DEFAULT, which searching around made me think might work. Thoughts on what I could do to fix this? EDIT: I've tried changing the boot order so that both drives in the BIOS (both are labeled as "Internal HDD") have had a try booting first. I think the problem may be that every time I boot, the BIOS boot order is different... and I have to reset it. It seems to not be stable... but I'm not sure how to go about fixing that either. The machine has both traditional BIOS and UEFI. It came standard in "Legacy" mode, so it is currently set to boot through Legacy mode. I've reinstalled Ubuntu now, and now if I hit escape at the end of the BIOS/POST startup screen, it takes me to the GRUB menu. Otherwise it automatically loads Windows. It seems like GRUB is now the acting bootloader; it just doesn't automatically start unless I ask it to open a bootloader. In my other machines, it has always automatically started at the end of BIOS/POST. EDIT2: Using gparted, I just looked at my partitions, and it would seem that my linux-swap partition is currently flagged as the boot partition for my Ubuntu install. I currently only have 2 partitions: one of "ext4" with a mount point of "/" and flag " "; and the "linux-swap" with mount point " " and flag "boot." If I change the boot flag to be on "/," it does not reliably solve the problem. After 10 boots: 2 booted successfully to GRUB, 5 booted directly to Windows 7, and 3 booted to the black screen with the cursor and hung there. Further research makes me think this is an issue of the BIOS not reliably booting hard drives in the same order or not finding both hard drives. If I ask it to create a "boot menu" sometimes it has 2 entries for "Internal HDD," sometimes 1. Also the list it creates changes order every time I bring it up, so it is not following a consistent boot sequence. Will report back if this is not an issue with GRUB.

    Read the article

  • Perm SSIS Developer Urgently Required

    - by blakmk
      Job Role To provide dedicated data services support to the company, by designing, creating, maintaining and enhancing database objects, ensuring data quality, consistency and integrity. Migrating data from various sources to central SQL 2008 data warehouse will be the primary function. Migration of data from bespoke legacy database’s to SQL 2008 data warehouse. Understand key business requirements, Liaising with various aspects of the company. Create advanced transformations of data, with focus on data cleansing, redundant data and duplication. Creating complex business rules regarding data services, migration, Integrity and support (Best Practices). Experience ·         Minimum 3 year SSIS experience, in a project or BI Development role and involvement in at least 3 full ETL project life cycles, using the following methodologies and tools o    Excellent knowledge of ETL concepts including data migration & integrity, focusing on SSIS. o    Extensive experience with SQL 2005 products, SQL 2008 desirable. o    Working knowledge of SSRS and its integration with other BI products. o    Extensive knowledge of T-SQL, stored procedures, triggers (Table/Database), views, functions in particular coding and querying. o    Data cleansing and harmonisation. o    Understanding and knowledge of indexes, statistics and table structure. o    SQL Agent – Scheduling jobs, optimisation, multiple jobs, DTS. o    Troubleshoot, diagnose and tune database and physical server performance. o    Knowledge and understanding of locking, blocks, table and index design and SQL configuration. ·         Demonstrable ability to understand and analyse business processes. ·         Experience in creating business rules on best practices for data services. ·         Experience in working with, supporting and troubleshooting MS SQL servers running enterprise applications ·         Proven ability to work well within a team and liaise with other technical support staff such as networking administrators, system administrators and support engineers. ·         Ability to create formal documentation, work procedures, and service level agreements. ·         Ability to communicate technical issues at all levels including to a non technical audience. ·         Good working knowledge of MS Word, Excel, PowerPoint, Visio and Project.   Location Based in Crawley with possibility of some remote working Contact me for more info: http://sqlblogcasts.com/blogs/blakmk/contact.aspx      

    Read the article

  • Musings on the launch of SQL Monitor

    - by Phil Factor
    For several years, I was responsible for the smooth running of a large number of enterprise database servers. We ran a network monitoring tool that was primitive by today’s standards but which performed the useful function of polling every system, including all the Servers in my charge. It ran a configurable script for each service that you needed to monitor that was merely required to return one of a number of integer values. These integer values represented the pain level of the service, from 10 (“hurtin’ real bad”) to 1 (“Things is great”). Not only could you program the visual appearance of each server on the network diagram according to the value of the integer, but you could even opt to run a sound file. Very soon, we had a large TFT Screen, high on the wall of the server room, with every server represented by an icon, and a speaker next to it that would give out a series of grunts, groans, snores, shrieks and funeral marches, depending on the problem. One glance at the display, and you could dive in with iSQL/QA/SSMS and check what was going on with your favourite diagnostic tools. If you saw a server icon burst into flames on the screen or droop like a jelly, you dropped your mug of coffee to do it.  It was real fun, but I remember it more for the huge difference it made to have that real-time visibility into how your servers are performing. The management soon stopped making jokes about the real reason we wanted the TFT screen. (It rendered DVDs beautifully they said; particularly flesh-tints). If you are instantly alerted when things start to go wrong, then there was a good chance you could fix it before being alerted to the problem by the users of the system.  There is a world of difference between this sort of tool, one that gives whoever is ‘on watch’ in the server room the first warning of a potential problem on one of any number of servers, and the breed of tool that attempts to provide some sort of prosthetic DBA Brain. I like to get the early warning, to get the right information to help to diagnose a problem: No auto-fix, but just the information. I prefer to leave the task of ascertaining the exact cause of a problem to my own routines, custom code, intuition and forensic instincts. A simulated aircraft cockpit doesn’t do anything for me, especially before I know where I should be flying.  Time has moved on, and that TFT screen is now, with SQL Monitor, an iPad or any other mobile or static device that can support a browser. Rather than trying to reproduce the conceptual topology of the servers, it lists them in their groups so as to give a display that scales with the increasing number of databases you monitor.  It gives the history of the major events and trends for the servers. It gives the icons and colours that you can spot out of the corner of your eye, but goes on to give you just enough information in drill-down to give you a much clearer idea of where to look with your DBA tools and routines. It doesn't swamp you with information.  Whereas a few server and database-level problems are pretty easily fixed, others depend on judgement and experience to sort out.  Although the idea of an application that automates the bulk of a DBA’s skills is attractive to many, I can’t see it happening soon. SQL Server’s complexity increases faster than the panaceas can be created. 
In the meantime, I believe that the best way of helping DBAs is to make the monitoring process as simple and effective as possible, and provide the right sort of detail and ‘evidence’ to allow them to decide on the fix. In the end, it is still down to the skill of the DBA.
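    The check-script mechanism described above is simple enough to sketch. Assuming, as in the description, that the monitoring tool just runs a script per service and reads back an integer from 1 ("Things is great") to 10 ("hurtin' real bad"), a minimal probe might look like this (the TCP-connect probe, the port, and the thresholds are illustrative, not details of the original tool):

        import socket
        import sys
        import time

        def pain_level(host, port):
            # Probe the service with a TCP connect and map failure/latency onto the 1-10 scale.
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=2):
                    elapsed = time.monotonic() - start
            except OSError:
                return 10                          # unreachable: "hurtin' real bad"
            return 5 if elapsed > 1.0 else 1       # slow = middling pain, fast = "Things is great"

        if __name__ == "__main__":
            host = sys.argv[1] if len(sys.argv) > 1 else "localhost"
            print(pain_level(host, 1433))          # 1433 is the default SQL Server port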

    Read the article

  • Mastering snow and Java development at jDays in Gothenburg

    - by JavaCecilia
    Last weekend, I took the train from Stockholm to Gothenburg to attend and present at the new Java developer conference jDays. It was professionally arranged in the Swedish exhibition hall close to the amusement park Liseberg and we got a great deal out of the top-level presenters and hallway discussions. Understanding and Improving Your Java Process Our main purpose was to spread information on JVM and our monitoring tools for Java processes, so I held a crash course in the most important terms and concepts if you want to affect the performance of your Java process. From the beginning - the JVM specification to interpretation of heap usage graphs. For correct analysis, you also need to understand something about process memory - you need space for the Java heap (-Xms for initial size and -Xmx for max heap size), but the process memory also contain the thread stacks (to a size of -Xss), JVM internal data structures used for keeping track of Java objects on the heap, method compilation/optimization, native libraries, etc. If you get long pause times, make sure to monitor your application, see the allocation rate and frequency of pause times.My colleague Klara Ward then held a presentation on the Java Mission Control product, the profiling and diagnostics tools suite for HotSpot, coming soon. The room was packed and very appreciated, Klara demonstrated four different scenarios, e.g. how to diagnose and fix latencies due to lock contention for logging.My German colleague, OpenJDK ambassador Dalibor Topic travelled to Sweden to do the second keynote on "Make the Future Java". He let us in on the coming features and roadmaps of Java, now delivering major versions on a two-year schedule (Java 7 2011, Java 8 2013, etc). Also letting us in on where to download early versions of 8, to report problems early on. Software Development in teams Being a scout leader, I'm drilled in different team building and workshop techniques, creating strong groups - of course, I had to attend Henrik Berglund's session on building successful teams. He spoke about the importance of clear goals, autonomy and agreed processes. Thomas Sundberg ended the conference by doing live remote pair programming with Alex in Rumania and a concrete tips for people wanting to try it out (for local collaboration, remember to wash and change clothes). Memory Master Keynote The conference keynote was delivered by the Swedish memory master Mattias Ribbing, showing off by remembering the order of a deck of cards he'd seen once. He made it interactive by forcing the audience to learn a memory mastering technique of remembering ten ordered things by heart, asking us to shout out the order backwards and we made it! I desperately need this - bought the book, will get back on the subject. Continuous Delivery The most impressive presenter was Axel Fontaine on Continuous Delivery. Very well prepared slides with key images of his message and moved about the stage like a rock star. The topic is of course highly interesting, how to create an infrastructure enabling immediate feedback to developers and ability to release your product several times per day. Tomek Kaczanowski delivered a funny and useful presentation on good and bad tests, providing comic relief with poorly written tests and the useful rules of thumb how to rewrite them. To conclude, we had a great time and hope to see you at jDays next year :)

    Read the article

  • Terminal non-responsive on load, can't enter anything until CTRL+C

    - by Silver Light
    Hello! I have an issue with terminal in Ubuntu 10.04. When I launch it, it hangs, like this: I cannot do anything until I press CTRL+C: I cannot remember when this started. What can be wrong? Looks like teminal is loading or processing something each time it loads. How can I diagnose and solve this problem? EDIT: Here are the conents of ~/.bashrc: # ~/.bashrc: executed by bash(1) for non-login shells. # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc) # for examples # If not running interactively, don't do anything [ -z "$PS1" ] && return # don't put duplicate lines in the history. See bash(1) for more options # ... or force ignoredups and ignorespace HISTCONTROL=ignoredups:ignorespace # append to the history file, don't overwrite it shopt -s histappend # for setting history length see HISTSIZE and HISTFILESIZE in bash(1) HISTSIZE=1000 HISTFILESIZE=2000 # check the window size after each command and, if necessary, # update the values of LINES and COLUMNS. shopt -s checkwinsize # make less more friendly for non-text input files, see lesspipe(1) [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)" # set variable identifying the chroot you work in (used in the prompt below) if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then debian_chroot=$(cat /etc/debian_chroot) fi # set a fancy prompt (non-color, unless we know we "want" color) case "$TERM" in xterm-color) color_prompt=yes;; esac # uncomment for a colored prompt, if the terminal has the capability; turned # off by default to not distract the user: the focus in a terminal window # should be on the output of commands, not on the prompt #force_color_prompt=yes if [ -n "$force_color_prompt" ]; then if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then # We have color support; assume it's compliant with Ecma-48 # (ISO/IEC-6429). (Lack of such support is extremely rare, and such # a case would tend to support setf rather than setaf.) color_prompt=yes else color_prompt= fi fi if [ "$color_prompt" = yes ]; then PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ ' else PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ ' fi unset color_prompt force_color_prompt # If this is an xterm set the title to user@host:dir case "$TERM" in xterm*|rxvt*) PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1" ;; *) ;; esac # enable color support of ls and also add handy aliases if [ -x /usr/bin/dircolors ]; then test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)" alias ls='ls --color=auto' #alias dir='dir --color=auto' #alias vdir='vdir --color=auto' alias grep='grep --color=auto' alias fgrep='fgrep --color=auto' alias egrep='egrep --color=auto' fi # some more ls aliases alias ll='ls -alF' alias la='ls -A' alias l='ls -CF' # Add an "alert" alias for long running commands. Use like so: # sleep 10; alert alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"' # Alias definitions. # You may want to put all your additions into a separate file like # ~/.bash_aliases, instead of adding them here directly. # See /usr/share/doc/bash-doc/examples in the bash-doc package. if [ -f ~/.bash_aliases ]; then . ~/.bash_aliases fi # enable programmable completion features (you don't need to enable # this, if it's already enabled in /etc/bash.bashrc and /etc/profile # sources /etc/bash.bashrc). 
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then . /etc/bash_completion fi # Source .profile if [ -f ~/.profile ]; then . ~/.profile fi Setting -x at the beginning showed me that it tries to repeat this without stopping: +++++++++++++++++++ '[' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' '!=' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' ']' +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' +++++++++++++++++++ line=' acroread gpdf xpdf' +++++++++++++++++++ list=("${list[@]}" $line) +++++++++++++++++++ read line

    Read the article

  • Don't Throw Duplicate Exceptions

    In your code, you'll sometimes have to write code that validates input using a variety of checks.  Assuming you haven't embraced AOP and done everything with attributes, it's likely that your defensive coding is going to look something like this: public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new InvalidArgumentException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }   // Do Real Work } Do you see a problem here?  Here's the deal: Exceptions should be meaningful.  They have value at a number of levels: In the code, throwing an exception lets the developer know that there is an unsupported condition here. In calling code, different types of exceptions may be handled differently. At runtime, logging of exceptions provides a valuable diagnostic tool. It's this last reason I want to focus on.  If you find yourself literally throwing the exact same exception in more than one location within a given method, stop.  The stack trace for such an exception is likely going to be identical regardless of which path of execution led to the exception being thrown.  When that happens, you or whomever is debugging the problem will have to guess which exception was thrown.  Guessing is a great way to introduce additional problems and/or greatly increase the amount of time required to properly diagnose and correct any bugs related to this behavior. Don't Guess - Be Specific When throwing an exception from multiple code paths within the code, be specific.  Virtually every exception allows a custom message; use it and ensure each case is unique.  If the exception might be handled differently by the caller, then consider implementing a new custom exception type.  Also, don't automatically think that you can improve the code by collapsing the if-then logic into a single check with short-circuiting (e.g. if(x == null || !x.IsValid()) ); that will guarantee that you can't put different information into the message as easily as constructing the exception separately in each case. The code above might be refactored like so:   public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new ArgumentNullException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }   // Do Real Work } In this case it's taking advantage of the fact that there is already an ArgumentNullException in the framework, but if you didn't have an IsValid() method and were doing validation on your own, it might look like this: public void Foo(SomeClass someArgument) { if(someArgument.Quantity < 0) { throw new InvalidArgumentException("someArgument", "Quantity cannot be less than 0. Quantity: " + someArgument.Quantity); } if(someArgument.Quantity > 100) { throw new InvalidArgumentException("someArgument", "SomeArgument.Quantity cannot exceed 100. Quantity: " + someArgument.Quantity); }   // Do Real Work }   Note that in this last example, I'm throwing the same exception type in each case, but with different Message values.  I'm also making sure to include the value that resulted in the exception, as this can be extremely useful for debugging.  (How many times have you wished NullReferenceException would tell you the name of the variable it was trying to reference?) Don't add work to those who will follow after you to maintain your application (especially since it's likely to be you).
Be specific with your exception messages; follow DRY when throwing exceptions within a given method by throwing unique exceptions for each interesting case of invalid state.

    Read the article

  • HERMES Medical Solutions Helps Save Lives with MySQL

    - by Bertrand Matthelié
    HERMES Medical Solutions was established in 1976 in Stockholm, Sweden, and is a leading innovator in medical imaging hardware/software products for health care facilities worldwide. HERMES delivers a plethora of different medical imaging solutions to optimize hospital workflow. HERMES advanced algorithms make it possible to detect the smallest changes under therapies important and necessary to optimize different therapeutic methods and doses. Challenges Fighting illness & disease requires state-of-the-art imaging modalities and software in order to diagnose accurately, stage disease appropriately and select the best treatment available. Selecting and implementing a new database platform that would deliver the needed performance, reliability, security and flexibility required by the high-end medical solutions offered by HERMES. Solution Decision to migrate from in-house database to an embedded SQL database powering the HERMES products, delivered either as software, integrated hardware and software solutions, or via the cloud in a software-as-a-service configuration. Evaluation of several databases and selection of MySQL based on its high performance, ease of use and integration, and low Total Cost of Ownership. On average, between 4 and 12 Terabytes of data are stored in MySQL databases underpinning the HERMES solutions. The data generated by each medical study is indeed stored during 10 years or more after the treatment was performed. MySQL-based HERMES systems also allow doctors worldwide to conduct new drug research projects leveraging the large amount of medical data collected. Hospitals and other HERMES customers worldwide highly value the “zero administration” capabilities and reliability of MySQL, enabling them to perform medical analysis without any downtime. Relying on MySQL as their embedded database, the HERMES team has been able to increase their focus on further developing their clinical applications. HERMES Medical Solutions could leverage the Oracle Financing payment plan to spread its investment over time and make the MySQL choice even more valuable. “MySQL has proven to be an excellent database choice for us.
We offer high-end medical solutions, and MySQL delivers the reliability, security and performance such solutions require.” Jan Bertling, CEO.

    Read the article

  • Unable to start SQL Server Instance 2008 R2 - DB file corrupt

    - by Velu
    I was not able to start the SQL Server 2008 R2 production DB instance. The error in the log file reads: "The log scan number passed to log scan in database 'master' is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication." After reading several posts I realized that my master DB file is corrupted, so I copied the Master.mdf and Masterlog.ldf files from the template location to my database data folder, i.e. from C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\Templates to D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA. (Note: the same error occurs when I copy all of the system DB files: Master, MasterLog, MSDBData, MSDBLog, Model and ModelLog.) When I now run the MSSQLSERVER instance, a different problem occurs. My server only has C: and D: drives; there is no E: drive. How can I override the paths in the errors below?
    Error log:
        2012-10-24 02:51:12.79 spid5s Error: 17204, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FCB::Open failed: Could not open file e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf for file number 1. OS error: 3(The system cannot find the path specified.).
        2012-10-24 02:51:12.79 spid5s Error: 5120, Severity: 16, State: 101.
        2012-10-24 02:51:12.79 spid5s Unable to open the physical file "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBData.mdf". Operating system error 3: "3(The system cannot find the path specified.)".
        2012-10-24 02:51:12.79 spid5s Error: 17207, Severity: 16, State: 1.
        2012-10-24 02:51:12.79 spid5s FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf'. Diagnose and correct the operating system error, and retry the operation.
        2012-10-24 02:51:12.79 spid5s File activation failure. The physical file name "e:\sql10_main_t.obj.x86fre\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf" may be incorrect.
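    A master database restored from the Templates folder still points msdb and model at the build-time paths baked into it (the e:\sql10_main_t... locations in the log), which is why the instance now looks for files on a drive that does not exist. One commonly documented way out, sketched below under the assumption that the default logical file names (MSDBData/MSDBLog, modeldev/modellog) are in use and that the physical files really are in D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA, is to start the instance with only master online (trace flag 3608) and repoint the other system databases:
        REM elevated command prompt: start the instance with master only
        NET START MSSQLSERVER /f /T3608
        sqlcmd -E -S .
        -- inside sqlcmd: point msdb and model at the real files
        ALTER DATABASE msdb  MODIFY FILE (NAME = MSDBData, FILENAME = 'D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\MSDBData.mdf');
        ALTER DATABASE msdb  MODIFY FILE (NAME = MSDBLog,  FILENAME = 'D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\MSDBLog.ldf');
        ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILENAME = 'D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\model.mdf');
        ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = 'D:\MSSQL\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\modellog.ldf');
        GO
        EXIT
        REM then restart the service normally
        NET STOP MSSQLSERVER
        NET START MSSQLSERVER
    Repeat the same pattern for tempdb (tempdev/templog) if it is also reported with a bad path. If the instance still will not start, rebuilding the system databases from the installation media (setup.exe with /ACTION=REBUILDDATABASE) and then restoring your backups is the supported fallback.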

    Read the article

  • Postfix SMTP auth not working with virtual mailboxes + SASL + Courier userdb

    - by Greg K
    So I've read a variety of tutorials and how-to's and I'm struggling to make sense of how to get SMTP auth working with virtual mailboxes in Postfix. I used this Ubuntu tutorial to get set up. I'm using Courier-IMAP and POP3 for reading mail which seems to be working without issue. However, the credentials used to read a mailbox are not working for SMTP. I can see from /var/log/auth.log that PAM is being used, does this require a UNIX user account to work? As I'm using virtual mailboxes to avoid creating user accounts.
    /var/log/auth.log:
        li305-246 saslauthd[22856]: DEBUG: auth_pam: pam_authenticate failed: Authentication failure
        li305-246 saslauthd[22856]: do_auth : auth failure: [user=fred] [service=smtp] [realm=] [mech=pam] [reason=PAM auth error]
    /var/log/mail.log:
        li305-246 postfix/smtpd[27091]: setting up TLS connection from mail-pb0-f43.google.com[209.85.160.43]
        li305-246 postfix/smtpd[27091]: Anonymous TLS connection established from mail-pb0-f43.google.com[209.85.160.43]: TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)
        li305-246 postfix/smtpd[27091]: warning: SASL authentication failure: Password verification failed
        li305-246 postfix/smtpd[27091]: warning: mail-pb0-f43.google.com[209.85.160.43]: SASL PLAIN authentication failed: authentication failure
    I've created accounts in userdb as per this tutorial. Does Postfix also use authuserdb? What debug information is needed to help diagnose my issue?
    main.cf:
        # TLS parameters
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # SMTP parameters
        smtpd_sasl_local_domain =
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtp_tls_security_level = may
        smtpd_tls_security_level = may
        smtpd_tls_auth_only = no
        smtp_tls_note_starttls_offer = yes
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom
    /etc/postfix/sasl/smtpd.conf:
        pwcheck_method: saslauthd
        mech_list: plain login
    /etc/default/saslauthd:
        START=yes
        PWDIR="/var/spool/postfix/var/run/saslauthd"
        PARAMS="-m ${PWDIR}"
        PIDFILE="${PWDIR}/saslauthd.pid"
        DESC="SASL Authentication Daemon"
        NAME="saslauthd"
        MECHANISMS="pam"
        MECH_OPTIONS=""
        THREADS=5
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"
    /etc/courier/authdaemonrc:
        authmodulelist="authuserdb"
    I've only modified one line in authdaemonrc and restarted the service as per this tutorial. I've added accounts to /etc/courier/userdb via userdb and userdbpw and run makeuserdb as per the tutorial.
    SOLVED: Thanks to Jenny D for suggesting use of rimap to auth against the localhost IMAP server (which reads userdb credentials). I updated /etc/default/saslauthd to start saslauthd correctly (this page was useful):
        MECHANISMS="rimap"
        MECH_OPTIONS="localhost"
        THREADS=0
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"
    After doing this I got the following error in /var/log/auth.log:
        li305-246 saslauthd[28093]: auth_rimap: unexpected response to auth request: * BYE [ALERT] Fatal error: Account's mailbox directory is not owned by the correct uid or gid:
        li305-246 saslauthd[28093]: do_auth : auth failure: [user=fred] [service=smtp] [realm=] [mech=rimap] [reason=[ALERT] Unexpected response from remote authentication server]
    This blog post detailed a solution: set IMAP_MAILBOX_SANITY_CHECK=0 in /etc/courier/imapd, then restart the Courier and saslauthd daemons for the config changes to take effect:
        sudo /etc/init.d/courier-imap restart
        sudo /etc/init.d/courier-authdaemon restart
        sudo /etc/init.d/saslauthd restart
    Watch /var/log/auth.log while trying to send email. Hopefully you're good!
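    If you hit the same PAM failures, a quick way to check whether saslauthd accepts the credentials independently of Postfix is the testsaslauthd utility shipped with the SASL tools. A minimal sketch, assuming the socket directory from the OPTIONS line above and the test user fred (substitute a real mailbox and password):
        testsaslauthd -u fred -p 'secret' -s smtp -f /var/spool/postfix/var/run/saslauthd/mux
    If this returns OK but Postfix still logs authentication failures, the problem is usually the chrooted socket path or the settings in /etc/postfix/sasl/smtpd.conf rather than the credentials themselves.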

    Read the article

  • Windows 7 x64 installation freezes on new PC build

    - by jhsowter
    Symptoms: While attempting to install Windows 7 (64 bit) on my new PC build, it freezes, usually at the point where it is expanding the Windows image, but it has frozen as early as accepting the licence agreement and as late as just after the first restart. My specs are at the bottom of the post.
    So far I have tried the following to identify the problem, in rough chronological order:
    - Tried different hard drives with different SATA cables. Same symptoms. (I later used a different computer to install Windows on the same hard drive with no problems.)
    - Tried the RAM in different slots, and tried one RAM stick instead of two. Same symptoms.
    - Updated the BIOS to 1.60. Same symptoms.
    - Ran Memtest86+ with the RAM in dual channel. It passed about 6 times when I left it running overnight.
    - Used USB to install Windows instead of an optical drive. Same symptoms.
    - Changed the SATA configuration from AHCI to IDE. Same symptoms.
    - Tried various different SATA ports. Same symptoms.
    - Updated the BIOS to 1.70. Same symptoms.
    - Saw that the RAM did not list my motherboard as supported, even though the motherboard did list the RAM as supported, so I tried some Kingston DDR3 1333MHz RAM instead. Same symptoms.
    Other (possibly) pertinent information: My CPU idles at about 30 °C; I can't tell what it gets to when it's working. When I installed the CPU, the lever which locks the CPU in place took quite a bit of force to pull down. I didn't just yank it down without rechecking that the CPU was seated properly (I checked about 5 times), but it does seem unusual, and I wonder whether a badly seated CPU would show these symptoms.
    The Question: I am out of ideas and don't know how to diagnose any further. I suppose either the motherboard or the CPU must be the problem, and I am on the verge of taking it to a specialist. How should I proceed from here? Is there anything I can rule out as the source of the symptoms I am seeing?
    My Specs:
    - CPU: Intel i5 3570k
    - RAM: G.Skill RipjawsX 8GB kit
    - HDD: a single 3.5" 500GB SATA or a 160GB 2.5" SATA (at different times and sometimes together, but no RAID or anything)
    - MB: ASRock Extreme4 Z77
    - PSU: Silverstone Strider Plus 600W ST60F-P
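    Before handing the build to a specialist, two low-cost checks can sometimes narrow a setup freeze down (a sketch, not a definitive diagnosis; the drive letters are assumptions to adjust). During setup, Shift+F10 opens a command prompt, and the setup logs show whether Windows died on a specific file or operation; separately, hashing the install media on another machine rules out a corrupt image:
        REM during setup, after pressing Shift+F10
        type X:\windows\panther\setuperr.log
        type X:\windows\panther\setupact.log
        REM on a working machine, verify the media (compare against the hash published for your ISO)
        certutil -hashfile E:\sources\install.wim SHA1
    If setup stalls on the same file every time, suspect the media or the USB/SATA controller path; if the log simply stops mid-operation, that points more towards the CPU, motherboard or power.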

    Read the article

  • Troubleshooting major performance issue: Is culprit Intel RST, Hard drive, or something else?

    - by Sean Killeen
    The Setup: I have the following components that come into play in this situation:
    - ASUS P8Z68 V/PRO motherboard
    - a RAID1 configuration (1 x 1TB drive, 1 x 2TB drive; I explain below), accelerated with an SSD using Intel's RST software, plus a 1TB drive standing by as a spare
    - Core i7 2600k
    - 32 GB RAM
    - Windows 8.1
    This box was designed to be a beast, and until just recently it was very good at being just that.
    What's Happening: The system has slowed to a crawl whenever it touches the disk. Things appear to work at normal speed when dealing with memory; for example, typing this is fine, but saving it to disk from Notepad gave me a 5-7 second pause when clicking Save. The disks appear to be at 100% all the time (the disk access light on the PC is solidly on, not even flashing), yet in ProcExp the disk appears to be barely utilized at all, and Intel RST reports that everything is fine.
    Other Details: Prior to this happening, RST had reported that my drives were failing (one went bad, one was throwing SMART events). This made sense; they were at the tail end of their warranty and the PC is on almost all the time. I RMA'd the drives via Seagate. In the meantime, I'd purchased a 2TB drive because I didn't realize that the 1TB drives were under warranty. I figured I'd replace the other 1TB drive with another 2TB when it died, but then discovered the warranty. AFAIK, I haven't done any major updates since 8.1, and it worked fine after those.
    Question(s):
    - How can I troubleshoot this? What is the best way to figure out why the disks are maxed out despite the OS reporting barely any disk usage and that everything is OK?
    - Given the failures I describe above, is it possible that the problem could be the I/O on the motherboard itself? If so, how would I even be able to diagnose it?
    - I'm betting the drives that Seagate gave me are refurbished (I didn't think to look; that's dumb). Is it possible that the same model drive, refurbished, could somehow cause this?
    - In terms of how RAID1 works, is it possible that one drive is "falling behind" somehow, and that the RAID1 is constantly trying to fix the mirroring? If so, it seems like Intel RST would report on it, but I wanted to consider it as an option.
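    One way to see where the latency actually sits is to watch the standard PhysicalDisk counters from PowerShell while reproducing a slow save; a sketch (which counter instance corresponds to the RST volume is an assumption you will need to confirm in the output):
        # sample per-disk latency and queue depth every 2 seconds, 15 times
        Get-Counter -Counter '\PhysicalDisk(*)\Avg. Disk sec/Transfer', '\PhysicalDisk(*)\Current Disk Queue Length' -SampleInterval 2 -MaxSamples 15
    Sustained Avg. Disk sec/Transfer in the hundreds of milliseconds under a trivial workload points at a drive or the RST cache volume rather than at Windows. It is also worth reading SMART from each RAID1 member individually (with the drive vendor's tool or smartmontools), since a mirror member that is internally retrying reads can stall the whole volume without the RAID software flagging it as failed.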

    Read the article

  • Are file access times not properly maintained in Mac OS X?

    - by Ether
    I'm trying to determine how file access times are maintained by default in Mac OS X, as I'm trying to diagnose some odd behaviour I'm seeing in a new MBP Unibody (running Snow Leopard, 10.6.2).
    The symptoms (drilling down to the specific behaviour that seems to be causing the issue):
    - mutt is unable to switch to mailboxes which have recently received new mail
    - mail is delivered by procmail, which updates the mtime of the mbox folder it is updating, but does not alter the atime (this is how new mail detection works: by comparing atime to mtime)
    - however, both the mtime and atime of the mbox file are getting updated
    Through testing, it does not appear that atimes can be set separately in the filesystem:
        : [ether@tequila ~]$; touch test
        : [ether@tequila ~]$; touch -m -t 200801010000 test2
        : [ether@tequila ~]$; touch -a -t 200801010000 test3
        : [ether@tequila ~]$; ls -l test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Jan 1 2008 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3
        : [ether@tequila ~]$; ls -lu test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Dec 30 11:43 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3
    The test2 file is created with an old mtime, and the atime is set to now (as it is a new file), which is correct. However, test3 is created with an old atime, which is not properly set on the file. To be sure this is not just behaviour seen with new files, let's modify an old file:
        : [ether@tequila ~]$; touch -a -t 200801010000 test
        : [ether@tequila ~]$; ls -l test
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        : [ether@tequila ~]$; ls -lu test
        -rw------- 1 ether staff 0 Dec 30 11:45 test
    So it would seem that atimes cannot be set explicitly (the atime is always reset to "now" when either mtime or atime modifications are submitted). Is this something inherent to the filesystem itself, is it something that can be changed, or am I totally crazy and looking in the wrong place?
    PS. The output of mount is:
        : [ether@tequila ~]$; mount
        /dev/disk0s2 on / (hfs, local, journaled)
        devfs on /dev (devfs, local, nobrowse)
        map -hosts on /net (autofs, nosuid, automounted, nobrowse)
        map auto_home on /home (autofs, automounted, nobrowse)
    ...and Disk Utility says that the drive is of type "Mac OS Extended (Journaled)".
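    To separate tooling from filesystem behaviour before blaming HFS+, it can help to read the timestamps with stat rather than ls, and to check whether a plain read bumps the atime at all; a small sketch (the test file name is arbitrary):
        # view access and modification times explicitly
        stat -f 'atime: %Sa' test
        stat -f 'mtime: %Sm' test
        # does simply reading the file update its atime?
        cat test > /dev/null
        stat -f 'atime: %Sa' test
    The mount output above shows no noatime option on the root volume, so if the atime still refuses to move (or to be set with touch -a), the behaviour is coming from the filesystem or kernel rather than from mutt or procmail.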

    Read the article

  • How can I verify that my SSD is performing as it should?

    - by Jon Skeet
    EDIT: Okay, so I've no idea what caused the change, but after trying loads of different things to work out what was wrong, I've rerun the WEI (about the 4th time in total) and the score has jumped to a far more respectable 7.3. I'm going to leave well alone now :)
    I've got a brand new 256GB SSD (Crucial CT256M225) which should have stellar performance. However, on my (also brand new) Dell Studio 1557 with Windows 7 Professional 64 bit, it's only giving a performance index of 5.9. I realise the performance index should be taken with a bit of a pinch of salt, but I wonder whether something's wrong. Given this paragraph from this MSDN article on Windows 7, I'd expect to see a high 6.x or possibly a 7.x figure:
    "In Windows 7, there are new random read, random write and flush assessments. Better SSDs can score above 6.5 all the way to 7.9. To be included in that range, an SSD has to have outstanding random read rates and be resilient to flush and random write workloads. In the Beta timeframe of Windows 7, there was a capping of scores at 1.9, 2.9 or the like if a disk (SSD or HDD) didn't perform adequately when confronted with our random write and flush assessments. Feedback on this was pretty consistent, with most feeling the level of capping to be excessive. As a result, we now simply restrict SSDs with performance issues from joining the newly added 6.0+ and 7.0+ ranges. SSDs that are not solid performers across all assessments effectively get scored in a manner similar to what they would have been in Windows Vista, gaining no Win7 boost for great random read performance."
    How can I diagnose any performance issues with either the disk or how Windows 7 is handling it? Are there any particularly good tools you'd recommend?
    One note of curiosity: I couldn't install the firmware update (to 1916) until I changed my BIOS handling of the drive to ATA mode; after installing the firmware I tried to boot the Windows installation DVD, but that only worked after turning it back to AHCI mode (which I've left it in). Installing Windows 7 took longer than I expected: it sat at the "Windows is loading files" prompt for a very long time, and likewise it was on "Expanding files (0%)" for a long time. Since installation it's been fine, though I don't know whether it's really providing quite as beefy performance as it should.
    EDIT: My netbook with the 64GB equivalent drive has a performance index of 6.6...
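    For a second opinion that doesn't depend on the WEI score, the built-in winsat tool can benchmark the disk directly from an elevated command prompt (the drive letter below is an assumption), and comparing its sequential and random figures against the drive's rated specs usually shows whether the SSD, the driver, or the AHCI setup is the bottleneck:
        winsat disk -seq -read -drive c
        winsat disk -ran -read -drive c
        winsat disk -ran -write -drive c
    Third-party tools such as CrystalDiskMark or AS SSD Benchmark give similar numbers with more detail on small random I/O. It is also worth confirming in Device Manager that the SATA controller is using the AHCI driver rather than a legacy IDE driver, since that alone can drag random read/write scores down.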

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25  | Next Page >