Search Results

Search found 46813 results on 1873 pages for 'system shutdown'.


  • SQL SERVER – MSQL_XP – Wait Type – Day 20 of 28

    - by pinaldave
    In this blog post, I am going to discuss something from my field experience. While consulting, I have seen various wait types, but one customer, who used SQL Server for all of his operations, had an interesting issue with one particular wait type. This customer had more than 100 SQL Server instances running, and across the whole environment MSQL_XP was the most frequent wait type. While running sp_who2 and other diagnostic queries, I could not immediately figure out what the issue was, because no query with that wait type was anywhere to be found. After a day of research, I was relieved to find that the solution was very easy to figure out. Let us continue discussing this wait type.

    From Books Online: "MSQL_XP occurs when a task is waiting for an extended stored procedure to end. SQL Server uses this wait state to detect potential MARS application deadlocks. The wait stops when the extended stored procedure call ends."

    MSQL_XP explanation: This wait type is created by extended stored procedures. Extended stored procedures are executed within SQL Server; however, SQL Server has no control over them. Unless you know what the code of the extended stored procedure is doing, it is impossible to understand why this wait type is accumulating.

    Reducing MSQL_XP waits: As discussed, it is hard to understand an extended stored procedure when its code is not available. In the scenario described at the beginning of this post, our client was using a third-party backup tool, and that tool used an extended stored procedure. After we learned that this wait type was coming from the backup tool's extended stored procedure, we contacted the vendor's technical team. The vendor admitted that the code was not optimal in some places, and within that day they provided a patch. Once the updated version was installed, the issue disappeared: the wait statistics across all 100+ SQL Server instances showed no more significant MSQL_XP waits. In simpler terms, you must first identify which extended stored procedure is creating the MSQL_XP wait, and then try to get in touch with the creator of the procedure so you can help them optimize the code.

    If you have encountered this MSQL_XP wait type, I encourage you to write in about how you managed it. Please do not mention the name of the vendor in your comment, as I will not approve it; the focus of this blog post is to understand the wait type, not to talk about others. Read all the posts in the Wait Types and Queues series.

    Note: The information presented here is from my experience, and I in no way claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
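
    To gauge whether MSQL_XP matters on your own instance, the wait statistics DMV is the natural first stop. The query below is a minimal sketch of my own (the ordering and the percentage column are illustrative, not from the original diagnosis); look for MSQL_XP near the top of the output:

        -- How big a share of total wait time does each wait type hold?
        SELECT wait_type,
               wait_time_ms,
               waiting_tasks_count,
               100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct_of_total
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;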

    Read the article

  • Get Oracle Linux Certified at Much Reduced Price

    - by Antoinette O'Sullivan
    You have already heard the great news that you can now prove your knowledge of Oracle Linux 5 and 6 with the new Oracle Certified Associate, Oracle Linux 5 and 6 System Administrator exam. Until December 21st, 2013, this exam is in its beta phase, so you can get a fully-fledged certification at a much reduced price; for example, $50 in the United States or 39 euros in the euro zone.

    Establishing What You Need to Know: Your first step is to click on the Exam Topics tab on the certification page. You will see a list of topics that you will be tested on during the certification exam. These are the areas in which you need to improve your knowledge, if you are not already an expert.

    Registering for a Certification Exam: On the certification page, click on Register for this Exam. The Pearson VUE site guides you through signing up for an event at a date and location to suit you.

    Preparing to Take an Exam: On the certification page, click on the Exam Preparation tab. This indicates the recommended training that can help you prepare to sit the exam. The recommended training for this certification is the Oracle Linux System Administration course. You can take this very popular 5-day live instructor-led course as a:

    - Live Virtual Event: Take the training from your own desk, no travel required. Choose from a selection of events already on the schedule to suit different time zones.
    - In-Class: Travel to an education center to take this class. Below is a selection of events already on the schedule.

        Location | Date | Delivery Language
        Brussels, Belgium | 18 November 2013 | English
        London, England | 16 December 2013 | English
        Manchester, England | 27 January 2014 | English
        Reading, England | 12 May 2014 | English
        Milan, Italy | 31 March 2014 | Italian
        Rome, Italy | 10 February 2014 | Italian
        Utrecht, Netherlands | 18 November 2013 | Dutch
        Warsaw, Poland | 9 December 2013 | Polish
        Bucharest, Romania | 20 January 2014 | Romanian
        Ankara, Turkey | 12 January 2014 | Turkish
        Istanbul, Turkey | 16 December 2013 | Turkish
        Panjim, India | 4 November 2013 | English
        Jakarta, Indonesia | 9 December 2013 | English
        Kuala Lumpur, Malaysia | 25 November 2013 | English
        Makati City, Philippines | 11 November 2013 | English
        Singapore | 25 November 2013 | English
        Bangkok, Thailand | 11 November 2013 | English
        Casablanca, Morocco | 16 December 2013 | English
        Muscat, Oman | 2 March 2014 | English
        Johannesburg, South Africa | 17 February 2014 | English
        Tunis, Tunisia | 31 March 2014 | French
        Canberra, Australia | 25 November 2013 | English
        Melbourne, Australia | 19 May 2014 | English
        Sydney, Australia | 20 January 2014 | English
        Mississauga, Canada | 24 February 2014 | English
        Ottawa, Canada | 28 April 2014 | English
        Belmont, CA, United States | 10 February 2014 | English
        Irvine, CA, United States | 12 May 2014 | English
        San Francisco, CA, United States | 18 November 2013 | English
        Chicago, IL, United States | 14 April 2014 | English
        Cambridge, MA, United States | 18 November 2013 | English
        Roseville, MA, United States | 2 December 2013 | English
        Edison, NJ, United States | 10 March 2014 | English
        Pittsburg, PA, United States | 9 December 2013 | English
        Reston, VA, United States | 13 January 2014 | English

    For more information on the Oracle Linux curriculum, go to http://oracle.com/education/linux.

    Read the article

  • Oracle Database 11gR2 11.2.0.3 Certified with E-Business Suite on Windows

    - by John Abraham
    As a follow-up to our original certification announcement, Oracle Database 11g Release 2 (11.2.0.3) is now certified with Oracle E-Business Suite Release 11i and Release 12 on the following Microsoft Windows operating systems:

        Release 12.1 (12.1.1 and higher): Microsoft Windows Server (32-bit) (2003, 2008); Microsoft Windows x64 (64-bit) (2003 [1], 2008 [1], 2008 R2 [2])
        Release 12.0 (12.0.4 and higher): Microsoft Windows Server (32-bit) (2003); Microsoft Windows x64 (64-bit) (2003, 2008) [1]
        Release 11i (11.5.10.2 + ATG PF.H RUP 6 and higher): Microsoft Windows Server (32-bit) (2003, 2008 [1]); Microsoft Windows x64 (64-bit) (2003, 2008, 2008 R2) [1]

    Notes: [1] This OS is a 'database tier only' or 'split tier configuration' platform where the application tier must be on a fully certified E-Business Suite platform. [2] This OS is a 'database tier only' platform for Release 11i. For 12.1.1 or higher, it is also supported on the application tier via the migration process outlined in My Oracle Support Document 1188535.1.

    Pending Certification: E-Business Suite 12.0 with 11.2.0.3 Split Tier Certification on Microsoft Windows x64 (64-bit) (2008 R2) is in progress and will be announced separately.

    This announcement for Oracle E-Business Suite 11i and R12 includes: Real Application Clusters (RAC); Oracle Database Vault; Transparent Data Encryption (TDE) Column Encryption; TDE Tablespace Encryption; Advanced Security Option (ASO)/Advanced Networking Option (ANO); the Export/Import process for Oracle E-Business Suite Release 11i and Release 12 database instances; and the Transportable Database and Transportable Tablespaces data migration processes for Oracle E-Business Suite Release 11i and Release 12.

    References:
        MOS Document 881505.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0)
        MOS Document 1058763.1 - Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0)
        MOS Document 1091086.1 - Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2
        MOS Document 1091083.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2
        MOS Document 216205.1 - Database Initialization Parameters for Oracle E-Business Suite 11i
        MOS Document 396009.1 - Database Initialization Parameters for Oracle Applications Release 12
        MOS Document 823586.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i
        MOS Document 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12
        MOS Document 403294.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 11i
        MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12
        MOS Document 828223.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 11i
        MOS Document 828229.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 12
        MOS Document 391248.1 - Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option
        MOS Document 557738.1 - Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
        MOS Document 741818.1 - Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
        MOS Document 1366265.1 - Using Transportable Tablespaces to Migrate Oracle Applications 11i Using Oracle Database 11g Release 2
        MOS Document 1311487.1 - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2
        MOS Document 729309.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 11i Using Oracle Database 10g Release 2 or 11g
        MOS Document 734763.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 10g Release 2 or 11g
        MOS Document 1188535.1 - Migrating Oracle E-Business Suite R12 to Microsoft Windows Server 2008 R2

    Please also review the platform-specific Oracle Database Installation Guides for operating system and other prerequisites.

    Related Articles: Database 11.2.0.2 Certified with EBS R12 on IBM: Linux on System z; EBS R12 Certified with Database 11gR2 on SLES 11; 11gR2 11.2.0.3 Database Certified with E-Business Suite

    Read the article

  • Why is my machine unable to mount my SMB drives ("CIFS VFS: Error connecting to socket. Aborting operation", return code -115)?

    - by downbeat
    I have a machine running Precise (12.04 x64), and I cannot mount my SMB drives (I have three; we'll call them public, private and download). It used to work a week or two ago, and I didn't touch fstab! The machine hosting the shares is a commercial NAS, and I'm not seeing anything that would indicate it's an issue with the NAS. I have an older machine which I updated to Precise at the same time (both fresh installs, not dist-upgrades), so it should have a very similar configuration; it is not having any problems. I am not having problems on Windows machines/partitions either, only on one of my Precise machines. The two machines use identical entries in fstab and identical /etc/samba/smb.conf files. I don't think I've ever changed smb.conf (it has never mattered before). My fstab entries all basically look like this:

        //10.1.1.111/public /media/public cifs credentials=/home/downbeat/.credentials,iocharset=utf8,uid=downbeat,gid=downbeat,file_mode=0644,dir_mode=0755 0 0

    Here's the dmesg output on boot:

        [ 51.162198] CIFS VFS: Error connecting to socket. Aborting operation
        [ 51.162369] CIFS VFS: cifs_mount failed w/return code = -115
        [ 51.194106] CIFS VFS: Error connecting to socket. Aborting operation
        [ 51.194250] CIFS VFS: cifs_mount failed w/return code = -115
        [ 51.198120] CIFS VFS: Error connecting to socket. Aborting operation
        [ 51.198243] CIFS VFS: cifs_mount failed w/return code = -115

    There are no other errors in the dmesg output. Originally, when I ran 'testparm -s', the output contained these lines:

        ERROR: lock directory /var/run/samba does not exist
        ERROR: pid directory /var/run/samba does not exist

    Here are the Samba-related packages I have installed:

        $ dpkg --list | grep -i samba
        ii libpam-winbind 2:3.6.3-2ubuntu2.3 Samba nameservice and authentication integration plugins
        ii libwbclient0 2:3.6.3-2ubuntu2.3 Samba winbind client library
        ii nautilus-share 0.7.3-1ubuntu2 Nautilus extension to share folder using Samba
        ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient)
        ii samba-common 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client
        ii samba-common-bin 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client
        ii winbind 2:3.6.3-2ubuntu2.3 Samba nameservice integration server

        $ dpkg --list | grep -i smb
        ii dmidecode 2.11-4 SMBIOS/DMI table decoder
        ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers
        ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient)
        ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix
        ii smbfs 2:5.1-1ubuntu1 Common Internet File System utilities - compatibility package

        $ dpkg --list | grep -i cifs
        ii cifs-utils 2:5.1-1ubuntu1 Common Internet File System utilities
        ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers
        ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix

    I originally noticed that my other machine had libpam-winbind and nautilus-share installed and the machine with the issue did not. Installing those two packages fixed the errors from 'testparm -s', but did not fix my issue. Finally, I tried to purge and reinstall these packages: smbclient, smbfs, cifs-utils, samba-common, samba-common-bin. Still no luck. Again: it used to work; now it doesn't. A very similarly configured machine works (though some packages are out of date on the working machine).
    The NAS has only one interface/IP address; nmblookup finds its IP from its hostname (from the machine with the issue), and it responds to a ping. Any help would be great. I've been searching AskUbuntu, SuperUser, ubuntuforums and plain old search engines for a week now, and it's driving me crazy!
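
    Since the NAS answers pings and nmblookup, one next step worth trying (my suggestion, not part of the original question) is to rule out a port or protocol-negotiation mismatch by testing the SMB port directly and mounting by hand with an explicit security mode. The sec=ntlm value is a guess to try; some older NAS firmwares only speak classic NTLM, and a kernel update can change the negotiated default:

        # Is anything answering on the SMB-over-TCP port?
        nc -zv 10.1.1.111 445

        # Manual mount with an explicit security mode, then check the kernel log
        sudo mount -t cifs //10.1.1.111/public /media/public \
             -o credentials=/home/downbeat/.credentials,iocharset=utf8,sec=ntlm
        dmesg | tail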

    Read the article

  • When to implement: Together with or after the source product?

    - by Jeremy Oosthuizen
    Somebody recently relayed a prospect's question to me: how hard would it be to implement OUBI after the source product (CC&B, WAM or NMS) has already been implemented? The fact is that MOST non-OUBI Data Warehouse / Business Intelligence implementations take place after the source application(s) are in place and, hopefully, stable. If an organization decides that it needs better reporting and management information, then the logical path (see The Data Warehouse Institute's Data Warehouse Maturity Model) is to a Data Warehouse, no matter when their last applications were implemented. If there is a pre-built Data Warehouse for their specific application, or even for the desired business process in their industry, they're in luck. Otherwise, they have to design and build from scratch, using a toolset. The implementation of a toolset is unlike the implementation of OUBI which, like OBI Apps, contains pre-built ETL routines and user content; much has been written before about the advantages of that.

    So, because OUBI is designed specifically for Oracle Utilities transactional products, we often implement them in parallel, with OUBI lagging a little behind by necessity, like Reporting. Customers know from the start that they are going to need the solution, and therefore purchase the products at the same time. My biggest argument FOR a parallel installation/implementation of OUBI with the source product is two-fold:

    - There could be things (which is the technical term for data elements) that customers figure out they need while implementing OUBI, which are often easier to add during the source product's implementation project than to add later.
    - OUBI's ETL often points out errors (severe or not) with converted data, which are easier to fix during the source product's implementation project; it may even be impossible to fix them afterwards. The conversion routines sometimes miss these errors, because the source system can live with the not-quite-perfect converted data. If the data can't be properly extracted, i.e. the proper Dimensions linked to the Facts, then it can't get into OUBI, which means it can't be analyzed effectively along with the rest of the organization's data.

    Then there is also the throw-away-work argument, which may be significant. The operational/transactional system cannot go live without reports on Day 1. Many of those reports would be taken care of by the implementation of OUBI. If OUBI is implemented after go-live, those reports STILL have to be built during the source product's implementation project, but they become throw-away work after the OUBI implementation.

    I have sometimes been told that it is better to implement OUBI after the source product, because it cuts down on scope and risk for the source product's implementation project. All I can say to that is: bah, humbug. No, seriously: given the arguments above, planning has to include the OUBI implementation, and it has to be managed properly, just like any other implementation. If so, it should not add any risk, and it should be included in the scope from the start. The answer to the prospect's question is therefore that it is not that much more difficult; after all, most DW/BI implementations are done like that. They just have to consider the points above.

    Read the article

  • Scid Chess Program Compiling issue

    - by lbochitt
    First of all, I would like to point out that I'm new to Ubuntu, so sorry if what I am asking is ridiculous. I have downloaded the Scid 4.4 chess program and tried to compile it as explained on its website:

    1) Initialize git: create a folder where you want to download and compile the source, then run git init on your command line.
    2) You're now ready to download the sources; recall Fulvio's spell: git clone git://scid.git.sourceforge.net/gitroot/scid/scid -- this should get you the latest Scid source.
    3) You're now ready to compile Scid. In principle, all you need to do is ./configure and then make.
    4) If you get stuck, you probably need the developer versions of Tcl/Tk. This translates into issuing three commands: sudo apt-get install tcl8.5-dev, sudo apt-get install tk8.5-dev, sudo apt-get install zlib1g-dev.
    5) You should now be ready to compile.

    The problem is that when I run ./configure to start compiling, the following message appears in the terminal:

        configure: Makefile configuration program for Scid
        Tcl/Tk version: 8.5
        Your operating system is: Linux 3.8.0-19-generic
        Location of "tcl.h": /usr/include/tcl8.5
        Location of "tk.h": /usr/include/tcl8.5
        Location of Tcl 8.5 library: not found
        Location of Tk 8.5 library: not found
        Checking if your system already has zlib installed: yes.
        Using Makefile.conf.
        Not all settings could be determined!
        The default Makefile was written.
        You will need to edit it before you can compile Scid.

    What should I do? Has anybody faced this problem before? Thanks in advance.

    I have run ls -l /usr/include/tcl8.5/tcl.h; here's the result:

        -rw-r--r-- 1 root root 87291 abr 22 10:45 /usr/include/tcl8.5/tcl.h

    I have also tried what you suggested ("Could you run git reset --hard HEAD and git clean -d -f to clean up everything using Git? Then run ./configure again. Just a shot in the dark - I've seen some GNU automake stuff still listening to its 'cached' version of the results or something."). Still no solution, and I don't know why it can't recognize the library even though it is installed. I opened configure to see where the program looks for the library. This is the code:

        # libraryPath: List of possible locations of Tcl/Tk library.
        set libraryPath {
            /usr/lib
            /usr/lib64
            /usr/local/tcl/lib
            /usr/local/lib
            /usr/X11R6/lib
            /opt/tcltk/lib
            /usr/lib/x86_64-linux-gnu
        }
        lappend libraryPath "/usr/lib/tcl${tclv}"
        lappend libraryPath "/usr/lib/tk${tclv}"
        lappend libraryPath "/usr/lib/tcl${tclv_nodot}"
        lappend libraryPath "/usr/lib/tk${tclv_nodot}"

        # Try to add tcl_library and auto_path values to libraryPath,
        # in case the user has a non-standard Tcl/Tk library location:
        if {[info exists ::tcl_library]} {
            lappend headerPath \
                [file join [file dirname [file dirname $::tcl_library]] include]
            lappend libraryPath [file dirname $::tcl_library]
            lappend libraryPath $::tcl_library
        }
        if {[info exists ::auto_path]} {
            foreach name $::auto_path { lappend libraryPath $name }
        }
        if {! [info exists var(TCL_INCLUDE)]} {
            puts -nonewline {    Location of "tcl.h": }
            set opt(tcl_h) [findDir "tcl.h" $headerPath "TCL_VERSION.*$tclv"]
            if {$opt(tcl_h) == ""} {
                puts "not found"
                set success 0
                set opt(tcl_h) "$::defaultVar(TCL_INCLUDE)"
            } else {
                puts $opt(tcl_h)
            }
        }
        set opt(tcl_lib) ""
        if {! [info exists var(TCL_LIBRARY)]} {
            puts -nonewline "    Location of Tcl $tclv library: "
            set opt(tcl_lib) [findDir "libtcl${tclv}.*" $libraryPath]
            if {$opt(tcl_lib) == ""} {
                set opt(tcl_lib) [findDir "libtcl${tclv_nodot}.*" $libraryPath]
                if {$opt(tcl_lib) == ""} {
                    puts "not found"
                    set success 0
                    set opt(tcl_lib) "$opt(tcl_h)"
                    set opt(tcl_lib_file) "tcl\$(TCL_VERSION)"
                } else {
                    set opt(tcl_lib_file) "tcl${tclv_nodot}"
                    puts $opt(tcl_lib)
                }
            } else {
                set opt(tcl_lib_file) "tcl\$(TCL_VERSION)"
                puts $opt(tcl_lib)
            }
        }
        if {! [info exists var(TCL_INCLUDE)]} {
            set var(TCL_INCLUDE) "-I$opt(tcl_h)"
        }
        if {! [info exists var(TCL_LIBRARY)]} {
            set var(TCL_LIBRARY) "-L$opt(tcl_lib) -l$opt(tcl_lib_file)"
        }
        return $success

    So I guess (and by guess I mean I have no idea how to code) I should write usr/lib/tcl8.5 and usr/lib/tk8.5 somewhere in here, am I right?
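
    You are close, but rather than editing the matching logic, it may be simpler to first confirm where (or whether) the runtime libraries actually live. A quick check, assuming a stock Ubuntu layout:

        # Where are the Tcl/Tk shared libraries installed, if at all?
        find /usr/lib /usr/lib64 /usr/lib/x86_64-linux-gnu \
             -name 'libtcl8.5*' -o -name 'libtk8.5*' 2>/dev/null

    The libraryPath list above already includes /usr/lib/x86_64-linux-gnu, so if find returns nothing at all, the likely cause is that only the -dev headers are present and the runtime library packages are missing (sudo apt-get install tcl8.5 tk8.5 should cover that). If the libraries do exist but sit in a directory missing from the list, appending that directory inside the set libraryPath { ... } block, or setting TCL_LIBRARY and TK_LIBRARY in the generated Makefile as the configure output itself suggests, should get you past the "not found".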

    Read the article

  • Install problem (Ubuntu Server 10.04) from USB: machine reboots when I hit 'enter' on the 'Install Ubuntu Server' option! Help

    - by Alastair
    We cannot seem to install Ubuntu Server from USB, as the machine reboots when I hit Enter on the 'Install Ubuntu Server' option. My friend wants to try setting up a server, so we downloaded Ubuntu Server 10.04.4, created a boot CD and installed Ubuntu Server with no problem at all. But then the problem arose: the hard drive we wanted to use is a 1 TB SATA drive, and the computer originally had a 40 GB IDE drive. So I bought a SATA-to-IDE / IDE-to-SATA converter from: http://www.microdirect.co.uk/Home/Product/52926/IDE-to-SATA-converter---Converts-IDE-HDD-to-SATA-inc-sata-data-and-power-cables

    Unfortunately, this converter means I cannot plug in the IDE cable, leaving me with only one IDE connection; i.e., the CD drive has to be disconnected for the 1 TB SATA hard drive to be connected. So now the 1 TB drive is connected. I powered the machine on and opened the BIOS to make sure the drive appeared; it did, as ST3ASDAPFKG (something like that). Fortunately, the computer supports USB booting, so I read the Ubuntu Server USB install instructions and tried both Startup Disk Creator and UNetbootin. Startup Disk Creator made the USB bootable with ubuntu-10.04.4-server-i386.iso.

    All looked fine: I stuck the USB drive in, booted the machine, and was quickly presented with the Ubuntu language choice. I hit Enter to select English, and was then presented with: Install Ubuntu Server, Install Ubuntu Enterprise Cloud, Check disk for defects, Test memory, Boot from first hard disk, Rescue a broken system. I can move up and down the menu fine; everything seems OK. I select 'Install Ubuntu Server', and the computer just hangs; the screen either goes blank or locks. So I rebooted the computer; it loads the same menu fine. I select 'Install Ubuntu Server', hit Enter, and the computer just restarts and brings me back to the same menu. Hmmm.

    Then I tried each of the remaining options separately: Install Ubuntu Enterprise Cloud, Check disk for defects, Test memory, Boot from first hard disk, Rescue a broken system. The computer just restarts and comes back to the same Ubuntu menu every time. Grrrr.

    At this point I wish I actually knew how to do a command-line install or something, but I don't have a clue how to do that. So I tried hitting F6 for 'Other options' and tried them all, in various combinations and individually. No luck: Expert mode, acpi=off, noapic, nolapic, edd=on, nodmraid, nomodeset, Free software only.

    At this point I wondered if a BIOS setting was causing problems, so I tried turning every option in there that I don't understand on and off. No luck. I then discovered by accident that if you hit Esc in the Ubuntu install menu, it says "you are leaving the graphical boot menu and starting the text mode interface". I hit OK. Next, a prompt pops up saying 'boot:'. One time it responded to something I typed with 'Cannot find kernel image' (something like that), but since then it just restarts when I hit Enter at that prompt.

    I had a browse on the net and found someone suggesting removing 'quiet' from the install command for 'Install Ubuntu Server'. It made no difference at all; it just reboots...

        Original boot options: noprompt cdrom-detect/try-usb=true persistent file=/cdrom/preseed/ubuntu-server.seed initrd=/install/initrd.gz quiet --
        Modified boot options: noprompt cdrom-detect/try-usb=true persistent file=/cdrom/preseed/ubuntu-server.seed initrd=/install/initrd.gz --

    Still, I cannot install Ubuntu Server from USB, as it reboots when I hit Enter on the 'Install Ubuntu Server' option. This is a real pain, as we cannot swap the 1 TB SATA hard drive for the IDE one just to be able to use the CD drive.
    Why is it so hard to install Ubuntu Server from USB? I have wasted a day and a half on this and am really frustrated; any help would be amazing! I know the answer is out there; it just seems a bit elusive at the moment!

    Computer spec: Asus motherboard, 1 GB RAM (2x512 MB), 200 W power supply, 2.8 GHz Intel processor, onboard 64 MB graphics, 100 Mb Ethernet, 54 Mb wireless.
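
    Before anything else, it is worth ruling out a corrupt image or a bad write to the stick, since a damaged bootloader or installer image can produce exactly this hang-or-reboot behaviour (a guess on my part, not a confirmed diagnosis):

        # Verify the downloaded image against the MD5SUMS file published
        # next to the ISO on releases.ubuntu.com before re-writing the stick
        md5sum ubuntu-10.04.4-server-i386.iso

    If the checksum matches, re-create the stick (ideally on a different USB port, or with a different stick) and run the menu's own "Check disk for defects" entry against it; if even that option reboots the machine, the fault is more likely the USB medium or the BIOS's USB boot emulation than the installer itself.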

    Read the article

  • Expert F# – Pattern Matching with Adam and Eve

    - by MarkPearl
    So I am loving my Expert F# book. I wish I had more time with it, but I really enjoy the little time I get. However, today I was completely stumped by what the book was trying to get across with regards to pattern matching. On page 38, chapter 3, it briefly describes F# option values. On this page it gives a code snippet along the lines of the code below, and then goes on to speak briefly about pattern matching...

        open System

        type 'a option =
            | None
            | Some of 'a

        let people = [ ("Adam", None); ("Eve", None); ("Cain", Some("Adam", "Eve")); ("Abel", Some("Adam", "Eve")) ]

        let showParents(name, parents) =
            match parents with
            | Some(dad, mum) -> printfn "%s has father %s, mother %s" name dad mum
            | None -> printfn "%s has no parents!" name

        Console.WriteLine(showParents("Adam", None))

    When I originally read this code, I think I misunderstood the purpose of the example. For some reason, I thought that the showParents function would magically parse the people list, look for a match on the name, and then show the parents. But obviously it cannot do this, since there is no reference to the people list in the showParents method. After rereading the page, I realized that I had combined the two segments of code, possibly incorrectly, and that a better example would have been a snippet like the following:

        let showParents(name, parents) =
            match parents with
            | Some(dad, mum) -> printfn "%s has father %s, mother %s" name dad mum
            | None -> printfn "%s has no parents!" name

        Console.WriteLine(showParents("Adam", None))
        Console.WriteLine(showParents("Cain", Some("Adam", "Eve")))
        Console.ReadLine()

    However, what if I wanted a function that was passed a list of people and a name, and would then show the parents of that name if there were any, and if not would show that they had no parents... That doesn't seem too difficult, does it? Let's look at my very unoptimized noob F# code to try and achieve this...

        open System

        let people = [ ("Adam", None); ("Eve", None); ("Cain", Some("Adam", "Eve")); ("Abel", Some("Adam", "Eve")) ]

        // Returns the name of the person
        let showName(person : string * (string * string) option) =
            let name = fst(person)
            name

        // Returns a string with the parents' details, or not
        let showParents(itemData : string * (string * string) option) =
            let name = fst(itemData)
            let parents = snd(itemData)
            match parents with
            | Some(dad, mum) -> "Father " + dad + " and Mother " + mum
            | None -> "Has no parents!"

        // Prints the details
        let showDetails(person : string * (string * string) option) =
            Console.WriteLine(showName(person))
            Console.WriteLine(showParents(person))

        // Checks whether the name matches the first portion of person;
        // if so, returns true, else returns false
        let nameMatch(name : string, person : string * (string * string) option) =
            match name with
            | x when x = fst(person) -> true
            | _ -> false

        // Searches a list of people and looks for a match on the name
        let findPerson(name : string, people : (string * (string * string) option) list) =
            let o = Seq.tryFind(fun x -> nameMatch(name, x)) people
            if Option.isSome o then o
            else Option.None

        // Try to find a person; if found, show their details, else show no match
        let FoundPerson = findPerson("Cain", people)
        match FoundPerson with
        | None -> Console.WriteLine("Not found")
        | Some(x) -> showDetails(x)
        Console.ReadLine()

    So, my code isn't the cleanest, but it did teach me a bit more F#. The area I learnt about was the option keyword. The challenge was that it should react accordingly if no match for the name is found, and also if a name is found but the person doesn't have parents. I'm pretty sure I can optimize this code quite a bit more, and I may come back to it sometime in the future and have another look, but for now at least I was able to achieve what I wanted... and my brain has gone just that wee little bit more functional.
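
    For what it is worth, here is one way the lookup and the parent check can fold into a single match. This is my own untested sketch, not from the book; it uses the built-in option type and List.tryFind, nesting the pattern for the parents inside the pattern for the search result:

        let people = [ ("Adam", None); ("Eve", None); ("Cain", Some("Adam", "Eve")); ("Abel", Some("Adam", "Eve")) ]

        // One match covers all three cases: found with parents, found without, not found.
        let showDetails name =
            match people |> List.tryFind (fun (n, _) -> n = name) with
            | Some(n, Some(dad, mum)) -> printfn "%s has father %s, mother %s" n dad mum
            | Some(n, None) -> printfn "%s has no parents!" n
            | None -> printfn "%s was not found" name

        showDetails "Cain"   // Cain has father Adam, mother Eve
        showDetails "Fred"   // Fred was not found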

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward, as the previous options mainly involved writing your own scripts. You can find an excellent description of how this works at James Bayer's blog. You can also find the WebLogic documentation here.

    As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call invokes the back-end service to get the result; this result is then stored in the cache for future requests to access. I'm thinking that this caching functionality would be perfect for some sort of cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here.

    Result caching in a dedicated JVM: This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason why you may want a separate, dedicated JVM is that the result cache data could potentially be quite large, and you may want to protect your OSB Java heap. In this example, the client calls an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached, and when called again, the respective results are retrieved from the cache rather than from the external system.

    Step 1 - Set up your Coherence server: Via the OSB Administration Server Console, create the Coherence server to be used as the results cache. Here are the configured Coherence server arguments from the Server Start tab. Note that I'm using the default Cache Config and Override files in the domain:

        -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m
        -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml
        -Dtangosol.coherence.cluster=OSB-cluster
        -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dcom.sun.management.jmxremote

    Just in case you need it, here is my Coherence server classpath:

        /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar:/app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar:/app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar

    By default, OSB will try to create a local result cache instance. You need to disable this by adding the following JVM parameters to each of the OSB managed servers:

        -Dtangosol.coherence.distributed.localstorage=false
        -DOSB.coherence.cluster=OSB-cluster

    If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server.

    Step 2 - Configure your Business Service: Under the respective Business Service's Message Handling Configuration (Advanced Properties), you need to enable "Result Caching". Additionally, you need to determine what the cache data will be keyed on. In the example below, I'm keying it on the unique Employee Id.

    The results: As this test ran on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id, with result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see that the Business Service was only invoked once, on the first request; all subsequent requests used the result cache.

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Advanced Solution – Wait Type – Day 7 of 28

    - by pinaldave
    Earlier, we discussed the common solutions to the issue with CXPACKET wait time. Today I am going to talk about a few other suggestions that can help reduce CXPACKET waits. If you are going to suggest that I should focus on MAXDOP and COST THRESHOLD, I totally agree; I covered them in detail in yesterday's blog post. Today we are going to discuss a few other ways CXPACKET can be reduced.

    Potential reasons:

    - If data is heavily skewed, there is a chance the query optimizer may incorrectly estimate the amount of data, leading it to assign too few threads to the query. This can easily lead to an uneven workload across threads and may create CXPACKET waits.
    - While retrieving the data, one of the threads may face an IO, memory or CPU bottleneck and have to wait for those resources to execute its task, which may create CXPACKET waits as well.
    - The data being retrieved sits on IO subsystems of different speeds. (This is not common and hardly possible, but there is a chance.)
    - Higher fragmentation in some areas of the table can mean less data per page, which may lead to CXPACKET waits.

    As I said, the reasons mentioned here are not the major causes of CXPACKET waits, but any of these scenarios can contribute to the wait time.

    Best practices to reduce CXPACKET waits:

    - Refer to the earlier article regarding MAXDOP and Cost Threshold.
    - De-fragmenting indexes can help, as more data can be obtained per page (assuming a fill factor close to 100).
    - If the data is spread across multiple files on multiple physical drives of similar speed, CXPACKET waits may reduce.
    - Keep statistics updated, as this gives the query optimizer better estimates when assigning threads and dividing the data among the available threads. Updating statistics can significantly improve the query optimizer's ability to render a proper execution plan, which can affect the parallelism process in a positive way.

    Bad practice: In one recent consultancy project, when I was called in, I noticed that an 'experienced' DBA had seen high CXPACKET waits and, to reduce them, had increased the number of worker threads. In reality, increasing the worker threads had led to many other issues: with more threads, more memory was used, leading to memory pressure, and the CPU scheduler faced more context switching, degrading performance further. When I explained all of this to the 'experienced' DBA, he suggested that we should now reduce the number of threads. Not really! A lower number of threads may cause heavy stalling for parallel queries. I suggest NOT touching the worker-thread setting when dealing with CXPACKET waits.

    Read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience, and I in no way claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of wait stats here is generic and varies from system to system. You are recommended to test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
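
    As a concrete illustration of where the knobs mentioned above live, the statements below sketch the typical adjustments. The numeric values and the table name are placeholders of my own, not recommendations; the right numbers depend entirely on the workload:

        -- Server-wide parallelism settings (values are illustrative only)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism', 4;
        EXEC sp_configure 'cost threshold for parallelism', 25;
        RECONFIGURE;

        -- Keep statistics current so work is divided evenly among threads
        UPDATE STATISTICS dbo.YourLargeTable WITH FULLSCAN;

        -- Rebuild fragmented indexes so each page carries more rows
        ALTER INDEX ALL ON dbo.YourLargeTable REBUILD WITH (FILLFACTOR = 100);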

    Read the article

  • ASP.NET Routing not working on IIS 7.0

    - by Rick Strahl
    I ran into a nasty little problem today when deploying an application that uses ASP.NET 4.0 Routing to my live server. The application and its routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed to the live box (Windows 2008 R1 and IIS 7.0), routing would just not work. Every time I hit a routed URL, IIS would throw up a 404 error. This is an IIS error, not an ASP.NET error, so it doesn't actually come from ASP.NET's routing engine but from IIS's handling of extensionless URLs. Note that it's clearly falling through all the way to the StaticFile handler, which is the last handler to fire in the typical IIS handler list. In other words, IIS is trying to parse the extensionless URL itself, failing, and never handing it to ASP.NET.

    As I mentioned, on my local machine this all worked fine, and to make sure the local and live setups matched, I re-copied my web.config, double-checked the handler mappings in IIS and re-copied the actual application assemblies to the server. Everything looked exactly matched. However, no workey on the server with IIS 7.0!!!

    Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute on the modules key in web.config and set it to true:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true">
            <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" />
          </modules>
        </system.webServer>

    And lo and behold, routing started working on the live server and IIS 7.0! This seems really obvious now, of course, but the really tricky thing is that on IIS 7.5 this key is not necessary: on my Windows 7 machine, ASP.NET routing was working just fine without the key set, while on IIS 7.0 on my live server the same missing setting broke routing. On IIS 7.0 this key must be present, or routing will not work. Oddly, on IIS 7.5 it appears that you can't even turn the behavior off; setting runAllManagedModulesForAllRequests="false" had no effect at all, and routing continued to work just fine even with the flag set to false, which is NOT what I would have expected.

    It is also kind of disappointing that Windows Server 2008 (R1) can't be upgraded to IIS 7.5. It sure seems like that should have been possible, since the OS core changes in R2 are pretty minor. For the future, I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like, with the release of IIS Express, Microsoft has taken some steps to untie some of those tight OS links from IIS. Let's hope that's the case going forward; it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out.

    Moral of the story: never assume that your dev setup will work as-is on the live setup. It took me forever to figure this out, because I assumed that since my web.config on the local machine was fine and working, and I had copied all the relevant web.config data to the server, it couldn't be the configuration settings. I was looking everywhere but in the .config file before getting desperate and remembering the flag when I accidentally checked the IntelliSense options on the modules key. Never assume anything. The other moral: try to keep your dev machine and server OSs in sync whenever possible. Maybe it's time to upgrade to Windows Server 2008 R2 after all.

    More info on extensionless URLs in IIS: Want to find out more about exactly how extensionless URLs work on IIS 7? Then check out "How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests", which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article; a great linked reference for this topic!

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, Windows

    Read the article

  • Test and Report Add-on Compatibility in Firefox

    - by Asian Angel
    Now that the new version of Firefox is out, you probably have a favorite extension or two that has not been updated yet. With the Add-on Compatibility Reporter extension, you can get that extension working again, test it, and report back to Mozilla on how well it does.

    Before: For our example we chose a great extension that unfortunately has not been updated yet. As you can see here, Firefox is refusing to let the extension install.

    After: As soon as you install Add-on Compatibility Reporter, you will be presented with an information page on how the extension works and what you can do with it. You should definitely take a moment to read it, as it is very helpful. After trying our non-compatible extension again, we were able to proceed with the install process. Notice at the bottom that "compatibility checking" has been overridden. Success! As soon as we restarted our browser, the "non-compatible" icon was easy to see in the Add-ons Manager window... but the extension did install (terrific!). Clicking on the extension's entry reveals a new button in the lower right corner. Using the compatibility drop-down menu, you can report whether the extension is working as well as before or is actually having problems. The extension we used for our example had no problems whatsoever, so good news there. Whichever option you choose, you will be presented with a small report window containing information about the extension, your browser's version number, and your operating system. Click "Submit Report" to send it on its way. You will see a confirmation message letting you know that your report was successfully submitted. While the extension itself has not been altered in any form, at least you have it working again and have helped verify whether it still works well. Notice the notation now present in place of the compatibility button, which lets you know that you have already taken care of that particular extension. Looking great...

    Conclusion: If you have a favorite extension that you miss using in the newest release of Firefox, then this is definitely an extension to add to your browser. Not only will your extension start working again, but you can let Mozilla know how well it is working and (hopefully) help get the extension updated.

    Links: Download the Add-on Compatibility Reporter extension (Mozilla Add-ons)

    Read the article

  • SQL 2005 Transaction Rollback Hung – Unresolved Deadlock

    - by steveh99999
    Encountered an interesting issue recently with a SQL 2005 SP3 Enterprise Edition system. Every weekend, a full database reindex was run on this system; normally this took around one and a half hours. Then, one weekend, the job ran for over 17 hours and had yet to complete... At this point, the DBA cancelled the job. Job status is now cancelled; issue over...

    However, cancelling the job had not killed the reindex transaction. DBCC OPENTRAN was still showing the transaction as open; the oldest open transaction in the database was now over 17 hours old. Consequently, the transaction log % used was growing dramatically and locks were still being held in the database... Further attempts to kill the transaction did nothing, i.e. we had a transaction which could not be killed. In sysprocesses, it was apparent the SPID was in rollback status, but the SPID was not accumulating CPU or IO. Was the SPID stuck?

    On examination of the SQL errorlog, shortly after the reindex had started, a whole bunch of deadlock output had been produced by trace flag 1222, and then this:

        spid5s   ***Stack Dump being sent to xxxxxxx\SQLDump0042.txt
        spid5s   * *******************************************************************************
        spid5s   *
        spid5s   * BEGIN STACK DUMP:
        spid5s   *   12/05/10 01:04:47 spid 5
        spid5s   *
        spid5s   * Unresolved deadlock
        spid5s   *
        spid5s   * *******************************************************************************
        spid5s   * -------------------------------------------------------------------------------
        spid5s   * Short Stack Dump
        spid5s   Stack Signature for the dump is 0x000001D7
        spid5s   External dump process return code 0x20000001.

    An unresolved deadlock; I don't think I've ever seen one of these before... A quick call to Microsoft support confirmed that the following bug had been hit: http://support.microsoft.com/kb/961479

    So, the only option to get rid of the hung SPID was to restart SQL Server... Fortunately, SQL Server restarted without any issues, and I was pleasantly surprised to see that recovery on this particular database was fast. However, restarting SQL Server to fix an issue is not something I would normally rush to do... The short-term fix: the reindex was changed to use a MAXDOP of 1. The longer-term fix will be to apply the correct CU, or to wait for SQL 2005 SP4, which should be released any day soon, I hope.
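
    For anyone walking through a similar incident, the diagnostic statements referred to above look roughly like this; the database name and SPID are placeholders, not values from this system:

        DBCC OPENTRAN ('YourDatabase');   -- reports the oldest active transaction

        SELECT spid, status, open_tran, cmd, cpu, physical_io, waittime
        FROM master..sysprocesses
        WHERE open_tran > 0;              -- is the rolling-back SPID accumulating CPU or IO?

        KILL 64 WITH STATUSONLY;          -- progress report for an in-flight rollback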

    Read the article

  • Create Custom Windows Key Keyboard Shortcuts in Windows

    - by Asian Angel
    Nearly everyone uses keyboard shortcuts of some sort on their Windows system, but what if you could create new ones for your favorite apps or folders? You might just be amazed at how simple it can be, with just a few clicks and no programming, using WinKey.

    WinKey in action: During the installation process you will see a window that gives you a good basic idea of just what can be accomplished with this wonderful little app. As soon as the installation process has finished, you will see the main app window. It provides a simple, straightforward listing of all the keyboard shortcuts that it is currently managing. Note: WinKey automatically adds an entry to the Startup listing in your Start Menu during installation. To see the regular built-in Windows keyboard shortcuts that it is managing, click "Standard Shortcuts" to select it and then click on "Properties". For those who are curious, WinKey does have a system tray icon that can be disabled if desired.

    Now, on to creating those new keyboard shortcuts... For our example, we decided to create a keyboard shortcut for an app rather than a folder. To create a shortcut for an app, click on the small paper icon. Once you have done that, browse to the appropriate folder and select the exe file. The second step is choosing which keyboard shortcut you would like to associate with that particular app. You can use the drop-down list to choose from a listing of available keyboard combinations; for our example we chose "Windows Key + A". The final step is choosing the run mode. There are three options available in the drop-down list; choose the one that best suits your needs. Here is what our example looked like once finished. All that is left to do at this point is click "OK" to finish the process. And just like that, your new keyboard shortcut is now listed in the main app window. Time to try out your new keyboard shortcut! One quick use of it and Iron Browser opened right up. WinKey really does make creating new keyboard shortcuts as simple as possible.

    Conclusion: If you have been wanting to create new keyboard shortcuts for your favorite apps and folders, then it really does not get any simpler than with WinKey. This is definitely a recommended app for anyone who loves "get it done" software.

    Links: Download WinKey at Softpedia

    Read the article

  • Dynamically creating meta tags in asp.net mvc

    - by Jalpesh P. Vadgama
    As we all know, meta tags have a very important role in search engine optimization, and if we want our site listed with a good ranking on search engines, then we have to provide meta tags. Some time ago I blogged about dynamically creating meta tags in ASP.NET 2.0/3.5 sites; in this blog post I am going to explain how we can create meta tags dynamically in ASP.NET MVC very easily. To have dynamic meta tags, we have to create the meta tag markup on the server side, so I have created a method like the following:

        public string HomeMetaTags()
        {
            System.Text.StringBuilder strMetaTag = new System.Text.StringBuilder();
            strMetaTag.AppendFormat(@"<meta content='{0}' name='Keywords'/>", "Home Action Keyword");
            strMetaTag.AppendFormat(@"<meta content='{0}' name='Description'/>", "Home Description Keyword");
            return strMetaTag.ToString();
        }

    Here you can see that I have written a method which returns a string with meta tags. Here you can write any logic: you can fetch the values from a database, or even from XML based on a key passed in. For demo purposes I have hard-coded them. So the method creates a meta tag string and returns it. Now I am going to store that meta tag string in the ViewBag, just like we do with the title. In this post I am using the standard template, so we have our title in ViewBag.Message. In the same way, I save the meta tags like following:

        public ActionResult Index()
        {
            ViewBag.Message = "Welcome to ASP.NET MVC!";
            ViewBag.MetaTag = HomeMetaTags();
            return View();
        }

    Here in the above code you can see that I have stored the meta tags in ViewBag.MetaTag. Now, as I am using the standard ASP.NET MVC 3 template, the head element lives in the Shared folder's _Layout.cshtml file. So, to render the meta tags, I have modified the head tag of _Layout.cshtml like following:

        <head>
            <title>@ViewBag.Title</title>
            <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
            <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>
            @Html.Raw(ViewBag.MetaTag)
        </head>

    Here in the above code you can see that I have used the @Html.Raw method to embed the meta tags in the head section of _Layout.cshtml. The Html.Raw method embeds the output into the head tag without HTML-encoding it; as we have already taken care of the HTML in the string function, we don't need the encoding. Now it's time to run the application in the browser. Once you run your application and click View Source, you will find the meta tags for the home page. That's it! It's very easy to create meta tags dynamically. Hope you liked it... Stay tuned for more. Till then, happy programming.
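
    If you would rather not write one hard-coded method per page, a small variation (my own sketch, not from the post) keeps the keywords and descriptions in a dictionary keyed by page name, standing in for the database or XML lookup mentioned above:

        private static readonly System.Collections.Generic.Dictionary<string, string[]> MetaData =
            new System.Collections.Generic.Dictionary<string, string[]>
            {
                // page key -> { keywords, description }; the values are illustrative placeholders
                { "Home", new[] { "Home Action Keyword", "Home Description Keyword" } }
            };

        public string MetaTagsFor(string page)
        {
            string[] meta;
            if (!MetaData.TryGetValue(page, out meta))
                return string.Empty;   // no entry registered: emit nothing for this page

            var strMetaTag = new System.Text.StringBuilder();
            strMetaTag.AppendFormat(@"<meta content='{0}' name='Keywords'/>", meta[0]);
            strMetaTag.AppendFormat(@"<meta content='{0}' name='Description'/>", meta[1]);
            return strMetaTag.ToString();
        }

    The action then becomes ViewBag.MetaTag = MetaTagsFor("Home"); and the layout stays exactly as shown above.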

    Read the article

  • Enable Multi-Column Google Searches with a User Script

    - by Asian Angel
    Do you want to improve the search results view at Google and make better use of the webpage space? With a little user script magic, you can make those search results look and fit better in your favorite browser. Note: this user script may conflict with the AutoPager extension if you have it installed.

    Before: Here is the standard single-column view of search results at Google. Not too bad, but the available space could certainly be better utilized. Note: for the purposes of our example we are using Google Chrome, but this user script can easily be added to other browsers.

    After: If you have never installed a user script in Chrome before, it is just as simple as installing regular extensions from the official Google website. Here you can see the details for the user script we are installing; notice that you can view the source code if desired. To add the user script to Chrome, click on "Install". Once you start the install process, you will see an intermediary message in the lower left corner of your browser asking if you wish to continue. Click "Continue" to move to the next step. From this point on, the install process is practically identical to that of official extensions. You can see the final confirmation window here; click "Install" to finish adding the user script to Chrome. As with regular extensions, you will see a post-install message in the upper right corner. So, what does a user script look like on the Extensions page? You can see the user script entry here; apart from lacking an icon, it looks rather identical to a normal extension.

    After refreshing the search page shown above, we now have two columns of search results (the default setting). This looks much, much better than a single-column view, and there is little to no page scrolling required now. To switch to a three-column view, simply use the keyboard shortcut Alt + 3. To return to a single-column view use Alt + 1, and for the default two-column view use Alt + 2. Three keyboard shortcuts for three different views; definitely a good thing. Note: on our test system we needed to use the number keys at the top of the keyboard to switch views; this is most likely the result of unique settings on our test system.

    Conclusion: If you want a better viewing experience when conducting searches at Google, this user script will make a very nice addition to your favorite browser. For those using Firefox, you can add user scripts with the Greasemonkey & Stylish extensions. Using the Opera browser? See our how-to for adding user scripts to Opera.

    Links: Install the Multi-Column View of Google Search Results User Script

    Read the article

  • OBIEE 11.1.1 - How to Enable Caching in Internet Information Services (IIS) 7.0+

    - by Ahmed A
    Follow these steps to configure static file caching and content expiration if you are using the IIS 7.0 Web Server with Oracle Business Intelligence. Tip: Install IIS URL Rewrite, which enables Web administrators to create powerful outbound rules.

    1. In the "web.config" file for the OBIEE static files virtual directory (ORACLE_HOME/bifoundation/web/app), add the following outbound rule for caching:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
            <system.webServer>
                <urlCompression doDynamicCompression="true" />
                <rewrite>
                    <outboundRules>
                        <rule name="header1" preCondition="FilesMatch" patternSyntax="Wildcard">
                            <match serverVariable="RESPONSE_CACHE_CONTROL" pattern="*" />
                            <action type="Rewrite" value="max-age=604800" />
                        </rule>
                        <preConditions>
                            <preCondition name="FilesMatch">
                                <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/css|^text/x-javascript|^text/javascript|^image/gif|^image/jpeg|^image/png" />
                            </preCondition>
                        </preConditions>
                    </outboundRules>
                </rewrite>
            </system.webServer>
        </configuration>

    2. Restart IIS.
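    To confirm the rule is active, you can inspect the response headers of any static OBIEE resource. A minimal check from a shell (a sketch; the host, port and resource path below are placeholders, so substitute values from your own deployment):

        # request only the headers of a static file served through IIS
        curl -sI http://obiee-host:9704/analytics/res/b_mozilla/common.css | grep -i "cache-control"
        # expected output if the outbound rule is applied:
        # Cache-Control: max-age=604800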

    Read the article

  • SQL SERVER – Why Do We Need Data Quality Services – Importance and Significance of Data Quality Services (DQS)

    - by pinaldave
    Databases are awesome.  I’m sure my readers know my opinion about this – I have made SQL Server my life’s work after all!  I love technology and all things computer-related.  Of course, even with my love for technology, I have to admit that it has its limits.  For example, it takes a human brain to notice that data has been input incorrectly.  Computer “brains” might be faster than humans, but human brains are still better at pattern recognition.  For example, a human brain will notice that “300” is a ridiculous age for a human to be, but to a computer it is just a number.  A human will also notice similarities between “P. Dave” and “Pinal Dave,” but this would stump most computers. In a database, these sorts of anomalies are incredibly important.  Databases are often used by multiple people who rely on the data being true and accurate, so data quality is key.  That is why the improved SQL Server feature set, alongside Master Data Management, includes Data Quality Services.  This service has the ability to recognize and flag anomalies like out-of-range numbers and similarities between data.  This allows a human brain, with its pattern recognition abilities, to double-check and ensure that P. Dave is the same as Pinal Dave. A nice feature of Data Quality Services is that once you set the rules for the program to follow, it will not only keep your data organized in the future, but also go back and “fix up” any data that has already been entered.  It also allows you to combine data from multiple places, and it will apply these rules across the board, so that you don’t have any weird issues that crop up when trying to fit a round peg into a square hole. There are two parts of Data Quality Services that help you accomplish all these neat things.  The first part is DQS Server, which you can think of as the engine of the system.  It is installed alongside SQL Server (it must be installed separately, after SQL Server itself is installed) and runs quietly in the background, performing all its cleanup services. DQS Client is the user interface that you interact with to set the rules and check over your data.  There are three main aspects of the Client: knowledge base management, data quality projects and administration.  Knowledge base management is the part of the system that allows you to set the rules – that is, program the “knowledge base” – so that your database is clean and consistent. Data quality projects are what run in the background and clean up the data that is already present.  Administration allows you to check on what DQS Client is doing, change rules, and generally oversee the entire process.  The whole process is user-friendly and a pleasure to use.  I highly recommend implementing Data Quality Services in your database. Here are a few of my blog posts related to Data Quality Services, and I encourage you to try it out. 
SQL SERVER – Installing Data Quality Services (DQS) on SQL Server 2012 SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS SQL SERVER – DQS Error – Cannot connect to server – A .NET Framework error occurred during execution of user-defined routine or aggregate “SetDataQualitySessions” – SetDataQualitySessionPhaseTwo SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS) Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • jtreg update, March 2012

    - by jjg
    There is a new update for jtreg 4.1, b04, available. The primary changes have been to support faster and more reliable test runs, especially for tests in the jdk/ repository. [For users inside Oracle, there is preliminary direct support for gathering code coverage data using jcov while running tests, and for generating a coverage report when all the tests have been run.] jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg/.

    Scratch directories: On platforms like Windows, if a test leaves a file open when the test is over, that can cause a problem for downstream tests, because the scratch directory cannot be emptied beforehand. This is addressed in agentvm mode by discarding any agents using that scratch directory and starting new agents using a new, empty scratch directory. Successive directories use the suffixes _1, _2, etc. If you see such directories appearing in the work directory, that is an indication that files were left open in the preceding directory in the series.

    Locking support: Some tests use shared system resources such as fixed port numbers. This causes a problem when running tests concurrently. So, you can now mark a directory such that all the tests within all such directories will be run sequentially, even if you use -concurrency:N on the command line to run the rest of the tests in parallel. This is seen as a short-term solution: it is recommended that tests not use shared system resources whenever possible. If you are running multiple instances of jtreg on the same machine at the same time, you can use a new option, -lock:file, to specify a file to be used for file locking; otherwise, the locking will just be within the JVM used to run jtreg.

    "autovm mode": By default, if no options to the contrary are given on the command line, tests will be run in othervm mode. Now, a test suite can be marked so that the default execution mode is "agentvm" mode. In conjunction with this, you can now mark a directory such that all the tests within that directory will be run in "othervm" mode. Conceptually, this is equivalent to putting /othervm on every appropriate action of every test in that directory and any subdirectories. This is seen as a short-term solution: it is recommended that tests be adapted to use agentvm mode, or use "@run main/othervm" explicitly.

    Info in test result files: The user name and jtreg version info are now stored in the properties near the beginning of the .jtr file.

    Build: The makefiles used to build and test jtreg have been reorganized and simplified. jtreg is now using JT Harness version 4.4.

    Other: jtreg provides access to GNOME_DESKTOP_SESSION_ID when set. jtreg ensures that shell tests are given an absolute path for the JDK under test. jtreg now honors the "first sentence rule" for the description given by @summary. jtreg saves the default locale before executing a test in samevm or agentvm mode, and restores it afterwards.

    Bug fixes: jtreg tried to execute a test even if the compilation failed in agentvm mode because of a JVM crash. jtreg did not correctly handle the -compilejdk option.

    Acknowledgements: Thanks to Alan, Amy, Andrey, Brad, Christine, Dima, Max, Mike, Sherman, Steve and others for their help, suggestions, bug reports and for testing this latest version.
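    Putting a few of these options together, a typical invocation might look like the sketch below; the work/report directories and the test path are placeholders for illustration, not taken from the announcement:

        # run the selected tests in agentvm mode, four agents in parallel;
        # the lock file coordinates exclusive-access tests across concurrent jtreg instances
        jtreg -agentvm -concurrency:4 -lock:/tmp/jtreg.lock \
              -w build/jtreg/work -r build/jtreg/report \
              jdk/test/java/util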

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
    Ever wanted to have a simple jQuery menu bound to an ASP.NET web site map file? Ever wanted cool CSS design implemented on your ASP.NET data-bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound to ASP.NET data controls? If your answer to the above questions is ‘yes’, then you will probably be happy to try out the latest release (v5.0) of the Employee Info Starter Kit, which is intended to address different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the ‘Pareto Principle’, or 80-20 rule: it aims to let a web developer gain 80% of the productivity with 20% of the effort, with respect to both learning curve and production. The project template was initially hosted on Microsoft Code Gallery and has been downloaded more than 150,000 times since. The latest version of this starter kit is hosted on CodePlex.

    Release Highlights

    User End Functional Specification: The user-end functionality of this starter kit is pretty simple and straightforward, focused on performing CRUD operations on employee records: creating a new employee record, reading existing employee records, updating an existing employee record, and deleting existing employee records.

    Architectural Overview: Simple 3-layer architecture (presentation, business logic and data access layer); ASP.NET web-form-based user interface; built-in code generators for the logical layers, implemented in the Visual Studio default template engine (T4); built-in Entity Framework entities as business entities (aka data containers); a Data Mapper design pattern based data access layer, implemented in C# and Entity Framework; a Domain Model design pattern based business logic layer, implemented in C#; and an object model for cross-cutting concerns (such as validation, logging and exception management).

    Minimum System Requirements: Visual Studio 2010 (Web Developer Express Edition) or higher; SQL Server 2005 (Express Edition) or higher.

    Technology Utilized: Programming languages/scripts - browser side: JavaScript; web server side: C#; code generation template: T4. Frameworks - .NET Framework 4.0; JavaScript framework: jQuery 1.5.1; CSS framework: 960 grid system. .NET Framework components - Entity Framework, optional/named parameters (new in .NET 4.0), Tuple (new in .NET 4.0), extension methods, lambda expressions, anonymous types, query expressions, automatically implemented properties, LINQ, partial classes and methods, generic types, nullable types. ASP.NET components - meta description and keyword support (new in .NET 4.0), routing (new in .NET 4.0), GridView (CSS support for sorting, new in .NET 4.0), Repeater, FormView, LoginView, SiteMapPath, skins, themes, master pages, ObjectDataSource, and role-based security.

    Getting Started Guide: To see Employee Info Starter Kit in action is pretty easy! Download the latest version, extract the file, click the C# project file (Eisk.Web.csproj) in the extracted folder to open it in Visual Studio 2010, and hit Ctrl+F5!
    The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in detail, just check the following links: Release Home Page, Installation Walkthrough, Hands-on Coding Walkthrough, Technical Reference. Enjoy!

    Read the article

  • How do I fix a corrupted harddrive after failed upgrade?

    - by Nil
    The problem originated when I was trying to fix this problem. Things went horribly, horribly wrong and I ended up with a new problem altogether. The last thing I did was run sudo apt-get install and that caused my system to freeze. I restarted my computer and it would not boot from the hard drive. I ran a copy of Ubuntu 12.10 from a flash drive that I had and ran gparted to see if my partitions were all there. It returned this message: Invalid partition table on /dev/sda -- wrong signature 5208. The drive appeared as a 2TiB unallocated drive with an error. The drive had 4 partitions before (plus random unallocated space): a fat32 partition, an ext4 partition which contained Ubuntu 13.04/13.10 (I don't even know which one at this point), an extended partition which contained a swap partition for my Ubuntu partition (I was meaning to move that Ubuntu partition into the extended partition, but never got around to it), and another partition (I don't remember how I formatted it). I should also mention this is a 1TB hard drive. So in short: I have a corrupted partition table on the primary hard drive that I boot from. How can I fix this? I tried mounting the drive with sudo mount /dev/sda1 /media/ubuntu, then I changed my directory to that folder and tried to list files, and this monstrosity happened:

        $ ls
        ls: cannot access ??w?j^?.: Input/output error
        ls: cannot access ??(? ?x?.|: Input/output error
        ls: cannot access 6W_@?)?._??: Input/output error
        ls: cannot access HB0v???.A}?: Input/output error
        ls: cannot access ???.?X: Input/output error
        ls: cannot access t)?.+?l: Input/output error
        ls: cannot access ?h@ ?.@ : Input/output error
        ls: cannot access >? @??.???: Input/output error
        ls: cannot access m???.??: Input/output error
        ls: cannot access @ if??a?: Input/output error
        ls: cannot access ?M!vN$?.??n: Input/output error
        ls: cannot access ?o? ??.Bm`: Input/output error
        ls: cannot access ?:I??? M. : Input/output error
        ls: cannot access W??.??: Input/output error
        ls: cannot access ?: Input/output error
        ls: cannot access ?W?s??: Input/output error
        ls: cannot access ?v?k?.???: Input/output error
        ls: cannot access 5?$<N??: Input/output error
        .x????.??i: Input/output error
        ls: cannot access je????.j?1: Input/output error
        XjD?.???: Input/output error
        ls: cannot access W??n???.?: Input/output error
        ls: cannot access ?^x.$"?: Input/output error
        ls: cannot access !??*!??j.??: Input/output error
        ls: cannot access '-??k?^?.???: Input/output error
        ls: cannot access b?w?w?b.\??: Input/output error
        ls: cannot access o????"z.??B: Input/output error
        ls: cannot access ??b?h.?3-: Input/output error
        ls: cannot access ??.$7: Input/output error
        ls: cannot access )??K.bk: Input/output error
        ls: cannot access s??z?.?(?: Input/output error
        ls: cannot access ?F@?0?.@?: Input/output error
        .?D: Input/output error
        .??: Input/output error
        ls: cannot access?????. @: Input/output error
        ls: cannot access ?/?? ?.??: No such file or directory
        ls: cannot access rk?p4q(?.?k: Input/output error

    This looks promising. 
    This is the output of fdisk -l:

        $ sudo fdisk -l /dev/sda
        Warning: ignoring extra data in partition table 5
        Warning: ignoring extra data in partition table 5
        Warning: ignoring extra data in partition table 5
        Warning: invalid flag 0x5208 of partition table 5 will be corrected by w(rite)

        Disk /dev/sda: 2199.0 GB, 2199023132672 bytes
        255 heads, 63 sectors/track, 267349 cylinders, total 4294967056 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x44fdfe06

           Device Boot       Start          End      Blocks   Id  System
        /dev/sda1        113305600    894715903   390705152    c  W95 FAT32 (LBA)
        /dev/sda2        894715904   1489307647   297295872   83  Linux
        /dev/sda3       1489309694   1497307135     3998721    5  Extended
        /dev/sda4       1497309184   1953523711   228107264    7  HPFS/NTFS/exFAT
        /dev/sda5   ?   3013257822   3688738171   337740175   aa  Unknown
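    Whatever repair route is taken from here, it is prudent to first save a raw copy of the current, even corrupted, partition table so any change can be undone. A minimal sketch, assuming the disk really is /dev/sda and that a mounted USB stick at /media/usb (a placeholder path) holds the backups:

        # back up the MBR: boot code plus partition table, the first 512 bytes of the disk
        sudo dd if=/dev/sda of=/media/usb/sda-mbr.bak bs=512 count=1
        # keep a text copy of the current layout for reference as well
        sudo fdisk -l /dev/sda > /media/usb/sda-fdisk.txt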

    Read the article

  • Installing CUDA on Ubuntu 12.04 with nvidia driver 295.59

    - by johnmcd
    I have been trying to get CUDA running on an nvidia GT 650M based laptop. I am running Ubuntu 12.04 with the nvidia 295.59 driver. Also, my laptop uses Optimus, so I have installed the driver via Bumblebee. Bumblebee is not working correctly yet; however, I believe it is possible to install CUDA independently. To install CUDA I have followed the instructions detailed here: How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics? However, I am still running into problems building the SDK. I made the changes specified at the above link in common.mk, but I got the following (snippet) from the build process:

        make[2]: Entering directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/fluidsGL'
        /usr/bin/ld: warning: libnvidia-tls.so.302.17, needed by /usr/lib/nvidia-current/libGL.so, not found (try using -rpath or -rpath-link)
        /usr/bin/ld: warning: libnvidia-glcore.so.302.17, needed by /usr/lib/nvidia-current/libGL.so, not found (try using -rpath or -rpath-link)
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv018tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv012glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv017glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv012tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv015tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv019tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv000glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv017tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv013tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv013glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv018glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv022tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv007tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv009tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv020tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv014glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv015glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv016tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv001glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv006tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv021tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv011tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv020glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv019glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv002glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv021glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv014tls'
        collect2: ld returned 1 exit status
        make[2]: *** [../../bin/linux/release/fluidsGL] Error 1
        make[2]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/fluidsGL'
        make[1]: *** [src/fluidsGL/Makefile.ph_build] Error 2
        make[1]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C'
        make: *** [all] Error 2

    The libraries that ld warns about are installed on my system:

        $ locate libnvidia-tls.so.302.17 libnvidia-glcore.so.302.17
        /usr/lib/nvidia-current/libnvidia-glcore.so.302.17
        /usr/lib/nvidia-current/libnvidia-tls.so.302.17
        /usr/lib/nvidia-current/tls/libnvidia-tls.so.302.17
        /usr/lib32/nvidia-current/libnvidia-glcore.so.302.17
        /usr/lib32/nvidia-current/libnvidia-tls.so.302.17
        /usr/lib32/nvidia-current/tls/libnvidia-tls.so.302.17

    However, /usr/lib/nvidia-current and /usr/lib32/nvidia-current are not being picked up by ldconfig. I have tried adding them by adding a file to /etc/ld.so.conf.d/, which gets past this error; however, now I am getting the following error:

        make[2]: Entering directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/deviceQueryDrv'
        cc1plus: warning: command line option ‘-Wimplicit’ is valid for C/ObjC but not for C++ [enabled by default]
        obj/x86_64/release/deviceQueryDrv.cpp.o: In function `main':
        deviceQueryDrv.cpp:(.text.startup+0x5f): undefined reference to `cuInit'
        deviceQueryDrv.cpp:(.text.startup+0x99): undefined reference to `cuDeviceGetCount'
        deviceQueryDrv.cpp:(.text.startup+0x10b): undefined reference to `cuDeviceComputeCapability'
        deviceQueryDrv.cpp:(.text.startup+0x127): undefined reference to `cuDeviceGetName'
        deviceQueryDrv.cpp:(.text.startup+0x16a): undefined reference to `cuDriverGetVersion'
        deviceQueryDrv.cpp:(.text.startup+0x1f0): undefined reference to `cuDeviceTotalMem_v2'
        deviceQueryDrv.cpp:(.text.startup+0x262): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x457): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x4bc): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x502): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x533): undefined reference to `cuDeviceGetAttribute'
        obj/x86_64/release/deviceQueryDrv.cpp.o:deviceQueryDrv.cpp:(.text.startup+0x55e): more undefined references to `cuDeviceGetAttribute' follow
        collect2: ld returned 1 exit status
        make[2]: *** [../../bin/linux/release/deviceQueryDrv] Error 1
        make[2]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/deviceQueryDrv'
        make[1]: *** [src/deviceQueryDrv/Makefile.ph_build] Error 2
        make[1]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C'
        make: *** [all] Error 2

    I would appreciate any help that anyone can provide. If I can provide any further information, please let me know. Thanks.
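    For reference, the /etc/ld.so.conf.d/ step mentioned above amounts to something like the following sketch (the .conf file name is arbitrary; the directories are the ones reported by locate):

        # register the driver directories with the dynamic linker
        echo "/usr/lib/nvidia-current"   | sudo tee    /etc/ld.so.conf.d/nvidia-current.conf
        echo "/usr/lib32/nvidia-current" | sudo tee -a /etc/ld.so.conf.d/nvidia-current.conf
        # rebuild the linker cache so the new paths take effect
        sudo ldconfig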

    Read the article

  • Weblogic 10.3.4 (PS3) nodemanager wont start?

    - by angelo.santagata
    Hi all, well I'm back from Australia, and one of the things that happened while I was away is that Oracle announced the PS3 release of its SOA & WebCenter products. Now, I normally use pre-installed images, but I always like to install the products at least once; that way I get to see the installation caveats. Here's one. Installation on Windows 7 64-bit, 64-bit JVM, generic WebLogic Server installer. All worked fine, EXCEPT I can't start the node manager; I get the following error:

        <08-Feb-2011 17:16:48> <INFO> <Loading domains file: D:\products\wls1034\WLSERV~1.3\common\NODEMA~1\nodemanager.domains>
        <08-Feb-2011 17:16:48> <SEVERE> <Fatal error in node manager server>
        weblogic.nodemanager.common.ConfigException: Native version is enabled but nodemanager native library could not be loaded
            at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:249)
            at weblogic.nodemanager.server.NMServerConfig.<init>(NMServerConfig.java:190)
            at weblogic.nodemanager.server.NMServer.init(NMServer.java:182)
            at weblogic.nodemanager.server.NMServer.<init>(NMServer.java:148)
            at weblogic.nodemanager.server.NMServer.main(NMServer.java:390)
            at weblogic.NodeManager.main(NodeManager.java:31)
        Caused by: java.lang.UnsatisfiedLinkError: D:\products\wls1034\wlserver_10.3\server\native\win\32\nodemanager.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
            at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)
            at java.lang.Runtime.loadLibrary0(Runtime.java:823)
            at java.lang.System.loadLibrary(System.java:1028)
            at weblogic.nodemanager.util.WindowsProcessControl.<init>(WindowsProcessControl.java:17)
            at weblogic.nodemanager.util.ProcessControlFactory.getProcessControl(ProcessControlFactory.java:24)
            at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:247)
            ... 5 more

    OK, it appears that the node manager has gotten confused and thinks this is a 32-bit install of WebLogic Server, whereas it is the 64-bit install. It might have been something I did, or didn't do, on installation (e.g. -d64 on the JVM command line); however, the workaround is pretty easy:

    1. Create a file called nodemanager.properties in %WL_HOME%\common\nodemanager (on my machine that is D:\products\wls1034\wlserver_10.3\common\nodemanager).
    2. Add the following line to it: NativeVersionEnabled=false
    3. Start it up! This forces the node manager not to use the native .DLL files and to use emulation/non-native methods instead.
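    In file form, the complete workaround is just this one-line properties file (the path follows the install described above):

        # %WL_HOME%\common\nodemanager\nodemanager.properties
        # here: D:\products\wls1034\wlserver_10.3\common\nodemanager\nodemanager.properties
        NativeVersionEnabled=false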

    Read the article

  • Screen (command-line program) bug?

    - by VioletCrime
    Fired up my Minecraft server again after about a year off. My server used to run 11.04, which has since been upgraded to 12.04; I lost my management scripts in the upgrade (thought I'd backed up the user's home directory), but whatever, I enjoy developing stuff like that anyways. However, this time around, I'm running into issues. I start the Minecraft server using a detached screen, however the script is unable to 'stuff' commands into the screen instance until I attach to the screen then detach again? Once I do that, I can stuff anything I want into the server's terminal using the -X option, until I stop/start the server again, then I have to reattach and detach in order to restore functionality? Here's the manager script:

        #!/bin/bash

        # Name of the screen housing the server (set with the -S flag at startup)
        SCREENNAME=minecraft1
        # Name of the folder housing the Minecraft world
        FOLDERNAME=world1

        startServer (){
            if screen -list | grep "$SCREENNAME" > /dev/null; then
                echo "Cannot start Minecraft server; it is already running!"
            else
                screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar
                sleep 2
                if screen -list | grep "$SCREENNAME" > /dev/null; then
                    echo "Minecraft server started; happy mining!"
                else
                    echo "ERROR: Minecraft server failed to start!"
                fi
            fi
        }

        stopServer (){
            if screen -list | grep "$SCREENNAME" > /dev/null; then
                echo "Server is running. Giving a 1-minute warning."
                screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in one minute."
                screen -S $SCREENNAME -X stuff $'\015'
                sleep 45
                screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in 15 seconds."
                screen -S $SCREENNAME -X stuff $'\015'
                sleep 15
                screen -S $SCREENNAME -X stuff "/stop"
                screen -S $SCREENNAME -X stuff $'\015'
            else
                echo "Server is not running; nothing to stop."
            fi
        }

        stopServerNow (){
            if screen -list | grep "$SCREENNAME" > /dev/null; then
                echo "Server is running. Giving a 5-second warning."
                screen -S $SCREENNAME -X stuff "/say EMERGENCY SHUTDOWN! 5 seconds to halt."
                screen -S $SCREENNAME -X stuff $'\015'
                sleep 5
                screen -S $SCREENNAME -X stuff "/stop"
                screen -S $SCREENNAME -X stuff $'\015'
            else
                echo "Server is not running; nothing to stop."
            fi
        }

        restartServer (){
            if screen -list | grep "$SCREENNAME" > /dev/null; then
                echo "Server is running. Giving a 1-minute warning."
                screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in one minute."
                screen -S $SCREENNAME -X stuff $'\015'
                sleep 45
                screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in 15 seconds."
                screen -S $SCREENNAME -X stuff $'\015'
                sleep 15
                screen -S $SCREENNAME -X stuff "/stop"
                screen -S $SCREENNAME -X stuff $'\015'
                sleep 2
                startServer
            else
                echo "Cannot restart server: it isn't running."
            fi
        }

        # In order for this function to work, a directory 'backup/$FOLDERNAME' must exist
        # in the same directory that '$FOLDERNAME' resides
        backupWorld (){
            if screen -list | grep "$SCREENNAME" > /dev/null; then
                echo "Server is running. Giving a 1-minute warning."
                screen -S $SCREENNAME -X stuff "/say Shutting down (backup) in one minute."
                screen -S $SCREENNAME -X stuff $'\015'
                screen -S $SCREENNAME -X stuff "/say Server should be down for no more than a few seconds."
                screen -S $SCREENNAME -X stuff $'\015'
                sleep 45
                screen -S $SCREENNAME -X stuff "/say Shutting down for backup in 15 seconds."
                screen -S $SCREENNAME -X stuff $'\015'
                sleep 15
                screen -S $SCREENNAME -X stuff "/stop"
                screen -S $SCREENNAME -X stuff $'\015'
            fi
            sleep 2
            if screen -list | grep $SCREENNAME > /dev/null; then
                echo "Server is still running? Error."
            else
                cd ..
                tar -czvf backup0.tar.gz $FOLDERNAME
                mv backup0.tar.gz backup/$FOLDERNAME
                cd backup/$FOLDERNAME
                rm backup10.tar.gz
                mv backup9.tar.gz backup10.tar.gz
                mv backup8.tar.gz backup9.tar.gz
                mv backup7.tar.gz backup8.tar.gz
                mv backup6.tar.gz backup7.tar.gz
                mv backup5.tar.gz backup6.tar.gz
                mv backup4.tar.gz backup5.tar.gz
                mv backup3.tar.gz backup4.tar.gz
                mv backup2.tar.gz backup3.tar.gz
                mv backup1.tar.gz backup2.tar.gz
                mv backup0.tar.gz backup1.tar.gz
                cd ../../$FOLDERNAME
                screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar
                sleep 2
                if screen -list | grep "$SCREENNAME" > /dev/null; then
                    echo "Minecraft server restarted; happy mining!"
                else
                    echo "ERROR: Minecraft server failed to start!"
                fi
            fi
        }

        printCommands (){
            echo
            echo "$0 usage:"
            echo
            echo "Start : Starts the server on a detached screen."
            echo "Stop : Stop the server; includes a 1-minute warning."
            echo "StopNOW : Stops the server with only a 5-second warning."
            echo "Restart : Stops the server and starts the server again."
            echo "Backup : Stops the server (1 min), backs up the world, and restarts."
            echo "Help : Display this message."
        }

        # Forces case-insensitive string comparisons
        shopt -s nocasematch

        # Primary 'Switch'
        if [[ $1 = "start" ]]; then
            startServer
        elif [[ $1 = "stop" ]]; then
            stopServer
        elif [[ $1 = "stopnow" ]]; then
            stopServerNow
        elif [[ $1 = "backup" ]]; then
            backupWorld
        elif [[ $1 = "restart" ]]; then
            restartServer
        else
            printCommands
        fi
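    One thing worth trying before the attach/detach dance (an untested sketch, not part of the original script): explicitly target window 0 with -p when stuffing. A detached session that has never been attached may have no "current" window for -X commands to act on, and naming the window sidesteps that:

        # select window 0 explicitly, then stuff the command plus a carriage return
        screen -S minecraft1 -p 0 -X stuff "/say test$(printf '\r')"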

    Read the article

  • Speaker at the German Visual FoxPro Developer Conference 2003

    The following is an excerpt from the UniversalThread conference coverage of the German Visual FoxPro Developer Conference 2003, written by Hans-Otto Lochmann and Armin Neudert.

    Track: Visual FoxPro and Linux. This track consists of 4 sessions presented on one day in one sequence. Originally the Linux portion of this track was to be presented by Whil Hentzen, the well-known publisher, book author and conference speaker. Unfortunately illness prevented him from joining this DevCon. Rainer got the bad news only early on Friday morning. It was definitely too late to find a replacement among the already invited speakers on such short notice. So Rainer decided to take over these "three sessions in a row" himself, with "a little help from his friends". He hired a coach for the weekend and prepared the slides and sessions by himself - the originally planned slides and session material were still in the USA. Rainer barely survived an endless disaster of C0000005's due to various wrong configuration settings... At the presentation Jochen Kirstätter helped massively with the technical details regarding Linux, whereas Rainer did the slides and the presentation. Gerold Lübben then presented the MySQL part - as originally planned.

    This track concentrated on how to run Visual FoxPro applications on Linux machines with the help of a Windows emulator like Wine. As more and more people use Linux machines in production (and not just for running servers), more and more invitations to bid for development jobs include the requirement to run the application in a Linux environment. If you would like to participate in such submissions, then you should get familiar with the open source operating system Linux and the open source database system MySQL. [...]

    These sessions provided a broad, complete overview of where Linux fits into the current computing landscape from the perspective of a VFP developer, where VFP can be used with Linux, and a conceptual plan for how to approach the incorporation of Linux into your day-to-day work. In order to be able to work with a Linux back end, you are going to need to know something about how Linux works. The best way involves a two-step process. First, plunk down a Linux workstation on your desk next to your Windows machine and develop some experience with the new OS. Second, once you have a basic level of comfort with Linux, gained through your experience on a workstation, leverage that knowledge and learn to connect to a Linux server from your Windows machine.

    This track showed both of these processes: what you can expect when you set up your Linux workstation, how to set it up, how to connect to your Windows network, how to fit VFP into the mix, and even how you could use it to replace your Windows workstation in some cases. The track also demonstrated how to connect to an existing Linux server, running MySQL or another back end, and how to get your VFP apps talking to that back-end data. It also showed both of the positions you can take: Rainer disliked it wholeheartedly (the bad-guy position in these talks) and Jochen loved it (the good-guy and "typical Linux techie" position we all love). These opposing positions lasted for three sessions, and both sides were shown with their pros and cons in live and lively discussions between the speakers (club banging was forbidden).

    Gerold Lübben showed how Visual FoxPro and MySQL can work together. MySQL is one of the most well-known open source databases, available for nearly all platforms. Particularly in eBusiness, MySQL is well positioned and well known for its performance and its stability. Still, we like Visual FoxPro more - for sure. [...]

    Read the article

< Previous Page | 810 811 812 813 814 815 816 817 818 819 820 821  | Next Page >