Search Results

Search found 2282 results on 92 pages for 'walking man'.


  • Why is Linux choosing the wrong source IP address

    - by Scheintod
    and what to do to let it choose the right one? This all happens inside an OpenVZ container. The host is Debian Wheezy with a Red Hat/OpenVZ kernel:

    root@mycl2:~# uname -a
    Linux mycl2 2.6.32-openvz-042stab081.5-amd64 #1 SMP Mon Sep 30 16:40:27 MSK 2013 x86_64 GNU/Linux

    The container has two (virtual) network interfaces, one in public and one in private address space:

    root@mycl2:~# ifconfig
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
              inet addr:127.0.0.2  P-t-P:127.0.0.2  Bcast:0.0.0.0  Mask:255.255.255.255
              UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
              RX packets:475 errors:0 dropped:0 overruns:0 frame:0
              TX packets:775 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:32059 (31.3 KiB)  TX bytes:56309 (54.9 KiB)

    venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
              inet addr:80.123.123.29  P-t-P:80.123.123.29  Bcast:80.123.123.29  Mask:255.255.255.255
              UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

    venet0:1  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
              inet addr:10.0.1.29  P-t-P:10.0.1.29  Bcast:10.0.1.29  Mask:255.255.255.255
              UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

    The route to the private network is set manually:

    root@mycl2:~# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    10.0.0.0        0.0.0.0         255.0.0.0       U     0      0        0 venet0
    0.0.0.0         0.0.0.0         0.0.0.0         U     0      0        0 venet0

    Trying to ping other hosts on the private network leads to the wrong source address being chosen:

    root@mycl2:~# ip route get 10.0.1.26
    10.0.1.26 dev venet0  src 80.123.123.29
        cache  mtu 1500 advmss 1460 hoplimit 64

    Why is this, and what can I do about it?

    EDIT: If I create the route with (thanks to Joshua)

    ip route add 10.0.0.0/8 dev venet0 src 10.0.1.29

    it works. But according to man ip-route, the src parameter should only set the source IP if this route is chosen. And if this route is chosen, shouldn't the source IP be that anyway?

    Read the article

  • HttpWebRequest and Ignoring SSL Certificate Errors

    - by Rick Strahl
    Man I can't believe this. I'm still mucking around with OFX servers and it drives me absolutely crazy how some of these servers are just so unbelievably misconfigured. I've recently hit three different major brokerages which fail HTTPS validation with bad or corrupt certificates, at least according to the .NET WebRequest class. What's somewhat odd here though is that WinInet seems to find no issue with these servers - it's only .NET's HTTP client that's ultra finicky. So the question then becomes: how do you tell HttpWebRequest to ignore certificate errors? In WinInet there used to be a host of flags to do this, but it's not quite so easy with WebRequest. Basically you need to override the certificate validation on the ServicePointManager by hooking up a custom validation callback. Not exactly trivial. Here's the code to hook it up:

    public bool CreateWebRequestObject(string Url)
    {
        try
        {
            this.WebRequest = (HttpWebRequest)System.Net.WebRequest.Create(Url);
            if (this.IgnoreCertificateErrors)
                ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };
            // ... (rest of the method elided in the excerpt)

    One thing to watch out for is that this is an application-global setting. There's one global ServicePointManager, and once you set this value any subsequent requests will inherit this policy as well, which may or may not be what you want. So it's probably a good idea to set the policy when the app starts and leave it be - otherwise you may run into odd behavior in some situations, especially in multi-threaded scenarios. Another way to deal with this is in your application's .config file:

    <configuration>
      <system.net>
        <settings>
          <servicePointManager
              checkCertificateName="false"
              checkCertificateRevocationList="false" />
        </settings>
      </system.net>
    </configuration>

    This seems to work most of the time, although I've seen some situations where it doesn't, but where the code implementation works, which is frustrating. The .config settings aren't as inclusive as the programmatic code that can ignore any and all cert errors - shrug. Anyway, the code approach got me past the stopper issue. It still amazes me that these OFX servers even require this. After all, this is financial data we're talking about here. The last thing I want to do is disable extra checks on the certificates. Well, I guess I shouldn't be surprised - these are the same companies that apparently don't believe in XML enough to generate valid XML (or even valid SGML for that matter)...
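
    Since the callback is process-global, one way to limit the blast radius is to only relax validation for hosts you have explicitly vetted. Here is a minimal sketch of that idea; the allow-listed host name is hypothetical and whether sender is your request object is an assumption, not something from the original post:

    using System.Net;
    using System.Net.Security;

    static class CertPolicy
    {
        // Call once at startup - the callback is process-global, so keep
        // the override as narrow as possible.
        public static void AllowBrokenCertsForKnownHosts()
        {
            ServicePointManager.ServerCertificateValidationCallback =
                (sender, cert, chain, errors) =>
                {
                    // Certificates that validate cleanly are always accepted.
                    if (errors == SslPolicyErrors.None)
                        return true;

                    // Assumed allow-list: only tolerate broken certificates
                    // on hosts vetted out-of-band (hypothetical host name).
                    var request = sender as HttpWebRequest;
                    return request != null &&
                           request.RequestUri.Host.EndsWith("ofx.example.com");
                };
        }
    }

    For HttpWebRequest calls the sender argument is typically the originating request, which is what makes the per-host check possible.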

    Read the article

  • Dynamic Type to do away with Reflection

    - by Rick Strahl
    The dynamic type in C# 4.0 is a welcome addition to the language. One thing I’ve been doing a lot with it is to remove explicit Reflection code that’s often necessary when you ‘dynamically’ need to walk an object hierarchy. In the past I’ve had a number of ReflectionUtils that used string based expressions to walk an object hierarchy. With the introduction of dynamic much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot.

    The Old Way - Reflection

    Here’s a really contrived example, but assume for a second you’d want to dynamically retrieve Page.Request.Url.AbsolutePath based on a Page instance in an ASP.NET Web Page request. The strongly typed version looks like this:

    string path = Page.Request.Url.AbsolutePath;

    Now assume for a second that Page wasn’t available as a strongly typed instance and all you had was an object reference to start with and you couldn’t cast it (right, I said this was contrived :-)). If you’re using raw Reflection code to retrieve this you’d end up writing 3 sets of Reflection calls using GetValue(). Here’s some internal code I use to retrieve Property values as part of ReflectionUtils:

    /// <summary>
    /// Retrieve a property value from an object dynamically. This is a simple version
    /// that uses Reflection calls directly. It doesn't support indexers.
    /// </summary>
    /// <param name="instance">Object to make the call on</param>
    /// <param name="property">Property to retrieve</param>
    /// <returns>Object - cast to proper type</returns>
    public static object GetProperty(object instance, string property)
    {
        return instance.GetType()
                       .GetProperty(property, ReflectionUtils.MemberAccess)
                       .GetValue(instance, null);
    }

    If you want more control over properties and support both fields and properties as well as array indexers a little more work is required:

    /// <summary>
    /// Parses Properties and Fields including Array and Collection references.
    /// Used internally for the 'Ex' Reflection methods.
    /// </summary>
    private static object GetPropertyInternal(object Parent, string Property)
    {
        if (Property == "this" || Property == "me")
            return Parent;

        object result = null;
        string pureProperty = Property;
        string indexes = null;
        bool isArrayOrCollection = false;

        // Deal with Array Property
        if (Property.IndexOf("[") > -1)
        {
            pureProperty = Property.Substring(0, Property.IndexOf("["));
            indexes = Property.Substring(Property.IndexOf("["));
            isArrayOrCollection = true;
        }

        // Get the member
        MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0];
        if (member.MemberType == MemberTypes.Property)
            result = ((PropertyInfo)member).GetValue(Parent, null);
        else
            result = ((FieldInfo)member).GetValue(Parent);

        if (isArrayOrCollection)
        {
            indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty);

            if (result is Array)
            {
                int Index = -1;
                int.TryParse(indexes, out Index);
                result = CallMethod(result, "GetValue", Index);
            }
            else if (result is ICollection)
            {
                if (indexes.StartsWith("\""))
                {
                    // String Index
                    indexes = indexes.Trim('\"');
                    result = CallMethod(result, "get_Item", indexes);
                }
                else
                {
                    // assume numeric index
                    int index = -1;
                    int.TryParse(indexes, out index);
                    result = CallMethod(result, "get_Item", index);
                }
            }
        }

        return result;
    }

    /// <summary>
    /// Returns a property or field value using a base object and sub members including . syntax.
    /// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company")
    /// This method also supports indexers in the Property value such as:
    /// Customer.DataSet.Tables["Customers"].Rows[0]
    /// </summary>
    /// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param>
    /// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param>
    public static object GetPropertyEx(object Parent, string Property)
    {
        Type type = Parent.GetType();

        int at = Property.IndexOf(".");
        if (at < 0)
        {
            // Complex parse of the property
            return GetPropertyInternal(Parent, Property);
        }

        // Walk the . syntax - split into current object (Main) and further parsed objects (Subs)
        string main = Property.Substring(0, at);
        string subs = Property.Substring(at + 1);

        // Retrieve the next . section of the property
        object sub = GetPropertyInternal(Parent, main);

        // Now go parse the left over sections
        return GetPropertyEx(sub, subs);
    }

    As you can see there’s a fair bit of code involved in retrieving a property or field value reliably, especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties, including one called GetPropertyEx() which can walk the dot syntax hierarchy easily. Anyway, with ReflectionUtils I can retrieve Page.Request.Url.AbsolutePath using code like this:

    string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string;

    This works fine, but is bulky to write and of course requires that I use my custom routines. It’s also quite slow, as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy.

    Enter dynamic - way easier!

    .NET 4.0’s dynamic type makes the above really easy. The following code is all that it takes:

    object objPage = Page;   // force to object for contrivance :)
    dynamic page = objPage;  // convert to dynamic from untyped object
    string scriptUrl = page.Request.Url.AbsolutePath;

    The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example to force the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference, which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types, so when you look at IntelliSense while typing Request. every member shows up as dynamic. In other words, any dynamic value access on an object returns another dynamic object, which is what allows the walking of the hierarchy chain. Note also that the result value doesn’t have to be explicitly cast as string in the code above - the compiler is perfectly happy without the cast in this case, inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment, which is nice, making for natural syntax that looks *exactly* like the fully typed syntax but is completely dynamic.

    Note that you can also use indexers in the same natural syntax, so the following also works on the dynamic page instance:

    string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"];

    The dynamic type is going to make a lot of Reflection code go away, as it’s simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax. Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance:

    void Reflection()
    {
        Stopwatch stop = new Stopwatch();
        stop.Start();

        for (int i = 0; i < reps; i++)
        {
            string url = Page.GetType()
                             .GetProperty("Title", ReflectionUtils.MemberAccess)
                             .GetValue(Page, null) as string;
        }

        stop.Stop();
        Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString());
    }

    void Dynamic()
    {
        Stopwatch stop = new Stopwatch();
        stop.Start();

        dynamic page = Page;
        for (int i = 0; i < reps; i++)
        {
            string url = page.Title;
        }

        stop.Stop();
        Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString());
    }

    The dynamic code runs in 4-5 milliseconds, while the Reflection code runs around 200+ milliseconds! There’s a bit of overhead in the first dynamic object call, but subsequent calls are blazing fast and performance is actually much better than manual Reflection. Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime.
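
    To see the same pattern outside of ASP.NET, here is a small self-contained sketch; the Customer/Address types and values are invented purely for illustration:

    using System;

    class Address  { public string City { get; set; } }
    class Customer { public Address Address { get; set; } }

    class Program
    {
        static void Main()
        {
            object untyped = new Customer
            {
                Address = new Address { City = "Columbus" }
            };

            // Reflection: one GetProperty/GetValue hop per level of the hierarchy
            object addr  = untyped.GetType().GetProperty("Address").GetValue(untyped, null);
            string city1 = (string)addr.GetType().GetProperty("City").GetValue(addr, null);

            // dynamic: the same walk in natural member syntax, resolved at runtime
            dynamic cust = untyped;
            string city2 = cust.Address.City;

            Console.WriteLine("{0} / {1}", city1, city2); // Columbus / Columbus
        }
    }

    Both lines produce the same value; the dynamic version just moves the member lookup to run time.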

    Read the article

  • ERROR: Linux route add command failed: external program exited with error status: 4

    - by JohnMerlino
    A remote machine running Fedora uses OpenVPN, and multiple developers were successfully able to connect to it via their OpenVPN clients. However, I am running Ubuntu 12.04 and I am having trouble connecting to the server via VPN. I copied ca.crt, home.key, and home.crt from the server to the /etc/openvpn folder on my local machine. My client.conf file looks like this:

    ##############################################
    # Sample client-side OpenVPN 2.0 config file #
    # for connecting to multi-client server.     #
    #                                            #
    # This configuration can be used by multiple #
    # clients, however each client should have   #
    # its own cert and key files.                #
    #                                            #
    # On Windows, you might want to rename this  #
    # file so it has a .ovpn extension           #
    ##############################################

    # Specify that we are a client and that we
    # will be pulling certain config file directives
    # from the server.
    client

    # Use the same setting as you are using on
    # the server.
    # On most systems, the VPN will not function
    # unless you partially or fully disable
    # the firewall for the TUN/TAP interface.
    ;dev tap
    dev tun

    # Windows needs the TAP-Win32 adapter name
    # from the Network Connections panel
    # if you have more than one. On XP SP2,
    # you may need to disable the firewall
    # for the TAP adapter.
    ;dev-node MyTap

    # Are we connecting to a TCP or
    # UDP server? Use the same setting as
    # on the server.
    ;proto tcp
    proto udp

    # The hostname/IP and port of the server.
    # You can have multiple remote entries
    # to load balance between the servers.
    remote xx.xxx.xx.130 1194
    ;remote my-server-2 1194

    # Choose a random host from the remote
    # list for load-balancing. Otherwise
    # try hosts in the order specified.
    ;remote-random

    # Keep trying indefinitely to resolve the
    # host name of the OpenVPN server. Very useful
    # on machines which are not permanently connected
    # to the internet such as laptops.
    resolv-retry infinite

    # Most clients don't need to bind to
    # a specific local port number.
    nobind

    # Downgrade privileges after initialization (non-Windows only)
    ;user nobody
    ;group nogroup

    # Try to preserve some state across restarts.
    persist-key
    persist-tun

    # If you are connecting through an
    # HTTP proxy to reach the actual OpenVPN
    # server, put the proxy server/IP and
    # port number here. See the man page
    # if your proxy server requires
    # authentication.
    ;http-proxy-retry # retry on connection failures
    ;http-proxy [proxy server] [proxy port #]

    # Wireless networks often produce a lot
    # of duplicate packets. Set this flag
    # to silence duplicate packet warnings.
    ;mute-replay-warnings

    # SSL/TLS parms.
    # See the server config file for more
    # description. It's best to use
    # a separate .crt/.key file pair
    # for each client. A single ca
    # file can be used for all clients.
    ca ca.crt
    cert home.crt
    key home.key

    # Verify server certificate by checking
    # that the certificate has the nsCertType
    # field set to "server". This is an
    # important precaution to protect against
    # a potential attack discussed here:
    # http://openvpn.net/howto.html#mitm
    #
    # To use this feature, you will need to generate
    # your server certificates with the nsCertType
    # field set to "server". The build-key-server
    # script in the easy-rsa folder will do this.
    ns-cert-type server

    # If a tls-auth key is used on the server
    # then every client must also have the key.
    ;tls-auth ta.key 1

    # Select a cryptographic cipher.
    # If the cipher option is used on the server
    # then you must also specify it here.
    ;cipher x

    # Enable compression on the VPN link.
    # Don't enable this unless it is also
    # enabled in the server config file.
    comp-lzo

    # Set log file verbosity.
    verb 3

    # Silence repeating messages
    ;mute 20

    But when I start the OpenVPN client and look in /var/log/syslog, I notice the following error:

    May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37
    May 27 22:13:51 myuser ovpn-client[5626]: ERROR: Linux route add command failed: external program exited with error status: 4
    May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 172.27.12.0 netmask 255.255.255.0 gw 10.27.12.37
    May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.255 gw 10.27.12.37

    And I am unable to connect to the server over the VPN:

    $ ssh [email protected]
    ssh: connect to host xxx.xx.xx.130 port 22: No route to host

    What may I be doing wrong?

    Read the article

  • OBIEE 11.1.1 - OBIEE 11g Full Sample App on VMware Player 4

    - by user809526
    The Full Sample App is designed to run on VirtualBox. Let's describe how to run it on VMware Player 4.

    Open Virtualization Format Tool: http://communities.vmware.com/community/vmtn/server/vsphere/automationtools/ovf
    VMware Player Documentation: https://www.vmware.com/support/pubs/player_pubs.html
    Full Sample App Deployment Guide: sampleapp107-vbimage-deployguide-453583.pdf

    INSTALL

    Install VMware Player 4.0.0 as root:

    LINUX # sh VMware-Player-4.0.0-471780.x86_64.bundle

    (A new VM is not needed and can be deleted later after the installation is completed. "I will install OS later" - blank hard disk. Guest: Linux, Red Hat Enterprise Linux 5 64-bit => rename to RHEL. Target: e.g. /a/root/vmware/. Max disk size: 5 GB (will be deleted). Disk: single file. A dummy RHEL.vmx and RHEL.vmdk are generated. "Delete VM from Disk" in VM Player.)

    Copy the Full Sample App files to the target /a/root/vmware/. WARNING: Select a target, e.g. /a/root/vmware/, with lots of free space: 95 GB.

    Check checksums (md5sum). Please do it!

    ff85c7eacf7fb8c382e98da875e879e1  Sampleapp_v107_GA-disk1.vmdk
    973258cb3c7d64ab03ae853278cf2233  Sampleapp_v107_GA-disk2.vmdk
    e576be16e36d810479736bfb15d050f5  Sampleapp_v107_GA-disk3.vmdk
    3455df77279e53e07d5fee6712f1597d  Sampleapp_v107_GA-disk4.vmdk

    OVF FILE: Sampleapp_v107_GA.ovf

    CONVERSION

    $ cd /a/root/vmware/

    LINUX:
    $ /usr/bin/ovftool -tt=ovf --compress=1 -dm=monolithicSparse Sampleapp_v107_GA.ovf .  [dot]
    Opening OVF source: Sampleapp_v107_GA.ovf
    Warning: No manifest file
    Opening OVF target: .
    Writing OVF package: Sampleapp_v107_GA/Sampleapp_v107_GA.ovf
    Disk Transfer Completed
    Completed successfully

    WINDOWS CYGWIN:
    $ /cygdrive/c/VMwarePlayer/OVFTool/ovftool.exe -tt=ovf --compress=1 -dm=monolithicSparse Sampleapp_v107_GA.ovf .  [dot]
    Opening OVF source: Sampleapp_v107_GA.ovf
    Warning: No manifest file
    Opening OVF target: .
    Writing OVF package: Sampleapp_v107_GA\Sampleapp_v107_GA.ovf
    Disk Transfer Completed
    Completed successfully

    /a/root/vmware$ du -sk
    49095328    .   [50 GB already occupied]

    IMPORT

    First start of VM Player 4: /usr/bin/vmplayer
    "Open a Virtual Machine". Browse to /a/root/vmware/Sampleapp_v107_GA/Sampleapp_v107_GA.ovf [the newly generated .ovf].
    "Import Virtual Machine" dialog. Name: Sampleapp_v107_GA. Location: /a/root/vmware/Sampleapp_v107_GA/storage [was /home/tdubois/vmware/Sampleapp_v107_GA]. "Import".
    "The import failed because /a/root/vmware/Sampleapp_v107_GA/Sampleapp_v107_GA.ovf did not pass OVF specification conformance or virtual hardware compliance checks. Click Retry to relax OVF specification..." "Retry"; long import.
    /a/root/vmware/Sampleapp_v107_GA/storage/Sampleapp_v107_GA.vmx and new .vmdk files are created.

    /a/root/vmware$ du -sk
    95551384    .   [95 GB occupied]

    Full Sample App GUEST SETUP

    "Edit VM settings": min 3 GB RAM, 2+ processors, network bridged. For OBIEE + Essbase testing use 8 GB RAM hardware.
    At the first launch of the Full Sample App, leave OEL booting for several minutes undisturbed. A problem with the X display server may occur [/usr/bin/Xorg; man Xorg]:
    "Failed to start the X server.... Would you like to view the X server output to diagnose the problem?" "No" [tab key]
    "Would you like to try to configure the X server? Note that you will need the root password for this." "Yes" [oracle]
    X Display Settings 800x600 saved in /etc/X11/xorg.conf. "Trying to restart the X server".
    Login as root/oracle in guest OEL. In guest OEL: Virtual Machine > Install VMware Tools...
    Extract the archive VMwareTools-8.8.0-471268.tar.gz (all files) into a writable local directory, e.g. /root.
    In a terminal, run the Perl script:

    # cd /root/vmware-tools-distrib ; ./vmware-install.pl   [keep all default answers]

    Set the keyboard layout: System > Preferences > Keyboard > Layouts.
    Restart the X server, e.g. System > Log Out root..., then relogin.
    Modify the X resolution: System > Preferences > Screen Resolution.

    Full Sample App OEL login: oracle/oracle; root/oracle [default US keyboard layout]. Credentials are described in 'sampleapp107-vbimage-deployguide-453583.pdf'.

    The large files in /a/root/vmware/ and /a/root/vmware/Sampleapp_v107_GA/ may be removed.

    FAILURE REMARK: Adding the 4 original Sampleapp_v107_GA-disks[1234].vmdk to VM Player does NOT work, as described below:
    "Edit VM settings" "Remove" "Hard Disk".
    "Edit VM settings" "Add" "Hard Disk" "Next" "Use an existing virtual disk" "Browse" "Finish" "Keep existing format" "Ok", for each of the 4 disk settings one by one.
    Start VM Player 4. "You do not have write access to a partition". Allow all. Sampleapp_v107 OEL Linux launches, then OEL stalls silently after 'Checking filesystems'.

    Read the article

  • Ask How-To Geek: Dropbox in the Start Menu, Understanding Symlinks, and Ripping TV Series DVDs

    - by Jason Fitzpatrick
    This week we take a look at how to incorporate Dropbox into your Windows Start menu, understanding and using symbolic links, and how to rip your TV series DVDs right to unique and high-quality episode files. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week’s reader dilemmas.

    Add Dropbox to Your Start Menu

    Dear How-To Geek, I use Dropbox all the time and would like to add it right onto my Start menu alongside the other major shortcuts like Documents, Pictures, etc. It seems like adding Dropbox into the menu should be part of the Dropbox installation package! Sincerely, Dropboxing in Des Moines

    Dear Dropboxing, We agree, it would be a nice installation option. As it stands you’re going to have to do a little simple hacking to get Dropbox nestled neatly into your Start menu. The hack isn’t super elegant, but when you’re done you’ll have the link you want and it’ll look like it was there all along. Check out this step-by-step guide here in order to take an existing Library shortcut and rework it to be a Dropbox link.

    Understanding and Using Symbolic Links

    Dear How-To Geek, I was talking to a coworker the other day about an issue I’d been having with a media center application I’m running. He suggested using symbolic links to better organize my media and make it easier for the application to access my collection. I had no idea what he was talking about and never got a chance to bug him about it later. Can you clear up this whole symbolic links business for me? I’ve been using computers for years and I’ve never even heard of it! Sincerely, Symbolic Who?

    Dear Symbolic, Symbolic links aren’t commonly used by many Windows users, which is why you likely haven’t run into the concept. Symbolic links are essentially supercharged shortcuts - the newly introduced Windows library system is really just a type of symbolic link system. You can use symbolic links to do all sorts of neat stuff like link folders to your Dropbox folder, organize media, and more. The concept of symbolic links is pretty simple but the execution can be really tricky. We’d suggest reading over our guide to creating symbolic links in Windows 7, Windows XP, and Ubuntu to get a clearer idea of what you’re getting into.

    Rip Your TV DVDs into Handy Episode Files

    Dear How-To Geek, My wife got me an iPod for Christmas and I still haven’t got around to filling it up. I have tons of entire TV show seasons on DVD and would like to get them on the iPod, but I have absolutely no idea where to start. How do I get the shows off the discs? I thought it would be as easy to import the TV shows into iTunes as it is to import tracks off a CD, but I was totally wrong. I tried downloading some applications to rip them but those didn’t work at all. Very frustrating! Surely there is an easy and/or automated way to do this, right? Sincerely, Free My DVDs

    Dear DVDs, Oh man, is this a frustration we can relate to. It’s inordinately difficult to get movies and TV shows off physical media and into digital (and portable media player-friendly) formats. There are a multitude of ways to rip DVDs and quite a few applications out there (some good, some mediocre, and some outright malware). We’d recommend a two-part punch to solve your ripping woes. You’ll need a copy of DVDFab to strip away the protections on the discs and rip the disc, and Handbrake to load the disc image and convert the files.
    It’s not quite as smooth as the CD-to-iTunes workflow, but it’s still pretty easy. Check out all the steps and settings you’ll want to toggle here.

    Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily.

    Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization.  This is apparent from a common sense standpoint.  Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

    Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed:

    Parallel.Invoke(
        () =>
        {
            Console.WriteLine("Action 1 executing in thread {0}",
                Thread.CurrentThread.ManagedThreadId);
        },
        () =>
        {
            Console.WriteLine("Action 2 executing in thread {0}",
                Thread.CurrentThread.ManagedThreadId);
        },
        () =>
        {
            Console.WriteLine("Action 3 executing in thread {0}",
                Thread.CurrentThread.ManagedThreadId);
        });

    Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively.

    For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
    For example, let’s look at this simple, sequential implementation of quicksort:

    public static void QuickSort<T>(T[] array) where T : IComparable<T>
    {
        QuickSortInternal(array, 0, array.Length - 1);
    }

    private static void QuickSortInternal<T>(T[] array, int left, int right)
        where T : IComparable<T>
    {
        if (left >= right)
        {
            return;
        }

        SwapElements(array, left, (left + right) / 2);

        int last = left;
        for (int current = left + 1; current <= right; ++current)
        {
            if (array[current].CompareTo(array[left]) < 0)
            {
                ++last;
                SwapElements(array, last, current);
            }
        }

        SwapElements(array, left, last);

        QuickSortInternal(array, left, last - 1);
        QuickSortInternal(array, last + 1, right);
    }

    static void SwapElements<T>(T[] array, int i, int j)
    {
        T temp = array[i];
        array[i] = array[j];
        array[j] = temp;
    }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use the framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

    Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to:

    // Code above is unchanged...
    SwapElements(array, left, last);

    Parallel.Invoke(
        () => QuickSortInternal(array, left, last - 1),
        () => QuickSortInternal(array, last + 1, right));
    }

    This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here - by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

    This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation: when parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

    This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right-left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke, as in the sketch below.
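
    Here is a minimal sketch of that first, size-based cutoff; the 4096-element threshold is an assumed value for illustration, not a number from the article, and the right cutoff depends on your data and hardware:

    // Code above is unchanged...
    SwapElements(array, left, last);

    const int ParallelThreshold = 4096;  // assumed cutoff - measure and tune

    if (right - left < ParallelThreshold)
    {
        // Small partition: plain recursion avoids the task-spawn overhead.
        QuickSortInternal(array, left, last - 1);
        QuickSortInternal(array, last + 1, right);
    }
    else
    {
        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right));
    }
    }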
    The second approach is to track how “deep” in the “tree” we currently are, and if we are below some number of levels, stop parallelizing.  This approach is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

    This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to:

    // Code above is unchanged...
    SwapElements(array, left, last);

    if (maxDepth < 1)
    {
        QuickSortInternal(array, left, last - 1, maxDepth);
        QuickSortInternal(array, last + 1, right, maxDepth);
    }
    else
    {
        --maxDepth;
        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1, maxDepth),
            () => QuickSortInternal(array, last + 1, right, maxDepth));
    }

    We no longer allow this to parallelize indefinitely - only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core.

    With this final change, my timings are much better.  On average, I get the following timings:

    Framework via Array.Sort: 7.3 seconds
    Serial Quicksort Implementation: 9.3 seconds
    Naive Parallel Implementation: 14 seconds
    Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework’s Array.Sort implementation.
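
    As a final usage sketch, the public entry point just needs to seed maxDepth; this wrapper is my own illustration of the seeding the article describes, not code from the original post:

    public static void ParallelQuickSort<T>(T[] array) where T : IComparable<T>
    {
        // One round of splitting per logical core keeps every core busy
        // without flooding the scheduler with tiny tasks.
        QuickSortInternal(array, 0, array.Length - 1, Environment.ProcessorCount);
    }

    // Example call:
    //   var rand = new Random();
    //   double[] data = new double[10000000];
    //   for (int i = 0; i < data.Length; i++) data[i] = rand.NextDouble();
    //   ParallelQuickSort(data);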

    Read the article

  • nvidia driver problem after update

    - by baltasar
    I know there are a lot of posts about the nvidia driver, but I am not able to solve this. I updated my configuration yesterday, September 27th, and it seems that a new nvidia driver was involved. The update completed with an error and invited me to send a bug report automatically. I sent it, but it finally said that there was no place to send the bug report to. Today I updated again, and a new kernel was there. Then I rebooted. I found a desktop in 640x480 with a horrible, unreadable font. I ran jockey and tried to go back to another version of the nvidia driver, but it seems that suddenly none of the drivers listed there are valid. I have the x-swat repository enabled, because I remember that I had a similar issue in the past that was only solved with the newest driver.

    jockey log:

    2012-09-28 11:22:24,747 DEBUG: Selecting previously unselected package nvidia-173.
    (Reading database ... 536311 files and directories currently installed.)
    Unpacking nvidia-173 (from .../nvidia-173_173.14.35-0ubuntu0.2_i386.deb) ...
    Processing triggers for man-db ...
    Setting up nvidia-173 (173.14.35-0ubuntu0.2) ...
    Loading new nvidia-173-173.14.35 DKMS files...
    Building only for 3.2.0-32-generic-pae
    Building for architecture i686
    Building initial module for 3.2.0-32-generic-pae
    Error! Bad return status for module build on kernel: 3.2.0-32-generic-pae (i686)
    Consult /var/lib/dkms/nvidia-173/173.14.35/build/make.log for more information.
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    2012-09-28 11:22:25,143 WARNING: modinfo for module nvidia_173 failed: ERROR: modinfo: could not find module nvidia_173
    2012-09-28 11:22:25,143 ERROR: XorgDriverHandler.enable(): package or module not installed, aborting
    2012-09-28 11:22:53,613 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:22:53,613 DEBUG: KMH enabled: False
    2012-09-28 11:22:53,629 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:22:53,629 DEBUG: KMH enabled: False
    2012-09-28 11:23:01,943 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:01,943 DEBUG: KMH enabled: False
    2012-09-28 11:23:01,962 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:01,963 DEBUG: KMH enabled: False
    2012-09-28 11:23:01,998 DEBUG: NVidia(nvidia_173_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:01,998 DEBUG: nvidia_173_updates is not the alternative in use
    2012-09-28 11:23:02,044 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:02,044 DEBUG: nvidia_current is not the alternative in use
    2012-09-28 11:23:02,066 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:02,066 DEBUG: nvidia_current is not the alternative in use
    2012-09-28 11:23:02,106 DEBUG: NVidia(nvidia_current_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:02,106 DEBUG: nvidia_current_updates is not the alternative in use
    2012-09-28 11:23:02,157 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:02,157 DEBUG: KMH enabled: False
    2012-09-28 11:23:02,177 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:02,178 DEBUG: KMH enabled: False
    2012-09-28 11:23:02,245 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:02,245 DEBUG: KMH enabled: False
    2012-09-28 11:23:02,272 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:02,273 DEBUG: KMH enabled: False
    2012-09-28 11:23:02,303 DEBUG: NVidia(nvidia_173_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:02,303 DEBUG: nvidia_173_updates is not the alternative in use
    2012-09-28 11:23:02,330 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:02,410 DEBUG: nvidia_current is not the alternative in use
    2012-09-28 11:23:02,427 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:02,428 DEBUG: nvidia_current is not the alternative in use
    2012-09-28 11:23:02,457 DEBUG: NVidia(nvidia_current_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
    2012-09-28 11:23:02,458 DEBUG: nvidia_current_updates is not the alternative in use

    Read the article

  • Aamir Khan’s Satyamev Jayate stirs a movement

    - by Gopinath
    Bollywood actor Aamir Khan is known for his dedication and hard work in inspiring millions of viewers through movies, discussing social problems and motivating people to solve them. His movie Rang De Basanti seeded the Indian anti-corruption movement, Taare Zameen Par touched on the problems faced by challenged kids, and his latest movie 3 Idiots exposed how education institutions in India are producing lakhs of donkeys out of colleges every year. He has extended his dedication to serving society to the small screen with the launch of the reality TV show Satyamev Jayate.

    Before you misjudge it as one of those nonsense drama or entertainment reality shows, let me tell you that it is not a typical music, games, fight or dance reality show. Satyamev Jayate is all about the real people of India, their problems and how to tackle them. This is not just a reality show, it's a movement to educate people about social evils.

    It's been many years since I spent a couple of hours in front of the TV, as most programs are too cynical or do not add much value. In my childhood I used to anxiously wait for the Mahabharat or He-Man TV shows to start, but two decades later I waited anxiously for the start of Satyamev Jayate. The wait was worth it, and the 1 hour 30 minutes spent watching it were meaningful. When was the last time you were so satisfied after watching a TV show and inspired to do something? I don't remember.

    Today, the show focused on female foeticide and its impact. It showed women who were tortured and forced to abort female foetuses. On the show a few brave women shared their experiences of giving birth to girl babies and the rough times they are going through with their in-laws and husbands. The show not only focused on the problem but also on the root cause of the evil, inspiring people working to tackle it, and what every individual can do to solve it. The best part of the show is that it's not a blame game. When there is a problem, most people quickly get into identifying who is wrong and start blaming them instead of solving the actual problem. Aamir did not blame anyone for female foeticide - neither the government which doesn't impose strict rules, nor the doctors who abort girl babies to make money, nor the mothers-in-law and husbands who torture the mothers of girl babies. He carefully highlighted the problem, showed horrifying statistics and their impact on the future society, and presented a few inspiring people working to tackle the problem. He touched hearts and stirred a movement against the issue.

    For the first time ever I voted for a reality show through SMS, and it's for Satyamev Jayate. I'm proud to do so. Here are a few reactions of popular people, activists and media to the program:

    @aamir_khan absolutely the best program I have seen on TV in recent past. Thanku for converting an idiot box into an inspirationsl medium — Kiran Bedi (@thekiranbedi) May 6, 2012

    Satyamev Jayate proves tht TV 2 can b a tool of social change. — Shekhar Kapur (@shekharkapur) May 6, 2012

    i absolutely loved #satyamevjayate. at least aamir is doing what all of us only talk about. — Harsha Bhogle (@bhogleharsha) May 6, 2012

    Now Television will no longer be called an idiot box, the VISION of Television broadens up with #SatyamevJayate !!! — Madhur Bhandarkar (@mbhandarkar268) May 6, 2012

    The Sunday 11am slot seems to have come back with a bang… #SatyamevJayate — atul kasbekar (@atulkasbekar) May 6, 2012

    I was spellbound, says Prasoon Joshi - It's a unique show. I was completely bowled over by it. It's a never-done-before concept.

    Aamir Khan strikes the right chord with Satyamev Jayate - The format is quite crisp. Talking about the emotional connect, there are moments when your eyes well up with tears, but the various segments ensure there's more content than emotional drama.

    'Satyamev Jayate' gutsy, sensible show: viewers - From filmmakers to clinical psychologists to professors, everyone has given the thumbs up to Aamir Khan's television show 'Satyamev Jayate', saying it is a gutsy, hard-hitting and sensible programme that strikes an emotional chord with the audiences.

    Aamir Khan's TV debut 'Satyamev Jayate' takes Twitter by storm - The roads of the capital sported a deserted look around 11 am on Sunday morning, as everyone was hooked on to their TV sets.

    Did you watch the program? What is your opinion? I'm waiting for 11 AM next Sunday. Are you?

    Read the article

  • Rob Blackwell on interoperability and Azure

    - by Eric Nelson
    At QCon in March we had a sample Azure application implemented in both Java and Ruby to demonstrate that the Windows Azure Platform is not just about .NET. The following is an interesting interview with Rob Blackwell, the R&D director of the partner who implemented the application.

    UK Interoperability Team Interviews Rob Blackwell, R&D Director at Active Web Solutions.

    Is Microsoft taking interoperability seriously?

    Yes. In the past, I think Microsoft has, quite rightly, come in for criticism, but architects and developers should look at this again. The Interoperability Bridges site (http://www.interoperabilitybridges.com/) shows a wide range of projects that allow interoperability from Java, Ruby and PHP, for example. The Windows Azure platform has been architected with interoperable APIs in mind. It's straightforward to access the various storage facilities from just about any language or platform. Azure compute is capable of running more than just C# applications!

    Why is interoperability important to you?

    My company provides consultancy and bespoke development services. We're a Microsoft Gold Partner, but we live in the real world where companies have a mix of technologies provided by a variety of vendors. When developing an enterprise software solution, you rarely have a completely blank canvas. We often see integration scenarios where we need to exchange data with legacy systems. It's not unusual to see modern Silverlight applications being built on top of Java or mainframe-based back ends.

    Could you give us some examples of where interoperability has been important for your projects?

    We developed an innovative sea safety system for the RNLI Lifeboats here in the UK. Commercial fishing is one of the most dangerous professions, and we helped develop the MOB Guardian System, which uses satellite technology and man-overboard devices to raise the alarm when a fisherman gets into trouble. The solution is implemented in .NET running on Windows, but without interoperable standards, it would have been impossible to communicate with the satellite gateway technology. For more information, please see the case study: http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000005892

    More recently, we were asked to build a web site to accompany the QCon 2010 conference in London to help demonstrate and promote interoperability. We built the site using Java and Restlet and hosted it in Windows Azure Compute. The site accepts feedback from visitors and all the data is stored in Windows Azure Storage. We also ported the application to Ruby on Rails for demonstration purposes. Visitors to the stand were surprised that this was even possible.

    Why should Java developers be interested in Windows Azure?

    Windows Azure Storage consists of Blobs, Queues and Tables. The storage is scalable, durable, secure and cost-effective. Using the WindowsAzure4j library, it's easy to use and takes just a few lines of code. If you are writing an application with large data storage requirements, or you want an offsite backup, it makes a lot of sense. Running Java applications in Azure Compute is straightforward with tools like the Tomcat Solution Accelerator (http://code.msdn.microsoft.com/winazuretomcat) and AzureRunMe (http://azurerunme.codeplex.com/). The Windows Azure AppFabric Service Bus can also be used to connect heterogeneous systems running on different networks and in different data centres.

    How can the Service Bus be considered an interoperability solution?

    I think that the Windows Azure AppFabric Service Bus is one of Microsoft's best kept secrets. Think of it as "a globally scalable application plumbing kit in the sky". If you have used Enterprise Service Buses before, you'll be familiar with the concept. Applications can connect to the service bus to securely exchange data - these can be point-to-point or multicast links. With the AppFabric Service Bus, the applications can exist anywhere that has access to the Internet, and the connections can traverse firewalls. This makes it easy to extend or scale your application or reach out to other networks and technologies.

    For example, let's say you have a SQL Server database running on premises and you want to expose the data to a Java application running in the cloud. You could set up a point-to-point Service Bus connection and use JDBC. Traditionally this would have been difficult or impossible without punching holes in firewalls and compromising security.

    Rob Blackwell is R&D Director at Active Web Solutions, www.aws.net, a Microsoft Gold Partner specialising in leading edge software solutions. He is an occasional writer and conference speaker and blogs at www.robblackwell.org.uk

    Related Links:
    UK Azure Online Community - join today.
    UK Windows Azure Site
    Start working with Windows Azure

    Read the article

  • Productivity Tips

    - by Brian T. Jackett
    A few months ago during my first end of year review at Microsoft I was doing an assessment of my year.  One of my personal goals to come out of this reflection was to improve my personal productivity.  While I hear many people say “I wish I had more hours in the day so that I could get more done” I feel like that is the wrong approach.  There is an inherent assumption that you are being productive with your time that you already have and thus more time would allow you to be as productive given more time.    Instead of wishing I could add more hours to the day I’ve begun adopting a number of processes or behavior changes in my personal life to make better use of my time with the goal of improving productivity.  The areas of focus are as follows: Focus Processes Tools Personal health Email Note: A number of these topics have spawned from reading Scott Hanselman’s blog posts on productivity, reading of David Allen’s book Getting Things Done, and discussions with friends and coworkers who had great insights into this topic.   Focus Pre-reading / viewing: Overcome your work addiction Millennials paralyzed by choice Its Not What You Read Its What You Ignore (Scott Hanselman video)    I highly recommend Scott Hanselman’s video above and this post before continuing with this article.  It is well worth the 40+ mins price of admission for the video and couple minutes for article.  One key takeaway for me was listing out my activities in an average week and realizing which ones held little or no value to me.  We all have a finite amount of time to work each day.  Do you know how much time and effort you spend on various aspects of your life (family, friends, religion, work, personal happiness, etc.)?  Do your actions and commitments reflect your priorities?    The biggest time consumers with little value for me were time spent on social media services (Twitter and Facebook), playing an MMO video game, and watching TV.  I still check up on Facebook, Twitter, Microsoft internal chat forums, and other services to keep contact with others but I’ve reduced that time significantly.  As for TV I’ve cut the cord and no longer subscribe to cable TV.  Instead I use Netflix, RedBox, and over the air channels but again with reduced time consumption.  With the time I’ve freed up I’m back to working out 2-3 times a week and reading 4 nights a week (both of which I had been neglecting previously).  I’ll mention a few tools for helping measure your time in the Tools section.   Processes    Do not multi-task.  I’ll say it again.  Do not multi-task.  There is no such thing as multi tasking.  The human brain is optimized to work on one thing at a time.  When you are “multi-tasking” you are really doing 2 or more things at less than 100%, usually by a wide margin.  I take pride in my work and when I’m doing something less than 100% the results typically degrade rapidly.    Now there are some ways of bending the rules of physics for this one.  There is the notion of getting a double amount of work done in the same timeframe.  Some examples would be listening to podcasts / watching a movie while working out, using a treadmill as your work desk, or reading while in the bathroom.    Personally I’ve found good results in combining one task that does not require focus (making dinner, playing certain video games, working out) and one task that does (watching a movie, listening to podcasts).  I believe this is related to me being a visual and kinesthetic (using my hands or actually doing it) learner.  
I’m terrible with auditory learning.  My fiance and I joke that sometimes we talk and talk to each other but never really hear each other.   Goals / Tasks    Goals can give us direction in life and a sense of accomplishment when we complete them.  Goals can also overwhelm us and give us a sense of failure when we don’t complete them.  I propose that you shift your perspective and not dwell on all of the things that you haven’t gotten done, but focus instead on regularly setting measureable goals that are within reason of accomplishing.    At the end of each time frame have a retrospective to review your progress.  Do not feel guilty about what you did not accomplish.  Feel proud of what you did accomplish and readjust your goals for the next time frame to more attainable goals.  Here is a sample schedule I’ve seen proposed by some.  I have not consistently set goals for each timeframe, but I do typically set 3 small goals a day (this blog post is #2 for today). Each day set 3 small goals Each week set 3 medium goals Each month set 1 large goal Each year set 2 very large goals   Tools    Tools are an extension of our human body.  They help us extend beyond what we can physically and mentally do.  Below are some tools I use almost daily or have found useful as of late. Disclaimer: I am not getting endorsed to promote any of these products.  I just happen to like them and find them useful. Instapaper – Save internet links for reading later.  There are many tools like this but I’ve found this to be a great one.  There is even a “read it later” JavaScript button you can add to your browser so when you navigate to a site it will then add this to your list. Stacks for Instapaper – A Windows Phone 7 app for reading my Instapaper articles on the go.  It does require a subscription to Instapaper (nominal $3 every three months) but is easily worth the cost.  Alternatively you can set up your Kindle to sync with Instapaper easily but I haven’t done so. SlapDash Podcast – Apps for Windows Phone and  Windows 8 (possibly other platforms) to sync podcast viewing / listening across multiple devices.  Now that I have my Surface RT device (which I love) this is making my consumption easier to manage. Feed Reader – Simple Windows 8 app for quickly catching up on my RSS feeds.  I used to have hundreds of unread items all the time.  Now I’m down to 20-50 regularly and it is much easier and faster to consume on my Surface RT.  There is also a free version (which I use) and I can’t see much different between the free and paid versions currently. Rescue Time – Have you ever wondered how much time you’ve spent on websites vs. email vs. “doing work”?  This service tracks your computer actions and then lets you report on them.  This can help you quantitatively identify areas where your actions are not in line with your priorities. PowerShell – Windows automation tool.  It is now built into every client and server OS.  This tool has saved me days (and I mean the full 24 hrs worth) of time and effort in the past year alone.  If you haven’t started learning PowerShell and you administrating any Windows OS or server product you need to start today. Various blogging tools – I wrote a post a couple years ago called How I Blog about my blogging process and tools used.  Almost all of it still applies today.   Personal Health    Some of these may be common sense or debatable, but I’ve found them to help prioritize my daily activities. Get plenty of sleep on a regular basis.  
Sacrificing sleep too many nights a week negatively impacts your cognition, attitude, and overall health. Exercise at least three days a week.  Exercise could be lifting weights, climbing multiple flights of stairs, walking for 20 minutes, or a number of other “non-traditional” activities.  I find that regular exercise helps with sleep and improves my overall attitude. Eat a well-balanced diet.  Too much sugar, caffeine, junk food, etc. is not good for your body.  This is not a matter of losing weight but of taking care of your body and helping you perform at your peak potential.   Email    Email can be one of the biggest time consumers (i.e. wasters) if you aren’t careful. Time-box your email usage.  Set a meeting invite for yourself if necessary to limit how much time you spend checking email. Use rules to prioritize your email.  Email from external customers, email from my manager, and messages that include me directly on the To line go into my inbox.  Everything else goes a level down, and I have 30+ rules to further sort it, mostly for distribution lists. Use keyboard shortcuts (when available).  I use Outlook for my primary email and am constantly hitting Alt + S to send, Ctrl + 1 for my inbox, Ctrl + 2 for my calendar, Space / Tab / Shift + Tab to mark items as read, and a number of other useful commands.  Learn them and you’ll see your speed getting through emails increase. Keep emails short.  Few people like reading through long emails.  The first line should state exactly why you are sending the email, followed by 3-4 lines to support it.  Anything longer might be better suited to a phone call or in-person discussion.   Conclusion    In this post I walked through various tips and tricks I’ve found for improving personal productivity.  It is a mix of re-focusing on the things that matter, using tools to assist in your efforts, and cutting out actions that are not aligned with your priorities.  I originally had a whole section on keyboard shortcuts, but with my recent purchase of the Surface RT I’m finding that touch gestures have replaced numerous keyboard commands that I used to need.  I see a big future in touch-enabled devices.  Hopefully some of these tips help you out.  If you have any tools, tips, or ideas you would like to share, feel free to add them in the comments section.         
-Frog Out   Links Scott Hanselman Productivity posts http://www.hanselman.com/blog/CategoryView.aspx?category=Productivity Overcome your work addiction http://blogs.hbr.org/hbsfaculty/2012/05/overcome-your-work-addiction.html?awid=5512355740280659420-3271   Millennials paralyzed by choice http://priyaparker.com/blog/millennials-paralyzed-by-choice   Its Not What You Read Its What You Ignore (video) http://www.hanselman.com/blog/ItsNotWhatYouReadItsWhatYouIgnoreVideoOfScottHanselmansPersonalProductivityTips.aspx   Cutting the cord – Jeff Blankenburg http://www.jeffblankenburg.com/2011/04/06/cutting-the-cord/   Building a sitting standing desk – Eric Harlan http://www.ericharlan.com/Everything_Else/building-a-sitting-standing-desk-a229.html   Instapaper http://www.instapaper.com/u   Stacks for Instapaper http://www.stacksforinstapaper.com/   Slapdash Podcast Windows Phone -  http://www.windowsphone.com/en-us/store/app/slapdash-podcasts/90e8b121-080b-e011-9264-00237de2db9e Windows 8 - http://apps.microsoft.com/webpdp/en-us/app/slapdash-podcasts/0c62e66a-f2e4-4403-af88-3430a821741e/m/ROW   Feed Reader http://apps.microsoft.com/webpdp/en-us/app/feed-reader/d03199c9-8e08-469a-bda1-7963099840cc/m/ROW   Rescue Time http://www.rescuetime.com/   PowerShell Script Center http://technet.microsoft.com/en-us/scriptcenter/bb410849.aspx

    Read the article

  • Why can't I change the AU_AU locale to en_US?

    - by sachin
    /bin/bash: warning: setlocale: LC_ALL: cannot change locale ( (unset)) Generating locales... en_US.ISO-8859-1... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done Generation complete. ganesha@ubuntu:~$ sudo update_locale LANG=en_US sudo: update_locale: command not found ganesha@ubuntu:~$ sudo update-locale LANG=en_US perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = "en_US:en", LC_ALL = " (unset)", LC_PAPER = "AU_AU.UTF-8", LC_ADDRESS = "AU_AU.UTF-8", LC_MONETARY = "AU_AU.UTF-8", LC_MEASUREMAUT = "AU_AU.UTF-8", LC_NUMERIC = "AU_AU.UTF-8", LC_TELEPHONE = "AU_AU.UTF-8", LC_MESSAGES = "AU_AU.UTF-8", LC_COLLATE = "AU_AU.UTF-8", LC_IDAUTIFICATION = "AU_AU.UTF-8", LC_CTYPE = "AU_AU.UTF-8", LC_TIME = "AU_AU.UTF-8", LC_NAME = "AU_AU.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). ganesha@ubuntu:~$ sudo update-locale LANG=en_US.UTF8 perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = "en_US:en", LC_ALL = " (unset)", LC_PAPER = "AU_AU.UTF-8", LC_ADDRESS = "AU_AU.UTF-8", LC_MONETARY = "AU_AU.UTF-8", LC_MEASUREMAUT = "AU_AU.UTF-8", LC_NUMERIC = "AU_AU.UTF-8", LC_TELEPHONE = "AU_AU.UTF-8", LC_MESSAGES = "AU_AU.UTF-8", LC_COLLATE = "AU_AU.UTF-8", LC_IDAUTIFICATION = "AU_AU.UTF-8", LC_CTYPE = "AU_AU.UTF-8", LC_TIME = "AU_AU.UTF-8", LC_NAME = "AU_AU.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). ganesha@ubuntu:~$ gedit ~/.profile (process:6467): Gtk-WARNING **: Locale not supported by C library. Using the fallback 'C' locale. ganesha@ubuntu:~$ sudo update-locale LANG=en_US.UTF8 perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = "en_US:en", LC_ALL = " (unset)", LC_PAPER = "AU_AU.UTF-8", LC_ADDRESS = "AU_AU.UTF-8", LC_MONETARY = "AU_AU.UTF-8", LC_MEASUREMAUT = "AU_AU.UTF-8", LC_NUMERIC = "AU_AU.UTF-8", LC_TELEPHONE = "AU_AU.UTF-8", LC_MESSAGES = "AU_AU.UTF-8", LC_COLLATE = "AU_AU.UTF-8", LC_IDAUTIFICATION = "AU_AU.UTF-8", LC_CTYPE = "AU_AU.UTF-8", LC_TIME = "AU_AU.UTF-8", LC_NAME = "AU_AU.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). ganesha@ubuntu:~$ sudo apt-get install --reinstall locales Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded. Need to get 0 B/3359 kB of archives. After this operation, 0 B of additional disk space will be used. perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = "en_US:en", LC_ALL = " (unset)", LC_TIME = "AU_AU.UTF-8", LC_MONETARY = "AU_AU.UTF-8", LC_CTYPE = "AU_AU.UTF-8", LC_COLLATE = "AU_AU.UTF-8", LC_ADDRESS = "AU_AU.UTF-8", LC_TELEPHONE = "AU_AU.UTF-8", LC_MESSAGES = "AU_AU.UTF-8", LC_NAME = "AU_AU.UTF-8", LC_MEASUREMAUT = "AU_AU.UTF-8", LC_IDAUTIFICATION = "AU_AU.UTF-8", LC_NUMERIC = "AU_AU.UTF-8", LC_PAPER = "AU_AU.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). (Reading database ... 157848 files and directories currently installed.) Preparing to replace locales 2.13+git20120306-3 (using .../locales_2.13+git20120306-3_all.deb) ... Unpacking replacement locales ... 
Processing triggers for man-db ... Setting up locales (2.13+git20120306-3) ... /bin/bash: warning: setlocale: LC_ALL: cannot change locale ( (unset)) Generating locales... en_AG.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_AU.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_BW.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_CA.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_DK.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_GB.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_HK.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_IE.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_IN.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_NG.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_NZ.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_PH.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_SG.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_US.ISO-8859-1... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_US.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_ZA.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_ZM.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done en_ZW.UTF-8... /usr/sbin/locale-gen: line 177: warning: setlocale: LC_ALL: cannot change locale ( (unset)) done Generation complete. ganesha@ubuntu:~$ How to correct this?
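For anyone hitting the same wall, here is a minimal sketch of the usual fix, assuming en_US.UTF-8 is the locale you actually want. The stray AU_AU.* values are not valid locales (they look like an accidental EN-to-AU text replacement) and are what perl keeps complaining about; they usually live in /etc/default/locale or ~/.pam_environment:

sudo locale-gen en_US.UTF-8                  # make sure the target locale exists
sudo update-locale LANG=en_US.UTF-8          # rewrite /etc/default/locale with a valid LANG
sudo sed -i '/AU_AU/d' /etc/default/locale   # drop any leftover AU_AU.* overrides (check ~/.pam_environment too)
# then log out and back in (or reboot) so the new settings take effect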

    Read the article

  • SharePoint Saturday Michigan 2010 Recap, Slides, and Photos

    - by Brian Jackett
This past weekend I attended SharePoint Saturday Michigan (SPSMI) in Ann Arbor, Michigan.  For those unfamiliar, SharePoint Saturday is a community-driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint.  This marked the third SharePoint Saturday I’ve attended and the second I’ve spoken at.  I believe today it was announced that about 210 people total attended the event.  I was very happy with the turnout, especially the ratio of male to female attendees.  Typically with computer-related conferences the ratio leans towards more males attending, but Peter Serzo (one of the conference organizers) and I both commented to each other that at the end of the day it appeared to be close to 40% women in the crowd.  So here’s my recap of the weekend. Arrival     Friday afternoon I drove up from Columbus, OH to Ann Arbor, MI and arrived around 4pm.  I was attempting to avoid the rush hour traffic and construction backups.  Turned out to be a good idea because other speakers coming up Friday got stuck on a highway which literally closed down in both directions due to a bad accident.  I was talking my friend Sean McDonough through the highway closing and this was the first time I had seen a solid black traffic line on Google Maps.  Most of us are familiar with Green, Yellow, and Red, but this line was black if that tells you how bad it got. Speaker “Dinner”     Fast forward a few hours and it was time for the speaker “dinner.”  I put “dinner” in quotes because with this night alone SPSMI set a new bar for nicest and most extravagant speaker appreciation events for SharePoint Saturday.  By tapping into some very influential contacts, the conference organizers were able to provide a truck limo (yep you heard right) with refreshments, access to an underground suite at the Palace of Auburn Hills, and courtside tickets to see the Detroit Pistons play that night.  Being a Michigan native I have to say that I was absolutely floored by this experience and very thankful to our conference organizers Peter, Sebastian, and Jesse along with Trillium Teamologies. Sessions     The actual conference started Saturday morning at 9am with the keynote by Rob Collie, who is the Microsoft program manager for PowerPivot.  The day continued and I attended the following sessions: Mike Watson (@mikewat) – “SharePoint 2010 Fight Night: Devs vs. Admins” Karl Swedeberg (@kswedberg) – “A Walk on the Client Side with jQuery“ [my session] Brian Jackett (@briantjackett) - “Real World Deployment of SharePoint 2007 Solutions” Jeff Willinger (@jwillie) - “Social Computing and Collaboration Inside and Outside the 4 Walls” Paul Schaeflein (@paulschaeflein) – “PowerShell for the SharePoint Developer” My Presentation     I had a great time presenting my session on Deploying SharePoint 2007 Solutions, but it wasn’t without its fair share of technical issues.  As my session was right after lunch I came in to my room 10 mins early to set up my laptop, slides, and demos.  As a quick background note, a few months ago I got an upgraded laptop from my company Sogeti and have been dual booting it between XP (factory installed) and Windows Server 2008 R2 w/ Hyper-V.  As such I had prepared all of my demo virtual machines to run under Hyper-V.  
About 3 minutes before my session was scheduled to start, though, it became apparent that I did not have the correct display drivers to connect Windows Server 2008 R2 to the projector…     As you can imagine this was a slight cause for concern as I was potentially going to be unable to give my presentation.  Luckily for me I usually prepare for such unforeseen issues and had my presentation and some spare VMs that would run on XP on my external hard drive.  Knowing this I rebooted my machine into XP and began my presentation without slides until about 5 mins into the session when everything was up and running on XP.  Despite this being the first time I gave this presentation I have to say it was one of my favorites I’ve given so far.  The audience was very engaged in the session and I received some great, positive feedback afterwards.  Thanks to all who attended my session, I appreciate it very much. Link to Presentation Files     For those of you who attended my session and would like my slides or demo PowerShell scripts they can be found on my SkyDrive at the link below.  Also, if you have a few minutes and wouldn’t mind rating my session I have this session posted on SpeakerRate.  As speakers we always appreciate any and all feedback attendees offer, so thank you if you are able to provide any. SkyDrive folder with session files Rate my SharePoint 2007 Solutions session   Picture Albums     For everyone else, here are my pictures from the weekend.  The first link is to my Facebook album, which supports tagging (I recommend this one).  The second is to my Live album if you care for higher-resolution images. http://www.facebook.com/album.php?aid=2154482&id=21905041&l=a3fb72ee8c View Full Album Conclusion     A big thank you goes out to all of the organizers, speakers, sponsors, and attendees of SPSMI.  As I’ve said so many times, without each and every one of you these events wouldn’t be possible.  I thoroughly enjoyed this trip back to my home state and presenting a new session.  For those interested in my upcoming schedule I will be giving two sessions on PowerShell at SharePoint Saturday Charlotte in April, helping plan Stir Trek: Iron Man Edition in May, and I’m submitting sessions to Day of .Net Ann Arbor in May as well.  Beyond that I haven’t planned out any travels.  Thanks for reading my recap.  Look forward to more technical posts now that I have a short break in conferences.         -Frog Out   links: Michigan image

    Read the article

  • Quick guide to Oracle IRM 11g: Server installation

    - by Simon Thorpe
Quick guide to Oracle IRM 11g index This is the first of a set of articles designed to assist with the successful installation, configuration and deployment of a document security solution using Oracle IRM. This article goes through a set of simple instructions which detail how to download, install and configure the IRM server, the starting point for building a document security solution. This article contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information. Contents Introduction Downloading the software Preparing a database Creating the schema WebLogic Server installation Installing Oracle IRM Introduction Because we are using Oracle Enterprise Linux in this guide, and before we get into the detail of IRM, I'd like to share some Linux tips to make life a bit easier. Use a 64-bit platform; IRM 11g runs just fine on a 32-bit server, but with 64-bit you will build a more future-proof service. Download and install the latest Java JDK package. Make sure you get the 64-bit version if you are on a 64-bit server. Configure Linux to use a good Yum server to simplify installing packages. For Oracle Enterprise Linux we maintain a great public Yum here. Have at least 20GB of free disk space on the partition where you intend to install the IRM server. The downloads are big; you extract them and then install, which quickly consumes disk space that you can easily recover by deleting the downloaded and extracted files afterwards. But it's nice to have spare disk space to keep these around in case you need to restart any part of the installation process. Downloading the software OK, so before you can do anything, you need the software install kits. Luckily Oracle allows you to freely download every technology we create. You'll need to get the following: Oracle WebLogic Server Oracle Database Oracle Repository Creation Utility (rcu) Oracle IRM server You can use Microsoft SQL Server 2005 or 2008; in this guide I've used Oracle RDBMS 11gR2 for Linux. Preparing the database I'm not going to go through the finer points of installing the database. There are many very good guides on installing the Oracle Database. However one thing I would suggest you think about is enabling TDE, network encryption and using Database Vault. These Oracle database security technologies are excellent for creating a complete end-to-end security solution. No point in going to all the effort to secure document access with IRM when someone can go directly to the database and assign themselves rights to documents. To understand this further, you can see a video of the IRM service using these database security technologies here. With a database up and running we need to create a schema to hold the IRM data. This schema contains the rights model, cryptographic keys, user account IDs and associated rights etc. Creating the IRM database schema Oracle provides the Repository Creation Utility (RCU), which builds your schema; extract the files from the rcu zip. Then in a terminal window; cd /oracle/install/rcu/bin ./rcu This will launch the Repository Creation Utility and you will be presented with the image to the right. Hit next and continue on to the next dialog. You are asked whether you are going to be creating a new schema or wish to drop an existing one; you just need to click next at this point to create a new schema. 
The RCU next needs to know where your database is, so you'll need the following details of your database instance. Below, for reference, is the information for my installation. Hostname: irm.oracle.demo Port: 1521 (This is the default TCP port for the Oracle Database) Service Name: irm.oracle.demo. Note this is not the SID, but the service name. Username: sys Password: ******** Role: SYSDBA And then select next. Because the RCU contains schemas for many of the Oracle technologies, you now need to select just the Oracle IRM schema to deploy. Open the section under "Enterprise Content Management" and tick the "Oracle Information Rights Management" component. Note that you also get the chance to select a prefix which defaults to "DEV" (for development). I usually change this to something that reflects my own install: PROD for a production system, INT for internal only, etc. The next step asks for the passwords for the schema users. We are only creating one schema here, so you just enter one password. Some brave souls store this password in an Excel spreadsheet, which is then secured using the IRM server you're about to install in this guide. Nearing the end of the schema creation is the mapping of the tablespaces to the schema. Note I had already set up a tablespace that was encrypted using TDE, and at this point I was able to select that tablespace by clicking in the "Default Tablespace" column. The next dialog confirms your actions, and clicking next causes it to create the schema and default data. After this you are presented with the completion summary. WebLogic Server installation The database is now ready and the next step is to install the application server. Oracle IRM 11g is a JEE application and currently supported only on Oracle WebLogic Server. So the next step is to get WebLogic Server installed, which is pretty easy. Depending on the version you download, you either run the binary or, for a 64-bit platform (like mine), run the following command. java -d64 -jar wls1033_generic.jar And in the resulting dialog hit next to start walking through the install. Next choose a directory into which you will install WebLogic Server. I like to change from the default and install into /oracle/. Then all my software goes into this one folder, all owned by the "oracle" user. The next dialog asks for your Oracle support information to ensure you are kept up to date. If you have an Oracle support account, enter your details, but for most evaluation systems I leave these fields blank. Again, for evaluation or development systems, I usually stick with the "Typical" install type which you are next asked for. Next you are asked for the JDK which will be used for the server. When installing from the generic jar on a 64-bit platform like in this guide, no JDK is bundled with the installer. But as you can see in the image on the right, it does a good job of detecting the one you've got installed. Defaults for the install directories are usually taken; no changes here, just click next. And finally we are ready to install: hit next, sit back and relax. Typically this takes about 10 minutes. After the install, do not run the quick start; we need to deploy the IRM install itself, from which we will create a new WebLogic domain. For now just hit done and let's move to the final step of the installation process. Installing Oracle IRM The last piece of the puzzle to getting your environment ready is to deploy the IRM files themselves. 
Unzip the Oracle Enterprise Content Management 11g zip file and it will create a Disk1 directory. Switch to this folder and in the console run ./runInstaller. This will launch the installer, which will also ask for the location of the JDK. Look at the image on the right for the detail. You should now see the first stage of the IRM installation. The dialog warns you need to have a WebLogic server installed and have created the schemas, but you've just done all that above (I hope) so we are ready to go. The installer now checks that you have all the required libraries installed and other system parameters are correct. Because nearly all of my development and evaluation installations have the database server on the same system, the installer passes these checks without issue... Next... Now choose where to install the IRM files; you must install into the same Middleware Home as the WebLogic Server installation you just performed. Usually the installer already defaults to this location anyway. I also tend to change the Oracle Home Directory to Oracle_IRM so it's clear this is just an IRM install. The summary page tells you about the space needed to deploy the files. Unfortunately the IRM install comes with all of the other Oracle ECM software; you can't just select the IRM files, everything gets deployed to disk and uses 1.6GB of space! Not fun, but Oracle has to package up similar technologies, otherwise we would have a very large number of installers to QA and manage; again, not fun. Hit Install, time for another drink, maybe a piece of cake or a donut... on a half-decent system this part of the install took under 10 minutes. Finally the installation of your IRM server is complete; click on finish, and the next phase is to create the WebLogic domain and start configuring your server. Now move on to the next article in this guide... configuring your IRM server ready to seal your first document.
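For quick reference, the three console launches described above collapse into the following sequence. This is only a sketch; the /oracle/install paths are illustrative and assume that is where you extracted the kits.

cd /oracle/install/rcu/bin && ./rcu                          # 1. create the IRM schema with the Repository Creation Utility
cd /oracle/install && java -d64 -jar wls1033_generic.jar     # 2. install WebLogic Server from the 64-bit generic jar
cd /oracle/install/Disk1 && ./runInstaller                   # 3. deploy the Oracle IRM (ECM) files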

    Read the article

  • Internet Explorer and Cookie Domains

    - by Rick Strahl
I've been bitten by some nasty issues today with regard to using a domain cookie as part of my FormsAuthentication operations. In the app I'm currently working on we need to have single sign-on that spans multiple sub-domains (www.domain.com, store.domain.com, mail.domain.com etc.). That's what a domain cookie is meant for - when you set the cookie with a Domain value of the base domain, the cookie stays valid for all sub-domains. I've been testing the app for quite a while and everything is working great. Finally I get around to checking the app with Internet Explorer and I start discovering some problems - specifically on my local machine using localhost. It appears that Internet Explorer (all versions) doesn't allow you to specify a domain of localhost, a local IP address or a machine name. When you do, Internet Explorer simply ignores the cookie. In my last post I talked about some generic code I created to parse out the base domain from the current URL so a domain cookie would automatically be used:

private void IssueAuthTicket(UserState userState, bool rememberMe)
{
    FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, userState.UserId,
        DateTime.Now, DateTime.Now.AddDays(10), rememberMe, userState.ToString());

    string ticketString = FormsAuthentication.Encrypt(ticket);
    HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticketString);
    cookie.HttpOnly = true;

    if (rememberMe)
        cookie.Expires = DateTime.Now.AddDays(10);

    var domain = Request.Url.GetBaseDomain();
    if (domain != Request.Url.DnsSafeHost)
        cookie.Domain = domain;

    HttpContext.Response.Cookies.Add(cookie);
}

This code works fine on all browsers but Internet Explorer, both locally and on full domains. And it also works fine for Internet Explorer with actual 'real' domains. However, this code fails silently for IE when the domain is localhost or any other local address. In that case Internet Explorer simply refuses to accept the cookie and fails to log in. Argh! The end result is that the solution above - trying to automatically parse the base domain - won't work, as local addresses end up failing. Configuration Setting Given this screwed-up state of affairs, the best solution to handle this is a configuration setting. Forms Authentication actually has a domain key that can be set for FormsAuthentication, so that's a natural choice for storing the domain name:

<authentication mode="Forms">
  <forms loginUrl="~/Account/Login"
         name="gnc"
         domain="mydomain.com"
         slidingExpiration="true"
         timeout="30"
         xdt:Transform="Replace"/>
</authentication>

Although I'm not actually letting FormsAuth set my cookie directly, I can still access the domain name from the static FormsAuthentication.CookieDomain property, by changing the domain assignment code to:

if (!string.IsNullOrEmpty(FormsAuthentication.CookieDomain))
    cookie.Domain = FormsAuthentication.CookieDomain;

The key is to only set the domain when actually running on a full authority, and to leave the domain key blank on the local machine to avoid the local address debacle. Note: if you want to see this fail with IE, set the domain to domain="localhost" and watch in Fiddler what happens. Logging Out When specifying a domain key for a login it's also vitally important that that same domain key is used when logging out. Forms Authentication will do this automatically for you when the domain is set and you use FormsAuthentication.SignOut(). 
If you use an explicit Cookie to manage your logins or some other persistent value, make sure that when you log out you also specify the domain. IOW, the expiring cookie you set for a 'logout' should match the same settings - name, path, domain - as the cookie you used to set the value.

HttpCookie cookie = new HttpCookie("gne", "");
cookie.Expires = DateTime.Now.AddDays(-5);

// make sure we use the same logic to release the cookie
var domain = Request.Url.GetBaseDomain();
if (domain != Request.Url.DnsSafeHost)
    cookie.Domain = domain;

HttpContext.Response.Cookies.Add(cookie);

I managed to get my code to do what I needed it to, but man I'm getting so sick and tired of fixing IE-only bugs. I spent most of the day today fixing a number of small IE layout bugs along with this issue, which took a bit of time to trace down. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET
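As an aside, the GetBaseDomain() helper referenced above lives in that earlier post, so here is a minimal sketch of what such an extension method might look like - treat it as illustrative rather than the exact code from that post, and note it doesn't handle two-part TLDs like .co.uk:

using System;
using System.Net;

public static class UriExtensions
{
    public static string GetBaseDomain(this Uri uri)
    {
        string host = uri.DnsSafeHost;

        // raw IP addresses have no base domain to extract
        IPAddress address;
        if (IPAddress.TryParse(host, out address))
            return host;

        // localhost, machine names and bare domains are returned as-is
        string[] tokens = host.Split('.');
        if (tokens.Length <= 2)
            return host;

        // keep the last two labels: www.store.domain.com -> domain.com
        return tokens[tokens.Length - 2] + "." + tokens[tokens.Length - 1];
    }
}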

    Read the article

  • How to configure a zone cluster on Solaris Cluster 4.0

    - by JuergenS
This is a short overview on how to configure a zone cluster on Solaris Cluster 4.0. This is a little bit different from Solaris Cluster 3.2/3.3 because Solaris Cluster 4.0 runs only on Solaris 11. The name of the zone cluster must be unique throughout the global Solaris Cluster and must be configured on a global Solaris Cluster. Please read all the requirements for zone clusters in the Solaris Cluster Software Installation Guide for SC4.0. For Solaris Cluster 3.2/3.3 please refer to my previous blog Configuration steps to create a zone cluster in Solaris Cluster 3.2/3.3. A. Configure the zone cluster into the already running global cluster Check whether the zone cluster can be created # cluster show-netprops to change the number of zone clusters use # cluster set-netprops -p num_zoneclusters=12 Note: 12 zone clusters is the default; values can be customized! Create a config file (zc1config) for the zone cluster setup e.g: Configure the zone cluster # clzc configure -f zc1config zc1 Note: If not using the config file, the configuration can also be done manually # clzc configure zc1 Check the zone configuration # clzc export zc1 Verify the zone cluster # clzc verify zc1 Note: The following message is a notice and comes up on several clzc commands Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc1"... Install the zone cluster # clzc install zc1 Note: Monitor the consoles of the global zone to see how the install proceeds! (The output is different on the nodes) It's very important that all global cluster nodes have the same set of ha-cluster packages installed! Boot the zone cluster # clzc boot zc1 Log in to the non-global zones of zone cluster zc1 on all nodes and finish the Solaris installation. # zlogin -C zc1 Check the status of the zone cluster # clzc status zc1 Log in to the non-global zones of zone cluster zc1 and configure the shell environment for root (for PATH: /usr/cluster/bin, for MANPATH: /usr/cluster/man) # zlogin -C zc1 If using an additional name service, configure /etc/nsswitch.conf of the zone cluster non-global zones. hosts: cluster files netmasks: cluster files Configure /etc/inet/hosts of the zone cluster zones Enter all the logical hosts of the non-global zones B. Add resource groups and resources to zone cluster Create a resource group in zone cluster # clrg create -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Note1: Use command # cluster status for zone cluster resource group overview. Note2: You can also run all commands for zone cluster in global cluster by adding the option -Z to the command. e.g: # clrg create -Z zc1 -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Set up the logical host resource for zone cluster In the global zone do: # clzc configure zc1 clzc:zc1 add net clzc:zc1:net set address=<zone-logicalhost-ip> clzc:zc1:net end clzc:zc1 commit clzc:zc1 exit Note: Check that the logical host is in the /etc/hosts file In zone cluster do: # clrslh create -g app-rg -h <zone-logicalhost> <zone-logicalhost>-rs Set up the storage resource for zone cluster Register HAStoragePlus # clrt register SUNW.HAStoragePlus Example1) ZFS storage pool In the global zone do: Configure zpool eg: # zpool create <zdata> mirror cXtXdX cXtXdX and # clzc configure zc1 clzc:zc1 add dataset clzc:zc1:dataset set name=zdata clzc:zc1:dataset end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check the setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p zpools=zdata app-hasp-rs Example2) HA filesystem In the global zone do: Configure the SVM diskset and SVM devices. 
and then # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/data clzc:zc1:fs set special=/dev/md/datads/dsk/d0 clzc:zc1:fs set raw=/dev/md/datads/rdsk/d0 clzc:zc1:fs set type=ufs clzc:zc1:fs add options [logging] clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check the setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/data app-hasp-rs Example3) Global filesystem as loopback file system In the global zone configure the global filesystem and add it to /etc/vfstab on all global nodes e.g.: /dev/md/datads/dsk/d0 /dev/md/datads/dsk/d0 /global/fs ufs 2 yes global,logging and then # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/zone/fs (zc-lofs-mountpoint) clzc:zc1:fs set special=/global/fs (globalcluster-mountpoint) clzc:zc1:fs set type=lofs clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check the setup with # clzc show -v zc1 In the zone cluster do: (Create a scalable rg if not already done) # clrg create -p desired_primaries=2 -p maximum_primaries=2 app-scal-rg # clrs create -g app-scal-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/zone/fs hasp-rs More details on adding storage are available in the Installation Guide for zone cluster Switch the resource group and resources online in the zone cluster # clrg online -eM app-rg # clrg online -eM app-scal-rg Test: Switch over the resource group in the zone cluster # clrg switch -n zonehost2 app-rg # clrg switch -n zonehost2 app-scal-rg Add supported dataservices to the zone cluster Documentation for SC4.0 is available here Example output: Appendix: To delete a zone cluster do: # clrg delete -Z zc1 -F + Note: Zone cluster uninstall can only be done if all resource groups are removed in the zone cluster. The command 'clrg delete -F +' can be used in the zone cluster to delete the resource groups recursively. # clzc halt zc1 # clzc uninstall zc1 Note: If the clzc command does not successfully uninstall the zone, then run 'zoneadm -z zc1 uninstall -F' on the nodes where zc1 is configured # clzc delete zc1
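To recap, the lifecycle of the zone cluster itself boils down to a handful of commands, all run from the global cluster (zc1 and zc1config are the names used throughout this walkthrough):

# clzc configure -f zc1config zc1   (define the zone cluster from the prepared config file)
# clzc verify zc1                   (sanity-check the configuration on all nodes)
# clzc install zc1                  (install the zone cluster)
# clzc boot zc1                     (boot it, then finish the Solaris install via 'zlogin -C zc1')
# clzc status zc1                   (confirm the zones are online)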

    Read the article

  • Minidlna Directory Issues

    - by Somnambulist
    I've done my searching and can't find an answer to THIS specific issue. I have my minidlna set up and running - but it's not really done properly. First off, when I open the server on my bluray player, all of my movies are listed twice - when they are certainly not saved on my external twice. Second, when I open the server - rather than reading "Movies" "TV" "Music", etc - It just mashes all of my movies, tv, and some other folders all together with no real organization. I never had this problem when I had my Windows set up, so I know it's something configured improperly more-so than my external drive giving me gruff. Here's my minidlna.conf file: # This is the configuration file for the MiniDLNA daemon, a DLNA/UPnP-AV media # server. # # Unless otherwise noted, the commented out options show their default value. # # On Debian, you can also refer to the minidlna.conf(5) man page for # documentation about this file. media_dir=/media/somnambulist/Ghost In You # This option can be specified more than once if you want multiple directories # scanned. # # If you want to restrict a media_dir to a specific content type, you can # prepend the directory name with a letter representing the type (A, P or V), # followed by a comma, as so: # * "A" for audio (eg. media_dir=A,/var/lib/minidlna/music) # * "P" for pictures (eg. media_dir=P,/var/lib/minidlna/pictures) # * "V" for video (eg. media_dir=V,/var/lib/minidlna/videos) # # WARNING: After changing this option, you need to rebuild the database. Either # run minidlna with the '-R' option, or delete the 'files.db' file # from the db_dir directory (see below). # On Debian, you can run, as root, 'service minidlna force-reload' instead. #media_dir=/var/lib/minidlna media_dir=V,/media/somnambulist/Ghost In You/Movies media_dir=V,/media/somnambulist/Ghost In You/TV media_dir=P,/home/somnambulist/Pictures # Path to the directory that should hold the database and album art cache. db_dir=/home/somnambulist/serverart # Path to the directory that should hold the log file. log_dir=/home/somnambulist/serverlog # Minimum level of importance of messages to be logged. # Must be one of "off", "fatal", "error", "warn", "info" or "debug". # "off" turns of logging entirely, "fatal" is the highest level of importance # and "debug" the lowest. #log_level=warn # Use a different container as the root of the directory tree presented to # clients. The possible values are: # * "." - standard container # * "B" - "Browse Directory" # * "M" - "Music" # * "P" - "Pictures" # * "V" - "Video" # if you specify "B" and client device is audio-only then "Music/Folders" will be used as root root_container=B # Network interface(s) to bind to (e.g. eth0), comma delimited. #network_interface= # IPv4 address to listen on (e.g. 192.0.2.1). #listening_ip= # Port number for HTTP traffic (descriptions, SOAP, media transfer). port=8200 # URL presented to clients. # The default is the IP address of the server on port 80. #presentation_url=http://example.com:80 # Name that the DLNA server presents to clients. friendly_name=Somnambulist Media Server # Serial number the server reports to clients. serial=12345678 # Model name the server reports to clients. #model_name=Windows Media Connect compatible (MiniDLNA) # Model number the server reports to clients. model_number=1 # Automatic discovery of new files in the media_dir directory. #inotify=yes # List of file names to look for when searching for album art. Names should be # delimited with a forward slash ("/"). 
album_art_names=Cover.jpg/cover.jpg/AlbumArtSmall.jpg/albumartsmall.jpg/AlbumArt.jpg/albumart.jpg/Album.jpg/album.jpg/Folder.jpg/folder.jpg/Thumb.jpg/thumb.jpg # Strictly adhere to DLNA standards. # This allows server-side downscaling of very large JPEG images, which may # decrease JPEG serving performance on (at least) Sony DLNA products. #strict_dlna=no # Support for streaming .jpg and .mp3 files to a TiVo supporting HMO. #enable_tivo=no # Notify interval, in seconds. #notify_interval=895 # Path to the MiniSSDPd socket, for MiniSSDPd support. #minissdpdsocket=/run/minissdpd.sock` And here's the error I get in terminal when I run: sudo service minidlna restart sudo service minidlna force-reload Force restart error: Restarting DLNA/UPnP-AV media server minidlna [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied] [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied] Force-reload error: Restarting DLNA/UPnP-AV media server minidlna [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied] [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied] rm: cannot remove ‘/home/somnambulist/serverart/files.db’: Permission denied rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Slumdog Millionaire/Slumdog.Millionaire.Cover.jpg’: Permission denied rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Zack and Miri Make a Porno/ZackAndMiriMakeAPornoCover.jpg’: Permission denied [2013/08/12 21:19:46] minidlna.c:744: warn: Failed to clean old file cache. [ OK ] I've spent hours on this at this point, read through various files - and even had a friend who is relatively Ubuntu-savvy try to help me via chat - no such luck. Thanks in advance for any help.
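Two things stand out in the posted configuration and logs: the daemon cannot read the media directories (the "Permission denied" errors), and the first, untyped media_dir entry overlaps the typed Movies/TV entries below it, which would index every file twice. A hedged sketch of one way to address both, assuming the daemon runs as the default minidlna user and reads /etc/minidlna.conf:

sudo chmod -R a+rX '/media/somnambulist/Ghost In You'   # make the media world-readable; X keeps directories traversable
# edit /etc/minidlna.conf and remove the first media_dir= line (the one without a V/P prefix)
sudo chown -R minidlna:minidlna /home/somnambulist/serverart /home/somnambulist/serverlog
sudo service minidlna force-reload                      # rebuild the database once permissions allow it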

    Read the article

  • I Didn’t Get You Anything…

    - by Bob Rhubart
Nearly every day this blog features a list of posts and articles written by members of the OTN architect community. But with Christmas just days away, I thought a break in that routine was in order. After all, if the holidays aren’t excuse enough for an off-topic post, then the terrorists have won. Rather than buy gifts for everyone -- which, given the readership of this blog and my budget, could amount to a cash outlay of upwards of $15.00 -- I thought I’d share a bit of holiday humor. I wrote the following essay back in the mid-90s, for a “print” publication that used “paper” as a content delivery system.  That was then. I’m older now, my kids are older, but my feelings toward the holidays haven’t changed… It’s New, It’s Improved, It’s Christmas! The holidays are a time of rituals. Some of these, like the shopping, the music, the decorations, and the food, are comforting in their predictability. Other rituals, like the shopping, the music, the decorations, and the food, can leave you curled into the fetal position in some dark corner, whimpering. How you react to these various rituals depends a lot on your general disposition and credit card balance. I, for one, love Christmas. But there is one Christmas ritual that really tangles my tinsel: the seasonal editorializing about how our modern celebration of the holidays pales in comparison to that of Christmas past. It's not that the old notions of how to celebrate the holidays aren't all cozy and romantic--you can't watch marathon broadcasts of "It's A Wonderful White Christmas Carol On Thirty-Fourth Street Story" without a nostalgic teardrop or two falling onto your plate of Christmas nachos. It's just that the loudest cheerleaders for "old-fashioned" holiday celebrations overlook the fact that way-back-when those people didn't have the option of doing it any other way. Dashing through the snow in a one-horse open sleigh? No thanks. When Christmas morning rolls around, I'm going to be mighty grateful that the family is going to hop into a nice warm Toyota for the ride over to grandma's place. I figure a horse-drawn sleigh is big fun for maybe fifteen minutes. After that you’re going to want Old Dobbin to haul ass back to someplace warm where the egg nog is spiked and the family can gather in the flickering glow of a giant TV and contemplate the true meaning of football. Chestnuts roasting on an open fire? Sorry, no fireplace. We've got a furnace for heat, and stuffing nuts in there voids the warranty. Any of the roasting we do these days is in the microwave, and I'm pretty sure that if you put chestnuts in the microwave they would become little yuletide hand grenades. Although, if you've got a snoot full of Yule grog, watching chestnuts explode in your microwave might be a real holiday hoot. Some people may see microwave ovens as a symptom of creeping non-traditional holiday-ism. But I'll bet you that if there were microwave ovens around in Charles Dickens' day, the Cratchits wouldn't have had to entertain an uncharacteristically giddy Scrooge for six or seven hours while the goose cooked. Holiday entertaining is, in fact, the one area that even the most severe critic of modern practices would have to admit has not changed since Tim was Tiny. A good holiday celebration, then as now, involves lots of food, free-flowing drink, and a gathering of friends and family, some of whom you are about as happy to see as a subpoena. 
Just as the Cratchits' Christmas was spent with a man who, for all they knew, had suffered some kind of head trauma, so the modern holiday gathering includes relatives or acquaintances whom, because they watch too many talk shows, and/or have poor personal hygiene, and/or fail to maintain scheduled medication, you would normally avoid like a plate of frosted botulism. But in the season of good will towards men, you smile warmly at the mystery uncle wandering around half-crocked with a clump of mistletoe dangling from the bill of his N.R.A. cap. Dickens' story wouldn't have become the holiday classic it has if, having spotted on their doorstep an insanely grinning, raw poultry-bearing, fresh-off-a-rough-night Scrooge, the Cratchits had pulled their shades and pretended not to be home. Which is probably what I would have done. Instead, knowing full well his reputation as a career grouch, they welcomed him into their home, and we have a touching story that teaches a valuable lesson about how the Christmas spirit can get the boss to pump up the payroll. Despite what the critics might say, our modern Christmas isn't all that different from those of long ago. Sure, the technology has changed, but that just means a bigger, brighter, louder Christmas, with lasers and holograms and stuff. It's our modern celebration of a season that even the least spiritual among us recognizes as a time of hope that the nutcases of the world will wake up and realize that peace on earth is a win/win proposition for everybody. If Christmas has changed, it's for the better. We should continue making Christmas bigger and louder and shinier until everybody gets it.  *** Happy Holidays, everyone!   del.icio.us Tags: holiday,humor Technorati Tags: holiday,humor

    Read the article

  • Error when trying to compile abgx360: C++ compiler cannot create executables

    - by era878
    I am trying to compile the abgx360 GUI. First I run home/eric/Desktop/abgx360-1.0.5/configure but I recieve this error: checking for C++ compiler default output file name... configure: error: C++ compiler cannot create executables See `config.log' for more details. Then i run make but I recieve this error: make: * No rule to make target `/home/eric/Desktop/abgx360-1.0.5/Makefile.am', needed by `/home/eric/Desktop/abgx360-1.0.5/Makefile.in'. Stop. Here is my 'config.log': This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by abgx360gui configure 1.0.2, which was generated by GNU Autoconf 2.61. Invocation command line was $ /home/eric/Desktop/abgx360gui-1.0.2/configure ## --------- ## ## Platform. ## ## --------- ## hostname = Eric-Desktop uname -m = x86_64 uname -r = 2.6.35-27-generic uname -s = Linux uname -v = #48-Ubuntu SMP Tue Feb 22 20:25:46 UTC 2011 /usr/bin/uname -p = unknown /bin/uname -X = unknown /bin/arch = unknown /usr/bin/arch -k = unknown /usr/convex/getsysinfo = unknown /usr/bin/hostinfo = unknown /bin/machine = unknown /usr/bin/oslevel = unknown /bin/universe = unknown PATH: /usr/local/sbin PATH: /usr/local/bin PATH: /usr/sbin PATH: /usr/bin PATH: /sbin PATH: /bin PATH: /usr/games ## ----------- ## ## Core tests. ## ## ----------- ## configure:1800: checking for a BSD-compatible install configure:1856: result: /usr/bin/install -c configure:1867: checking whether build environment is sane configure:1910: result: yes configure:1938: checking for a thread-safe mkdir -p configure:1977: result: /bin/mkdir -p configure:1990: checking for gawk configure:2020: result: no configure:1990: checking for mawk configure:2006: found /usr/bin/mawk configure:2017: result: mawk configure:2028: checking whether make sets $(MAKE) configure:2049: result: yes configure:2302: checking for g++ configure:2332: result: no configure:2302: checking for c++ configure:2332: result: no configure:2302: checking for gpp configure:2332: result: no configure:2302: checking for aCC configure:2332: result: no configure:2302: checking for CC configure:2332: result: no configure:2302: checking for cxx configure:2332: result: no configure:2302: checking for cc++ configure:2332: result: no configure:2302: checking for cl.exe configure:2332: result: no configure:2302: checking for FCC configure:2332: result: no configure:2302: checking for KCC configure:2332: result: no configure:2302: checking for RCC configure:2332: result: no configure:2302: checking for xlC_r configure:2332: result: no configure:2302: checking for xlC configure:2332: result: no configure:2360: checking for C++ compiler version configure:2367: g++ --version >&5 /home/eric/Desktop/abgx360gui-1.0.2/configure: line 2368: g++: command not found configure:2370: $? = 127 configure:2377: g++ -v >&5 /home/eric/Desktop/abgx360gui-1.0.2/configure: line 2378: g++: command not found configure:2380: $? = 127 configure:2387: g++ -V >&5 /home/eric/Desktop/abgx360gui-1.0.2/configure: line 2388: g++: command not found configure:2390: $? = 127 configure:2413: checking for C++ compiler default output file name configure:2440: g++ conftest.cpp >&5 /home/eric/Desktop/abgx360gui-1.0.2/configure: line 2441: g++: command not found configure:2443: $? = 127 configure:2481: result: configure: failed program was: | /* confdefs.h. 
*/ | #define PACKAGE_NAME "abgx360gui" | #define PACKAGE_TARNAME "abgx360gui" | #define PACKAGE_VERSION "1.0.2" | #define PACKAGE_STRING "abgx360gui 1.0.2" | #define PACKAGE_BUGREPORT "" | #define PACKAGE "abgx360gui" | #define VERSION "1.0.2" | /* end confdefs.h. */ | | int | main () | { | | ; | return 0; | } configure:2488: error: C++ compiler cannot create executables See `config.log' for more details. ## ---------------- ## ## Cache variables. ## ## ---------------- ## ac_cv_env_CCC_set= ac_cv_env_CCC_value= ac_cv_env_CC_set= ac_cv_env_CC_value= ac_cv_env_CFLAGS_set= ac_cv_env_CFLAGS_value= ac_cv_env_CPPFLAGS_set= ac_cv_env_CPPFLAGS_value= ac_cv_env_CPP_set= ac_cv_env_CPP_value= ac_cv_env_CXXFLAGS_set= ac_cv_env_CXXFLAGS_value= ac_cv_env_CXX_set= ac_cv_env_CXX_value= ac_cv_env_LDFLAGS_set= ac_cv_env_LDFLAGS_value= ac_cv_env_LIBS_set= ac_cv_env_LIBS_value= ac_cv_env_build_alias_set= ac_cv_env_build_alias_value= ac_cv_env_host_alias_set= ac_cv_env_host_alias_value= ac_cv_env_target_alias_set= ac_cv_env_target_alias_value= ac_cv_path_install='/usr/bin/install -c' ac_cv_path_mkdir=/bin/mkdir ac_cv_prog_AWK=mawk ac_cv_prog_make_make_set=yes ## ----------------- ## ## Output variables. ## ## ----------------- ## ACLOCAL='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run aclocal-1.10' AMDEPBACKSLASH='' AMDEP_FALSE='' AMDEP_TRUE='' AMTAR='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run tar' AUTOCONF='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run autoconf' AUTOHEADER='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run autoheader' AUTOMAKE='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run automake-1.10' AWK='mawk' CC='' CCDEPMODE='' CFLAGS='' CPP='' CPPFLAGS='' CXX='g++' CXXDEPMODE='' CXXFLAGS='' CYGPATH_W='echo' DEFS='' DEPDIR='' ECHO_C='' ECHO_N='-n' ECHO_T='' EGREP='' EXEEXT='' GREP='' INSTALL_DATA='${INSTALL} -m 644' INSTALL_PROGRAM='${INSTALL}' INSTALL_SCRIPT='${INSTALL}' INSTALL_STRIP_PROGRAM='$(install_sh) -c -s' LDFLAGS='' LIBOBJS='' LIBS='' LTLIBOBJS='' MAKEINFO='${SHELL} /home/eric/Desktop/abgx360gui-1.0.2/missing --run makeinfo' OBJEXT='' PACKAGE='abgx360gui' PACKAGE_BUGREPORT='' PACKAGE_NAME='abgx360gui' PACKAGE_STRING='abgx360gui 1.0.2' PACKAGE_TARNAME='abgx360gui' PACKAGE_VERSION='1.0.2' PATH_SEPARATOR=':' SET_MAKE='' SHELL='/bin/bash' STRIP='' VERSION='1.0.2' WX_CFLAGS='' WX_CFLAGS_ONLY='' WX_CONFIG_PATH='' WX_CPPFLAGS='' WX_CXXFLAGS='' WX_CXXFLAGS_ONLY='' WX_LIBS='' WX_LIBS_STATIC='' WX_RESCOMP='' WX_VERSION='' ac_ct_CC='' ac_ct_CXX='' am__fastdepCC_FALSE='' am__fastdepCC_TRUE='' am__fastdepCXX_FALSE='' am__fastdepCXX_TRUE='' am__include='' am__isrc=' -I$(srcdir)' am__leading_dot='.' am__quote='' am__tar='${AMTAR} chof - "$$tardir"' am__untar='${AMTAR} xf -' bindir='${exec_prefix}/bin' build_alias='' datadir='${datarootdir}' datarootdir='${prefix}/share' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' dvidir='${docdir}' exec_prefix='NONE' host_alias='' htmldir='${docdir}' includedir='${prefix}/include' infodir='${datarootdir}/info' install_sh='$(SHELL) /home/eric/Desktop/abgx360gui-1.0.2/install-sh' libdir='${exec_prefix}/lib' libexecdir='${exec_prefix}/libexec' localedir='${datarootdir}/locale' localstatedir='${prefix}/var' mandir='${datarootdir}/man' mkdir_p='/bin/mkdir -p' oldincludedir='/usr/include' pdfdir='${docdir}' prefix='NONE' program_transform_name='s,x,x,' psdir='${docdir}' sbindir='${exec_prefix}/sbin' sharedstatedir='${prefix}/com' sysconfdir='${prefix}/etc' target_alias='' ## ----------- ## ## confdefs.h. 
## ## ----------- ## #define PACKAGE_NAME "abgx360gui" #define PACKAGE_TARNAME "abgx360gui" #define PACKAGE_VERSION "1.0.2" #define PACKAGE_STRING "abgx360gui 1.0.2" #define PACKAGE_BUGREPORT "" #define PACKAGE "abgx360gui" #define VERSION "1.0.2" configure: exit 77
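The config.log tells the story: every "g++: command not found" line means no C++ compiler is installed, so configure cannot build even its tiny test program (hence exit 77). A minimal sketch of the usual remedy on Ubuntu, assuming the source lives where the log says it does:

sudo apt-get install build-essential g++   # install gcc/g++ and the usual build toolchain
cd /home/eric/Desktop/abgx360gui-1.0.2     # run configure from inside the source tree
./configure && make

Running ./configure from inside the source directory, rather than via an absolute path from elsewhere, may also avoid the "No rule to make target ... Makefile.am" error, which is typical of a half-configured out-of-tree build.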

    Read the article

  • Talking JavaOne with Rock Star Martijn Verburg

    - by Janice J. Heiss
JavaOne Rock Stars, conceived in 2005, are the top-rated speakers at each JavaOne Conference. They are awarded by their peers, who, through conference surveys, recognize them for their outstanding sessions and speaking ability. Over the years many of the world’s leading Java developers have been so recognized. Martijn Verburg has, in recent years, established himself as an important mover and shaker in the Java community. His “Diabolical Developer” session at the JavaOne 2011 Conference got people’s attention by identifying some of the worst practices Java developers are prone to engage in. Among other things, he is co-leader and organizer of the thriving London Java User Group (JUG) which has more than 2,500 members, co-represents the London JUG on the Executive Committee of the Java Community Process, and leads the global effort for the Java User Group “Adopt a JSR” and “Adopt OpenJDK” programs. Career highlights include overhauling technology stacks and SDLC practices at Mizuho International, mentoring Oracle on technical community management, and running offshore development teams for AIG. He is currently CTO at jClarity, a start-up focusing on automating optimization for Java/JVM-related technologies, and Product Advisor at ZeroTurnaround. He co-authored, with Ben Evans, "The Well-Grounded Java Developer" published by Manning and, as a leading authority on technical team optimization, he is in high demand at major software conferences. Verburg is participating in five sessions, a busy man indeed. Here they are: CON6152 - Modern Software Development Antipatterns (with Ben Evans) UGF10434 - JCP and OpenJDK: Using the JUGs’ “Adopt” Programs in Your Group (with Csaba Toth) BOF4047 - OpenJDK Building and Testing: Case Study—Java User Group OpenJDK Bugathon (with Ben Evans and Cecilia Borg) BOF6283 - 101 Ways to Improve Java: Why Developer Participation Matters (with Bruno Souza and Heather Vancura-Chilson) HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Kirk Pepperdine, Ellen Kraffmiller and Henri Tremblay) When I asked Verburg about the biggest mistakes Java developers tend to make, he listed three: A lack of communication -- Software development is far more a social activity than a technical one; most projects fail because of communication issues and social dynamics, not because of a bad technical decision. Sadly, many developers never learn this lesson. No source control -- Developers simply storing code in local filesystems and emailing code in order to integrate Design-driven Design -- The need for some developers to cram every design pattern from the Gang of Four (GoF) book into their source code All of which raises the question: If these practices are so bad, why do developers engage in them? “I've seen a wide gamut of reasons,” said Verburg, who lists them as: * They were never taught at high school/university that their bad habits were harmful. * They weren't mentored in their first professional roles. * They've lost passion for their craft. * They're being deliberately malicious! * They think software development is a technical activity and not a social one. * They think that they'll be able to tidy it up later. A couple of key confusions and misconceptions beset Java developers, according to Verburg. “With Java and the JVM in particular I've seen a couple of trends,” he remarked. “One is that developers think that the JVM is a magic box that will clean up their memory, make their code run fast, as well as make them cups of coffee. 
The JVM does help in a lot of cases, but bad code can and will still lead to terrible results! The other trend is to try and force Java (the language) to do something it's not very good at, such as rapid web development. So you get a proliferation of overly complex frameworks, libraries and techniques trying to get around the fact that Java is a monolithic, statically typed, compiled, OO environment. It's not a Golden Hammer!”I asked him about the keys to running a good Java User Group. “You need to have a ‘Why,’” he observed. “Many user groups know what they do (typically, events) and how they do it (the logistics), but what really drives users to join your group and to stay is to give them a purpose. For example, within the LJC we constantly talk about the ‘Why,’ which in our case is several whys:* Re-ignite the passion that developers have for their craft* Raise the bar of Java developers in London* We want developers to have a voice in deciding the future of Java* We want to inspire the next generation of tech leaders* To bring the disparate tech groups in London together* So we could learn from each other* We believe that the Java ecosystem forms a cornerstone of our society today -- we want to protect that for the futureLooking ahead to Java 8 Verburg expressed excitement about Lambdas. “I cannot wait for Lambdas,” he enthused. “Brian Goetz and his group are doing a great job, especially given some of the backwards compatibility that they have to maintain. It's going to remove a lot of boiler plate and yet maintain readability, plus enable massive scaling.”Check out Martijn Verburg at JavaOne if you get a chance, and, stay tuned for a longer interview yours truly did with Martijn to be publish on otn/java some time after JavaOne. Originally published on blogs.oracle.com/javaone.
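As an aside, the boilerplate Verburg is excited to lose is easy to see in code. Here is a minimal sketch (my own illustration, not from the interview): the same "sort by name length" comparison written first as a pre-Java 8 anonymous inner class and then as a Java 8 lambda.

import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class LambdaSketch {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Martijn", "Ben", "Brian");

        // Pre-Java 8: an anonymous inner class just to express "compare by length"
        Collections.sort(names, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Java 8: the same comparison as a lambda -- the ceremony is gone
        names.sort((a, b) -> Integer.compare(a.length(), b.length()));

        System.out.println(names); // [Ben, Brian, Martijn]
    }
}

One intent, one line: that is the readability-plus-less-boilerplate trade Verburg describes.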

    Read the article

  • That Escalated Quickly

    - by Jesse Taber
Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspx

I have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of "escalating communication". In this context "escalating" means taking a conversation from one mode of communication to a different, higher-fidelity mode of communication. Here are the five modes of communication I use at work, in order of increasing fidelity:

1. Email – This is the "lowest fidelity" mode of communication that I use. I usually only check it a few times a day (and I'm trying to check it even less frequently than that), and I only keep items in my inbox if they represent an item I need to take action on that I haven't tracked anywhere else.

2. Forums / Message boards – Being a developer, I've gotten into the habit of having other people look over my code before it becomes part of the product I'm working on. These code reviews often happen in "real time" via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I've made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum-like style. Some teams I've worked on also liked using tools like Trello or Google Groups to have ongoing conversations about a topic or task that was being worked on.

3. Chat & Instant Messaging – Chat and instant messaging are the real workhorses for communication on the remote teams I've been a part of. I know some teams that are co-located that also use them pretty extensively for quick messages that don't warrant walking across the office to talk with someone but require more immediacy than an email. For the purposes of this post I think it's important to note that the terms "chat" and "instant messaging" might insinuate that the conversation is happening in real time, but that's not always true. Modern chat and IM applications maintain a searchable history, so people can easily see what might have been discussed while they were away from their computers.

4. Voice, Video and Screen sharing – Everyone's got a camera and microphone on their computers now, and there is an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I'm including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that's happening on their screen. Obviously, this mode of communication is much higher fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication.

5. In Person – No matter how great communication tools become, there's no substitute for meeting with someone face-to-face. However, opportunities for this kind of communication are few and far between when you work on a remote team.

When a conversation gets escalated, that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner rather than later. Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I'm becoming less convinced that escalating is always the right move.

Working Asynchronously

Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew's talk. During the talk Drew used the phrase "asynchronous communication" to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn't heard it before, but I think it perfectly describes the way in which remote teams often need to communicate. You don't always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you're talking to might not respond right away. You can't always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available.

Going back to my list from the beginning of this post for a second, I characterize items #1–3 as "asynchronous" modes of communication, while we could call items #4 and #5 "synchronous". When communication gets escalated it's almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I've become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons:

1. You can often find a higher-fidelity way to convey your message without holding a synchronous conversation.
2. Asynchronous modes of communication are (usually) persistent and searchable.

You Don't Have to Broadcast Live

Let's start with the first reason I've listed. A lot of times you feel like you need to escalate to synchronous communication because you're having difficulty describing something that you're seeing in words. You want to provide the people you're conversing with some audio-visual aids to help them understand the point that you're trying to make, and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen-sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you're seeing. If a screenshot won't work, taking a short screen recording while you narrate over it and posting the video to your forum or chat system, along with a text-based description of what's in the recording that can be searched for later, can be a great way to communicate effectively with your team asynchronously.

I Said What?!?

Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I've used the search feature of my team's chat application to find a conversation that happened several weeks or months ago to remember what was decided. Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to an in-person meeting or voice/video call. I'm becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision, because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren't terribly useful.

When and How To Escalate

I don't mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I'm working on. Also, meeting in person periodically is really important for remote teams. There's no way around the fact that sometimes it's easier to jump on a call and show someone my screen so they can see what I'm seeing. So when is it right to escalate? I think the simplest way to answer that is: when the communication starts to feel painful. Everyone's tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication.

When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication:

* Take notes – This is huge, and yet I've found that a lot of teams don't do this. If you're holding a meeting with more than 2 people you should have someone taking notes. Taking notes while participating in a meeting can be difficult, but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them.

* Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I've worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems reflect the latest decisions that were made as the work was being planned and executed. If you held a quick call with your team lead and decided that it wasn't worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way, when the folks from your QA team pick up the story to test a few days later, they'll know why the real-time validation isn't there without having to invoke yet another conversation about the work.

Communicating Well is Hard

At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You're right: it is more difficult. In order to communicate effectively this way you need to think very carefully about the message that you're trying to convey and craft it in a way that's easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea. Why wouldn't we want to do the thing that's easier?

Easier isn't always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner, you'll find that, over time, all of your communications get less painful because you don't need to re-iterate previously made points over and over again. If you communicate right the first time, you often don't need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You'll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article

  • Gene Hunt Says:

    - by BizTalk Visionary
    "She's as nervous as a very small nun at a penguin shoot"   "He's got fingers in more pies than a leper on a cookery course" "You so much as belch out of line and I'll have your scrotum on a barbed wire plate" "Let's go play slappyface" "your surrounded by armed barstewards" “Right, get out and find this murdering scum right now!” [pause] “Scratch that, we start 9am sharp tomorrow, it's beer-o-clock.” "So then Cartwright, you're such a good Detective.... Go and Detect me a packet of Garibaldies" "You're not the one who is going to have to knit himself a new arsehole after 25 years of aggressive male love in prison" “A dream for me is Diana Dors and a bottle of chip fat." “A dream for me is Diana Dors and a bottle of chip fat." “They reckon you've got concussion - but personally, I couldn't give a tart's furry cup if half your brains are falling out. Don't ever waltz into my kingdom playing king of the jungle.” “You great... soft... sissy... girlie... nancy... french... bender... Man-United supporting POOF!!” “Drugs eh? What's the point. They make you forget, make you talk funny, make you see things that aren't there. My old grandma got all of that for free when she had a stroke.” “He's Dead! It's quite serious!” “Fanny in the flat...Nice Work” “SoopaDoopa” “Tits in a Jumper!” “Drop your weapons! You are surrounded by armed bastards!” “It's 1973, almost dinnertime. I'm 'avin 'oops!” “Trust the Gene Genie!” “I wanna hump Britt Ekland...What're we gonna do...!” “Was that 'E' and you don't know the rest?! or you going 'Eeee, I Dunno'” “Good Girl! Prostate probe and no jelly. “ “Give over, it's nothing like Spain!” “I'll come over your houses and stamp on all your toys!” “The Wizard will sort it out. It's cos of the wonderful things he does” “Cartwright can jump up and down on his knackers!” “It's not a windup love, he really thinks like this!” “Women! You can't say two words to them” “I was thinking, maybe, a Berni Inn!” “If I wanted a bollocking for drinking too much...!” “Shhhh...hear that...that's the sound of this case being closed! “Chicken!? In a basket!?” “Seems a large quantity of cocaine...” “You probably thought he kept his cock in his keks!” “The tail-end of Rays demotion speech!” “Stephen Warren is gay!?” “You're a smart boy, use your initiative!” “Don't be such a Jessie!” “I find the idea of a bird brushing her teeth...!” “Never been tempted to the Magic talcum powder?” “Make sure she's got nice tits!” “You're more likely to find an ostrich with a plum up it's arse!” “Drink this lot under the table and have a pint on the way home!” “Never be a female Prime Minister!” “Pub? Pub! pub!.....Pub!” “Thou shalt not suck off rent boys!” “The number for the special clinic is on the notice board!” “If me uncle had tits, would he be me auntie!” “Got your vicars in a twist!” “We Done?!” “Your mates got balls...If they were any bigger he'd need a wheelbarrow!” “The Ending - from 'I want to go home' to the end music.”

    Read the article

  • Atmospheric Scattering

    - by Lawrence Kok
I'm trying to implement atmospheric scattering based on Sean O'Neil's algorithm, which was published in GPU Gems 2, but I'm having trouble getting the shader to work. My latest attempts resulted in: http://img253.imageshack.us/g/scattering01.png/ I've downloaded O'Neil's sample code from: http://http.download.nvidia.com/developer/GPU_Gems_2/CD/Index.html and made minor adjustments to the 'SkyFromAtmosphere' shader so that it would run in AMD RenderMonkey. In the images you can see that a form of banding occurs, with a blueish tone, but it is only applied to one half of the sphere; the other half is completely black. The banding also appears at the zenith instead of the horizon, and somehow I managed to get a pac-man shape. I would appreciate it if somebody could show me what I'm doing wrong.

Vertex Shader:

uniform mat4 matView;
uniform vec4 view_position;
uniform vec3 v3LightPos;

const int nSamples = 3;
const float fSamples = 3.0;

const vec3 Wavelength = vec3(0.650, 0.570, 0.475);
const vec3 v3InvWavelength = 1.0f / vec3(
    Wavelength.x * Wavelength.x * Wavelength.x * Wavelength.x,
    Wavelength.y * Wavelength.y * Wavelength.y * Wavelength.y,
    Wavelength.z * Wavelength.z * Wavelength.z * Wavelength.z);

const float fInnerRadius = 10;
const float fOuterRadius = fInnerRadius * 1.025;
const float fInnerRadius2 = fInnerRadius * fInnerRadius;
const float fOuterRadius2 = fOuterRadius * fOuterRadius;
const float fScale = 1.0 / (fOuterRadius - fInnerRadius);
const float fScaleDepth = 0.25;
const float fScaleOverScaleDepth = fScale / fScaleDepth;

const vec3 v3CameraPos = vec3(0.0, fInnerRadius * 1.015, 0.0);
const float fCameraHeight = length(v3CameraPos);
const float fCameraHeight2 = fCameraHeight * fCameraHeight;

const float fm_ESun = 150.0;
const float fm_Kr = 0.0025;
const float fm_Km = 0.0010;
const float fKrESun = fm_Kr * fm_ESun;
const float fKmESun = fm_Km * fm_ESun;
const float fKr4PI = fm_Kr * 4 * 3.141592653;
const float fKm4PI = fm_Km * 4 * 3.141592653;

varying vec3 v3Direction;
varying vec4 c0, c1;

float scale(float fCos)
{
    float x = 1.0 - fCos;
    return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
}

void main( void )
{
    // Get the ray from the camera to the vertex, and its length (which is the far point of the ray passing through the atmosphere)
    vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
    vec3 v3Pos = normalize(gl_Vertex.xyz) * fOuterRadius;
    vec3 v3Ray = v3CameraPos - v3Pos;
    float fFar = length(v3Ray);
    v3Ray = normalize(v3Ray);

    // Calculate the ray's starting position, then calculate its scattering offset
    vec3 v3Start = v3CameraPos;
    float fHeight = length(v3Start);
    float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight));
    float fStartAngle = dot(v3Ray, v3Start) / fHeight;
    float fStartOffset = fDepth * scale(fStartAngle);

    // Initialize the scattering loop variables
    float fSampleLength = fFar / fSamples;
    float fScaledLength = fSampleLength * fScale;
    vec3 v3SampleRay = v3Ray * fSampleLength;
    vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

    // Now loop through the sample rays
    for(int i = 0; i < nSamples; i++)
    {
        float fHeight = length(v3SamplePoint);
        float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
        float fLightAngle = dot(normalize(v3LightPos), v3SamplePoint) / fHeight;
        float fCameraAngle = dot(normalize(v3Ray), v3SamplePoint) / fHeight;
        float fScatter = (-fStartOffset + fDepth*( scale(fLightAngle) - scale(fCameraAngle)))/* 0.25f*/;
        vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI));
        v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
        v3SamplePoint += v3SampleRay;
    }

    // Finally, scale the Mie and Rayleigh colors and set up the varying variables for the pixel shader
    vec4 newPos = vec4((gl_Vertex.xyz + view_position.xyz), 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * vec4(newPos.xyz, 1.0);
    gl_Position.z = gl_Position.w * 0.99999;
    c1 = vec4(v3FrontColor * fKmESun, 1.0);
    c0 = vec4(v3FrontColor * (v3InvWavelength * fKrESun), 1.0);
    v3Direction = v3CameraPos - v3Pos;
}

Fragment Shader:

uniform vec3 v3LightPos;

varying vec3 v3Direction;
varying vec4 c0;
varying vec4 c1;

const float g = -0.90f;
const float g2 = g * g;
const float Exposure = 2;

void main(void)
{
    float fCos = dot(normalize(v3LightPos), v3Direction) / length(v3Direction);
    float fMiePhase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos*fCos) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
    gl_FragColor = c0 + fMiePhase * c1;
    gl_FragColor.a = 1.0;
}
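For anyone sanity-checking the fragment shader: the fMiePhase line appears to be the Mie phase-function approximation O'Neil uses in GPU Gems 2,

    F(theta, g) = (3 * (1 - g^2)) / (2 * (2 + g^2)) * (1 + cos^2 theta) / (1 + g^2 - 2g * cos theta)^(3/2)

with fCos standing in for cos theta and the leading 3/2 folded into the 1.5 constant. If that reading is right, the phase function matches the published text, which suggests the half-black sphere and zenith banding come from the vertex-stage ray and sample-point setup rather than from this shader.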

    Read the article

  • MySQL server stopped working after upgrade

    - by umpirsky
I upgraded to 12.04 and my MySQL server just stopped working. It throws:

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

I tried to reinstall it from the Software Center, but it fails with:

Package operation failed
The installation or removal of a software package failed.
installArchives() failed: Selecting previously unselected package mysql-server.
(Reading database ... 243412 files and directories currently installed.)
Unpacking mysql-server (from .../mysql-server_5.5.22-0ubuntu1_all.deb) ...
Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ...
start: Job failed to start
invoke-rc.d: initscript mysql, action "start" failed.
dpkg: error processing mysql-server-5.5 (--configure):
 subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
dpkg: dependency problems prevent configuration of mysql-server:
 mysql-server depends on mysql-server-5.5; however:
  Package mysql-server-5.5 is not configured yet.
dpkg: error processing mysql-server (--configure):
 dependency problems - leaving unconfigured
No apport report written because MaxReports is reached already
Errors were encountered while processing:
 mysql-server-5.5
 mysql-server
Error in function:
Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ...
start: Job failed to start
invoke-rc.d: initscript mysql, action "start" failed.
dpkg: error processing mysql-server-5.5 (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of mysql-server:
 mysql-server depends on mysql-server-5.5; however:
  Package mysql-server-5.5 is not configured yet.
dpkg: error processing mysql-server (--configure):
 dependency problems - leaving unconfigured

I also tried:

$ sudo apt-get install -f
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  mysql-server-5.5 mysql-server-core-5.5
Use 'apt-get autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ...
start: Job failed to start
invoke-rc.d: initscript mysql, action "start" failed.
dpkg: error processing mysql-server-5.5 (--configure):
 subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
 mysql-server-5.5
E: Sub-process /usr/bin/dpkg returned an error code (1)

Any idea?

EDIT: A crash report is being auto-generated.

EDIT: After trying and trying, I got a suggestion to do:

# apt-get --purge remove mysql-server-5.1 mysql-server-5.5
Reading package lists... Done
Building dependency tree
Reading state information... Done
Virtual packages like 'mysql-server-5.1' can't be removed
The following packages will be REMOVED:
  mysql-server-5.5*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 31.3 MB disk space will be freed.
Do you want to continue [Y/n]? Y
(Reading database ... 243407 files and directories currently installed.)
Removing mysql-server-5.5 ...
Purging configuration files for mysql-server-5.5 ...
Processing triggers for ureadahead ...
Processing triggers for man-db ...

The most important part is:

Virtual packages like 'mysql-server-5.1' can't be removed

Any idea?

    Read the article
