Search Results

Search found 3141 results on 126 pages for 'crashpoint zero'.


  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model.

    Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs.

    A word from Omnilogic: “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL

    Challenges
    - Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine
    - Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and data center space; cut maintenance costs; and improve return on investment
    - Demonstrate the power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes
    - Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities, which cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating the risk of unplanned outages
    - Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners

    Solutions
    - Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration
    - Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, and support seminars and to showcase live demonstrations for Oracle Partners
    - Proved how Oracle Exadata’s pre-engineered systems, which come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve their full performance potential immediately after go-live
    - Increased processing speeds 10-fold, with zero data loss, for a telecommunications provider’s client-facing customer relationship management solution
    - Achieved performance improvements of between 6 and 100 times for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms
    - Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata
    - Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience
    - Demonstrated that Oracle Exadata’s built-in analytics, data mining, and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations
    - Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allow 10 times more data to be stored than on competitor platforms
    - Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads onto fewer servers, which delivers greater power efficiency and lower operating costs than competing systems from IBM and other manufacturers
    - Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption
    - Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion
    - On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • Optional Parameters and Named Arguments in C# 4 (and a cool scenario w/ ASP.NET MVC 2)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    This is the seventeenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post covers two new language features being added to C# 4.0 – optional parameters and named arguments – as well as a cool way you can take advantage of optional parameters (both in VB and C#) with ASP.NET MVC 2.

    Optional Parameters in C# 4.0

    C# 4.0 now supports using optional parameters with methods, constructors, and indexers (note: VB has supported optional parameters for a while). Parameters are optional when a default value is specified as part of a declaration. For example, the post shows a method that takes two parameters – a “category” string parameter and a “pageIndex” integer parameter. The “pageIndex” parameter has a default value of 0, and as such is an optional parameter. When calling the method we can explicitly pass both arguments, or we can omit the second optional parameter – in which case the default value of 0 will be passed. Note that VS 2010’s Intellisense indicates when a parameter is optional, as well as what its default value is, when statement completion is displayed.

    Named Arguments and Optional Parameters in C# 4.0

    C# 4.0 also now supports the concept of “named arguments”. This allows you to explicitly name an argument you are passing to a method – instead of just identifying it by argument position. For example, I could explicitly identify the second argument passed to the GetProductsByCategory method by name (making its usage a little more explicit). Named arguments come in very useful when a method supports multiple optional parameters and you want to specify which arguments you are passing. Because the parameters are optional, in cases where only one (or zero) arguments are specified, the default value for any non-specified arguments is passed.

    ASP.NET MVC 2 and Optional Parameters

    One nice usage scenario where we can now take advantage of the optional parameter support of VB and C# is with ASP.NET MVC 2’s input binding support for action methods on controller classes. For example, consider a scenario where we want to map URLs like “Products/Browse/Beverages” or “Products/Browse/Deserts” to a controller action method. We could do this by writing a URL routing rule that maps the URLs to a Browse method. We could then optionally use a “page” querystring value to indicate whether or not the results displayed by the Browse method should be paged – and if so, which page of the results should be displayed. For example: /Products/Browse/Beverages?page=2.

    With ASP.NET MVC 1 you would typically handle this scenario by adding a “page” parameter to the action method and making it a nullable int (which means it will be null if the “page” querystring value is not present). You could then write code to convert the nullable int to an int – and assign it a default value if it was not present in the querystring. With ASP.NET MVC 2 you can now take advantage of the optional parameter support in VB and C# to express this behavior more concisely and clearly: simply declare the action method parameter as an optional parameter with a default value. If the “page” value is present in the querystring (e.g. /Products/Browse/Beverages?page=22) then it will be passed to the action method as an integer. If the “page” value is not in the querystring (e.g. /Products/Browse/Beverages) then the default value of 0 will be passed to the action method. This makes the code a little more concise and readable.

    Summary

    There are a bunch of great new language features coming to both C# and VB with VS 2010. The above two features (optional parameters and named arguments) are but two of them. I’ll blog about more in the weeks and months ahead. If you are looking for a good book that summarizes all the language features in C# (including C# 4.0), as well as providing a nice summary of the core .NET class libraries, you might also want to check out the newly released C# 4.0 in a Nutshell book from O’Reilly. It does a very nice job of packing a lot of content into an easy-to-search format with plenty of samples.

    Hope this helps,

    Scott
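    The code screenshots from the original post did not survive this excerpt, so here is a minimal sketch reconstructing the examples described above. The Product type, the method bodies, and the console output are illustrative assumptions; only the GetProductsByCategory and Browse signatures come from the text.

        using System;
        using System.Collections.Generic;

        public class Product { public string Name = ""; }

        public class ProductsService
        {
            // "pageIndex" declares a default value, so it is an optional parameter.
            public List<Product> GetProductsByCategory(string category, int pageIndex = 0)
            {
                Console.WriteLine("category={0}, pageIndex={1}", category, pageIndex);
                return new List<Product>();
            }
        }

        public class Demo
        {
            public static void Main()
            {
                var svc = new ProductsService();
                svc.GetProductsByCategory("Beverages", 2);            // both arguments supplied
                svc.GetProductsByCategory("Beverages");               // pageIndex defaults to 0
                svc.GetProductsByCategory("Beverages", pageIndex: 2); // named argument
            }
        }

    And the ASP.NET MVC 2 action method pattern the post describes, again as a fragment rather than a full controller:

        // public ActionResult Browse(string category, int page = 0)
        // {
        //     // "page" is 0 unless a ?page= querystring value was supplied
        // }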

    Read the article

  • My Mother Bought a Droid

    - by Ben Griswold
    I converted to iPhone two years ago when I left my former employer and my Blackberry behind. The truth is I half-heartedly purchased my iPhone. It was a great looking device, but as far as I was concerned, it was a mere toy compared to my Blackberry. I remember hiding the toy in my briefcase when attending business meetings because I didn’t consider it to be professional enough. I’ve since owned all three generations of the iPhone and, well, iPhone seems to have caught on. I still miss the click of the Blackberry keyboard and the blinking red light letting me know that someone/something requires my attention, but I’m officially an iPhone fanboy now.

    My mom called last weekend and asked if she should buy an iPhone. I talked her ear off about everything I love about iPhone. I went on for about twenty minutes. I couldn’t help myself. I mentioned everything from my podcast subscriptions to the application which manages my workouts. I went as far as to say that someday all smart phones will be referred to as iPhones just like all tissues are referred to as Kleenex and all sodas are referred to as Cokes. I was really on a roll and then I stopped. I had to…the call dropped. There I was, strategically standing in the far corner of my backyard where I get the most reliable AT&T reception, and the call drops in the middle of my iPhone pitch. Folks, I don’t care how good a salesperson you are, it’s tough to recover from a situation like this.

    I dialed my mom back and jokingly asked if she was planning to make calls with her new phone. I explained that AT&T is bound to provide better service eventually, but I’m not sure she should wait. After all, I have trouble with the network in San Diego and I can only imagine how bad it would be for her in Western Massachusetts. Mom called back a few days later exclaiming, “I bought a Droid! I love this phone! I haven’t done anything with it but make phone calls, but I love it.” I had to laugh. My mom made the right call (pun intended.)

    The iPhone is an amazing device, but owners are constantly reminded that its core function (it’s a phone, remember?) is subpar. If you love gadgets, you’re probably enthralled by iPhone’s many bells and whistles and, relatively speaking, the terrible phone service might not amount to much. (Maybe it amounts to a rant on your blog.) The overall iPhone offering is so attractive that consumers are willing to wait for AT&T to straighten up its act, or wait until Apple grants a choice of carriers. But I don’t see either of these remedies coming soon. In the interim, I’m willing to take my iPhone for what it is and just continue to enjoy my favorite features while pretending that poor coverage isn’t a big deal. With any luck, more and more reasonable folks will recognize that Android phones are legitimate players in the smart phone space, they will buy loads of them, and there will be plenty of functional phones to borrow when my “phone” is showing zero bars. Heck, I’m already covered when I visit my mom.

    Read the article

  • It’s ok to throw System.Exception…

    - by Chris Skardon
    No. No it’s not. It’s not just me saying that, it’s the Microsoft guidelines: http://msdn.microsoft.com/en-us/library/ms229007.aspx

    Do not throw System.Exception or System.SystemException. Also – as important: do not catch System.Exception or System.SystemException in framework code, unless you intend to re-throw.

    Throwing: Always, always try to pick the most specific exception type you can. If the parameter you have received in your method is null, throw an ArgumentNullException; value received greater than expected? ArgumentOutOfRangeException. For example:

        public void ArgChecker(int theInt, string theString)
        {
            if (theInt < 0)
                throw new ArgumentOutOfRangeException("theInt", theInt, "theInt needs to be greater than zero.");

            if (theString == null)
                throw new ArgumentNullException("theString");

            if (theString.Length == 0)
                throw new ArgumentException("theString needs to have content.", "theString");
        }

    Why do we want to do this? It’s a lot of extra code when compared with a simple:

        public void ArgChecker(int theInt, string theString)
        {
            if (theInt < 0 || string.IsNullOrWhiteSpace(theString))
                throw new Exception("The parameters were invalid.");
        }

    It all comes down to a couple of things: the catching of the exceptions, and the information you are passing back to the calling code.

    Catching: Ok, so let’s go with introduction-level exception handling, taught by many a university: you do all your work in a try clause, and catch anything wrong in the catch clause. So this tends to give us code like this:

        try { /* All the shizzle */ }
        catch { /* Deal with errors */ }

    But of course we can improve on that by catching the exception so we can report on it:

        try { }
        catch (Exception ex) { /* Log that 'ex' occurred? */ }

    Now we’re at the point where people tend to go: brilliant, I’ve got exception handling nailed, what next??? and code gets littered with the catch(Exception ex) nastiness. Why is it nasty? Let’s imagine for a moment our code is throwing an ArgumentNullException which we’re catching in the catch block and logging. Ok, the log entry has been made, so we can debug the code, right? We’ve got all the info… What about an OutOfMemoryException – what can we do with that? That’s right, not a lot; chances are you can’t even log it (you are out of memory after all), but you’ve caught it – and as such – have hidden it.

    So, as part of this, there are two things you can do. One is the rethrow method:

        try { /* code */ }
        catch (Exception ex)
        {
            // Log
            throw;
        }

    Note, it’s not catch (Exception ex) { throw ex; } as that will wipe all your important stack trace information. This does get your exception to continue, and is the only reason you would catch Exception (anywhere other than a global catch-all) in your code.

    The other, preferred method is to catch the exceptions you can deal with. It may not matter that the string I’m passing in is null, and I can cope with it like this:

        try { DoSomething(myString); }
        catch (ArgumentNullException) { }

    And that’s fine; it means that any exceptions I can’t deal with (OutOfMemory for example) will be propagated out to other code that can deal with them. Of course, this is horribly messy; no one wants try / catch blocks everywhere, and that’s why Microsoft added the ‘Try’ methods to the framework, and it’s a strategy we should continue. If I cast a boxed string to an int:

        object boxed = "one";
        int i = (int)boxed; // throws InvalidCastException at runtime

    I will get an InvalidCastException, which means I need the try / catch block, but I could mitigate this using the ‘TryParse’ method:

        int i;
        if (!Int32.TryParse("one", out i))
            return;

    Similarly, in the ‘DoSomething’ example, it might be beneficial to have a ‘TryDoSomething’ that returns a boolean value indicating success, as sketched below. Obviously this isn’t practical in every case, so use the ol’ common sense approach.

    Onwards: Yer thanks Chris, I’m looking forward to writing tonnes of new code. Fear not, that is where helpers come into it… (but that’s the next post)
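    A minimal sketch of the Try-pattern suggested above; DoSomething, its argument, and its failure condition are hypothetical stand-ins:

        using System;

        public static class Worker
        {
            public static void DoSomething(string input)
            {
                if (input == null)
                    throw new ArgumentNullException("input");
                // ... the real work would go here ...
            }

            // Try-pattern: returns false instead of throwing for the one
            // failure case the caller has chosen to tolerate.
            public static bool TryDoSomething(string input)
            {
                if (input == null)
                    return false;

                DoSomething(input);
                return true;
            }
        }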

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting here.

    My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed as a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved.

    The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP).

    .htaccess in root of domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC]
        RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301]

    .htaccess in staging subdomain on domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} ^staging.domain1.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration, as the visitor is potentially redirected twice. However, I find it to be a more granular method of control, as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!).

    If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn. This post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes), and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google).

    You can also extend the IP address detection. Firstly, by using wildcards - ^10\.2\.2\..* - where the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .*, so you can match an entire subnet. You can also use square-bracketed character classes, bearing in mind that a class matches a single character at a time, so numeric ranges have to be built up digit by digit: ^10\.2\.[1-9]?[0-9]\. matches third octets 0 through 99, and ^10\.2\.1[0-1][0-9]\. matches 100 through 119, for example. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1\.1\.1\.1|2\.2\.2\.2|3\.3\.3\.3)$ (note the escaped dots), and you can of course use character classes to substitute octets or single digits again.

    NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one, otherwise mod_rewrite will interpret them as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2..." - which is of course impossible! However, as far as I'm aware this isn't necessary if you're using the ! negator to specify "and is not...".

    Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. The AddedBytes cheat sheet's useful to pin up on your wall. Other referenced URLs (can't hyperlink them all due to spam protection; other regex checkers are available):

    internetofficer.com/seo-tool/regex-tester/
    fantomaster.com/faarticles/rewritingurls.txt
    addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/

    Read the article

  • Auto-cancel reason not found (6, 13906)

    - by Rajesh Sharma
    There are many errors in the application which are never invoked, because of appropriate application configuration done at the time of implementation by the solution architects. So typically, as an application end user, you would never stumble upon such errors. But what if the application administrator inadvertently changes the configuration/setup in the development, test, QA, or production environment? This is the time when you, as an end user, are introduced to a brand-new error for which you may not have a clue or understanding of what it means, and neither the access/privilege to rectify it.

    In this post we'll focus on one such error: '6, 13906 - Auto-cancel reason not found'.

    You get this error if you have not defined a Bill (Segment) Cancel Reason (Admin Menu, B, Bill Cancel Reason) code with a System Default value of Turn off auto-cancel.

    Consider a scenario when you are about to final bill an 'Account' for which the bill period's cut-off date you selected falls on or after the Service Agreement's (SA) end/stop date (basically, the SA is Stopped with a date earlier than it was billed previously). And for the same 'Account' either:

    - Bill segments exist that end after the SA's end date, OR
    - Non-closing bill segments exist that end on the SA's end date (OR closing bill segments that do not end on the SA's end date or do not exist at all - remember, a closing/final bill segment is generated if the SA is in Stopped status).

    CC&B detects such a scenario and attempts to cancel all such violating bill segments automatically, but NOT if you are generating the bill online. If online, the system assumes that you know what you are doing, and prompts you with error 2, 13716 - Bill segments that violate the SA (%1) End Date (%2) exist, to take necessary action. If in batch, the system automatically cancels these kinds of bill segment(s).

    Since this happens in the background, you have to define within the application which System Default Bill (Segment) cancellation reason code, identified as Turn off auto-cancel, should be used by the process when it attempts to cancel any such violating bill segments. (You already know that you cannot cancel a bill segment without giving a reason for cancellation.)

    So what exactly happens during batch billing? The bill segment generation routine first determines the billing eligibility of the service agreement being billed. One of the billing eligibility criteria is to check the SA's previous bill segments which have end dates greater than the current cut-off date/end date. Technically, the routine retrieves a count of such violating bill segments:

        SELECT COUNT(*)
          FROM CI_BSEG
         WHERE SA_ID = :SA-ID
           AND BSEG_STAT_FLG = '50'   -- Frozen
           AND END_DT IS NOT NULL
           AND (END_DT > '03-JUN-2010'                                 -- Bill segment ends after SA's end date
                OR (END_DT = '03-JUN-2010' AND CLOSING_BSEG_SW = 'N')) -- Non-closing bill segment ending on SA's end date

    If the count is greater than zero, the bill segment generation routine executes another program to auto-cancel such bill segments. The auto-cancel program retrieves the 'Bill Cancel Reason' code which is identified as Turn off auto-cancel. The retrieved cancel reason code is then placed on the bill segments that are being cancelled automatically. During this process, if the routine fails to determine the bill cancel reason code having the System Default Turn off auto-cancel because it has not been configured, you get bill exception 6, 13906 - Auto-cancel reason not found.

    Also note that duplicate or multiple System Default codes identified as Turn off auto-cancel are not allowed; CC&B would complain with error 2, 54201. The duplicate validation/check is also performed within the auto-cancel routine, in case, say for test purposes, you executed a DML statement updating CI_BILL_CAN_RSN.BSCAN_SYS_DFLT_FLG with a value 'T'.

    Read the article

  • Another sound not working post

    - by Thomas Smart
    Tried all the other "sound not working" posts I think, lost count: purge/reinstall alsa and pulse, reboot, add user to audio group, various lines in the alsa config file such as "options snd-hda-intel model=" (then tried different options like generic, auto, basic, default, etc.). Tried pulseaudio -k && sudo alsa force-reload a few times, with and without rebooting.

    Hardware: 16gb ram, core I7-4790, Intel Haswell mboard with onboard sound and graphics. Multimedia: Audio Adapter: HDA-Intel-HDA Intel HDMI. OS: Ubuntu server 14.04 with ubuntu-desktop installed.

    GUI sound settings lists only the dummy sound card.

    alsamixer -c 0 (TUI box, condensed):

        Card: HDA Intel HDMI
        Chip: Intel Haswell HDMI
        View: [Playback]
        Item: S/PDIF  [OO]  < S/PDIF >

    aplay -l

        **** List of PLAYBACK Hardware Devices ****
        card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    aplay -L

        default
            Playback/recording through the PulseAudio sound server
        null
            Discard all samples (playback) or generate zero samples (capture)
        pulse
            PulseAudio Sound Server
        hdmi:CARD=HDMI,DEV=0
            HDA Intel HDMI, HDMI 0
            HDMI Audio Output
        dmix:CARD=HDMI,DEV=3
            HDA Intel HDMI, HDMI 0
            Direct sample mixing device
        dsnoop:CARD=HDMI,DEV=3
            HDA Intel HDMI, HDMI 0
            Direct sample snooping device
        hw:CARD=HDMI,DEV=3
            HDA Intel HDMI, HDMI 0
            Direct hardware device without any conversions
        plughw:CARD=HDMI,DEV=3
            HDA Intel HDMI, HDMI 0
            Hardware device with all software conversions

    cat /proc/asound/cards

        0 [HDMI ]: HDA-Intel - HDA Intel HDMI
                   HDA Intel HDMI at 0xf7d14000 irq 46

    cat /proc/asound/devices

        1:        : sequencer
        2: [ 0- 3]: digital audio playback
        3: [ 0- 0]: hardware dependent
        4: [ 0]   : control
        33:       : timer

    mplayer -ao alsa:device=hdmi /usr/share/sounds/ubuntu/stereo/system-ready.ogg

        MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team
        mplayer: could not connect to socket
        mplayer: No such file or directory
        Failed to open LIRC support. You will not be able to use your remote control.
        Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg.
        libavformat version 54.20.4 (external)
        Mismatching header version 54.20.3
        libavformat file format detected.
        [lavf] stream 0: audio (vorbis), -aid 0
        Load subtitles in /usr/share/sounds/ubuntu/stereo/
        ==========================================================================
        Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
        libavcodec version 54.35.0 (external)
        AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400)
        Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis)
        ==========================================================================
        [AO_ALSA] alsa-lib: confmisc.c:768:(parse_card) cannot find card '1'
        [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory
        [AO_ALSA] alsa-lib: confmisc.c:392:(snd_func_concat) error evaluating strings
        [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
        [AO_ALSA] alsa-lib: confmisc.c:1251:(snd_func_refer) error evaluating name
        [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
        [AO_ALSA] alsa-lib: conf.c:4727:(snd_config_expand) Evaluate error: No such file or directory
        [AO_ALSA] alsa-lib: pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM hdmi
        [AO_ALSA] Playback open error: No such file or directory
        Failed to initialize audio driver 'alsa:device=hdmi'
        Could not open/initialize audio device -> no sound.
        Audio: no sound
        Video: no video
        Exiting... (End of file)

    mplayer -ao alsa:device=hw=0.3 /usr/share/sounds/ubuntu/stereo/system-ready.ogg

        MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team
        mplayer: could not connect to socket
        mplayer: No such file or directory
        Failed to open LIRC support. You will not be able to use your remote control.
        Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg.
        libavformat version 54.20.4 (external)
        Mismatching header version 54.20.3
        libavformat file format detected.
        [lavf] stream 0: audio (vorbis), -aid 0
        Load subtitles in /usr/share/sounds/ubuntu/stereo/
        ==========================================================================
        Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
        libavcodec version 54.35.0 (external)
        AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400)
        Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis)
        ==========================================================================
        [AO_ALSA] Format floatle is not supported by hardware, trying default.
        AO: [alsa] 44100Hz 2ch s16le (2 bytes per sample)
        Video: no video
        Starting playback...
        A: 0.4 (00.4) of 0.8 (00.7) 0.1%
        Exiting... (End of file)

    Thank you for your time and help :)

    Read the article

  • A SharePoint Developer’s Toolchest

    - by Sahil Malik
    When we develop for SharePoint, we end up using many tools, third party or Microsoft, to facilitate our development. What are some of your favorite tools? Mine are as below:

    1. Reflector: When I saw Reflector, I was pretty convinced that a tool better and more useful than it didn’t exist. Well, I was wrong! Redgate took over Reflector and they still offer it as a free version, but they have a paid version called Reflector Pro. It lets you debug third party source code, as if you had the source code. Brilliant! Who needs documentation anymore when you have real code?

    2. ULS Viewer: It is no secret, reading ULS logs is a pain in the rear. Well, not so with ULS Viewer, which does work with SharePoint 2007 as well. But it’s just way cooler with SharePoint 2010. You know the error screen SharePoint 2010 shows you when something goes wrong. The ULS Viewer will allow you to set filtering criteria, allowing you to immediately zero in on an error, across multiple WFEs even. There are also numerous other facilities built into the tool, such as advanced filtering, critical error notifications, etc. A must-have! You can read the documentation of the ULS Viewer here.

    3. SPDisposeCheck: Did you know that the MySite object is strange? What is strange about it? That you have to dispose it even if you didn’t create it!? Well, who the hell remembers all that! Honestly, I do! And you should too. But there is a tool to help you sanitize your code, and that is SPDisposeCheck. You run it against your DLL or EXE, and it will give you suggestions on where you might have missed calling dispose on an object. You still have to use your head, but having this tool helps.

    4. DebugView: Debugging for SharePoint can be difficult sometimes. Sometimes your breakpoints don’t get hit. And while you can try and make them hit, it is sometimes easier to just write a bunch of Debug.WriteLines and catch them from an external application such as DebugView, which will capture all the Debug.WriteLine output from your code (a quick sketch of this appears at the end of this entry).

    5. BGInfo: One annoying thing about SharePoint projects: it causes the number of servers to multiply like bunnies. As I’m RDP’ing into many computers trying to diagnose a crazy issue, sometimes it becomes hard to remember which machine is which. BGInfo puts all that on the wallpaper, along with a bunch of other useful info.

    6. WSPBuilder: SharePoint 2007 only, but I think there may be a version for SP2010 coming later. I think the VS2010 tools for SP2010 development are quite nice, so WSPBuilder, well, so far I don’t miss it. But let's see what WSPBuilder for 2010 brings – I haven’t seen it yet. However, I want to confidently assert that WSPBuilder for SP2007 is simply awesome.

    7. SharePoint Manager: The SharePoint Manager 2010 is a SharePoint object model explorer. It enables you to browse every site on the local farm and view every property. It also enables you to change the properties. The VS2010 dev tools now include a server explorer, which shows you a subset of properties in read-only mode. I would LOVE to see SharePoint Manager-like functionality built into VS2010. SharePoint Manager, a total must-have.

    Comment on the article ....
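    As promised above, a minimal sketch of the Debug.WriteLine approach; the class and the messages are hypothetical, and the only API used is System.Diagnostics.Debug.WriteLine:

        using System.Diagnostics;

        public class FeedProcessor
        {
            public void Process(int itemId)
            {
                // Debug.WriteLine emits to the debug output stream (in DEBUG builds);
                // DebugView running on the server captures it without a debugger attached.
                Debug.WriteLine("Process() entered, itemId=" + itemId);

                // ... the actual work ...

                Debug.WriteLine("Process() finished, itemId=" + itemId);
            }
        }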

    Read the article

  • Best Practices for Handing over Legacy Code

    - by PersonalNexus
    In a couple of months a colleague will be moving on to a new project and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code. But this book, as well as most questions on legacy code I have found so far, is concerned with the case of inheriting code as-is. In this case I actually have access to the original developer and we do have some time for an orderly hand-over.

    Some background on the piece of code I will be inheriting:

    - It's functioning: There are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not too distant future.
    - Undocumented: There is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well-understood, because I have been writing against its API (as a black-box) for years.
    - Only higher-level integration tests: There are only integration tests testing proper interaction with other components via the API (again, black-box).
    - Very low-level, optimized for speed: Because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records).
    - Concurrent and lock-free: While I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, this adds another layer of complexity.
    - Large codebase: This particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me.
    - Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question, as I believe this type of problem to be language-agnostic.

    I was wondering how the time until his departure would best be spent. Here are a couple of ideas:

    - Get everything to build on my machine: Even though everything should be checked into source code control, who hasn't forgotten to check in a file once in a while? So this should probably be the first order of business.
    - More tests: While I would like more class-level unit tests, so that any bugs I introduce when making changes can be caught early on, the code as it is now is not testable (huge classes, long methods, too many mutual dependencies).
    - What to document: I think for starters it would be best to focus documentation on those areas in the code that would otherwise be difficult to understand, e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting, but are actually optimizations that have been put in there for a good reason that I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I).
    - How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions, accompanied by some prose, would be best.
    - Who should document: I was wondering what would be better: to have him write the documentation, or to have him explain it to me so that I can write the documentation. I am afraid that things that are obvious to him but not to me would otherwise not be covered properly.
    - Refactoring using pair-programming: This might not be possible to do due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he is still around to provide input on why things are the way they are.

    Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize.

    Read the article

  • Why Cornell University Chose Oracle Data Masking

    - by Troy Kitch
    One of the eight Ivy League schools, Cornell University found itself in the unfortunate position of having to inform over 45,000 University community members that their personal information had been breached when a laptop was stolen. To ensure this wouldn’t happen again, Cornell took steps to ensure that data used for non-production purposes is de-identified with Oracle Data Masking. A recent podcast highlights why organizations like Cornell are choosing Oracle Data Masking to irreversibly de-identify production data for use in non-production environments. Organizations often copy production data, that contains sensitive information, into non-production environments so they can test applications and systems using “real world” information. Data in non-production has increasingly become a target of cyber criminals and can be lost or stolen due to weak security controls and unmonitored access. Similar to production environments, data breaches in non-production environments can cost millions of dollars to remediate and cause irreparable harm to reputation and brand. Cornell’s applications and databases help carry out the administrative and academic mission of the university. They are running Oracle PeopleSoft Campus Solutions that include highly sensitive faculty, student, alumni, and prospective student data. This data is supported and accessed by a diverse set of developers and functional staff distributed across the university. Several years ago, Cornell experienced a data breach when an employee’s laptop was stolen.  Centrally stored backup information indicated there was sensitive data on the laptop. With no way of knowing what the criminal intended, the university had to spend significant resources reviewing data, setting up service centers to handle constituent concerns, and provide free credit checks and identity theft protection services—all of which cost money and took time away from other projects. To avoid this issue in the future Cornell came up with several options; one of which was to sanitize the testing and training environments. “The project management team was brought in and they developed a project plan and implementation schedule; part of which was to evaluate competing products in the market-space and figure out which one would work best for us.  In the end we chose Oracle’s solution based on its architecture and its functionality.” – Tony Damiani, Database Administration and Business Intelligence, Cornell University The key goals of the project were to mask the elements that were identifiable as sensitive in a consistent and efficient manner, but still support all the previous activities in the non-production environments. Tony concludes,  “What we saw was a very minimal impact on performance. The masking process added an additional three hours to our refresh window, but it was well worth that time to secure the environment and remove the sensitive data. I think some other key points you can keep in mind here is that there was zero impact on the production environment. Oracle Data Masking works in non-production environments only. 
    Additionally, the risk of exposure has been significantly reduced and the impact to business was minimal.”

    With Oracle Data Masking, organizations like Cornell can:

    - Make application data securely available in non-production environments
    - Prevent application developers and testers from seeing production data
    - Use an extensible template library and policies for data masking automation
    - Gain the benefits of referential integrity so that applications continue to work

    Listen to the podcast to hear the complete interview. Learn more about Oracle Data Masking by registering to watch this SANS Institute Webcast and view this short demo.

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems.

    Oracle is enhancing its data integration offering by announcing the general availability of the 12c release for its key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering simplified, high-performance solutions for cloud, big data analytics, and real-time replication. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications.

    The Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under three key themes:

    - Future-Ready Solutions: supporting current and emerging initiatives
    - Extreme Performance: even higher performance than ever before
    - Fast Time-to-Value: higher IT productivity and simplified solutions

    With the new capabilities in Oracle Data Integrator 12c, customers can benefit from:

    - Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger.
    - Increased performance when executing data integration processes due to improved parallelism.
    - Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c.
    - Improved interoperability with Oracle Warehouse Builder, which enables faster and easier migration to Oracle Data Integrator’s strategic data integration offering.
    - Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications’ latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion.
    - Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance.

    Only Oracle GoldenGate provides best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from:

    - Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes, via a new Coordinated Delivery feature for non-Oracle databases.
    - Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE v 15.7, MySQL NDB Clusters 7.2, and MySQL 5.6, as well as integration with Oracle Coherence.
    - Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover.
    - Enhanced security for credentials and encryption keys using Oracle Wallet.
    - Real-time replication for databases hosted on public cloud environments supported by third-party clouds.

    Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations:

    - Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low-overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training.
    - Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments.
    - Delivers real-time data for reporting, zero-downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce.
    - Oracle’s data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle’s fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine.

    Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new offering with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release.

    Resource Kits: Meet Oracle Data Integration 12c | Discover what's new with Oracle GoldenGate 12c

    The Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner-focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Stay Connected: Oracle Newsletters

    Read the article

  • TiVo Follow-up…Training Opportunities

    - by MightyZot
    A few posts ago I talked about my experience with TiVo Customer Service. While I didn’t receive bad service per se, I felt like the reps could have communicated better. I made the argument that it should be just as easy to leave a company as it is to engage with a company, even though my intention is to remain a TiVo fan.

    I worked for DataStorm Technologies in the early 90s. I pointed out to another developer that we were leaving files behind in our installations. My opinion was that, if the customer is uninstalling our application, there should be no trace of it left after uninstall except for the customer’s data. He replied with, “screw ‘em. They’re leaving us. Why do we care if we left anything behind?” Wow. Surely there is a lot of arrogance in that statement.

    Think about this…how often do you change your services, devices, or whatever? Personally, I change things up about once every two or three years. If I don’t change things up, I at least think about it. So, every two or three years there is an opportunity for you (as a vendor or business) to sell me something. (That opportunity actually exists all the time, because there are many of these two or three year periods overlapping.) Likewise, you have the opportunity to win back my business every two or three years as well. Customer service on exit is just as important as customer service during engagement because, every so often, you have another chance to gain back my loyalty. If you screw that up on exit, your chances are close to zero. In addition, you need to consider all of the potential or existing customers that are part of or affected by my social organizations.

    “Melissa” at TiVo gave me a call last week and set up some time to talk about my experience. We talked yesterday and she gave me a few moments to pontificate about my thoughts on the importance of a complete customer experience. She had listened to my customer support calls and agreed that I had made it clear that I intended to remain a TiVo customer even though suddenLink is handling my subscription. She said that suddenLink is a very important partner for them and, of course, they want to do everything they can to support TiVo / suddenLink customers. “Melissa” also said that they had turned this experience into a training opportunity for the reps involved. I hope that is true, because that “programmer arrogance” that I mentioned above (which was somewhat pervasive back then) may be part of the reason why that company is no longer around. Good job “Melissa”!

    And, like I said, I am still a TiVo fan. In fact, we love our new TiVo and many of the great new features. In addition, if you’re one of the two people that read these posts, please remember that these are just opinions. Your experiences may be, and likely will be, completely unique to you.

    Read the article

  • How do I install DB2 ODBC?

    - by Justin
    I have been trying, with no success, to install an IBM DB2 ODBC driver so that my PHP server can connect to a database. I've tried installing db2_connect and got all sorts of problems; I tried installing IBM i Access for Linux, and the RPM did not install right, nor did using alien yield any useful results. I've also tried the DB2 Runtime v8.1, with no success. If I attempt to run the rpm it claims I need dependencies that I can't find in apt-get. Yum is also not very helpful, as it appears I don't have any repositories installed or lists...

    Running the simple RPM gives me this result in the terminal:

        # rpm -ivh iSeriesAccess-7.1.0-1.0.x86_64.rpm
        rpm: RPM should not be used directly install RPM packages, use Alien instead!
        rpm: However assuming you know what you are doing...
        error: Failed dependencies:
            /bin/ln is needed by iSeriesAccess-7.1.0-1.0.x86_64
            /sbin/ldconfig is needed by iSeriesAccess-7.1.0-1.0.x86_64
            /bin/rm is needed by iSeriesAccess-7.1.0-1.0.x86_64
            /bin/sh is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libc.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libc.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libc.so.6(GLIBC_2.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libdl.so.2()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libdl.so.2(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libgcc_s.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libm.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libm.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libodbcinst.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libodbc.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libpthread.so.0()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libpthread.so.0(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libpthread.so.0(GLIBC_2.3.2)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            librt.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            librt.so.1(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libstdc++.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libstdc++.so.6(CXXABI_1.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
            libstdc++.so.6(GLIBCXX_3.4)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64

    Using alien and running the dpkg gives me these headaches:

        $ alien iSeriesAccess-7.1.0-1.0.x86_64.rpm --scripts
        # dpkg -i iseriesaccess_7.1.0-2_amd64.deb
        (Reading database ... 127664 files and directories currently installed.)
        Preparing to replace iseriesaccess 7.1.0-2 (using iseriesaccess_7.1.0-2_amd64.deb) ...
        Unpacking replacement iseriesaccess ...
        post uninstall processing for iSeriesAccess 1.0...upgrade
        /var/lib/dpkg/info/iseriesaccess.postrm: line 8: [: upgrade: integer expression expected
        Setting up iseriesaccess (7.1.0-2) ...
        post install processing for iSeriesAccess 1.0...configure
        iSeries Access ODBC Driver has been deleted (if it existed at all) because its usage count became zero
        odbcinst: Driver installed. Usage count increased to 1. Target directory is /etc
        odbcinst: Driver installed. Usage count increased to 3. Target directory is /etc
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    So it seems the files installed right - well, my ODBC driver shows up - but db2cli.ini is nowhere to be found.

    So, several questions. Is there a better alternative for connecting PHP to DB2, say an Ubuntu package I can just install? Can someone direct me to the steps that make my Ubuntu server work well with the RPM so I can build my DB2 instance? Also remember I'm connecting to an iSeries remotely. I'm not using the DB2 Express-C thing, even though I did try it to get the db2 PHP functions to work. And I don't have Zend, but I think I have every other package from the Ubuntu repositories. Help, thank you!

    Read the article

  • Oracle’s New Approach to Cloud-based Applications User Experiences

    - by Oracle OpenWorld Blog Team
    By Misha Vaughan It was an exciting Oracle OpenWorld this year for customers and partners, as they got to see what their input into the Oracle user experience research and development process has produced for cloud-delivered applications. The result of all this engagement and listening is a focus on simplicity, mobility, and extensibility. These were the core themes across Oracle OpenWorld sessions, executive roundtables, and analyst briefings given by Jeremy Ashley, Oracle's vice president of user experience. The highlight of every meeting with a customer featured the new simplified UI for Oracle’s cloud applications.    Attendees at some sessions and events also saw a vision of what is coming next in the Oracle user experience, and they gave direct feedback on whether this would help solve their business problems.  What did attendees think of what they saw this year? Rebecca Wettemann of Nucleus Research was part of  an analyst briefing on next-generation user experiences from Oracle. Here’s what she told CRM Buyer in an interview just after the event:  “Many of the improvements are incremental, which is not surprising, as Oracle regularly updates its application,” Rebecca Wettemann, vice president of Nucleus Research, told CRM Buyer. "Still, there are distinct themes to this latest set of changes. One is usability. Oracle Sales Cloud, for example, is designed to have zero training for onboarding sales reps, which it does," she explained. "It is quite impressive, actually—the intuitive nature of the application and the design work they have done with this goal in mind. The software uses as few buttons and fields as possible," she pointed out. "The sales rep doesn't have to ask, 'what is the next step?' because she can see what it is."  What else did we hear? Oracle OpenWorld is a time when we can take a broader pulse of our customers’ and partners’ concerns. This year we heard some common user experience themes on the following: · A desire to continue to simplify widely used self-service tasks · A need to understand how customers or partners could take some of the UX lessons learned on simplicity and mobility into their own custom areas and projects  · The continuing challenge of needing to support bring-your-own-device and corporate-provided mobile devices to end users · A desire to harmonize user experiences across platforms for specific business-use cases  What does this mean for next year? Well, there were a lot of things we could only show to smaller groups of customers in our Oracle OpenWorld usability labs and HQ lab tours, to partners at our Expo, and to analysts under non-disclosure agreements. But we used these events as a way to get some early feedback about where we are focusing for the year ahead. Attendees gave us a positive response: @bkhan Saw some excellent UX innovations at the expo “@usableapps: Great job @mishavaughan and @vinoskey on #oow13 UX partner expo!” @WarnerTim @usableapps @mishavaughan @vinoskey @ultan Thanks for an interesting afternoon definitely liked the UX tool kits for partners. You can expect Oracle to continue pushing themes of simplicity, mobility, and extensibility even more aggressively in the next year.  If you are interested to find out what really goes on in the UX labs, such as what we are doing with smartphones, tablets, heads-up displays, and the AppsLab robots, feel free to reach out to me for more information: Misha Vaughan or on Twitter: @mishavaughan.

    Read the article

  • Filtering data in LINQ with the help of where clause

    - by vik20000in
    LINQ brings with it the superpower of querying objects, databases, XML, SharePoint, and nearly any other data structure. The power of LINQ lies in the fact that it is managed code that lets you write SQL-style queries to fetch data. Whenever we work with data we need a way to filter it based on different conditions. In this post we will look at some of the different ways in which we can filter data in LINQ with the help of the where clause.

    Simple filter for an array. Let's say we have an array of numbers and we want to filter it based on some condition. Below is an example:

        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
        var lowNums =
            from num in numbers
            where num < 5
            select num;

    Filter based on one property of a class. With the help of LINQ we can also filter a list based on the value of some property:

        var soldOutProducts =
            from prod in products
            where prod.UnitsInStock == 0
            select prod;

    Filter based on multiple properties of a class:

        var expensiveInStockProducts =
            from prod in products
            where prod.UnitsInStock > 0 && prod.UnitPrice > 3.00M
            select prod;

    Filter based on the index of the item in the list. In the example below we filter data based on the index of the item in the list:

        string[] digits = { "zero", "one", "two", "three", "four", "five", "six" };
        var shortDigits = digits.Where((digit, index) => digit.Length < index);

    There are many other ways in which we can filter data in LINQ; in this post I have tried to show a few of them using the where clause. Vikram

    Read the article

  • Subscribable World Cup 2010 Calendar

    - by jamiet
    I bang on quite a lot on this blog about ways in which data can get published over the web, and one of the most interesting ways, in my opinion, of publishing data in a structured manner that is well understood is to use the iCalendar specification. There isn't much information in the world that doesn't have some concept of "when", so iCalendar is a great way of distributing that information. You have probably used iCalendar at some point without even knowing about it. All files with a .ics suffix are iCalendar format files, and that is why you can happily import them into Outlook, Hotmail Calendar, Google Calendar, etc., where they can be parsed and have the semantic data (when, where and who) extracted from them.

    Importing iCalendar format data is really only half the trick though; in my opinion the real value of iCalendar-formatted calendars is the ability to subscribe to them. Subscribing has a single benefit over importing, but that benefit is of massive importance: a subscriber to an iCalendar calendar can periodically check to see if any updates have been made and, if they have, automatically update the local copy. The real benefit to the user is the productivity gain: a single update to an iCalendar calendar means that all subscribers are automatically made aware of the change with zero effort on their part. As my former colleague Howard van Rooijen is fond of saying, "work smarter not harder"; nowhere is this edict more ably demonstrated than in subscribing versus importing calendars. If you want to read some more thoughts about iCalendar then go and read my past blog post Calendar syndication - My big hope for 2009's breakthrough technology, or better still go and seek out Jon Udell, who speaks very authoritatively on the subject of iCalendar.

    With iCalendar on my mind I was interested to discover (via Steve Clayton's blog post Download the world cup fixtures) that the BBC had made a .ics file available containing all of the matches in the upcoming World Cup. As you can probably guess, this file was made available so that it could be imported into your calendar of choice. It has one obvious downside though: right now nobody knows who is going to be playing in the knock-out stages, so the calendar contains only placeholder fixtures, with no teams named after 25th June.

    How much more useful would this calendar have been if the BBC had made it possible to subscribe to it instead? The calendar could then be updated with the teams for the knock-out stages as they become known, and every subscriber would have a permanently up-to-date record of all the fixtures in their calendar. Better still, the calendar could be updated with match results as well, or perhaps even a match report from the BBC sport pages; when calendars are made subscribable, a sea of opportunity opens up for distribution of information. So with that in mind I have decided to go one better than the BBC. I have imported their .ics into a brand new Hotmail calendar and made it publicly available at the following URLs:

    HTML: http://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/index.html
    iCalendar: webcal://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/calendar.ics

    The link you're really interested in is the second one - click on that and it should open up in your calendar software of choice. Or, if you want to view it in an online calendar such as Hotmail Calendar or Google Calendar, copy and paste that URL into the appropriate place.
    I shall endeavour to keep the calendar updated throughout the World Cup, and even if I don't you're no worse off than if you had imported the BBC's .ics file, so why not give it a try? If I do keep it up to date then you will have a permanent record of the 2010 World Cup available in your calendar. Forever. If you have your calendar synced to your smartphone then you'll be carrying match reports around with you without having to do a single thing. Surely that's worth a quick click, isn't it? If you have any thoughts let me have them in the comments below. Thanks for reading. @Jamiet
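    To make the subscribe-versus-import distinction concrete: an iCalendar feed is just a text file served over HTTP (the webcal:// scheme is simply http:// by another name), and "subscribing" means the calendar client periodically re-downloads and re-parses it. Below is a minimal Python sketch of that refresh step; the feed URL is a placeholder and the parser is deliberately naive (it ignores line folding, time zones, and most of the iCalendar grammar):

        # A minimal sketch of what a calendar client does on each refresh:
        # re-fetch the published .ics feed and re-read its events.
        import urllib.request

        ICS_URL = "http://example.com/world-cup-2010.ics"  # placeholder feed URL

        def fetch_events(url):
            """Download an iCalendar feed and yield (start, summary) pairs."""
            with urllib.request.urlopen(url) as resp:
                lines = resp.read().decode("utf-8", errors="replace").splitlines()
            start = summary = None
            for line in lines:
                if line.startswith("BEGIN:VEVENT"):
                    start = summary = None
                elif line.startswith("DTSTART"):
                    start = line.split(":", 1)[1]    # e.g. 20100625T140000Z
                elif line.startswith("SUMMARY"):
                    summary = line.split(":", 1)[1]  # e.g. the fixture name
                elif line.startswith("END:VEVENT"):
                    yield start, summary

        for when, what in fetch_events(ICS_URL):
            print(when, what)

    Run on a schedule, that loop is the whole subscription model: every change the publisher makes shows up on the next fetch, which is exactly why a subscribable calendar can grow team names and match results after publication while an imported copy stays frozen.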

    Read the article

  • Fill a Flash Drive with Portable Software using Lupo PenSuite

    - by Asian Angel
    A flash drive full of portable software is helpful to have along wherever you go. Lupo PenSuite lets you choose from three different versions to get the best fit for your everyday needs. Note: If running the full version you will need a 512 MB USB flash drive or larger.

    Using Lupo PenSuite

    The one window to watch for during the setup process is where you have the opportunity to add a specific language pack if needed. Outside of that, all you need to do is sit back and wait for the suite to be extracted. Note: Extraction times will vary based on version and extraction location. Here we browsed to our flash drive to extract it to. Once the setup process is complete, locate and double-click the Lupo_PenSuite.exe file. A one-time window presents the opportunity to start using the suite immediately, or to go directly into the options. When the suite is active you will have a new system tray icon that operates as a start menu button. At the bottom you can monitor the remaining room on your flash drive, and use the close button to exit the suite (it may display as a power button depending on the menu theme).

    A quick look at the setup inside the suite: there is a pre-configured area for organizing and storing your personal files. Prefer a classic style menu? Just select it in the options (Various tab) and enjoy a smaller, streamlined look. Note: You can also change the theme for the regular menu and add a user pic. The suite provides access to your portable software and online sites, so you get the best of both, as shown in the following examples. Websites open using the suite's portable Firefox install. VLC is ready to play your downloaded videos. The suite also has some very nice photo editing programs added in.

    Installing Additional Apps

    If one of your favorite programs is not included in your suite version, it only takes a few minutes to add it in. Go to the Additional Apps webpage, download the app(s), and extract them onto your hard drive. Note: The link for the additional apps webpage is provided below. Add the extracted app(s) to the MyApps folder in the suite's folder hierarchy, then click on ASuite in the suite's start menu. Drag and drop the portable app's exe file into the MyApps section in the ASuite window. Your new software's shortcut should display as shown here; close this window when finished. Checking the suite's start menu will show your new software ready to be used.

    Conclusion

    If you need a good portable software collection to carry with you on a flash drive then Lupo PenSuite is definitely worth a look. We tested Lupo PenSuite on XP, Vista, and Windows 7 and it works great on all three. Another popular choice is PortableApps, and you can check out our review of that too; the two are essentially the same thing, each just packaged differently.

    Links

    Download Lupo PenSuite (Full, Lite, & Zero versions) *Download links approximately one-third down the page.
    Download Additional Apps for Lupo PenSuite
    Download Additional Skins for Lupo PenSuite
    Start Menu View Video Tutorials *Has tutorial for easy updating of entire suite.

    Read the article

  • Ubuntu 11.04 and 10.04 hang with black screen while installing from USB disk

    - by Bill
    I've been trying to install Ubuntu 11.04 from a USB flash stick, and each time I try to boot from the USB key one of two things happens:

    A) The screen that asks you what you would like to do (e.g. run Ubuntu from the USB key or install it) shows up and the countdown to the default option starts, but as soon as I either touch the keyboard (sometimes I press Enter or the arrow keys to select an option) or the countdown gets to zero, the screen just locks up and nothing happens no matter how long I wait.

    B) When I boot from the USB key the screen flickers for a second and then goes black with a flashing white underscore at the top left corner. Again it doesn't matter how long I wait; nothing happens and pressing keys doesn't do a thing.

    The very first time I tried to install it I got a terminal-like screen that said something about a directory called 'casper' having an error of some sort. I have tried installing from USB using both 11.04 and 10.10. I'm about to try 10.04. I have read tons of forum posts about this but so far I haven't seen anything in the solutions that applies to me.

    My intention is to dual boot Windows 7 and Ubuntu. I must keep Windows as I am required to use Visual Studio for one of my college courses. Right now I'm using Wubi but I really want a full install. I can't use LVPM because it doesn't work with the version of Wubi I used, so now I'm thinking my best bet is to try to get a clean install working. I'd also convert Wubi to a full install, but there's no solution for that as far as I've read. So could someone tell me why this is happening, or if there's something I can do to get around the problem?

    I'm using a Gateway LT2802u netbook with an Intel Atom N455 processor, 1 GB RAM, an Intel Graphics Media Accelerator 3150 graphics card, and a 250 GB HDD. I don't have anything on my current Wubi install that I can't replace, so keep in mind when answering that I don't care if I lose my current settings and files from Wubi. Thanks everyone!

    UPDATE: I just answered my own question, so in case anyone else is having this same problem with similar hardware, here is what I did. When I first tried installing 11.04 I used the recommended universal installer tool to create the USB live/installation disk. That caused the original problem. Note that I had already downloaded the 11.04 ISO and did not use the included downloader from the USB creator. After that failed I used the same USB creator but had it download 10.10 for me. It also failed with the same issue. I repeated this process with UNetbootin as well, for both versions. Finally, I downloaded the Ubuntu 10.04 ISO and used the recommended USB creator once again. There was an error while creating the USB live install, so I reformatted the USB key as FAT32 and tried again; this time it created the USB key successfully. I then booted from the USB flash drive and selected "Install Ubuntu" (exact wording was different). It worked! It took me through the process shown in pictures on the Ubuntu website. I let it create the appropriate partitions for me and it simply worked.

    I did get a few errors while the system tried to restart after it installed. It hung on a terminal-like screen, but I pressed Enter and it restarted. I booted into Windows 7; it checked the disks as it sensed that I had messed with a partition, then booted into Windows normally. Now I'm going to uninstall Wubi and update my new full install of Ubuntu! I'm excited to get the benefits of a full install now.
So in the end, hopefully someone can learn from what I did.

    Read the article

  • FairWarning Privacy Monitoring Solutions Rely on MySQL to Secure Patient Data

    - by Rebecca Hansen
    FairWarning® solutions have audited well over 120 billion events, each of which was processed and stored in a MySQL database. FairWarning is the world's leading supplier of privacy monitoring solutions for electronic health records, relied on by over 1,200 hospitals and 5,000 clinics to keep their patients' data safe. In January 2014, FairWarning was awarded the highest commendation in healthcare IT as the first ever Category Leader for Patient Privacy Monitoring in the "2013 Best in KLAS: Software & Services" report[1].

    FairWarning has used MySQL as its solutions' database from the company's start in 2005 through worldwide expansion and market leadership. FairWarning recently migrated its solutions from MyISAM to InnoDB and upgraded from MySQL 5.5 to 5.6. Following are some of the benefits they've seen as a result of those changes, and the reasons for their continued reliance on MySQL (from the FairWarning MySQL Case Study).

    Scalability to Handle Terabytes of Data. FairWarning's customers have a lot of data: on average, FairWarning customers receive over 700,000 events to be processed daily. Over 25% of their customers receive over 30 million events per day, which equates to over 1 billion events and nearly one terabyte (TB) of new data each month. Databases range in size from a few hundred GBs to 10+ TBs for enterprise deployments (data are rolled off after 13 months).

    Low or Zero Admin = Few DBAs. "MySQL has not required a lot of administration. After it's been tuned, configured, and optimized for size on initial setup, we have very low administrative costs. I can scale and add more customers without adding DBAs. This has had a big, positive impact on our business." - Chris Arnold, FairWarning Vice President of Product Management and Engineering.

    Performance Schema. As the size of FairWarning's customers has increased, so have their tables and data volumes. MySQL 5.6's new maintenance and management features have helped FairWarning keep up. In particular, the low-level metrics in the MySQL 5.6 performance schema have provided critical insight into how the system is performing and why.

    Support for Multi-CPU Threads. MySQL 5.6's support for multiple concurrent CPU threads, together with FairWarning's custom data loader, allows multiple files to be loaded into a single table simultaneously instead of one at a time. As a result, their data load time has been reduced by 500%.

    MySQL Enterprise Hot Backup. Because hospitals and clinics never stop, FairWarning solutions can't either. FairWarning changed from using mysqldump to MySQL Enterprise Hot Backup, which has reduced downtime, restore time, and storage requirements. For many of their larger customers, restore time has decreased by 80%.

    MySQL Enterprise Edition and Product Roadmap Provide a Complete Solution. "MySQL's product roadmap fully addresses our needs. We like the fact that MySQL Enterprise Edition has everything included; there's no need to purchase separate modules." - Chris Arnold

    Learn More:
    FairWarning MySQL Case Study
    Why MySQL 5.6 is an Even Better Embedded Database for Your Products presentation
    Updating Your Products to MySQL 5.6, Best Practices for OEMs on-demand webinar (audio and/or slides + Q&A transcript)
    MyISAM to InnoDB - Why and How on-demand webinar (audio and/or slides + Q&A transcript)
    Top 10 Reasons to Use MySQL as an Embedded Database white paper

    [1] 2013 Best in KLAS: Software & Services report, January 2014. © 2014 KLAS Enterprises, LLC. All rights reserved.
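    To make the performance schema point concrete, here is an illustrative sketch (our example, not something from the case study) that pulls the costliest statement digests out of a MySQL 5.6 server; it assumes the mysql-connector-python driver and uses placeholder credentials:

        # Hypothetical illustration: ask MySQL 5.6's performance schema which
        # statement digests are consuming the most time overall.
        import mysql.connector  # assumes mysql-connector-python is installed

        conn = mysql.connector.connect(
            host="db.example.com",  # placeholder host and credentials
            user="monitor",
            password="secret",
        )
        cur = conn.cursor()
        cur.execute(
            "SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT"
            " FROM performance_schema.events_statements_summary_by_digest"
            " ORDER BY SUM_TIMER_WAIT DESC LIMIT 5"
        )
        for digest, count, total_wait in cur:
            # SUM_TIMER_WAIT is measured in picoseconds
            print(f"{count:>10} calls, {total_wait / 1e12:9.2f}s total: "
                  f"{(digest or '')[:60]}")
        cur.close()
        conn.close()

    Queries like this are the kind of low-level visibility the post refers to: they show which workloads dominate a server without any external profiling.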

    Read the article

  • Is this simple XOR encrypted communication absolutely secure?

    - by user3123061
    Say Alice has a 4GB USB flash drive and Peter also has a 4GB USB flash drive. They meet once and save to both drives two files, alice_to_peter.key (2GB) and peter_to_alice.key (2GB), consisting of randomly generated bits. They never meet again and communicate only electronically. Alice maintains a variable called alice_pointer and Peter maintains a variable called peter_pointer, both initially set to zero. When Alice needs to send a message to Peter, they do:

        encrypted_message_to_peter[n] = message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n]

    where n is the n-th byte of the message. Then alice_pointer is attached to the beginning of the encrypted message, (alice_pointer + encrypted message) is sent to Peter, and alice_pointer is incremented by the length of the message (for maximum security, the used part of the key can also be erased). Peter receives the encrypted message, reads alice_pointer stored at its beginning, and does:

        message_to_peter[n] = encrypted_message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n]

    For maximum security, after reading the message he also erases the used part of the key. EDIT: In fact this step, with this simple algorithm (without integrity check and authentication), decreases security; see Paulo Ebermann's post below.

    When Peter needs to send a message to Alice, they perform the analogous steps with peter_to_alice.key and peter_pointer. With this trivial scheme they can send, each day for the next 50 years, 2GB / (50 * 365) = circa 115kB of encrypted data in both directions. If they need to send more data, they simply use larger storage for the keys; for example, with today's 2TB hard discs (1TB of keys in each direction) it is possible to exchange 60MB per day for the next 50 years. (That is practically a lot of data; with compression it is more than an hour of high-quality voice communication per day.)

    It seems to me that there is no way for an attacker to read the encrypted messages without the keys, even with an infinitely fast computer: by brute force they would obtain every possible message that fits the length of the ciphertext, but that is an astronomical number of messages, and the attacker does not know which of them is the actual one. Am I right? Is this communication scheme really absolutely secure? And if it is secure, does this communication method have its own name? (XOR encryption is well known, but what is the name of this concrete practical application that uses large storage on both communicating sides for keys? I humbly expect that this application was invented by someone before me.)

    Note: If it is absolutely secure, then that is amazing, because with today's low-cost large storage it is a practically much cheaper way of secure communication than expensive quantum cryptography, with equivalent security!

    EDIT: I think this will become more and more practical as storage costs fall. It could solve secure communication forever. Today you have no certainty that someone will not successfully attack an existing cipher a year from now and render its often expensive implementations insecure. In many cases, communication is preceded by a step where the communicating sides meet in person; that is the time to generate large keys. I think it is perfect for military communication, for example with submarines, which can have a hard drive with large keys installed while military headquarters keeps a drive for each submarine in the fleet. It could also be practical in everyday life, for example for access to your bank account, because you meet the bank in person when you create the account, etc.
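    For concreteness, here is a minimal Python sketch of one direction of the scheme described above (the framing and names are illustrative assumptions, not a vetted protocol; as the EDIT notes, a real design would also need integrity protection and authentication, e.g. a MAC, which this sketch deliberately omits):

        # Minimal sketch of the one-time-pad scheme from the question.
        # Illustrative only: no authentication or integrity check (see the EDIT).
        import os

        class PadSender:
            def __init__(self, key_bytes: bytes):
                self.key = key_bytes   # e.g. the contents of alice_to_peter.key
                self.pointer = 0       # alice_pointer

            def encrypt(self, message: bytes) -> bytes:
                if self.pointer + len(message) > len(self.key):
                    raise ValueError("pad exhausted; key material must never be reused")
                start = self.pointer
                cipher = bytes(m ^ k for m, k in
                               zip(message, self.key[start:start + len(message)]))
                self.pointer += len(message)
                # prepend the pointer so the receiver knows which key bytes to use
                return start.to_bytes(8, "big") + cipher

        def decrypt(key: bytes, frame: bytes) -> bytes:
            start = int.from_bytes(frame[:8], "big")
            cipher = frame[8:]
            return bytes(c ^ k for c, k in zip(cipher, key[start:start + len(cipher)]))

        # Usage: both sides hold an identical random pad (only 1 KB here, for demo).
        pad = os.urandom(1024)
        alice = PadSender(pad)
        frame = alice.encrypt(b"attack at dawn")
        assert decrypt(pad, frame) == b"attack at dawn"

    Decryption is the same XOR operation because XOR is its own inverse, and the 8-byte pointer prefix plays the role of the (alice_pointer + encrypted message) framing described in the question.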

    Read the article

  • Bash completion doesn't work, or is ignoring what I've typed; but works for commands

    - by Neil Traft
    Bash completion seems to be ignoring what I've typed (it tries to complete, but acts as if there's nothing under the cursor). I know I saw it work on this machine earlier today, but I'm not sure what has changed. Some examples:

    cd shows all directories under my current folder:

        $ cd co<tab><tab>
        cmake/  config/  doc/  examples/  include/  programs/  sandbox/  src/  .svn/  tests/

    Commands like ls and less show all files and directories under my current folder:

        $ ls co<tab><tab>
        cmake/  config/  .cproject  Doxyfile.in  include/  programs/  README.txt  src/  tests/
        CMakeLists.txt  COPYING.txt  doc/  examples/  mainpage.dox  .project  sandbox/  .svn/

    Even when I try to complete things from a different folder, it gives me only the results for my current folder (telling me that it is completely ignoring what I've typed):

        $ cd ~/D<tab><tab>
        cmake/  config/  doc/  examples/  include/  programs/  sandbox/  src/  .svn/  tests/

    But it seems to be working fine for commands and variables:

        $ if<tab><tab>
        if        ifconfig  ifdown    ifnames   ifquery   ifup
        $ echo $P<tab><tab>
        $PATH  $PIPESTATUS  $PPID  $PS1  $PS2  $PS4  $PWD  $PYTHONPATH

    I do have this bit in my .bashrc, and I have confirmed that my .bashrc is indeed getting sourced:

        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi

    I've even tried manually executing that file, but it doesn't fix the problem:

        $ . /etc/bash_completion

    There was even one point in time where it was working for ls, but was not working for cd ... but I can't replicate that result now.

    Update: I also just discovered that I have terminals open from earlier that still work. I ran source .bashrc in one of them and afterwards completion was broken. Here is my .bashrc:

        # ~/.bashrc: executed by bash(1) for non-login shells.
        # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
        # for examples
        #
        # Modified by Neil Traft
        #source ~/.profile

        # Allow globs to expand hidden files
        shopt -s dotglob nullglob

        # If not running interactively, don't do anything
        [ -z "$PS1" ] && return

        # don't put duplicate lines or lines starting with space in the history.
        # See bash(1) for more options
        HISTCONTROL=ignoreboth

        # append to the history file, don't overwrite it
        shopt -s histappend

        # for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
        HISTSIZE=1000
        HISTFILESIZE=2000

        # check the window size after each command and, if necessary,
        # update the values of LINES and COLUMNS.
        shopt -s checkwinsize

        # If set, the pattern "**" used in a pathname expansion context will
        # match all files and zero or more directories and subdirectories.
        #shopt -s globstar

        # make less more friendly for non-text input files, see lesspipe(1)
        [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

        # set variable identifying the chroot you work in (used in the prompt below)
        if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then
            debian_chroot=$(cat /etc/debian_chroot)
        fi

        # Color the prompt
        export PS1="\[$(tput setaf 2)\]\u@\h:\[$(tput setaf 5)\]\W\[$(tput setaf 2)\] $\[$(tput sgr0)\] "

        # enable color support of ls and also add handy aliases
        if [ -x /usr/bin/dircolors ]; then
            test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
            alias ls='ls --color=auto'
            #alias dir='dir --color=auto'
            #alias vdir='vdir --color=auto'
            alias grep='grep --color=auto'
            alias fgrep='fgrep --color=auto'
            alias egrep='egrep --color=auto'
        fi

        # Add an "alert" alias for long running commands. Use like so:
        #   sleep 10; alert
        alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'

        # Alias definitions.
        # You may want to put all your additions into a separate file like
        # ~/.bash_aliases, instead of adding them here directly.
        # See /usr/share/doc/bash-doc/examples in the bash-doc package.
        if [ -f ~/.bash_aliases ]; then
            . ~/.bash_aliases
        fi

        # enable programmable completion features (you don't need to enable
        # this, if it's already enabled in /etc/bash.bashrc and /etc/profile
        # sources /etc/bash.bashrc).
        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi

    Read the article

  • Recover Lost Form Data in Firefox

    - by Asian Angel
    Have you ever filled in a text area or form on a webpage and had something happen before you could finish it? If you like the idea of recovering that lost data then you will want to have a look at the Lazarus: Form Recovery extension for Firefox.

    Lazarus: Form Recovery in Action

    For our first example we chose the comment text box area for one of the articles here at the website. As you can see, we were not finished typing in the whole comment yet. Notice the Lazarus icon in the lower right corner. Note: We simulated accidental tab closures for our two examples. After getting our webpage opened up again, all of our text was gone. Right-clicking within the text area showed two options available: "Recover Text" and "Recover Form". Notice that our lost text was listed as a submenu; this could be extremely useful in matching up the appropriate text to the correct webpage if you had multiple tabs open before something happened. Click on the correct text listing to insert it. So easy to finish writing our comment without having to start from zero again.

    In our second example we chose the sign-up form page for the website. As before, we were not finished filling in the form. Getting the webpage opened back up showed the same problem as before: all the entered text was lost. This time we right-clicked in the browser window area and there was that wonderful "Recover Form" command waiting to be used. One click and all of our lost form data was back, and we were able to finish filling in the form. For those who may be interested, you can disable Lazarus: Form Recovery on individual websites using the context menu for the status bar icon.

    Options

    There are three sections in the options, and you should take a quick look through them to make any desired modifications to how Lazarus: Form Recovery functions. The first options area focuses on display and access for the extension. The second options area allows you to expand the type of data retained, enable removal of data within a given time frame, set up a password, disable search indexing, and enable form data retention while in Private Browsing Mode. The third options area focuses on the Lazarus database itself.

    Conclusion

    If you have ever lost text area or form data before, then you know how much time can be lost in starting over. Lazarus: Form Recovery provides a nice backup solution to get you up and running again with a minimum of effort.

    Links

    Download the Lazarus: Form Recovery extension (Mozilla Add-ons)
    Download the Lazarus: Form Recovery extension (Extension Homepage)

    Read the article

  • How to Use Sparklines in Excel 2010

    - by DigitalGeekery
    One of the cool features of Excel 2010 is the addition of Sparklines. A Sparkline is basically a little chart displayed in a cell, representing your selected data set, that allows you to quickly and easily spot trends at a glance.

    Inserting Sparklines on your Spreadsheet

    You will find the Sparklines group located on the Insert tab. Select the cell or cells where you wish to display your Sparklines, then select the type of Sparkline you'd like to add to your spreadsheet. You'll notice there are three types of Sparklines: Line, Column, and Win/Loss. We'll select Line for our example. A Create Sparklines dialog pops up and prompts you to enter the data range you are using to create the Sparklines. You'll notice that the location range (the range where the Sparklines will appear) is already filled in. You can type in the data range manually, or click and drag with your mouse to select it; this will auto-fill the data range for you. Click OK when you are finished, and you will see your Sparklines appear in the desired cells.

    Customizing Sparklines

    Select one or more of the Sparklines to reveal the Design tab. You can display certain value points, such as high and low points, negative points, and first and last points, by selecting the corresponding options from the Show group. You can also mark all value points by selecting Markers. Select your desired Sparklines and click one of the included styles from the Style group on the Design tab. Click the down arrow on the lower right corner of the box to display additional pre-defined styles, or select the Sparkline Color or Marker Color options to fully customize your Sparklines. The Axis options allow additional settings such as the Date Axis Type, plotting data left to right, and displaying an axis point to represent the zero line in your data with Show Axis.

    Column Sparklines

    Column Sparklines display your data in individual columns, as opposed to the Line view we've been using for our examples.

    Win/Loss Sparklines

    Win/Loss shows a basic positive or negative representation of your data set. You can easily switch between different Sparkline types by selecting the current cells (individually or as a group) and then clicking the desired type on the Design tab. For those who are more visually oriented, Sparklines can be a wonderful addition to any spreadsheet.

    Are you just getting started with Office 2010? Check out some of our other great Excel posts, such as how to copy worksheets, print only selected areas of a spreadsheet, and how to share data with Excel in Office 2010.

    Read the article

  • How to check if a cdrom is in the tray remotely (via ssh)?

    - by adempewolff
    I have a server running Ubuntu 10.04 (it's on the other side of the world and I haven't built up the wherewithal to upgrade it remotely yet) and I have been told that there is a CD in one of its two CD drives. I want to rip an image of the CD and then download it to my local computer (I don't need help with either of these steps). However, I cannot seem to confirm whether or not there actually is a CD in the drive as I was told. It did not automatically mount anywhere (which I'm thinking might just be a result of it being a headless server not running X, Nautilus, or any of the other nice user-friendly things). There are two CD drives connected via SCSI:

        austin@austinvpn:/proc/scsi$ cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: ATA      Model: WDC WD400EB-75CP  Rev: 06.0
          Type:   Direct-Access                     ANSI SCSI revision: 05
        Host: scsi1 Channel: 00 Id: 00 Lun: 00
          Vendor: Lite-On  Model: LTN486S 48x Max   Rev: YDS6
          Type:   CD-ROM                            ANSI SCSI revision: 05
        Host: scsi1 Channel: 00 Id: 01 Lun: 00
          Vendor: SAMSUNG  Model: CD-R/RW SW-248F   Rev: R602
          Type:   CD-ROM                            ANSI SCSI revision: 05

    However, when I try mounting either of these devices (and every other device that could possibly be the CD drive), it says no medium found:

        austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd1 /cdrom
        mount: no medium found on /dev/sr1
        austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd0 /cdrom
        mount: no medium found on /dev/sr0
        austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom /cdrom
        mount: no medium found on /dev/sr1
        austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom1 /cdrom
        mount: no medium found on /dev/sr0
        austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrw /cdrom
        mount: no medium found on /dev/sr1

    Here are the contents of my /dev folder:

        austin@austinvpn:/proc/scsi$ ls /dev
        agpgart loop6 ram6 tty10 tty38 tty8 austinvpn loop7 ram7 tty11 tty39 tty9
        block lp0 ram8 tty12 tty4 ttyS0 bsg mapper ram9 tty13 tty40 ttyS1
        btrfs-control mcelog random tty14 tty41 ttyS2 bus mem rfkill tty15 tty42 ttyS3
        cdrom net root tty16 tty43 urandom cdrom1 network_latency rtc tty17 tty44 usbmon0
        cdrw network_throughput rtc0 tty18 tty45 usbmon1 char null scd0 tty19 tty46 usbmon2
        console oldmem scd1 tty2 tty47 usbmon3 core parport0 sda tty20 tty48 usbmon4
        cpu_dma_latency pktcdvd sda1 tty21 tty49 vcs disk port sda2 tty22 tty5 vcs1
        dri ppp sda5 tty23 tty50 vcs2 ecryptfs psaux sg0 tty24 tty51 vcs3
        fb0 ptmx sg1 tty25 tty52 vcs4 fd pts sg2 tty26 tty53 vcs5
        full ram0 shm tty27 tty54 vcs6 fuse ram1 snapshot tty28 tty55 vcs7
        hpet ram10 snd tty29 tty56 vcsa input ram11 sndstat tty3 tty57 vcsa1
        kmsg ram12 sr0 tty30 tty58 vcsa2 log ram13 sr1 tty31 tty59 vcsa3
        loop0 ram14 stderr tty32 tty6 vcsa4 loop1 ram15 stdin tty33 tty60 vcsa5
        loop2 ram2 stdout tty34 tty61 vcsa6 loop3 ram3 tty tty35 tty62 vcsa7
        loop4 ram4 tty0 tty36 tty63 vga_arbiter loop5 ram5 tty1 tty37 tty7 zero

    And here is my fstab file:

        austin@austinvpn:/proc/scsi$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        /dev/mapper/austinvpn-root / ext4 errors=remount-ro 0 1
        # /boot was on /dev/sda1 during installation
        UUID=ed5520ae-c690-4ce6-881e-3598f299be06 /boot ext2 defaults 0 2
        /dev/mapper/austinvpn-swap_1 none swap sw 0 0

    Am I missing something or doing something wrong? Is there just no CD in the drive, or is the drive possibly broken? Is there any nice command to list devices with mountable media? Thanks in advance for any help!
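    One way to answer "is there a disc in the tray?" without mounting anything is to ask the drive itself through the CDROM_DRIVE_STATUS ioctl. The Python sketch below is a suggestion rather than something from the original thread; the constant values come from <linux/cdrom.h>, and the device names assume the kernel exposes the drives as /dev/sr0 and /dev/sr1 (as the mount errors above indicate):

        # Sketch: query each drive's tray status directly via ioctl,
        # instead of attempting a mount. Constants from <linux/cdrom.h>.
        import fcntl
        import os

        CDROM_DRIVE_STATUS = 0x5326
        STATUS_NAMES = {
            0: "CDS_NO_INFO (drive does not report status)",
            1: "CDS_NO_DISC",
            2: "CDS_TRAY_OPEN",
            3: "CDS_DRIVE_NOT_READY",
            4: "CDS_DISC_OK",
        }

        for device in ("/dev/sr0", "/dev/sr1"):
            # O_NONBLOCK lets us open the device even with no medium present
            fd = os.open(device, os.O_RDONLY | os.O_NONBLOCK)
            try:
                status = fcntl.ioctl(fd, CDROM_DRIVE_STATUS)
            finally:
                os.close(fd)
            print(device, "->", STATUS_NAMES.get(status, status))

    A CDS_DISC_OK result means there really is a readable disc in that drive, at which point ripping an image with dd should work; any other value would explain why every mount attempt reports "no medium found".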

    Read the article

  • PENGUIN IS GETTING READY FOR ORACLE OPENWORLD 2012

    - by Zeynep Koch
    Are you looking for reasons to attend Oracle OpenWorld? How about the Oracle Linux sessions and hands-on labs below?

    1. General Session: Oracle Linux Strategy and Roadmap. In this session, Oracle executives will discuss the Linux strategy; the roadmap; contributions to the Linux mainline kernel; and what's in store for upcoming releases of Oracle Linux and the Unbreakable Enterprise Kernel. Don't miss this session.

    2. New Features in Oracle Linux - A Technical Deep Dive. Collaborating with the Linux community, Oracle engineers contribute to advancing Linux for mission-critical deployments. In this technical session, attendees will learn about the recent developments in Oracle Linux and the Unbreakable Enterprise Kernel.

    3. Why Switch to Oracle Linux? Oracle is the only company that provides a complete Linux solution from applications to disk, fully optimized for Oracle hardware and software, with one-stop support. In this session you will hear from two customers that have successfully implemented Oracle Linux and saved 50 to 90 percent on Linux support costs, as well as the reasons to switch to Oracle Linux.

    4. Debugging and Configuration Best Practices for Oracle Linux. This is one of our best-attended and most informative sessions. In this best practices session, learn how to save time and money while preventing headaches and hassles. Discover expert secrets to get your Linux systems up and running (and keep them running), avoid common pitfalls, prevent problems, and circumvent known issues.

    5. Top Technical Tips for Automatic and Secure Oracle Linux Deployments. In this session, attendees will learn how to easily deploy and install Oracle Linux systems using various technologies like Kickstart, Oracle Enterprise Manager Ops Center, and Oracle VM Templates for applications on Linux. Additionally, the session will share useful Linux security tips and introduce utilities that help with hardening and securely operating an Oracle Linux system.

    We also have a great session in the Oracle Develop track:

    6. DTrace for Oracle Linux. Initially announced at last year's Oracle OpenWorld, DTrace for Oracle Linux is now available for the Unbreakable Enterprise Kernel Release 2. In this session, held by one of the engineers working on the DTrace for Linux port, you will learn how you can use this powerful and flexible framework in your development environment.

    If you prefer hands-on experience, don't miss our two hands-on labs:

    HOL-1: Oracle Linux Package Management: Configuring and Enabling Services. In this lab you will install and configure Oracle VM VirtualBox and import the Oracle Linux virtual appliance. You will then use package management on Oracle Linux with RPM and yum. You will also review Ksplice, zero-downtime kernel updates that enable you to apply security updates, patches, and critical bug fixes without rebooting.

    HOL-2: Oracle Linux Storage Management with LVM and Device Mapper. In this lab you will learn about storage management with LVM2 (the Linux Logical Volume Manager) and Btrfs: preparing block devices, creating physical and logical volumes, creating file systems on top of logical volumes, and resizing file systems dynamically. You will also practice setting up software RAID devices and configuring encrypted block devices.

    You will also see Oracle Linux and Ksplice in the three demo pods we will feature on the exhibition demogrounds: one at MySQL Connect and two at Oracle OpenWorld. What more do you need to come to San Francisco? Oh, I forgot to mention: we also have great weather in the fall. Check out the Content Catalog and register to attend the Oracle Linux sessions.

    Read the article
