Search Results

Search found 1811 results on 73 pages for 'andrew wood'.

Page 66/73 | < Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >

  • Why CFOs Should Care About Big Data

    - by jmorourke
    The topic of “big data” has clearly reached a tipping point in 2012.  After plenty of coverage over the past few years in the IT press, we are now starting to see “big data” covered in the mainstream business press, including a cover story in the October 2012 issue of the Harvard Business Review.  To help customers understand the challenges of managing “big data”, as well as the opportunities that can be created by leveraging it, Oracle has recently run and published the results of a customer survey, as well as white papers and articles on this topic.  Most recently, we commissioned a white paper titled “Mastering Big Data: CFO Strategies to Transform Insight into Opportunity”.

    The premise here is that “big data” is not just a topic that CIOs should pay attention to, but one that CFOs should understand and take advantage of as well.  Clearly, whoever masters the art and science of big data will be positioned for competitive advantage in their industries or markets.  That’s why smart CFOs are taking control of big data and business analytics projects, not just to uncover new ways to drive growth in a slowing global economy, but also to be a catalyst for change in the enterprise.  With an increasing number of CFOs now responsible for overseeing IT investments and providing strategic insight to the board, CFOs will be increasingly called upon to take a leadership role in assessing the value of “big data” initiatives, building on their traditional skills in reporting and in helping managers analyze data to support decision making.

    Here’s a link to the white paper referenced above, which is posted on the Oracle C-Central/CFO web site, as well as some other resources that can help CFOs master the topic of “big data”:

    - White Paper: “Mastering Big Data: CFO Strategies to Transform Insight into Opportunity”
    - CFO Market Watch article: “Does Big Data Affect the CFO?”
    - Oracle Survey Report: “From Overload to Impact – An Industry Scorecard on Big Data Industry Challenges”
    - Upcoming Big Data Webcast with Andrew McAfee

    Here’s a general link to Oracle C-Central/CFO in case you want to start there: www.oracle.com/c-central/cfo. Feel free to contact me if you have any questions or need additional information: [email protected]

    Read the article

  • Where should I go to learn about networking? [closed]

    - by Ollie Saunders
    I wonder if anyone could recommend a resource or resources, such as a good book, that:

    - explains how all the important protocols work and interact (I’m interested in those that are relevant in a typical home network and used over the Internet)
    - explains in detail how ADSL Internet connections work, in enough depth that I’m able to tweak and measure performance settings
    - starts from the beginning but attempts to provide proper understanding rather than idiot-oriented steps to follow

    Basically, I’m interested in how these technologies work and tend to be implemented in hardware and software, rather than “here’s what to do if…”. I’m interested in Computer Networking by Andrew S. Tanenbaum and I wonder if anyone else has any experience with that title. It’s expensive, but I could probably borrow a copy from the library for £3 or so.

    Read the article

  • Windows 8: SL and HTML

    - by xamlnotes
    I was just pointed to a comment on my friend Andrew Brust’s blog about Silverlight versus HTML 5. Andrew’s blog is here: http://geekswithblogs.net/andrewbrust/archive/2011/11/23/windows-8-will-be-here-tomorrow-but-should-silverlight-be.aspx#600915 You can get another idea from another friend of mine, Billy Hollis, here: http://geekswithblogs.net/jalexander/archive/2011/04/09/the-eternal-battle-rich-v.-reachhellip--guest-blogger-billy-hollis.aspx The commenter is raving about HTML 5 and how that’s the future and SL is not. Well, my reaction is “hogwash”. Sure, HTML 5 is important and does some interesting stuff. Check out what Bing.com is doing with it on some days and you can see. But to say that XAML is dead is nuts. I have been wrapping up bugs on a cross-browser version of an application for a while now. What’s the state of cross-browser today? Well, better than a few years ago, but far from perfect.  Each browser vendor interprets the specs in a slightly different way and you must account for them. The worst offender among the major browsers? Apple and its Safari.  I had to make more changes for it than for any other. What’s that got to do with XAML and SL/WPF?  Well, you write your SL code once and it runs in all browsers that support it, no changes. The iPad does not support it? Well, they should be taken to court and forced to, just like MS and others have been in the past for locking out competitors. Line-of-business applications? Write them in SL or WPF or both.  Use the power of XAML, which far outreaches HTML in any flavor, and move on. We do need HTML 5, but it’s not a panacea, nor will it replace all other technologies.

    Read the article

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained.

    Orchestration or Integration?

    A common thing to see with many people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however, it’s important to remember that, although a hammer can be used to drive a screw into wood, that doesn’t mean it’s the best way to do it. Service integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process orchestration, however, is generally a higher-level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.

    BPEL vs BPMN

    For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it’s certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component “Mediator”, whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line.

    Design for performance: Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: C2B2,SOA best practice,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • My Doors - Why Standards Matter to Business

    - by [email protected]
    By Brian Dayton on April 8, 2010 9:27 PM

    "Standards save money." "Standards accelerate projects." "Standards make better solutions." What do these statements mean to you? You buy technology solutions like Oracle Applications but you're a business person--trying to close the quarter, get performance reviews processed, negotiate a new sourcing contract, etc. When "standards" come up in presentations and discussions do you:

    - Nod your head politely
    - Tune out and check your smart phone
    - Turn to your IT counterpart and say "Bob's all over this standards thing, right Bob?"

    Here's why standards matter. My wife wants new external doors downstairs, ones that would get more light into the rooms. Am I OK with that? "Uhh, sure...it's a little dark in the kitchen."

    - 24 hours ago - wife calls to tell me that she's going to the hardware store and may look at doors
    - 20 hours ago - wife pulls into driveway, informs me that two doors are in the back of her station wagon, ready for me to carry
    - 19 hours ago - I re-discovered the fact that it's not fun to carry a solid wood door by myself
    - 5 hours ago - Local handyman, who was at our house anyway, tells me that the doors we bought will likely cost 2-3x the material cost in installation time and labor...the doors are standard but our doorways aren't

    We could have done more research. I could be more handy. Sure. But the fact is, my 1951 house wasn't built with me in mind. They built what worked and called it a day. The same holds true with a lot of business applications. They were designed and architected for one-time use with one use-case in mind. Today's business climate is different. If you're going to use your processes and technology to differentiate your business you should have at least a working knowledge of:

    - How standards can benefit your business
    - Your IT organization's philosophy around standards
    - Your vendor's track-record around standards...and watch for those who pay lip-service to standards but don't follow through

    The rallying cry in most IT organizations today is "learn more about the business, drop the acronyms." I'm not advocating that you go out and learn how to code in Java. But I do believe it will help your business and your decision-making process if you meet IT ½...even ¼ of the way there.

    Epilogue: The door project has been put on hold and yours truly has to return the doors to the hardware store tomorrow.

    Read the article

  • Leveraging NuGet as a central repository for PowerShell modules

    - by cibrax
    We have been working a lot lately with PowerShell as part of our star product at Tellago Studios, “Moesion”. One of the main features we provide in Moesion is the ability to execute PowerShell commands remotely on a given server using a web mobile interface (you can read more in my previous post about Moesion). One of the things we realized in all this time is that PowerShell lacks a central repository where IT guys or we, the developers, can easily grab and reuse commands.  All the commands or modules are basically spread across multiple places or websites, like personal blogs, TechNet or CodePlex projects to name a few, which makes searching for them very hard. You are usually limited to using your favorite search engine and copying what you find. In addition, there is not an easy way to reuse, extend or version these commands, which also limits any contribution that you could make to the community.  My friend Jose wrote a great post the other day about the importance of reusing PowerShell modules, and what the mechanism to reuse them is. Jose, however, based his post on a custom implementation using a Git repository for storing the modules. We have NuGet in the .NET platform for sharing and reusing existing libraries or code, so why can’t we just leverage it for reusing PowerShell modules as well? Some teams in Microsoft are using NuGet for distributing libraries and binaries, so it would be a great thing for all of us if they also distributed the scripting interfaces in PowerShell using NuGet. This applies to the .NET OSS community as well. In fact, it looks like Andrew Nurse had the same idea and implemented a project for this on BitBucket, PsGet.
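
    As a rough illustration of the idea (the package id, version and folder layout below are made-up assumptions, not a real feed or PsGet's actual interface), a module published as a NuGet package could be pulled down and imported like this:

        # Illustrative sketch only: fetch a PowerShell module that has been packaged
        # as a NuGet package, then import it. Package id and tools\ layout are assumptions.
        .\nuget.exe install MyTeam.PowerShellTools -OutputDirectory .\packages
        Import-Module .\packages\MyTeam.PowerShellTools.1.0.0\tools\MyTeam.PowerShellTools.psm1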

    Read the article

  • FreeBSD 8 and Samba 3.3 smb_panic

    - by scraft3613
    What is causing samba to crash? Need help diagnosing ...

        [2010/06/14 16:11:42, 0] lib/fault.c:fault_report(40)
          ===============================================================
        [2010/06/14 16:11:42, 0] lib/fault.c:fault_report(41)
          INTERNAL ERROR: Signal 11 in pid 951 (3.3.8)
          Please read the Trouble-Shooting section of the Samba3-HOWTO
        [2010/06/14 16:11:42, 0] lib/fault.c:fault_report(43)
          From: http://www.samba.org/samba/docs/Samba3-HOWTO.pdf
        [2010/06/14 16:11:42, 0] lib/fault.c:fault_report(44)
          ===============================================================
        [2010/06/14 16:11:42, 0] lib/util.c:smb_panic(1673)
          PANIC (pid 951): internal error
        [2010/06/14 16:53:40, 0] smbd/server.c:main(1274)

    Edit: A bit more info -- log.smbd:

        [2010/06/14 15:59:02, 0] smbd/server.c:main(1274)
          smbd version 3.3.8 started.
          Copyright Andrew Tridgell and the Samba Team 1992-2009
        [2010/06/14 15:59:02, 0] printing/print_cups.c:cups_connect(103)
          Unable to connect to CUPS server localhost:631 - Connection refused
        [2010/06/14 15:59:02, 0] printing/print_cups.c:cups_connect(103)
          Unable to connect to CUPS server localhost:631 - Connection refused

    smb.conf:

        [global]
        workgroup = WASH
        netbios name = PROD1

        [media]
        path = /jon/media
        read only = no
        guest ok = yes

    Read the article

  • Tomcat / Railo stop responding with no error output

    - by andrewdixon
    This is going to sound very vague, and I'm sure it will be voted down for not giving enough information; however, I don't really have any to give, as you will see. We have an AWS instance running Amazon Linux, Apache, Tomcat and Railo, and from time to time Tomcat/Railo simply stops responding to requests, with no errors output in catalina.out or any of the other log files in the Tomcat logs directory. When I issue the command to restart Tomcat/Railo, the restart script sits there for a while, then says that Tomcat has not responded so it has killed it off, and then it starts up again and everything is fine until it happens again, anything from a couple of minutes to a couple of days later. I have done my best to check other logs on the server but have found no messages at all to indicate why Tomcat/Railo has given up and stopped responding. Can anyone suggest any reason why it might be doing this and / or any other log file(s) that we could check to see what is happening? Thanks. Andrew.
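
    One low-impact thing worth capturing the next time it hangs (a sketch only, assuming a HotSpot JVM and that Tomcat's stdout goes to catalina.out) is a thread dump, which often shows where request threads are stuck even when nothing is being logged:

        # Find the Tomcat JVM and send SIGQUIT; the thread dump is appended to catalina.out
        TOMCAT_PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
        kill -3 "$TOMCAT_PID"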

    Read the article

  • Twitter Tuesday - Top 10 @ArchBeat Tweets - June 3-9, 2014

    - by OTN ArchBeat
    The Top 10 tweets from @OTNArchBeat for the last seven days.

    - RT @DBAKevlar: #EM12c rel4 is out! Woohoo!! (Jun 3, 2014 at 10:36 AM)
    - Top 10 Arch Community Articles for May 2014 >> props to @markrittman @kevin_mcginley @porushh et al (Jun 4, 2014 at 12:52 PM)
    - Architecture of Analytics: @markrittman @kevin_mcginley >> Free OTN Virtual Tech Summit - July 9 (Jun 4, 2014 at 09:13 AM)
    - My Top 10 Tweets - May 27 - June 2 #ADF #Essbase #FusionApps #Goldengate #Kscope14 #WebLogic (Jun 3, 2014 at 10:27 AM)
    - Starting and Stopping a #JavaEE Environment when using Oracle #WebLogic | Rene van Wijk #oracleace (Jun 5, 2014 at 11:00 AM)
    - Video: #KScope14 Preview: @DebraLilley never stops moving, never stops learning. (Jun 3, 2014 at 11:19 AM)
    - The OTNArchBeat Daily is out! Stories via @oraclebase (Jun 9, 2014 at 01:47 PM)
    - Where did my MDB concurrency go? | Eric Gross #weblogic (Jun 9, 2014 at 08:48 AM)
    - Exalogic Tech tips and code samples from A-Team architect Andrew Hopkinson (Jun 6, 2014 at 11:47 AM)
    - The OTNArchBeat Daily is out! Stories via @KentGraziano @DBAKevlar @dbasolved (Jun 3, 2014 at 01:48 PM)

    Read the article

  • Should I use a config file or database for storing business rules?

    - by foiseworth
    I have recently been reading The Pragmatic Programmer, which states that:

    Details mess up our pristine code—especially if they change frequently. Every time we have to go in and change the code to accommodate some change in business logic, or in the law, or in management's personal tastes of the day, we run the risk of breaking the system—of introducing a new bug. Hunt, Andrew; Thomas, David (1999-10-20). The Pragmatic Programmer: From Journeyman to Master (Kindle Locations 2651-2653). Pearson Education (USA). Kindle Edition.

    I am currently programming a web app that has some models with properties that can only take values from a fixed set, e.g. (not the actual example, as the web app data is confidential):

        light-type = sphere / cube / cylinder

    The light type can only be one of the above three values, but according to TPP I should always code as if they could change and place their values in a config file. As there are several instances of this throughout the app, my question is: should I store possible values like these in

    - a config file: 'light-types' = array(sphere, cube, cylinder), 'other-type' = value, 'etc' = etc-value
    - a single table in a database with one line for each config item
    - a database with a table for each config item (e.g. table: light_types; columns: id, name)
    - some other way?

    Many thanks for any assistance / expertise offered.

    Read the article

  • Thinking Sphinx not working in test mode

    - by J. Pablo Fernández
    I'm trying to get Thinking Sphinx to work in test mode in Rails. Basically this:

        ThinkingSphinx::Test.init
        ThinkingSphinx::Test.start

    freezes and never comes back. My database configuration is the same for test and development:

        dry_setting: &dry_setting
          adapter: mysql
          host: localhost
          encoding: utf8
          username: rails
          password: blahblah

        development:
          <<: *dry_setting
          database: proj_devel
          socket: /tmp/mysql.sock # sphinx requires it

        test:
          <<: *dry_setting
          database: proj_test
          socket: /tmp/mysql.sock # sphinx requires it

    and sphinx.yml:

        development:
          enable_star: 1
          min_infix_len: 2
          bin_path: /opt/local/bin

        test:
          enable_star: 1
          min_infix_len: 2
          bin_path: /opt/local/bin

        production:
          enable_star: 1
          min_infix_len: 2

    The generated config files, config/development.sphinx.conf and config/test.sphinx.conf, only differ in database names, directories and similar things; nothing functional. Generating the index for development goes without an issue:

        $ rake ts:in
        (in /Users/pupeno/proj)
        default config
        Generating Configuration to /Users/pupeno/proj/config/development.sphinx.conf
        Sphinx 0.9.8.1-release (r1533)
        Copyright (c) 2001-2008, Andrew Aksyonoff
        using config file '/Users/pupeno/proj/config/development.sphinx.conf'...
        indexing index 'user_core'...
        collected 7 docs, 0.0 MB
        collected 0 attr values
        sorted 0.0 Mvalues, 100.0% done
        sorted 0.0 Mhits, 99.8% done
        total 7 docs, 422 bytes
        total 0.098 sec, 4320.80 bytes/sec, 71.67 docs/sec
        indexing index 'user_delta'...
        collected 0 docs, 0.0 MB
        collected 0 attr values
        sorted 0.0 Mvalues, nan% done
        total 0 docs, 0 bytes
        total 0.010 sec, 0.00 bytes/sec, 0.00 docs/sec
        distributed index 'user' can not be directly indexed; skipping.

    but when I try to do it for test it freezes:

        $ RAILS_ENV=test rake ts:in
        (in /Users/pupeno/proj)
        DEPRECATION WARNING: require "activeresource" is deprecated and will be removed in Rails 3. Use require "active_resource" instead.. (called from /Users/pupeno/.rvm/gems/ruby-1.8.7-p249/gems/activeresource-2.3.5/lib/activeresource.rb:2)
        default config
        Generating Configuration to /Users/pupeno/proj/config/test.sphinx.conf
        Sphinx 0.9.8.1-release (r1533)
        Copyright (c) 2001-2008, Andrew Aksyonoff
        using config file '/Users/pupeno/proj/config/test.sphinx.conf'...
        indexing index 'user_core'...

    It's been there for more than 10 minutes, and the user table has 4 records. The database directories look quite different, but I don't know what to make of it:

        $ ls -l db/sphinx/development/
        total 96
        -rw-r--r--  1 pupeno  staff   196 Mar 11 18:10 user_core.spa
        -rw-r--r--  1 pupeno  staff  4982 Mar 11 18:10 user_core.spd
        -rw-r--r--  1 pupeno  staff   417 Mar 11 18:10 user_core.sph
        -rw-r--r--  1 pupeno  staff  3067 Mar 11 18:10 user_core.spi
        -rw-r--r--  1 pupeno  staff    84 Mar 11 18:10 user_core.spm
        -rw-r--r--  1 pupeno  staff  6832 Mar 11 18:10 user_core.spp
        -rw-r--r--  1 pupeno  staff     0 Mar 11 18:10 user_delta.spa
        -rw-r--r--  1 pupeno  staff     1 Mar 11 18:10 user_delta.spd
        -rw-r--r--  1 pupeno  staff   417 Mar 11 18:10 user_delta.sph
        -rw-r--r--  1 pupeno  staff     1 Mar 11 18:10 user_delta.spi
        -rw-r--r--  1 pupeno  staff     0 Mar 11 18:10 user_delta.spm
        -rw-r--r--  1 pupeno  staff     1 Mar 11 18:10 user_delta.spp

        $ ls -l db/sphinx/test/
        total 0
        -rw-r--r--  1 pupeno  staff  0 Mar 11 18:11 user_core.spl
        -rw-r--r--  1 pupeno  staff  0 Mar 11 18:11 user_core.tmp0
        -rw-r--r--  1 pupeno  staff  0 Mar 11 18:11 user_core.tmp1
        -rw-r--r--  1 pupeno  staff  0 Mar 11 18:11 user_core.tmp2
        -rw-r--r--  1 pupeno  staff  0 Mar 11 18:11 user_core.tmp7

    Nothing gets added to a log when this happens. Any ideas where to go from here? I can run the command line manually:

        /opt/local/bin/indexer --config config/test.sphinx.conf --all

    which generates the same output as rake ts:in, so no help there.

    Read the article

  • Cross Compiling Boost for use on the Gumstix Overo with GumROS

    - by amelim
    I'm trying to cross-compile Boost for use with the ROS framework on a Gumstix Overo. I've been following the posted instructions here (modifying the script when need be); however, I've come across an issue where bjam will not compile Boost properly. I call bjam as follows:

        # boost
        if [ ! -f /opt/gumros/lib/libboost_date_time-gcc41-mt-1_38.so ] ; then
          if [ ! -f boost_1_38_0.tar.gz ] ; then
            wget --tries=10 http://heanet.dl.sourceforge.net/sourceforge/boost/boost_1_38_0.tar.gz
          fi
          # tar xzf boost_1_38_0.tar.gz
          cd boost_1_38_0
          GPP_PATH=${OVEROTOP}/tmp/cross/armv7a/arm-angstrom-linux-gnueabi/bin/g++
          GPP_VER=`${GPP_PATH} -v 2>&1 | tail -1 | awk '{print $3}'`
          echo "using gcc : ${GPP_VER} : ${GPP_PATH} ; " > tools/build/v2/user-config.jam
          sudo apt-get install bjam
          set +o errexit
          sudo bjam --toolset=gcc-${GPP_VER} --prefix=/opt/gumros --with-date_time install
          set -o errexit
          cd ..
        else
          echo "boost appears to be already installed; skipping."
        fi
        if [ ! -f /opt/gumros/lib/libboost_date_time-gcc41-mt-1_38.so ] ; then
          echo "Failed to compile libboost_date_time"; exit;
        fi

    I've checked user-config.jam to make sure everything was kosher, as well as making sure the GPP_PATH is correct. However, when I run the script I come across compilation errors such as:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        bjam is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
        ...patience...
        ...found 14370 targets...
        ...updating 14 targets...
        gcc.compile.c++ bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_month.o
        g++: error trying to exec 'cc1plus': execvp: No such file or directory
        "/home/andrew/overo-oe/tmp/cross/armv7a/arm-angstrom-linux-gnueabi/bin/g++" -ftemplate-depth-128 -O3 -finline-functions -Wno-inline -Wall -pthread -fPIC -DBOOST_ALL_DYN_LINK=1 -DBOOST_ALL_NO_LIB=1 -DDATE_TIME_INLINE -DNDEBUG -I"." -c -o "bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_month.o" "libs/date_time/src/gregorian/greg_month.cpp"
        ...failed gcc.compile.c++ bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_month.o...
        gcc.compile.c++ bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_weekday.o
        g++: error trying to exec 'cc1plus': execvp: No such file or directory
        "/home/andrew/overo-oe/tmp/cross/armv7a/arm-angstrom-linux-gnueabi/bin/g++" -ftemplate-depth-128 -O3 -finline-functions -Wno-inline -Wall -pthread -fPIC -DBOOST_ALL_DYN_LINK=1 -DBOOST_ALL_NO_LIB=1 -DDATE_TIME_INLINE -DNDEBUG -I"." -c -o "bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_weekday.o" "libs/date_time/src/gregorian/greg_weekday.cpp"
        ...failed gcc.compile.c++ bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/greg_weekday.o...
        gcc.compile.c++ bin.v2/libs/date_time/build/gcc-4.3.3/release/threading-multi/gregorian/date_generators.o
        g++: error trying to exec 'cc1plus': execvp: No such file or directory
        Etc...

    For reference, I'm using this tutorial to help me out: http://www.ros.org/wiki/gumros

    Read the article

  • What is a long-term strategy to deal with CPU fan dust in my home office?

    - by PaulG
    There are numerous discussions of CPU overheating and how sometimes this can be corrected by removing the dust from the CPU fan. I have read many of these, but I can't find anyone expressing a long-term strategy to deal with this problem. There are some suggestions here, for example, about how often the inside of the computer should be dusted. But I find this generally unsatisfactory. As it stands, in my rather dusty house (heated by a wood stove, with no central air circulation), I need to vacuum out the CPU fan every 3 to 4 months. At high CPU load, this can make a difference between 65C and 100C. I'm tired of hauling out the vacuum every time I anticipate high CPU load. What steps can I take to deal with this systematically in the long-term? Moving my high CPU load computing to the cloud is not a realistic option. Neither is vacuuming my home office more than once a week! (Details: my computer is on the floor in a Cooler Master HAF922 case, and uses an Intel CPU fan on an i7 chip) EDIT: While this would definitely solve the problem (submerging motherboard in mineral oil), it is a bit of an expensive solution.

    Read the article

  • What was your biggest waste of money, and what should you have bought instead? [closed]

    - by rob
    I waste a lot of money on computer equipment and other electronics that I don't really need. I've also bought software that I've never really used, or which has been replaced by better free software. As I'm buying things, it doesn't seem like much--fifty bucks here, a hundred dollars there. But when I go back and look at how much I've spent over my past few electronics purchases, I usually start to think of the other things I could have bought with that money instead. Most of the computer hardware and electronics don't usually improve my life by much, if at all. Case in point: back when I was in college, I prided myself on getting the best deals for computer hardware, but when I went back and added up all the money I had spent, I had probably wasted close to a thousand dollars on "cheap" $100 hard drives that eventually all went bad (including the warranty replacements). Even if they did still work, it would not be worth the effort to use them, because they're too small and too noisy by today's standards. I've also spent thousands more on other junk, such as RAM and CPU upgrades that only gave modest performance jumps, and wireless audio transmitters that I used for a short time to stream music from the now-defunct Yahoo! Music service. Every time I see a really great deal on RAM or video cards, I come one click away from buying them, but these days I'm usually able to resist. I've been wanting to get into woodworking ever since I moved into my house, and five years later I'm finally saving up for a $600 table saw. Sure, I've already got a toolbox and a couple of the essential power tools like a drill and a jigsaw, but I can't help but think that I'd have an entire shop full of woodworking tools and a lot of nice wood furniture if I hadn't wasted all that money back in college. What has been your biggest waste of money on computer stuff and technology? If you had all that money back, would you make the same mistake again and buy the same types of things, or would you spend it on something else?

    Read the article

  • Why would I need a firewall if my server is well configured?

    - by Aitch
    I admin a handful of cloud-based (VPS) servers for the company I work for. The servers are minimal Ubuntu installs that run bits of LAMP stacks / inbound data collection (rsync). The data is large but not personal, financial or anything like that (i.e. not that interesting). Clearly on here people are forever asking about configuring firewalls and such like. I use a bunch of approaches to secure the servers, for example (but not restricted to):

    - ssh on non-standard ports; no password typing, only known ssh keys from known IPs for login, etc.
    - https, and restricted shells (rssh), generally only from known keys/IPs
    - servers are minimal, up to date and patched regularly
    - use of things like rkhunter, cfengine, lynis, denyhosts etc. for monitoring

    I have extensive experience of Unix sysadmin. I'm confident I know what I'm doing in my setups. I configure /etc files. I have never felt a compelling need to install stuff like firewalls: iptables etc. Put aside for a moment the issues of physical security of the VPS. My question: I can't decide whether I am being naive or whether the incremental protection a firewall might offer is worth the effort of learning / installing it and the additional complexity (packages, config files, possible support etc.) on the servers. To date (touch wood) I've never had any problems with security, but I am not complacent about it either.
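
    For a sense of scale, the incremental layer being weighed up here is typically only a handful of rules. A sketch only; the port and address range are placeholders, not a recommendation for this setup:

        # Default-deny inbound; allow established traffic, loopback, ssh from known IPs, and web.
        # Apply from a console, not over ssh, in case of a mistake.
        iptables -P INPUT DROP
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -p tcp --dport 2222 -s 203.0.113.0/24 -j ACCEPT
        iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT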

    Read the article

  • MBR seems to be gone

    - by bobobobo
    So, horror story for everyone. I bought two spanking new HDD's. MM!! Gbitage. I removed all my old HDD's, physically labelled them, and was preparing to install all new HDD's (fresh sys install included!). To make sure which HDD was which, I popped each OLD HDD (data filled!) into a Thermaltake Blacx toaster... surprisingly BOTH couldn't be read. I didn't have static on my hands! I'm certain of it. I touched metal, touched wood, before beginning this all. Thinking that was strange, I hauled up the new sys, installed Win XP (of course!) on the new HDD, and now the two OLD HDD's (data filled!) that were entered into the toaster cannot be read. And they had tons of data on them. I read about MBR's being nuked and it sounds like that is what it is. But I'm at a loss what to do. There are so many MBR recovery programs out there, I kind of feel overwhelmed. I don't want to lose my data by just picking one, yet it seems so close within reach. I'm not panicking anymore... Anybody have a play by play that I could follow? I just don't want to spend $900 on data recovery centers if I can do this myself.

    Read the article

  • XSL Template outputting massive chunk of text, rather than HTML. But only on one section

    - by Throlkim
    I'm having a slightly odd situation with an XSL template. Most of it outputs fine, but a certain for-each loop is causing me problems. Here's the XML:

        <area>
          <feature type="Hall">
            <Heading><![CDATA[Hall]]></Heading>
            <Para><![CDATA[Communal gardens, pathway leading to PVCu double glazed communal front door to]]></Para>
          </feature>
          <feature type="Entrance Hall">
            <Heading><![CDATA[Communal Entrance Hall]]></Heading>
            <Para><![CDATA[Plain ceiling, centre light fitting, fire door through to inner hallway, wood and glazed panelled front door to]]></Para>
          </feature>
          <feature type="Inner Hall">
            <Heading><![CDATA[Inner Hall]]></Heading>
            <Para><![CDATA[Plain ceiling with pendant light fitting and covings, security telephone, airing cupboard housing gas boiler serving domestic hot water and central heating, telephone point, storage cupboard housing gas and electric meters, wooden panelled doors off to all rooms.]]></Para>
          </feature>
          <feature type="Lounge (Reception)" width="3.05" length="4.57" units="metre">
            <Heading><![CDATA[Lounge (Reception)]]></Heading>
            <Para><![CDATA[15' 6" x 10' 7" (4.72m x 3.23m) Window to the side and rear elevation, papered ceiling with pendant light fitting and covings, two double panelled radiators, power points, wall mounted security entry phone, TV aerial point.]]></Para>
          </feature>
          <feature type="Kitchen" width="3.05" length="3.66" units="metre">
            <Heading><![CDATA[Kitchen]]></Heading>
            <Para><![CDATA[12' x 10' (3.66m x 3.05m) Double glazed window to the rear elevation, textured ceiling with strip lighting, range of base and wall units in Beech with brushed aluminium handles, co-ordinated working surfaces with inset stainless steel sink with mixer taps over, co-ordinated tiled splashbacks, gas and electric cooker points, large storage cupboard with shelving, power points.]]></Para>
          </feature>
          <feature type="Entrance Porch">
            <Heading><![CDATA[Balcony]]></Heading>
            <Para><![CDATA[Views across the communal South facing garden, wrought iron balustrade.]]></Para>
          </feature>
          <feature type="Bedroom" width="3.35" length="3.96" units="metre">
            <Heading><![CDATA[Bedroom One]]></Heading>
            <Para><![CDATA[13' 6" x 11' 5" (4.11m x 3.48m) Double glazed windows to the front and side elevations, papered ceiling with pendant light fittings and covings, single panelled radiator, power points, telephone point, security entry phone.]]></Para>
          </feature>
          <feature type="Bedroom" width="3.05" length="3.35" units="metre">
            <Heading><![CDATA[Bedroom Two]]></Heading>
            <Para><![CDATA[11' 4" x 10' 1" (3.45m x 3.07m) Double glazed window to the front elevation, plain ceiling with centre light fitting and covings, power points.]]></Para>
          </feature>
          <feature type="bathroom">
            <Heading><![CDATA[Bathroom]]></Heading>
            <Para><![CDATA[Obscure double glazed window to the rear elevation, textured ceiling with centre light fitting and extractor fan, suite in white comprising of low level WC, wall mounted wash hand basin and walk in shower housing 'Triton T80' electric shower, co-ordinated tiled splashbacks.]]></Para>
          </feature>
        </area>

    And here's the section of my template that processes it:

        <xsl:for-each select="area">
          <li>
            <xsl:for-each select="feature">
              <li>
                <h5>
                  <xsl:value-of select="Heading"/>
                </h5>
                <xsl:value-of select="Para"/>
              </li>
            </xsl:for-each>
          </li>
        </xsl:for-each>

    And here's the output:

    Hall Communal gardens, pathway leading to PVCu double glazed communal front door to Communal Entrance Hall Plain ceiling, centre light fitting, fire door through to inner hallway, wood and glazed panelled front door to Inner Hall Plain ceiling with pendant light fitting and covings, security telephone, airing cupboard housing gas boiler serving domestic hot water and central heating, telephone point, storage cupboard housing gas and electric meters, wooden panelled doors off to all rooms. Lounge (Reception) 15' 6" x 10' 7" (4.72m x 3.23m) Window to the side and rear elevation, papered ceiling with pendant light fitting and covings, two double panelled radiators, power points, wall mounted security entry phone, TV aerial point. Kitchen 12' x 10' (3.66m x 3.05m) Double glazed window to the rear elevation, textured ceiling with strip lighting, range of base and wall units in Beech with brushed aluminium handles, co-ordinated working surfaces with inset stainless steel sink with mixer taps over, co-ordinated tiled splashbacks, gas and electric cooker points, large storage cupboard with shelving, power points. Balcony Views across the communal South facing garden, wrought iron balustrade. Bedroom One 13' 6" x 11' 5" (4.11m x 3.48m) Double glazed windows to the front and side elevations, papered ceiling with pendant light fittings and covings, single panelled radiator, power points, telephone point, security entry phone. Bedroom Two 11' 4" x 10' 1" (3.45m x 3.07m) Double glazed window to the front elevation, plain ceiling with centre light fitting and covings, power points. Bathroom Obscure double glazed window to the rear elevation, textured ceiling with centre light fitting and extractor fan, suite in white comprising of low level WC, wall mounted wash hand basin and walk in shower housing 'Triton T80' electric shower, co-ordinated tiled splashbacks.

    For reference, here's the entire XSLT: http://pastie.org/private/eq4gjvqoc1amg9ynyf6wzg The rest of it all outputs fine - what am I missing from the above section?

    Read the article

  • How to get the value of a SELECT HtmlElement in C# webBrowser control

    - by AndrewW
    Hi, In a C# WebBrowser control, I have generated a SELECT HtmlElement with a number of OPTION elements using w.RenderBeginTag(HtmlTextWriterTag.Select). I need to get the value of the select when the user changes it, and so added an event handler in the WebBrowser DocumentCompleted event:

        private void webBrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            ....
            webBrowser1.Document.GetElementById("id_select_0").AttachEventHandler("onchange", new EventHandler(ddSelectedIndexChanged));
            ....
        }

        protected void ddSelectedIndexChanged(object sender, EventArgs e)
        {
            ....
        }

    The event handler does get called, but the sender parameter is null and e is empty. Does anyone know how to overcome this problem? Andrew
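
    Since AttachEventHandler does not supply a useful sender, one workaround sketch is to ignore the event arguments and re-read the element by id inside the handler. This assumes GetAttribute("value") surfaces the selected option's value via the underlying IE DOM, which is worth verifying:

        protected void ddSelectedIndexChanged(object sender, EventArgs e)
        {
            // sender/e are not populated for AttachEventHandler callbacks, so look the element up again
            HtmlElement select = webBrowser1.Document.GetElementById("id_select_0");
            if (select != null)
            {
                string selectedValue = select.GetAttribute("value"); // value of the currently selected option
                // ... use selectedValue ...
            }
        }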

    Read the article

  • JavaScript and CSS files for ASP.NET MVC 2 EditorTemplate user controls

    - by Zack Peterson
    I'm using an EditorTemplate DateTime.ascx in my ASP.NET MVC 2 project:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<DateTime>" %>
        <%: Html.TextBox(String.Empty, Model.ToString("M/dd/yyyy h:mm tt")) %>
        <script type="text/javascript">
            $(function () {
                $('#<%: ViewData.TemplateInfo.GetFullHtmlFieldId(String.Empty) %>').AnyTime_picker({ format: "%c/%d/%Y %l:%i %p" });
            });
        </script>

    This uses the Any+Time™ JavaScript library for jQuery by Andrew M. Andrews III. I've added those library files (anytimec.js and anytimec.css) to the <head> section of my master page. Rather than include these JavaScript and Cascading Style Sheet files on every page of my web site, how can I instead include the .js and .css files only on pages that need them--pages that edit a DateTime type value?
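
    One possible approach, sketched under assumptions (the file paths and the "anytime-included" key are made up), is to have the editor template register its own dependencies and use HttpContext.Items as a once-per-request flag, so the references are only emitted on pages that actually render a DateTime editor:

        <%-- inside DateTime.ascx: emit the Any+Time references only once per page --%>
        <% if (!Context.Items.Contains("anytime-included")) {
               Context.Items["anytime-included"] = true; %>
            <link href="<%: Url.Content("~/Content/anytimec.css") %>" rel="stylesheet" type="text/css" />
            <script src="<%: Url.Content("~/Scripts/anytimec.js") %>" type="text/javascript"></script>
        <% } %>

    The trade-off is that the tags land in the body rather than in <head>, which browsers generally tolerate but is worth checking.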

    Read the article

  • How do you get AOL's OpenID site verification to work?

    - by Shawn Miller
    I have an OpenID relying party setup and using XRDS. It passes the "RP has discoverable return_to" interop test over at http://test-id.org/RP/DiscoverableReturnTo.aspx. Yahoo no longer complains with the message "Warning: This website has not confirmed its identity with Yahoo! and might be fraudulent." as outlined in Andrew Arnott's excellent blog post: http://blog.nerdbank.net/2008/06/why-yahoo-says-your-openid-site.html However, when I try to authenticate using AOL I see the "Warning! site verification could not be completed." message.
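
    For context, relying-party discovery of this kind hinges on the XRDS document advertising the return_to endpoint, along the lines of the following sketch (the URI is a placeholder):

        <?xml version="1.0" encoding="UTF-8"?>
        <xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
          <XRD>
            <Service>
              <Type>http://specs.openid.net/auth/2.0/return_to</Type>
              <URI>http://example.com/openid/return_to</URI>
            </Service>
          </XRD>
        </xrds:XRDS>

    Whether AOL actually performs this discovery the way Yahoo does is the open question here.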

    Read the article

  • m2eclipse resource filtering

    - by drewzilla
    I'm having problems with resource filtering using m2eclipse Maven support in Eclipse. It seems that filtering only takes place on resources that have changed. This is fundamentally flawed because, if I have a file that references a property (e.g. ${my.property}), the filtering will only be performed if the referencing file is also modified; if I only change the property value (in my pom.xml), the filtering is not applied to the files that reference it. So, if I make a change to a property in my pom file, the filtering is not applied. However, if I then go to the file that references that property (e.g. a Spring config file) and edit and save it, the filtering is applied. I did read somewhere that "m2eclipse skips filtering if there were no resource changes during incremental build". I'm using m2eclipse 0.10.x. Has anyone else come across this? Thanks, Andrew
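
    For reference, the kind of filtering setup being described is driven by the resources section of the pom.xml, roughly like this (the directory shown is the conventional default, not necessarily this project's layout):

        <build>
          <resources>
            <resource>
              <directory>src/main/resources</directory>
              <!-- enables ${...} property substitution in these files -->
              <filtering>true</filtering>
            </resource>
          </resources>
        </build>

    The complaint above is that changing a property value alone does not mark these filtered resources as stale in m2eclipse's incremental build.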

    Read the article

  • NFS issue brings down entire vSphere ESX estate

    - by growse
    I experienced an odd issue this morning where an NFS problem appeared to have taken down the majority of my VMs hosted on a small vSphere 5.0 estate. The infrastructure itself is 4x IBM HS21 blades running around 20 VMs. The storage is provided by a single HP X1600 array with an attached D2700 chassis running Solaris 11. There are a couple of storage pools on this which are exposed over NFS for the storage of the VM files, and some iSCSI LUNs for things like MSCS shared disks. Normally this is pretty stable, but I appreciate the lack of resiliency in having a single X1600 doing all the storage. This morning, in the logs of each ESX host, at around 0521 GMT I saw a lot of entries like this:

        2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4cf9a8 3
        2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4dc9e8 3
        2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4d3fa8 3
        2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4de0a8 3
        [....]
        2011-11-30T06:16:07.042Z cpu0:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T06:17:01.459Z cpu2:4011)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T06:25:17.887Z cpu3:2051)NFSLock: 608: Stop accessing fd 0x41000a4c2b28 3
        2011-11-30T06:27:16.063Z cpu3:4011)NFSLock: 568: Start accessing fd 0x41000a4d8928 again
        2011-11-30T06:35:30.827Z cpu1:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /tank/ISO, mounted as 5acdbb3e-410e56e3-0000-000000000000 ("ISO (1)")
        2011-11-30T06:36:37.953Z cpu6:2054)NFS: 292: Restored connection to the server 10.13.111.197 mount point /tank/ISO, mounted as 5acdbb3e-410e56e3-0000-000000000000 ("ISO (1)")
        2011-11-30T06:40:08.242Z cpu6:2054)NFSLock: 608: Stop accessing fd 0x41000a4c3e68 3
        2011-11-30T06:40:34.647Z cpu3:2051)NFSLock: 568: Start accessing fd 0x41000a4d8928 again
        2011-11-30T06:44:42.663Z cpu1:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T06:44:53.973Z cpu0:4011)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T06:51:28.296Z cpu5:2058)NFSLock: 608: Stop accessing fd 0x41000ae3c528 3
        2011-11-30T06:51:44.024Z cpu4:2052)NFSLock: 568: Start accessing fd 0x41000ae3b8e8 again
        2011-11-30T06:56:30.758Z cpu4:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T06:56:53.389Z cpu7:2055)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T07:01:50.350Z cpu6:2054)ScsiDeviceIO: 2316: Cmd(0x41240072bc80) 0x12, CmdSN 0x9803 to dev "naa.600508e000000000505c16815a36c50d" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
        2011-11-30T07:03:48.449Z cpu3:2051)NFSLock: 608: Stop accessing fd 0x41000ae46b68 3
        2011-11-30T07:03:57.318Z cpu4:4009)NFSLock: 568: Start accessing fd 0x41000ae48228 again

    (I've put a complete dump from one of the hosts on pastebin: http://pastebin.com/Vn60wgTt)

    When I got in the office at 9am, I saw various failures and alarms and troubleshot the issue. It turned out that pretty much all of the VMs were inaccessible, and that the ESX hosts described each VM as either 'powered off', 'powered on', or 'unavailable'. The VMs described as 'powered on' were not in any way reachable or responding to pings, so this may be lies. There's absolutely no indication on the X1600 that anything was awry, and nothing on the switches to indicate any loss of connectivity. I only managed to resolve the issue by rebooting the ESX hosts in turn. I have a number of questions:

    - What the hell happened?
    - If this was a temporary NFS failure, why did it put the ESX hosts into a state from which a reboot was the only recovery?
    - In the future, when the NFS server goes a little off-piste, what would be the best approach to add some resilience? I've been looking at budgeting for next year and potentially have budget to purchase another X1600/D2700/disks; would an identical mirrored disk setup help to mitigate these sorts of failures automatically?

    Edit (added requested details): The X1600 has 12x 1TB disks lumped together in mirrored pairs as tank, and the D2700 (connected with a mini SAS cable) has 12x 300GB 10k SAS disks lumped together in mirrored pairs as sastank.

        zpool status

          pool: rpool
         state: ONLINE
          scan: none requested
        config:
                NAME        STATE     READ WRITE CKSUM
                rpool       ONLINE       0     0     0
                  c7t0d0s0  ONLINE       0     0     0
        errors: No known data errors

          pool: sastank
         state: ONLINE
          scan: scrub repaired 0 in 74h21m with 0 errors on Wed Nov 30 02:51:58 2011
        config:
                NAME         STATE     READ WRITE CKSUM
                sastank      ONLINE       0     0     0
                  mirror-0   ONLINE       0     0     0
                    c7t14d0  ONLINE       0     0     0
                    c7t15d0  ONLINE       0     0     0
                  mirror-1   ONLINE       0     0     0
                    c7t16d0  ONLINE       0     0     0
                    c7t17d0  ONLINE       0     0     0
                  mirror-2   ONLINE       0     0     0
                    c7t18d0  ONLINE       0     0     0
                    c7t19d0  ONLINE       0     0     0
                  mirror-3   ONLINE       0     0     0
                    c7t20d0  ONLINE       0     0     0
                    c7t21d0  ONLINE       0     0     0
                  mirror-4   ONLINE       0     0     0
                    c7t22d0  ONLINE       0     0     0
                    c7t23d0  ONLINE       0     0     0
                  mirror-5   ONLINE       0     0     0
                    c7t24d0  ONLINE       0     0     0
                    c7t25d0  ONLINE       0     0     0
        errors: No known data errors

          pool: tank
         state: ONLINE
          scan: scrub repaired 0 in 17h28m with 0 errors on Mon Nov 28 17:58:19 2011
        config:
                NAME         STATE     READ WRITE CKSUM
                tank         ONLINE       0     0     0
                  mirror-0   ONLINE       0     0     0
                    c7t1d0   ONLINE       0     0     0
                    c7t2d0   ONLINE       0     0     0
                  mirror-1   ONLINE       0     0     0
                    c7t3d0   ONLINE       0     0     0
                    c7t4d0   ONLINE       0     0     0
                  mirror-2   ONLINE       0     0     0
                    c7t5d0   ONLINE       0     0     0
                    c7t6d0   ONLINE       0     0     0
                  mirror-3   ONLINE       0     0     0
                    c7t8d0   ONLINE       0     0     0
                    c7t9d0   ONLINE       0     0     0
                  mirror-4   ONLINE       0     0     0
                    c7t10d0  ONLINE       0     0     0
                    c7t11d0  ONLINE       0     0     0
                  mirror-5   ONLINE       0     0     0
                    c7t12d0  ONLINE       0     0     0
                    c7t13d0  ONLINE       0     0     0
        errors: No known data errors

    The filesystem exposed over NFS for the primary datastore is sastank/VMStorage.

        zfs list

        NAME                          USED  AVAIL  REFER  MOUNTPOINT
        rpool                        45.1G  13.4G  92.5K  /rpool
        rpool/ROOT                   2.28G  13.4G    31K  legacy
        rpool/ROOT/solaris           2.28G  13.4G  2.19G  /
        rpool/dump                   15.0G  13.4G  15.0G  -
        rpool/export                 11.9G  13.4G    32K  /export
        rpool/export/home            11.9G  13.4G    32K  /export/home
        rpool/export/home/andrew     11.9G  13.4G  11.9G  /export/home/andrew
        rpool/swap                   15.9G  29.2G   123M  -
        sastank                      1.08T   536G    33K  /sastank
        sastank/VMStorage            1.01T   536G  1.01T  /sastank/VMStorage
        sastank/comstar              71.7G   536G    31K  /sastank/comstar
        sastank/comstar/sql_tempdb   6.31G   536G  6.31G  -
        sastank/comstar/sql_tx_data  65.4G   536G  65.4G  -
        tank                         4.79T   578G    42K  /tank
        tank/FTP                      269G   578G   269G  /tank/FTP
        tank/ISO                     28.8G   578G  25.9G  /tank/ISO
        tank/backupstage             2.64T   578G  2.49T  /tank/backupstage
        tank/cifs                     301G   578G   297G  /tank/cifs
        tank/comstar                 1.54T   578G    31K  /tank/comstar
        tank/comstar/msdtc           1.07G   579G  32.8M  -
        tank/comstar/quorum           577M   578G  47.9M  -
        tank/comstar/sqldata         1.54T   886G   304G  -
        tank/comstar/vsphere_lun     2.09G   580G  22.2M  -
        tank/mcs-asset-repository    7.01M   578G  6.99M  /tank/mcs-asset-repository
        tank/mscs-quorum               55K   578G    36K  /tank/mscs-quorum
        tank/sccm                    16.1G   578G  12.8G  /tank/sccm

    As for the networking, all connections between the X1600, the blades and the switch are either LACP or Etherchannel bonded 2x 1Gbit links. The switch is a single Cisco 3750. Storage traffic sits on its own VLAN, segregated from VM machine traffic.

    Read the article

  • mod_rewrite apache

    - by Peter
    Is there any way to hide a redirected URL? Here is what I have:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule ^(.*)$ http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://%{HTTP_HOST}%{REQUEST_URI}&force

    So I'd like the long redirected URL http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://%{HTTP_HOST}%{REQUEST_URI} to become something shorter, like /mintedomain.com/track/. Is it possible? Adrian

    Edit (in reply to Andrew): This is the stats software Mint (haveamint.com) with the File Download tracker plugin. The File Download tracker works this way: in .htaccess every file (zip, rar, txt, ...) is redirected to the tracker.php file (because of the stats): http://mydomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://%{HTTP_HOST}%{REQUEST_URI} So the redirected URL looks like this for a zip file: http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php?url=http://mydomain/downloads/apple.zip This redirected URL is very long and ugly. The best for me would be to redirect this redirected URL to a shorter URL, for example http://mydomain.com/track/downloads/apple.zip, so that http://mydomain.com/track would stand in for http://minteddomain.com/mint/pepper/tillkruess/downloads/tracker.php
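
    A minimal sketch of the short-URL idea (paths are illustrative, and this assumes the tracker script lives on the same host as the files being downloaded; pointing at a different domain such as minteddomain.com would additionally need mod_proxy and the [P] flag rather than a plain internal rewrite):

        RewriteEngine On
        # /track/downloads/apple.zip stays in the address bar while the tracker
        # is invoked internally with the original path as its url parameter
        RewriteRule ^track/(.*)$ /mint/pepper/tillkruess/downloads/tracker.php?url=http://%{HTTP_HOST}/$1 [L,QSA]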

    Read the article

  • Is there a way to create custom UIDataDetectorTypes?

    - by KingAndrew
    Hi All, What I am trying to do is create tooltip functionality so that certain words in my instructional app can be tapped and a definition pops up. For the popup part I plan on using code from “AFInformationView”, which provides bubbles on the iPhone. The part I'm struggling with is how to associate a particular word's location with the bubble. Currently I have the text on a UILabel that is on a custom UITableCell. Since I calculate the row height on the fly with:

        [textToUse sizeWithFont:[UIFont systemFontOfSize:FONT_SIZE] constrainedToSize:CGSizeMake(stop-start, 500)];

    I'm not sure what the coordinates for a specific word will be. I was thinking that creating a custom DataDetectorType could be the fix. If anyone knows how to do this or has any other ideas I would be happy to hear them. Thanks, Andrew

    Read the article

  • Which operating systems book should I go for?

    - by pecker
    Hi, I'm a bit confused. For our course (1 year ago) I used Stallings. I read it. It was fine. But I don't own any operating systems book, and I want to buy one. I'm not sure which one to pick:

    - Modern Operating Systems (3rd Edition) ~ Andrew S. Tanenbaum
    - Operating System Concepts ~ Abraham Silberschatz, Peter B. Galvin, Greg Gagne
    - Operating Systems: Internals and Design Principles (6th Edition) ~ William Stallings

    I have plans of getting into development of real-world operating systems: Linux, Unix and Windows driver development. I know that for each of these there are specific books available, but I feel one should have a basic book on the shelf. So, which one should I go for?

    Read the article
