Search Results


  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence. This four-session track, part of the free OTN Virtual Technology Summit on July 9, presents a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture. Oracle ACE Directors and product team experts specializing in business intelligence offer insight and expertise to help you meet your big data business intelligence challenges. Register now!

    Sessions:

    - Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships (Tom Plunkett, Lead Author of the Oracle Big Data Handbook). What does it take to build an award-winning big data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins]

    - Getting Value from Big Data Variety (Richard Tomlinson, Director, Product Management, Oracle). Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins]

    - How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture (Oracle ACE Director Kevin McGinley). More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer, and the benefits and use cases for doing so. [30 mins]

    - Oracle Data Integrator 12c As Your Big Data Integration Hub (Oracle ACE Director Mark Rittman). Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, also has the ability to load, transform and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir, and techniques and limitations when performing ETL on big data sources. [90 mins]

    Register now!


  • How to make sprint planning fun

    - by Jacob Spire
    Not only are our sprint planning meetings not fun, they're downright dreadful. The meetings are tedious and boring, and they take forever (a day, but it feels like a lot longer). The developers complain about them and dread upcoming plannings. Our routine is pretty standard (a user story is inserted into the sprint backlog by priority, the story is broken down into tasks, the tasks are estimated in hours, repeat), and I can't figure out what we're doing wrong. How can we make the meetings more enjoyable?

    Some more details, in response to requests for more information:

    Why are the backlog items not inserted and prioritized before sprint kickoff? User stories are indeed prioritized; we have no idea how long they'll take until we break them down into tasks! From the (excellent) answers here, I see that maybe we shouldn't estimate tasks at all, only the user stories. The reason we estimate tasks (and not stories) is because we've been getting story estimates terribly wrong -- but I guess that's the subject for an altogether different question.

    Why are developers complaining?

    - Meetings are long.
    - Meetings are monotonous. Story after story, task after task, struggling (yes, struggling) to estimate how long it will take and what it involves.
    - Estimating tasks makes user-story estimation seem pointless.
    - The longer the meeting, the less focus in the room. The less focused colleagues are, the longer the meeting takes. A recursive hate-spiral develops.

    We've considered splitting the meeting into two days in order to keep people focused, but the developers wouldn't hear of it. One day of planning is bad enough; now we'll have two?! Part of our problem is that we go into very small detail (in order to get more accurate estimations). But when we estimate roughly, we go way off the mark!

    To sum up the question: What are we doing wrong? What additional ways are there to make the meeting generally more enjoyable?


  • Evolution laggy due to IMAP profile or due to some odd sync issue?

    - by Izzy
    I'm fighting with Evolution. Basically it's working fine -- but it is very slow to react in certain situations.

    Helper questions:

    - Could it be that moving away from Bonobo has to do with the slow-down? There might be some trouble with the new engine and "asynchronous actions". What to do about it? Are there e.g. any configuration files?
    - I want to get the previous "working mood" back. How can I speed this thing up?

    Different scenarios:

    - When sending a mail, the composer window hangs there inactive for a couple of seconds, everything grayed out. Though there is a green check mark saying it's sent, I'm not sure a) why it's still blocking everything and b) whether I could simply close it without "breaking"/"losing" anything. In earlier versions, the composer window closed pretty fast, one could see the message being stored in the local "outbox" until it was sent, and one could immediately continue with the next task. I prefer that behaviour over the current one, where I cannot do anything in the application until the window closes.
    - Switching between modules. Coming from mail and switching to the address book takes a couple of seconds. Same for switching to the calendar.

    I read about different "possible causes" and tried a few things:

    - I only have 3 local address books, so no networking should be involved here. To make sure, I switched to offline mode and then tried to access the address book. No noticeable difference.
    - I use 3 Google Calendars. Switching to offline mode made a minor difference, but so minor that it also could be "imagination", since one might have expected this in this case.
    - According to some reports, disabling the tasks should help. Well, it didn't in my case, as I don't use them regularly (just two local items stored here).

    Maybe I should also mention that I'm using the KDE4 desktop (so no Unity or Gnome, though both are installed on the computer). And I did not have this issue before I updated to 12.04.


  • Innovation for Retailers

    - by David Dorf
    One of my main objectives for this blog is to point out emerging technologies and how they might apply to the retail industry. But ideas are just the beginning; retailers either have to rely on vendors or have their own lab to explore these ideas and see which ones work. (A healthy dose of both is probably the best solution.) The Nordstrom Innovation Lab is a fine example of dedicating resources to cultivate ideas and test prototypes. The video below, from 2011, is a case study in which the team builds an iPad app that helps customers purchase sunglasses in the store. Customers take pictures of themselves wearing different sunglasses, then can do side-by-side comparisons.

    There are a few interesting take-aways from their process. First, they are working in the store alongside employees and customers. There's no concept of documenting all the requirements and then building the product. Instead, they work closely with those who will be using the app in order to fully understand what's needed. When they find an issue, they change the software onsite and try again. This iterative prototyping ensures their product hits the mark. Feels like Extreme Programming, if you recall that movement.

    Second, they have time-boxed the project to one week. Either it works or it doesn't, and either way they've only expended a week's worth of resources. Innovation always entails failure, and those that succeed are often good at detecting failure quickly and then adjusting. Fail fast and fail often.

    Third, it's not always about technology. I was impressed they used paper designs to walk through user stories and help understand the needs of the customer. Pen and paper is the innovator's most powerful tool.

    Our Retail Applied Research (RAR) team uses some of these concepts in our development process. (Calling it a process is probably overkill.) We try to give life to concepts quickly so the rest of the organization can help us decide if we're heading in the right direction. It takes many failures before finding a successful product.


  • So Much Happening at Devoxx

    - by Tori Wieldt
    Devoxx, the premier Java conference in Europe, has been sold out for a while. The organizers (thanks Stephan and crew!) cap the attendance to make sure all attendees have a great experience, and that speaks volumes about their priorities. The speakers, hackathons, labs, and networking are all first class. The Oracle Technology Network will be there, and if you were smart/lucky enough to get a ticket, come find us and join the fun:

    IoT Hack Fest: Build fun and creative Internet of Things (IoT) applications with Java Embedded, Raspberry Pi and Leap Motion on the University Days (Monday and Tuesday). Learn from top experts Yara & Vinicius Senger and Geert Bevin at two Raspberry Pi & Leap Motion hands-on labs and hacking sessions. Bring your computer; training and equipment will be provided. Devoxx will also host an Internet of Things shop on the exhibition floor where attendees can purchase Arduino, Raspberry Pi and robot starter kits. Bring your IoT wish list!

    Video Interviews: Yolande Poirier and I will be interviewing members of the Java community in the back of the Expo hall on Wednesday and Thursday. Videos are posted on Parleys and YouTube/Java. We have a few slots left, so contact me (you can DM @Java) if you want to share your insights or a cool new tip or trick with the rest of the developer community. (No commercials, no fluff. Keep it techie and keep it real.)

    Oracle Keynote: Wednesday morning, Mark Reinhold, Chief Java Platform Architect, and Brian Goetz, Java Language Architect, will provide an update on Java 8 and beyond.

    Oracle Booth: Drop by the Oracle booth to see old and new friends. We'll have Java in Action demos and the experts to explain them and answer your questions. We are raffling off Raspberry Pis each day, so be sure to get your badge scanned. We'll have beer in the booth each evening. Look for @Java in her lab coat.

    See you at Devoxx!


  • Is it possible to get a patch included in the current release? If so, how?

    - by Oli
    So a while back I reported a bug in Compiz's Place Window plugin. It's a fairly major regression for people affected by it: mainly those using Gnome-Fallback, judging by the reports. A patch surfaced a short time later. I created a PPA for testing, and everybody involved so far is reporting the issues are fixed. It even fixes another bug. I've done testing with a standard Unity desktop and can say (for my testing) no adverse effects were visible.

    I want to get this pushed to Ubuntu right now, for two main reasons:

    - I'm selfish. I don't want to have to update my PPA every time a new version of Compiz is pushed to 12.04.
    - I don't want Ubuntu users seeing their windows flying around because of a silly little bug.

    I want this patch pushed to Ubuntu's version of Compiz as soon as possible, so we can mark these bugs fixed and move on with our lives.

    Whose leg do I have to hump to get this pulled into Ubuntu right now? I don't maintain this project, and it's an upstream thing, but it's fairly integral to Ubuntu. I could go to Compiz, but I imagine that if they accept the patch, it'll be months (at least a release) before it's anywhere near Ubuntu.

    And when I do find the right person, how can I make the process as slick as possible for them? I want them to see my request, go "Yup, that all looks great, done", and that be it. I don't want seventeen rounds of emails addressing aspects of the patch. More importantly, I don't want to waste their time either.

    And what do I have to provide them? My packaging skills are... lamentable. This was my first attempt at patching a package for redistribution, so I've probably made every single packaging error known to man. Will they be happy with the original patch (so they can apply it themselves), or should I repackage things so the diff/changelog is a little cleaner (it took me a few goes and the versioning is all over the place)?

    Note: This question is about Compiz, but I'd prefer if answers could address other styles of package too, so we have an authoritative and comprehensive thread of how to get things fixed.
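    For context, a minimal sketch of the artifact sponsors typically want to see is a debdiff against the current Ubuntu source package (the bug number, patch filename and version strings below are placeholders, not the real ones from this report):

        # grab the current Ubuntu source package and apply the upstream patch
        apt-get source compiz
        cd compiz-<version>
        patch -p1 < fix-place-window.patch
        # record the change against the (hypothetical) Launchpad bug number
        dch -i "Fix Place Window regression (LP: #000000)"
        # build an unsigned source package, then produce the diff to attach
        debuild -S -us -uc
        cd ..
        debdiff compiz_<old>.dsc compiz_<new>.dsc > compiz.debdiff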


  • What's Happening in Business Analytics at OpenWorld 2012?

    - by jmorourke
    Oracle OpenWorld 2012 is rapidly approaching on September 30th, when we take over the city of San Francisco for five days. The Business Analytics program this year is our strongest ever, with over 150 EPM, BI, Analytics and Data Warehousing sessions delivered by Oracle, our customers and partners. We'll also have Hands-On Labs, 20 demo pods dedicated to Business Analytics products, and over 30 partners exhibiting their solutions. So what's hot in the Business Analytics program at OpenWorld? Here are some of the "can't miss" sessions at this year's conference:

    - The EPM and BI general sessions, led by SVP of Product Development Balaji Yelamanchili, will highlight what's new and provide a view into Oracle's EPM, BI and Analytics strategies. Both sessions are scheduled on Monday, October 1st.
    - Thursday Keynote: See More, Act Faster: Oracle Business Analytics, led by Oracle President Mark Hurd, will provide a view into Oracle's strategy for Business Analytics, especially engineered systems designed to provide extreme performance for the most rigorous analytic tasks.
    - Superfast Business Intelligence with Oracle Exalytics. Hear about various business intelligence scenarios in which Oracle Exalytics provides exemplary value, from operational reporting and prepackaged applications to analytics on unstructured data.
    - Turn Insights into Real-Time Actions with Oracle Business Intelligence Mobile. Learn how Oracle Business Intelligence Mobile enables organizations to deliver relevant information and turn insight into real-time action, no matter where employees are located.
    - Empowering the Business User: Introduction to Oracle Endeca Information Discovery. Find out how you can find fast answers to the new questions that confront your business every day, while avoiding the confusion and inconsistencies brought about by spreadsheets and desktop tools.
    - Big Data: The Big Story. Learn how to harness big data, your existing data, and predictive analytics to make better decisions in an environment of rapid shifts in behavior and instant feedback. Learn about the technologies that constitute a big data architecture, how to leverage and implement advanced analytics for real-time decisions, and the tools needed to know the unknown.
    - Planning at the Speed of Business with Oracle Exalytics. Learn how Oracle Hyperion Planning leverages the power of Oracle Exalytics to do planning faster, with more detail and more users than ever.

    For more details on these and other Business Analytics sessions at OpenWorld, download the Focus On Business Analytics program guide at: http://www.oracle.com/openworld/focus-on/index.html

    We look forward to seeing you in San Francisco!


  • Working with Windows and Unix

    - by user554629
    Beware of newline characters.

    One of the most frequent issues we encounter in Tech Support is the corruption of files that are transferred between Windows and Unix. The transfer can occur at any stage, but ultimately involves a transfer of a file using an ftp client running on Windows; it could be ftp or FileZilla.

    Windows uses two characters to mark the end of a line in a text file (CR/LF): carriage return, linefeed. Unix uses a single character, the linefeed (LF). In all situations, it is best to use binary mode transfer for all files, including ascii text files.

    Common problems:

    - Upload a core file from Unix to Windows using ftp in ascii mode. The file is going to be larger on Windows than on Unix. ftp doesn't know if this is a text file with real line ends; it takes every ascii LF and transmits two ascii characters, CR/LF. The core file, tar file, library ... will be corrupted when transferred to Oracle.
    - Download a shell script to Windows, and transfer it to Unix using ftp. If the file is edited on Windows, the Unix script's line-end characters will be doubled. Unix doesn't know how to handle that, and will likely tell you the script is not executable. Why? The first line of a shell script (called the "sh-bang" line) identifies the command interpreter the Unix shell should use for this script. Common examples:

        #!/bin/sh
        #!/bin/ksh
        #!/bin/bash
        #!/bin/perl
        #!/bin/sh^M       # will not be understood
        #!/bin/env ksh    # special syntax: find ksh and run it

    dos2unix is a common utility found on most Unix platforms that repairs the issue of Windows line-end characters in Unix script files. I've written my own flavor of this utility for use in Tech Support and build environments, which is a bit easier to use and has some nice side-effects:

    - accepts a list of files: dos2unix *.sh
    - repairs the file in place; doesn't generate a new file you have to name
    - retains the same timestamp; it is the encoding that changed, not the file content

    Here are the versions of dos2unix for each of the environments we work in. They are compressed with gzip, to avoid the ftp ascii transfer trap, and because I am quite limited in the number of files I can upload to this blog.

    AIX | Linux | Solaris sparc | Windows
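    In the same spirit, here is a minimal sketch of such a repair as a shell script (assuming GNU sed for in-place editing; the dos2unix utility described above is a compiled tool, not this script):

        #!/bin/sh
        # strip the trailing CR from each line, in place, preserving timestamps
        for f in "$@"; do
            touch -r "$f" "/tmp/stamp.$$"   # remember the original timestamp
            sed -i 's/\r$//' "$f"           # remove the Windows CR before each LF
            touch -r "/tmp/stamp.$$" "$f"   # put the timestamp back
        done
        rm -f "/tmp/stamp.$$"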


  • ADF Real World Developers Guide Book Review

    - by Grant Ronald
    I'm halfway through my review of "Oracle ADF Real World Developer's Guide" by Jobinesh Purushothaman. Unfortunately some work deadlines derailed me from having completed my review by now, but here goes.

    First thing: Jobinesh works in the Oracle Product Management team with me, so he is a colleague. That declaration aside, it's clear that this is someone who has done the "real world" side of ADF development, and that comes out in the book. In this book he addresses both newbies and experienced developers alike. He introduces the ADF building blocks like entity objects and view objects, but also goes into some of the nitty-gritty details as well. There is a pro and a con to this approach: having only just learned about an entity or view object, you might then be blown away by some of the lower-level details of coding or lifecycle. In that respect, you might consider this a book which you could read 3 or 4 times, maybe skipping some elements on the first read, but on the next read you have a better grounding to learn the more advanced topics.

    One of the key issues he addresses is breaking down what happens behind the scenes. At first, this may not seem important, since you trust the framework to do everything for you, but having an understanding of what goes on is essential as you move through development. For example, on page 58 he explains the full lifecycle of what happens when you execute a query. I think this is a great feature of his book. You see this elsewhere; for example, he explains the full lifecycle of what goes on when a page is accessed: which files are involved, the JSF lifecycle, etc. He also sprinkles the book with best practices and advice which go beyond the standard features of ADF and really hit the mark in terms of "real world" advice.

    So in summary, this is a great ADF book, well written and covering a mass of information. If you are brand new to ADF it's still valid, given it does start with the basics. But you might want to read the book 2 or 3 times, skipping the advanced stuff on the first read. For those who have some basics already, it's going to be an awesome way to cement your knowledge and take it to the next level. And for the ADF experts, you are still going to pick up some great ADF nuggets. Advice: every ADF developer should have one!


  • Development Approach: User Interface In or Domain Model Out?

    - by Berin Loritsch
    While I've never delivered anything using Smalltalk, my brief time playing with it has definitely left its mark. The only way to describe the experience is MVC the way it was meant to be. Essentially, all the heavy lifting for your application is done in the business objects (or domain model, if you are so inclined). The standard controls are bound to the business objects in some way. For example, a text box is mapped to an object's field (the field itself is an object, so it's easy to do). A button is mapped to a method. This is all done with a very simple and natural API. We don't have to think about binding objects, etc. It just works.

    Yet in many newer languages and APIs you are forced to think from the outside in. First with C++ and MFC, and now with C# and WPF, Microsoft has gotten its developer world hooked on GUI builders where you build your application by implementing event handlers. Java Swing development isn't so different, only you are writing the code to instantiate the controls on the form yourself. For some projects, there may never even be a domain model -- just event handlers.

    I've been in and around this model for most of my career. Each way forces you to think differently. With the Smalltalk approach, your domain is smart while your GUI is dumb. With the default Visual Studio approach, your GUI is smart while your domain model (if it exists) is rather anemic. Many developers that I work with see value in the Smalltalk approach, and try to shoehorn that approach into the Visual Studio environment. WPF has some dynamic binding features that make it possible, but there are limitations. Inevitably some code that belongs in the domain model ends up in the GUI classes.

    So, which way do you design/develop your code? Why?

    - GUI first. User interaction is paramount.
    - Domain first. I need to make sure the system is correct before we put a UI on it.

    There are pros and cons for either approach. Domain model fits in there with crystal cathedrals and pie in the sky. GUI fits in there with quick and dirty (sometimes really dirty). And for an added bonus: how do you make sure the code is maintainable?


  • Mapping Drive Error - System Error 1808

    - by Julian Easterling
    A vendor is attempting to map and preserve a network drive using nt authority\system, so it stays persistent when the interactive session of the server is lost. They were able to do this on one server (Windows 2008 R2) but not on a second computer (also Windows 2008 R2).

        D:\PsExec.exe -s cmd.exe

        PsExec v1.98 - Execute processes remotely
        Copyright (C) 2001-2010 Mark Russinovich
        Sysinternals - www.sysinternals.com

        Microsoft Windows [Version 6.1.7600]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\Windows\system32>whoami
        nt authority\system

        C:\Windows\system32>net use
        New connections will be remembered.

        Status       Local     Remote             Network
        -------------------------------------------------------------------
        OK           X:        \\netapp1\share1   Microsoft Windows Network
        The command completed successfully.

        C:\Windows\system32>net use q: \\netapp1\share1
        System error 1808 has occurred.

        The account used is a computer account. Use your global user account or local user account to access this server.

    I am unsure how to set up a "machine account mapping" which will preserve the drive letter of the NetApp path being mapped, so that the service account running a Windows service can continue to access the share after the interactive logon has expired on the server. Since they were able to do this on one server but not another, I'm not sure how to troubleshoot the problem. Any suggestions?
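    One hedged workaround sketch (the account name and password are placeholders, and this assumes the filer accepts a named user rather than the computer account): since the LocalSystem context authenticates over the network as the computer account, supplying explicit credentials on the mapping sidesteps the "computer account" rejection.

        C:\Windows\system32>net use q: \\netapp1\share1 Passw0rd /user:netapp1\svcuser /persistent:yes

    If that works, the difference between the two servers would then come down to how each filer or share is configured to handle machine-account authentication.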


  • Invalid format for New Relic licence when installing on Elastic Beanstalk

    - by BenFreke
    We've created an app that is running on an Elastic Beanstalk instance, 64-bit PHP version 5.4 (so not legacy). I've used the New Relic installation instructions to install New Relic, and viewing phpinfo shows that New Relic is installed. However, I'm not getting any data in New Relic, and that is because it is saying that the licence is ***invalid format*** under newrelic.licence.

    I'm getting the licence from my New Relic account, and it is a 40-character hexadecimal string. Here is the current newrelic.config file in the .ebextensions folder I'm using, with most of the licence key redacted.

        packages:
          yum:
            newrelic-php5: []
          rpm:
            newrelic: http://yum.newrelic.com/pub/newrelic/el5/x86_64/newrelic-repo-5-3.noarch.rpm
        commands:
          configure_new_relic:
            command: newrelic-install install
            env:
              NR_INSTALL_SILENT: true
              NR_INSTALL_KEY: ec9a4...

    Skitch of relevant phpinfo

    Can anyone shed some light on what's going on here? I've tried two different New Relic licence keys with the same error. I've also surrounded the key with single quote marks and tried uppercase only, and at this point I'm out of ideas on what to try. We're not AWS gurus, so it could very easily be something simple, like not opening a port to allow the licence to be validated?
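    A hedged first check (my suggestion, not from the post): the agent reports "invalid format" when the ini value it loaded is missing or malformed, so it's worth confirming on the instance what key the PHP agent actually sees, as opposed to what the installer was given:

        # what the CLI-loaded agent sees
        php -i | grep -i newrelic.license
        # where the key ended up on disk (paths are typical, not guaranteed)
        grep -r "newrelic.license" /etc/php.d/ /etc/php.ini 2>/dev/null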


  • smtp.gmail.com from bash gives "Error in certificate: Peer's certificate issuer is not recognized."

    - by ndasusers
    I needed my script to email the admin if there is a problem, and the company only uses Gmail. Following a few posts' instructions, I was able to set up mailx using a .mailrc file. First there was the error of nss-config-dir. I solved that by copying some .db files from a Firefox directory to ./certs and pointing to it in .mailrc. A mail was sent. However, the error above came up.

    By some miracle, there was a Google certificate in the .db. It showed up with this command:

        ~]$ certutil -L -d certs

        Certificate Nickname                          Trust Attributes
                                                      SSL,S/MIME,JAR/XPI
        GeoTrust SSL CA                               ,,
        VeriSign Class 3 Secure Server CA - G3        ,,
        Microsoft Internet Authority                  ,,
        VeriSign Class 3 Extended Validation SSL CA   ,,
        Akamai Subordinate CA 3                       ,,
        MSIT Machine Auth CA 2                        ,,
        Google Internet Authority                     ,,

    Most likely, it can be ignored, because the mail worked anyway. Finally, after pulling some hair and many googles, I found out how to rid myself of the annoyance. First, export the existing certificate to an ASCII file:

        ~]$ certutil -L -n 'Google Internet Authority' -d certs -a > google.cert.asc

    Now re-import that file, and mark it as trusted for SSL certificates, a la:

        ~]$ certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i google.cert.asc

    After this, listing shows it trusted:

        ~]$ certutil -L -d certs

        Certificate Nickname                          Trust Attributes
                                                      SSL,S/MIME,JAR/XPI
        ...
        Google Internet Authority                     C,,

    And mailx sends out with no hitch:

        ~]$ /bin/mailx -A gmail -s "Whadda ya no" [email protected]
        ho ho ho
        EOT
        ~]$

    I hope it is helpful to someone looking to be done with the error. Also, I am curious about some things. How could I get this certificate if it were not in the Mozilla database by chance? Is there, for instance, something like this?

        ~]$ certutil -A -t "C,," \
            -n 'gmail.com' \
            -d certs \
            -i 'http://google.com/cert/this...'
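    On that last question, a hedged sketch: as far as I know certutil can't fetch over HTTP itself, but openssl can pull the presented chain straight off the server (assumes openssl is installed; smtp.gmail.com:465 is the SMTPS endpoint the post is using):

        # grab the certificate chain presented by Gmail's SMTP endpoint
        ~]$ openssl s_client -connect smtp.gmail.com:465 -showcerts </dev/null 2>/dev/null \
              | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > google.cert.asc
        # then import it trusted, exactly as above
        ~]$ certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i google.cert.asc

    Note that -showcerts writes the whole chain to the file; you may need to split out the one certificate you want before importing.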


  • Having trouble using psservice and sc.exe between Windows Server 2008 machines

    - by Teflon Mac
    I'm trying to control services on one W2k8 machine from another; no domain, just a workgroup. The user account I'm logged in as is an administrator on both machines. I've tried both psservice and sc.exe. These work in a Windows Server 2003 environment, but it looks like I need an extra step or two due to the changed security model in 2008. Any ideas as to how to grant permission to the Service Control Manager (psservice) or OpenService (sc)? I tried running the DOS window with "Run As Administrator" and it made no difference.

    With psservice I get the following:

        D:\mydir>psservice \\REMOTESERVER -u "adminid" -p "adminpassword" start "Display Name of Service"

        PsService v2.22 - Service information and configuration utility
        Copyright (C) 2001-2008 Mark Russinovich
        Sysinternals - www.sysinternals.com

        Unable to access Service Control Manager on \\REMOTESERVER:
        Access is denied.

    On the remote server, I get the following message in the Security log, so I know I connect and log in to the remote machine. I assume it then fails on a subsequent authorization step. The logoff message in the Security log is just that ("An account was logged off."), so no extra info there.

        Special privileges assigned to new logon.

        Subject:
            Security ID:     REMOTESERVER\adminid
            Account Name:    adminid
            Account Domain:  REMOTESERVER
            Logon ID:        0xxxxxxxx

        Privileges:  SeSecurityPrivilege
                     SeBackupPrivilege
                     SeRestorePrivilege
                     SeTakeOwnershipPrivilege
                     SeDebugPrivilege
                     SeSystemEnvironmentPrivilege
                     SeLoadDriverPrivilege
                     SeImpersonatePrivilege

    sc.exe is similar. The command syntax and error differ as below, but I also see the same login message in the remote server's Security log.

        D:\mydir>sc \\REMOTESERVER start "Registry Name of Service"
        [SC] StartService: OpenService FAILED 5:
        Access is denied.
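    A hedged possibility (my assumption, not something established in the post): in a workgroup, Remote UAC filters the token of local administrators connecting over the network, which produces exactly this logon-then-access-denied pattern on 2008. The documented workaround is to set the LocalAccountTokenFilterPolicy value on the remote machine:

        C:\>reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

    This loosens security on that box (remote local-admin tokens are no longer filtered), so treat it as a diagnostic step rather than a permanent fix.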


  • Desktop SATA drives in SATA <-> FC array

    - by chris
    Let's assume you've got a box like one of these with space for 24 SATA disks. What are the best bits of advice for deploying this?

    For instance, should you be greedy and go for the 1.5 or 2 TB disks, or are they just not reliable enough to be used in an array like this, so you should stick with 640 GB or 750 GB disks instead?

    Also, I know that FC (or generically, "enterprise class") disks have a different error recovery strategy than desktop disks. An enterprise disk will fail a read quickly and report to the controller that it wasn't able to read that block, and the RAID controller will quickly regenerate the info from the parity disk and mark the block as bad. A desktop disk, on the other hand, will try and try and try again to get the data, and in pathological cases this may cause a RAID controller to fail the whole disk because the read operation times out.

    So there are a couple of aspects to this question:

    - What's the best sort of disk to get today? (i.e. specific disks on the market in Feb 2010)
    - Generically, what should someone look for when trying to buy something like this that kinda walks the line between enterprise and consumer?
    - Lastly, is there anything that can be done with current "consumer" disks to make them more suitable for array use? I.e. can you use a SMART configuration to change the error recovery strategy used by the disk?

    Thanks!
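    On that last point, a hedged sketch: many drives expose the error-recovery timeout through SCT Error Recovery Control, which newer smartmontools releases can read and set (assuming the drive supports SCT ERC at all; the device name is a placeholder):

        # read the current read/write error-recovery timeouts
        smartctl -l scterc /dev/sda
        # set both timeouts to 7.0 seconds (units of 100 ms), TLER-style
        smartctl -l scterc,70,70 /dev/sda

    The setting typically does not survive a power cycle, so it would need to be reapplied at boot.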


  • Gmail: security warning icon

    - by Notetaker
    Hello, I just enabled some Gmail Labs programs in my Gmail account, and then I noticed an orange triangle icon with an exclamation mark in it at the end of the address bar of my Google Chrome browser. Clicking on it brought forth a "Security Information" dialog box, with the following messages:

        mail.google.com
        The identity of this website has been verified by Thawte SGC CA.

        Your connection to mail.google.com is encrypted with 128-bit encryption. However, this page includes other resources which are not secure. These resources can be viewed by others while in transit, and can be modified by an attacker to change the look or behavior of the page.

    I then logged into two of my other Gmail accounts, one of which has no Gmail Labs programs enabled, and the other with one program enabled quite some time ago, both with the same result as above (i.e., with the appearance of the orange triangle warning sign in the address bar). I don't remember seeing the orange triangle before, but I'm not sure if it has ever appeared or not. I have "Always use https" enabled for my Gmail accounts.

    My questions are:

    1. Is there a way to identify and remove these un-secure "resources"? (Could enabling Gmail Labs programs have brought these on?)
    2. Meanwhile, are my Gmail accounts compromised and unsafe to use? If so, what should I be doing about that now?
    3. After this problem is solved, would I need to reset the password for my Gmail accounts, and/or take any other measures to restore their security?

    Many thanks for answering my questions!


  • SPF hardfail and DKIM failure when recipient has e-mail forwarding

    - by Beaming Mel-Bin
    I configured hardfail SPF for my domain and DKIM message signing on my SMTP server. Since this is the only SMTP server that should be used for outgoing mail from my domain, I didn't foresee any complications. However, consider the following situation: I sent an e-mail message via my SMTP server to my colleague's university e-mail. The problem is that my colleague forwards his university e-mail to his GMail account. These are the headers of the message after it reaches his GMail mailbox:

        Received-SPF: fail (google.com: domain of [email protected] does not designate 192.168.128.100 as permitted sender) client-ip=192.168.128.100;
        Authentication-Results: mx.google.com;
            spf=hardfail (google.com: domain of [email protected] does not designate 192.168.128.100 as permitted sender) [email protected];
            dkim=hardfail (test mode) [email protected]

    (Headers have been sanitized to protect the domains and IP addresses of the non-Google parties.)

    GMail checks the last SMTP server in the delivery chain against my SPF and DKIM records (rightfully so). Since the last SMTP server in the delivery chain was the university's server and not my server, the check results in an SPF hardfail and a DKIM failure. Fortunately, GMail did not mark the message as spam, but I'm concerned that this might cause a problem in the future.

    Is my implementation of SPF hardfail perhaps too strict? Any other recommendations or potential issues that I should be aware of? Or maybe there is a more ideal configuration for the university's e-mail forwarding procedure? I know that the forwarding server could possibly change the envelope sender, but I see that getting messy.
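    For reference, a hedged sketch of the softer policy option (the domain and IP below are placeholders): publishing a softfail (~all) instead of a hardfail (-all) keeps the signal for receivers without asking them to reject plainly-forwarded mail.

        # a softfail record, shown as the DNS zone line it would be published as:
        #   example.com.  IN TXT  "v=spf1 ip4:203.0.113.25 ~all"
        # confirm what receiving servers will actually see:
        dig +short TXT example.com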


  • Dell Management Packs in System Center Operations Manager 2007 R2?

    - by bwerks
    Hey all, I recently set up SCOM in a small business network environment. The root management server is a Dell PowerEdge 2950, and I'd like to use SCOM to monitor it using Dell's management packs. I've imported the management packs into the SCOM deployment and followed Dell's installation instructions, but it doesn't seem to be fully working yet.

    Currently, the Diagram views in the Dell tree (Monitoring tab) seem to show me the server's place in the network topology, so it seems that at least part of it is working. However, none of the reports under "Performance and Power Monitoring Views" provide any information. When clicking on one of them (Power Consumption (Watts), for instance), the display area is blank and there is a tooltip visible that reads "No performance counter is selected. To select a counter, place a check mark in the Show column in legend below." However, in the legend, there's nothing there for me to check.

    I've installed OpenManage 6.2 on the server as per the Dell documentation, but I don't know what else I could have done that I missed. Does this sound like a familiar problem to anyone?


  • Why does my dd backup of MacBook OS X fail to boot upon restore?

    - by James
    I created a backup of a MacBook hard drive (WD2500BEVS-88US) by hooking it up as a secondary drive on my Linux system (Ubuntu 10.10). I used the following command:

        sudo dd if=/dev/sdc of=/home/backup.img bs=2M

    This appears to have completed with no errors. I noticed that the file is only 68 GB in size even though the drive is 250 GB in capacity. I restored the image to a spare drive (WD2500BEVS) with the following command:

        sudo dd if=/home/backup.img of=/dev/sdb bs=2M

    When I boot the spare drive in the Mac, it appears to start up for a few seconds and then shuts down. (It does not appear to load into the OS at all.)

    When I open up the drive that won't boot in GParted, it looks like this: [screenshot]. When looking at the information for the middle partition with the little red exclamation mark, it shows this: [screenshot]. The original hard drive that boots OK shows up like this: [screenshot].

    Further info on both drives:

        sudo fdisk -l

        Disk /dev/sdb: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1       30402   244198580   ee  GPT

        WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sdc: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1       30402   244198580   ee  GPT

    So why is my backup or restore failing? Why is dd not creating a byte-for-byte duplicate?
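    One hedged sanity check (my suggestion, not from the post): a complete dd image of a 250 GB device should itself be 250 GB, so a 68 GB file suggests the copy stopped early. Comparing digests of the device and the image would confirm whether the image really is byte-for-byte:

        # hash the source device and the image; the two digests should match
        sudo dd if=/dev/sdc bs=2M | sha256sum
        sha256sum /home/backup.img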


  • Drupal with clean URLs turned on is putting question marks in the URL

    - by aussiegeek
    I have a Drupal site with clean URLs. The pages load correctly, but then the URL is rewritten with a question mark in it, which I don't want the user to see. My .htaccess is:

        <IfModule mod_rewrite.c>
          RewriteEngine on

          # If your site can be accessed both with and without the 'www.' prefix, you
          # can use one of the following settings to redirect users to your preferred
          # URL, either WITH or WITHOUT the 'www.' prefix. Choose ONLY one option:
          #
          # To redirect all users to access the site WITH the 'www.' prefix,
          # (http://example.com/... will be redirected to http://www.example.com/...)
          # adapt and uncomment the following:
          # RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
          # RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]
          #
          # To redirect all users to access the site WITHOUT the 'www.' prefix,
          # (http://www.example.com/... will be redirected to http://example.com/...)
          # uncomment and adapt the following:
          # RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
          # RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

          # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
          # VirtualDocumentRoot and the rewrite rules are not working properly.
          # For example if your site is at http://example.com/drupal uncomment and
          # modify the following line:
          # RewriteBase /drupal
          #
          # If your site is running in a VirtualDocumentRoot at http://example.com/,
          # uncomment the following line:
          RewriteBase /

          # Rewrite URLs of the form 'x' to the form 'index.php?q=x'.
          RewriteCond %{REQUEST_URI} !(connect|administration)
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_URI} !=/favicon.ico
          RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </IfModule>


  • VMware databursts to vCenter

    - by Erwin Blonk
    [Edit Nov. 16th 2009: Thank you for the responses. I'm no longer on this project so the problem is out of my hands. Be sure I will return with other problems :-) ]

    I have a strange occurrence of regular data bursts over my WAN links from 2 sites to a site with vCenter. What happens is that a vCenter server is managing local ESX 3.5 servers as well as 2 servers from 2 remote sites across a WAN link. Each server sends approximately 3 MB worth of TLS data (less than 10% of the time it varies higher or lower) every 15 minutes (with a margin of 2 minutes). So far, I've not been able to single out a process that causes it. I looked through all applications on each site.

    So far, it seems to originate from one server on each site. Although it may be coincidence and therefore not relevant, I found that one server, with very few exceptions, does a burst at 00:00 on the hour. The other 3 bursts during the hour are a bit off the 15-minute mark, but back at the top of the hour you can sync your watch on it. The other server follows 5 minutes after that with no such precision. But, as said, it never differs by more than 2 minutes.

    Servers are ESX 3.5, vCenter is 2.5.
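    For anyone chasing something similar, a hedged capture sketch from the ESX service console (the interface name, host name and IP are placeholders; vswif0 is the usual ESX 3.x service-console interface, but yours may differ):

        # capture traffic toward vCenter across one 15-minute window
        tcpdump -i vswif0 host vcenter.example.com -w /tmp/burst.pcap
        # during a burst, see which local process owns the connection
        netstat -anp | grep 10.0.0.5    # (placeholder vCenter IP)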


  • Cygwin, ssh, and git on Windows Server 2008

    - by Paul
    Hi everyone. I'm trying to set up a git repository on an existing Windows 2008 (R2) server. I have successfully installed Cygwin and added git and ssh to the packages, and everything works perfectly (thanks to Mark for his article on it). I can ssh to localhost on the server, and I can do git operations locally on the server. When I try to do either from the client, however, I get the "port 22, Bad file number" error. Detailed SSH output is limited to this:

        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug1: Connecting to {myserver} [{myserver}] port 22.
        debug1: connect to address {myserver} port 22: Attempt to connect timed out without establishing a connection
        ssh: connect to host {myserver} port 22: Bad file number

    Google tells me that this usually means I'm being blocked by a firewall. So, I double-checked the firewall settings on the server; the rule is there allowing port 22 traffic. I even tried turning off the firewall briefly; no change in behavior. I can ssh just fine from that client to other servers. The hosting company swears that there are no other firewalls blocking that server on port 22 (or any other port, they claim, but I find that hard to believe). I have another trouble ticket in to them, just in case the first support person was full of it, but meanwhile I wanted to see if anyone could think of anything else it could be. Thanks, Paul
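    A hedged way to separate ssh problems from plain TCP reachability (this assumes a netcat package is installed on the client; {myserver} is the same placeholder as in the post):

        # from the client: does anything answer on TCP 22 at all?
        nc -zv {myserver} 22
        # on the server: confirm sshd is actually listening on all interfaces
        netstat -an | grep ":22"

    If nc times out too, the problem is in the network path rather than the Cygwin sshd configuration.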


  • Monitor Exchange Email Address and run scripts

    - by WernerCD
    Okay... not sure how "out there" this thought is...

    Right now, to send a pager message (aka text message), a user logs into our AS400, logs into the program, enters a user name and message, and hits F10 to send. With a little looking, it seems that you can run remote commands on the AS400 via FTP. So I'm working on building a script (batch or otherwise) that, given two parameters (user, message), will FTP into the AS400 and run a remote command:

        c:\>ftp server
        user: admin
        password: *****
        ftp> quote rcmd SNDPGRMSG TOPGR(JDOE) MSG('This is a Test')
        ftp> quit

    So... what I want to do is:

    1. set up an email account on our Exchange server
    2. monitor the account for incoming mail
    3. upon receipt of incoming mail, parse it... say, for example, the subject is defined as "Recipient" and the email text is defined as "Pager message"
    4. run a batch that uses the above-mentioned TOPGR and MSG as parameters... via FTP to the AS400
    5. mark the email as "read"

    The main thing I'm not sure about is monitoring an Exchange account and running a script on incoming emails. I'm sure what I want to do is possible... but where would I start?

    EDIT: Clarification. The main reasons for using this system are logging (messages sent via this are logged and reported by the AS400 program) and the existing scheduler for redirecting pages (for example, the weekly on-call person = TOPGR(oncall) gets updated by the AS400 program). I'm also trying to remove duplicate work. If I can get this setup working, I can redirect pages from OTHER systems into this one. I then don't have to update 2, soon to be 3, systems with current phone numbers, carriers, on-call schedules, etc. Systems #2 and #3 can just "email" [email protected].
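    For the AS400 half, a hedged batch sketch of the scripted FTP call (server name, account and password are placeholders; this is the classic ftp -s pattern, where the lines after "open" answer the login prompts):

        @echo off
        rem usage: sendpage.bat JDOE "This is a Test"
        > "%TEMP%\ftpcmd.txt"  echo open server
        >> "%TEMP%\ftpcmd.txt" echo admin
        >> "%TEMP%\ftpcmd.txt" echo password
        >> "%TEMP%\ftpcmd.txt" echo quote rcmd SNDPGRMSG TOPGR(%1) MSG('%~2')
        >> "%TEMP%\ftpcmd.txt" echo quit
        ftp -s:"%TEMP%\ftpcmd.txt"
        del "%TEMP%\ftpcmd.txt"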


  • solaris zpool SSD cache device "faulted"

    - by John-ZFS
    I am trying to get past these SATA SSD errors: the smartctl command failed to read the SATA SSD ("SATA is not supported"). What could be the reason for the errors? Does this mean that the SSD has reached EOL and needs to be replaced?

        pool: zpool1216
        state: DEGRADED
        status: One or more devices are faulted in response to persistent errors.
                Sufficient replicas exist for the pool to continue functioning in
                a degraded state.
        action: Replace the faulted device, or use 'zpool clear' to mark the device
                repaired.
        scan: scrub repaired 0 in 0h24m with 0 errors on Fri May 18 14:31:08 2012
        config:

                NAME          STATE     READ WRITE CKSUM
                zpool1216     DEGRADED     0     0     0
                  raidz1-0    ONLINE       0     0     0
                    c11t10d0  ONLINE       0     0     0
                    c11t11d0  ONLINE       0     0     0
                    c11t12d0  ONLINE       0     0     0
                    c11t13d0  ONLINE       0     0     0
                    c11t14d0  ONLINE       0     0     0
                    c11t15d0  ONLINE       0     0     0
                    c11t16d0  ONLINE       0     0     0
                    c11t1d0   ONLINE       0     0     0
                    c11t2d0   ONLINE       0     0     0
                    c11t3d0   ONLINE       0     0     0
                    c11t4d0   ONLINE       0     0     0
                    c11t5d0   ONLINE       0     0     0
                    c11t6d0   ONLINE       0     0     0
                    c11t7d0   ONLINE       0     0     0
                    c11t8d0   ONLINE       0     0     0
                    c11t9d0   ONLINE       0     0     0
                logs
                  c9d0        FAULTED      0     0     0  too many errors
                cache
                  c10d0       FAULTED      0    17     0  too many errors

        errors: No known data errors
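    A hedged recovery sketch using the pool and device names from the post (this assumes the devices themselves are still usable; a cache device can be removed and re-added without risk to pool data):

        # retry the faulted log device
        zpool clear zpool1216 c9d0
        # drop and re-add the faulted cache device (or add a replacement disk)
        zpool remove zpool1216 c10d0
        zpool add zpool1216 cache c10d0

    If the errors come straight back, the SSDs probably do need replacing. The smartctl "SATA is not supported" message, on the other hand, is often just the controller not passing SMART through (a -d device-type option may help) rather than a verdict on the drive.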


  • Can't view other computers on Network

    - by Darkart
    Systems:

    - My machine: Windows 7 Ultimate, connected through ethernet to the router
    - Her machine (the "other machine"): Windows 7 Ultimate, connected through wireless
    - Router: F5D8236-4 N Wireless Router, version 1, firmware 2.01.03 (Apr 28 2009)
    - ISP: Comcast

    Problem: I can not view the other machine on the network at all. I opened a command prompt and ran net view and saw the PC name. I tried pinging the PC and it times out. I went inside the router and tried viewing the computer on the DHCP list, and it can not be seen. I restored the router back to default settings and firmware, completely reset the modem and router, and created a homegroup. I went to the other machine to configure homegroup settings and made sure that both PCs had identical settings. She was able to see my machine, but I could not see hers. I restarted both machines, and now we can't see each other at all. Also, her PC (the "other machine") had an exclamation mark in the wireless icon but was connected just fine. There are no firewalls on currently and no anti-virus enabled, and we still can not see each other.

    Right now I am checking for updated drivers for the wireless card, but my question is: could it be the router, or something hardware related? I have gone through all the settings in the homegroup and visited most FAQs, and still no luck. Also, as it stands, I can not view her machine inside the router's DHCP Client List :(

