Search Results

Search found 11985 results on 480 pages for 'legal issues'.

Page 101/480

  • Is Morton Code algorithm patented?

    - by Statement
    Synonyms: Morton Code, Morton Curve, Z-Order Curve; invented in 1966 by G. M. Morton, according to Wikipedia. Not to be confused with the Hilbert curve, which is closely related and has a similar name. I was wondering whether the mentioned algorithm is patented. More generally, is there any place one can browse patented algorithms? I am quite new to all this legal stuff and I am keen to know more. The real question I have, I guess, is: can I legally make money off a library that provides Morton coding?
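    For readers new to the algorithm under discussion: Morton coding just interleaves the bits of the coordinates, so it is tiny to implement. A minimal C sketch (the function name and the 16-bit/32-bit sizes are illustrative choices, not part of the question):

        #include <stdint.h>
        #include <stdio.h>

        /* Interleave the bits of x and y (16 bits each) into a 32-bit Morton code:
           bit i of x lands at bit 2i, bit i of y at bit 2i+1. */
        static uint32_t morton2d(uint16_t x, uint16_t y)
        {
            uint32_t code = 0;
            for (int i = 0; i < 16; i++) {
                code |= (uint32_t)((x >> i) & 1u) << (2 * i);
                code |= (uint32_t)((y >> i) & 1u) << (2 * i + 1);
            }
            return code;
        }

        int main(void)
        {
            /* x = 3 (011), y = 5 (101) -> interleaved y2x2 y1x1 y0x0 = 100111 = 39 */
            printf("%u\n", morton2d(3, 5));
            return 0;
        }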

    Read the article

  • SQLAuthority News – A Conversation with an Old Friend – Sri Sridharan

    - by pinaldave
    Sri Sridharan is my old friend and we often talk on GTalk. The subject varies from life in India/USA, movies, music, and of course SQL. We have our differences when we talk about food or movies, but we always agree when we talk about SQL. Yesterday, while chatting with him, we talked about SQLPASS and the conversation lasted a long time. Here is the conversation between us on GTalk. I have removed a few of the personal exchanges and formatted it into paragraphs, as GTalk often loses formatting. Pinal: Sri, congrats on running for the PASS BoD again. You were so close last year. What made you decide to run again this year? Sri: Thank you, Pinal, for your leadership in the PASS India community and all the things you do out there. After coming so close last year, there was no doubt in my mind that I would run again. I was truly humbled by the support I got from the community. Growing up in India for over 25 years, you are brought up in a very competitive part of the world. From the pressure of staying at the top of the class from kindergarten to graduation, to the relentless push from your parents about studying and getting good grades (and nothing else matters), you essentially end up living in a pressure cooker. To survive that relentless pressure, you need to have a thick skin, the ability to stand up for who you really are and what you want to accomplish, and in the process stay true to those values. I am striving for a greater cause: to make PASS an organization that can help people with their SQL Server careers, to make PASS relevant to its chapter members, to make PASS an organization that every SQL professional in the world wants to be connected with. Just because I did not get elected or appointed last year does not mean that these causes are not worth fighting for. Giving up after failing the first time is simply not in me. If I did that, what message would I send to those who voted for me? What message would I send to my kids? Pinal: As someone who has such strong roots in India, what can the Indian PASS community expect from you? Sri: First of all, I think fostering regional leadership is something PASS must encourage as part of its global growth plan. For PASS global, being able to understand all the issues in a region of the world and make sound decisions will be a tough thing to do on a continuous basis. I expect people like you, chapter leaders, regional mentors, and MVPs of the region to start playing a bigger role in shaping the next generation of PASS. That is something I said in my campaign and I still stand by it. I would like to see growth in the number of chapters in India. The current count does not truly represent the full potential of that region. I was pretty thrilled to see the Bangalore SQLSaturday happen early this year. I would like to see more SQLSaturday events, at least in the major metro cities. I know the issues in India are very different from the rest of the world, so the formula needs to be tweaked a little for it to work better in India. Once the SQLSaturday model is vetted out, maybe there will be enough justification to have SQLRally India. PASS needs to have a premier SQL event in that region. Going to the USA, or Europe for that matter, is incredibly hard due to visa issues, etc. So this could be a case where PASS comes closer to where the community is. Pinal: What portfolio would you take on if you are elected to the PASS Board? Sri: There are some very strong folks on the PASS Board today.
The President discusses the portfolios with the group and makes the final call on them. I am also a fan of having a team associated with the portfolios. In that case, one person is the primary for a portfolio but secondary on a couple of other portfolios. This way, people on the board have a direct vested interest in a few portfolios. Personally, I know I would do these portfolios good justice: Chapters, Global Growth, and Events (SQLSat, SQLRally). I would also try to see if we can get a director to focus on Volunteers. To me, that is very critical for growth in the international regions. Pinal: This is an interesting conversation with you, Sri. I have known you a long time, but this is indeed inspiring to many. India is a big country and we appreciate your thoughts. Sri: Thank you very much for taking the time to chat with me today. Cheers. There are pretty strong candidates in the SQLPASS Board of Directors elections this year. I know all of them in person, and honestly it is going to be extremely difficult not to be able to vote for all of them. I am indeed in a crunch right now over how to pick one over another. Though the choice is difficult, I encourage you to vote for them. I am extremely confident that the new board of directors will all have the same goal: a better SQL Server community. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, DBA, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • What does it mean that a hosting ToS DOESN'T allow HOTLINKING, and WHY is it like that?

    - by Michal P.
    Free hosting services often don't allow hotlinking. I'm not sure I understand it well, or the reason for such disapproval. I would like to keep the pictures for my web pages on Photobucket, for example, which allows hotlinking, and use those pictures on my sites via hotlinks. Is that what is not allowed? What problem do free host server owners have with accepting such links? As I understand it, only Photobucket's bandwidth is used, and it is completely legal. I've read quite enough about hotlinks, but I can't understand this simple issue.

    Read the article

  • Good PPA for ffmpeg

    - by teeks99
    It's annoying that the default ffmpeg in Ubuntu's repositories is outdated and hobbled for legal reasons; however, there's a great page on the forums that walks you through building it from scratch: http://ubuntuforums.org/showthread.php?t=786095 (I also add in support for --enable-libdc1394 --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex when I build, which is nice but not essential.) Anyway, I'm getting set to set up a computer lab running Ubuntu for a multimedia class I'm teaching, and I was wondering if there is a PPA out there that does something similar to what is available when building from scratch. The things I absolutely need from it are up-to-date libxvid, libx264, libfaac, and libmp3lame support, plus the unencumbered stuff (vorbis, theora, vpx). Is there a PPA out there that provides something like this?

    Read the article

  • How can I keep my privacy while owning multiple domain names?

    - by Abby
    I want to own, create, and run some number of domains. I do not want 'whois' to show my name, home address, and home phone to anyone who looks me up. I've already bought a mailbox that I can use for my physical address, but that doesn't answer the name and number question. What is the best way to be anonymous yet still be legal? Do I need to incorporate all my sites and form an LLC? Can I create a company name without becoming an LLC? Then there's the phone number... Thanks in advance to all who respond!

    Read the article

  • Can Separation of Duties Deter Cybercrime? YES!

    - by roxana.bradescu
    According to the CERT 2010 CyberSecurity Watch Survey: The public may not be aware of the number of incidents because almost three-quarters (72%), on average, of the insider incidents are handled internally without legal action or the involvement of law enforcement. However, cybercrimes committed by insiders are often more costly and damaging than attacks from outside. When asked what security policies and procedures supported or played a role in the deterrence of a potential cybercriminal, 36% said technically-enforced segregation of duties. In fact, many data protection regulations call for separation of duties and enforcement of least privilege. Oracle Database Security solutions can help you meet these requirements and prevent insider threats by preventing privileged IT staff from accessing the data they are charged with managing, ensuring developers and testers don't have access to production data, making sure that all database activity is monitored and audited to prevent abuse, and more. All without changes to your existing applications or costly infrastructure investments. To learn more, watch our Oracle Database Management Separation of Duties for Security and Regulatory Compliance webcast.

    Read the article

  • Can You Name the Top 10 Technology Trends?

    - by kellsey.ruppel
    Can You Name the Trends? No need to do the research. Come to this Webcast and find out. Join the conversation as Andy Mulholland, Global CTO, Capgemini, discusses the 10 game-changing technology trends that will enable business innovation. As you might expect, three of the trends discussed will be: Mobility: from nice-to-have to a cornerstone of user engagement. Big data: how to acquire, organize, and analyze it. Cloud computing: how to build applications, automate processes, collaborate, and secure the enterprise. But you'll have to attend the Webcast to learn about the other seven trends. Register now. And profit from the experience. REGISTER NOW: Thursday, July 19, 2012, 10 a.m. PT / 1 p.m. ET. Presented by: Andy Mulholland, Global CTO, Capgemini, and Christian Finn, Senior Director, Oracle WebCenter Product Management, Oracle.

    Read the article

  • Increase productivity, accelerate work-to-cash cycles, and reduce overall firm and client risk with Oracle's legal services offerings

    Law firms around the world are faced with increasing pressure to do business faster and more efficiently. Learn how firms can automate manual, paper-driven processes, ensure regulatory compliance, integrate systems and offices brought together by mergers and acquisitions, and take on new business quickly and efficiently. Understand how firms can automate manual tasks with Oracle's Whitehill One, get invoices out the door faster with Whitehill Enterprise, and go green with Whitehill Pre-Bill. In this session, you will hear about Oracle's new legal services offerings that accelerate work-to-cash cycles, increase productivity, and reduce overall firm and client risk.

    Read the article

  • How many developers before continuous integration becomes effective for us?

    - by Carnotaurus
    There is an overhead associated with continuous integration, e.g., set-up, re-training, awareness activities, stoppages to fix "bugs" that turn out to be data issues, enforced separation-of-concerns programming styles, etc. At what point does continuous integration pay for itself? EDIT: These were my findings. The set-up was CruiseControl.Net with Nant, reading from VSS or TFS. Here are a few reasons for failure, which have nothing to do with the setup: Cost of investigation: the time spent investigating whether a red light is due to a genuine logical inconsistency in the code, data quality, or another source such as an infrastructure problem (e.g., a network issue, a timeout reading from source control, a third-party server being down, etc.). Political costs over infrastructure: I considered performing an "infrastructure" check for each method in the test run. I had no solution to the timeout except to replace the build server. Red tape got in the way and there was no server replacement. Cost of fixing unit tests: a red light due to a data quality issue could be an indicator of a badly written unit test. So, data-dependent unit tests were re-written to reduce the likelihood of a red light due to bad data. In many cases, the necessary data was inserted into the test environment so the unit tests could run accurately. It makes sense to say that by making the data more robust, a test that depends on that data becomes more robust too. Of course, this worked well! (A minimal sketch of this pattern appears after this post.) Cost of coverage, i.e., writing unit tests for already existing code: there was the problem of unit test coverage. There were thousands of methods that had no unit tests, so a sizeable number of man-days would be needed to create them. As this would be too difficult to justify in a business case, it was decided that unit tests would be used for any new public method going forward. Those that did not have a unit test were termed "potentially infra-red". An interesting point here is that static methods were a moot point: it was unclear how one could uniquely determine how a specific static method had failed. Cost of bespoke releases: Nant scripts only go so far. They are not that useful for, say, CMS-dependent builds (e.g., EPiServer) or any UI-oriented database deployment. These are the types of issues that occurred on the build server for hourly test runs and overnight QA builds. I contend that these are unnecessary, as a build master can perform these tasks manually at the time of release, especially with a one-man band and a small build. So, single-step builds have not justified the use of CI in my experience. What about the more complex, multi-step builds? These can be a pain to build, especially without a Nant script. So, even having created one, these were no more successful. The costs of fixing the red-light issues outweighed the benefits. Eventually, developers lost interest and questioned the validity of the red light. Having given it a fair try, I believe that CI is expensive and there is a lot of working around the edges instead of just getting the job done. It's more cost-effective to employ experienced developers who do not make a mess of large projects than to introduce and maintain an alarm system. This is the case even if those developers leave. It doesn't matter if a good developer leaves, because the processes he follows would ensure that he writes requirement specs and design specs, sticks to the coding guidelines, and comments his code so that it is readable. All this is reviewed.
If this is not happening, then his team leader is not doing his job, which should be picked up by his manager, and so on. For CI to work, it is not enough to just write unit tests, attempt to maintain full coverage, and ensure a working infrastructure for sizeable systems. The bottom line: one might question whether fixing as many bugs as possible before release is even desirable from a business perspective. CI involves a lot of work to capture a handful of bugs that the customer could identify in UAT, or that the company could get paid to fix as part of a client service agreement once the warranty period expires anyway.
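    The stack described above was .NET (CruiseControl.Net with Nant), so the following is only a language-neutral illustration of the "insert the data your test depends on" pattern from the findings: a minimal C sketch with an in-memory store standing in for the shared test database, and all names hypothetical:

        #include <assert.h>
        #include <string.h>

        /* Toy in-memory "test environment" standing in for a shared database. */
        static char store[8][32];
        static int  store_n = 0;

        static void insert_row(const char *value)
        {
            strcpy(store[store_n++], value); /* bounds/error handling omitted for brevity */
        }

        /* Arrange: seed the exact data the test depends on, so a missing row in a
           shared environment can no longer turn the build light red. */
        static void setup(void)
        {
            store_n = 0;
            insert_row("customer-42");
        }

        static int row_exists(const char *value)
        {
            for (int i = 0; i < store_n; i++)
                if (strcmp(store[i], value) == 0)
                    return 1;
            return 0;
        }

        int main(void)
        {
            setup();                           /* arrange */
            assert(row_exists("customer-42")); /* act + assert */
            return 0;
        }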

    Read the article

  • ORACLE Cloud Summits

    - by Thomas Leopold
      Next Generation of Enterprise Cloud Computing. Mark your calendar for your Oracle Enterprise Cloud Summit: 02 March 2011 in Hannover, 03 March 2011 in Hannover, 15 March 2011 in Frankfurt, 22 March 2011 in München. If you have any questions, simply send an e-mail to [email protected].

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug. Problem 1: Debugging users' bug reports. When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning. Problem 2: Test execution time. Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless. The solution: To solve these, we needed to be able to populate the schema of a database without actually connecting to it. The IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to connect to the original database. This is what query snapshots do. They are binary files containing the raw, unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.
They also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data. Query snapshots implementation: However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just a population of a single database. Furthermore, although the results of the code population queries (e.g., querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly. Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team to help us fix bugs in the product much faster than we otherwise would be able to.
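    Schema Compare itself is .NET code built on IDataReader and BinaryWriter, but the record/replay idea is easy to see in miniature. Below is a hedged C sketch of the same pattern - raw row values streamed to a binary snapshot file at record time, then served back in their original order at replay time. The file name, value types, and layout here are illustrative assumptions, not the product's actual snapshot format:

        #include <stdio.h>
        #include <stdint.h>

        /* Record: stream raw query rows straight to a binary snapshot file
           (error handling omitted for brevity). */
        static void record_rows(const char *path, const int32_t *rows, size_t n)
        {
            FILE *f = fopen(path, "wb");
            uint64_t count = n;
            fwrite(&count, sizeof count, 1, f);  /* row-count header */
            fwrite(rows, sizeof rows[0], n, f);  /* raw, unprocessed row data */
            fclose(f);
        }

        /* Replay: hand back the stored rows in their original order,
           with no server connection needed. */
        static size_t replay_rows(const char *path, int32_t *out, size_t max)
        {
            FILE *f = fopen(path, "rb");
            uint64_t count = 0;
            fread(&count, sizeof count, 1, f);
            if (count > max) count = max;
            size_t got = fread(out, sizeof out[0], (size_t)count, f);
            fclose(f);
            return got;
        }

        int main(void)
        {
            int32_t rows[] = { 101, 102, 103 };  /* stand-in for data dictionary results */
            int32_t back[3];
            record_rows("snapshot.bin", rows, 3);
            size_t n = replay_rows("snapshot.bin", back, 3);
            printf("replayed %zu rows, first = %d\n", n, back[0]);
            return 0;
        }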

    Read the article

  • Most appropriate OSS license for infrastructure code

    - by Richard Szalay
    I'm looking into potentially releasing some infrastructure code (related to automated builds and deployments) as OSS, and I'm curious about how the various OSS licenses affect it. Specifically, the LGPL prevents the code itself (in part or whole) from being modified into a commercial product (which is what I'm after), but allows it to be "linked to" in the creation of commercial products (also OK). How does the "linked to" clause relate to infrastructure code, which is not deployed with the product itself? Would the application still be required to provide "appropriate legal notices" (which I'm not fussed over)? Would I be better off looking at the Eclipse Public License?

    Read the article

  • How do I install a DVD decryption library?

    - by rocket101
    I recently built a computer running Ubuntu 11.10. It works great, except for one thing. With some video DVDs, when I insert them into my drive, the movie player just says "An error occurred. Could not read DVD. This may be because the DVD is encrypted and a DVD decryption library is not installed." My Mac plays the DVDs just fine. What is a DVD decryption library, and how can I install it? Thanks! EDIT: I need a LEGAL way to do this. All I want is to be able to use this computer as an HTPC!

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation. A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme-performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general-purpose storage, and ultra-fast InfiniBand switches, each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes; a modified version of Solaris 11 in the ZFS Storage Appliance; a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells; another Linux derivative in the InfiniBand switches; etc. It has an InfiniBand data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful. In another, complex. The system is highly engineered, but it's designed to run general-purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, operating system versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience, as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or systems integrator had assembled such a complex system themselves from the constituent components. For example, there are myriad network protocols which could be used with InfiniBand, myriad ways the components could be interconnected, myriad tunable settings, etc. But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd-party storage, issues with 3rd-party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up - the "why aren't we seeing this on any other system?" type questions, the doubts - just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer: Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (it mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up). We quickly established that an enhancement in SRU 12 enabled an 11gR2 process to query InfiniBand's Subnet Manager, replacing a fallback mechanism it had used previously. Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction.
In several daily joint debugging sessions between the Solaris, InfiniBand, and 11gR2 teams, the issue was fully root-caused, evaluated, and a fix agreed upon. That fix went back into all Solaris releases the following Monday. From initial issue discovery to the fix being put back into all Solaris releases took just 10 days. Example 2: A customer reported sporadic performance degradation. The reasons were unclear and the information sparse. The SPARC SuperCluster Engineered Systems support team, which comprises both SPARC/Solaris and Database/Exadata experts, worked to root-cause the issue. A number of contributing factors were discovered, including tunable parameters. An intense collaborative investigation between the engineering teams traced the root cause to a CPU-bound networking thread which was being starved of CPU cycles under extreme load. Workarounds were identified. Modifications have been put back into 11gR2 to alleviate the issue, and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side. The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root-cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented. The customer is now extremely happy with performance and robustness. Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers. This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers. If this had occurred in a "home-grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue. But because it was an Engineered System - known, understood, and qualified by both the Solaris and Database teams - we were able to collaborate closely to identify cause and effect and expedite a solution for the customer. That is a key advantage of Engineered Systems which should not be underestimated. Indeed, the initial issue mitigation on the Database side followed by the final fix on the Solaris side highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams. It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • What is the convention for the star location in reference variables?

    - by Brett Ryan
    I have been learning Objective-C, and different books and examples use differing conventions for the location of the star (*) when naming reference variables. MyType* x; MyType *y; MyType*z; // this also works. Personally I prefer the first option, as it illustrates that x is a "pointer type of MyType". I see the first two used interchangeably, and sometimes differing uses of both in the same code. I want to know what the most common convention is. It's been a very long time since I've programmed in C (15 years), so I can't remember whether all variants are legal in C as well or whether this is Objective-C specific. I'd prefer answers that state why one is better than the other, along the lines of how I explained my reading of it above.
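    Not part of the original question, but a common argument for the MyType *y style is that in C's declarator grammar the star binds to the variable name, not the type, which shows up as soon as you declare two variables at once. A minimal C sketch (variable names are illustrative):

        #include <stdio.h>

        int main(void)
        {
            int* a, b;  /* looks like two pointers, but only 'a' is one */
            int  n = 42;
            a = &n;     /* a is an int*                                 */
            b = n;      /* b is a plain int - the * bound to 'a' only   */
            printf("%d %d\n", *a, b);
            return 0;
        }

    All three spacings are legal in both C and Objective-C; the compiler ignores the whitespace either side of the star, so the choice is purely a readability convention.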

    Read the article

  • Oracle Data Integrator Demo Webcast - Next Webcast - November 21st, 2013

    - by Javier Puerta
    Oracle Data Integrator Demo Webcast. Next Webcast: November 21st, 2013. The ODI Product Management team will be hosting a demonstration webcast of Oracle Data Integrator regularly. We will be showing baseline functionality and covering special topics as requested by our customers. Attendance at these webcasts is open to customers and partners. Webcast Format: The same format will be followed for each presentation: 05 minutes - Background & Overview; 30 minutes - Introduction to ODI Features; 15 minutes - Drill-Down into Special Topics; 10 minutes - Questions and Answers. Next Webcast Special Topic: Oracle Data Integrator 12c. Webcast Details: Thursday, November 21st, 2013, 10:00 AM PST | 1:00 PM EST | 6:00 PM CET (1 hour). Web Conference Link: 594 942 837 (https://oracleconferencing.webex.com). Dial-In Number: AMER: 1-866-682-4770 (More Numbers). Phone Meeting ID/Passcode: 3096713/505638. More information on Oracle Data Integrator (ODI): Learn more about Oracle Data Integrator. Download Oracle Data Integrator 12c. Oracle Data Integrator Webcast Archive.

    Read the article

  • Is it a bad practice to register sntsh.com if my name is Santosh?

    - by Santosh
    My name is Santosh, but I can't register santosh.com because it is already taken. Most extensions for Santosh are already taken. I was trying to register a domain with the .sh extension, but santo.sh would cost me a lot; I can't afford ~$100 for just a personal domain, and that's for one year only. Now I am thinking that I should register sntsh.com. But there is a problem: will sntsh.com, versus my name Santosh, create an SEO problem? One more thing, totally different from the above topic: if I register a santosh.name domain, which is not registered, won't it create copyright or other legal problems with the other santosh domains?

    Read the article

  • My error with upgrading 4.0 to 4.2 - What NOT to do...

    - by Steve Tunstall
    Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, uploaded and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were no resources to fail back and forth between the controllers. Each controller had its own private management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2. Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we had just upgraded the second controller to 4.2, so they should have been the same, right??? Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still 4.0, and the 4.2 code was there but was in the "previous" state with the rollback icon, as if it were the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine on controller 1, so I don't think that was it. Other than the fact the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet. So... we deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. Once again, it seemed to go great, rebooted, and came back up with the same issue, where it came up on 4.0 instead of 4.2. See the picture below.... HERE IS WHERE I MADE A BIG MISTAKE.... I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishworks engineer to look at the files and the code and determine what file was messed up and fix it. The system was up and working just fine; it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would roll back to the 4.2 code. Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... which was stupid on my part, because something was wrong with the 4.2 code file here and the 4.0 was just fine. So now the system could not boot at all, the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time. So...
If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK... It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time, and the system is now great and stable. Lesson learned. By the way, our buddy Ryan Matthews wanted to point out the best-practice, supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helped me with the above issue, but it's important to follow the correct procedure when doing an upgrade.
1) Upload software to both controllers and wait for it to unpack.
2) On controller "A", navigate to configuration/cluster and click "takeover".
3) Wait for controller "B" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
4) Wait for controller "B" to apply the update and finish rebooting.
5) Log in to controller "B", navigate to configuration/cluster and click "takeover".
6) Wait for controller "A" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
7) Wait for controller "A" to apply the update and finish rebooting.
8) Log in to controller "B", navigate to configuration/cluster and click "failback".

    Read the article

  • Finding it Hard to Deliver Right Customer Experience: Think BPM!

    - by Ajay Khanna
    Our relationship with our customers is not just a single interaction, and we should not treat it like one. A customer's relationship with a vendor is like a journey which starts well before the customer makes a purchase and lasts long after it. The journey may start with the customer researching a product, may lead to the eventual purchase, and may continue with support or service needs for the product. A typical customer journey can be represented as shown below: As you may notice, customers tend to use multiple channels to interact with a company throughout their journey. They also expect a consistent experience, no matter what interaction channel they choose. Customers do not like to repeat information they have already provided, and expect companies to remember their preferences and offer them relevant products and services. If the company fails to meet this expectation, customers not only will abandon the purchase and go to the competitor, but may also influence others' purchase decisions. Gone are the days when word of mouth was the only medium and a customer could influence six others. This is the age of social media, and a customer's good or bad experience, especially a bad one, gets highly amplified and may influence hundreds of others. Challenges that face B2C companies today include: Delivering consistent experience: delivering a consistent experience is challenging due to fragmented data, disjointed systems, and siloed multichannel interactions. Customers tend to get different service quality if they use the web vs. the phone vs. the store. They get different responses from different service agents, or get inconsistent answers if they call the sales vs. the service group in the company. Such inconsistent experiences result in lower customer satisfaction or NPS (net promoter score) numbers. Increasing revenue: to stay competitive, companies frequently introduce new products and services. Delay in launching such offerings has a significant impact on revenue realization. In addition to new product revenue, there are multiple opportunities to up-sell and cross-sell that impact the bottom line. If companies are not able to identify such opportunities, bring a product to market quickly, or offer the right product to the right customer at the right time, significant loss of revenue may occur. Ensuring compliance: companies must be compliant with ever-changing regulations; these could be about Know Your Customer (KYC), export/import regulations, or taxation policies. In addition to government agencies, companies also need to comply with the SLAs they have committed to their customers. A lapse in meeting any of these requirements may lead to serious fines, penalties, and loss of business. Companies have to make sure that they are in compliance with all such regulations and SLA commitments at any given time. With the advent of social networks and mobile technology, companies not only need to focus on process efficiency but also on customer engagement. Improving engagement means delivering the experience the customer is expecting, and interacting with the customer at the right time using the right channel. Customers expect to be able to contact you via any channel of their choice (web, email, chat, mobile, social media) and purchase via any viable channel (web, phone, store, mobile). Customers expect companies to understand their particular needs and remember their preferences on repeated visits.
To deliver such an integrated, consistent, and contextual experience, the power of BPM is a must. Your company may be organized in departments like Marketing, Sales, and Service. You may hold prospect data in SFA, order information in ERP, and customer issues in CRM. However, the experience delivered to the customer must not be constrained by your system legacy. BPM helps in designing the right experience for the right customer and integrates all the underlying channels, systems, and applications to make sure the right information is delivered to the right knowledge worker, or to the customer, every single time. Orchestrating information across all systems (MDM, CRM, ERP), departments (commerce, merchandising, marketing, service) and channels (email, phone, web, social) is the key, and that's what BPM delivers. In addition to orchestrating systems and channels for consistency, BPM also provides capabilities for analysis and decision management. By using data from historical transactions, social media, and other systems, users can determine customer preferences, customer value, and churn propensity. This information, in context, is then used while making a decision at a process step. A real-time decision management system can also suggest the right up-sell or cross-sell offers, discounts, or next-best-action steps for a particular customer. Timely action on customer issues or requests is also a key tenet of a good customer experience. BPM's complex event processing capabilities help companies take proactive action before issues get escalated. A BPM system can be designed to listen for certain event patterns, deduce from them the customer's situation (credit card stolen, baggage lost, change of address), and do a triage before the situation goes out of control. If such a situation arises, you can send alerts to the right people or immediately invoke corrective actions. Last but not least, one of BPM's key values is to drive continuous improvement. Learning about customers' past experiences, interactions, and social conversations provides valuable insight. Such insight can be used to improve products, customer-facing processes, and customer experience. You may take these insights as input to design better, more efficient, and more customer-friendly sales, contact center, or self-service processes. If customer experience is important for your business, make sure you have incorporated BPM as part of your strategy to design, orchestrate, and improve your customer-facing processes.

    Read the article

  • Lack of Transparency in the Supply Chain Results in Inconsistent Reporting on Conflict Minerals

    - by Terri Hiskey
    May 31, 2014 was the official deadline for U.S.-listed companies to disclose use of conflict minerals to the SEC. Of the estimated 6,000 companies that were required to file audits of the tin, gold, tungsten, or tantalum in their products, only 1,300 filed reports, and these results have revealed the ongoing challenges that many manufacturers face in complying with this legislation. An article authored by IDC analyst Heather Ashton, "Conflict Minerals Reporting Passes a Notable Milestone", notes that many leading companies such as Intel, Apple, and HP filed their reports ahead of the deadline, but other companies are struggling to trace their supply chains back to raw materials, especially as many suppliers based outside the U.S. have no legal requirement to comply with the law, since they are not U.S.-listed companies. This has resulted in widely varying levels of reporting from company to company. Check out the full article here. Are your customers experiencing the same pains?

    Read the article

  • Ubuntu will not start due to full partitions

    - by mike
    I left my computer downloading all night and downloaded 35 GB of movies (legal ...). In the morning I restarted the computer and booted into my encrypted Windows partition for work. When I then tried to boot Ubuntu, it failed and, in low-graphics mode, told me that it won't boot because the partition is full. I tried rescue mode and it reported 0 MB free. I also cannot delete files with sudo rm, as every attempt fails due to a read-only file system. I can mount the partition in Windows, but there is "write protection" there as well. Should I try a live USB?

    Read the article

  • MPL with Commercial Use Restrictions (And Other Questions)

    - by PythEch
    So basically, I want to use the MPL 2.0 for my open-source software, but I also want to forbid commercial use. I'm not a legal expert; that's why I'm asking. Should I use a dual license (MPL + Modified BSD License)? Or what does sublicensing mean? If I wanted to license my project, I would include a notice in the header. What should I do if I want to dual-license or sublicense? Also, is it OK to use nicknames as copyright owners? I am not able to distribute additional files (e.g., LICENSE.txt, README.md, etc.) with the software, simply because it is just a JS script. By open source, I mean JS code that is not obfuscated. So in this case, am I forced to use the GPL to make redistribution of obfuscated versions illegal? Thanks for reading; any help is appreciated, and answering all of the questions is not essential.

    Read the article

  • Services - Separate Sites or One Site - Impact on SEO

    - by Lynda
    I have a client who is a lawyer specializing in criminal defense and DUI; however, he does not show up well in Google. In my research, the sites that rank better have much more content for those specialties than his site does, and my thought is that he needs to add more quality content to rank better for those searches. On his site he mentions his specialties, but he also has various personal things that reflect his interests. These are clearly separated from the business portion. My questions are: 1) should he separate his personal information into a new domain, and 2) should he have a separate URL for each of his specialties? OR would one URL work as long as everything is clearly separated? I read once that for legal services to rank well, you should make a separate site for each specialty and have that site focus solely on that service.

    Read the article

  • Licensing a collaborative research project

    - by Marcus Jones
    I am involved with an international research project which involves many different universities, national labs, and companies. The project is funded by national grants and in-kind support. One task in the project is to develop code to streamline workflow in our domain (energy simulation) by scripting common pre- and post-processing tasks for different tools. We want this code to be freely distributable to the simulation community. How can we ensure that this effort is digestible by the legal departments of these different parties, such that the people involved can freely code?

    Read the article

  • .bash_history and .cache

    - by John Isaacks
    I have a user whose home directory is a Mercurial repository. Mercurial notified me that there were 2 new unversioned files in my repository: .bash_history and .cache/motd.legal-displayed. I assume .bash_history is the history of bash commands for my user. I have no idea what the other is. I don't want these files to be versioned by Mercurial. Are they safe to just delete, or will they come back or mess something up? Can they be moved somewhere else? Or do I have to add them to my .hgignore file?
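    Both files are routinely regenerated (the shell rewrites .bash_history on logout, and .cache/motd.legal-displayed is recreated when the message-of-the-day legal notice is next shown), so deleting them only helps until they reappear. Listing them in .hgignore is the usual fix; a minimal sketch using Mercurial's glob syntax:

        syntax: glob
        .bash_history
        .cache/motd.legal-displayed

    You could also ignore the whole .cache/ directory if you don't track anything under it.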

    Read the article
