Search Results

Search found 15670 results on 627 pages for 'multi level'.


  • BizTalk Server 2009 R2 = BizTalk Server 2010

    - by Rajesh Charagandla
    Microsoft has renamed BizTalk Server 2009 R2 to BizTalk Server 2010, and is now telling customers that the product's evolution warrants a major version number rather than a minor update. BizTalk Server 2009 R2 was designed mainly to add support for the company's latest technologies, including Windows Server 2008 R2, SQL Server 2008 R2 and Visual Studio 2010. Following is a list of key capabilities added in the release:

    1. Enhanced trading partner management that will enable customers to manage complex B2B relationships with ease
    2. Increased productivity through an enhanced BizTalk Mapper; these enhancements are critical to productivity in both EAI and B2B solutions, and a favorite feature among customers
    3. Secure data transfer across business partners with the FTPS adapter
    4. Updated adapters for SAP 7, Oracle eBusiness Suite 12.1, SharePoint 2010 and SQL Server 2008 R2
    5. Improved and simplified management with an updated System Center management pack
    6. Simplified management through a single dashboard that enables IT pros to back up and restore BizTalk configuration
    7. Enhanced performance tuning capabilities at the Host and Host Instance level
    8. Continued innovation in the RFID space, with out-of-the-box event filtering and delivery of RFID events


  • Inevitable Corporate Bureaucracy

    - by Ahsan Alam
    Top executives of most smaller organizations want their companies to be different from the larger corporations. They want their organizations to be smaller in size but bigger in productivity, eliminating red tape and corporate bureaucracy. When the company is smaller, people often work like firefighters, taking on new business and technology challenges without thinking about procedures and guidelines. People also tend to wear many hats to accomplish tasks quickly and integrate new business. For example, software developers in smaller organizations may take on client interactions, requirements gathering, design and development, code deployment, production support, network infrastructure support, and database design and maintenance, along with countless other duties. In addition, systems in smaller organizations tend to be loosely guarded, so people often don't follow many procedures when setting up environments and implementing technical projects. It's not uncommon to change code and deploy without anyone realizing it. Similarly, business requirements may get defined informally, without any documentation.

    As the company grows, everything starts to change, significantly impacting people and the overall business process. Suddenly, following procedures becomes extremely important. Consequently, new roles, guidelines and procedures start to emerge. Everything from business process to technology implementation becomes more and more process oriented. Organizations start to define and document steps, invent procedures to track process- and system-level changes, and restrict access to various systems for security reasons. At the same time, as a growing company starts doing business with larger clienteles, it is automatically forced to abide by all sorts of industry compliance laws. Moreover, growing companies tend to recruit experienced individuals to fill new roles, and these recruits usually bring their expertise from larger and more bureaucratic organizations.

    Despite the best efforts of the top executives, it seems that the increased number of procedures and guidelines, along with the new recruits, automatically contributes to the evolution of corporate bureaucracy. Maybe corporate bureaucracy is an inevitable side effect of a growing organization.


  • links for 2011-02-22

    - by Bob Rhubart
    Eleven BI trends for 2011 | ITWeb Business Intelligence (tags: ping.fm)

    The Buttso Blathers: WebLogic Schema Files
    Buttso shares a link. (tags: oracle weblogic)

    Cloud Computing & Enterprise Architecture | Open Group Blog
    "On the first look, it may seem like Enterprise Architecture is irrelevant in a company if your complete IT is running on Cloud Computing, SaaS and outsourcing/offshoring. I was of the same opinion last year. However, it is not the case. In fact, the complexity is going to get multiplied." (tags: opengroup cloud enterprisearchitecture)

    James Taylor: Change Logging Level for SOA 11g
    James says: "I'm sure there are many blogs out there that have this solution. But I seem to get asked this question a lot, so I thought I would post it here for my convenience." (tags: oracle middleware soa)

    David Linthicum: The Truth behind Standards, SOA, and Cloud Computing
    "Most of the standards we've worked on in the world of SOA over the past several years are applicable to the world of cloud computing. Cloud computing is simply a change in platform, and the existing architectural standards we leverage should transfer nicely to the cloud computing space." - David Linthicum (tags: enterprisearchitecture soa cloud)

    C. Martin Harris, MD: HIMSS11 Update from the Chairman
    "We cannot allow ourselves to focus exclusively on near term goals. Our real goal is a technology-driven transformation of healthcare that will never stop. A true transformation is a process of lessons learned and applied, that continually open broad new horizons of opportunity." - C. Martin Harris, MD (tags: enterprisearchitecture modernization)


  • ArchBeat Link-o-Rama for 11/22/2011

    - by Bob Rhubart
    A Brief Introduction on Migrating to an Oracle-based Cloud Environment | Tom Laszewski
    "Before you can start migrating to the cloud, you must define what the cloud means to you," says Tom Laszewski. "The cloud is not a specific software or hardware product, contrary to what many technology vendors would have you believe."

    Custom Exception Registration for ADF BC EO Attribute | Andrejus Baranovskis
    "Sometimes customers prefer to implement business logic validation completely in Java, without using ADF BC declarative/Groovy validation rules," says Oracle ACE Director Andrejus Baranovskis. "That's fine; we can code business logic validation in ADF and implement different custom validation methods on the VO/EO level."

    Oracle Exadata Virtual Conference - Jan 20 2012
    The Exadata SIG, along with IOUG, is organizing the first Exadata Virtual Conference, to be held on January 20, 2012. Proposals for presentations are now being accepted.

    Smooth Sailing or Rough Waters: Navigating Policy Administration Modernization | Helen Pitts
    "It's no surprise that fueling growth, both now and in the future, continues to be a key driver for modernization," says Helen Pitts. "Why? Inflexible, hard-coded, legacy systems require customization by IT every time a change is required."

    Architects putting on the Ritz; Info integration book learning; Platform for SAS Grid Computing
    This week on the Architect Home Page on OTN.

    Webcast: Introducing Oracle WebLogic Server 12c: Developer Deep Dive - Dec 1 - 11am PT / 2pm ET
    Learn how Oracle WebLogic Server 12c enables rapid development of modern, lightweight Java EE 6 applications. Discover how you can leverage the latest development technologies, tools and standards when deploying to Oracle WebLogic Server across both conventional and cloud environments.

    Architecture all day: Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14
    When: December 14, 2011. Where: The Ritz-Carlton, Phoenix, 2401 East Camelback Road, Phoenix, AZ 85016. Registration is free, but seating is limited.


  • Solaris: What comes next?

    - by alanc
    As you probably know by now, a few months ago we released Solaris 11 after years of development. That of course means we now need to figure out what comes next: if Solaris 11 is "The First Cloud OS", then what do we need future releases of Solaris to be, so that they are modern and competitive when they're released? So we've been having planning and brainstorming meetings, and I've captured some notes here from just one of those, held a couple weeks ago with a number of the Silicon Valley based engineers.

    Now, before someone sees an idea here and calls their product rep wanting to know what's up, please be warned: what follows are rough ideas, and as I'll discuss later, none of them have any commitment, schedule, working code, or even plan for integration in any possible future product at this time. (Please don't make me force you to read the full Oracle future product disclaimer here; you should know it by heart already from the front of every Oracle product slide deck.)

    To start with, we did some background research, looking at ideas from other Oracle groups and competitive OSes. We examined what was hot in the technology arena and where the interesting startups were heading. We then looked at Solaris to see where we could apply those ideas.

    Making Network Admins into Socially Networking Admins

    We all know an admin who has grumbled about being the only one stuck late at work to fix a problem on the server, or having to work the weekend alone to do scheduled maintenance. But admins are humans (at least most are), and crave companionship and community with their fellow humans. And even when they're alone in the server room, they're never far from a network connection, allowing access to the wide world of wonders on the Internet. Our solution here is not building a new social network; there are enough of those already, and Oracle even has its own Oracle Mix social network. What we proposed is integrating Solaris features to help engage our system admins with these social networks, building community and bringing them recognition in the workplace, using achievement recognition systems as found in many popular gaming platforms. For instance, if you had a Facebook account, and a group of admin friends there, you could register it with our Social Network Utility For Facebook, and then your friends might see:

    Alan earned the achievement Critically Patched (April 2012) for patching all his servers. Matt is only at 50% - encourage him to complete this achievement today!

    To avoid any undue risk of advertising who has unpatched servers that are easier targets for hackers to break into, this information would be tightly protected via Facebook's world-renowned privacy settings to avoid it falling into the wrong hands.

    A related form of gamification we considered was replacing simple certifications with role-playing-game-style Experience Levels. Instead of just knowing an admin passed a test establishing a given level of competency, these would provide recruiters with a more detailed picture of how much real-world experience an admin has. Achievements such as the one above would feed into it, but larger numbers of experience points would be gained by tougher or more critical tasks, such as recovering a down system or migrating a service to a new platform. (As long as it was an Oracle platform, of course; migrating to an HP or IBM platform would cause the admin to lose points with us.) Unfortunately, we couldn't figure out a good way to prevent (if you will) "gaming" the system.
    For instance, a disgruntled admin might decide to start ignoring warnings from FMA that a part is beginning to fail, or skip preventative maintenance, in the hopes that they'd cause a catastrophic failure to earn more points, bolstering their resume as they look for a job elsewhere and not worrying about the effect on your business of a mission critical server going down.

    More Z's for ZFS

    Our suggested new feature for ZFS was inspired by the world's most successful Z-startup of all time: Zynga. Using the Social Network Utility For Facebook described above, we'd tie it in with ZFS monitoring to help you out when you find yourself in a jam, needing more disk space than you have and unable to wait a month to get a purchase order through channels to buy more. Instead, with the click of a button you could post to your group:

    Alan can't find any space in his server farm! Can you help?

    Friends could loan you some space on their connected servers for a few weeks, knowing that you'd return the favor when needed. ZFS would create a new filesystem for your use on their system, and securely share it with your system using Kerberized NFS. If none of your friends have space, then you could buy temporary use space in small increments at affordable rates right there in Facebook, using your Facebook credits, and then file an expense report later, after the urgent need has passed.

    Universal Single Sign On

    One thing all the engineers agreed on was that we still had far too many "single" sign ons to deal with in our daily work. On the web, every web site used to have its own password database, forcing us to hope we could remember what login name was still available on each site when we signed up, and which unique password we came up with to avoid having to disclose our other passwords to a new site. In recent years, the web services world has finally been reducing the number of logins we have to manage, with many services allowing you to log in using your identity from Google, Twitter or Facebook. So we proposed following their lead, introducing PAM modules for web services: no more would you have to type in whatever login name IT assigned and try to remember the password you chose the last time password aging forced you to change it; you'd simply choose which web service you wanted to authenticate against, and would log in to your Solaris account upon receipt of a cookie from their identity service.

    Pinning notes to the cloud

    We also noted that we all have our own pile of notes we keep in our daily work: in text files in our home directory, in notebooks we carry around, on white boards in offices and common areas, on sticky notes on our monitors, or on scraps of paper pinned to our bulletin boards. The contents of the notes vary; some are things just for us, some are useful for our groups, some we would share with the world. For instance, when our group moved to a new building a couple years ago, we had a white board in the hallway listing all the NIS & DNS servers, subnets, and other network configuration information we needed to set up our Solaris machines after the move. Similarly, as Solaris 11 was finishing and we were all learning the new network configuration commands, we shared notes in wikis and e-mails with our fellow engineers. Users may also remember that one of the popular features of Sun's old BigAdmin site was a section for sharing scripts and tips such as these. Meanwhile, the online "pin board" at Pinterest is taking the web by storm. So we thought, why not mash those up to solve this problem?
    We proposed a new BigAddPin site where users could "pin" notes, command snippets, configuration information, and so on. For instance, once they had worked out the ideal Automated Installation manifest for their app server, they could pin it up to share with the rest of their group, or choose to make it public as an example for the world. Localized data, such as our group's notes on the servers for our subnet, could be shared only with users connecting from that subnet. And notes that they didn't want others to see at all could be marked private, such as the list of phone numbers to call for late night pizza delivery to the machine room, the birthdays and anniversaries they can never remember but would be sleeping on the couch if they forgot, or the list of automatically generated, completely random, impossible to remember root passwords to all their servers. For greater integration with Solaris, we'd put support right into the command shells: redirect output to a pinned note, set your path to include pinned notes as scripts you can run, or bring up your recent shell history and pin a set of commands to save for the next time you need to remember how to do that operation.

    Location service for Solaris servers

    A longer term plan would involve convincing the hardware design groups to put GPS locators with wireless transmitters in future server designs. This would help both admins and service personnel trying to find servers in today's massive data centers, and could feed into location presence apps to help show potential customers that while they may not see many Solaris machines on the desktop any more, they are all around. For instance, while walking down Wall Street it might show "There are over 2000 Solaris computers in this block." [Note: this proposal was made before the recent media coverage of a location service aggregator app with less noble intentions, and in hindsight, we failed to consider what happens when such data similarly falls into the wrong hands. We certainly wouldn't want our app to be misinterpreted as "There are over $20 million dollars of SPARC servers in this building, waiting for you to steal them," so it's probably best it was rejected.]

    Harnessing the power of the GPU for Security

    Most modern OSes make use of the widespread availability of high powered GPU hardware in today's computers, with desktop environments requiring 3-D graphics acceleration, whether in Ubuntu Unity, GNOME Shell on Fedora, or Aero Glass on Windows, but we haven't yet made Solaris take full advantage of this, beyond our basic offering of Compiz on the desktop. Meanwhile, more businesses are interested in increasing security by using biometric authentication, but must also comply with laws in many countries preventing discrimination against employees with physical limitations such as missing eyes or fingers, not to mention the lost productivity when employees can't log in due to tinted contacts throwing off a retina scan or a paper cut changing their fingerprint appearance until it heals. Fortunately, the two groups considering these problems put their heads together and found a common solution, using 3D technology to enable authentication using the one body part all users are guaranteed to have: pam_phrenology.so, a new PAM module that uses an array of USB-attached web cams (or just one, if the user is willing to spin their chair during login) to take pictures of the user's head from all angles, create a 3D model and compare it to the one in the authentication database.
    While Mythbusters has shown how easy it can be to fool common fingerprint scanners, we have not yet seen any evidence that people can impersonate the shape of another user's cranium, no matter how long they spend beating their head against the wall to reshape it. This could possibly be extended to group users, using modern versions of some of the older phrenological studies, such as giving all users with long grey beards access to the System Architect role, or automatically placing users with pointy spikes in their hair into an easy use mode. Unfortunately, there are still some unsolved technical challenges we haven't figured out how to overcome. Currently, a visit to the hair salon causes your existing authentication to expire, and some users have found that shaving their heads is the only way to avoid bad hair days becoming bad login days.

    Reaction to these ideas

    After gathering all our notes on these ideas from the engineering brainstorming meeting, we took them in to present to our management. Unfortunately, most of their reaction cannot be printed here, and they chose not to accept any of these ideas as they were, but they did have some feedback for us to consider as they sent us back to the drawing board. They strongly suggested our ideas would be better presented if we weren't trying to decipher ink blotches that had been smeared by the condensation when we put our pint glasses on the napkins we were taking notes on, and to that end let us know they would not be approving any more engineering offsites in Irish themed pubs on the Friday of a Saint Patrick's Day weekend. (Hopefully they mean that situation specifically, and aren't going to deny the funding for travel to this year's X.Org Developer's Conference just because it happens to be in Bavaria and ends on the Friday of the weekend Oktoberfest starts.) They recommended our research techniques could be improved over just sitting around reading blogs and checking our Facebook, Twitter, and Pinterest accounts, such as considering input from alternate viewpoints on topics such as gamification. They also mentioned that Oracle hadn't fully adopted some of Sun's common practices, and we might have to try harder to get those accepted now that we are one unified company.

    So, as I said at the beginning, don't pester your sales rep just yet for any of these, since they didn't get approved, but if you have better ideas, pass them on and maybe they'll get into our next batch of planning.


  • Events and objects being skipped in GameMaker

    - by skeletalmonkey
    Update: It turns out it's not an issue with this code (or at least not entirely). Somehow the objects I use for keylogging and player automation (basic AI that plays the game) are being 'skipped' or not loaded about half the time. These are invisible objects in a room that have basic effects, such as simulating button presses, or logging them. I don't know how to better explain this problem without putting up all my code, so unless someone has heard of this issue, I guess I'll be banging my head against the desk for a bit. /Update

    I've been continuing work on modifying Spelunky, but I've run into a pretty major issue with GameMaker, which I hope is just me doing something wrong. I have the code below, which is supposed to write log files named sequentially. It's placed in an End Room event, so that when a player finishes a level it writes all their keypresses to file. The problem is that it randomly skips files, and when it reaches about 30 logs it stops creating any new files.

        var file_name;
        file_count = 4;
        file_name = file_find_first("logs/*.txt", 0);
        while (file_name != "") {
            file_count += 1;
            file_name = file_find_next();
        }
        file_find_close();
        file = file_text_open_write("logs/log" + string(file_count) + ".txt");
        for (i = 0; i < ds_list_size(keyCodes); i += 1) {
            file_text_write_string(file, string(ds_list_find_value(keyCodes, i)));
            file_text_write_string(file, " ");
            file_text_write_string(file, string(ds_list_find_value(keyTimes, i)));
            file_text_writeln(file);
        }
        file_text_close(file);

    My best guess is that the first counting loop is taking too long and the whole thing is getting dropped? Also, if anyone can tell me a better way to have sequentially numbered log files, that would also be great. Log files have to continue counting over multiple starts/stops of the game.
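    One way to sidestep the directory scan entirely is to persist the next log index in a small counter file. Here is a minimal sketch of that idea in Python, purely for illustration (file names are examples; GML's ini_open/ini_read_real/ini_write_real functions support the same pattern natively):

        import os

        def next_log_index(counter_path="logs/counter.txt"):
            # read the last index written, defaulting to 0 on first run
            index = 0
            if os.path.exists(counter_path):
                with open(counter_path) as f:
                    index = int(f.read().strip() or "0")
            index += 1
            # write the new value back immediately, so the count
            # survives game restarts and never reuses a number
            with open(counter_path, "w") as f:
                f.write(str(index))
            return index

        log_path = "logs/log%d.txt" % next_log_index()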


  • DNS and VPN issues

    - by Lewis
    I recently purchased a year contract for a KVM 512MB VPS running Ubuntu 11.04. I'm having some issues setting things up on it, though; two in particular that I just can't for the life of me figure out.

    First, I'm trying to set up pptpd as my VPN for my iPhone and my Mac when I'm out on wireless networks. I'm able to log in, and CHAP authenticates, but that's as far as I get: no domains will resolve, and pages end up loading forever. I uncommented the ms-dns lines, as someone had recommended to me, and changed the DNS servers to Google's public ones, with no luck. Is there something I'm missing? (It's probably staring me in the face.)

    My second issue is that I have managed to set up LAMP but am having a problem with my domain. I have pointed the DNS at 123-reg to my VPS's IP, and the 'www.' address resolves properly, but when I try to go to the domain without the 'www.' I get the Apache landing page ("The web server software is running but no content has been added, yet."). I'm pretty sure there's something I've got to configure in Apache for the virtual host, but I'm missing it.

    Apart from these minor setbacks, I'm enjoying the low-level configuration options of having a VPS and love managing my own server. Thanks!
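    For what it's worth, the two usual fixes look roughly like the following sketches; both assume default file locations on Ubuntu, and the DNS values and paths are examples, not taken from the question. For pptpd, the ms-dns lines belong in /etc/ppp/pptpd-options, and pptpd must be restarted afterwards:

        # /etc/ppp/pptpd-options
        ms-dns 8.8.8.8
        ms-dns 8.8.4.4

    For the bare domain, the registrar needs an A record for it (not just for www), and the Apache virtual host should answer both names:

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot /var/www/example
        </VirtualHost>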


  • Useful certifications for a young programmer

    - by Alain
    As @Paddyslacker elegantly stated in "Are certifications worth it?", the main purpose of certifications is to make money for the certifying body. I am a fairly young developer with only an undergraduate degree, and my job is (graciously) offering to sponsor some professional development of my choice (provided it can be argued that it will contribute to the quality of the work I do for them). A search online offers a slew of (mostly worthless) certifications one can attain. I'm wondering if there are any that are actually recognized in the (North American) industry as an asset. My local university promoted CIPS (I.S.P., ITCP) at the time I was graduating, but for all I can tell it's just the one that happened to get its foot in the door. It's certainly money-grubbing, with a $205-a-year fee. So are there any such certifications that provide useful credentials? To better define 'useful': would it benefit full-time developers, or is it only worthwhile to the self-employed? Would any certifications lead to my being considered for higher wages, or can that only be achieved with more experience and a higher-level degree?


  • symlink for dbus headers

    - by DarenW
    Source code for something that won't compile has the line #include <dbus/dbus.h>, but in real life that header file is in /usr/include/dbus-1.0/. A similar situation exists for the dbus-c++ package. Why doesn't Ubuntu provide a symlink /usr/include/dbus pointing to the dbus-1.0 directory? Is this a bug in the dbus package? If it's intended, what is the purpose? Is it a proper fix to add a symlink myself? (Changing the source is not practical; there are many files, and they need to match what other people have.)

    Update: OK, I totally misunderstood the situation, though it still comes down to a problem I think should be solved by a symlink. The dbus directory referred to in the #include statement is a deeper-level directory under /usr/include/dbus-1.0/. The real problem is that the file dbus-arch-deps.h appears to be missing, but is actually stored in the weird location /usr/lib/x86_64-linux-gnu/dbus-1.0/include/dbus/. So now: why doesn't Ubuntu provide a symlink to this in /usr/include/dbus-1.0/dbus, or actually store it there?
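    For what it's worth, the usual answer is to ask pkg-config for the include paths instead of adding symlinks, since the arch-specific header deliberately lives under the per-architecture lib directory. A minimal sketch (the output shown is typical for 64-bit Ubuntu and may differ on other systems; the source file name is an example):

        $ pkg-config --cflags dbus-1
        -I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include
        $ gcc $(pkg-config --cflags dbus-1) -c myprogram.c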


  • Huge google impression drop after cleaning html

    - by olgatorresfoundation
    Good morning, I am the webmaster of a non-profit organization that donates grants to colorectal cancer research projects and funds various colorectal cancer information campaigns. We have three domains:

    www.fundacioolgatorres dot org (Catalan)
    www.fundacionolgatorres dot org (Spanish)
    www.olgatorresfoundation dot org (English)

    So what happened? I redesigned olgatorresfoundation on the 20th and fundacionolgatorres on the 30th of May. In both cases, exactly two days later, the number of impressions dropped to a halt. Granted, we did not have the traffic of Microsoft, but a 90% decrease is a disaster of incredible proportions for us. My only real changes were cleaning up the old, ineffective HTML into a cleaner form (mostly moving away from redundant table construction to a table-less view). Here is a before-and-after snapshot of what the change looks like:

    Before: http://www.fundacioolgatorres.org/aparell_digestiu/introduccio/ (unchanged page, in Catalan)
    After: http://www.olgatorresfoundation.org/digestive_system/introduction/ (changed page, in English)

    Does anybody have a clue about what just happened? Why should a normal, sane HTML improvement be punished, and so dramatically? No URLs have been changed, and neither have page names or descriptions.

    Possible secondary question: if Google sees it as a major overhaul and decides to drop the PageRank sharply, does it come back to pre-change levels if the content "checks out", or will the page start over from scratch earning those PageRank points (which would mean that we would have to wait six months for the pages to recover to the level they had two weeks ago)?

    (duplicated from productforums.google dot com/forum/#!category-topic/webmasters/crawling-indexing--ranking/YsnyX0JzOpY, hoping to reach a wider audience)


  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing. It greatly simplifies the deployment of an optimized Hadoop cluster, whether that cluster is used for batch or real-time processing. The vast majority of BDA customers are integrating the appliance with their Oracle Databases, and they have certain expectations, especially around security. Oracle Database customers have benefited from a rich set of security features: encryption, redaction, data masking, database firewall, label based access control, and much, much more. They want similar capabilities for their Hadoop cluster. Unfortunately, Hadoop wasn't developed with security in mind. By default, a Hadoop cluster is insecure, the antithesis of an Oracle Database. Some critical security features have been implemented, but even those capabilities are arduous to set up and configure. Oracle believes that a key element of an optimized appliance is that its data should be secure. Therefore, by default the BDA delivers the "AAA of security": authentication, authorization and auditing.

    Security Starts at Authentication

    A successful security strategy is predicated on strong authentication, for both users and software services. Consider the default configuration for a newly installed Oracle Database; it's been a long time since you had a legitimate chance at accessing the database using the credentials "system/manager" or "scott/tiger". The default Oracle Database policy is to lock accounts, thereby restricting access; administrators must consciously grant access to users.

    Default Authentication in Hadoop

    By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system. Consider the following scenario, which illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user. In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt. When logged in as the hr user, you can see the following files. Notice that we're using the Hadoop command line utilities for accessing the data:

        $ hadoop fs -ls /user/hrdata
        Found 1 items
        -rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt
        $ hadoop fs -cat /user/hrdata/salaries.txt
        Tom Brady,11000000
        Tom Hanks,5000000
        Bob Smith,250000
        Oprah,300000000

    User DrEvil has access to the cluster, and can see that there is an interesting folder called "hrdata":

        $ hadoop fs -ls /user
        Found 1 items
        drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata

    However, DrEvil cannot view the contents of the folder due to lack of access privileges:

        $ hadoop fs -ls /user/hrdata
        ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------

    Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder's ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster:

        $ sudo useradd hr
        $ sudo passwd
        $ su hr
        $ hadoop fs -cat /user/hrdata/salaries.txt
        Tom Brady,11000000
        Tom Hanks,5000000
        Bob Smith,250000
        Oprah,300000000

    Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised.
    Clearly, the default security policy is inappropriate and dangerous to the many organizations storing critical data in HDFS.

    Big Data Appliance Provides Secure Authentication

    The BDA provides secure authentication to the Hadoop cluster by default, preventing the type of masquerading described above. It accomplishes this through Kerberos integration (Figure 1: Kerberos Integration). The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service, allowing it to access the BDA's NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC, or automatically install a highly available KDC on the BDA itself. Kerberos then provides strong authentication not just for the end user, but also for the important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be, and that rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers, allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment.

    Authorize Access to Sensitive Data

    Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity, a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS authorizes access to files using ACLs, with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level, utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server.

    For SQL-based data stores, like Hive and Impala, finer-grained access control is required. Access to databases, tables, columns, etc. must be controlled, and you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine-grained access control; both Cloudera and Oracle are the project's founding members. Sentry satisfies the following three authorization requirements:

    Secure Authorization: the ability to control access to data and/or privileges on data for authenticated users.
    Fine-Grained Authorization: the ability to give users access to a subset of the data (e.g. a column) in a database.
    Role-Based Authorization: the ability to create/apply template-based privileges based on functional roles.

    With Sentry, "all", "select" or "insert" privileges are granted on an object. The descendants of that object automatically inherit that privilege (Figure 2: Object Hierarchy; a privilege granted on the database object is inherited by its tables and views). A collection of privileges across many objects may be aggregated into a role, and users/groups are then assigned that role. This leads to simplified administration of security across the system. Sentry is currently used by both Hive and Impala, but it is a framework that other data sources can leverage when offering fine-grained authorization; a sketch of the grant syntax follows.
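    As an illustration, role-based grants in Sentry are expressed through the SQL layer. The following Hive-style statements are a minimal sketch only; the database, role and group names are invented for the example, and the exact syntax may vary by release:

        CREATE ROLE analyst;
        GRANT SELECT ON DATABASE hr TO ROLE analyst;   -- inherited by hr's tables and views
        GRANT ROLE analyst TO GROUP analysts;          -- group membership comes from Linux/LDAP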
    One can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future, for example.

    Audit Hadoop Cluster Activity

    Auditing is a critical component of a secure system, and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall, tracking different types of activity taking place on the cluster (Figure 3: Monitored Hadoop services). At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that the file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include:

    MapReduce: correlate the MapReduce job that accessed the file.
    Oozie: describe who ran what as part of a workflow.
    Hive: capture the changes that were made to the Hive metadata.

    The audit data is captured in the Audit Vault Server, which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to the activity from the BDA (Figure 4: Consolidated audit data across the enterprise). Once the data is in the Audit Vault Server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger on violations of audit policies.

    Conclusion

    Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation, ensuring that users are only allowed to view authorized data and that data access is audited in a consolidated framework.


  • Log php errors in ubuntu

    - by resting
    I followed the setup here: "Where is the PHP error log". When I look into /var/log/php_errors.log, I can see some PHP errors:

        PHP Warning: file_get_contents(/var/www/...): failed to open stream: No such file or directory in ...

    But what I'm trying to see is the error produced when I remove a semicolon from a statement. The error above has no relation to the file from which I removed the semicolon, so we can just ignore it. When I access the page with the removed semicolon, I get:

        The website encountered an error while retrieving https://myapp/download/decode/testfile. It may be down for maintenance or configured incorrectly. HTTP Error 500 (Internal Server Error): An unexpected condition was encountered while the server was attempting to fulfill the request.

    But there are no logs in /var/log/php_errors.log. How do I see the error that usually says which line and which file the process failed on? The real reason for trying to see the error is that I have a very large loop that throws the HTTP 500 error, and I can't see the exact error; I'm just simulating it with a removed semicolon to test things out. Other settings:

        error_reporting = E_ALL & ~E_DEPRECATED
        display_errors = On

    On Ubuntu 10.04.4 LTS.

    Update: OK, I managed to get the error message to display:

        Parse error: syntax error, unexpected T_IF in ...

    However, it's still not logged. It wasn't displaying previously because CakePHP's debug level was at 0. Setting it to 2 displays the message, but there are still no logs.
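    A minimal sketch of the php.ini directives that control this (the values are examples; the log file must be writable by the web server user, and Apache or PHP-FPM must be reloaded afterwards):

        error_reporting = E_ALL
        display_errors = On
        log_errors = On
        error_log = /var/log/php_errors.log

    Note that a parse error is raised before any of the file's own code runs, so a framework's error handler never sees it; whether it reaches the log is decided purely by the log_errors and error_log settings above.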


  • MMO Data Persistence Question

    - by JasonG
    I wanted to ask a question regarding data persistence strategies for an MMO. I have some experience in the games industry with social synchronous games. At Zynga, we stored static proto data in XML on both the client and the server, and stored instance/runtime data in membase. For clarity's sake, proto data for a Potion would be PotionName or MaxCharges, while runtime/instance data would be something like ChargesRemaining. So basically, if a player picks up a potion, the instance is (via prediction) created from XML data on the client, the request gets sent to the server where the instance is created from XML, and the instance is then added to membase. Is the same strategy what would be used for something like an MMO? Would it be feasible to have static proto data in some kind of in-memory NoSQL database on both client and server, with instance data being stored on the server in a more enterprise-level database? Or should all data (proto/instance) be stored on the server, with the client getting everything from the server? I know a lot of this might depend on specific game requirements; however, I'm basically looking for some general opinions/best practices here, if there are any.
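    For reference, the proto/instance split described in the question can be sketched in a few lines; this is only an illustration of the idea, with invented names:

        # static proto data: shipped to both client and server, read-only at runtime
        PROTO = {
            "potion": {"name": "Potion", "max_charges": 3},
        }

        # runtime/instance data: created on pickup, persisted only on the server
        class ItemInstance:
            def __init__(self, proto_id):
                self.proto = PROTO[proto_id]
                self.charges_remaining = self.proto["max_charges"]

        potion = ItemInstance("potion")
        potion.charges_remaining -= 1   # instance state changes; proto data never does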


  • JRockit R28 "Ropsten" released

    - by tomas.nilsson
    R28 is a major release (as indicated by the careless omission of the "minor" and "revision" numbers; the formal name would be R28.0.0). Our customers expect grand new features and innovation from major releases, and "Ropsten" will not disappoint.

    One of the biggest challenges for IT systems is after-the-fact diagnostics: that is, once something has gone wrong, the act of trying to figure out why it went wrong. Monitoring a system and keeping track of system health once it is running is considered a hard problem (one that we to some extent already help our customers solve with JRockit Mission Control), but doing it after something has occurred is close to impossible. The most common solution is to set up heavy logging (sacrificing system performance to do the logging) and hope that the problem occurs again. No one really thinks that this is a good solution, but it's the best there is. Until now.

    Inspired by the "black box" in airplanes, JRockit R28 introduces the Flight Recorder. Flight Recorder can be seen as an extremely detailed log, but one that is always on and that comes without a cost to system performance. With JRockit Flight Recorder, the customer will be able to get diagnostics information about what happened _before_ a problem occurred, instead of trying to guess by looking at the fallout. Keywords that are important to the customer are:

    - Extremely detailed, always-on diagnostics information
    - No performance overhead
    - Powerful tooling to visualize the recorded data
    - Diagnostics of bugs and SLA breaches after the fact

    For followers of JRockit, other additions are:

    - New JMX agent that allows JRMC to be used through firewalls more easily
    - Option to generate HPROF dumps, compatible with tools like Eclipse MAT
    - Up to 64 GB compressed references (previously 4)
    - View memory allocation on a thread level (as an MBean and in Mission Control)
    - Native memory tracking (command line and MBean)
    - More robust optimizer
    - Dropped support for Java 1.4.2 and Itanium

    If you have any further questions, please email [email protected]. The release can be downloaded from http://www.oracle.com/technology/software/products/jrockit/index.html


  • What is the value in hiding the details through abstractions? Isn't there value in transparency?

    - by user606723
    Background

    I am not a big fan of abstraction. I will admit that one can benefit from the adaptability, portability and re-usability of interfaces, etc. There is real benefit there, and I don't wish to question that, so let's ignore it. There is the other major "benefit" of abstraction, which is to hide implementation logic and details from users of the abstraction. The argument is that you don't need to know the details, and that one should concentrate on their own logic at that point. It makes sense in theory. However, whenever I've been maintaining large enterprise applications, I have always needed to know more details. It becomes a huge hassle digging deeper and deeper into the abstraction at every turn just to find out exactly what something does; i.e. having to do "open declaration" about 12 times before finding the stored procedure used. This 'hide the details' mentality seems to just get in the way. I'm always wishing for more transparent interfaces and less abstraction. I can read high-level source code and know what it does, but I'll never know how it does it, when how it does it is what I really need to know. What's going on here? Has every system I've ever worked on just been badly designed (from this perspective, at least)?

    My philosophy

    When I develop software, I feel like I try to follow a philosophy closely related to the ArchLinux philosophy: "Arch Linux retains the inherent complexities of a GNU/Linux system, while keeping them well organized and transparent. Arch Linux developers and users believe that trying to hide the complexities of a system actually results in an even more complex system, and is therefore to be avoided." And therefore, I never try to hide the complexity of my software behind abstraction layers. I try to abuse abstraction, not become a slave to it.

    Question at heart

    Is there real value in hiding the details? Aren't we sacrificing transparency? Isn't this transparency valuable?


  • How to read Scala code with lots of implicits?

    - by Petr Pudlák
    Consider the following code fragment (adapted from http://stackoverflow.com/a/12265946/1333025):

        // Using scalaz 6
        import scalaz._, Scalaz._

        object Example extends App {
          case class Container(i: Int)

          def compute(s: String): State[Container, Int] =
            state { case Container(i) => (Container(i + 1), s.toInt + i) }

          val d = List("1", "2", "3")
          type ContainerState[X] = State[Container, X]

          println( d.traverse[ContainerState, Int](compute) ! Container(0) )
        }

    I understand what it does at a high level. But I wanted to trace what exactly happens during the call to d.traverse at the end. Clearly, List doesn't have traverse, so it must be implicitly converted to another type that does. Even though I spent a considerable amount of time trying to find out, I wasn't very successful. First I found that there is a method in scalaz.Traversable:

        traverse[F[_], A, B](f: (A) => F[B], t: T[A])(implicit arg0: Applicative[F]): F[T[B]]

    but clearly this is not it (although it's most likely that "my" traverse is implemented using this one). After a lot of searching, I grepped the scalaz source code and found scalaz.MA's method:

        traverse[F[_], B](f: (A) => F[B])(implicit a: Applicative[F], t: Traverse[M]): F[M[B]]

    which seems to be very close. Still, I'm missing what List is converted to in my example, and whether it uses MA.traverse or something else. The question is: what procedure should I follow to find out what exactly is called at d.traverse? Having even such simple code be so hard to analyze seems to me like a big problem. Am I missing something very simple? How should I proceed when I want to understand code that uses a lot of imported implicits? Is there some way to ask the compiler what implicits it used? Or is there something like Hoogle for Scala, so that I can search for a method just by its name?
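    One concrete way to get the compiler's answer (assuming a standard scalac installation; the file name is an example) is to print the program after the type checker has run, at which point every implicit conversion and implicit argument appears explicitly in the tree:

        scalac -Xprint:typer Example.scala

    In the printed output, d.traverse shows up wrapped in the implicit view that was applied to the List, together with the implicit instances that were supplied, which answers the "what exactly is called" question for any such expression.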


  • Should programmers itemize testing for projects? [on hold]

    - by Patton77
    I recently hired a programming team to do a port of my iPad app to the iPhone and Android platforms. Now, in a separate contract, I am asking them to implement a bunch of tips on how to play the app, similar to what you would find in Candy Crush or Cut the Rope. They want to charge 12 hours @ $35/hr for "Testing all of the Tips", telling me that normally it would take them more than 25 hours but that they will 'bear the difference'. I am not familiar with this level of itemization, but maybe it's a new practice? I am used to devs doing their own quality control and then having a testing/acceptance period. They are using Cocos2d-x, and they say that the tips going to multiple platforms make all of the hours add up. I feel like they might be overcharging, and it's difficult for me to know, because it's kind of like with a mechanic: "It took us 5 hours to replace the radiator." How can you dispute that? It seems to me that most of you would charge for the work but NOT for hours that you are 'testing'. Am I missing something? Thanks for any help and advice you can give!


  • Detect two specific objects collision with bullet physics

    - by sebap123
    I have got a problem with defining collisions between objects in my game using Bullet physics. I know that objects collide with each other automatically and I don't have to do anything more for that to happen. However, I need to be notified when one particular object collides with any of the rest. That reads awkwardly, so I will describe what I want to achieve. I have got a ball which hits a wall built from tubes. Everything is on the floor. When the ball hits the wall, some fragments fall down, off toward infinity, so below the floor I have got a btStaticPlaneShape. This is the place where most of the objects come to rest, and when they do I can start another action; but not all of them end up there. I've tried the checkCollideWith function, but it isn't a good method, as was said in the reference and the wiki. So I've checked the method described in the wiki (http://bulletphysics.org/mediawiki-1.5.8/index.php/Collision_Callbacks_and_Triggers) called contact information. This isn't a good method either, because it is extremely hard to identify what is what when colliding. You also have to remember that the ball is colliding with something almost all the time: the floor, the wall or the earth level. So is there any other method to check what is colliding with what?
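    The usual pattern is still the contact query the wiki describes, but narrowed to the one pair of bodies you care about (in C++, by comparing the manifold's body pointers, or user pointers set via setUserPointer, against the pair you're watching). Purely as an illustration of the per-pair idea, here is a minimal sketch using the pybullet bindings; the scene setup is invented for the example:

        import pybullet as p

        p.connect(p.DIRECT)
        p.setGravity(0, 0, -10)

        floor_id = p.createMultiBody(0, p.createCollisionShape(p.GEOM_PLANE))
        ball_id = p.createMultiBody(1, p.createCollisionShape(p.GEOM_SPHERE, radius=0.5),
                                    basePosition=[0, 0, 2])

        for step in range(240):
            p.stepSimulation()
            # ask only for contacts between the two bodies we care about
            if p.getContactPoints(bodyA=ball_id, bodyB=floor_id):
                print("ball reached the floor on step", step)
                break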


  • Good ruby book with exercises? [closed]

    - by watabou
    I find that I learn best with a book that has a number of exercises at the end of each chapter. Great examples of this are C++ Primer Plus by Stephen Prata, Scientific Programming with Python, and the Horstmann Java books. All of those books have a number of programming exercises at the end, tailored to that specific chapter. I love the style of those books and was wondering if there is anything similar for Ruby. I've searched Google extensively for this, and people have been suggesting different things, like websites such as Ruby Koans and LRTHW, but honestly, I've tried those and they aren't for me. I taught myself Python with the Hard Way book and, to be honest, it's not for me either. Now, forgive me if I'm blunt, but does anyone have a Ruby programming BOOK (i.e. not a website) that has EXERCISES in it? I do NOT want a website, unless the book is freely available online from the author, similar to the Hard Way books. I would say that I'm an intermediate-level programmer with only some Ruby experience, but if you know of a beginner book on Ruby, that is fine too. Thanks in advance; I would really, really appreciate the help.



  • Notes for a NetBeans IDE 7.4 HTML5 Screencast

    - by Geertjan
    I'm making a screencast that intends to thoroughly introduce NetBeans IDE 7.4 as a tool for HTML, JavaScript, and CSS developers. Here's the current outline; additions and other suggestions are welcome.

    Getting Started
    - Downloading NetBeans IDE for HTML5 and PHP
    - Examining the NetBeans installation directory, especially netbeans.conf
    - Examining the NetBeans user directory
    - Command line options for starting NetBeans IDE

    Exploring NetBeans IDE
    - Menus and toolbars
    - Versioning tools
    - Options Window
      - Go through the whole Options window
      - Change look and feels
      - Adding themes
      - Syntax coloring
      - Code templates
    - Plugin Manager and Plugin Portal
      - Dark Look and Feel Themes
      - Toggle line wrap
      - Emmet
      - HTML Tidy
      - NetBeans Cheat Sheets

    Creating HTML5 projects
    - From scratch
    - From online template, e.g., Twitter Bootstrap
    - From ZIP file
    - From folder on disk
    - From sample

    Editing
    - Useful shortcuts
      - Alt-Enter: see the current hints
      - Alt-Shift-DOT/COMMA: expand selection (Ctrl instead of Alt on Mac)
      - Ctrl-Shift-Up/Down: copy up/down
      - Alt-Shift-Up/Down: move up/down
      - Alt-Insert: generate code (Lorem Ipsum)
      - View menu | Show Non-printable Characters
      - Source menu
      - Show keyboard shortcut card
    - Useful hints
      - Surround with Tag
      - Remove Surrounding Tag
    - Useful code completion
      - Link tag for CSS, show completion
      - Script tag for JavaScript, show completion
      - Create code templates in Options window
    - Useful HTML Palette items
      - Unordered List
      - Link
    - Useful code navigation
      - Navigator
      - Navigate menu
    - Useful project settings
      - Project-level deployment settings
      - CSS Preprocessors (SASS/LESS)
      - Cordova support
    - Useful window management
      - Dragging, minimizing, undocking
      - Ctrl-Shift-Enter: distraction-free mode
      - Alt-Shift-Enter: maximization

    Debugging
    - JavaScript debugger

    Deploying
    - Embedded browser
    - Responsive design
    - Inspect in NetBeans mode
    - Chrome browser with NetBeans plugin
    - Android and iOS browsers
    - Cordova makes native packages
    - On device debugging
    - On device styling

    Documentation
    - PHP and HTML5 Learning Trail: https://netbeans.org/kb/trails/php.html

    Contributing
    - Social Media: Twitter, Facebook, blogs
    - Plugin Portal

    I'm planning to complete the above screencast this week, and will continue editing this page as more useful features arise in my mind, or hopefully in the comments on this blog entry!


  • SEO for Country & Language Specific content.

    - by kecebongsoft
    I am currently creating a website which has a common topic for an article, but the content will be different for each country, and each piece of content will also be provided in several languages. This mechanism exists in most parts of the website. For example, I have an article about tax. This article has to be different for each country, for example China, and the tax content for China should be written in Chinese AND in English (for non-Chinese speakers). What is the best URL pattern to handle this? What I've been thinking of is using subfolders (/country-code/language-code/), such as:

    www.example.com/cn/cn/tax
    www.example.com/cn/en/tax

    Or using a country top-level domain, such as:

    www.example.cn/cn/tax
    www.example.cn/en/tax

    Or a subdomain, such as:

    cn.example.com/cn/tax
    cn.example.com/en/tax

    I think I would prefer not to use the last option, since I might need subdomains for other purposes, which leaves only the subfolder and country TLD options. I've read some articles saying that a country TLD is good for localized (language-specific) content, but in my case the TLD will also carry English content (for non-local speakers) which is specific to that particular country; the purpose of this is to let people from other countries easily find it through Google. Which pattern is best to pick, and why?


  • What is the kd tree intersection logic?

    - by bobobobo
    I'm trying to figure out how to implement a k-d tree, following page 322 of "Real-Time Collision Detection" by Ericson. The relevant section says:

    "The basic idea behind intersecting a ray or directed line segment with a k-d tree is straightforward. The line is intersected against the node's splitting plane, and the t value of intersection is computed. If t is within the interval of the line, 0 <= t <= tmax, the line straddles the plane and both children of the tree are recursively descended. If not, only the side containing the segment origin is recursively visited."

    So here's what I have (the original post included two figures: the 3D scene and the logical tree). The orange ray goes through the 3D scene, and the x's represent intersections with planes. From the LEFT, the ray hits: the front face of the scene's enclosing cube, the (1) splitting plane, the (2.2) splitting plane, and the right side of the scene's enclosing cube. But here's what would happen, naively following Ericson's basic description above:

    1. Test against splitting plane (1). The ray hits splitting plane (1), so both the left and right children of splitting plane (1) are included in the next test.
    2. Test against splitting plane (2.1). The ray actually hits that plane (way off to the right), so both children are included in the next level of tests. (This is counter-intuitive; shouldn't only the bottom node be included in subsequent tests?)

    Can someone describe what happens when the orange ray goes through the scene correctly?
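    The detail that summary leaves implicit is that the t interval is clipped at every recursion: after splitting at plane (1) with hit value t1, the left child is visited with [tmin, t1] and the right child with [t1, tmax], so when (2.1) is tested inside the left child, its hit "way off to the right" falls outside that child's clipped interval and only one side is descended. A minimal sketch of that traversal (node layout and leaf handling invented for illustration):

        from dataclasses import dataclass

        @dataclass
        class Node:
            is_leaf: bool = False
            axis: int = 0          # 0 = x, 1 = y, 2 = z (ignored in leaves)
            value: float = 0.0     # splitting plane position (ignored in leaves)
            left: "Node" = None    # child below the plane
            right: "Node" = None   # child above the plane
            name: str = ""         # just for printing in this sketch

        def visit(node, origin, direction, tmin, tmax):
            """Walk the tree along origin + t*direction for t in [tmin, tmax]."""
            if node.is_leaf:
                print("visiting leaf", node.name, "over t in", (tmin, tmax))
                return
            o, d = origin[node.axis], direction[node.axis]
            # 'near' is the child containing the segment origin, 'far' the other one
            below_first = o < node.value or (o == node.value and d <= 0)
            near, far = (node.left, node.right) if below_first else (node.right, node.left)
            if d == 0.0:
                visit(near, origin, direction, tmin, tmax)   # parallel: one side only
                return
            t = (node.value - o) / d                         # where the line crosses the plane
            if t >= tmax or t <= 0:
                visit(near, origin, direction, tmin, tmax)   # crossing beyond the segment,
            elif t <= tmin:                                  # or behind the ray: near only
                visit(far, origin, direction, tmin, tmax)    # already past the plane: far only
            else:
                visit(near, origin, direction, tmin, t)      # true straddle: both sides,
                visit(far, origin, direction, t, tmax)       # each with a clipped interval

        # tiny demo: one splitting plane at x = 5, a leaf on either side
        tree = Node(axis=0, value=5.0,
                    left=Node(is_leaf=True, name="L"),
                    right=Node(is_leaf=True, name="R"))
        visit(tree, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.0, 3.0)
        # prints only leaf L: the crossing at t = 5 lies beyond tmax = 3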


  • Approaches to timed puzzle elements

    - by ndg
    I'm working on a side-scrolling game that has a number of timed puzzle elements. As a simple example, I have a number of moving platforms that have been set up to transition in a pattern. Ideally, I'd like to ensure that as the player first approaches them, they are in an ideal state, whereby the player can witness the full transition and more experienced players (i.e. speedrunners) can complete the puzzle immediately, without having to wait for the current transition to complete. The issue, in a nutshell, is that because these platforms begin transitioning at the start of the level, it's impossible to predict when the player will stumble upon them.

    I've done a fair bit of Googling but haven't managed to turn up any decent resources on solving a problem like this. The obvious solution is to only begin updating the objects when the player (or, more likely, the camera) first encounters them, but this becomes difficult in more complicated situations. It seems like the easiest way of handling this is to have an invisible trigger volume that tells any puzzle elements located inside it that the player has 'arrived' upon first colliding with the player. But this would mean I'd have to logically group puzzle elements, which could become messy in a hurry. Take, for instance, a puzzle that appears at the right of the screen. It may take the player a number of seconds to reach it. It would look strange if the elements involved were to remain stationary, but by the time the player arrives, it's likely things will be 'out of sync'. I'm posting here in the hope that others know of, or have implemented, a decent solution to this problem.
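    For what it's worth, the trigger-volume idea stays tidy if each group owns a local clock that starts on activation, and the platforms derive their positions purely from that clock; the puzzle then always begins in its ideal state. A minimal sketch (class and method names are invented):

        class PuzzleGroup:
            def __init__(self, platforms, trigger_x):
                self.platforms = platforms
                self.trigger_x = trigger_x
                self.local_time = None   # None until the player/camera arrives

            def update(self, camera_right_edge, dt):
                if self.local_time is None:
                    if camera_right_edge >= self.trigger_x:
                        self.local_time = 0.0   # first sighting: start from the ideal state
                    return                      # elements stay parked until then
                self.local_time += dt
                for platform in self.platforms:
                    platform.set_phase(self.local_time)  # position is a pure function of the clock

    For elements that should appear to be already in motion when approached from a distance, the same clock can simply be started when the group enters a larger 'warm-up' volume around the puzzle, rather than at the screen edge.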


  • Duck checker in Python: does one exist?

    - by elliot42
    Python uses duck typing rather than static type checking. But many of the same concerns ultimately apply: does an object have the desired methods and attributes? Do those attributes have valid, in-range values? Whether you're writing constraints in code, writing test cases, validating user input, or just debugging, inevitably somewhere you'll need to verify that an object is still in a proper state: that it still "looks like a duck" and "quacks like a duck". In statically typed languages you can simply declare "int x", and any time you create or mutate x, it will always be a valid int. It seems feasible to decorate a Python object to ensure that it is valid under certain constraints, and that every time the object is mutated it remains valid under those constraints. Ideally there would be a simple declarative syntax to express "hasattr length and length is non-negative" (not in those words; not unlike Rails validators, but less human-language and more programming-language). You could think of this as an ad-hoc interface/type system, or you could think of it as an ever-present object-level unit test.

    Does such a library exist to declare and validate constraints/duck-checking on Python objects? Is this an unreasonable tool to want? :) (Thanks!)

    Contrived example:

        rectangle = {'length': 5, 'width': 10}

        # We live in a fictional universe where multiplication is super expensive.
        # Therefore any time we multiply, we need to cache the results.
        def area(rect):
            if 'area' in rect:
                return rect['area']
            rect['area'] = rect['length'] * rect['width']
            return rect['area']

        print area(rectangle)
        rectangle['length'] = 15
        print area(rectangle)  # compare expected vs. actual output!

        # imagine the same thing with object attributes rather than dictionary keys
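    In the spirit of the question, here is a minimal sketch of what such a checker could look like as a class decorator; the decorator name and constraint syntax are invented for illustration, and the constraints are simply re-checked on every attribute mutation:

        def duck_checked(**constraints):
            """Class decorator: assert each named predicate after every setattr."""
            def wrap(cls):
                plain_setattr = cls.__setattr__
                def checked_setattr(self, name, value):
                    plain_setattr(self, name, value)
                    for label, predicate in constraints.items():
                        assert predicate(self), "constraint violated: " + label
                cls.__setattr__ = checked_setattr
                return cls
            return wrap

        @duck_checked(has_length=lambda r: hasattr(r, 'length'),
                      length_non_negative=lambda r: r.length >= 0)
        class Rectangle(object):
            def __init__(self, length, width):
                self.length = length   # checked here, and on every later mutation
                self.width = width

        r = Rectangle(5, 10)
        r.length = 15      # fine
        r.length = -1      # raises AssertionError: length_non_negative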

