Search Results

Search found 8369 results on 335 pages for 'company'.


  • How do I get an Enter USB TV Box TV tuner aka Gadmei UTV302 to work?

    - by Subhash
    Has anyone had any success using the Enter USB TV Box from Enter Multimedia? It comes bundled with software that works in Windows, but I have had no luck using it in Ubuntu 10.10.

Update 1: Here is the output from lsusb:

    Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 003: ID 093a:2510 Pixart Imaging, Inc. Optical Mouse
    Bus 004 Device 002: ID 046d:c312 Logitech, Inc. DeLuxe 250 Keyboard
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 006: ID 1f71:3301
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

I can't find the Enter USB TV Box listed in this. In the dmesg tail output, I found something that seems to be related to the card:

    usb 1-5: new high speed USB device using ehci_hcd and address 6
    usb 1-5: config 1 interface 0 altsetting 1 bulk endpoint 0x83 has invalid maxpacket 256

Update 2: From Windows I learned that this USB TV tuner uses a chipset from Gadmei Corporation. All computer stores in India sell the Enter USB TV Box if you ask for a USB TV tuner; no other brand seems to be interested in this market.

Update 3: I learned that this TV tuner is a rebranded version of the Gadmei UTV302 (USB TV Tuner Box).

Update 4: I tried adding em28xx as the chipset for the tuner (as suggested by user BOBBO below), but that did not work, so I went back to my Pinnacle PCTV internal card. I don't think the tuner referred to by UbuntuForums (Gadmei UTV 330) and the tuner that I have (Gadmei UTV 302) are the same: my USB tuner is several times bigger, and it seems to be a newer device with a newer tuner chip. I will submit details of this device to the LinuxTV developers this weekend.

Update 5: I opened the tuner box and found that it uses a tuner from a Chinese company, Tenas. The model is TNF 8022-DFA.

Update 6: Tuner chip specs (retrieved from a supplier directory) for the Tenas TNF 8022-DFA:

    Supply voltage: true 5V device (low power dissipation)
    Control system: I2C bus control of tuning, address selection
    Tuning system: PLL controlled tuning
    Receiving system: PAL D/K, IF (Intermediate Frequency): 38MHz
    Receiving channels: full frequency range from channel DS1 (49.75MHz) to channel DS57 (863.25MHz)
    Uses the Texas Instruments SN761678 IC solution, with a small install size

Update 7: Reverse side of the circuit board (picture of the TV tuner).


  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure. These new capabilities include:

Backup Services: General Availability of Windows Azure Backup Services
Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
Active Directory: Securely manage hundreds of SaaS applications
Enterprise Management: Use Active Directory to Better Manage Windows Azure
Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support

All of these improvements are now available to use immediately. Below are more details about them.

Backup Service: General Availability Release of Windows Azure Backup

Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios.

Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location, with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but you).

Getting Started

To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New->Data Services->Recovery Services->Backup Vault to do this. Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it. Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:

Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you set up a backup schedule for your registered Windows Servers, and also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.
Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you recover data from a backup, and also explains how to use Windows PowerShell cmdlets to do the same tasks.
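For a flavor of the PowerShell route, here is a minimal sketch using the cmdlets installed by the Windows Azure Backup Agent (the MSOnlineBackup module). The path, schedule, and retention window are illustrative values only, and the sketch assumes the server is already registered with a backup vault:

    # Sketch: define what to back up, when, and for how long, then commit it.
    # Assumes the Windows Azure Backup Agent is installed and this server is
    # already registered with a backup vault (values below are illustrative).
    Import-Module MSOnlineBackup
    $policy   = New-OBPolicy
    $schedule = New-OBSchedule -DaysOfWeek Saturday, Sunday -TimesOfDay 16:00
    Set-OBSchedule -Policy $policy -Schedule $schedule
    Add-OBFileSpec -Policy $policy -FileSpec (New-OBFileSpec -FileSpec "D:\CriticalData")
    Set-OBRetentionPolicy -Policy $policy -RetentionPolicy (New-OBRetentionPolicy -RetentionDays 30)
    Set-OBPolicy -Policy $policy    # commits the policy to the registered server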
Below are some of the key benefits the Windows Azure Backup Service provides:

Simple configuration and management. The Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk or to the cloud.
Block-level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.
Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information.
Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruption that may arise due to data transfer can be easily identified and is fixed automatically.
Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

Hyper-V Recovery Manager: Now Available in Public Preview

I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business-critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement disaster recovery and restore important services accurately, consistently, and with minimal downtime.

Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks, etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted.

You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager, follow our detailed step-by-step guide.

Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
These improvements include:

Ability to Delete both VM Instances + Attached Disks in One Operation

Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM. You had to manually delete these yourself from the storage account. With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM.

We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button.

Warnings on Availability Sets with Only One Virtual Machine in Them

One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”. An “availability set” allows you to define a tier/role (e.g. web front ends, database servers, etc.) that you can map Virtual Machines into – and when you do this, Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a highly available way.

One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here.

Configuring SQL Server AlwaysOn

SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable this simply by checking the “Direct Server Return” checkbox. You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here.
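For comparison, the PowerShell approach this replaces looked roughly like the sketch below (the service, VM, and endpoint names are illustrative, not from the original post):

    # Sketch: add a Direct Server Return endpoint to an existing VM.
    # Names are hypothetical; run from an Azure PowerShell session.
    Get-AzureVM -ServiceName "sqlsvc" -Name "sqlvm1" |
        Add-AzureEndpoint -Name "SQLListener" -Protocol tcp -LocalPort 1433 `
            -PublicPort 1433 -DirectServerReturn $true |
        Update-AzureVM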
Active Directory: Application Access Enhancements

This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single sign-on (SSO) support against SaaS applications (including Office 365, Salesforce, Workday, Box, Google Apps, GitHub, etc.) as well as LOB-based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications.

Earlier this month we published an update on dates and pricing for when the service will be released in general availability form. In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the following features would be available in a free tier with it:

SSO to every SaaS app we integrate with – Users can single sign on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.
Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.
User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.
Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.
Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.

You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

Enterprise Management: Use Active Directory to Better Manage Windows Azure

Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:

1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today’s release, customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don’t worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

Below are some details on how all of this works:

Subscriptions within a Directory

As part of today’s update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you log in to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser. For example, in the screenshot below you can see that I have a “scottgu” directory that my subscriptions are hosted within.

Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to. In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a “scottgu” active directory that was created for me. By default everything will continue to work just like it used to before.

Manage your Directory

You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal. This will list all of the directories in your account. Clicking one for the first time will display a getting started page that provides documentation and links to perform common tasks with it.

You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the Configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab.

Note that users within a directory do not by default have admin rights to log in to or manage Windows Azure based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then clicking the Administrators tab within it.
Sign-In Integration within Visual Studio

If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in, you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using. No downloading of management certificates required. All of the authentication was handled using your Windows Azure Active Directory!

Manage Subscriptions across Multiple Directories

If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update. If you don’t like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it. If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.

Filtering By Directory and Subscription

Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.

Windows Azure SDK 2.2

Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features including:

Visual Studio 2013 Support
Integrated Windows Azure Sign-In support within Visual Studio
Remote Debugging Cloud Services with Visual Studio
Firewall Management support within Visual Studio for SQL Databases
Visual Studio 2013 RTM VM Images for MSDN Subscribers
Windows Azure Management Libraries for .NET
Updated Windows Azure PowerShell Cmdlets and ScriptCenter

I’ll post a follow-up blog shortly with more details about all of the above.
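To give a feel for the certificate-free workflow described above, here is a minimal sketch using the updated Windows Azure PowerShell cmdlets that shipped alongside SDK 2.2 (the subscription name is hypothetical):

    Add-AzureAccount                    # opens an Azure AD sign-in prompt
    Get-AzureSubscription               # lists subscriptions this identity can manage
    Select-AzureSubscription -SubscriptionName "Contoso Production"
    Get-AzureVM                         # management calls now authenticate via your directory credentials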
Additional Updates

In addition to the above enhancements, today’s release also includes a number of additional improvements:

AutoScale: Richer time and date based scheduling support (set different rules on different dates)
AutoScale: Ability to scale to zero Virtual Machines (very useful for Dev/Test scenarios)
AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
Operation Logs: Auditing support for Service Bus management operations

Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

Summary

Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference
San Diego, CA, http://www.toorcon.org/
Dan Anderson, October 2012

It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California.

The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I reviewed only some of the talks, below: those that I attended and that interested me.

MalAndroid—the Crux of Android Infections, Aditya K. Sood
Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
Privacy at the Handset: New FCC Rules?, Valkyrie
Hacking Measured Boot and UEFI, Dan Griffin
You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
Accessibility and Security, Anna Shubina
Stop Patching, for Stronger PCI Compliance, Adam Brand
McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

MalAndroid—the Crux of Android Infections
Aditya K. Sood, IOActive, Michigan State PhD candidate

Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker (Bouncer) and a vulnerability checker.

What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid); all they want is functionality, extensibility, mobility. Android had no "proper" encryption before Android 3.0. There is no built-in protection against social engineering and web tricks. Alternative Android app markets are unsafe: simply visiting some markets can infect Android.

Aditya classified Android malware types as:

Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location.
Type K—Kernel. Exploits underlying Linux libraries or the kernel.
Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors.

What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes, for example promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded", that is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime.
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons—and then delete themselves. For example, the DKF Bootkit (China).

Android app problems:

no code signing! all self-signed
native code execution
permission sandbox — all or none
alternate marketplaces
no robust Android malware detection at network level
delayed patch process

Programming Weird Machines with ELF Metadata
Rebecca "bx" Shapiro, Dartmouth College, NH
https://github.com/bx/elf-bf-tools
@bxsays on twitter

Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

Bx then talked about using ELF metadata as an (unintended) "weird machine". Some ELF background: A compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:

instructions: add, move, jump if not 0 (jnz)
think of symbol table entries as "registers"
a symbol table value is "contents"
immediate values are constants
direct values are addresses (e.g., 0xdeadbeef)
move instruction: a relocation table entry
add instruction: a relocation table "addend" entry
jnz instruction: takes multiple relocation table entries

The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at the "end" and uses IFUNC table entries (containing a function pointer address). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and 3 registers.

Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so it remained root. Then BF modified ping's -t argument handling to execute the -t filename as root. It's best to show what this modified ping does with an example:

    $ whoami
    bx
    $ ping localhost -t backdoor.sh    # executes backdoor
    $ whoami
    root

The modified code increased from 285948 bytes to 290209 bytes.
A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

Privacy at the Handset: New FCC Rules?
"Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries!

Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, to "recollate" data with other databases to identify customers indirectly.

The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result iPhones may be more regulated.

Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant.

What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon; enforcement will be a problem, but consumers will still see some benefit.

Hacking Measured Boot and UEFI
Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

Dan talked about hacking measured UEFI boot. First some terms:

UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits.
TPM - a hardware security device that stores hashes and hardware-protected keys.
"Secure boot" can control at the firmware level which boot images can boot.
"Measured boot" is an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers).
"Remote attestation" allows remote validation and control based on policy on a remote attestation server.

Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source implementation of UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

UEFI Weaknesses.
UEFI toolkits are evolving rapidly, but UEFI has weaknesses:

it assumes the user is an ally
it trusts the TPM implicitly, and attached to the computer
the hibernate file is unprotected (disk encryption protects against this)
protection is migrating from hardware to firmware
delays in patching and whitelist updates
will UEFI really be adopted by the mainstream? (smartphone hardware support, bank support, apathetic consumer support)

You Can't Buy Security: Building the Open Source InfoSec Program
Boris Sverdlik, ISDPodcast.com co-host

Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, reporting, but doesn't care about security—IT has a conflict of interest. There's no magic bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing aren't sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—soft skills.

Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team); meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., AV * EF = SLE). Learn the different business units and legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn the sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering).

Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security.

Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost).

Access controls with authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.

Application security: Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.

Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."

Operational controls: Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.

Security awareness: The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
Pen test by crowdsourcing—test with logging. COSSP, http://www.cossp.org/ - Comprehensive Open Source Security Project.

What Journalists Want: The Investigative Reporters' Perspective on Hacking
Dave Maas, San Diego CityBeat
Jason Leopold, Truthout.org

The difference between hackers and investigative journalists: For hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills.

J-School in 60 Seconds: Generic formula: a person or issue of public interest, new info, or a new angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

Media awareness of hackers and trends: journalists are becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch).

Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often you need to sue (especially the FBI). CPRA is faster, and requests can be vague. There are also dumps and leaks (a la Wikileaks).

Journalists want: leads, protecting themselves and their sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
Anna Shubina, Dartmouth College

Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers, for our purpose, to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice, because someone may want to read what you write.

Google, for example, is very particular about which web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content.

PDF is a security nightmare: an executable format, with embedded Flash, JavaScript, etc., and 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse and cannot be solved, due to the Halting Problem.
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help though, except for "Robust", but here are some gems:

* all information should be parsable (paraphrasing)
* if not parsable, it cannot be converted to alternate formats
* maximize compatibility in new document formats

Executable webpages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input.

Solutions: Accessibility and security problems come from the same source. Expose the "better user experience" myth. Keep your corner of the Internet parsable. Remember the "Halting Problem"—recognize false solutions (checking and verifying tools).

Stop Patching, for Stronger PCI Compliance
Adam Brand, protiviti, @adamrbrand, http://www.picfun.com/

Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, Brian (an IT guy) spent 50% of his time, 960 hours/year, patching POSs in 850 restaurants. Often, applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, and misconfiguration (firewall ports open).

Patching Myths:

Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead.
Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities.

Adam says good recommendations come from NIST 800-40. Instead, use sane patching and focus on what's really important. From NIST 800-40:

Proactive: Use a proactive vulnerability management process: use change control and configuration management, monitor file integrity.
Monitor: start with the NVD and other vulnerability alerts, not scanner results.
Evaluate: public-facing system? workstation? internal server? (risk rank)
Decide: on action and timeline.
Test: pre-test patches (stability, functionality, rollback) for change control.
Install: notify, change control, tickets.

McAfee Secure & Trustmarks — a Hacker's Best Friend
Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

"McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes McAfee's remote scanning. The problem is that removal of a trustmark acts as a flag that you're vulnerable. It's easy to spot a status change by viewing the McAfee list on its website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, then it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag saying whether you're vulnerable.
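As a rough illustration of the monitoring idea (the speakers used Perl; the URL below is hypothetical), a seal check can be as small as requesting the badge image and flagging it when it shrinks to a placeholder:

    # Sketch (PowerShell 3.0+): HEAD-request the seal image and flag it if it
    # has shrunk to a stub such as a 1x1 pixel placeholder image.
    $resp = Invoke-WebRequest -Uri "https://shop.example.com/mcafee-seal.png" -Method Head
    if ([int]$resp.Headers["Content-Length"] -lt 200) {
        "Seal replaced with a placeholder - site may no longer be certified"
    }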


  • SQL SERVER – What is MDS? – Master Data Services in Microsoft SQL Server 2008 R2

    - by pinaldave
    What is MDS? Master Data Services helps enterprises standardize the data people rely on to make critical business decisions. With Master Data Services, IT organizations can centrally manage critical data assets company-wide and across diverse systems, enable more people to securely manage master data directly, and ensure the integrity of information over time. (Source: Microsoft)

Today I will be talking about this subject at Microsoft TechEd India. If you want to learn how to standardize your data and apply business rules to validate it, you must attend my session. MDS is a very interesting concept; I will cover it in 10 short but very interesting slides. I will make sure that in the very first 20 minutes you understand the following topics:

Introduction to Master Data Management
What is Master Data and its Challenges
MDM Challenges and Advantages
Microsoft Master Data Services
Benefits and Key Features
Uses of MDS Capabilities
Key Features of MDS

This slide deck will be followed by around 30 minutes of demo telling the story of entities, hierarchies, versions, security, consolidation and collections. I will tell this story keeping business rules at the center. We will take one business rule, which starts as a simple validation rule, and make it much more complex yet very useful to the product. I will also demonstrate a few real-life scenarios where I will be talking about MDS and its usage. Do not miss this session. At the end of the session a book will be awarded to the best participant.

My session details:
Session: Master Data Services in Microsoft SQL Server 2008 R2
Date: April 12, 2010
Time: 2:30pm-3:30pm

SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, the MDS master data hub, which is the vital component, helps ensure reporting consistency across systems and delivers faster, more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Business Intelligence, Data Warehousing, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology
Tagged: TechEd, TechEdIn


  • Developer’s Life – Every Developer is a Spiderman

    - by Pinal Dave
    I have to admit, Spiderman is my favorite superhero. The most recent movie was just released in theaters, so it has been at the front of my mind for some time. Spiderman was my favorite superhero even before the latest movie came out, but of course I took my whole family to see the movie as soon as I could! Every one of us loved it, including my daughter. We all left the movie thinking how great it would be to be Spiderman. So, with that in mind, I started thinking about how we are like Spiderman in our everyday lives, especially developers. Let me list some of the reasons why I think every developer is a Spiderman.

We have special powers, just like a superhero. There is a reason that when there are problems or emergencies, we get called in, just like a superhero! Our powers might not be the ability to swing through skyscrapers on a web (our powers are our debugging abilities), but there are still similarities!

Spiderman never gives up. He might not be the strongest superhero, and while the ability to shoot webs from your wrists is a pretty cool power, it's not as impressive as being able to fly, or be invisible, or turn into a hulking green monster. Developers are also human. We have cool abilities, but our true strength lies in our willingness to work hard, find solutions, and go above and beyond to solve problems.

Spiderman and developers have "spidey sense." This is sort of a joke in the comics and movies as well: Spiderman can just tell when something is about to go wrong, or when a villain is just around the corner. Developers also have a spidey sense about when a server is about to crash (usually at midnight on a Saturday).

Spiderman makes a great superhero because he doesn't look like one. Clark Kent is probably fooling no one, hiding his superhero persona behind glasses. But Peter Parker actually does blend in. Great developers also blend in. When they do their job right, no one knows they were there at all.

"With great power comes great responsibility." There is a joke about developers (sometimes we even tell the jokes) that if they are unhappy, the server or databases might mysteriously develop problems. The truth is, very few developers would do something to harm a company's computer system; they take their job very seriously. It is a big responsibility.

These are just a few of the reasons why I love Spiderman, why I love being a developer, and why I think developers are the greatest. Let me know other reasons you love Spiderman and developers, or if you can shoot webs from your wrists; I might have a job for you.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • Oracle SQL Developer is for Oracle Database

    - by thatjeffsmith
    What is Oracle SQL Developer? Well, according to this document on OTN…

What is SQL Developer? Date: May 2014. Oracle SQL Developer is the Oracle Database IDE. A free graphical user interface, Oracle SQL Developer allows database users and administrators to do their database tasks in fewer clicks and keystrokes. A productivity tool, SQL Developer’s main objective is to help the end user save time and maximize the return on investment in the Oracle Database technology stack.

Ok, sounds pretty straightforward. Where does the confusion lie then?

Some People Use SQL Developer to Connect to 3rd Party Databases

SQL Developer allows you to register 3rd party database JDBC drivers. The "3rd party" here is a company OTHER than Oracle that makes a database product. You know who they are (SAP, MSFT, IBM, etc.).

Registering 3rd party JDBC drivers in SQL Developer

But maybe you don’t understand why we support these types of connections? It’s for one driving reason:

To Help You Migrate to Oracle Database

Yes, you get a worksheet and a tree to query and browse those systems. But the real meat and bones there are our migration projects and our translation scratch editor. At the end of the day, it’s there so you can move your data from, say, Sybase ASE to Oracle Database. On a side note, the migration technology was previously available in a separate application, the Migration Workbench. That technology and the awesome people behind it were folded into SQL Developer.

So when asked what SQL Developer is, I say it’s the Oracle Database IDE and the official 3rd party database migration platform for Oracle. So anyway, when you ask for better support for another 3rd party provider, we deliver that support based on that business driver. If another 3rd party database JDBC driver is introduced, it’s because we have a lot of customers migrating from that platform. We’re not adding it to make it easier for you to work with SQL Server on your Mac. But if you find that useful, that is cool. It’s just not why we’ve got support for SQL Server connections in SQL Developer.


  • Looking for Your Next Challenge...Don't Stretch Too Far

    - by david.talamelli
    In my role as a Recruiter at Oracle I receive a large number of resumes from people who are interested in working with us. People contact me for a number of reasons: it can be about a specific role that we may be hiring for, or they may send me an email asking if there are any suitable roles for them. Sometimes when I speak to people, we have roles available that are similar to the roles they are in now. People are sometimes interested in making this type of sideways move if their motivation to change jobs is not necessarily increased responsibility or career advancement (for example: money, redundancy, work environment). However, there are times when, after walking through a specific role with a candidate, they may say to me: "You know, that is very similar to the role that I am doing now. I would not want to move unless my next role presents me with the next challenge in my career." This is a fair statement: if a person is looking to change jobs for the next step in their career, they should be looking at suitable opportunities that will address that need. In this instance a sideways step will not really present any new challenges or responsibilities; the main change would be the company they are working for.

Candidates looking for a new role because they want to move up the ladder should be looking for a role that offers them the next level of responsibility. I think the best job changes for people looking for career advancement are the roles that stretch someone outside of their comfort zone, but do not stretch them so much that they can't cope with the added responsibilities and pressure. In my head I often think of this in the context of an elastic band: you can stretch it, but only so much before it snaps. That is what you should be looking for: to be stretched, but not so much that you snap.

If you are, for example, in an individual contributor role and would like to move into a management role, you may not be quite ready to take on a role that involves managing a large workforce or requires significant people management experience. While your intentions may be right, your lack of management experience may put you outside the scope of the search to be successful in this type of role. In this example you can move from an individual contributor role to a management role, but it may need to be managing a smaller team rather than a larger one. While you are trying to make this transition, you can try to pick up some responsibilities in your current role that will give you the skills and experience you need for your next role. Never be afraid to put your hand up to help on a new project or piece of work. You never know when that newly gained experience may come in handy in your career.

This article was originally posted on David Talamelli's Blog - David's Journal on Tap


  • SQL SERVER – 4 Tips for ETL Software IDE Developers

    - by pinaldave
    In a previous blog, I introduced the notion of Semantic Types. To an end user, a seamlessly integrated semantic typing engine significantly increases the ease of use of an ETL IDE (integrated development environment, or developer studio). This led me to think about other ease-of-use issues I have encountered while building ETL applications. When I get stumped while programming, I find myself asking variations on these questions: "How do I…?" "Now what?" "Why isn't this working?" "Why do I have to redo the work I just did?" It seems to me that a good ETL IDE will anticipate these questions and seek to answer them before they are even asked. So here are my tips to help software vendors build developer IDEs that actually make development easier.

How do I…? While developing an ETL application, have you ever asked yourself: "How do I set up the connection to my SQL Server database?", "How do I import my table definitions from Access?", etc.? An easy answer might be "read the manual", but sometimes product manuals are not robust or easily accessible. So, integrating robust how-to instructions directly into your ETL studio would help users get the information they need at the time they need it.

Now what? IDEs in general know where you last clicked or performed an action using an input device such as a keyboard, so they should be able to reasonably predict the design context you are in and suggest the next steps accordingly. Context-sensitive suggestions based on the state of the user's work will help users move forward in ETL application development.

Why isn't this working? Or why do I have to wait until I compile to be told about a critical design issue? If an ETL IDE is smart enough to signal to users what in their design structures is left to be completed or has been completed incorrectly, then the developer can spend much less time in the design → compile → error-correct loop. Just-in-time validation helps users detect and correct programming errors earlier in the ETL development life cycle.

Why do I have to redo the work I just did? In ETL development, schemas, transformation rules, connectivity objects, etc., can be reused in various situations. Using mouse-clicks to build and manage libraries of reusable design objects means that the application development effort should decrease over time, as the library acquires more objects.

I met a great company at SQL PASS that is trying to address many of these usability issues. Check them out at www.expressor-software.com. What other ease-of-use suggestions do you have for ETL software vendors? Please post your valuable comments.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Best Practices, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
Tagged: ETL

    Read the article

  • Informal Interviews: Just Relax (or Should I?)

    - by david.talamelli
    I was in our St Kilda Rd office last week and had the chance to meet up with Dan and David from GradConnection. I love what these guys are doing; their business has been around for two years and I really like how they have taken their own experiences from University, found a niche in their market and chased it. These guys are always networking. Whenever they come to Melbourne they send me a tweet to catch up, and even though we often miss each other they are persistent. It sounds like their business is going from strength to strength, and I have to think that success comes from their hard work and enthusiasm for their business. Anyway, before my meeting with the GradConnection guys I noticed a tweet from Kevin Wheeler who was saying it was his last day in Melbourne - I sent him a message and we met up that afternoon for a coffee (I am getting to the point, I promise). On my way back to the office after my meeting I was on a tram, sitting beside a lady who was talking to her friend on her mobile. She had just come back from an interview and was telling her friend how laid back the meeting was, and how she wasn't too sure of the next steps of the process as it was a really informal meeting. The recurring theme from this phone call was that 1) she and the interviewer got along really well and had a lot in common, and 2) the meeting was very informal and relaxed. I wasn't at the interview so I cannot say for certain, but in my experience, regardless of whether it is a relaxed interview at a coffee shop or a behavioural interview in an office setting, one thing is consistent: the employer is assessing your ability to perform the role and fit into the company. Different interviewers have different interviewing styles. For example, some interviewers may create a very relaxed environment thinking this will draw out less practiced answers and give a more realistic view of the person and their abilities, while other interviewers may put the candidate "under the pump" to see how they react in a stressful situation. There are as many interviewing styles as there are interviewers. I think candidates, regardless of the type of interview, need to be professional and honest about their skills/experiences, abilities and career plans (if you know what they are). Even though an interview may be informal, you shouldn't slip into complacency. You should not forget the end goal of the interview, which is to get a job. Business happens outside of the office walls, and while you may meet someone for a coffee it is still a business meeting no matter how relaxed the setting. You don't need to be a stick in the mud and not let your personality shine through, but the first impression you make may play a big part in how far in the interview process you go. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • Agile isn’t always Agile

    - by BuckWoody
    I want to make a disclaimer before I dive into this topic - at Microsoft we use all kinds of development methodologies, and I've worked in lots of other shops using lots of methodologies. This is one of those "religious" topics like which programming language or database is best, and is bound to generate some heat. But this isn't pointed towards one particular event or company. That said, I really don't like Agile. In particular, I really don't like Scrum. Let me explain. Agile is a methodology for developing software that emphasizes adapting to change more so than the traditional "waterfall" method of developing software. Within Agile is a process called a "scrum" meeting. The pitch goes that in this quick, stand-up meeting the people involved in the development project (which should include the DBA, but very often doesn't) go around the room stating what they are working on, when that will be finished, and what is keeping them from getting finished ("blockers", these are called). Sounds all very non-threatening - we're just "enabling" the developers to work more efficiently. And that's what we all want, isn't it? Except it doesn't work. In my experience (and yours might be VERY different) this just turns into a micro-management environment, where devs have to defend their daily work. Of all the work environments, micro-management environments are THE worst. I don't like working in them, and I don't like creating them. The other issue I have with Scrum is that it makes your whole team task-focused. Everyone wants to make sure that they are not the "long pole" in the meeting (meaning that they aren't the one that gets all the attention) so they only focus on safe, quick tasks. And although you have all of the boxes checked, the project does not go well at all - even when it does finish. Before you comment (and please do comment) I fully realize that Agile <> Scrum. But in my experience, it sometimes turns into that.

    Read the article

  • A Few Words from Oracle’s Channel Chief

    - by Meghan Fritz-Oracle
    As Oracle enters a new fiscal year, I want to take a moment and reflect on my time at Oracle thus far. The technology industry is currently at an inflection point, trying to figure out where growth will come from. When you look at Oracle's portfolio of products, it's a complete stack from applications to disk, offering differentiation in the marketplace. I was initially drawn to Oracle's leadership, strategy, and world-class technology. Since joining the Oracle team in October 2013, I've had the privilege of traveling around the globe visiting our partners and customers, and wanted to share several common themes that came up during these meetings. Cloud: Many partners are trying to figure out how to build a business around the cloud. Oracle partners can currently resell or refer our cloud services. We saw over 300 percent growth from cloud resale last quarter. Engineered Systems: Hardware and software integrated together to simplify IT allows our joint customers to focus on the innovation they need to compete in a complex marketplace. We're seeing great success in several areas, with more partners saying, "Let's start with Oracle on Oracle." The Internet of Things: This is the next big opportunity for device manufacturers and ISVs to capture market share in what is projected to be a multi-trillion-dollar opportunity, according to Gartner. Competition: We've got a tremendous middleware platform and a tremendous database install base. We're not just a database company; we are a complete provider. So looking ahead, what are my priorities for fiscal 2015? Oracle PartnerNetwork has some very exciting plans on the horizon. There is a lot more leadership and many announcements to unfold, especially at this year's Global Partner Kickoff taking place on June 25 and 26, depending on your region and time zone. I, along with several other Oracle executives, will be shedding light on Oracle's strategy for the upcoming year, the latest opportunities within the OPN Specialized Program, and sales strategies that will help you continue to grow and profit with Oracle. Stay tuned for registration information next week. We also have Oracle OpenWorld and JavaOne to look forward to. These conferences are taking place in San Francisco from September 28 - October 2. We'll have a variety of partner-specific activities for you at OPN Central @ OpenWorld, including the OPN keynote, the famed AfterDark networking reception, access to the OPN Lounge and more. In the meantime, I hope that everyone has a great end to fiscal 2014. Best regards, Rich Geraffo, Senior Vice President, Worldwide Alliances and Channels

    Read the article

  • Oracle Database 12c: Partner Material

    - by Thanos Terentes Printzios
    Oracle Database 12c offers the latest innovation from Oracle Database Server Technologies with a new Multitenant Architecture, which can help accelerate database consolidation and Cloud projects. The primary resource for Partners on Database 12c is of course the Oracle Database 12c Knowledge Zone, where you can get up to speed on the latest Database 12c enhancements so you can sell, implement and support it. Resources and material on Oracle Database 12c can be found all around Oracle.com, and even hidden in AR posters like the one on the left. Here are some additional resources for you. The Oracle Database 12c: Interactive Quick Reference is a multimedia tool for various terms and concepts used in the Oracle Database 12c release. This reference was built as a multimedia web page which provides descriptions of the database architectural components, and references to relevant documentation. Overall, it is a nice little tool which may help you quickly find a view you are searching for or get more information about background processes in Oracle Database 12c. Use this tool to find valuable information for any complex concept or product in an intuitive and useful manner. The Oracle Database 12c Learning Library contains several technical trainings (2-day DBA, Multitenant Architecture, etc.) but also Videos/Demos, Learning Paths by Role and a lot more. Get ready and become an Oracle Database 12c Specialized Partner with the Oracle Database 12c Specialization for Partners. Review the Specialization Criteria and your company status, and apply for an Oracle Database 12c Specialization. Access our OPN training repository to get prepared for the exams. The "Oracle Database 12c: Plug into the Cloud!" Marketing Kit includes a great selection of assets to help Oracle partners in their marketing activities to promote solutions that leverage all the new features of Oracle Database 12c. In the package you will find assets (templates, invitation texts, presentations, telemarketing script, ...) to be used for your demand generation activities; a full set of presentations with the value propositions for customers; and Sales Enablement and Sales Support material. Review here and start planning your marketing activities around Database 12c. See also the Oracle Database 12c Quick Reference Guide (PDF) and Oracle Database 12c - Partner FAQ (PDF). Partners that need further assistance with Database 12c can always contact us at partner.imc-AT-beehiveonline.oracle-DOT-com or locally address one of the Oracle ECEMEA Partner Hubs for assistance.

    Read the article

  • How To: LIC of India Online Policy Payments And Status Enquiries

    - by Kavitha
    Life Insurance Corporation (LIC) of India is the largest state-owned insurance company in India and also the country's largest investor. The premium amount for insurance policies purchased from LIC is usually paid by visiting the nearest LIC office or with the help of LIC agents. It's a time-consuming process, and most of us are fed up with standing in long queues at LIC offices to pay the premium amount. LIC Online Services Website The worries are over - there is no need to stand in a long queue or approach an agent to pay your LIC policies. LIC of India has an online payment and renewal facility: http://licindia.in. To pay the policies online we have to register with LIC and login to the site using the registered username and password. Once you login, you can enter your profile information and the LIC policies that are purchased in your name (register only the policies that are purchased in your name, otherwise you can land in trouble). Once registered, managing activities like payments, loan eligibility checking and policy maturity becomes very easy. For online payment of policies you can find the Pay Premium Online tab, which when clicked takes you to a page that lists all the policies that are due. Payments can be made using credit/debit cards and online banking systems. Almost all the Indian banks are covered as part of the online payment system. Other services that are available through the online system of LIC are: View ULIP Policies, Premium Calendar, Calculate Loan Eligibility, Revival Quote, Policy Maturity, Address Change Requests, etc. LIC Policy Status Enquiry Through Phone LIC also has a helpline/customer care number '1251'. You can call 1251 to find out about your policy status, premium due date, loan eligibility and possible loan amount, time of maturity, etc. This article, titled "How To: LIC of India Online Policy Payments And Status Enquiries", was originally published at Tech Dreams.

    Read the article

  • SOA, Cloud and Service Technology Symposium a super success!

    - by JuergenKress
    The SOA, Cloud and Service Technology Symposium in London was a huge success, with more than 600 international attendees. Our SOA & BPM Community had a great presence there. At a joint booth with the Specialized partners Link Consulting, eProseed and Griffiths Waite, we presented the latest product updates and had many interesting discussions with customers and speakers. Special thanks to our HQ product management team - Demed, Tim and Manas - for coming over right before OOW. Also a very big thank you to Matthias Ziegler from Accenture for presenting our joint presentation individually! If you missed the conference, here are the key presentation links for your reference: Big Data and its impact on SOA Demed L'Her [View PDF] Building 21st Century Service-Oriented Airports Shyam Kumar [View PDF] Building Cloudy Services Anne Thomas Manes [View PDF] Community Management: The Next Wave of SOA Governance and API Management Tim E. Hall [View PDF] Elastic SOA in the Cloud Steve Millidge [View PDF] Governing Shared Services: On-Premise & In the Cloud Thomas Erl [View Video] Introducing the Cloud Computing Design Patterns Catalogue Thomas Erl and Amin Naserpour [CloudPatterns.org] Lost in Translation - Common Mistakes Interpreting Patterns Mark Simpson [View PDF] Moving Applications to the Cloud: Migration Options Anne Thomas Manes [View PDF] New Paradigms for Application Architecture: From Applications to IT Services Anne Thomas Manes [View PDF] NoSQL for Data Services, Data Virtualization & Big Data Guido Schmutz [View PDF] A Pragmatic Approach to Cloud Computing Andrea Morena [View PDF] The Successful Execution of the SOA and BPM Vision Using a Business Capability Framework: Concepts and Examples Clemens Utschig and Manas Deb [View PDF] Service Modeling & BPM Business Value Patterns Matthias Ziegler [View PDF] [Podcast] SOA Adoption in the Brazilian Ministry of Health - Case Study Ricardo Puttini and Andre Toffanello [PDF Coming Soon] SOA Environments are a Big Data Problem Markus Zirn, Splunk and Maciej Barcz [View PDF] SOA Governance at EDP: A Global Energy Company Manuel Rosa [View PDF] For all presentations please visit the SOA, Cloud and Service Technology Symposium Website. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • PERT shows relationships between defined tasks in a project without taking into consideration a time line

    The program evaluation and review technique (PERT) shows relationships between defined tasks in a project without taking a time line into consideration. This chart is an excellent way to identify dependencies of tasks based on other tasks, and it allows project managers to identify the critical path of a project to minimize any time delays to the project. Craig Borysowich, in his article "Pros & Cons of the PERT/CPM Method", stated the following advantages and disadvantages: "PERT/CPM has the following advantages: A PERT/CPM chart explicitly defines and makes visible dependencies (precedence relationships) between the WBS elements, PERT/CPM facilitates identification of the critical path and makes this visible, PERT/CPM facilitates identification of early start, late start, and slack for each activity, PERT/CPM provides for potentially reduced project duration due to better understanding of dependencies leading to improved overlapping of activities and tasks where feasible. PERT/CPM has the following disadvantages: There can be potentially hundreds or thousands of activities and individual dependency relationships, The network charts tend to be large and unwieldy, requiring several pages to print and requiring special size paper, The lack of a timeframe on most PERT/CPM charts makes it harder to show status, although colors can help (e.g., a specific color for completed nodes), When the PERT/CPM charts become unwieldy, they are no longer used to manage the project." (Borysowich, 2008) Traditionally, PERT charts are used in the initial planning of a project, as in a project using the waterfall approach. Once the chart is created, project managers can further analyze this data to determine the earliest start time for each stage in the project. This is important because this information can be used to help forecast resource needs during a project, and where in the project they will be needed. However, an agile environment can approach this differently because of its constant need to be in contact with the client and the other stakeholders. The PERT chart can also be used during a project iteration to determine what is to be worked on next, much like a prioritized to-do list a wife would give her husband at the start of a weekend. In my personal opinion, a COTS-centric environment would not really change how a company uses a PERT chart in its day-to-day work. The only thing I can say is that there would be fewer tasks to include in the chart, because the functional milestones are already completed when the components are purchased. References: http://www.netmba.com/operations/project/pert/ http://web2.concordia.ca/Quality/tools/20pertchart.pdf http://it.toolbox.com/blogs/enterprise-solutions/pros-cons-of-the-pertcpm-method-22221
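    As a rough illustration of the early start/late start/slack computation described above, here is a small Python sketch of the PERT/CPM forward and backward passes (the tasks and durations are made up; tasks are listed in topological order to keep the passes simple):

        # Each task: (duration, list of prerequisite tasks); listed in topological order.
        tasks = {
            "design":  (3, []),
            "build":   (5, ["design"]),
            "test":    (2, ["build"]),
            "docs":    (2, ["design"]),
            "release": (1, ["test", "docs"]),
        }

        # Forward pass: earliest start/finish per task.
        earliest = {}
        def early_finish(name):
            if name not in earliest:
                dur, deps = tasks[name]
                start = max((early_finish(d) for d in deps), default=0)
                earliest[name] = (start, start + dur)
            return earliest[name][1]

        project_end = max(early_finish(t) for t in tasks)

        # Backward pass: latest finish per task (walk the list in reverse).
        latest = {t: project_end for t in tasks}
        for t in reversed(list(tasks)):
            dur, deps = tasks[t]
            for d in deps:
                latest[d] = min(latest[d], latest[t] - dur)

        # Slack = latest start - earliest start; zero slack marks the critical path.
        for t in tasks:
            es, _ = earliest[t]
            slack = (latest[t] - tasks[t][0]) - es
            print(f"{t:8} early start={es:2} slack={slack:2}" + ("  <- critical" if slack == 0 else ""))

    Tasks with zero slack form the critical path the article refers to; delaying any one of them delays the whole project.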

    Read the article

  • 10 PowerShell One Liners

    - by BizTalk Visionary
    Here are a few one-liners that use NetCmdlets. Some of these I've blogged about before, some are new. Let me know if you have questions, which ones you find useful, or how you altered these to suit your own needs. Send email to a list of recipient addresses: import-csv users.csv | % { send-email -to $_.email -from [email protected] -subject "Important Email" -message "Hello World!" -server 10.0.1.1 } Show the access control list for a specific Exchange folder: get-imap -server $mymailserver -cred $mycred -folder INBOX.RESUMES -acl Add look and read permissions on an Exchange folder, for a list of accounts pulled from a CSV file: import-csv users.csv | % { set-imap -server $mymailserver -cred $mycred -folder INBOX.RESUMES -acluser $_.username -acl "lr" } Sync system time with an Internet time server: get-time -server clock.psu.edu -set To remotely sync the time on a set of computers: import-csv computers.csv | % { Invoke-Command -computerName $_.computer -cred $mycred -scriptblock { get-time -server clock.psu.edu -set } } Delete all emails from an Exchange folder that match certain criteria. For example, delete all emails from [email protected]: get-imap -server $mailserver -cred $mycred | ? {$_.FromEmail -eq "[email protected]"} | %{ set-imap -server $mailserver -cred $mycred -message $_.Id -delete } Update Twitter status from PowerShell: get-http -url "http://twitter.com/statuses/update.xml" -cred $mycred -variablename status -variablevalue "Tweeting with NetCmdlets!" A test-path that works over FTP, FTPS (SSL), and SFTP (SSH) connections: get-ftp -server $remoteserver -cred $mycred -path /remote/path/to/checkfor* Don't forget the *. Also, to use SSL or SSH just add an -ssl or -ssh parameter. List disabled user accounts in Active Directory (or any other LDAP server): get-ldap -server $ad -cred $mycred -dn dc=yourdc -searchscope wholesubtree -search "(&(objectclass=user)(objectclass=person)(company=*)(userAccountControl:1.2.840.113556.1.4.803:=2))" List Active Directory groups and their members: get-ldap -server testman -cred $mycred -dn dc=NS2 -searchscope wholesubtree -search "(&(objectclass=group)(cn=*admin*))" | select ResultDN, member Display the last initialization time (e.g. last reboot time) of all discoverable SNMP agents on a network: import-csv computers.csv | % { get-snmp -agent $_.computer -oid sysUpTime.0 | %{([datetime]::Now).AddSeconds(-($_.OIDValue/100))} } Not mentioned here: data conversion (Yenc, QP, UUencoding, MD5, SHA1, base64, etc.), DNS, News Groups (NNTP/UseNet), POP mail, RSS feeds, Amazon S3, Syslog, TFTP, TraceRoute, SNMP Traps, UDP, WebDAV, whois, Rexec/Rshell/Telnet, Zip files, sending IMs (Jabber/GoogleTalk/XMPP), sending text messages and pages, ping, and more. Original Source: Lance's Textbox

    Read the article

  • Silverlight Death Trolls Dancing on XAML’s Grave

    - by D'Arcy Lussier
    I’m starting to see a whole bunch of tweets and blog posts on how Silverlight/WPF is dead, or how the XAML team has been disbanded at Microsoft, or how someone predicted Silverlight would die, blah blah blah. They all have a similar ring to them though: “Told ya so!” “They were stupid ideas anyway!” “Serves Microsoft right, boy are they dumb!” Let me tell you something: all those that are gleefully raving about Silverlight/WPF’s demise are nothing more than death trolls. Let’s assume that everything out there is true. Microsoft is obviously moving towards HTML 5 in a huge way (TechCrunch pointed out that SkyDrive has replaced its Silverlight based version with an HTML 5 one), and not just on the web, as we’ve seen with recent announcements about how HTML 5 apps will be natively supported on Windows 8. WPF never caught on in the marketplace, regardless of its superior technology offering compared to Winforms. And Silverlight… well, it gave Flash a good run for its money, but plug-in based web applications are becoming passé in light of HTML 5. (It’s interesting that at a developer conference I put on just a few weeks ago, only 1 out of 60+ sessions included Silverlight, while 5 focused on HTML 5.) So what does this *death* of Silverlight/WPF/XAML mean then in the grand scheme of things (again, assuming that they truly *are* dying/dead)? Well, nothing really… at least nothing bad. Silverlight has given us some fantastic applications and experiences (Vancouver Olympics, anyone?), and WP7 couldn’t have launched without Silverlight as its development platform. And WPF, although it had putrid adoption, has had some great success stories. A Canadian company that I talked to recently showed me how they re-wrote their point-of-sale application entirely in WPF, and the product is a huge success, providing features their competitors don’t. Arguably (and I say that only because I know I’m going to get WTF comments for this), VS.NET 2010 is a great example of what a WPF app can provide over previous C++ based applications. Technologies evolve. In a decade we’ve had 5 versions of the .NET framework, seen languages like J# come and go, seen F# appear, seen communications layers change with WCF, seen EF go through multiple evolutions and traditional ADO.NET Datasets go extinct (from actual use anyway), and seen ASP.NET Webforms be replaced with ASP.NET MVC as a preferred web platform. Are Silverlight and WPF done? Maybe… probably? Thing is, it doesn’t really affect me personally in any way, or you… so why would we care if it gets replaced with something better and more robust that we can build better solutions with? Just remember the golden rule: don’t feed the trolls.

    Read the article

  • Oracle's Australian Graduate Recruitment Program

    - by david.talamelli
    I have been with Oracle for 5 years now, and one thing I have found there is never a shortage of here is - Variety. Over the last 5 years I have had the opportunity to work on projects across various countries, across various technologies and skill-sets, and across various levels of seniority. No two days are the same. One of the projects I was fortunate to be involved in occurred last year, and it is one of the ones that is closest to me. Last year I was able to take responsibility for our 2011 Graduate Recruitment drive in Australia. Two weeks ago I went to Sydney to meet our Graduates who started with us in February 2011, and it was great to see them come to the end (or beginning, actually) of our journey together. I am excited at the potential of what our Graduates' careers will develop into here with us. I remember trying to explain life at Oracle during our interviews last year; it is great to see those same Graduates with us now, learning and developing life and business skills that I hope they will take with them in their professional careers. I was talking to one of my colleagues this week who mentioned that the excitement and energy our new Graduates bring is infectious, and I agree - it really is. Our Graduates have a big learning curve ahead of them and they are about to start going on rotations into some of our Business Groups, but I think it is a great experience to see how a global company operates and pulls together to achieve results. Here is a picture we took the other week of this year's Oracle Graduates (if any of our Graduates are reading this blog - it was great seeing you in NSW and I wish you all the success here at Oracle). Once again, Oracle's Graduate Program will be running in 2011 in Australia (Graduates will start in Jan/Feb 2012). The Oracle Australia Graduate Development Program is a one-year program consisting of orientation, formal training, project rotations in one core line of business, and finally job placement. The formal training is a combination of structured development programs on soft skills and functional competencies via various delivery formats. Graduates are also expected to work in a team environment and complete multiple projects addressing real business challenges, at the same time gaining a broad business understanding. For our Australia program we are hiring in our North Ryde and Melbourne offices. Resume submissions are being accepted now. First round interviews will take place in June 2011, with final round interviews in July 2011. The Australia Graduate Program is open to Australian residents and citizens who are either in the final year of their studies or graduated the previous year. For more details on Oracle and our Graduate Program visit our Campus website. To express your interest, mail your resume to [email protected]

    Read the article

  • Is 4-5 years the “Midlife Crisis” for a programming career?

    - by Jeff
    I’ve been programming C# professionally for a bit over 4 years now. For the past 4 years I’ve worked for a few small/medium companies, ranging from “web/ads agencies” and small industry-specific software shops to a small startup. I’ve mainly been doing “business apps” that involve using high-level (garbage collected) programming languages, and my overall experience is that all of the work I’ve done could have been more professional. A lot of things were done incorrectly (in a rush), mainly due to cost factors: people always wanted something “now” and with the smallest amount of spendable money. I kept thinking that maybe if I could work for a bigger company, or a company better suited to programmers, or somewhere with the money and time to really build something longer-term and more maintainable, I might have enjoyed my career more. I’ve never had a “mentor” to guide me through my 4-year career. I am pretty much a blog / Google / self-taught programmer, apart from my bachelor IT degree. I’ve also observed that most so-called “senior” programmers in “my working environment” are really not that senior skill-wise. They are “senior” only because they’ve been programmers a long time, but the code they write and the decisions they make are absolutely rubbish! They don’t want to learn, they don’t want to be better, they just want to get paid and do what they’re told to do, which makes sense, and most of us are like that. Maybe that’s why they are where they are now. But I don’t want to become like them; I want to be better. I’ve reached a mental state where I no longer intend to be a programmer for my future career. I have started to think maybe there are better things out there to work on. The more blogs I read and the more “best practices” I’ve tried, the more I feel I am drifting away from “my reality”. But I am not a bad programmer either, otherwise I don’t think I would be where I am now. I think 4-5 years is a stage that can be a step forward career-wise, or a step out of where you are. I just wanted to hear what others have to say about what I’ve mentioned above, and whether you’ve experienced a similar situation in your past programming career and how you dealt with it. Thanks.

    Read the article

  • So it comes to PASS…

    - by Tony Davis
    How does your company gauge the benefit of attending a technical conference? What's the best change you made as a direct result of attendance? It's time again for the PASS Summit, and I, like most people, go with a set of general goals for enhancing technical knowledge: to learn more about PowerShell, to drill into SQL Server performance tuning techniques, and so on. Most will write up a brief report on the event for the rest of the team. Ideally, however, it will go a bit further than that; each conference should result in a specific improvement to one of your systems, or in the way you do your job. As co-editor of Simple-talk.com, and responsible for the majority of our SQL books, my "high level" goals don't vary much from conference to conference. I'm always on the lookout for good new authors. I target interesting new technologies and tools and try to learn more. I return with a list of actions, new articles to commission, and potential new authors. Three years ago, however, I started setting myself the goal of implementing "one new thing" after each conference. After one, I adopted Kanban for managing my workload, a technique that places strict limits on "work in progress" and makes the overall workload, and backlog, highly visible. After another I trialled a community book project. At PASS 2010, one of my general goals was to delve deeper into SQL Server transaction log mechanics, but on top of that, I set a specific goal of writing something useful on the topic. I started a Stairway series and, ultimately, it's turned into a book! If you're attending the PASS Summit this year, take some time to consider what specific improvement or change you'll implement as a result. Also, try to drop by the Red Gate booth (#101). During the Vendor event on Wednesday evening, Gail Shaw and I will be there to discuss, and hand out copies of, the book. Cheers, Tony.

    Read the article

  • The softer side of BPM

    - by [email protected]
    BPM and RTD are great complementary technologies that together provide a much higher benefit than each of them does separately. BPM covers the need for automating processes, making sure that there is uniformity, that rules and regulations are complied with, and that the process runs smoothly and quickly processes the units flowing through it. By nature, this automation and unification can lead to a stricter, less flexible process. To avoid this problem it is common to encounter process definitions that include multiple conditional branches and human input to help direct processing in the direction that best applies to the current situation. This is where RTD comes into play. The selection of branches and conditions and the optimization of decisions is better left in the hands of a system that can measure the results of its decisions in a closed-loop fashion and make decisions based on the empirical knowledge accumulated through observing the running of the process. When designing a business process there are key places in which it may be beneficial to introduce RTD decisions:
    Thresholds - whenever a threshold is used to determine the processing of a unit, there may be an opportunity to make the threshold "softer" by introducing an RTD decision based on predicted results. For example, an insurance company process may have a total claim threshold for initiating an investigation. Instead of having that fixed threshold, RTD could be used to help determine which claims to investigate based on the likelihood that they are fraudulent, the cost of investigation, and the effect on processing time.
    Human decisions - sometimes a process will let the human participants make decisions of flow. For example, a call center process may leave the escalation decision to the agent. While this has flexibility, it may produce undesired results and asymmetry in customer treatment that is based not on objective functions but on the subjective reasoning of the agent. Instead, an RTD decision may be introduced to recommend escalation or other kinds of treatment.
    Content selection - a process may include the use of messaging with customers. The selection of the most appropriate message for the customer, given the context, can be optimized with RTD.
    A/B testing - a process may have optional paths where it is not clear which populations they work better for. Rather than making an arbitrary selection, or a selection by committee of the option deemed the best, RTD can be introduced to dynamically determine the best path for each unit.
    In summary, RTD can be used to make BPM-based process automation more dynamic and adaptable to the different situations encountered in processing, effectively making the automation softer and less rigid.
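    As a toy example of the "softer threshold" point above (plain Python with invented numbers - this is not the actual RTD product API), a fixed claim-amount cutoff is replaced by an expected-value decision:

        def should_investigate(claim_amount, fraud_probability,
                               investigation_cost=500.0, delay_cost=50.0):
            """Investigate only when the expected recovery beats the total cost
            of investigating (including the cost of delaying the process)."""
            expected_recovery = fraud_probability * claim_amount
            return expected_recovery > investigation_cost + delay_cost

        # A closed-loop engine such as RTD would keep re-estimating fraud_probability
        # from the outcomes of past investigations; here it is simply an input.
        print(should_investigate(20000, 0.05))   # 1000 > 550 -> True
        print(should_investigate(20000, 0.02))   # 400 <= 550 -> False

    The rule itself stays simple; in a real deployment it is the probability estimate that the closed-loop engine continuously tunes from observed outcomes.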

    Read the article

  • What was missing from the Content Strategy Forum?

    - by Roger Hart
    In April, Paris hosted the first ever Content Strategy Forum. The event's website proudly proclaims: 170 attendees, 18 nationalities, 17 speakers, 1 volcano... Content Strategy Forum 2010 rocked the world! The volcano was in Iceland, and the closest we came to rocking the world was a cursory mention in the Huffington Post, but I'll grant the event was awesome. One thing missing from that list, however, is "94 companies" (plus a couple of universities and freelancers, and what have you). A glance through the attendees directory reveals a fairly wide organisational turnout - 24 students from two Parisian universities, countless design and marketing agencies, a series of tech firms, small and large. Two delegates from IBM, two from ARM, an appearance from RIM, Skype, and Facebook; twelve from the various bits of eBay. Oh, and, err, nobody from Google, Microsoft, Yahoo, Amazon, Play, Twitter, LinkedIn, Craigslist, the BBC, no banks I noticed, and I didn't spot a newspaper. You get the idea. Facebook notwithstanding, you have to scroll through a few pages of Alexa rankings to find company names from the attendee list. I find this interesting, and I'm not wholly sure what to make of it. Of the large, web-centric, content-rich organizations conspicuously absent, at least one of two things is true: They didn't know about the event. They didn't care about the event. Maybe these guys all have content strategy completely sorted, and it's an utterly naturalised part of their business process. Maybe nobody at, say, Apple or Play.com ever publishes a single piece of content that isn't neatly tailored to their (clearly defined, of course) user and business goals. Wouldn't that be lovely? The thing is, in that rosy and beatific world, there's still a case for those folks to join the community. There are bound to be other perspectives, and things to learn. You see, the other thing achingly conspicuous by its absence was case studies. In her keynote address, Kristina Halvorson made the point that what content strategy really needs is some big, loud success stories. A point I'd firmly second as a content strategist working within an organisation. Sarah Cancilla's presentation on content strategy at Facebook included some very neat, specific examples, and was richer for it. It didn't hurt that the example was Facebook - you're getting impressively big numbers off the bat. What about the other big boys? Is there anybody out there with a perspective? Do we all just look very silly to you, fretting away over text and images and users and purposes? Is content validation and maintenance so accustomed a part of your business that calling attention to it is like sniffing the air and saying "Hmm, a lot of nitrogen about today."? And if it is, do you have any wisdom to share?

    Read the article

  • Fidelity Investments Life Insurance Executive Weighs in on Policy Administration Modernization

    - by helen.pitts(at)oracle.com
    James Klauer, vice president of Client Services Technology at Fidelity Investments Life Insurance, weighs in on the rationale and challenges associated with policy administration system replacement in this month's digital issue of Insurance Networking News. In "The Policy Administration Replacement Quandary", Klauer shares the primary business benefit that can be realized by adopting a modern policy administration system - a timely topic, given that recent industry analyst surveys indicate policy administration replacement and modernization will continue to be a top priority for insurers this year. "Modern policy administration systems are more flexible than systems of the past," Klauer says in the article. "This has allowed us to shorten our delivery time for new products and product changes. We have also had a greater ability to integrate with other systems and to deliver process efficiencies." Klauer goes on to advise that insurers ensure they have a solid understanding of the requirements when replacing their legacy policy administration system. "If you can afford the time, take the opportunity to re-engineer your business processes. We were able to drastically change our death processes, introducing automation and error-proofing." Click here to read more of Klauer's insights and recommendations for best practices in the publication's "Ask & Answered" column. You can also learn more about the benefits of an adaptive, rules-driven approach to policy administration and how to mitigate risks associated with system replacement by attending the free Oracle Insurance Virtual Summit: Fueling the Adaptive Insurance Enterprise, 10:00 a.m. - 6:00 p.m. EST, Wednesday, January 26. Insurance Networking News and Oracle Insurance have teamed up to bring you this first-of-its-kind event. This year's theme, "Fueling the Adaptive Insurance Enterprise," will focus on bringing you information about exciting new technology concepts, which can help your company react more quickly to new market opportunities and, ultimately, grow the business. Visit virtual booths and chat online with Oracle product specialists, network with other insurers, learn about exciting new product announcements, win prizes, and much more - all without leaving your office. Be sure to attend the on-demand session, "Adapt, Transform and Grow: Accelerate Speed to Market with Adaptive Insurance Policy Administration," hosted by Kate Fowler, product strategy director for Oracle Insurance Policy Administration for Life and Annuity. Register Now! Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • The Missing Post

    - by Joe Mayo
    It’s somewhat of a mystery how the writing process can conjure up results that weren’t initially intended. Case in point: another post was planned to be in place of this one, but it never saw the light of day. That post started off as an introduction to a technology I had just learned and used, and whose experience I wanted to share with others. The beginning was fun and demonstrated how easy it was to get started. One of the things I’ve been pondering over time is that the Web is filled with introductions to new technologies and quick first looks, so I set out to add more depth, share lessons learned, and generally help you avoid the problems I encountered along the way; problems being a key theme of why you aren’t reading that post at this very minute. Problems that curiously came from nowhere to thwart my good intentions. Success was sweet when using the tool for the prototypical demo scenario. The thing is, I intended the tool to accomplish a real task. Having embarked on the path toward getting the job done, glitches began creeping into the process. Realizing that this was all a bit new, I had patience and found a suitable work-around, but this was to be short-lived. Like ants marching to a freshly laid-out picnic, the problems kept coming until I had to get up and walk away. Not to be outdone, sheer will and brute-force manual intervention led to mission accomplishment. Though I kept a positive outlook and was pleased with the final result, the process of using the tool had somewhat soured. Regardless of a less-than-stellar experience with the tool, I have a great deal of respect for the company that produced it and the people who built it. Perhaps I empathize with what they might feel after reading a post that details such deficiencies in their product. Sure, if you’re in this business, you’ve got to have a thick skin: brush it off, fix the problem, and move on to greatness. But today I feel like they’re people, and they are probably already aware of any issues I would seemingly reveal. Anyone who builds a product or provides a service takes a lot of pride in what they do. Sometimes they screw up, and if they’re worth a dime, they make up for it. I think that will happen in this case, and there’s no reason why I should post information that has the potential to sound more negative than helpful. While no one would ever notice or care either way, I’m posting something that won’t harm. Joe

    Read the article

  • A toolset for self improvement and learning [closed]

    - by Sebastian
    Possible Duplicate: I’m having trouble learning
    I've been working as an IT consultant for 1½ years and I am very passionate about programming. Before that I studied for an MSc in Software Engineering and also had a part-time job as a developer for a big telecom company. During that time I took extra courses and earned an SCJP certificate. I have been continuously reading a lot of books during the last 3½ years. Now to my problem. I want to continue learning and become a really, really good developer. Apart from my daytime job as a full-time Java developer I have taken university courses in, for me, new languages and paradigms - most recently, Android game development and then functional programming with Scala. I've read books, gone to conferences and given a couple of presentations for internal training purposes in our local office. I want some advice from other people who have previously been in my situation or currently are. What are you guys doing to keep improving yourselves? Here are some things that I have found work for me: Reading books. I've mostly read books about best practices for programming, OO design, refactoring, design patterns, TDD - software craftsmanship, if you like. I keep a reading list, and my current book is Apprenticeship Patterns. Taking courses. In my country we have a really good system for taking online distance courses. I have also taken one course at coursera.org and I highly recommend that platform. I've looked at courses at oreilly.com, industriallogic and javaspecialists.eu and they seem to be okay. If someone gives these types of courses a really good review, I can probably convince my boss. Workshops that span a couple of days would probably be harder, but I've seen that Uncle Bob will have one about refactoring and TDD in 6 months not far from here. :) Are there possibly some online learning platforms that I don't know about? Educational videos. I've bought Uncle Bob's videos from cleancoders.com and I highly recommend them. The only thing I don't like is that they are quite expensive and that he talks about astronomy for ~10 minutes in every episode. Getting certified. I had a lot of fun and learned a lot when I studied for the SCJP. I have also done some preparation for the Microsoft equivalent but never went for it. I think it is good when selling yourself as a newly graduated student, and it will also boost your knowledge if you are interested in it. Now I would like others to start sharing their experiences and possibly give me some advice! BR Sebastian

    Read the article

< Previous Page | 256 257 258 259 260 261 262 263 264 265 266 267  | Next Page >