Search Results

Search found 34513 results on 1381 pages for 'end task'.

  • GParted in UBUNTU shows entire disk as UNALLOCATED SPACE

    - by msPeachy
    Good day to everyone. I hope someone can help me with my problem. I have a dual-boot Windows and Ubuntu system. I recently encountered an "hd0 out of disk" error and wasn't able to boot Ubuntu. So I booted into Windows; after 2 to 3 rounds of booting and rebooting Windows, I tried booting Ubuntu but still got the "hd0 out of disk" error. I decided to run Ubuntu from a live USB to try to fix my Ubuntu partition using GParted, but when I run GParted, it shows my entire disk as UNALLOCATED SPACE! The strange thing is that Nautilus still shows and mounts my partitions. Also, every time I boot into Windows, my partitions exist and I am able to read and write to them. I have no idea what is wrong. Please help! I can't stand using Windows since most of the tools I use are in Ubuntu. I don't mind reinstalling Ubuntu. In fact, I already tried reinstalling from the live USB but wasn't able to, since GParted and the Ubuntu installer do not recognize my partitions and show the entire disk as unallocated space. I am currently running Ubuntu from the live USB.

    Here's the output of sudo fdisk -l:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb30ab30a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   104869887    52433920   83  Linux
        /dev/sda2       104869888   105074687      102400    7  HPFS/NTFS/exFAT
        /dev/sda3       105074688   156149759    25537536    7  HPFS/NTFS/exFAT
        /dev/sda4       156151800   625153409   234500805    f  W95 Ext'd (LBA)
        /dev/sda5       156151808   169156591     6502392   82  Linux swap / Solaris
        /dev/sda6       169158656   294991871    62916608    7  HPFS/NTFS/exFAT
        /dev/sda7       294993920   471037944    88022012+   7  HPFS/NTFS/exFAT
        /dev/sda8       471041928   625121152    77039612+   7  HPFS/NTFS/exFAT

    When I run sudo parted -l, I get this error message:

        ubuntu@ubuntu:~$ sudo parted -l
        Error: Can't have a partition outside the disk!
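
    A quick arithmetic check against the fdisk output above suggests why parted and GParted give up on the whole disk: the extended partition /dev/sda4 is recorded as ending at sector 625153409, which is past the disk's last sector (the disk has 625142448 sectors in total). The sketch below is only an illustrative check, not a fix; tools such as fixparts or sfdisk are typically used to correct such a partition-table entry, after backing up the table.

        # Illustrative check: does any partition from the fdisk listing end past the disk?
        # Sector counts are copied from the "fdisk -l" output above.
        TOTAL_SECTORS = 625142448

        partitions = {
            "/dev/sda1": (2048, 104869887),
            "/dev/sda2": (104869888, 105074687),
            "/dev/sda3": (105074688, 156149759),
            "/dev/sda4": (156151800, 625153409),   # extended partition
            "/dev/sda5": (156151808, 169156591),
            "/dev/sda6": (169158656, 294991871),
            "/dev/sda7": (294993920, 471037944),
            "/dev/sda8": (471041928, 625121152),
        }

        for dev, (start, end) in partitions.items():
            if end >= TOTAL_SECTORS:
                overshoot = end - (TOTAL_SECTORS - 1)
                print(f"{dev} ends at sector {end}, {overshoot} sectors past the last usable sector")

        # Prints: /dev/sda4 ends at sector 625153409, 10962 sectors past the last usable sector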

  • SD-CARD reader does not show in ubuntu

    - by shantanu
    I bought an Acer Aspire 4250. It has a built-in SD card reader, but it is not working. Nothing shows up in /media or fdisk, but there is something in dmesg.

    dmesg:

        new high-speed USB device number 3 using ehci_hcd
        [ 127.396733] scsi5 : usb-storage 2-2:1.0
        [ 128.526562] scsi 5:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0
        [ 128.532512] sd 5:0:0:0: Attached scsi generic sg2 type 0
        [ 129.008110] ohci_hcd 0000:00:12.0: PCI INT A disabled
        [ 129.032083] ohci_hcd 0000:00:13.0: PCI INT A disabled
        [ 129.056411] ohci_hcd 0000:00:16.0: PCI INT A disabled
        [ 129.338026] sd 5:0:0:0: [sdb] Attached SCSI removable disk
        [ 129.808328] ohci_hcd 0000:00:14.5: PCI INT C disabled
        [ 167.728616] usb 2-2: USB disconnect, device number 3
        [ 169.872284] ehci_hcd 0000:00:13.2: PCI INT B disabled
        [ 169.872340] ehci_hcd 0000:00:13.2: PME# enabled

    fdisk -l:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0006bc6d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048    48828415    24413184    7  HPFS/NTFS/exFAT
        /dev/sda2        48828416    50829311     1000448   82  Linux swap / Solaris
        /dev/sda3        50829312    99657727    24414208   83  Linux
        /dev/sda4        99659774   625141759   262740993    5  Extended
        Partition 4 does not start on physical sector boundary.
        /dev/sda5        99659776   275439615    87889920    7  HPFS/NTFS/exFAT
        /dev/sda6       275441664   451221503    87889920    7  HPFS/NTFS/exFAT
        /dev/sda7       451223552   625141759    86959104    7  HPFS/NTFS/exFAT

    I found another problem just now. I formatted the last three drives as EXT4 with Disk Utility, but they are still showing as NTFS/exFAT in fdisk. :-(
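
    A note on the second problem: fdisk's "System" column reports the partition-type ID stored in the partition table, not the filesystem actually on the partition, so an ext4-formatted partition can keep showing Id 7 (HPFS/NTFS/exFAT) until the type ID is changed. A probe tool such as blkid reports the real filesystem. The sketch below is only an illustration of that distinction; it assumes blkid (from util-linux) is installed and is run with sufficient privileges.

        # Illustrative sketch: ask blkid for the actual filesystem on each partition,
        # independent of the type ID that fdisk displays.
        import subprocess

        for dev in ["/dev/sda5", "/dev/sda6", "/dev/sda7"]:
            out = subprocess.run(
                ["blkid", "-o", "value", "-s", "TYPE", dev],
                capture_output=True, text=True,
            )
            print(dev, "filesystem:", out.stdout.strip() or "unknown")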

  • Story of success: MySQL Enterprise Backup (MEB) was successfully integrated with IBM Tivoli Storage Manager (TSM) via System Backup to Tape (SBT) interface.

    - by user13334359
    Since version 3.6, MEB supports backups to tape through the SBT interface. The officially supported tool for such backups to tape is Oracle Secure Backup (OSB), but there are a lot of other storage managers, and MEB allows you to use them through the SBT interface. Since version 3.7 it also has the option --sbt-environment, which allows you to pass environment variables (not needed by OSB) to third-party managers. At the same time, MEB cannot guarantee it will work with all of them.

    This month we were contacted by a customer who wanted to use IBM Tivoli Storage Manager (TSM) with MEB. We could only tell them the same thing I wrote in the previous paragraph: this solution is supposed to work, but you have to be pioneers of this technology. And they agreed. They agreed to be the pioneers, and so the story begins.

    MEB requires the following options to be specified by those who want to connect it to the SBT interface:

    --sbt-database-name: a name which should be handed over to the SBT interface. This can be any name. The default, MySQL, works for most cases, so the user is not required to specify this option.

    --sbt-lib-path: path to the SBT library. For TSM this library comes with "Data Protection for Oracle", which, in its turn, interfaces with Oracle Recovery Manager (RMAN), which uses the SBT interface. So you need to install it even if you don't use Oracle.

    --sbt-environment: environment for the third-party manager. This option is not needed when you use OSB, but is almost always necessary for third-party SBT managers. TSM requires the variable TDPO_OPTFILE to be set and pointing to the TSM configuration file.

    --backup-image=sbt:: path to the image. The prefix "sbt:" indicates that the image should be sent through the SBT interface.

    So the full command in our case would look like:

        ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user --password=foobar \
          --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \
          --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir backup-to-image

    And this command results in the following output log:

        MySQL Enterprise Backup version 3.7.1 [2012/02/16]
        Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.

        INFO: Starting with following command line ...
         ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user
                --password=foobar --backup-image=sbt:my-first-backup
                --sbt-lib-path=/usr/lib/libobk.so
                --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt"
                --backup-dir=/path/to/my/dir backup-to-image

        sbt-environment: 'TDPO_OPTFILE=/path/to/my/tdpo.opt'
        INFO: Got some server configuration information from running server.

        IMPORTANT: Please check that mysqlbackup run completes successfully.
                   At the end of a successful 'backup-to-image' run mysqlbackup
                   prints "mysqlbackup completed OK!".

        --------------------------------------------------------------------
                               Server Repository Options:
        --------------------------------------------------------------------
          datadir                   = /path/to/data
          innodb_data_home_dir      = /path/to/data
          innodb_data_file_path     = ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M
          innodb_log_group_home_dir = /path/to/data
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 268435456

        --------------------------------------------------------------------
                               Backup Config Options:
        --------------------------------------------------------------------
          datadir                   = /path/to/my/dir/datadir
          innodb_data_home_dir      = /path/to/my/dir/datadir
          innodb_data_file_path     = ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M
          innodb_log_group_home_dir = /path/to/my/dir/datadir
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 268435456

        Backup Image Path = sbt:my-first-backup
        mysqlbackup: INFO: Unique generated backup id for this is 13297406400663200
        120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS is 'Data Protection for Oracle: version 5.5.1.0'
        120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS version '5.5.1.0'
        mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
        mysqlbackup: INFO: System tablespace file format is Antelope.
        mysqlbackup: INFO: Found checkpoint at lsn 31668381.
        mysqlbackup: INFO: Starting log scan from lsn 31668224.
        120220  8:54:00 mysqlbackup: INFO: Copying log...
        120220  8:54:00 mysqlbackup: INFO: Log copied, lsn 31668381.
                  We wait 1 second before starting copying the data files...
        120220  8:54:01 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata1 (Antelope file format).
        mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000
        120220  8:55:30 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata2 (Antelope file format).
        mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000
        120220  8:57:18 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata3 (Antelope file format).
        mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server.
        120220 08:57:22 mysqlbackup: INFO: Starting to lock all the tables....
        120220 08:57:22 mysqlbackup: INFO: All tables are locked and flushed to disk
        mysqlbackup: INFO: Opening backup source directory '/path/to/data/'
        120220 08:57:22 mysqlbackup: INFO: Starting to backup all files in subdirectories of '/path/to/data/'
        mysqlbackup: INFO: Backing up the database directory 'mysql'
        mysqlbackup: INFO: Backing up the database directory 'test'
        mysqlbackup: INFO: Copying innodb data and logs during final stage ...
        mysqlbackup: INFO: A copied database page was modified at 31668381.
                  (This is the highest lsn found on page)
                  Scanned log up to lsn 31670396.
                  Was able to parse the log up to lsn 31670396.
                  Maximum page number for a log record 328
        120220 08:57:23 mysqlbackup: INFO: All tables unlocked
        mysqlbackup: INFO: All MySQL tables were locked for 0.000 seconds
        120220 08:59:01 mysqlbackup: INFO: meb_sbt_backup_close: blocks: 4162  size: 1048576  bytes: 4363985063
        120220  8:59:01 mysqlbackup: INFO: Full backup completed!
        mysqlbackup: INFO: MySQL binlog position: filename bin_mysql.001453, position 2105
        mysqlbackup: WARNING: backup-image already closed
        mysqlbackup: INFO: Backup image created successfully.:
                   Image Path: 'sbt:my-first-backup'

        -------------------------------------------------------------
           Parameters Summary
        -------------------------------------------------------------
           Start LSN                  : 31668224
           End LSN                    : 31670396
        -------------------------------------------------------------

        mysqlbackup completed OK!

    Backup successfully completed.

    To restore it you should use the same commands as for any other MEB image, but you need to provide the sbt* options as well:

        $ ./mysqlbackup --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \
          --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir image-to-backup-dir

    Then apply the log as usual:

        $ ./mysqlbackup --backup-dir=/path/to/my/dir apply-log

    Then stop mysqld and finally copy back:

        $ ./mysqlbackup --defaults-file=path/to/my.cnf --backup-dir=/path/to/my/dir copy-back

    Disclaimer: this is only the story of one success, which may be useful for someone else. MEB is not regularly tested with, and not guaranteed to work with, IBM TSM or any other third-party storage manager.
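
    For repeated runs, the same invocation can be assembled by a small wrapper. The sketch below is purely illustrative (it is not part of MEB); it simply builds and launches the backup-to-image command shown above, using the placeholder port, credentials and paths from this post, which you would replace with your own.

        # Minimal sketch: wrap the mysqlbackup SBT invocation shown above.
        # All literal values are the placeholders used in this post.
        import subprocess

        def backup_to_tsm(image_name, tdpo_optfile, backup_dir):
            cmd = [
                "./mysqlbackup",
                "--port=3307", "--protocol=tcp",
                "--user=backup_user", "--password=foobar",
                f"--backup-image=sbt:{image_name}",
                "--sbt-lib-path=/usr/lib/libobk.so",
                f"--sbt-environment=TDPO_OPTFILE={tdpo_optfile}",
                f"--backup-dir={backup_dir}",
                "backup-to-image",
            ]
            # check=True raises if mysqlbackup exits non-zero; it is still worth
            # confirming that "mysqlbackup completed OK!" appears in the output.
            subprocess.run(cmd, check=True)

        backup_to_tsm("my-first-backup", "/path/to/my/tdpo.opt", "/path/to/my/dir")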

  • Advantages of Hudson and Sonar over manual process or homegrown scripts.

    - by Tom G
    My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like for us to integrate with Hudson and Sonar or something similar.

    My reasons for this are that it'll provide a 'zero-click' build step to provide testers with new experimental builds, that it will let us deploy applications to a server more easily, and that tools such as Sonar will provide us with much-needed metrics on code coverage, Javadoc, package dependencies and the like. He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment.

    Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bit us in the ass), he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyway. I contend that having the accurate metrics that Sonar (or fill in your favorite similar tool) provides gives us a good place to start for any refactoring efforts, not to mention general maintenance -- after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need?

    Some more background: an alumnus of our company is working at a Navy research lab now and suggested these two tools in particular as ones they've had great success with. My coworker and I have also had our share of friendly disagreements before -- he's more of the "CLI for all, compiles Gentoo in his spare time and uses Git" type and I'm more of a "give me an intuitive GUI, plays with XNA and is fine with SVN" type, so there's definitely some element of culture clash here.

  • Version control for game development - issues and solutions?

    - by Cyclops
    There are a lot of Version Control systems available, including open-source ones such as Subversion, Git, and Mercurial, plus commercial ones such as Perforce. How well do they support the process of game-development? What are the issues using VCS, with regard to non-text files (binary files), large projects, etc? What are solutions to these problems, if any? For organization of Answers, let's try on a per-package basis. Update each package/Answer with your results. Also, please list some brief details in your answer, about whether your VCS is free or commercial, distributed versus centralized, etc. Update: Found a nice article comparing two of the VCS below - apparently, Git is MacGyver and Mercurial is Bond. Well, I'm glad that's settled... And the author has a nice quote at the end: It’s OK to proselytize to those who have not switched to a distributed VCS yet, but trying to convert a Git user to Mercurial (or vice-versa) is a waste of everyone’s time and energy. Especially since Git and Mercurial's real enemy is Subversion. Dang, it's a code-eat-code world out there in FOSS-land...

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is Database as a Service, and EM12c R4 provides a significant update to it. The key themes are:

    1. Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    2. Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    3. Improved Rapid Start Kits
    4. Extensible Metering and Chargeback
    5. Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:

    - Present a collection of standardized database service definitions
    - Define standardized pools of hardware and software for provisioning
    - Role-based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Setup of chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read-only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used - maximum availability, performance, security
    - Log apply can be set to sync or async along with the required apply lag

    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (as supported in EM 12cR4):

        Primary   Standby [1 or more]
        SI        -
        SI        SI
        RAC       -
        RAC       SI
        RAC       RAC
        RON       -
        RON       RON

    where RON = RAC One Node, which is supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
    Here we have defined 4 service levels, which have been deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage-agnostic, self-service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

    In release 4, we expand the scope of options supported by Snap Clone with the addition of database CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend reading the following sources:

    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    - Space and time savings
    - Ease of setup - no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduced dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups.

    The Rapid Start Kit in reality is a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

    - Cloud boundary XML: This file defines the cloud topology in terms of the zones and pools, along with host names, Oracle Home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input XML: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.

    Once all the XML files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter in the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like the number of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter in the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention.
    These have mostly been asked for by customers like you. They are:

    - Custom naming of DB Services: Self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces. Every custom name is validated for uniqueness in EM.
    - 'Create like' of Service Templates: Creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: View the details of a profile, like datafiles, control files, snapshot ids, export/import files, etc., prior to its selection in the service template.
    - Cleanup automation for failed and successful requests: A single emcli command cleans up all remnant artifacts of a failed request. Cleanup can be performed on a per-request basis or for the entire pool. As an extension, you can also delete successful requests.
    - Improved delete user workflow: Allows administrators to reassign cloud resources to another user or delete all of them.
    - Support for multiple tablespaces for Schema as a Service: In addition to multiple schemas, the user can also specify multiple tablespaces per request.

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References:
    - Cloud Management Page on OTN
    - Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

  • RTFMobile

    - by ultan o'broin
    It may seem obvious, but it's worth stating again: the idea that mobile users are going to read lots of user assistance on their devices is just wrong. So, Jakob Nielsen's post Mobile Content Is Twice as Difficult serves as a timely reminder for anyone thinking of putting manuals as a form of user assistance onto mobile phones. There is also an excellent post on UXMag.com explaining that one of the ways to screw up your iPhone app is to throw an old-style user manual into the user experience: 10 Surefire Ways to Screw Up Your iPhone App.

    (Image copyright and referenced from UX Magazine 2010)

    Instead, user assistance alternatives - if any at all - include one-time tours, graphics, in-context instructions, and so on. Not so sure that importing "humor" and "personality" works so well in the enterprise app space, myself. However, the message is clear: iPhone users don't read manuals. Great message. Users will figure it out, and if they can't, well then your app's UX is a problem and the app will fail. Shame some teams are obsessed with figuring out ways to port existing manuals to mobile platforms without any thought for the UX.

    Razorfish's Scatter/Gather blog says it all: "One thing that is particularly discouraging, most material currently available on 'Creating Content for the iPad' or similar themes turns out to be about getting traditional content onto, or into, the iPad."

    Now, manuals for non-end users in PDF format on eReaders is a different matter. I have research on that, but it's for another post.

    Technorati Tags: mobile, user assistance, UX, user experience, manuals, documentation

  • Restrict Tile Map to its boundaries

    - by Farooq Arshed
    I have loaded a TMX file in cocos2d-x and now I am trying to implement panning. I have successfully implemented the first part of panning, where the map moves. Now I want to restrict the map so it does not display beyond its boundary, where it shows a black screen. I am confused as to how to implement this. Below is my code; any help would be appreciated.

        bool HelloWorld::init()
        {
            if ( !CCLayer::init() )
            {
                return false;
            }
            const char* tmx = "isometric_grass_and_water.tmx";
            _tileMap = new CCTMXTiledMap();
            _tileMap->initWithTMXFile(tmx);
            this->addChild(_tileMap);
            this->setTouchEnabled(true);
            return true;
        }

        void HelloWorld::ccTouchesBegan(CCSet *touches, CCEvent *event)
        {
            CCSetIterator it;
            for (it = touches->begin(); it != touches->end(); ++it)
            {
                CCTouch* touch = (CCTouch*)it.operator*();
                CCLog("touches id: %d", touch->getID());
                oldLoc = touch->getLocationInView();
                oldLoc = CCDirector::sharedDirector()->convertToGL(oldLoc);
            }
        }

        void HelloWorld::ccTouchesMoved(CCSet *touches, CCEvent *event)
        {
            if (touches->count() == 1)
            {
                CCTouch* touch = (CCTouch*)( touches->anyObject() );
                this->moveScreen(touch);
            }
            else if (touches->count() == 2)
            {
                this->scaleScreen(touches);
            }
        }

        void HelloWorld::moveScreen(CCTouch* touch)
        {
            CCPoint currentLoc = touch->getLocationInView();
            currentLoc = CCDirector::sharedDirector()->convertToGL(currentLoc);
            CCPoint moveTo = ccpSub(oldLoc, currentLoc);
            moveTo = ccpMult(moveTo, -1);
            oldLoc = currentLoc;
            this->setPosition(ccpAdd(this->getPosition(), ccp(moveTo.x, moveTo.y)));
        }
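
    One common approach is to clamp the layer position after each pan so the visible window never leaves the map. The arithmetic is sketched below in Python purely for clarity; the same clamp would go at the end of moveScreen() in the C++ above, using the map's pixel size from the tile map's content size (for an isometric map the effective pixel bounds still need to be derived from that content size, but the idea is the same). The sizes used here are illustrative placeholders.

        # Illustrative clamp (placeholder sizes): keep the scrolled layer position
        # inside [-(map_size - screen_size), 0] on each axis so no black border shows.
        def clamp(value, lo, hi):
            return max(lo, min(hi, value))

        def clamp_position(pos_x, pos_y, map_w, map_h, screen_w, screen_h):
            min_x, min_y = -(map_w - screen_w), -(map_h - screen_h)
            return clamp(pos_x, min_x, 0), clamp(pos_y, min_y, 0)

        # Example: a 2048x2048 px map on a 960x640 screen, after a pan that
        # tried to move the layer to (-1200, 30):
        print(clamp_position(-1200, 30, 2048, 2048, 960, 640))   # -> (-1088, 0)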

  • Plastic Clamshell Packaging Voted Worse Design Ever

    - by Jason Fitzpatrick
    We've all been there: frustrated and trying to free a new purchase from its plastic clamshell jail. You're not alone; the packaging design has been voted the worst in history. In a poll at Quora, users voted on the absolute worst piece of design work they'd encountered. Overwhelmingly, they voted the annoying-to-open clamshell design to the top. The author of the top comment/entry, Anita Shillhorn, writes: “Design should help solve problems” — clamshells are supposed to make it harder to steal small products and easier for employees to arrange on display — but this packaging, she says, creates new ones, such as time wasted, frustration, and the little nicks and scrapes people incur as they just try to get their damn lightbulb out. This is a product designed for the manufacturers and the retailers, not the end users. There is even a Wikipedia page devoted to “wrap rage,” “the common name for heightened levels of anger and frustration resulting from the inability to open hard-to-remove packaging.” Hit up the link below for more entries in their worst-design poll. Before you go, if you've got a great tip for getting goods out of the plastic shell they ship in, make sure to share it in the comments.

    What Is The Worst Piece of Design Ever Done? [via The Atlantic]

  • Update the model on HttpPost and render the changes in the View

    - by Etienne Giust
    With MVC3, I came across the problem where I was rendering a view with an updated model at the end of an HttpPost, and the changes to the model were never applied to the rendered view:

    NOT working as expected!

        [HttpPost]
        public ActionResult Edit(JobModel editedJobModel)
        {
            // Update some model property
            editedJobModel.IsActive = true;

            // The view will NOT be updated as expected
            return View(editedJobModel);
        }

    This is the standard behavior. In MVC3, POSTing the model does not render the HTML helpers again. In my example, a HiddenFor bound to the IsActive value will not have its value set to true after the view is rendered.

    Are you stuck, then?

    Well, for one, you're not supposed to do that: in an ideal world you are supposed to apply the Post/Redirect/Get pattern. You would redirect to a new GET after your POST performed its actions. That's what I usually do, but sometimes, when maintaining code and implementing slight changes to a pre-existing and tested logic, one prefers to keep structural changes to a minimum.

    If you really have to (but my advice is to try to implement the PRG pattern whenever possible), here is a solution to alter values of the model on a POST and have the MVC engine render it correctly:

    Solution

        [HttpPost]
        public ActionResult Edit(JobModel editedJobModel)
        {
            // NOT WORKING: Update some model property
            //editedJobModel.IsActive = true;

            // Force the ModelState value for the IsActive property
            ModelState["IsActive"].Value = new ValueProviderResult(true, "True", null);

            // The view will be updated as expected
            return View(editedJobModel);
        }

    As you can see, it is a "dirty" solution, as the name (as a string) of the updated property is used as a key of the ModelState dictionary. Also, the use of ValueProviderResult is not that straightforward. But hey, it works.

  • how can I fix error: hd0 out of disk?

    - by rux
    I am running Ubuntu 12.04 on a netbook - an Acer AS 1410. After a download session, I restarted the computer and it said:

        error: hd0 out of disk.
        Press any key to continue...

    I pressed everything, but it's just frozen there. Any idea what's wrong with it and what I can do to fix it? I haven't been able to run my computer at all since it froze like that. Help please!

    I booted the live CD and ran sudo fdisk -lu in a terminal, and here's what it gave me:

        Disk /dev/sda: 60.0 GB, 60022480896 bytes
        255 heads, 63 sectors/track, 7297 cylinders, total 117231408 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x9a696263

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda3            2048   117229567    58613760    5  Extended
        /dev/sda5   *    71647232   109039615    18696192   83  Linux
        /dev/sda6       109041664   117229567     4093952   82  Linux swap / Solaris
        /dev/sda7            4096    71645183    35820544   83  Linux

        Partition table entries are not in disk order

    I am somewhat of a beginner in this, so I don't know what this means. Any ideas? Thanks!
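
    Not a diagnosis, just a reading aid for the listing above: the sketch below converts the sector numbers into byte offsets (512-byte sectors), which makes it easier to see where each partition sits on the 60 GB disk; for example, the bootable /dev/sda5 starts roughly 36.7 GB in.

        # Convert the fdisk sector numbers above into approximate GB offsets.
        SECTOR = 512

        parts = {
            "/dev/sda3": (2048, 117229567),
            "/dev/sda5": (71647232, 109039615),
            "/dev/sda6": (109041664, 117229567),
            "/dev/sda7": (4096, 71645183),
        }

        for dev, (start, end) in sorted(parts.items(), key=lambda kv: kv[1][0]):
            start_gb = start * SECTOR / 1e9
            size_gb = (end - start + 1) * SECTOR / 1e9
            print(f"{dev}: starts at {start_gb:6.2f} GB, size {size_gb:6.2f} GB")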

  • Exadata X3 In-Memory Database Machine: To be or not to be

    - by Luis Moreno Campos
    Since Larry Ellison announced Oracle Exadata X3 as the new generation of the Database Machine, he has positioned the product in the In-Memory Database arena. And that annoyed some people.

    We all know that In-Memory Databases are the ones that *only* execute in memory and use the other layers of storage for persistency (mainly disk). The Oracle database has always been a technology that uses memory as a caching mechanism, and that hasn't changed, nor will it change with Oracle Database 12c. So this is the central point of fuss when it comes to announcing an Engineered System as an In-Memory Database, when in fact it still runs Oracle Database - not vanilla, but still the same product.

    Let me tell you, purist people out there: when you find no new ground-breaking point to get all excited about, you decide to bash it and go against its claims. It's not like a car manufacturer that launches a mini-van in the market and calls it a Sports Car; we are talking about a fundamental change in the ILM stack: level 2 of caching is now self-sufficient. It's not DRAM? Who cares? It still lets you put into flash amounts of data not handled that way up until now, so I guess Oracle can name it whatever Larry wants, because in the end it's something never done before.

    Now let's imagine that you hop on the pure In-Memory Database bandwagon. You would be stuck with a database technology that lags hundreds of light years behind the Oracle Database in man-hours of innovation and features. Do you really want to travel back in time? Remember, the first rule about time travelling is that "Security is not Guaranteed". Your choice.

    LMC

  • What are functional-programming ways of implementing Conway's Game of Life

    - by George Mauer
    I recently implemented Conway's Game of Life in Javascript for fun (actually CoffeeScript, but same thing). Since Javascript can be used as a functional language, I was trying to stay at that end of the spectrum. I was not happy with my results. I am a fairly good OO programmer and my solution smacked of same-old-same-old. So, long question short: what is the (pseudocode) functional style of doing it?

    Here is pseudocode for my attempt:

        class Node
          update: (board) ->
            get number_of_alive_neighbors from board
            get this_is_alive from board
            if this_is_alive and number_of_alive_neighbors < 2 then die
            if this_is_alive and number_of_alive_neighbors > 3 then die
            if not this_is_alive and number_of_alive_neighbors == 3 then alive

        class NodeLocations
          at: (x, y) -> return node value at x,y
          of: (node) -> return x,y of node

        class Board
          getNeighbors: (node) ->
            use node_locations to check 8 neighbors around node and return count

        nodes = for 1..100
          new Node

        state = new NodeState(nodes)
        locations = new NodeLocations(nodes)
        board = new Board(locations, state)

        executeRound:
          state = clone state
          accumulated_changes = for n in nodes
            n.update(board)
          apply accumulated_changes to state
          board = new Board(locations, state)
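
    For comparison, here is one common functional formulation, sketched in Python rather than CoffeeScript: represent the board as an immutable set of live cell coordinates and compute each generation as a pure function of the previous one, with no mutable Node or Board objects at all.

        # A functional-style Game of Life: the board is a frozenset of live (x, y)
        # cells, and each generation is a pure function of the last one.
        from collections import Counter
        from itertools import product

        def neighbors(cell):
            x, y = cell
            return [(x + dx, y + dy)
                    for dx, dy in product((-1, 0, 1), repeat=2)
                    if (dx, dy) != (0, 0)]

        def step(live):
            counts = Counter(n for cell in live for n in neighbors(cell))
            return frozenset(
                cell for cell, count in counts.items()
                if count == 3 or (count == 2 and cell in live)
            )

        # Example: a blinker oscillates between a row and a column.
        blinker = frozenset({(0, 1), (1, 1), (2, 1)})
        print(sorted(step(blinker)))        # [(1, 0), (1, 1), (1, 2)]
        print(sorted(step(step(blinker))))  # back to [(0, 1), (1, 1), (2, 1)]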

  • Mobility Card in Bangalore for Transportation

    - by Rekha
    Transport Minister R Ashoka announced that Bangalore Metropolitan Transport Corporation (BMTC) services are going to be among the best in the world soon. BMTC has planned to launch a Mobility Card with which commuters can take rides on BMTC, KSRTC and future Metro train services without buying tickets for each ride. The conductor will have a simple device on which commuters can swipe their cards to deduct the ticket tariff for bus or metro rides automatically. The Mobility Card can be obtained by paying a fixed amount. This method saves time and spares commuters from having to pay exact change for tickets. Ashoka says the Volvo Vayu Vajra services have internet connectivity and voice announcements of every bus stop name, and this has been appreciated by commuters. With WiFi connections coming to Shatabdi trains soon and Mobility Cards on the way, services in India are moving closer to US standards. Government officials are keen on implementing these services before the end of this year. Hope all these services are well used and maintained.

    This article, titled Mobility Card in Bangalore for Transportation, was originally published at Tech Dreams.

  • What's New in Database Lifecycle Management in Enterprise Manager 12c Release 3

    - by HariSrinivasan
    Enterprise Manager 12c Release 3 includes improvements and enhancements across every area of the product. This blog provides an overview of the new and enhanced features in the Database Lifecycle Management area. I will dive into specific features in more depth in subsequent posts.

    "What's New?"

    In this release, we focused on four things:

    1. Lifecycle management support for the new Database 12c - Pluggable Databases
    2. Management of long running processes, such as a security patch cycle (Change Activity Planner)
    3. Management of large numbers of systems by
       · Leveraging new framework capabilities for lifecycle operations, such as the new advanced 'emcli' script option
       · Refining features such as configuration search and compliance
    4. Minor improvements and quality fixes to existing features
       · Rollback support for single instance databases
       · Improved "OFFLINE" patching experience
       · Faster collection of ORACLE_HOME configurations

    Lifecycle Management Support for new Database 12c - Pluggable Databases

    Database 12c introduces Pluggable Databases (PDBs), the brand new addition to help you achieve your consolidation goals. Pluggable databases offer unprecedented consolidation at the database level and native lifecycle verbs for creating, plugging and unplugging the databases on a container database (CDB). Enterprise Manager can supplement the capabilities of pluggable databases by offering workflows for migrating, provisioning and cloning them using the software library and the deployment procedures. For example, Enterprise Manager can migrate an existing database to a PDB or clone a PDB by storing a versioned copy in the software library. One can also manage the planned downtime related to patching by migrating the PDBs to a new CDB. While pluggable databases offer these exciting features, they can also pose configuration management and compliance challenges if not managed properly. Enterprise Manager features like inventory management, topology associations and configuration search can mitigate the sprawl of PDBs and also lock them to predefined golden standards using configuration comparison and compliance rules. Learn More ...

    Management of Long Running datacenter processes - Change Activity Planner (CAP)

    Currently, customers resort to cumbersome methods to create, execute, track and monitor change activities within their data center. Some customers use traditional tools such as spreadsheets, project planners and in-house custom built solutions. Customers often have weekly sync up meetings across stakeholders to collect status and updates. Some of the change activities, for example the quarterly patch set update (PSU) patch rollouts, are not single tasks but processes with multiple tasks. Some of those tasks are performed within Enterprise Manager Cloud Control (for example Patch) and some are performed outside of Enterprise Manager Cloud Control. These tasks often run for a long period of time and involve multiple people or teams. Enterprise Manager Cloud Control supports core data center operations such as configuration management, compliance management, and automation. Enterprise Manager Cloud Control release 12.1.0.3 leverages these capabilities and introduces the Change Activity Planner (CAP). CAP provides the ability to plan, execute, and track change activities in real time. It covers the typical datacenter activities that are spread over a long period of time, across multiple people and multiple targets (even target types).
    Here are some examples of change activity processes in a datacenter:

    · Patching large environments (PSU/CPU patching cycles)
    · Upgrading large numbers of database environments
    · Rolling out compliance rules
    · Database consolidation to Exadata environments

    CAP provides user flows for compliance officers/managers (including lead administrators) and operators (DBAs and admins). Managers can create change activity plans for various projects, and allocate resources, targets, and the groups affected. Upon activation of the plan, tasks are created and automatically assigned to individual administrators based on target ownership. Administrators (DBAs) can identify their tasks and understand the context, schedules, and priorities. They can complete tasks using Enterprise Manager Cloud Control automation features such as patch plans (or in some cases outside Enterprise Manager). Upon completion, compliance is evaluated for validations, and the status of the tasks and the plans is updated. Learn More about CAP ...

    Improved Configuration & Compliance Management of a large number of systems

    Improved configuration comparison: Get to the configuration comparison results faster for simple ad-hoc comparisons. When performing a 1-to-1 comparison, Enterprise Manager will perform the comparison immediately and take the user directly to the results, without having to wait for a job to be submitted and executed.

    Flattened system comparisons reduce comparison setup time and reduce complexity. In addition to the previously existing topological comparison, users now have an option to compare using a "flattened" methodology. Flattening means removing duplicate target instances within the systems and removing the hierarchy of member targets. The result is that differences are much easier to spot, particularly for specific use cases like comparing patch levels between complex systems like RAC and Fusion Apps.

    Improved Configuration Search & Advanced EMCLI Script option for Mass Automation

    Enterprise Manager 12c introduces a new framework-level capability to script and stitch together multiple tasks using EMCLI. This powerful capability can be leveraged for lifecycle operations, especially when executing a task over a large number of targets. Specific usages include retrieving a qualified list of targets using configuration search and then using the result set for automation. Another example would be executing a patching operation and then re-executing it on targets where it may have failed. This is complemented by other enhancements, such as better usability for designing reusable configuration searches.

    In EM 12c Release 3, a simplified UI makes building ad-hoc searches even easier. Searching for missing patches is a common use of configuration search. This required the use of the advanced options, which are now clearly defined and easy to use.

    Perform "Configuration Search" using the EMCLI. Users can find and execute configuration searches from the EMCLI, which can be extremely useful for building sophisticated automation scripts. For example, run the search named "Oracle Databases on Exadata", which finds all database targets running on top of Exadata, and further filter the results by refining with options like name, host, etc.:

        emcli get_targets -config_search="Databases on Exadata" -target_name="exa%"

    Use this in powerful mass automation operations using the new emcli script option. For example, to solve the use case of finding all DBs running on Exadata and housing E-Biz, and patching them.
    Create a Python script with emcli functions and invoke it in the new EMCLI script option shell. Invoke the script in the new EMCLI with script option directly:

        $ <path to emcli>/emcli @myPSU_Patch.py

    Richer compliance content: There are now over 50 Oracle-provided compliance standards, including new standards for Pluggable Database, Fusion Applications, Oracle Identity Manager, Oracle VM and Internet Directory, plus 9 Oracle-provided real-time monitoring standards containing over 900 compliance rules across 500 facets. These new real-time compliance standards cover both Exadata compute nodes and Linux servers. The result is increased Oracle software coverage and faster time to compliance monitoring on Exadata.

    Enhancements to Patch Management:

    Overhauled "OFFLINE" patching experience: The patch upload UI has been simplified to improve the offline patching experience. There is now a single-step process to get patches into the software library. Customers often maintain local repositories of patches, sometimes called software depots, where they host the patches downloaded from My Oracle Support. In the past, you had to move these patches to your desktop and then upload them to Enterprise Manager's software library through the Enterprise Manager Cloud Control user interface. You can now use the following EMCLI command to upload multiple patches directly from a remote location within the data center:

        $ emcli upload_patches -location <Path to Patch directory> -from_host <HOSTNAME>

    The upload process filters all of the new patches, automatically selects the relevant metadata files from the location, and uploads the patches to the software library.

    Other improvements: Patch rollback for single instance databases - a new option in the patch plan to roll back the patches added to the plan. Upon execution, the procedure rolls back the patch and the SQL applied to the single instance databases. Improved and faster configuration collection of Oracle Home targets enables more reliable automation of higher-level functions like provisioning, patching or Database as a Service.

    Just to recap, here is a list of database lifecycle management features (in the original post, red highlights marked items new or enhanced in Release 3):

    • Discovery, inventory tracking and reporting
    • Database provisioning, including
      o Migration to pluggable databases
      o Plugging and unplugging of pluggable databases
      o Gold image based cloning
      o Scaling of RAC nodes
    • Schema and data change management
    • End-to-end patch management in online and offline modes, including
      o Patch advisories in online (connected with My Oracle Support) and offline mode
      o Patch pre-deployment analysis, deployment and rollback (currently only for single instance databases)
      o Reporting
    • Upgrade planning and execution of the upgrade process
    • Configuration management
    • Compliance management with out-of-box content
    • Change Activity Planner for planning, designing and tracking long running processes

    For more information on Enterprise Manager's database lifecycle management capabilities, visit http://www.oracle.com/technetwork/oem/lifecycle-mgmt/index.html
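
    To make the script-option example above a little more concrete, here is a minimal sketch of what a myPSU_Patch.py passed to emcli could look like. It assumes the EMCLI scripting mode described in the post, where verbs such as get_targets are exposed as Python functions returning a response object; treat the exact function and argument names as assumptions to be verified against the EMCLI documentation for your release.

        # Hypothetical myPSU_Patch.py for the EMCLI script option (Jython-based).
        # Assumption: verbs are exposed as functions and return a response object
        # whose out() gives the verb output; verify names against your EMCLI docs.

        login(username="sysman")   # credential handling depends on your EMCLI setup

        # Reuse the saved configuration search from the post to find candidate targets.
        resp = get_targets(config_search="Databases on Exadata", target_name="exa%")
        print(resp.out())

        # Follow-up patching verbs would be invoked here for each returned target.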

  • Which is the best free ide/plugin for struts2?

    - by shahensha
    Hello friends, I have just learnt Struts 2 and now I have taken up a full-fledged project in it. I learnt the basics of Struts 2 in NetBeans with its Struts 2 plugin. But I am not at all happy with it, as it is very basic and I end up doing most of the work. It is obviously better than a plain-vanilla text editor, but still nowhere near what NetBeans provides for Spring and Hibernate. I know that because NetBeans provides native support for Spring and Hibernate, it is bound to be better for those. I don't mind changing my IDE if I get better support for Struts 2! So my questions are:

    1. Please list all the free IDEs where native support for Struts 2 is provided. And if possible please compare them.
    2. Please list all the plugins that are available for Eclipse for Struts 2 development. I have heard there are better plugins in Eclipse. Also, if there are better plugins in any other IDE (other than NetBeans or Eclipse, of course), please list them, giving links.
    3. Please give me some tips which I'll need before starting a full-blown project in Struts 2. I haven't worked on any project in Struts 2. I have just finished reading Struts 2 in Action from Manning Publications.

    Thanking you in advance!

    regards
    shahensha

  • IRM Item Codes - what are they for?

    - by martin.abrahams
    A number of colleagues have been asking about IRM item codes recently - what are they for, when are they useful, how can you control them to meet some customer requirements? This is quite a big topic, but this article provides a few answers.

    An item code is part of the metadata of every sealed document - unless you define a custom metadata model. The item code is defined when a file is sealed, and usually defaults to a timestamp/filename combination. This time/name combo tends to make item codes unique for each new document, but actually item codes are not necessarily unique, as will become clear shortly.

    In most scenarios, item codes are not relevant to the evaluation of a user's rights - the context name is the critical piece of metadata, as a user typically has a role that grants access to an entire classification of information regardless of item code. This is key to the simplicity and manageability of the Oracle IRM solution. Item codes are occasionally exposed to users in the UI, but most users probably never notice and never care. Nevertheless, here is one example of where you can see an item code - when you hover the mouse pointer over a sealed file. As you see, the item code for this freshly created file combines a timestamp with the file name.

    But what are item codes for?

    The first benefit of item codes is that they enable you to manage exceptions to the policy defined for a context. Thus, I might have access to all oracle - internal files - except for 2011_03_11 13:33:29 Board Minutes.sdocx. This simple mechanism enables Oracle IRM to provide file-by-file control where appropriate, whilst offering the scalability and manageability of classification-based control for the majority of users and content. You really don't want to be managing each file individually, but never say never.

    Item codes can also be used for the opposite effect - to include a file in a user's rights when their role would ordinarily deny access. So, you can assign a role that allows access only to specified item codes. For example, my role might say that I have access to precisely one file - the one shown above.

    So how are item codes set?

    In the vast majority of scenarios, item codes are set automatically as part of the sealing process. The sealing API uses the timestamp and filename as shown, and the user need not even realise that this has happened. This automatically creates item codes that are for all practical purposes unique - and that are also intelligible to users who might want to refer to them when viewing or assigning rights in the management UI. It is also possible for suitably authorised users and applications to set the item code manually or programmatically if required.

    Setting the item code manually using the IRM Desktop

    The manual process is a simple extension of the sealing task. An authorised user can select the Advanced... sealing option, and will see a dialog that offers the option to specify the item code. To see this option, the user's role needs the Set Item Code right - you don't want most users to give any thought at all to item codes, so by default the option is hidden.

    Setting the item code programmatically

    A more common scenario is that an application controls the item code programmatically. For example, a document management system that seals documents as part of a workflow might set the item code to match the document's unique identifier in its repository.
    This offers the option to tie IRM rights evaluation directly to the security model defined in the document management system. Again, the sealing application needs to be authorised to Set Item Code.

    The Payslip Scenario

    To give a concrete example of how item codes might be used in a real world scenario, consider a Human Resources workflow such as payslips. The goal might be to allow the HR team to have access to all payslips, but each employee to have access only to their own payslips.

    To enable this, you might have an IRM classification called Payslips. The HR team have a role in the normal way that allows access to all payslips. However, each employee would have an Item Reader role that only allows them to access files that have a particular item code - and that item code might match the employee's payroll number. So, employee number 123123123 would have access to items with that code. This shows why item codes are not necessarily unique - you can deliberately set the same code on many files for ease of administration.

    The employees might have the right to unseal or print their payslip, so the solution acts as a secure delivery mechanism that allows payslips to be distributed via corporate email without any fear that they might be accessed by IT administrators, or forwarded accidentally to anyone other than the intended recipient.

    All that remains is to ensure that as each user's payslip is sealed, it is assigned the correct item code - something that is easily managed by a simple IRM sealing application. Each month, an employee's payslip is sealed with the same item code, so you do not need to keep amending the list of items that the user has access to - they have access to all documents that carry their employee code.
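
    To illustrate the evaluation logic described above with a toy model (this is purely an illustration of the rules, not the Oracle IRM API), the payslip scenario boils down to two kinds of role: a context-wide role with optional item-code exclusions, and an Item Reader role limited to an explicit set of item codes.

        # Toy model of the access rules described above - not the Oracle IRM API.
        from dataclasses import dataclass, field
        from typing import Optional, Set

        @dataclass
        class Role:
            context: str                            # classification, e.g. "Payslips"
            item_codes: Optional[Set[str]] = None   # None means the whole classification
            excluded_item_codes: Set[str] = field(default_factory=set)

            def grants(self, context: str, item_code: str) -> bool:
                if context != self.context:
                    return False
                if item_code in self.excluded_item_codes:
                    return False
                return self.item_codes is None or item_code in self.item_codes

        hr_role = Role(context="Payslips")                                   # all payslips
        employee_role = Role(context="Payslips", item_codes={"123123123"})   # own payroll code only

        print(hr_role.grants("Payslips", "123123123"))        # True
        print(employee_role.grants("Payslips", "123123123"))  # True
        print(employee_role.grants("Payslips", "987654321"))  # False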

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done.

    'Pure' SOA involves the modeling of an organisation's processes, the so called 'Top Down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'Succeeding with SOA'. Organisations who focus too much on bottom up development, or who waste too much time and money on top down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'Succeeding with SOA' then the reasons for the project's existence probably need to be questioned.

    There are a number of things we can be sure of:

    · An organisation will have a number of disparate IT systems
    · Some of these systems will have redundant data and functionality
    · Integration will give considerable ROI
    · Integration will already be under way
    · Services will already exist in the organisation
    · These services will be inconsistent in their implementation and in their governance

    So there are three goals here:

    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services

    Goals 2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring goal 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. Goals 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with the far-reaching consequences that this will have on all our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never - that's the point of integration); we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.

    Accepting that an object can have more than one model over time, and perhaps more than one model at any given time, will help us realise the limitations of the top down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer?

    If company A has chosen to model its business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and better able to cope with change, through alignment between the business and the IT department. If company B's IT department recognises the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach?

    With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution also means that the models are guaranteed to change in the not-so-near future. To 'succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.

    Could the problem be that there are two different problem domains? And that the whole concept of SOA, as it is being described by clever salespeople today, creates an example of the oft-dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas and bundled the solutions together in order to create a magic bullet, and then convinced ourselves that the bullet actually exists?

    Company A wants a closer relationship between the business and its IT department, in order to become a more flexible organisation. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals. If company A starts building services from incomplete models, without a game plan, it will end up in the same situation as company B, with wild services. If company B focuses on modeling, it could easily end up with the same problems as company A. Now we have two companies that a short while ago had one problem each, and now have two problems each. This has happened because of a focus on 'succeeding with SOA' rather than on solving the problem at hand. This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organisation, but only if the organisation realises this and can develop such a strategy. This strategy cannot be bought in a box.

    Anyone who has worked with SOA for a while will be used to analysing the solutions to a problem and judging each solution's level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) from the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving its intended goal, without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then, at a later date, map its service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling work.

    How do we do this? The concept of service virtualization has been around for a while, and is readily realisable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services, with their beautifully modeled interfaces, as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling and so on.

    This solves a big problem. The pressure to deliver services quickly is always there in projects; it is very important to show value quickly when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery, with quality increasing over time, and lets modeling and service development occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organisations, and it requires a certain maturity to realise and drive forward. It is also entirely possible for a company to benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.

    Of course there are disadvantages to this approach, the biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redeveloping the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with pre-developed services. The alternative is to wait until the model is finished and then build the services according to the model. However, if that approach worked, we wouldn't be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) immediately start to drift away from each other, so coupling them tightly together, forever bound to the model that applied only at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.
    The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations.

    In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force on both something that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?
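    To make the split between the modeled 'SOA service' and the 'adapter' implementation a little more concrete, here is a minimal C# sketch. It is an illustration only, not code from the article or from the Managed Services Engine: the contract names, the operation and the mapping logic are all hypothetical, and in a real deployment the routing and transformation would live in the virtualization layer rather than in hand-written code.

    using System.ServiceModel;

    // Hypothetical 'virtual' contract, modeled from the business side.
    [ServiceContract]
    public interface ICustomerProcess
    {
        [OperationContract]
        string RegisterCustomer(string customerName);
    }

    // Hypothetical 'adapter' contract exposing an existing LOB system.
    [ServiceContract]
    public interface ICrmLegacyService
    {
        [OperationContract]
        int CreateAccount(string accountHolder);
    }

    // The virtual service delegates to the adapter, translating between
    // business-level messages and the technical implementation.
    public class CustomerProcessService : ICustomerProcess
    {
        private readonly ICrmLegacyService _crm;

        public CustomerProcessService(ICrmLegacyService crm)
        {
            _crm = crm;
        }

        public string RegisterCustomer(string customerName)
        {
            int accountId = _crm.CreateAccount(customerName);
            return "CUST-" + accountId; // map the technical result back to a business concept
        }
    }

    The point of the split is that the business-facing contract can stay stable while the adapter absorbs changes in the underlying system.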

    Read the article

  • 12.04 grub unable to boot on /sde, upgrade-grub and boot-repair failed, please help

    - by VGR
    My problem is that I have 4 disks in a RAID array, listed as sda, sdb ... sdd, and GRUB 2 refuses to boot on /sde (the 5th disk, standalone and containing a clean install of 12.04 64-bit). I have tried all the solutions I could find, but all of them fail (live CD/USB with grub-setup, boot-repair, and also set prefix= etc. at the "grub rescue" prompt). I also tried deactivating the RAID array in the BIOS, but I'd rather not destroy it, and I didn't find a way to make the standalone disk appear as /sda1 (which would satisfy GRUB). In the BIOS, the would-be /sda is the only bootable hard disk; it ends up as /sde and GRUB complains. I have had boot-repair post its output to a pastebin. I always end up in grub rescue and I'm stuck. I need Ubuntu to boot so that I can add the device array handler for my disks. I can't switch the disks and I can't disconnect the SATA RAID controller. I need either (a) a workaround so that GRUB starts on /sde, or (b) a way to change the order in which Ubuntu sees the disks at boot time, so that I could then provide GRUB with a /sda1. Thanks a lot. This is not the same problem as booting Ubuntu from RAID: my RAID array serves only as a data repository, and Windows had no problem with this configuration.

    Read the article

  • Hosting WCF over internet

    - by user1876804
    I am pretty new to exposing WCF services hosted on IIS over the internet. I will be deploying a WCF service on IIS (6 or 7) and would like to expose this service over the internet. It will be hosted in a corporate network behind a firewall, and I want the service to be accessible from the internet (it should be able to pass through the firewall). I did some research on this, and these are the pointers I have so far:
    1. I could use wsHttpBinding or netTcpBinding (the client is intended to be a .NET client). Which of the bindings is preferable?
    2. To get past the corporate firewall I came across the idea of a DMZ server. What is its purpose, and do I really need to use one?
    3. I will be passing some files between the client and server, and the client needs to know the progress of the processing on the server and the end result.
    I know this is a very broad question, but could anyone give me pointers on where to start and what approach to take for this problem?
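    For the binding question, wsHttpBinding is usually the easier of the two to expose through a corporate firewall, since it runs over ordinary HTTP/HTTPS, while netTcpBinding needs dedicated TCP ports opened. The sketch below is only an illustration, not the poster's actual service: the IFileTransferService contract, its operation and the address are hypothetical, and it is shown self-hosted for brevity, whereas on IIS the same endpoint and binding would normally be declared in web.config.

    using System;
    using System.ServiceModel;

    // Hypothetical contract; the real service's operations are not known.
    [ServiceContract]
    public interface IFileTransferService
    {
        [OperationContract]
        string GetJobStatus(string jobId);
    }

    public class FileTransferService : IFileTransferService
    {
        public string GetJobStatus(string jobId)
        {
            return "Job " + jobId + " is still processing"; // placeholder logic
        }
    }

    class Program
    {
        static void Main()
        {
            var baseAddress = new Uri("http://localhost:8080/FileTransferService");
            using (var host = new ServiceHost(typeof(FileTransferService), baseAddress))
            {
                // wsHttpBinding travels over HTTP, so the firewall only needs to
                // allow the usual web ports (80/443) to reach this endpoint.
                host.AddServiceEndpoint(typeof(IFileTransferService), new WSHttpBinding(), "");
                host.Open();

                Console.WriteLine("Service listening at " + baseAddress);
                Console.ReadLine();
            }
        }
    }

    For the progress requirement in point 3, polling a status operation like the one sketched above is the simplest option; duplex callbacks are not supported over wsHttpBinding, and duplex HTTP tends to be awkward through firewalls and NAT.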

    Read the article

  • Is there a visual web application builder or rapid webapp prototyping framework?

    - by Jesper Mortensen
    Question: Is there such a thing as a self-hosted framework or CMS especially tailored towards the creation of interactive web applications without -- or with an absolute minimum of -- programming? (Substantially less programming than, say, a simple Rails app or a plugin for WordPress, Joomla etc. would require.) As for desired features I'd settle for whatever is available, but some ideas could be:
    - A user authentication and permissions system.
    - A GUI-driven input form builder.
    - A GUI-driven template / visual site design builder.
    - A simple scripting language (think AppleScript-like simplicity).
    - A highly modular architecture, with high-level business objects (users, form data, etc.) exposed for easy re-use.
    If something like the above doesn't exist, then what comes near it? Need: This is for self-hosted rapid prototyping of web applications, and limited user testing of web app user interface designs in a closed user test. Notes: I know about Ruby on Rails, Django, Pyramid etc.; I'm looking for something much faster to work in for making prototypes. I know about CMSs in general, but find that most of them are tailored towards displaying information to end users. If there is an exceptionally easy-to-master CMS with easy scripting (let's say much more so than, for example, WordPress) then I'd be interested.

    Read the article

  • boot up fails. drops to initramfs prompt 12.04

    - by dpm
    I am running an HP Pavilion dv6000, dual-booting Windows 7 and Ubuntu 12.04 (well, up until today). After a reboot, the boot process drops to the BusyBox shell and I end up at the prompt:
    BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
    Enter 'help' for a list of built-in commands.
    (initramfs)
    I've been researching others who have had this same problem, but haven't been able to get any of their solutions to work for me. I tried the method described here: http://www.proposedsolution.com/solutions/ubuntu-booting-to-initramfs-prompt/ and after the final command
    mount -t ntfs-3g /dev/sda1 /root -o force
    it does nothing and gives me another (initramfs) prompt. I can boot to a live CD (USB) and get to a terminal, but it doesn't seem to do much good: I can see /dev/sda1 in the output of ls, but it isn't recognised when I try to cd to it. My command-line skills are very green, and I am just starting to grasp them. One more question: using the command fdisk -l, how can I tell which partition (sda1/sda2) is my Windows partition and which one is Ubuntu? Any help? I'm a bit in over my head right now...

    Read the article

  • Is there a portal dedicated to HTML5 games?

    - by Bane
    Just to get something straight: by "portal", I mean a website that frequently publishes a certain type of game, has a blog, some articles, maybe some tutorials and so on. Not all of these things are required (except the game publishing part, of course); for example, I consider Miniclip to be a Flash game portal. The reason for defining this term is that I'm not sure other people use it in this context. I recently (less than a year ago) got into HTML5 game development; nothing serious, just my own small projects that I didn't really show to a lot of people and that certainly didn't end up anywhere on the web (although I am planning to make a website for my next game). I am interested in whether there is an online portal where indie devs (or non-indie ones, it doesn't really matter that much) can publish their own games, sort of "by devs, for devs", and also a place where you can find some simple tutorials on basic HTML5 game development and so on... I doubt something like this exists, for several reasons:
    - You can't really commercialize an HTML5 game without a strong server side and microtransactions.
    - The code can easily be copied.
    - HTML5 is simply new, and things need time to get their own portals.
    If a thing like this does not exist, I think I might get into making one some day...

    Read the article

  • Mocking property sets

    - by mehfuzh
    In this post, I will show how you can mock property sets with expected values, or even actions, using JustMock. To begin, we have a sample interface:

    public interface IFoo
    {
        int Value { get; set; }
    }

    Now we can create a mock that will throw on any call other than the one expected; generally this is a strict mock, and we can do it like this:

    bool expected = false;
    var foo = Mock.Create<IFoo>(BehaviorMode.Strict);
    Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected = true);

    foo.Value = 1;

    Assert.True(expected);

    Here, the method for setting up our expectation on a property set is Mock.ArrangeSet, where we can state the expectation directly or even use matchers, like this:

    var foo = Mock.Create<IFoo>(BehaviorMode.Strict);

    Mock.ArrangeSet(() => foo.Value = Arg.Matches<int>(x => x > 3));

    foo.Value = 4;
    foo.Value = 5;

    Assert.Throws<MockException>(() => foo.Value = 3);

    In this example, any set of Value that does not satisfy the matcher expression will throw a MockException, because this is a strict mock. But what about loose mocks, where we also have to assert the set? Let's take an interface with an indexed property. Indexers are treated in the same way as properties; basic indexers let you access your class as if it were an array.

    public interface IFooIndexed
    {
        string this[int key] { get; set; }
    }

    We want to set up a value for a particular index, then pass that mock to some implementer where it will actually be called. Once done, we want to assert that it has been invoked properly.

    var foo = Mock.Create<IFooIndexed>();

    Mock.ArrangeSet(() => foo[0] = "ping");

    foo[0] = "ping";

    Mock.AssertSet(() => foo[0] = "ping");

    In the example above, both values are user-defined. It might happen that we want to make it more dynamic; in the next example, I set it up for a set with any value, and finally check whether it has been set with the one I am looking for.

    var foo = Mock.Create<IFooIndexed>();

    Mock.ArrangeSet(() => foo[0] = Arg.Any<string>());

    foo[0] = "ping";

    Mock.AssertSet(() => foo[0] = Arg.Matches<string>(x => string.Compare("ping", x) == 0));

    That is more or less how property sets are mocked, but we can go further and have a particular set throw an exception, or carry out a task of our own, like this:

    Mock.ArrangeSet(() => foo.Value = 10).Throws(new ArgumentException());

    Or:

    bool expected = false;
    var foo = Mock.Create<IFoo>(BehaviorMode.Strict);
    Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected = true);

    foo.Value = 1;

    Assert.True(expected);

    Or call the original setter; in this example it will throw a NotImplementedException:

    var foo = Mock.Create<FooAbstract>(BehaviorMode.Strict);
    Mock.ArrangeSet(() => { foo.Value = 1; }).CallOriginal();
    Assert.Throws<NotImplementedException>(() => { foo.Value = 1; });

    Finally, try all of these, find issues, post them to the forum, and make it work for you :-). Hope that helps,
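    As a compact end-to-end sketch built from the same calls shown above (Mock.Create, Mock.ArrangeSet, Arg.Matches, Mock.AssertSet), here is what arranging and asserting a property set through a consumer class might look like. The Thermostat class, the value range and the test method name are hypothetical additions for illustration only; they are not part of the original post.

    using Telerik.JustMock;

    // IFoo is the interface from the post above. Thermostat is a hypothetical
    // consumer, added only to show the mock being exercised through ordinary
    // calling code rather than by setting the property directly.
    public class Thermostat
    {
        private readonly IFoo _sensor;

        public Thermostat(IFoo sensor)
        {
            _sensor = sensor;
        }

        public void SetTarget(int degrees)
        {
            _sensor.Value = degrees;
        }
    }

    public class ThermostatTests
    {
        public void SetTarget_writes_an_in_range_value()
        {
            var sensor = Mock.Create<IFoo>();
            Mock.ArrangeSet(() => sensor.Value = Arg.Matches<int>(x => x >= 18 && x <= 25));

            new Thermostat(sensor).SetTarget(21);

            // AssertSet fails if no set matching the expression was recorded.
            Mock.AssertSet(() => sensor.Value = Arg.Matches<int>(x => x >= 18 && x <= 25));
        }
    }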

    Read the article

  • How to move an UIView along a curved CGPath according to user dragging the view

    - by Felipe Cypriano
    I'm trying to build an interface where the user can move his finger around the screen and a list of images moves along a path. The idea is that the images' centers never leave the path. Most of what I found was about how to animate along a CGPath, not about actually using the path as the track for the user's movement. I need the objects to stay on the path even if the user isn't moving his finger over the path. For example (image below), if the object is at the beginning of the path and the user touches anywhere on the screen and moves his finger from left to right, I need the object to move from left to right but follow the path, that is, go up as it moves to the right towards the path's end. This is the path I've drawn; imagine that I'll have a view (any image) that the user can touch and drag along the path. There's no need to move the finger exactly over the path: if the user moves from left to right, the image should move from left to right, but go up if needed, following the path. This is how I'm creating the path:

    CGPoint endPointUp = CGPointMake(315, 124);
    CGPoint endPointDown = CGPointMake(0, 403);
    CGPoint controlPoint1 = CGPointMake(133, 187);
    CGPoint controlPoint2 = CGPointMake(174, 318);

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, endPointUp.x, endPointUp.y);
    CGPathAddCurveToPoint(path, NULL, controlPoint1.x, controlPoint1.y, controlPoint2.x, controlPoint2.y, endPointDown.x, endPointDown.y);

    Any idea how I can achieve this?

    Read the article
