Search Results

Search found 669 results on 27 pages for 'mass'.


  • Run command with space characters in bash script

    - by ??iu
    I have a file that contains a list of files: 02 of Clubs.eps 02 of Diamonds.eps 02 of Hearts.eps 02 of Spades.eps ... I am attempting to mass-convert these to png format in several sizes. The script I am using to do this is: while read -r line do for i in 80 35 200 do convert $(sed 's/ /\\ /g' <<< Cards/${line}) -size ${i}x${i} ../img/card/$(basename $(tr ' ' '_' <<< ${line} | tr '[A-Z]' '[a-z]') .eps)_${i}.png; done done < card_list.txt However, this doesn't work, apparently trying to split on each word, resulting in the following error output: convert: unable to open image `Cards/02\': No such file or directory @ error/blob.c/OpenBlob/2514. convert: no decode delegate for this image format `Cards/02\' @ error/constitute.c/ReadImage/532. convert: unable to open image `of\': No such file or directory @ error/blob.c/OpenBlob/2514. convert: no decode delegate for this image format `of\' @ error/constitute.c/ReadImage/532. convert: unable to open image `Clubs.eps': No such file or directory @ error/blob.c/OpenBlob/2514. If I change the convert to an echo the result looks right and if I copy a line and run it myself in the shell it works fine: convert Cards/02\ of\ Clubs.eps -size 80x80 ../img/card/02_of_clubs_80.png convert Cards/02\ of\ Clubs.eps -size 35x35 ../img/card/02_of_clubs_35.png convert Cards/02\ of\ Clubs.eps -size 200x200 ../img/card/02_of_clubs_200.png convert Cards/02\ of\ Diamonds.eps -size 80x80 ../img/card/02_of_diamonds_80.png convert Cards/02\ of\ Diamonds.eps -size 35x35 ../img/card/02_of_diamonds_35.png convert Cards/02\ of\ Diamonds.eps -size 200x200 ../img/card/02_of_diamonds_200.png convert Cards/02\ of\ Hearts.eps -size 80x80 ../img/card/02_of_hearts_80.png convert Cards/02\ of\ Hearts.eps -size 35x35 ../img/card/02_of_hearts_35.png convert Cards/02\ of\ Hearts.eps -size 200x200 ../img/card/02_of_hearts_200.png convert Cards/02\ of\ Spades.eps -size 80x80 ../img/card/02_of_spades_80.png UPDATE: Just adding quotes (see below) has the same result as the above, where I had been using sed to add backslashes convert '"'Cards/${line}'"' -size ${i}x${i} ../img/card/$(basename $(tr ' ' '_' <<< ${line} | tr '[A-Z]' '[a-z]') .eps)_${i}.png; I've tried both double and single quotes
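
    A plausible fix, since the backslashes that sed inserts are never re-parsed by the shell: double-quote the expansions themselves rather than adding escape or quote characters to the string. Below is a minimal hedged sketch that keeps the paths and convert flags from the question; adjust as needed.

        #!/usr/bin/env bash
        # Hedged sketch: quoting "Cards/${line}" keeps the spaces inside one argument,
        # so no sed escaping is required. Paths and flags are copied from the question.
        while IFS= read -r line; do
            for i in 80 35 200; do
                name=$(basename "$line" .eps | tr ' ' '_' | tr '[:upper:]' '[:lower:]')
                convert "Cards/${line}" -size "${i}x${i}" "../img/card/${name}_${i}.png"
            done
        done < card_list.txt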

    Read the article

  • How to make a Stored Procedure that takes in XML and uses that xml as an Update + call this stored p

    - by chobo2
    Hi, I am using MS SQL Server 2005 and I want to do a mass update. I am thinking that I might be able to do it by sending an XML document to a stored procedure. I have seen many examples of how to do it for an insert: CREATE PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData XML) AS INSERT INTO dbo.UserTable(CreateDate) SELECT @UpdatedProdData.value('(/ArrayOfUserTable/UserTable/CreateDate)[1]', 'DATETIME') But I am not sure what it would look like for an update. I am also unsure how to pass the XML in through ADO.NET. Do I pass it as a string through a parameter, or what? I know SqlDataAdapter has a batch update method, but I am using LINQ to SQL and would rather keep using it. If this works I would be able to grab all the records with LINQ to SQL and have them as objects, manipulate the objects, and use XML serialization. Finally I could just use plain ADO.NET to send the XML to the server. This might be slower than the SqlDataAdapter, but I am willing to take that hit if I can keep using objects.
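
    For the update itself, a hedged sketch: shred the incoming XML with nodes() and join it to the target table on a key column. The table, element, and key names below follow the insert example above and are assumptions about the real schema.

        -- Hedged sketch, not the exact schema: update rows by joining the shredded
        -- XML to the table on a key column (UserId is assumed here).
        CREATE PROCEDURE dbo.spTEST_UpdateXMLTEST_TEST (@UpdatedProdData XML)
        AS
        BEGIN
            UPDATE u
            SET    u.CreateDate = x.r.value('(CreateDate)[1]', 'DATETIME')
            FROM   dbo.UserTable AS u
            JOIN   @UpdatedProdData.nodes('/ArrayOfUserTable/UserTable') AS x(r)
                   ON u.UserId = x.r.value('(UserId)[1]', 'INT');
        END

    From ADO.NET the document can be passed as a single SqlParameter (SqlDbType.Xml works, and a plain string parameter is implicitly converted), with the serialized XML as its value.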

    Read the article

  • RoR routing problem. Calling custom action, but getting redirected to show action

    - by conorgil
    I am working on a project in ruby on rails and I am having a very difficult time with a basic problem. I am trying to call a custom action in one of my controllers, but the request is somehow getting redirected to the default 'show' action and I cannot figure out why. link in edit.html.erb: <%= link_to 'Mass Text Entry', :action=>"create_or_add_food_item_from_text" %> Error from development.log: ActiveRecord::RecordNotFound (Couldn't find Menu with ID=create_or_add_food_item_from_text): app/controllers/menus_controller.rb:20:in `show' routes.rb file: ActionController::Routing::Routes.draw do |map| map.resources :nutrition_objects map.resources :preference_objects map.resources :institutions map.resources :locations map.resources :menus map.resources :food_items map.resources :napkins map.resources :users map.resource :session, :controller => 'session' map.root :controller=>'pages', :action=>'index' map.about '/about', :controller=>'pages', :action=>'about' map.contact '/contact', :controller=>'pages', :action=>'contact' map.home '/home', :controller=>'pages', :action=>'index' map.user_home '/user/home', :controller=>'rater', :action=>'index' map.user_napkins '/user/napkins', :controller=>'rater', :action=>'view_napkins' map.user_preferences '/user/preferences',:controller=>'rater', :action=>'preferences' map.blog '/blog', :controller=>'pages', :action=>'blog' map.signup '/signup', :controller=>'users', :action=>'new' map.login '/login', :controller=>'session', :action=>'new' map.logout '/logout', :controller=>'session', :action=>'destroy' # Install the default routes as the lowest priority. map.connect ':controller/:action' map.connect ':controller/:action/:id' map.connect ':controller/:action/:id.:format' end Menus_controller.rb: class MenusController < ApplicationController ... def create_or_add_food_item_from_text end ... end create_or_add_food_item_from_text.html.erb simply has a div to show a form with a text box in it. I have the rest of my app working fine, but this is stumping me. Any help is appreciated.
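
    The request is being captured by the default resource routes: /menus/create_or_add_food_item_from_text matches map.resources :menus as the show action, with that string as the :id. A hedged sketch of one way to fix it, using the Rails 2-style routing already in the file: declare the custom action as a collection route and link to it with the generated helper.

        # Hedged sketch: add the custom action to the menus resource so it is matched
        # before the /menus/:id (show) pattern.
        map.resources :menus,
                      :collection => { :create_or_add_food_item_from_text => :get }

        # In edit.html.erb, the link can then use the named helper:
        # <%= link_to 'Mass Text Entry', create_or_add_food_item_from_text_menus_path %>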

    Read the article

  • Image is not displaying in email template on 2nd time forward

    - by Don
    Good day, friends. I have a mass mailing program with simple mail templates (HTML and a few images), and I have a problem with image display: my clients are not getting images in the mail. Sometimes they get a mail with all the images, but if they forward the same email to someone else, the images are missing from the forwarded mail. I really don't know what is happening; in most cases the second-time forwarded mail does not show the images properly. For example, say I send a mail to Client A. Client A gets the mail with images, but if Client A forwards the same message to Person B, then Person B does not get the images in the forwarded email. I'm using the following approach to embed an image in the mail template: StringBuilder sb = new StringBuilder(" <some html content> <img src=\"cid:main.png\" alt=\"\" border=\"0\" usemap=\"#Map\"> </html content ends here>"); Attachment imgMain = new Attachment(Server.MapPath("main.png")); imgMain.ContentId = "main.png"; MailMessageObject.Attachments.Add(imgMain); Instead of an attachment, I also tried referencing the image path on the server directly, something like this: StringBuilder sb = new StringBuilder(" <some html content> <img src=\"www.mydomain.com/images/main.png\" alt=\"\" border=\"0\" usemap=\"#Map\"> </html content ends here>"); But the result is the same. Please help me resolve this problem.
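
    One hedged alternative is to attach the image as a LinkedResource on an HTML AlternateView rather than as a plain Attachment; most clients then treat it as a true inline image, which tends to survive forwarding better (though the forwarding client ultimately decides what it keeps). The sketch below reuses the MailMessageObject, HTML, and path from the question.

        // Hedged sketch: inline image via AlternateView + LinkedResource.
        // MailMessageObject, the HTML string, and Server.MapPath("main.png")
        // come from the question above.
        using System.Net.Mail;
        using System.Net.Mime;

        string html = "<html><body><img src=\"cid:main.png\" alt=\"\" border=\"0\" usemap=\"#Map\"></body></html>";
        AlternateView htmlView = AlternateView.CreateAlternateViewFromString(html, null, MediaTypeNames.Text.Html);

        LinkedResource imgMain = new LinkedResource(Server.MapPath("main.png"), "image/png");
        imgMain.ContentId = "main.png";
        imgMain.TransferEncoding = TransferEncoding.Base64;

        htmlView.LinkedResources.Add(imgMain);
        MailMessageObject.AlternateViews.Add(htmlView);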

    Read the article

  • Excel - Variable number of leading zeros in variable length numbers?

    - by daltec
    The format of our member numbers has changed several times over the years, such that 00008, 9538, 746, 0746, 00746, 100125, and various other permutations are valid, unique and need to be retained. Exporting from our database into the custom Excel template needed for a mass update strips the leading zeros, such that 00746 and 0746 are all truncated to 746. Inserting the apostrophe trick, or formatting as text, does not work in our case, since the data seems to be already altered by the time we open it in Excel. Formatting as zip won't work since we have valid numbers less than five digits in length that cannot have zeros added to them. And I am not having any luck with "custom" formatting as that seems to require either adding the same number of leading zeros to a number, or adding enough zeros to every number to make them all the same length. Any clues? I wish there was some way to set Excel to just take what it's given and leave it alone, but that does not seem to be the case! I would appreciate any suggestions or advice. Thank you all very much in advance!

    Read the article

  • Minimizing server load in case of many XMLHttpRequest calls

    - by user1888975
    I am making a website where users can like articles. Whenever the like button is clicked I send an XMLHttpRequest to the server to run a file called like_clicked.php, along with GET data for the article id and user id. This file takes the article id and user id, updates the SQL database, and also adds a node to an XML file related to the user. This is the first time I am building something for mass usage, and I am worried about the server load when many users call the like_clicked.php file at once. Is this method OK? I am also thinking of an alternative in case the above method fails: making many like_clicked files (namely like_clicked1.php, like_clicked2.php, ...) to reduce the load on a single file. Is there a way to detect that it is better to call the next like_clicked file? We would need to detect how many calls per unit time are coming in for a particular file. How do we handle this? Thanks in advance.
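
    For what it's worth, the number of PHP files is unlikely to be the bottleneck: the web server and database handle concurrent requests to a single script, and each request is served by its own process or thread. A hedged sketch of a single endpoint follows (connection details, table, and column names are placeholders); the per-user XML write is the part most worth queuing or batching if load becomes a problem.

        <?php
        // Hedged sketch: one endpoint for all "like" clicks, using a prepared statement.
        // DSN, credentials, and the article_likes table are hypothetical placeholders.
        $pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass',
                       array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

        $articleId = (int) $_GET['article_id'];
        $userId    = (int) $_GET['user_id'];

        $stmt = $pdo->prepare(
            'INSERT IGNORE INTO article_likes (article_id, user_id) VALUES (:a, :u)'
        );
        $stmt->execute(array(':a' => $articleId, ':u' => $userId));

        echo 'ok';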

    Read the article

  • How can I add two models in one form, where one model is a has_many :through?

    - by Angela
    How do I model having multiple Addresses for a Company and assign a single Address to a Contact? Contacts belong_to a Company. A Company has_many Contacts. A Company also has_many Addresses. And each Contact has_one Address. I want to be able, whenever I create a New Contact, to access all the addresses in all Contacts that belong to the Company. Here are my models: class Company < ActiveRecord::Base attr_accessible :name, :phone, :addresses has_many :contacts has_many :addresses, :through => :contacts end class Contact < ActiveRecord::Base attr_accessible :first_name, :last_name, :title, :phone, :fax, :email, :company, :date_entered, :campaign_id, :company_name, :address belongs_to :company has_one :address accepts_nested_attributes_for :address end class Address < ActiveRecord::Base attr_accessible :street1 has_many :contacts end How do I create the view in the _form for Contacts so that I can 1) add an Address when creating a Contact, and 2) display the existing Address options? Here is how I am doing step 1, which is just to add a new address for a new contact: <% f.fields_for :addresses do |builder| %> <p> <%= builder.label :street1, "Street 1" %> </br> <%= builder.text_field :street1 %> <p> Right now, what I have doesn't work. The console says I cannot mass-assign addresses when I hit "submit" on this New Contact form.
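
    A hedged sketch of the likely culprits: with has_one :address the form builder should use the singular association (fields_for :address), and the nested parameters arrive as :address_attributes, which has to be whitelisted in attr_accessible; listing :address alone is what triggers the mass-assignment error.

        # Hedged sketch, not a drop-in for the whole design:
        class Contact < ActiveRecord::Base
          belongs_to :company
          has_one    :address
          accepts_nested_attributes_for :address
          attr_accessible :first_name, :last_name, :title, :phone, :fax, :email,
                          :company, :date_entered, :campaign_id, :company_name,
                          :address_attributes
        end

        # In the contact _form:
        # <% f.fields_for :address do |builder| %>
        #   <%= builder.label :street1, "Street 1" %><br />
        #   <%= builder.text_field :street1 %>
        # <% end %>

    Offering the company's existing addresses as choices is then a separate select (for example over @contact.company.addresses) rather than part of the nested fields.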

    Read the article

  • Ouch, how to escape this in sed? Cleaning up iframe malware

    - by user1769783
    I'm helping someone clean up a malware infection on a site and I'm having a difficult time correctly matching some strings in sed so I can create a script to mass search and replace / remove it. The strings are: <script>document.write('<style>.vb_style_forum {filter: alpha(opacity=0);opacity: 0.0;width: 200px;height: 150px;}</style><div class="vb_style_forum"><iframe height="150" width="200" src="http://www.iws-leipzig.de/contacts.php"></iframe></div>');</script> <script>document.write('<style>.vb_style_forum {filter: alpha(opacity=0);opacity: 0.0;width: 200px;height: 150px;}</style><div class="vb_style_forum"><iframe height="150" width="200" src="http://vidintex.com/includes/class.pop.php"></iframe></div>');</script> <script>document.write('<style>.vb_style_forum {filter: alpha(opacity=0);opacity: 0.0;width: 200px;height: 150px;}</style><div class="vb_style_forum"><iframe height="150" width="200" src="http://www.iws-leipzig.de/contacts.php"></iframe></div>');</script> I can't seem to figure out how to escape the various characters in those lines... If I simply delete every line that matches http://vidintex.com/includes/class.pop.php, it also deletes the closing tag in the .html files. Any help would be greatly appreciated!
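
    A hedged approach: use a delimiter other than / for the s command so the URLs need no escaping, and strip just the injected block rather than deleting whole lines, since the malware sits on lines that also carry legitimate markup. The pattern below is greedy, so the -i.bak backups (GNU sed) are worth reviewing before deleting them.

        # Hedged sketch: '#' as the s/// delimiter; anchor on the injected
        # vb_style_forum class so only the malware block is removed.
        find . -type f -name '*.html' -exec \
          sed -i.bak "s#<script>document\.write('<style>\.vb_style_forum.*');</script>##g" {} +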

    Read the article

  • Sys. engineer has decided to dynamically transform all XSLs into DLLs on website build process. DLL

    - by John Sullivan
    Hello, OS: Win XP. Here is my situation. I have a browser based application. It is "wrapped" in a Visual Basic application. Our "Systems Engineer Senior" has decided to spawn DLL files from all of our XSL pages (many of which have duplicate names) upon building a new instance of the website and have the active server pages (ASPX) use the DLL instead. This has created a "known issue" in which ~200 DLL naming conflicts occur and, thus, half of our application is broken. I think a solution to this problem is that, thankfully, we're generating the names of the DLLs and linking them up with our application dynamically. Therefore we can do something kludgy like generate a hash and append it to the end of the DLL file name when we build our website, then always reference the DLL that had some kind of random string / hash appended to its name. Aside from outright renaming the DLLs, is there another way to have multiple DLLs with the same name register for one application? I think the answer is "No, only between different applications using a special technique." Please confirm. Another question I have on my mind is whether this whole idea is a good practice -- converting our XSL pages (which we use in mass -- every time a response from our web app occurs) into DLL functions that call a "function" to do what the XSL page did via an active server page (ASPX), when we were before just sending an XML response to an XSL page via aspx.
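
    If renaming (or hashing) the generated files is off the table, one avenue worth knowing about, as a general .NET option rather than something the ASPX build does for you, is Assembly.LoadFile: it loads by exact path and is documented precisely for the case of multiple assemblies that share an identity but live in different folders. Separate AppDomains are the heavier alternative. A minimal hedged sketch with hypothetical paths:

        // Hedged sketch: LoadFile keeps same-named assemblies from different
        // directories distinct, unlike Load/LoadFrom binding. Paths are hypothetical.
        using System;
        using System.Reflection;

        class XslDllProbe
        {
            static void Main()
            {
                Assembly invoices = Assembly.LoadFile(@"C:\site\bin\invoices\Transform.dll");
                Assembly orders   = Assembly.LoadFile(@"C:\site\bin\orders\Transform.dll");

                Console.WriteLine(invoices.Location);
                Console.WriteLine(orders.Location);
            }
        }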

    Read the article

  • Mysql dropping inserts with triggers

    - by user2891127
    Using MySQL 5.5. I have two tables. One has a whitelist of hashes. When I insert a new row into the other table, I want to first compare the hash in the insert statement to the whitelist. If it's in the whitelist, I don't want to do the insert (less data to plow through later). The inserts are generated from another program and are text files with SQL statements. I've been playing with triggers, and almost have it working: BEGIN IF (select count(md5hash) from whitelist where md5hash=new.md5hash) > 0 THEN SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Already Whitelisted'; END IF; END But there's a problem: the SIGNAL throwing up the error stops the import. I want to skip that line, not stop the whole import. Some searching didn't find any way to silently skip the insert. My next idea was to create a duplicate table definition and redirect the insert to that duplicate table, but OLD and NEW don't seem to apply to table names. Other than adding an ignore column to my table and then doing a mass drop based on that column after the import, is there any way to achieve my goal?
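
    A hedged alternative, since a MySQL 5.5 trigger cannot silently discard a row: point the generated INSERT statements at a staging table (assuming you can change the table name they target, or rename tables around the import) and copy only non-whitelisted hashes across afterwards. Names other than whitelist and md5hash are placeholders.

        -- Hedged sketch: stage the import, then keep only rows whose hash is
        -- not in the whitelist. `samples` stands in for the real target table.
        CREATE TABLE staging LIKE samples;

        -- Run the generated .sql files against `staging`, then:
        INSERT INTO samples
        SELECT s.*
        FROM   staging AS s
        WHERE  NOT EXISTS (SELECT 1 FROM whitelist AS w WHERE w.md5hash = s.md5hash);

        TRUNCATE TABLE staging;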

    Read the article

  • JQUERY - how to get updated value after ajax removes data from within it?

    - by Brian
    I have an element with thumbnails. I allow users to sort their display order (which fires off an update to the DB via ajax). I also allow them to delete images (which, after deletion, fires off a request to update the display order for all remaining images). My problem is with binding or live I think, but I don't know where to apply it. The array fired off upon delete contains ALL the ids for the images that were there on page load. The issue is that after they delete an image the array STILL contains the original ids (including the one that was deleted), so it is obviously not refreshing the value of the element after ajax has removed things from inside it. I need to tell it to go get the refreshed contents... From what I have been reading, this is normal, but I don't understand how to tie it into my routine. I need to trigger the mass re-ordering after any deletion. Any ideas, gurus? $('a.delimg').click(function(){ var parent = $(this).parent().parent(); var id = $(this).attr('id'); $.ajax({ type: "POST", url: "../updateImages.php", data: "action=delete&id=" + id, beforeSend: function() { parent.animate({'backgroundColor':'#fb6c6c'},300); $.jnotify("<strong>Deleting This Image & Updating The Image Order</strong>", 5000); }, success: function(data) { parent.slideUp(300,function() { parent.remove(); $("#images ul").sortable(function() { //NEEDS TO GET THE UPDATED CONTENT var order = $(this).sortable("serialize") + '&action=updateRecordsListings'; $.post("../updateImages.php", order, function(theResponse){ $.jnotify("<strong>" + theResponse + "</strong>", 2000); }); }); }); } }); return false; }); Thanks for any help you can offer.
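
    A hedged sketch of the success handler: sortable("serialize") reads the DOM at the moment it is called, so calling it directly inside the slideUp callback (after parent.remove()) picks up the refreshed list; passing a function to .sortable() as in the snippet above never re-runs the serialization.

        // Hedged sketch: replacement for the success option in the $.ajax call above.
        success: function (data) {
            parent.slideUp(300, function () {
                parent.remove();

                // Serialize whatever is in the list *now*, after the removal.
                var order = $("#images ul").sortable("serialize") +
                            "&action=updateRecordsListings";

                $.post("../updateImages.php", order, function (theResponse) {
                    $.jnotify("<strong>" + theResponse + "</strong>", 2000);
                });
            });
        }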

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012 News Facts During his opening keynote address at Oracle OpenWorld, Oracle CEO, Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications. Next-Generation Technologies Deliver Dramatic Performance Improvements Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database Workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including: Four times the Flash memory capacity of the previous generation; with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata’s unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash; 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold; 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors; Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2 provides 40 10Gb network ports per rack for connecting users and moving data; Up to 30 percent reduction in power and cooling. Configured for Your Business, Available Today Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations and existing systems can also be upgraded with Oracle Exadata X3-2 servers. 
Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle’s PeopleSoft, Oracle’s Siebel CRM, the Oracle E-Business Suite, and thousands of other applications. Supporting Quotes “Forward-looking enterprises are moving towards Cloud Computing architectures,” said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. “Oracle Exadata’s unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today.” Supporting Resources Oracle Press Release Oracle Exadata Database Machine Oracle Exadata X3-2 Database In-Memory Machine Oracle Exadata X3-8 Database In-Memory Machine Oracle Database 11g Follow Oracle Database via Blog, Facebook and Twitter Oracle OpenWorld 2012 Oracle OpenWorld 2012 Keynotes Like Oracle OpenWorld on Facebook Follow Oracle OpenWorld on Twitter Oracle OpenWorld Blog Oracle OpenWorld on LinkedIn Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - - watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • SQL SERVER – List of All the Samples Database Available to Download for FREE

    - by Pinal Dave
    It is pretty much very common to have a sample database for any database product. Different companies keep on improving their product and keep on coming up with innovation in their product. To demonstrate the capability of their new enhancements they need the sample database. Microsoft have various sample database available for free download for their SQL Server Product. I have collected them here in a single blog post. Download an AdventureWorks Database The AdventureWorks OLTP database supports standard online transaction processing scenarios for a fictitious bicycle manufacturer (Adventure Works Cycles). Scenarios include Manufacturing, Sales, Purchasing, Product Management, Contact Management, and Human Resources. Coconut Dal Coconut Dal is a lightweight data access layer, for use in projects where the Entity Framework cannot be used or Microsoft’s Enterprise Library Data Block is unsuitable. Anyone who is handwriting ADO.NET should use a library instead and Coconut Dal might be the answer.  DataBooster – Extension to ADO.NET Data Provider The dbParallel DataBooster library is a high-performance extension to ADO.NET Data Provider, includes two aspects: 1) A slimmed down API encapsulation which simplified the most common data access operations (DbConnection -> DbCommand -> DbParameter -> DbDataReader) into a single class DbAccess, to help application with a clean DAL, avoid over-packing and redundant-copy of data transfer. 2) A booster for writing mass data onto database. Base on a rational utilization of database concurrency and a effective utilization of network bandwidth. Tabular AMO 2012 The sample is made of two project parts. The first part is a library of functions to manage tabular models -AMO2Tabular V2-. The second part is a sample to build a tabular model -AdventureWorks Tabular AMO 2012- using the AMO2Tabular library; the created model is similar to the ‘AdventureWorks Tabular Model 2012. SQL Server Analysis Services Product Samples SQL Server Analysis Services provides, a unified and integrated view of all your business data as the foundation for all of your traditional reporting, online analytical processing (OLAP) analysis, Key Performance Indicator (KPI) scorecards, and data mining. Analysis Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples you will also need to download the AdventureWorks family of databases. SQL Server Reporting Services Product Samples This project contains Reporting Services samples released with Microsoft SQL Server product. These samples are in the following five categories: Application Samples, Extension Samples, Model Samples, Report Samples, and Script Samples. If you are interested in contributing Reporting Services samples, please let us know by posting in the developers’ forum. Reporting Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2 PCU1. For many of these samples you will also need to download the AdventureWorks family of databases. SQL Server Integration Services Product Samples This project contains Integration Services samples released with Microsoft SQL Server product. These samples are in the following two categories: Package Samples and Programming Samples. If you are interested in contributing Integration Services samples, please let us know by posting in the developers’ forum. 
Integration Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples you will also need to download the AdventureWorks family of databases. Windows Azure SQL Reporting Admin Sample The SQLReportingAdmin sample for Windows Azure SQL Reporting demonstrates the usage of SQL Reporting APIs, and manages (add/update/delete) permissions of SQL Reporting users. Windows Azure SQL Reporting ReportViewer-SOAP API usage sample These sample projects demonstrate how to embed a Microsoft ReportViewer control that points to reports hosted on SQL Reporting report servers and how to use SQL Reporting SOAP APIs in your Windows Azure Web application. Enterprise Library 5.0 – Integration Pack for Windows Azure This NuGet package contains a zip file with the source code for the Enterprise Library Integration Pack for Windows Azure.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Sample Database

    Read the article

  • Google TV Gets Bad Reception. Can Media Center Pull in the Signal?

    - by andrewbrust
    The news hit Monday morning that Google has decided to delay the release of its Google TV platform, and has asked its OEMs to delay any products that embed the software.  Coming just about two weeks prior to the 2011 Consumer Electronics Show (CES), Google’s timing is about the worst imaginable.  CES is where the platform should have had its coming out party, especially given all the anticipation that has built up since its initial announcement came 7 months ago. At last year’s CES, it seemed every consumer electronics company had fashioned its own software stack for Internet-based video programming and applications/widgets on its TVs, optical disc players and set top boxes.  In one case, I even saw two platforms on a single TV set (one provided by Yahoo and the other one native to the TV set). The whole point of Google TV was to solve this problem and offer a standard, embeddable platform.  But that won’t be happening, at least not for a while.  Google seems unable to get it together, and more proprietary approaches, like Apple TV, don’t seem to be setting the world of TV-Internet convergence on fire, either. It seems to me, that when it comes to building a “TV operating system,” Windows Media Center is still the best of a bad bunch.  But it won’t stay so for much longer without some changes.  Will Redmond pick up the ball that Google has fumbled?  I’m skeptical, but hopeful.  Regardless, here are some steps that could help Microsoft make the most of Google’s faux pas: Introduce a new Media Center version that uses XBox 360, rather than Windows 7 (or 8), as the platform.  TV platforms should be appliance-like, not PC-like.  Combine that notion with the runaway sales numbers for Xbox 360 Kinect, and the mass appeal it has delivered for Xbox, and the switch form Windows makes even more sense. As I have pointed out before, Microsoft’s Xbox implementation of its Mediaroom platform (announced and demoed at last year’s CES) gets Redmond 80% of the way toward this goal.  Nothing stops Microsoft from going the other 20%, other than its own apathy, which I hope has dissipated. Reverse the decision to remove Drive Extender technology from Windows Home Server (WHS), and create deep integration between WHS and Media Center.  I have suggested this previously as well, but the recent announcement that Drive Extender would be dropped from WHS 2.0 creates the need for me to a) join the chorus of people urging Microsoft to reconsider and b) reiterate the importance of Media Center-WHS integration in the context of a Google compete scenario. Enable Windows Phone 7 (WP7) as a Media Center client.  This would tighten the integration loop already established between WP7, Xbox and Zune.  But it would also counter Echostar/DISH Network/Sling Media, strike a blow against Google/Android (and even Apple/iOS) and could be the final strike against TiVO. Bring the WP7 user interface to Media Center and Kinect-enable it.  This would further the integration discussed above and would be appropriate recognition of WP7’s Metro UI having been built on the heritage of the original Media Center itself.  And being able to run your DVR even if you can’t find the remote (or can’t see its buttons in the dark) could be a nifty gimmick. Microsoft can do this but its consumer-oriented organization, responsible for Xbox, Zune and WP7, has to take the reins here, or none of this will likely work.  There’s a significant chance that won’t happen, but I won’t let that stop me from hoping that it does and insisting that it must.  
Honestly, this fight is Microsoft’s to lose.

    Read the article

  • The dislikes of TDD

    - by andrewstopford
    I enjoy debates about TDD and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD, and it's the dislikes I'll focus on. The idea of having unit tests that cover virtually every line of code that I’ve written that I have to refactor every time I refactor my code makes me shudder.  Doing it this way makes me take nearly twice as long as it would otherwise take and I don’t feel like I get sufficient benefits from it. Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies with no SOLID concerns are a sure-fire reason you would do this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove these dependencies, but this should quickly be resolved as you refactor. If you find yourself still doing it then stop and look back at your design. Don’t get me wrong, I’m a big fan of unit tests.  I just prefer to write them after the code has stopped shaking a bit.  In fact most of my early testing is “manual”.  Either I write a small UI on top of my service that allows me to plug in values and try it or write some quick API tests that I throw away as soon as I have validated them. The problem with this is that a UI can make assumptions about your code that you then just unit-test around, and very quickly the design becomes bad and technical debt sweeps in. If you want to black-box test your code with a UI then do so after your TDD cycles, not before. This is probably my biggest issue with a literal TDD interpretation.  TDD says you never write a line of code without a failing test to show you need it.  I find it leads developers down a dangerous path.  Without any help from a methodology, I have met way too many developers in my life that “back into a solution”.  By this, I mean they write something, it mostly works and they discover a new requirement so they tack it on, and another and another and when they are done, they’ve got a monstrosity of special cases each designed to handle one specific scenario.  There’s way more code than there should be and it’s way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive rather than building a solution of special cases.  In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm.  For me, that’s a relatively monolithic exercise. TDD is a development practice, not a methodology; the danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve these problems; only good communication and practices like pairing will help. Above all else, an assumption that TDD replaces a methodology is a mistake: combine it with whatever works for your team\business, but only good communication will help. A good naming scheme\structure for folders, files and tests can help you and your team isolate which tests are for what.

    Read the article

  • The Next Wave of PeopleSoft Capabilities for the Staffing Industry Is Here

    - by Mark Rosenberg
    With the release of PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 in January this year, we introduced substantial new capabilities for our Staffing Industry customers. Through a co-development project with Infosys Limited, we have enriched Oracle's PeopleSoft Staffing Solution with new tools aimed at accelerating and improving the quality of job order fulfillment, increasing branch recruiter productivity, and driving profitable growth. Staffing industry firms succeed based on their ability to rapidly, cost-effectively, and continually fill their pipelines with new clients and job orders, recruit the best talent, and match orders with talent. Pressure to execute in each of these functional areas is even more acute on staffing firms as contingent labor becomes a more substantial and permanent part of the workforce mix. In an industry that creates value through speedy execution, there is little room for manual, inefficient processes and brittle, custom integrations, which throttle profitability and growth. The latest wave of investment in the PeopleSoft Staffing Solution focuses on generating efficiency and flexibility for our customers. Simplicity To operate profitably and continue growing, a Staffing enterprise needs its client management, recruiting, order fulfillment, and other processes to function in harmony. Most importantly, they need to be simple for recruiters, branch managers, and applicants to access and understand. The latest PeopleSoft Staffing Solution set of enhancements includes numerous automated defaulting mechanisms and information-rich dashboard pagelets that even a new employee can learn quickly. Pending Applicant, Agenda management, Search, and other pagelets are just a few of the newest, easy-to-use tools that not only aggregate and summarize information, but also provide instant access to applicants, tasks, and key reports for branch staff. Productivity The leading firms in the Staffing industry are those that can more efficiently orchestrate large numbers of candidates, clients, and orders than their competitors can. PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 delivers productivity boosters that Staffing firms can leverage to streamline tasks and processes for competitive advantage. For example, we enhanced the Recruiting Funnel, which manages the candidate on-boarding process, with a highly interactive user interface. It integrates disparate Staffing business processes and exploits new PeopleTools technologies to offer a superior on-boarding user experience. Automated creation of agenda items and assignment tasks for each candidate minimizes setup and organizes assignment steps for the on-boarding process. Mass updates of tasks and instant access to the candidate overview page (which we also expanded), candidate event status, event counts, and other key data enable recruiters to better serve clients and candidates. Lower TCO Constructing and maintaining an efficient yet flexible labor supply chain can be complicated, let alone expensive. Traditionally, Staffing firms have been challenged in controlling their technology cost of ownership because connecting candidate and client-facing tools involved building and integrating custom applications and technologies and managing staff turnover, placing heavy demands on IT and support staff. With PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2, there are two major enhancements that aggressively tackle these challenges. 
First, we added another integration framework to enable cost-effective linking of the Staffing firm’s PeopleSoft applications and its job board distributors. (The first PeopleSoft 9.1 Feature Pack released in March 2011 delivered an integration framework to connect to resume parsing providers.) Second, we introduced the teaming concept to enable work to be partitioned to groups, as well as individuals. These two capabilities, combined with a host of others, position Staffing firms to configure and grow their businesses without growing their IT and overhead expenditures. For our Staffing Industry customers, PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 is loaded with high-value tools aimed at enabling and sustaining a flexible labor supply chain. For more information, contact [email protected] or [email protected].

    Read the article

  • How do you install a USB CD Rom drive?

    - by Matt Allen
    Hello, I recently purchased a USB CD ROM drive, but I don't know how to get it to work with my computer which runs Ubuntu 10.04. http://www.amazon.com/gp/product/B00303H908/ref=oss_product When I issue the lsusb command, it shows up as: Bus 002 Device 016: ID 05e3:0701 Genesys Logic, Inc. USB 2.0 IDE Adapter The computer doesn't recognize it automatically. How can I get this drive to show up as an actual drive on my computer? If this particular drive can't handle Linux, can you recommended one which can and provide a link to it so I can purchase it? Thanks! Update: I was asked by Scaine to run a command and report back with the output: joe@joe-laptop:~$ tail -f /var/log/kern.log Dec 29 12:51:35 joe-laptop kernel: [103190.551437] sr 7:0:0:0: [sr1] Add. Sense: Illegal mode for this track Dec 29 12:51:35 joe-laptop kernel: [103190.551446] sr 7:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 Dec 29 12:51:35 joe-laptop kernel: [103190.551463] end_request: I/O error, dev sr1, sector 0 Dec 29 12:51:35 joe-laptop kernel: [103190.877542] sr 7:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Dec 29 12:51:35 joe-laptop kernel: [103190.877551] sr 7:0:0:0: [sr1] Sense Key : Illegal Request [current] Dec 29 12:51:35 joe-laptop kernel: [103190.877559] Info fld=0x0, ILI Dec 29 12:51:35 joe-laptop kernel: [103190.877562] sr 7:0:0:0: [sr1] Add. Sense: Illegal mode for this track Dec 29 12:51:35 joe-laptop kernel: [103190.877572] sr 7:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 Dec 29 12:51:35 joe-laptop kernel: [103190.877588] end_request: I/O error, dev sr1, sector 0 Dec 29 13:08:46 joe-laptop kernel: [104221.558911] usb 2-2.2: USB disconnect, address 16 Then when I plugged the drive back into the computer, I got: Dec 29 13:10:29 joe-laptop kernel: [104324.668320] usb 2-2.2: new high speed USB device using ehci_hcd and address 17 Dec 29 13:10:29 joe-laptop kernel: [104324.761702] usb 2-2.2: configuration #1 chosen from 1 choice Dec 29 13:10:29 joe-laptop kernel: [104324.762700] scsi8 : SCSI emulation for USB Mass Storage devices Dec 29 13:10:29 joe-laptop kernel: [104324.762935] usb-storage: device found at 17 Dec 29 13:10:29 joe-laptop kernel: [104324.762938] usb-storage: waiting for device to settle before scanning Dec 29 13:10:34 joe-laptop kernel: [104329.760521] usb-storage: device scan complete Dec 29 13:10:34 joe-laptop kernel: [104329.761344] scsi 8:0:0:0: CD-ROM TEAC CD-224E 1.7A PQ: 0 ANSI: 0 CCS Dec 29 13:10:34 joe-laptop kernel: [104329.767425] sr1: scsi3-mmc drive: 24x/24x cd/rw xa/form2 cdda tray Dec 29 13:10:34 joe-laptop kernel: [104329.767612] sr 8:0:0:0: Attached scsi CD-ROM sr1 Dec 29 13:10:34 joe-laptop kernel: [104329.767720] sr 8:0:0:0: Attached scsi generic sg2 type 5 Dec 29 13:10:34 joe-laptop kernel: [104330.141060] sr 8:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Dec 29 13:10:34 joe-laptop kernel: [104330.141069] sr 8:0:0:0: [sr1] Sense Key : Illegal Request [current] Dec 29 13:10:34 joe-laptop kernel: [104330.141077] Info fld=0x0, ILI Dec 29 13:10:34 joe-laptop kernel: [104330.141081] sr 8:0:0:0: [sr1] Add. 
Sense: Illegal mode for this track Dec 29 13:10:34 joe-laptop kernel: [104330.141090] sr 8:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 Dec 29 13:10:34 joe-laptop kernel: [104330.141106] end_request: I/O error, dev sr1, sector 0 Dec 29 13:10:34 joe-laptop kernel: [104330.141113] __ratelimit: 18 callbacks suppressed There was more output than this (the number of lines started growing after the drive was plugged back in, and kept growing), but this is the first few lines.

    Read the article

  • Google, typography, and cognitive fluency for persuasion

    - by Roger Hart
    Cognitive fluency is - roughly - how easy it is to think about something. Mere Exposure (or familiarity) effects are basically about reacting more favourably to things you see a lot. Which is part of why marketers in generic spaces like insipid mass-market lager will spend quite so much money on getting their logo daubed about the place; or that guy at the bus stop starts to look like a dating prospect after a month or two. Recent thinking suggests that exposure effects likely spin off cognitive fluency. We react favourably to things that are easier to think about. I had to give tech support to an older relative recently, and suggested they Google the problem. They were confused. They could not, apparently, Google the problem, because part of it was that their Google toolbar had mysteriously vanished. Once I'd finished trying not to laugh, I started thinking about typography. This is going somewhere, I promise. Google is a ubiquitous brand. Heck, it's a verb, and their recent, jaw-droppingly well constructed Paris advert is more or less about that ubiquity. It trades on Google's integration into any information-seeking behaviour. But, as my tech support encounter suggests, people settle into comfortable patterns of thinking about things. They build schemas, and altering them can take work. Maybe the ubiquity even works to cement that. Alongside their online effort, Google is running billboard campaigns to advertise Chrome, a free product in a crowded space. They are running these ads in some kind of kooky Calibri / Comic Sans hybrid. Now, at first it seems odd that one of the world's more ubiquitous brands needs to run a big print campaign in public places - surely they have all the fluency they need? Well, not so much. Chrome, after all, is not the same as their core product, so there's some basic awareness work to do, and maybe a whole new batch of exposure effect to try and grab. But why the typeface? It's heavily foregrounded, and the ads are extremely textual. Plus, don't we all know that jovial, off-beat fonts look unprofessional, or something? There's a whole bunch of people who want (often rightly) to ban Comic Sans I wonder, though. Are Google trying to subtly disrupt cognitive fluency? There's an interesting paper (pdf) about - among other things - the effects of typography on they way people answer survey questions. Participants given the slightly harder to read question gave more abstract answers. The paper references other work suggesting that generally speaking, less-fluent question framing elicits more considered answers. The Chrome ad typeface is less fluent for print. Reactions may therefore be more considered, abstract, and disruptive. Is that, in fact, what Google need? They have brand ubiquity, but they want here to change accustomed behaviour, to get people to think about changing their browser. Is this actually a very elegant piece of persuasive information design? If you think about their "what is a browser?" vox pop research video, there's certainly a perceptual barrier they're going to have to tackle somehow.

    Read the article

  • Book review (Book 6) - Wikinomics

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for year. You can read my first book review here. The book I chose for November 2011 was: Wikinomics: How Mass Collaboration Changes Everything, by Don Tapscott   Why I chose this Book: I’ve heard a lot about this book - was one of the “must read” kind of business books (many of which are very “fluffy”) and supposedly deals with collaborating using technology - so I want to see what it says about collaborative efforts and how I can leverage them. What I learned: I really disliked this book. I’ve never been a fan of the latest “business book”, and sadly that’s what this felt like to me. A “business book” is what I call a work that has a fairly simple concept to get across, and then proceeds to use various made-up terms, analogies and other mechanisms to fill hundreds of pages doing it. This perception is at my own – the book is pretty old, and these things go stale quickly. The author’s general point (at least what I took away from it) was: Open Source is good, proprietary is bad. Collaboration is the hallmark of successful companies. In my mind, you can save yourself the trouble of reading this work if you get these two concepts down. Don’t get me wrong – open source is awesome, and collaboration is a good thing, especially in places where it fits. But it’s not a panacea as the author seems to indicate. For instance, he continuously uses the example of MySpace to show a “2.0” company, which I think means that you can enter text as well as read it on a web page. All well and good. But we all know what happened to MySpace, and of course he missed the point entirely about this new web environment: low barriers to entry often mean low barriers to exit. And the open, collaborative company being the best model – well, I think we all know a certain computer company famous for phones and music that is arguably quite successful, and is probably one of the most closed, non-collaborative (at least with its customers) on the planet. So that sort of takes away that argument. The reality of business is far more complicated. Collaboration is an amazing tool, and should be leveraged heavily. However, at the end of the day, after you do your research you need to pick a strategy and stick with it. Asking thousands of people to assist you in building your product probably will not work well. Open Source is great – but some proprietary products are quite functional as well, have a long track record, are well supported, and will probably be upgraded. Everything has its place, so use what works where it is needed. There is no single answer, sadly. So did I waste my time reading the book? Did I make a bad choice? Not at all! Reading the opinions and thoughts of others is almost always useful, and it’s important to consider opinions other than your own. If nothing else, thinking through the process either convinces you that you are wrong, or helps you understand better why you are right.

    Read the article

  • Bad Spot to Be In: Playing Catch-up with Mobile Advertising

    - by Mike Stiles
    You probably noticed, there’s a mass migration going on from online desktop/laptop usage to smartphone/tablet usage.  It’s an indicator of how we live our lives in the modern world: always on the go, with no intention of being disconnected while out there. Consequently, paid as it relates to mobile advertising is taking the social spotlight. eMarketer estimated that in 2013, US adults would spend about 2 hours, 21 minutes a day on mobile, not counting talking time. More people in the world own smartphones than own toothbrushes (bad news I suppose if you’re marketing toothpaste). They’re using those mobile devices to access social networks, consuming at least 17% of their mobile time on them. Frankly, you don’t need a deep dive into mobile usage stats to know what’s going on. Just look around you in any store, venue or coffee shop. It’s really obvious…our mobile devices are now where we “are,” so that’s where marketers can increasingly reach us. And it’s a smart place for them to do just that. Mobile devices can be viewed more and more as shopping facilitators. Usually when someone is on mobile, they are not in passive research mode. They are likely standing near a store or in front of a product, using their mobile to seek reassurance that buying that product is the right move. They are the hottest of hot prospects. Consider that 4 out of 5 consumers use smartphones to shop, 52% of Americans use mobile devices for in-store for research, 70% of mobile searches lead to online action inside of an hour, and people that find you on mobile convert at almost 3x the rate as those that find you on desktop or laptop. But what are marketers doing? Enter statistics from Mary Meeker’s latest State of the Internet report. Common sense says you buy advertising where people are spending their eyeball time, right? But while mobile is 20% of media use and rising, the ad spend there is 4%. Conversely, while print usage is at 5% and falling, ad spend there is 19%. We all love nostalgia, but come on. There are reasons marketing dollar migration to mobile has not matched user migration, including the availability of mobile ad products and the ability to measure user response to mobile ads. But interesting things are happening now. First came Facebook’s mobile ad, which let app developers pay to get potential downloads. Then their mobile ad network was announced at F8, allowing marketers to target users across non-Facebook apps while leveraging the wealth of diverse data Facebook has on those users, a big deal since Nielsen has pointed out mobile apps make up 89% of the media time spent on mobile. Twitter has a similar play in motion with their MoPub acquisition. And now mobile deeplinks have arrived, which can take users straight to sub-pages of mobile apps for a faster, more direct shopper/researcher user experience. The sooner the gratification, the smoother and faster the conversion. To be clear, growth in mobile ad spending is well underway. After posting $13.1 billion in 2013, Gartner expects global mobile ad spending to reach $18 billion this year, then go to $41.9 billion by 2017. Cheap smartphones and data plans are spreading worldwide, further fueling the shift to mobile. Mobile usage in India alone should grow 400% by 2018. And, of course, there’s the famous statistic that mobile should overtake desktop Internet usage this year. How can we as marketers mess up this opportunity? Two ways. 
We could position ourselves in perpetual “catch-up” mode and keep spending ad dollars where the public used to be. And we could annoy mobile users with horrid old-school marketing practices. Two-thirds of users told Forrester they think interruptive in-app ads are more annoying than TV ads. Make sure your brand’s social marketing technology platform is delivering a crystal clear picture of your social connections so the mobile touch point is highly relevant, mobile optimized, and delivering real value and satisfying experiences. Otherwise, all we’ve done is find a new way to be unwanted. @mikestiles @oraclesocialPhoto: Kate Mallatratt, freeimages.com

    Read the article

  • DeveloperDeveloperDeveloper! Scotland 2010 - DDDSCOT

    - by Plip
    DDD in Scotland was held on the 8th May 2010 in Glasgow and I was there, not, as is usual at these kinds of things, as an organiser, but actually as a speaker and delegate. The weekend started for me back on Thursday with the arrival of Dave Sussman at my place in Lancashire; after a curry and watching the Election night TV coverage we retired to our respective beds (yes, I know, I hate to shatter the illusion that we both sleep in the same bed wearing matching pyjamas) ready for the drive up to Glasgow the following afternoon. Before heading up to Glasgow we had to pick up Young Mr Hardy from Wigan, then we began the four hour drive back in time... Something that struck me on the journey up is just how beautiful Scotland is. The menacing landscapes bordered with fluffy sheep and whirly-ma-gigs are awe inspiring - well worth driving up if you ever get the chance. Anywho, we arrived in Glasgow, got settled into the hotel and went in search of Speakers for pre-conference drinks and food. We discovered a gaggle (I believe that's the collective term) of speakers in the Bar and when we reached critical mass headed off to the Speakers Dinner location. During dinner, SOMEONE set my hair on FIRE. That's all I'm going to say on the matter. Whilst I was enjoying my evening there was something nagging at me: I realised that I should really write my session, as I was due to give it the following morning. So after a few more drinks I headed back to the hotel and got some well earned sleep (and washed the fire damage out of my hair). Next day, headed off to the conference, which was a lovely stroll through Glasgow City Centre. None of us got mugged, murdered (or set on fire), and we arrived safely at the venue, which was a bonus. I was asked to read out the opening slides for Barry Carr's session, which I did diligently and with such professionalism that I shocked even myself. At which point I realised that in just over an hour I had to give my presentation, so headed back to the speaker room to finish writing it. Wham, bam and it was all over. The session seemed to go well. I was speaking on Exception Driven Development, which isn't so much a technical solution but rather a mindset around how one should treat exceptions and their code. To be honest, I've not been so nervous giving a session for years - something about this topic worried me. I was concerned I was being too abstract in my thinking or that what I was saying was so obvious that everyone would know it, but it seems to have been well received, which makes me a happy speaker. Craig Murphy has some brilliant pictures of DDD Scotland 2010. After my session was done I grabbed some lunch and headed back to the hotel and into town to do some shopping (thus my conspicuous omission from the above photo). Later on we headed out to the geek dinner, which again was a rum affair, followed by a few drinks and a little boogie woogie. All in all a well run, well attended conference, by the community for the community. I tip my hat to the whole team who put on DDD Scotland!

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
    A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

    There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes.

    As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission-critical and time-sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle.

    To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement, and that comes in the form of Fusion Applications. We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance.

    We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk.

    Each month, we have to consolidate data from our primary general ledgers. We have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. We can also submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility to period-end actuals during the close.

    A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we have the ability to fully deploy ledger sets at the consolidation level, plus we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill-down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source -- a significant productivity enhancement when doing research.

    We also see a significant improvement in reporting using the Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle.

    A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams. It also reduces the risk that comes with moving the entire instance at once.

    Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to take up more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings."

    In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team, who get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and is pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group."

    Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller!

    Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2012

    - by Bob Rhubart
    Every day ArchBeat searches the web for content created by and for community members, and then shares that content via social media. Here's the list of the Top 10 most popular items posted on the OTN ArchBeat Facebook Page for November 2012.

    1. One-Stop Shop for Oracle Webcasts -- Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content.

    2. OAM/OVD JVM Tuning -- Vinay from the Oracle Fusion Middleware Architecture Group (otherwise known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.

    3. White Paper: Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads -- This new white paper by Adam Hawley (with contributions from Yoav Eilat) describes in great detail the incorporation into Oracle Exalogic of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology.

    4. Architected Systems: "If you don't develop an architecture, you will get one anyway..." -- "Can you build a system without taking care of architecture?" asks Manuel Ricca. "You certainly can. But inevitably the system will be unbalanced, neglecting the interests of key stakeholders, and problems will soon emerge."

    5. Backup and Recovery of an Exalogic vServer via rsync -- "On Exalogic a vServer will consist of a number of resources from the underlying machine," says the man known only as Donald. "These resources include compute power, networking and storage. In order to recover a vServer from a failure in the underlying rack all of these components have to be thought about. This article only discusses the backup and recovery strategies that apply to the storage system of a vServer."

    6. This Week on the OTN Architect Community Home Page -- Make time to check out this week's features on the OTN Solution Architect Homepage, including the SOA Practitioner Guide: Identifying and Discovering Services; a technical article by Yuli Vasiliev on Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; and the podcast Are You Future Proof?

    7. Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley -- "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Mead's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic – when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.

    8. OIM 11g: Multi-thread approach for writing custom scheduled job | Saravanan V S -- Saravanan shares insight and expertise relevant to "designing and developing an OIM schedule job that uses multi threaded approach for updating data in OIM using APIs."

    9. How to Create Virtual Directory in Weblogic Server | Zeeshan Baig -- Oracle ACE Zeeshan Baig shows you how in six easy steps.

    10. SOA Galore: New Books for Technical Eyes Only -- Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.
Thought for the Day

"Humans are the best value in computers -- where else can you get a non-linear computer weighing only about 160lbs, having a billion binary decision elements, that can be mass-produced by unskilled labour?" — Anonymous

Source: SoftwareQuotes.com

    Read the article

  • Feynman's inbox

    - by user12607414
    Here is Richard Feynman writing on the ease of criticizing theories, and the difficulty of forming them:

    The problem is not just to say something might be wrong, but to replace it by something — and that is not so easy. As soon as any really definite idea is substituted it becomes almost immediately apparent that it does not work. The second difficulty is that there is an infinite number of possibilities of these simple types. It is something like this. You are sitting working very hard, you have worked for a long time trying to open a safe. Then some Joe comes along who knows nothing about what you are doing, except that you are trying to open the safe. He says ‘Why don’t you try the combination 10:20:30?’ Because you are busy, you have tried a lot of things, maybe you have already tried 10:20:30. Maybe you know already that the middle number is 32 not 20. Maybe you know as a matter of fact that it is a five digit combination… So please do not send me any letters trying to tell me how the thing is going to work. I read them — I always read them to make sure that I have not already thought of what is suggested — but it takes too long to answer them, because they are usually in the class ‘try 10:20:30’.

    (“Seeking New Laws”, page 161 in The Character of Physical Law.)

    As a sometime designer (and longtime critic) of widely used computer systems, I have seen similar difficulties appear when anyone undertakes to publicly design a piece of software that may be used by many thousands of customers. (I have been on both sides of the fence, of course.) The design possibilities are endless, but the deep design problems are usually hidden beneath a mass of superfluous detail. The sheer numbers can be daunting. Even if only one customer out of a thousand feels a need to express a passionately held idea, it can take a long time to read all the mail. And it is a fact of life that many of those strong suggestions are only weakly supported by reason or evidence. Opinions are plentiful, but substantive research is time-consuming, and hence rare. A related phenomenon commonly seen with software is bike-shedding, where interlocutors focus on surface details like naming and syntax… or (come to think of it) like lock combinations.

    On the other hand, software is easier than quantum physics, and the population of people able to make substantial suggestions about software systems is several orders of magnitude bigger than Feynman’s circle of colleagues. My own work would be poorer without contributions — sometimes unsolicited, sometimes passionately urged on me — from the open source community. If a Nobel prize winner thought it was worthwhile to read his mail on the faint chance of learning a good idea, I am certainly not going to throw mine away.

    (In case anyone is still reading this, and is wondering what provoked a meditation on the quality of one’s inbox contents, I’ll simply point out that the volume has been very high, for many months, on the Lambda-Dev mailing list, where the next version of the Java language is being discussed. Bravo to those of my colleagues who are surfing that wave.)

    I started this note thinking there was an odd parallel between the life of the physicist and that of a software designer. On second thought, I’ll bet that is the story for anybody who works in public on something requiring special training. (And that would be pretty much anything worth doing.) In any case, Feynman saw it clearly and said it well.

    Read the article

  • Cannot Mount USB 3.0 Hard Disk ?!!

    - by Tenken
    Hi, I have a USB 3.0 external hard disk which I am unable to mount. The entry appears in the output of the "lsusb" command, but I do not exactly understand how to mount it. "ASMedia Technology Inc." is the USB 3.0 device. I would appreciate some help in mounting and accessing the hard disk. This is the relevant output of my "lsusb -v":

    Bus 009 Device 002: ID 174c:5106 ASMedia Technology Inc.
    Device Descriptor:
      bLength                18
      bDescriptorType         1
      bcdUSB               2.10
      bDeviceClass            0 (Defined at Interface level)
      bDeviceSubClass         0
      bDeviceProtocol         0
      bMaxPacketSize0        64
      idVendor           0x174c ASMedia Technology Inc.
      idProduct          0x5106
      bcdDevice            0.01
      iManufacturer           2 ASMedia
      iProduct                3 AS2105
      iSerial                 1 00000000000000000000
      bNumConfigurations      1
      Configuration Descriptor:
        bLength                 9
        bDescriptorType         2
        wTotalLength           32
        bNumInterfaces          1
        bConfigurationValue     1
        iConfiguration          0
        bmAttributes         0xc0
          Self Powered
        MaxPower                0mA
        Interface Descriptor:
          bLength                 9
          bDescriptorType         4
          bInterfaceNumber        0
          bAlternateSetting       0
          bNumEndpoints           2
          bInterfaceClass         8 Mass Storage
          bInterfaceSubClass      6 SCSI
          bInterfaceProtocol     80 Bulk (Zip)
          iInterface              0
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x81  EP 1 IN
            bmAttributes            2
              Transfer Type            Bulk
              Synch Type               None
              Usage Type               Data
            wMaxPacketSize     0x0200  1x 512 bytes
            bInterval               0
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x02  EP 2 OUT
            bmAttributes            2
              Transfer Type            Bulk
              Synch Type               None
              Usage Type               Data
            wMaxPacketSize     0x0200  1x 512 bytes
            bInterval               0
    Device Qualifier (for other device speed):
      bLength                10
      bDescriptorType         6
      bcdUSB               2.10
      bDeviceClass            0 (Defined at Interface level)
      bDeviceSubClass         0
      bDeviceProtocol         0
      bMaxPacketSize0        64
      bNumConfigurations      1
    Device Status:     0x0001
      Self Powered

    Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Device Descriptor:
      bLength                18
      bDescriptorType         1
      bcdUSB               3.00
      bDeviceClass            9 Hub
      bDeviceSubClass         0 Unused
      bDeviceProtocol         3
      bMaxPacketSize0         9
      idVendor           0x1d6b Linux Foundation
      idProduct          0x0003 3.0 root hub
      bcdDevice            2.06
      iManufacturer           3 Linux 2.6.35-28-generic xhci_hcd
      iProduct                2 xHCI Host Controller
      iSerial                 1 0000:04:00.0
      bNumConfigurations      1
      Configuration Descriptor:
        bLength                 9
        bDescriptorType         2
        wTotalLength           25
        bNumInterfaces          1
        bConfigurationValue     1
        iConfiguration          0
        bmAttributes         0xe0
          Self Powered
          Remote Wakeup
        MaxPower                0mA
        Interface Descriptor:
          bLength                 9
          bDescriptorType         4
          bInterfaceNumber        0
          bAlternateSetting       0
          bNumEndpoints           1
          bInterfaceClass         9 Hub
          bInterfaceSubClass      0 Unused
          bInterfaceProtocol      0 Full speed (or root) hub
          iInterface              0
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x81  EP 1 IN
            bmAttributes            3
              Transfer Type            Interrupt
              Synch Type               None
              Usage Type               Data
            wMaxPacketSize     0x0004  1x 4 bytes
            bInterval              12
    Hub Descriptor:
      bLength                 9
      bDescriptorType        41
      nNbrPorts               4
      wHubCharacteristic 0x0009
        Per-port power switching
        Per-port overcurrent protection
        TT think time 8 FS bits
      bPwrOn2PwrGood         10 * 2 milli seconds
      bHubContrCurrent        0 milli Ampere
      DeviceRemovable      0x00
      PortPwrCtrlMask      0xff
    Hub Port Status:
      Port 1: 0000.0100 power
      Port 2: 0000.0100 power
      Port 3: 0000.0503 highspeed power enable connect
      Port 4: 0000.0503 highspeed power enable connect
    Device Status:     0x0003
      Self Powered
      Remote Wakeup Enabled

    This is the error given when I try to mount the hard drive:

    shinso@shinso-IdeaPad:~$ sudo mount /dev/sdb /mnt
    [sudo] password for shinso:
    mount: /dev/sdb: unknown device

    This is the output of "dmesg|tail":

    [30062.774178] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
    [30535.800977] usb 9-4: USB disconnect, address 3
    [30659.237342] Valid eCryptfs headers not found in file header region or xattr region
    [30659.237351] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
    [31259.268310] Valid eCryptfs headers not found in file header region or xattr region
    [31259.268313] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
    [31860.059058] Valid eCryptfs headers not found in file header region or xattr region
    [31860.059062] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
    [32465.220590] Valid eCryptfs headers not found in file header region or xattr region
    [32465.220593] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO

    I am using Ubuntu 10.10 (64 bit). Any help is appreciated.
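    A note on one possible line of investigation (a sketch only, not a confirmed fix): "mount: /dev/sdb: unknown device" often means the whole-disk node has no recognizable filesystem, and the partition (for example /dev/sdb1) should be mounted instead. The mount point, partition name, and filesystem type below are assumptions for illustration and may differ on the actual system.

    # List partition tables the kernel can see; look for /dev/sdb and any /dev/sdb1, /dev/sdb2, ...
    sudo fdisk -l

    # Confirm which device nodes exist for the disk
    ls /dev/sd*

    # Watch recent kernel messages after re-plugging the drive to confirm it is detected as a storage device
    dmesg | tail -n 20

    # If a partition such as /dev/sdb1 appears, try mounting it (assuming an NTFS-formatted drive here;
    # use -t vfat, or omit -t entirely for auto-detection, if it is formatted differently)
    sudo mkdir -p /mnt/usbdrive
    sudo mount -t ntfs-3g /dev/sdb1 /mnt/usbdrive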

    Read the article
