Search Results

Search found 4099 results on 164 pages for 'bulk export'.


  • How would you structure your entity model for storing arbitrary key/value data with different data types?

    - by Nathan Ridley
    I keep coming across scenarios where it will be useful to store a set of arbitrary data in a table using a per-row key/value model, rather than a rigid column/field model. The problem is, I want to store the values with their correct data type rather than converting everything to a string. This means I have to choose either a single table with multiple nullable columns, one for each data type, or a set of value tables, one for each data type. I'm also unsure as to whether I should use full third normal form and separate the keys into a separate table, referencing them via a foreign key from the value table(s), or if it would be better to keep things simple, store the string keys in the value table(s), and accept the duplication of strings.

    Old/bad: This solution makes adding additional values a pain in a fluid environment because the table needs to be modified regularly.

    MyTable
    ============================
    ID    Key1      Key2      Key3
          int       int       string    date
    ----------------------------
    1     Value1    Value2    Value3
    2     Value4    Value5    Value6

    Single Table Solution: This solution allows simplicity via a single table. The querying code still needs to check for nulls to determine which data type the field is storing. A check constraint is probably also required to ensure only one of the value fields contains non-null data.

    DataValues
    =============================================================
    ID    RecordID    Key       IntValue    StringValue    DateValue
    int   int         string    int         string         date
    -------------------------------------------------------------
    1     1           Key1      Value1      NULL           NULL
    2     1           Key2      NULL        Value2         NULL
    3     1           Key3      NULL        NULL           Value3
    4     2           Key1      Value4      NULL           NULL
    5     2           Key2      NULL        Value5         NULL
    6     2           Key3      NULL        NULL           Value6

    Multiple-Table Solution: This solution allows for more concise purposing of each table, though the code needs to know the data type in advance as it needs to query a different table for each data type. Indexing is probably simpler and more efficient because there are fewer columns that need indexing.

    IntegerValues
    ===============================
    ID    RecordID    Key       Value
    int   int         string    int
    -------------------------------
    1     1           Key1      Value1
    2     2           Key1      Value4

    StringValues
    ===============================
    ID    RecordID    Key       Value
    int   int         string    string
    -------------------------------
    1     1           Key2      Value2
    2     2           Key2      Value5

    DateValues
    ===============================
    ID    RecordID    Key       Value
    int   int         string    date
    -------------------------------
    1     1           Key3      Value3
    2     2           Key3      Value6

    How do you approach this problem? Which solution is better? Also, should the key column be separated into a separate table and referenced via a foreign key, or should it be kept in the value table and bulk updated if for some reason the key name changes?

    Read the article

  • Why is the page still caching even after the no-cache headers have been sent?

    - by Matthew Grasinger
    I've done a ton of research on this and have asked many people for help, and still no success. Here are the details... I'm involved in developing a website that pulls data from various data files, combines them in a temp .csv file, and then graphs the data using a popular graphing library: dygraphs. The bulk of the website is written in PHP. The parameters that determine the data that is graphed are stored in the user's session, the .csv is named after the user's session and available for download, and the .csv file is written in a script that passes it to the dygraphs object. And we've found that, even with the no-cache headers sent:

    header("Cache-Control: no-cache, must-revalidate");
    header("Expires: Sat, 26 Jul 1997 05:00:00 GMT");

    many users experience, in the middle of a session (if enough different graphs are generated), the page displaying an older, static rendering of the page (data they had graphed earlier in the session), as if it were cached and loaded instead of getting a new request.

    It only gets weirder though: I've checked using developer tools in both Firefox and Chrome and both browsers are receiving the no-cache headers just fine. Even when the problem occurs, if you view the page source, the source is the correct content (a table/legend is also dynamically created using PHP; the source shows the correct table, but what is rendered is older content). The page begins to render correctly until the graph is about to be displayed, and then shows the older content. The older content displays as if it were a completely static overlay--the cached graph does not have the same dynamic features (roll-over data point display, zoom and pan, etc.), and it is as if the correct page were somewhere beneath it (the download button for the csv file moves depending on how large the table is; the older, static page does nothing if you click the download .csv button, but if you can manage to find the one in the page beneath it you can click and still download the .csv. The data in the .csv is correct).

    It is one of the strangest things I've seen in development thus far. Some other relevant facts are that all the problems I've personally experienced occurred while I was using Chrome. None of these symptoms have been reported by Firefox users. IE users have had the same problems (IE users are forced to use Chrome Frame). I'm at my wits' end at this point. We've sent the PHP headers; we've tried setting the cache profile for PHP on IIS as "DisableCache" (or whatever); we've tried sending a random query string to the results page; we've tried all the appropriate meta tags--all with no success.

    Read the article

  • CreationName for SSIS 2008 and adding components programmatically

    If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSIS 2005. We start with a very simple snippet for adding a component:

    // Add the Data Flow Task
    package.Executables.Add("STOCK:PipelineTask");
    // Get the task host wrapper, and the Data Flow task
    TaskHost taskHost = package.Executables[0] as TaskHost;
    MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;
    // Add OLE-DB source component - ** This is where we need the creation name **
    IDTSComponentMetaData90 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
    componentSource.Name = "OLEDBSource";
    componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2";

    So as you can see, the creation name for an OLE-DB Source is DTSAdapter.OLEDBSource.2.

    CreationName Reference
    ADO NET Destination - Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    ADO NET Source - Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Aggregate - DTSTransform.Aggregate.2
    Audit - DTSTransform.Lineage.2
    Cache Transform - DTSTransform.Cache.1
    Character Map - DTSTransform.CharacterMap.2
    Checksum - Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
    Conditional Split - DTSTransform.ConditionalSplit.2
    Copy Column - DTSTransform.CopyMap.2
    Data Conversion - DTSTransform.DataConvert.2
    Data Mining Model Training - MSMDPP.PXPipelineProcessDM.2
    Data Mining Query - MSMDPP.PXPipelineDMQuery.2
    DataReader Destination - Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Derived Column - DTSTransform.DerivedColumn.2
    Dimension Processing - MSMDPP.PXPipelineProcessDimension.2
    Excel Destination - DTSAdapter.ExcelDestination.2
    Excel Source - DTSAdapter.ExcelSource.2
    Export Column - TxFileExtractor.Extractor.2
    Flat File Destination - DTSAdapter.FlatFileDestination.2
    Flat File Source - DTSAdapter.FlatFileSource.2
    Fuzzy Grouping - DTSTransform.GroupDups.2
    Fuzzy Lookup - DTSTransform.BestMatch.2
    Import Column - TxFileInserter.Inserter.2
    Lookup - DTSTransform.Lookup.2
    Merge - DTSTransform.Merge.2
    Merge Join - DTSTransform.MergeJoin.2
    Multicast - DTSTransform.Multicast.2
    OLE DB Command - DTSTransform.OLEDBCommand.2
    OLE DB Destination - DTSAdapter.OLEDBDestination.2
    OLE DB Source - DTSAdapter.OLEDBSource.2
    Partition Processing - MSMDPP.PXPipelineProcessPartition.2
    Percentage Sampling - DTSTransform.PctSampling.2
    Performance Counters Source - DataCollectorTransform.TxPerfCounters.1
    Pivot - DTSTransform.Pivot.2
    Raw File Destination - DTSAdapter.RawDestination.2
    Raw File Source - DTSAdapter.RawSource.2
    Recordset Destination - DTSAdapter.RecordsetDestination.2
    RegexClean - Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e
    Row Count - DTSTransform.RowCount.2
    Row Count Plus - Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
    Row Number - Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
    Row Sampling - DTSTransform.RowSampling.2
    Script Component - Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Slowly Changing Dimension - DTSTransform.SCD.2
    Sort - DTSTransform.Sort.2
    SQL Server Compact Destination - Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    SQL Server Destination - DTSAdapter.SQLServerDestination.2
    Term Extraction - DTSTransform.TermExtraction.2
    Term Lookup - DTSTransform.TermLookup.2
    Trash Destination - Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc
    TxTopQueries - DataCollectorTransform.TxTopQueries.1
    Union All - DTSTransform.UnionAll.2
    Unpivot - DTSTransform.UnPivot.2
    XML Source - Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91

    Here is a simple console program that can be used to enumerate the pipeline components installed on your machine, and dump out a list of all components like the one above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly.

    using System;
    using System.Diagnostics;
    using Microsoft.SqlServer.Dts.Runtime;

    public class Program
    {
        static void Main(string[] args)
        {
            Application application = new Application();
            PipelineComponentInfos componentInfos = application.PipelineComponentInfos;
            foreach (PipelineComponentInfo componentInfo in componentInfos)
            {
                Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName);
            }
            Console.Read();
        }
    }
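    If you would rather not compile a console application just to look up a creation name, roughly the same enumeration can be done from a PowerShell prompt. This is a sketch, not part of the original post; it assumes the SQL Server 2008 SSIS runtime (the Microsoft.SqlServer.ManagedDTS assembly used above) is installed on the machine.

    # Load the SSIS managed runtime (same assembly the console program references)
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ManagedDTS") | Out-Null

    # Enumerate every registered pipeline component and print Name + CreationName
    $application = New-Object Microsoft.SqlServer.Dts.Runtime.Application
    $application.PipelineComponentInfos |
        Sort-Object Name |
        ForEach-Object { "{0}`t{1}" -f $_.Name, $_.CreationName }

    Piping the output to Out-File gives you a reference list you can keep alongside your build scripts.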

    Read the article

  • Links to my “Best of 2010” Posts

    - by ScottGu
    I hope everyone is having a Happy New Years! 2010 has been a busy blogging year for me (this is the 100th blog post I’ve done in 2010).  Several people this week suggested I put together a summary post listing/organizing my favorite posts from the year.  Below is a quick listing of some of my favorite posts organized by topic area: VS 2010 and .NET 4 Below is a series of posts I wrote (some in late 2009) about the VS 2010 and .NET 4 (including ASP.NET 4 and WPF 4) release we shipped in April: Visual Studio 2010 and .NET 4 Released Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenarios with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 VS 2010 Debugger Improvements (DataTips, BreakPoints, Import/Export) Box Selection and Multi-line Editing Support with VS 2010 VS 2010 Extension Manager (and the cool new PowerCommands Extension) Pinning Projects and Solutions VS 2010 Web Deployment Debugging Tips/Tricks with Visual Studio Search and Navigation Tips/Tricks with Visual Studio Visual Studio Below are some additional Visual Studio posts I’ve done (not in the first series above) that I thought were nice: Download and Share Visual Studio Color Schemes Visual Studio 2010 Keyboard Shortcuts VS 2010 Productivity Power Tools Fun Visual Studio 2010 Wallpapers Silverlight We shipped Silverlight 4 in April, and announced Silverlight 5 the beginning of December: Silverlight 4 Released Silverlight 4 Tools for VS 2010 and WCF RIA Services Released Silverlight 4 Training Kit Silverlight PivotViewer Now Available Silverlight Questions Announcing Silverlight 5 Silverlight for Windows Phone 7 We shipped Windows Phone 7 this fall and shipped free Visual Studio development tools with great Silverlight and XNA support in September: Windows Phone 7 Developer Tools Released Building a Windows Phone 7 Twitter Application using Silverlight ASP.NET MVC We shipped ASP.NET MVC 2 in March, and started previewing ASP.NET MVC 3 this summer.  
ASP.NET MVC 3 will RTM in less than 2 weeks from today: ASP.NET MVC 2: Strongly Typed Html Helpers ASP.NET MVC 2: Model Validation Introducing ASP.NET MVC 3 (Preview 1) Announcing ASP.NET MVC 3 Beta and NuGet (nee NuPack) Announcing ASP.NET MVC 3 Release Candidate 1  Announcing ASP.NET MVC 3 Release Candidate 2 Introducing Razor – A New View Engine for ASP.NET ASP.NET MVC 3: Layouts with Razor ASP.NET MVC 3: New @model keyword in Razor ASP.NET MVC 3: Server-Side Comments with Razor ASP.NET MVC 3: Razor’s @: and <text> syntax ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor ASP.NET MVC 3: Layouts and Sections with Razor IIS and Web Server Stack The IIS and Web Stack teams have made a bunch of great improvements to the core web server this year: Fix Common SEO Problems using the URL Rewrite Extension Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Introducing IIS Express SQL CE 4 (New Embedded Database Support with ASP.NET) Introducing Web Matrix EF Code First EF Code First is a really nice new data option that enables a very clean code-oriented data workflow: Announcing Entity Framework Code-First CTP5 Release Class-Level Model Validation with EF Code First and ASP.NET MVC 3 Code-First Development with Entity Framework 4 EF 4 Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database jQuery and AJAX Contributions My team began making some significant source code contributions to the jQuery project this year: jQuery Templates, Data Link and Globalization Accepted as Official jQuery Plugins jQuery Templates and Data Linking (and Microsoft contributing to jQuery) jQuery Globalization Plugin from Microsoft Patches and Hot Fixes Some useful fixes you can download prior to VS 2010 SP1: Patch for Cut/Copy “Insufficient Memory” issue with VS 2010 Patch for VS 2010 Find and Replace Dialog Growing Patch for VS 2010 Scrolling Context Menu Videos of My Talks Some recordings of technical talks I’ve done this year: ASP.NET 4, ASP.NET MVC, and Silverlight 4 Talks I did in Europe VS 2010 and ASP.NET 4 Web Forms Talk in Arizona Other About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular) ASP.NET Security Fix Now on Windows Update Upcoming Web Camps I’d like to say a big thank you to everyone who follows my blog – I really appreciate you reading it (the comments you post help encourage me to write it).  See you in the New Year! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Raspberry Pi entrance sign backed by Umbraco - Part 1

    - by Chris Houston
    Being experts on all things Umbraco, we jumped at the chance to help our client, QV Offices, with their pressing signage predicament. They needed to display a sign in the entrance to their building and approached us for our advice. Of course it had to be electronic: displaying multiple names of their serviced office clients, meeting room bookings and on-the-pulse promotions. But with a winding Victorian staircase and minimal storage space, how could the monitor be run, updated and managed? That's where we came in...
    - Raspberry Pi
    - Umbraco CMS
    - Automatic updates
    - Automated monitoring of the sign
    - Power saving when the screen is not in use

    Mounting the screen
    The screen that has been used is a standard LED low-energy Full HD screen and has been mounted on the wall using its VESA mounting points. As the wall is a stud wall, we were able to add an access panel behind the screen to feed through the mains, HDMI and sensor cables. The Raspberry Pi is then tucked away out of sight in the main electrical cupboard, which just happens to be next to the sign; we had an electrician add a power point inside this cupboard to allow us to power the screen and the Raspberry Pi.

    Designing the interface and editing the content
    Although a room sign was the initial requirement from QV Offices, their medium-term goal has always been to add online meeting booking to their website, and hence we suggested adding information about the current and next day's meetings to the sign that would be pulled directly from their online booking system. We produced the design and built the web page to fit exactly on a 1920 x 1080 screen (Full HD in portrait). As you would expect, all the information can be edited via an Umbraco CMS; they are able to add floors, rooms, clients and virtual clients as well as add meeting bookings to their meeting diary.

    How we configured the Raspberry Pi
    After receiving a new Raspberry Pi we downloaded the latest release of the Raspbian operating system and followed the official guide which shows how to copy the OS onto an SD card from a Mac. We then followed the majority of steps on this useful guide: 10 Things to Do After Buying a Raspberry Pi.

    Installing Chromium
    We chose to use the Chromium web browser, which, for those who do not know, is the open-sourced version of Google Chrome. You can install this from the terminal with the following command:

    sudo apt-get install chromium-browser

    Installing Unclutter
    We found this little application which automatically hides the mouse pointer; it is used in the script below and is installed using the following command:

    sudo apt-get install unclutter

    Auto-starting Chromium and disabling the screen saver, power saving and mouse
    When the Raspberry Pi has been installed it will not have a keyboard or mouse, and hence if there was a power cut we needed it to always boot and re-load Chromium with the correct URL. Our preferred command line text editor is Nano, and I have assumed you know how to use this editor or will be able to work it out pretty quickly. So, using the following command:

    sudo nano /etc/xdg/lxsession/LXDE/autostart

    We then changed the autostart file content to:

    @lxpanel --profile LXDE
    @pcmanfm --desktop --profile LXDE
    @xscreensaver -no-splash
    @xset s off
    @xset -dpms
    @xset s noblank
    @chromium --kiosk --incognito http://www.qvoffices.com/someURL
    @unclutter -idle 0

    The first few commands turn off the screen saver and power saving; we then open Chromium in kiosk mode (full screen with no menu etc) and pass in the URL to use (I have changed the URL in this example). We found a useful blog post with the Chromium command line switches. Finally we also open an application called Unclutter which auto-hides the mouse after 0 seconds, so you will never see a mouse on the sign.

    We also had to edit the following file:

    sudo nano /etc/lightdm/lightdm.conf

    And added the following line under the [SeatDefault] section:

    xserver-command=X -s 0 dpms

    Refreshing the screen
    We decided to try and add a scheduled task that would trigger Chromium to reload the page; at some point in the future we might well change this to using JavaScript to update the content, but for now this works fine. First we installed the XDOTool, which enables you to script keyboard commands:

    sudo apt-get install xdotool

    We used the Refreshing Chromium Browser by Shell Script post as a reference and created the following shell script (which we called refreshing.sh):

    export DISPLAY=":0"
    WID=$(xdotool search --onlyvisible --class chromium|head -1)
    xdotool windowactivate ${WID}
    xdotool key ctrl+F5

    This selects the correct display and then sends a CTRL + F5 to refresh Chromium. You will need to give this file execute permissions:

    chmod a=rwx refreshing.sh

    Now we have the script file set up, we just need to schedule it to be called periodically, which is done using crontab; to edit this you use the following command:

    crontab -e

    And we added the following:

    */5 * * * * DISPLAY=":.0" /home/pi/scripts/refreshing.sh >/home/pi/cronlog.log 2>&1

    This calls our script every 5 minutes to refresh the display, and it logs any errors to the cronlog.log file.

    Summary
    QV Offices now have a richer and more manageable booking system than they did before we started, and a great new sign to boot. How could we make sure that the sign was running smoothly downstairs in a busy office centre? A second post will follow outlining exactly how Vizioz enabled QV Offices to monitor their sign simply and remotely, from the comfort of their desks.

    Read the article

  • SharePoint.DesignFactory.ContentFiles–building WCM sites

    - by svdoever
    One of the use cases where we use the SharePoint.DesignFactory.ContentFiles tooling is in building SharePoint Publishing (WCM) solutions for SharePoint 2007, SharePoint 2010 and Office365. Publishing solutions are often solutions that have one instance, the publishing site (possibly with subsites), that in most cases needs to go through DTAP. If you dissect a publishing site, in most cases you have the following findings:

    - The publishing site spans a site collection
    - The branding of the site is specified in the root site, because:
      - Master pages live in the root site (/_catalogs/masterpage)
      - Page layouts live in the root site (/_catalogs/masterpage)
      - The style library lives in the root site (/Style Library) and contains images, css, javascript, xslt transformations for your CQWP's, ...
      - Preconfigured web parts live in the root site (/_catalogs/wp)
    - The root site and subsites contain a document library called Pages (or your language-specific version of it) containing publishing pages using the page layouts and master pages
    - The site collection contains content types, fields and lists

    When using the SharePoint.DesignFactory.ContentFiles tooling it is very easy to create, test, package and deploy the artifacts that can be uploaded to the SharePoint content database. This can be done in a fast and simple way without the need to create and deploy WSP packages. If we look at the above list of artifacts, we can use SharePoint.DesignFactory.ContentFiles for master pages, page layouts, the style library, web part configurations, and initial publishing pages (these are normally made through the SharePoint web UI). Some artifacts like content types, fields and lists in the above list can NOT be handled by SharePoint.DesignFactory.ContentFiles, because they can't be uploaded to the SharePoint content database. The good thing is that these artifacts are the artifacts that don't change that much in the development of a SharePoint Publishing solution. There are however multiple ways to create these artifacts:

    1. Use paper script: create them manually in each of the environments based on documentation
    2. Automate the creation of the artifacts using (PowerShell) script
    3. Develop a WSP package to create these artifacts

    I'm not a big fan of the third option (see my blog post Thoughts on building deployable and updatable SharePoint solutions). It is a lot of work to create content types, fields and list definitions using all kinds of XML files, and it is not allowed to modify these artifacts when in use. I know... SharePoint 2010 has some content type upgrade possibilities, but I think it is just too cumbersome. The first option has the problem that content types and fields get IDs, and that these IDs must be used by the metadata on, for example, page layouts. No problem for SharePoint.DesignFactory.ContentFiles, because it supports deploy-time resolving of these IDs using PowerShell. For example, consider the following metadata definition for the page layout contactpage-wcm.aspx.properties.ps1:

    Metadata page layout:

    # This script must return a hashtable @{ name=value; ... } of field name-value pairs
    # for the content file that this script applies to.
    # On deployment to SharePoint, these values are written as fields in the corresponding list item (if any)
    # Note that fields must exist; they can be updated but not created or deleted.
    # This script is called right after the file is deployed to SharePoint.
    # You can use the script parameters and arbitrary PowerShell code to interact with SharePoint,
    # e.g. to calculate properties and values at deployment time.

    param([string]$SourcePath, [string]$RelativeUrl, $Context)
    @{
        "ContentTypeId" = $Context.GetContentTypeID('GeneralPage');
        "MasterPageDescription" = "Cloud Aviator Contact pagelayout (wcm - don't use)";
        "PublishingHidden" = "1";
        "PublishingAssociatedContentType" = $Context.GetAssociatedContentTypeInfo('GeneralPage')
    }

    The PowerShell functions GetContentTypeID and GetAssociatedContentTypeInfo can resolve, at deploy time, the required information from the server we are deploying to. I personally prefer the second option: automate creation through PowerShell, because there are PowerShell scripts available to export content types and fields. An example project structure for a typical SharePoint WCM site is shown in the original post (screenshot not reproduced here); note that this project uses DualLayout. So if you build Publishing sites using SharePoint, check out the completely free SharePoint.DesignFactory.ContentFiles tooling and start flying!
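    To make the second option a little more concrete, here is a minimal sketch (not from the original post) of provisioning a field and a content type with the SharePoint 2010 server object model from PowerShell. The site URL, field name and content type name are hypothetical placeholders; SharePoint 2007 would need Microsoft.SharePoint.dll loaded and SPSite/SPWeb constructed directly, and Office365 would need the client object model instead.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $web = Get-SPWeb "http://intranet/sites/publishing"   # placeholder URL
    try
    {
        # Create a site column (field) from CAML
        $fieldId = [guid]::NewGuid()
        $fieldXml = "<Field ID='{$fieldId}' Type='Text' Name='PageSubtitle' StaticName='PageSubtitle' DisplayName='Page Subtitle' Group='WCM Columns' />"
        $web.Fields.AddFieldAsXml($fieldXml) | Out-Null

        # Create a content type derived from the built-in publishing Page content type and link the field to it
        $parent = $web.AvailableContentTypes["Page"]
        $ct = New-Object Microsoft.SharePoint.SPContentType($parent, $web.ContentTypes, "GeneralPage")
        $ct = $web.ContentTypes.Add($ct)
        $ct.FieldLinks.Add((New-Object Microsoft.SharePoint.SPFieldLink($web.Fields.GetFieldByInternalName("PageSubtitle"))))
        $ct.Update()
    }
    finally
    {
        $web.Dispose()
    }

    Because the page layout metadata above refers to the content type only by name ('GeneralPage'), the tooling's deploy-time resolving still works regardless of which environment this script was run against.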

    Read the article

  • UPK 3.6.1 (is Coming)

    - by marc.santosusso
    In anticipation of the release of UPK 3.6.1, I'd like to briefly describe some of the features that will be available in this new version.

    Topic Editor in Tabs: Topic Editors now open in tabs instead of separate Developer windows. This offers several improvements: first, the bubble editor can be docked and resized in the same way as other editor panes. That's right, you can resize the bubble editor! The second enhancement that this change brings is an improved undo and redo which allows each action to be undone and redone in the Topic Editor.

    New Sound Editor: The topic and web page editors include a new sound editor with all the bells and whistles necessary to record, edit, import, and export sound. Sound can be captured during topic recording--which is great for a Subject Matter Expert (SME) to narrate what they're recording--or after the topic has been recorded. Sound can also be added to web pages and played on the concept panes of modules, sections and topics.

    Turn off bubbles in Topics: Authors may opt to hide bubbles either per frame or for an entire topic, when you want to draw a user's attention to the content on the screen instead of the bubble. This feature works extremely well in conjunction with the new sound capabilities. For instance, consider recording conceptual information with narration and no bubbles.

    Presentation Output: UPK content can be published as a Presentation in Microsoft PowerPoint format. Publishing for Presentation will create a presentation for each topic published. The presentation template can be customized using the same methods offered for the UPK document outputs, allowing your UPK-generated presentations to match your corporate branding.

    Autosave and Recovery: The Developer will automatically save your work as often as you would like. This affords authors the ability to recover these automatically saved documents if their system or UPK were to close unexpectedly. The Developer defaults to saving open documents every ten minutes.

    Package Editor Enhancement: Files in packages will now open in the associated application when double-clicked. Authors can also choose "Open with..." from the context menu (AKA right-click menu).

    See It! Window: See It! mode may now be launched in a non-fullscreen window. This is available from the kp.html file in any Player package. This version of See It! mode offers on-screen navigation controls including previous frame, next frame, pause, etc.

    Firefox Enhancements: The UPK Player will now offer both Do It! mode and sound playback when viewed using the Firefox web browser.

    Player Support for Safari: The UPK Player is now fully supported on the Safari web browser for both Mac OS and Windows platforms.

    Keep document checked out: Authors may choose to keep a document checked out when performing a check in. This allows an author to have a new version created on the server and continue editing.

    Close button on individual tabs: A close button has been added to the tabs, making it easier to close a specific tab.

    Outline Editor Enhancements: Authors will have the option to prevent concepts from immediately displaying in the Developer when an outline item is selected. This makes it faster to move around in the outline editor.

    Tell us which feature you're most excited to use in the comments.

    Read the article

  • USB software protection dongle for Java with an SDK which is cross-platform “for real”. Does it exist?

    - by Unai Vivi
    What I'd like to ask is if anybody knows about a hardware USB dongle for software protection which offers very complete out-of-the-box API support for cross-platform Java deployments. Its SDK should provide a jar (only one, not one different library per OS & bitness) ready to be added to one's project as a library. The jar should contain all the native stuff for the various OSes and bitnesses. From the application's point of view, one should continue to write (API calls) once and run everywhere, without having to care where the end-user will run the software. The provided jar should itself deal with loading the appropriate native library. Does such a thing exist?

    With what I've tried so far, you have different APIs and compiled libraries for win32, linux32, win64, linux64, etc. (or you even have to compile stuff yourself on the target machine), but hey, we're doing Java here, we don't know (and don't care) where the program will run! And we can't expect the end-user to be a software engineer, tweak (and break!) their Linux server, link libraries, mess with gcc, litter the filesystem, etc... In general, Java support (in a transparent cross-platform fashion) is quite bad with the dongle SDKs I've evaluated so far (e.g. KeyLok and SecuTech's UniKey). I even purchased (no free evaluation kit available) SecureMetric SDKs & dongles (they should've been "soooo" straightforward to integrate -- according to the marketing material :\ ) and they were the worst ever: SecureDongle X has no 64-bit support and SecureDongle SD is not cross-platform at all. So, has anyone out there been through this and found the ultimate Java security USB dongle for cross-platform deployments? Note: the software is low-volume, high-value; the application is off-line (intranet with no internet access), so no online-activation alternatives and the like.

    -- EDIT: Tried out HASP dongles (used to be called "Aladdin"), and added them to the no-no list: here, too, there is no out-of-the-box (out-of-the-jar) support: e.g. the end Linux user has to manually put the .so library (the specific file for the appropriate bitness) in the right place on his filesystem, and export an env. variable accordingly.

    -- EDIT 2: I really don't understand all the negativity and all the downvoting: is this a taboo topic? Is it so hard to understand that a freelance developer has to put food on the table every day to feed their family and pay the bills at the end of the month? Please don't talk about "adding value" as a supplier, because that'd be off-topic. Furthermore, I'm not in direct contact with end-customers, but there's an intermediate reselling entity: it's this entity I want to prevent from selling copies of the software without sharing the revenue.

    -- EDIT 3: I'd like to emphasize the fact that the question is looking for a technical answer, not one about opinions concerning business models, philosophical lucubrations on the concept of value, resellers' reliability, etc. I cannot change resellers, because this isn't a "general purpose" kind of sw, but a very vertical one, and (for some reasons it's not worth explaining here) I must go through them. I just need to prevent the "we sold 2 copies, here's your share [bwahaha we sold 10]" scenario.

    Read the article

  • How to reproject a shapefile from WGS 84 to Spherical/Web Mercator projection.

    - by samkea
    Definitions: You will need to know the meaning of the terms below. I have given a small description of the acronyms, but you can google and learn more about them.

    #1: WGS-84 - World Geodetic System (1984) - is a standard reference coordinate system used for Cartography, Geodesy and Navigation.

    #2: EPSG - European Petroleum Survey Group - was a scientific organization with ties to the European petroleum industry consisting of specialists working in applied geodesy, surveying, and cartography related to oil exploration. EPSG::4326 is a common coordinate reference system that refers to WGS84 as (latitude, longitude) pair coordinates in degrees with Greenwich as the central meridian. Any degree representation (e.g., decimal or DMSH: degrees minutes seconds hemisphere) may be used. Which degree representation is used must be declared for the user by the supplier of data. The Spherical/Web Mercator projection is referred to as EPSG::3785, which is renamed to EPSG:900913 by Google for use in Google Maps. The associated CRS (Coordinate Reference System) for this is the "Popular Visualisation CRS / Mercator". This is the kind of projection that is used by GoogleMaps, BingMaps, OSM, Virtual Earth, Deep Earth et cetera... to show interactive maps over the web with their nearly precise coordinates.

    Reprojection: After reading a lot about reprojecting my coordinates from the DeepEarth project on CodePlex, I still could not do it. After some help from a colleague, I got the ball rolling. This is how I did it.

    #1 You need to download and open your shapefile using Q-GIS; it's the one with the biggest number of coordinate reference systems/projections.

    #2 Use the plugins menu, and enable ftools and the WFS plugin.

    #3 Use the Vector menu --> Data Management Tools and choose "define current projection". Enable "use predefined reference system" and choose the WGS 84 coordinate system. I am personally in zone 36, so I chose WGS84-UTM Zone 36N under (Projected Coordinate Systems --> Universal Transverse Mercator) and clicked OK.

    #4 Now use the Vector menu --> Data Management Tools and choose "export to new projection". The same dialog will pop up. Now choose WGS 84 EPSG::4326 under Geodetic Coordinate Systems. My input user-defined Spatial Reference System looks like this:
    +proj=tmerc +lat_0=0 +lon_0=33 +k=0.9996 +x_0=500000 +y_0=200000 +ellps=WGS84 +datum=WGS84 +units=m +no_defs
    Your output user-defined Spatial Reference System should look like this:
    +proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs
    Browse for the place where the shapefile is going to be and give the shapefile a name (like original_reprojected). If it prompts you to add the projected layer to the TOC, accept. There, you have your re-projected map with a latitude and longitude pair of coordinates.

    #5 Now, this is not the actual Spherical/Web Mercator projection, but don't worry, this is where you have to stop. All the other custom web-mapping portals will pick this projection and transform it into EPSG::3785 or EPSG:900913, but the coordinates will still remain as the LatLon pair of the projected shapefile. If you want to test a particular known point, Q-GIS has a lot of room for that. Go ahead and test it.
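    If you prefer a command line over the Q-GIS click-path, the same reprojection can be done with GDAL's ogr2ogr tool (not covered in the original post; it assumes GDAL/OGR is installed, and the file names are placeholders). Run from PowerShell or any other shell; the result is the same EPSG::4326 lat/lon shapefile as step #4 above.

    # Reproject original.shp (WGS 84 / UTM zone 36N, EPSG:32636) to WGS 84 lat/lon (EPSG:4326).
    # If the source .prj file is present and correct, -s_srs can be omitted.
    ogr2ogr -s_srs EPSG:32636 -t_srs EPSG:4326 original_reprojected.shp original.shp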

    Read the article

  • Integrate Google Docs with Outlook the Easy Way

    - by Matthew Guay
    Want to use Google Docs and Microsoft Office together? Here's how you can use Harmony for Google Docs to integrate them seamlessly with Outlook. Harmony for Google Docs is an exciting new plugin for Outlook 2007 (a version for Outlook 2010 is in the works). It lets you integrate your Google Docs account with Outlook via a sidebar. From this, you can find any of your Google docs or upload a new document, and then you can open the document to view or edit it in Outlook.

    Getting Started: Download Harmony for Google Docs (link below), and install as normal. Make sure Outlook is closed before you run the install. Next time you open Outlook, the new Harmony sidebar will automatically open. Enter your Google Account info, and click Sign In. Now, all of your Google Docs will show up in the sidebar. Double-click any file to open it in Outlook. You may have to sign in to Google Docs the first time you open a document. Here's a Google Doc open in Outlook. Notice that everything works, including full editing.

    All Google Docs features worked great in our tests except for the new drawings tool. When we tried to insert a drawing, Outlook had a script error. This was the only problem we had with Harmony, and could be due to an interaction between Google Drawings and Outlook's rendering engine. Harmony makes it easy to find any file in your Google Docs account. You can search for a file, or sort your files by type, recentness, and more. You can also easily add any document to Google Docs directly from Harmony. You can drag and drop any document, including one attached to an email, to the Harmony sidebar, and it will upload directly to your Google Docs account. And, when you're writing a new email or reply, click the Show Documents button to open the Harmony sidebar. From here, you can add documents as usual and share them with the email recipients.

    Conclusion: We previously covered a similar app, OffiSync, which combines Google Docs features with MS Office. However, Harmony makes it much easier to use Google Apps along with Outlook. This gives you an easy and efficient way to collaborate on documents with coworkers, all without leaving Outlook. And, if your company uses SharePoint instead of Google Docs, Harmony offers a SharePoint edition that integrates with Outlook just as easily!

    Link: Download Harmony for Google Docs

    Read the article

  • Fusion CRM ISV program is gaining weight: Examples of certified add-ons

    - by Richard Lefebvre
    The Fusion CRM ISV program is gaining traction. Please find below a few examples of partners that have certified their add-ons to work seamlessly on top of Oracle Fusion CRM. For more information, please contact [email protected]

    · Opportunity-to-Quote. Big Machines now integrates seamlessly to Oracle Fusion CRM, enabling customers with complex products and services and multiple sales channels to streamline the entire opportunity-to-quote process, including product selection, configuration, pricing, quoting, and approval workflows. Create a custom hyperlink in the Opportunity to invoke the Big Machines CPQ application to create a quote and sync up with the Fusion CRM custom quote object using the CRUD operations. The quote can be updated using the custom button in the custom tab in the opportunity details. See: http://www.bigmachines.com/oracle.php

    · SaaS Billing and Subscription Management. Is your prospect/customer asking whether top billing partners support Fusion CRM? Positioning an integrated CRM solution for billing usage and subscription-based services? Need to implement a billable solution on the Oracle Java Cloud Service? Aria Systems and Zuora have recently engaged with Oracle to deepen their integrations to Fusion CRM and team with Oracle for joint opportunities.

    · Google Apps, SharePoint, Email-CRM Integrations
    o Do your prospects use Google Apps in their business operations? A "Best of AppExchange" award winner recently completed their integration for Fusion CRM. CirrusInsight plugs Fusion CRM web services directly into Gmail, allowing you to search an existing opportunity or contact, provide account information, and create an interaction such as a phone call, appointment, or email against a customer or contact in Fusion CRM directly from Gmail.
    o An EMEA / France based partner, Aryvart provides bi-directional synchronization of appointments and tasks between Google calendar and Oracle Fusion CRM. For customers, it means adopting Oracle Fusion CRM while continuing to use Google calendar for appointments.
    o Looking to lower the barrier and expand in SharePoint accounts? InFact Group (EMEA / France & Germany) provides a Microsoft SharePoint Connector for Oracle Fusion CRM. With this solution, you can store documents attached to an opportunity in a Microsoft SharePoint repository. For customers, it means adopting Oracle Fusion CRM while continuing to collaborate across existing content management infrastructure.
    o Need to connect to MacMail, GroupWise, or Outlook/Exchange? Omni Technology is a partner whose Riva CRM Integration recently engaged to support Fusion CRM as a key platform.

    · Migration Tools from competitive CRMs to Oracle Fusion CRM. Data Migration Tools from legacy CRMs to Oracle Fusion CRM. A partner with the tools and techniques to speed adoption, Conemis provides data integration tools to export data from a legacy CRM and import it into Oracle Fusion CRM via WebServices APIs. For customers, it means reducing the cost of data migration from a legacy CRM system into Oracle Fusion CRM.

    Read the article

  • Rotate a Video 90 degrees with VLC or Windows Live Movie Maker

    - by DigitalGeekery
    Have you ever captured video with your cell phone or camcorder only to discover when you play it back on your computer that the video is rotated 90 degrees? Or maybe you shot it that way on purpose because you preferred portrait style to a landscape view? Before you go straining your neck or flipping your monitor on it’s side to watch your video, we’ll show you a few easier methods. If you simply want to rotate the video while you watch it, we’ll show you how to accomplish that with VLC Media Player. If you want to convert the video so it is rotated permanently, we’ll show you how to do that with Windows Live Movie Maker and output your video as a WMV file. Rotate and Watch a Video in VLC Download, install, and run VLC Media Player. (See download link below)   Open your video file by going to Media  > Open File… and browsing for your file. Or, by just dragging and dropping your video onto the VLC player.   Choose Tools from the Menu bar and select Effects and Filters. On the Video Effects tab, tick the Transform checkbox and choose your degrees of rotation. The video is rotated counter-clockwise, so to rotate clockwise 90 degrees you’ll want to choose Rotate by 270 degrees.   Now you can enjoy your video the way it was intended to be viewed. Rotate and Convert the Video with Windows Live Movie Maker Starting with Windows 7, Windows Movie Maker no longer comes pre-installed with the OS. It’s now part of the Windows Live suite that is available as a separate, free download for Windows 7 and Vista. (Windows XP is not supported) You can find the link to our detailed instruction on how to install Windows Live at the end of the article. To add your video files to Windows Movie Maker, click on Add videos and photos on the Home tab, or drag and drop the video into the blank area on the right side of the application. Next, you’ll need to rotate the video. Staying on the Home tab, click on the Rotate right 90° or Rotate left 90°.   You’ll see your video is now oriented properly on the left.   To save and convert your video to WMV format, click the Movie Maker tab just to the left of the Home tab. Hover your cursor over Save movie, and then select your output settings. You also have the option to burn directly to DVD. Browse for a location to save it and rename the output file if you’d like. Click Save. You’ll be notified when the file is complete. Now you’ll have your video properly oriented in WMV file format.   These are two rather easy ways to accomplish rotating your video. Unfortunately, Windows Live Movie Maker doesn’t give you a lot of  options for output. If you want to output to a file, your only choice is WMV format or DVD. However, previous versions will also allow you to export to AVI. How-To Geek’s Install Windows Live Essentials In Windows 7 Article. 
    Download Windows Live
    Download VLC Media Player

    Read the article

  • How to Restore the Real Internet Explorer Desktop Icon in Windows 7

    - by The Geek
    Remember how previous versions of Windows had an Internet Explorer icon on the desktop, and you could right-click it to quickly access the Internet Options screen? It’s completely gone in Windows 7, but a geeky hack can bring it back. Microsoft removed this feature to comply with all those murky legal battles they’ve had, and their alternate suggestion is to create a standard shortcut to iexplore.exe on the Desktop, but it’s not the same thing. We’ve got a registry hack to bring it back. This guest article was written by Ramesh from the WinHelpOnline blog, where he’s got loads of really geeky registry hacks. Bring Back the Internet Explorer Namespace Icon in Windows 7 the Easy Way If you just want the IE icon back, all you need to do is download the RealInternetExplorerIcon.zip file, extract the contents, and then double-click on the w7_ie_icon_restore.reg file. That’s all you have to do. There’s also an undo registry file there if you want to get rid of it. Download the Real Internet Explorer Icon Registry Hack Manual Registry Hack If you prefer doing things the manual way, or just really want to understand how this hack works, you can follow through the manual steps below to learn how it was done, but we’ll have to warn you that it’s a lot of steps. Launch Regedit.exe using the Start Menu search box, and then navigate to the following location: HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30309D} Right-click on the key on the left-hand pane, choose Export, and save it to a .REG file (say, ie-guid.reg) Open up the REG file using Notepad… From the Edit menu, click Replace, and replace every occurrence of the following GUID string {871C5380-42A0-1069-A2EA-08002B30309D} … with a custom GUID string, such as: {871C5380-42A0-1069-A2EA-08002B30301D} Save the REG file and close Notepad, and then double-click on the file to merge the contents to the registry. Either re-open the registry editor, or use the F5 key to reload everything with the new changes (this step is important). Now you can navigate downto the following registry key: HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30301D} \ Shellex \ ContextMenuHandlers \ ieframe Double-click on the (default) key in the right-hand pane and set its data as: {871C5380-42A0-1069-A2EA-08002B30309D} With this done, press F5 on the desktop and you’ll see the Internet Explorer icon that looks like this: The icon appears incomplete without the Properties command in right click menu, so keep reading. Final Registry Hack Adjustments Click on the following key, which should still be viewable in your Registry editor window from the last step. HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D} Double-click LocalizedString in the right-hand pane and type the following data to rename the icon. 
    Internet Explorer

    Select the following key:

    HKEY_CLASSES_ROOT\CLSID\{871C5380-42A0-1069-A2EA-08002B30301D}\shell

    Add a subkey and name it as Properties, then select the Properties key, double-click the (default) value and type the following:

    P&roperties

    Create a String value named Position, and type the following data:

    bottom

    At this point the window should look something like this: Under Properties, create a subkey and name it as Command, and then set its (default) value as follows:

    control.exe inetcpl.cpl

    Navigate down to the following key, and then delete the value named LegacyDisable:

    HKEY_CLASSES_ROOT \ CLSID \ {871C5380-42A0-1069-A2EA-08002B30301D} \ shell \ OpenHomePage

    Now head to this key:

    HKEY_LOCAL_MACHINE \ SOFTWARE \ Microsoft \ Windows \ CurrentVersion \ Explorer \ Desktop \ NameSpace

    Create a subkey named {871C5380-42A0-1069-A2EA-08002B30301D} (which is the custom GUID that we used earlier in this article). Press F5 to refresh the Desktop, and here is how the Internet Explorer icon finally looks. That's it! It only took 24 steps, but you made it through to the end. Of course, you could just download the registry hack and get the icon back with a double-click.
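    If you script your tweaks, the manual steps can be approximated from an elevated PowerShell prompt. This is only a sketch, not the article's method: the export/find-and-replace described above also rewrites occurrences of the old GUID inside the copied values, which a plain key copy does not, so compare the result against the manual steps before relying on it.

    # Expose HKEY_CLASSES_ROOT as a drive (HKLM: is available by default)
    New-PSDrive -Name HKCR -PSProvider Registry -Root HKEY_CLASSES_ROOT | Out-Null

    $oldGuid = '{871C5380-42A0-1069-A2EA-08002B30309D}'   # real Internet Explorer CLSID
    $newGuid = '{871C5380-42A0-1069-A2EA-08002B30301D}'   # custom CLSID used by this hack

    # Duplicate the IE CLSID key tree under the custom GUID
    Copy-Item "HKCR:\CLSID\$oldGuid" "HKCR:\CLSID\$newGuid" -Recurse

    $key = "HKCR:\CLSID\$newGuid"
    Set-ItemProperty "$key\Shellex\ContextMenuHandlers\ieframe" -Name '(default)' -Value $oldGuid
    Set-ItemProperty $key -Name LocalizedString -Value 'Internet Explorer'

    # Add the Properties context menu entry that opens Internet Options
    New-Item "$key\shell\Properties\Command" -Force | Out-Null
    Set-ItemProperty "$key\shell\Properties" -Name '(default)' -Value 'P&roperties'
    New-ItemProperty "$key\shell\Properties" -Name Position -Value 'bottom' -PropertyType String | Out-Null
    Set-ItemProperty "$key\shell\Properties\Command" -Name '(default)' -Value 'control.exe inetcpl.cpl'

    # Re-enable Open Home Page and place the icon in the desktop namespace
    Remove-ItemProperty "$key\shell\OpenHomePage" -Name LegacyDisable -ErrorAction SilentlyContinue
    New-Item "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Desktop\NameSpace\$newGuid" -Force | Out-Null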

    Read the article

  • Closer look at the SOA 12c Feature: Oracle Managed File Transfer

    - by Tshepo Madigage-Oracle
    The rapid growth of cloud-based applications in the enterprise, combined with organizations' desire to integrate applications with mobile technologies, is dramatically increasing application integration complexity. To meet this challenge, Oracle introduced Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution. With simplified cloud, mobile, on-premises, and Internet of Things (IoT) integration capabilities, all within a single platform, Oracle SOA Suite 12c helps organizations speed time to integration, improve productivity, and lower TCO. To extend its B2B solution capabilities with Oracle SOA Suite 12c, Oracle unveiled Oracle Managed File Transfer, an integrated solution that enables organizations to virtually eliminate file transfer complexities. This allows customers to load data securely into Oracle Cloud applications as well as third-party cloud or partner applications. Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal departments and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use especially for non technical staff so you can leverage more resources to manage the transfer of files. The extensive reporting capabilities allow you to get quick status of a file transfer and resubmit it as required. You can protect data in your DMZ by using the SSH/FTP reverse proxy. Oracle Managed File Transfer can help integrate applications by transferring files between them in complex use case patterns. Standalone: Transferring files on its own using embedded FTP and sFTP servers and the file systems to which it has access. SOA Integration: a SOA application can be the source or target of a transfer. A SOA application can also be the common endpoint for the target of one transfer and the source of another. B2B Integration: B2B application can be the source or target of a transfer. A B2B application can also be the common endpoint for the target of one transfer and the source of another. Healthcare Integration:  Healthcare application can be the source or target of a transfer. A Healthcare application can also be the common endpoint for the target of one transfer and the source of another. Oracle Service Bus (OSB) integration: OMT can integrate with Oracle Service Bus web service interfaces. OSB interface can be the source or target of a transfer. An Oracle Service Bus interface can also be the common endpoint for the target of one transfer and the source of another. Hybrid Integration: can be one participant in a web of data transfers that includes multiple application types. Oracle Managed File Transfers has four user roles: file handlers, designers, monitors, and administrators. File Handlers: - Copy files to file transfer staging areas, which are called sources. - Retrieve files from file transfer destinations, which are called targets. Designers: - Create, read, update and delete file transfer sources. - Create, read, update and delete file transfer targets. - Create, read, update and delete transfers, which link sources and targets in complete file delivery flows. - Deploy and test transfers. Monitors: - Use the Dashboard and reports to ensure that transfer instances are successful. - Pause and resume lengthy transfers. - Troubleshoot errors and resubmit transfers. - View artifact deployment details and history. - View artifact dependence relationships. 
    - Enable and disable sources, targets, and transfers.
    - Undeploy sources, targets, and transfers.
    - Start and stop embedded FTP and sFTP servers.

    Administrators:
    - All file handler tasks
    - All designer tasks
    - All monitor tasks
    - Add other users and determine their roles
    - Configure user directory permissions
    - Configure the Oracle Managed File Transfer server
    - Configure embedded FTP and sFTP servers, including security
    - Configure B2B and Healthcare domains
    - Back up and restore the Oracle Managed File Transfer configuration
    - Purge transferred files and instance data
    - Archive and restore instance data and payloads
    - Import and export metadata

    You will find all the related information about the SOA 12.1.3 Oracle Managed File Transfer (MFT) in the documentation: Using Oracle Managed File Transfer. Resources and links: Oracle Unveils Oracle SOA Suite 12c; Oracle Managed File Transfer; Oracle Managed File Transfer SOA 12c White Paper. For further enquiries don't hesitate to contact us at [email protected] and join our Partner Webcast on Oracle SOA Suite 12c.

    Read the article

  • SQLAuthority News – Follow up on – Replace a Column Name in Multiple Stored Procedure all together

    - by pinaldave
    Last month I had a fantastic time with lots of puzzles and brain teasers; the amount of participation I have received on the blog is indeed inspiring to write more. One of the blog posts was about how to replace a column name in all the stored procedures. The article sparked a very interesting conversation as a follow-up. Please read the original article Replace a Column Name in Multiple Stored Procedure all together before reading this blog further, as they are connected. Let us start with a few of the interesting comments. SQL Server expert Imran Mohammed posted an excellent first note. I suggest all of you read it. Imran stresses Data Modelling and Logical as well as Physical Design. Developers must create a logical design and get approval on naming conventions, data types, references, constraints, indexes etc. He further suggested that one should not cut steps but must follow all the industry standards and guidelines. He extended my blog post with the following note – "Extending Pinal's answer, what you can do is go to database properties, all tasks, script objects; in the scripting wizard select all the stored procedures for which you want to change the column name, export the query to a new window and then do find and replace, all in one window, and execute the script. But make sure you check what you are replacing; sometimes column names are also used in table names, for example: Table Name: Product and Column Name: ProductId, ProductName". Thanks Imran, great points! Gatej Alexandru suggested that it is not a good idea to DROP or CREATE, but rather to use ALTER, as quite possibly there may be permission issues as well. Very good point; let me see if I can write a blog post about it. Vinay Kumar and SQLStudent144 proposed another method to achieve the same. I am combining their solutions and writing them here.
    Step 1. Press Ctrl+T or change to "Results to Text" mode.
    Step 2. Execute the command below: SELECT 'EXEC sp_helptext [' + referencing_schema_name + '.' + referencing_entity_name + ']' FROM sys.dm_sql_referencing_entities('schema.objectname','OBJECT') where schema.objectname is the object or table you are searching for.
    Step 3. Now copy the result and paste it into a new window. Again press Ctrl+T or change to "Results to Text" mode.
    Step 4. Copy the result and paste it into a new window. Execute the query.
    Step 5. Copy the result and paste it into a new window.
    Step 6. Now find the text you are searching for in the script, make the necessary changes and execute the script. Do not forget to remove any code generated in the result set that is not relevant to the T-SQL script.
    Digitqr suggests we can do this for other objects besides stored procedures as well. Iosif suggests using the SQL Search tool from Red Gate. I guess this sums it up well. We have an alternative perspective on our original issue of replacing the column name in multiple stored procedures. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
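    For readers who want the commenters' steps in one place, here is a minimal T-SQL sketch of the approach. The table name dbo.Product is a placeholder for this illustration, not something from the original discussion:

        -- Step A: list every object that references the table whose column is being renamed,
        -- and generate an sp_helptext call for each one (run with "Results to Text", Ctrl+T).
        SELECT 'EXEC sp_helptext [' + referencing_schema_name + '.' + referencing_entity_name + ']'
        FROM sys.dm_sql_referencing_entities('dbo.Product', 'OBJECT');

        -- Step B: paste the generated EXEC statements into a new query window, still in
        -- "Results to Text" mode, and run them to get the full definition of each procedure.

        -- Step C: in the resulting script, change CREATE PROCEDURE to ALTER PROCEDURE (per
        -- Gatej Alexandru's comment), replace the old column name with the new one, remove
        -- any result-set noise that is not valid T-SQL, and execute the edited script.

    Treat this only as an outline of the suggestions above; always review the generated scripts before running them against a production database.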

    Read the article

  • Getting UPK data into Excel

    - by maria.cozzolino(at)oracle.com
    Did you ever want someone to review your UPK outline outside of the Developer? You can send your outline to an Excel report, which can be distributed through email. Depending on how much additional data you want with your outline, there are two ways you can do this task.
    Basic data:
    • You can print a listing of all the items in the outline.
    • With your outline open, choose File/Print...
    • Choose the "Save document as" command on the right, and choose Excel (or xlsx).
    • HINT: If you have not expanded your entire outline, it's faster to use the commands in Developer to expand the entire outline. However, you can expand specific sections by clicking on them in the print preview.
    • NOTE: If you have the Details view displayed rather than the Player view, you can print all the data that appears in that view.
    Advanced data: If you desire a more detailed report, you can use the HP Quality Center publishing style, which also creates an Excel file. This style contains a default set of fields for use with Quality Center, but any of the metadata fields can be added to the report, and it can be used for more than just importing into HP Quality Center. To add additional columns to the HP Quality Center publishing style:
    1. Make a copy of the publishing style. This process ensures that you have a good copy to revert to if something goes wrong with your customizations, and it also allows you to keep your modifications when the software is upgraded.
    2. Open the copy of the columnspec.xml file in your favorite XML editor - I use Notepad. (This file is located in a language-specific folder in the HP Quality Center publishing style.)
    3. Scroll down the columnspec file until you find the column to include. All the metadata fields that can be added to the report are listed in the columnspec file - you just need to tell the system to include the columns.
    4. You will see a series of column definition sections, one for each metadata field; locate the section for the column you want to include.
    5. Change the value for "col export" to "yes". This will include the column in the Excel file.
    6. If desired, change the value of the column heading entry (for example, "Play_ModesColHeader") to whatever name you wish to appear in the Excel column heading.
    7. Save the columnspec file.
    8. Save the publishing style package.
    Now, when you publish for HP Quality Center, you will see your newly added columns. You can refer to the section on Customizing HP Quality Center Output in the Content Deployment Guide for additional customization details. Happy customization! I'd be interested in hearing what other uses you have for Excel reporting. Wishing you and yours a happy and healthy New Year! ~~Maria Cozzolino, Manager of Software Requirements and UI

    Read the article

  • MySQL for Excel 1.3.0 Beta has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.3.0. This is a beta release for 1.3.x. MySQL for Excel is an application plug-in enabling data analysts to very easily access and manipulate MySQL data within Microsoft Excel. It enables you to work directly with a MySQL database from within Microsoft Excel, so you can easily do tasks such as:
    Importing MySQL data into Excel
    Exporting Excel data directly into MySQL, to a new or existing table
    Editing MySQL data directly within Excel
    As this is a beta version, the MySQL for Excel product can be downloaded only by using the standalone product installer at this link: http://dev.mysql.com/downloads/windows/excel/ Your feedback on this beta version is very much appreciated; you can raise bugs on the MySQL bugs page or give us your comments on the MySQL for Excel forum.
    Changes in MySQL for Excel 1.3.0 (2014-06-06, Beta) This section documents all changes and bug fixes applied to MySQL for Excel since the release of 1.2.1. Several new features were added; for more information see What Is New In MySQL for Excel (http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel-what-is-new.html).
    Known limitations:
    Upgrading from MySQL for Excel versions 1.2.0 and lower is not possible due to a bug fixed in MySQL for Excel 1.2.1. In that scenario, the old version (MySQL for Excel 1.2.0 or lower) must be uninstalled first. Upgrading from version 1.2.1 works correctly.
    <CTRL> + <A> cannot be used to select all database objects. Either <SHIFT> + <Arrow Key> or <CTRL> + click must be used instead.
    PivotTables are normally placed to the right (skipping one column) of the imported data; they will not be created if there is another existing Excel object at that position.
    Functionality Added or Changed:
    Imported data can now be refreshed by using the native Refresh feature. Fields in the imported data sheet are then updated against the live MySQL database using the saved connection ID.
    Functionality was added to import data directly into PivotTables, which can be created from any Import operation.
    Multiple objects (tables and views) can now be imported into Excel, where before only one object could be selected. Relational information is also utilized when importing multiple objects.
    All options now have descriptive tooltips. Hovering over an option/preference displays helpful information about its use.
    A new Export Data, Advanced Options option was added that shows all available data types in the Data Type combo box, instead of only showing a subset of the most popular data types.
    The option dialogs now include a Refresh to Defaults button that resets the dialog's options to their default values. Each option dialog is set individually.
    A new Add Summary Fields for Numeric Columns option was added to the Import Data dialog that automatically adds summary fields for numeric data after the last row of the imported data. The specific summary function is selectable from many options, such as "Total" and "Average."
    A new collation option was added for the schema and table creation wizards. The default schema collation is "Server Default", and the default table collation is "Schema Default". Collation options may be selected from a drop-down list of all available collations.
    Quick links:
    MySQL for Excel documentation: http://dev.mysql.com/doc/en/mysql-for-excel.html
    MySQL on Windows blog: http://blogs.oracle.com/MySQLOnWindows
    MySQL for Excel forum: http://forums.mysql.com/list.php?172
    MySQL YouTube channel: http://www.youtube.com/user/MySQLChannel
    Enjoy and thanks for the support!
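    To make the Export Data feature concrete: conceptually, exporting a worksheet range to a new table amounts to MySQL statements along the following lines. This is an illustrative sketch only: the table, columns and values are invented for the example and are not the SQL the plug-in actually generates.

        -- illustrative: a new table created from an Excel range, then the rows inserted
        CREATE TABLE sales_2014 (
          region  VARCHAR(45),
          amount  DECIMAL(12,2),
          sold_on DATE
        );
        INSERT INTO sales_2014 (region, amount, sold_on)
        VALUES ('EMEA', 1250.00, '2014-06-06'),
               ('APAC',  980.50, '2014-06-06');

    The plug-in's Export Data dialog lets you review and adjust the detected column data types (see the Advanced Options note above) before anything is written to the database.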

    Read the article

  • Archiving SQLHelp tweets

    - by jamiet
    #SQLHelp is a Twitter hashtag that can be used by any Twitter user to get help from the SQL Server community. I think it's fair to say that in its first year of being it has proved to be a very useful resource; however, Kendra Little (@kendra_little) made a very salient point yesterday when she tweeted: Is there a way to search the archives of #sqlhelp Trying to remember answer to a question I know I saw a couple months ago http://twitter.com/#!/Kendra_Little/status/15538234184441856 This highlights an inherent problem with Twitter's search capability – it simply does not reach far enough back in time. I have taken steps to remedy that situation by putting into place two initiatives to archive tweets that contain the #sqlhelp hashtag.
    The Archivist
    The Archivist (http://archivist.visitmix.com/) is a free service that, quite simply, archives a history of tweets that contain a given search term by periodically polling Twitter's search service with that search term and subsequently displaying a dashboard providing an aggregate view of those tweets for things like tweet volume over time, top users and top words (Archivist FAQ). I have set up an archive on The Archivist for "sqlhelp" which you can view at http://archivist.visitmix.com/jamiet/7. Here is a screenshot of the SQLHelp dashboard 36 minutes after I set it up: There is a lot of good information in there, including the fact that Jonathan Kehayias (@SQLSarg) is the most active SQLHelp tweeter (I suspect as an answerer rather than a questioner) and that SSIS has proven to be a rather (ahem) popular subject!!
    Datasift
    The Archivist has its uses, though for our purposes it has a couple of downsides. For starters you cannot search through an archive (which is what Kendra was after), nor can you export the contents of the archive for offline analysis. For those functions we need something a bit more heavyweight, and for that I present to you Datasift. Datasift is a tool (currently an alpha release) that allows you to search for tweets and provide them through an object called a Datasift stream. That sounds very similar to normal Twitter search, though it has one distinct advantage that other Twitter search tools do not – Datasift has access to Twitter's Streaming API (aka the Twitter Firehose). In addition it has access to a lot of other rather nice features:
    - It provides the Datasift API that allows you to consume the output of a Datasift stream in your tool of choice (bring on my favourite ultimate mashup tool)
    - It has a query language (called Filtered Stream Definition Language – FSDL for short)
    - A Datasift stream can consume (and filter) other Datasift streams
    - Datasift can (and does) consume services other than Twitter
    If I refer to Datasift as "ETL for tweets" then you may get some sort of idea what it is all about. Just as I did with The Archivist, I have set up a publicly available Datasift stream for "sqlhelp" at http://datasift.net/stream/1581/sqlhelp. Here is the FSDL query that provides the data: twitter.text contains "sqlhelp" Pretty simple, eh? At the current time it provides little more than a rudimentary dashboard, but as Datasift is currently an alpha release I think this may be worth keeping an eye on.
The real value though is the ability to consume the output of a stream via Datasift’s RESTful API, observe: http://api.datasift.net/stream.xml?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d&username=jamiet&api_key=XXXXXXX (Note that an api_key is required during the alpha period so, given that I’m not supplying my api_key, this URI will not work for you) Just to prove that a Datasift stream can indeed consume data from another stream I have set up a second stream that further filters the first one for tweets containing “SSIS”. That one is at http://datasift.net/stream/1586/ssis-sqlhelp and here is the FSDL query: rule "414c9845685ff8d2548999cf3162e897" and (interaction.content contains "ssis") When Datasift moves beyond alpha I’ll re-assess how useful this is going to be and post a follow-up blog. @Jamiet

    Read the article

  • Silabs cp2102 driver problem

    - by Zxy
    I downloaded appropriate driver from its own site, unzipped it and then tried to install it. But: root@ghostrider:/home/zero/Downloads# tar xvf cp210x-3.1.0.tar.gz cp210x-3.1.0/ cp210x-3.1.0/COPYING cp210x-3.1.0/cp210x/ cp210x-3.1.0/cp210x-3.1.0.spec cp210x-3.1.0/cp210x/.rpmmacros cp210x-3.1.0/cp210x/configure cp210x-3.1.0/cp210x/cp210x.c cp210x-3.1.0/cp210x/cp210x.h cp210x-3.1.0/cp210x/cp210xuniversal.c cp210x-3.1.0/cp210x/cp210xuniversal.h cp210x-3.1.0/cp210x/installmod cp210x-3.1.0/cp210x/Makefile24 cp210x-3.1.0/cp210x/Makefile26 cp210x-3.1.0/cp210x/rpmmacros24 cp210x-3.1.0/cp210x/rpmmacros26 cp210x-3.1.0/cp210x/Rules.make cp210x-3.1.0/INSTALL cp210x-3.1.0/makerpm cp210x-3.1.0/PACKAGE-LIST cp210x-3.1.0/README cp210x-3.1.0/RELEASE-NOTES cp210x-3.1.0/REPORTING-BUGS cp210x-3.1.0/rpm/ cp210x-3.1.0/rpm/brp-java-repack-jars cp210x-3.1.0/rpm/brp-python-bytecompile cp210x-3.1.0/rpm/check-rpaths cp210x-3.1.0/rpm/check-rpaths-worker root@ghostrider:/home/zero/Downloads# cd cp210x-3.1.0 root@ghostrider:/home/zero/Downloads/cp210x-3.1.0# ls COPYING cp210x-3.1.0.spec makerpm README REPORTING-BUGS cp210x INSTALL PACKAGE-LIST RELEASE-NOTES rpm root@ghostrider:/home/zero/Downloads/cp210x-3.1.0# run ./makerpm No command 'run' found, did you mean: Command 'zrun' from package 'moreutils' (universe) Command 'runq' from package 'exim4-daemon-heavy' (main) Command 'runq' from package 'exim4-daemon-light' (main) Command 'runq' from package 'sendmail-bin' (universe) Command 'grun' from package 'grun' (universe) Command 'qrun' from package 'torque-client' (universe) Command 'qrun' from package 'torque-client-x11' (universe) Command 'lrun' from package 'lustre-utils' (universe) Command 'rn' from package 'trn' (multiverse) Command 'rn' from package 'trn4' (multiverse) Command 'rup' from package 'rstat-client' (universe) Command 'srun' from package 'slurm-llnl' (universe) run: command not found root@ghostrider:/home/zero/Downloads/cp210x-3.1.0# sudo ./makerpm + uname -r + kernel_release=3.2.0-25-generic-pae + pwd + current_dir=/home/zero/Downloads/cp210x-3.1.0 + export current_dir + uname -r + KVER=3.2.0-25-generic-pae + echo 3.2.0-25-generic-pae + awk -F . -- { print $1 } + KVER1=3 + echo 3.2.0-25-generic-pae + awk -F . -- { print $2 } + KVER2=2 + sed -e s/3\.2\.//g + echo 3.2.0-25-generic-pae + KVER3=0-25-generic-pae + [ -f /root/.rpmmacros ] + echo 2 2 + [ 2 == 4 ] ./makerpm: 25: [: 2: unexpected operator + echo 0-25-generic-pae 0-25-generic-pae + [ 0-25-generic-pae -gt 15 ] ./makerpm: 29: [: Illegal number: 0-25-generic-pae + cp /home/zero/Downloads/cp210x-3.1.0/cp210x/rpmmacros24 /root/.rpmmacros + d=/var/tmp/silabs + [ ! 
-d /var/tmp/silabs ] + mkdir /var/tmp/silabs + cd /var/tmp/silabs + r=/var/tmp/silabs/rpmbuild + o=cp210x-3.1.0 + s=/var/tmp/silabs/rpmbuild/SOURCES + spec=cp210x-3.1.0.spec + rm -rf /var/tmp/silabs/rpmbuild + mkdir rpmbuild + mkdir rpmbuild/SOURCES + mkdir rpmbuild/SRPMS + mkdir rpmbuild/SPECS + mkdir rpmbuild/BUILD + mkdir rpmbuild/RPMS + cd /var/tmp/silabs/rpmbuild/SOURCES + rm -rf cp210x-3.1.0 + mkdir cp210x-3.1.0 + cp -r /home/zero/Downloads/cp210x-3.1.0/cp210x/Makefile24 /home/zero/Downloads/cp210x-3.1.0/cp210x/Makefile26 /home/zero/Downloads/cp210x- 3.1.0/cp210x/Rules.make /home/zero/Downloads/cp210x-3.1.0/cp210x/configure /home/zero/Downloads/cp210x-3.1.0/cp210x/cp210x.c /home/zero/Downloads/cp210x- 3.1.0/cp210x/cp210x.h /home/zero/Downloads/cp210x-3.1.0/cp210x/cp210xuniversal.c /home/zero/Downloads/cp210x-3.1.0/cp210x/cp210xuniversal.h /home/zero/Downloads/cp210x- 3.1.0/cp210x/installmod /home/zero/Downloads/cp210x-3.1.0/cp210x/rpmmacros24 /home/zero/Downloads/cp210x-3.1.0/cp210x/rpmmacros26 cp210x-3.1.0 + echo 2 2 + [ 2 == 4 ] ./makerpm: 64: [: 2: unexpected operator + echo 0-25-generic-pae 0-25-generic-pae + [ 0-25-generic-pae -gt 15 ] ./makerpm: 68: [: Illegal number: 0-25-generic-pae + cp /home/zero/Downloads/cp210x-3.1.0/cp210x/.rpmmacros24 cp210x-3.1.0/.rpmmacros cp: cannot stat `/home/zero/Downloads/cp210x-3.1.0/cp210x/.rpmmacros24': No such file or directory + MyCopy=0 + rm -f cp210x-3.1.0.tar + rm -f cp210x-3.1.0.tar.gz + tar -cf cp210x-3.1.0.tar cp210x-3.1.0 + gzip cp210x-3.1.0.tar + cp /home/zero/Downloads/cp210x-3.1.0/cp210x-3.1.0.spec /var/tmp/silabs/rpmbuild/SPECS + rpmbuild -ba /var/tmp/silabs/rpmbuild/SPECS/cp210x-3.1.0.spec ./makerpm: 121: ./makerpm: rpmbuild: not found + [ -f /root/.rpmmacros.cp210x ] How may I solve my problem? Thanks

    Read the article

  • XNA 4 Deferred Rendering deforms the model

    - by Tomáš Bezouška
    I have a problem when rendering a model of my World - when rendered using BasicEffect, it looks just peachy. The problem is when I render it using deferred rendering. See for yourselves:
    what it looks like: http://imageshack.us/photo/my-images/690/survival.png/
    what it should look like: http://imageshack.us/photo/my-images/521/survival2.png/
    (Please ignore the cars, they shouldn't be there. Nothing changes when they are removed.)
    I'm using the deferred renderer from www.catalinzima.com/tutorials/deferred-rendering-in-xna/introduction-2/ except very simplified, without the custom content processor. Here's the code for the GBuffer shader:
    float4x4 World;
    float4x4 View;
    float4x4 Projection;
    float specularIntensity = 0.001f;
    float specularPower = 3;
    texture Texture;
    sampler diffuseSampler = sampler_state
    {
        Texture = (Texture);
        MAGFILTER = LINEAR;
        MINFILTER = LINEAR;
        MIPFILTER = LINEAR;
        AddressU = Wrap;
        AddressV = Wrap;
    };
    struct VertexShaderInput
    {
        float4 Position : POSITION0;
        float3 Normal : NORMAL0;
        float2 TexCoord : TEXCOORD0;
    };
    struct VertexShaderOutput
    {
        float4 Position : POSITION0;
        float2 TexCoord : TEXCOORD0;
        float3 Normal : TEXCOORD1;
        float2 Depth : TEXCOORD2;
    };
    VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
    {
        VertexShaderOutput output;
        float4 worldPosition = mul(input.Position, World);
        float4 viewPosition = mul(worldPosition, View);
        output.Position = mul(viewPosition, Projection);
        output.TexCoord = input.TexCoord; //pass the texture coordinates further
        output.Normal = mul(input.Normal, World); //get normal into world space
        output.Depth.x = output.Position.z;
        output.Depth.y = output.Position.w;
        return output;
    }
    struct PixelShaderOutput
    {
        half4 Color : COLOR0;
        half4 Normal : COLOR1;
        half4 Depth : COLOR2;
    };
    PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
    {
        PixelShaderOutput output;
        output.Color = tex2D(diffuseSampler, input.TexCoord); //output Color
        output.Color.a = specularIntensity; //output SpecularIntensity
        output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f); //transform normal domain
        output.Normal.a = specularPower; //output SpecularPower
        output.Depth = input.Depth.x / input.Depth.y; //output Depth
        return output;
    }
    technique Technique1
    {
        pass Pass1
        {
            VertexShader = compile vs_2_0 VertexShaderFunction();
            PixelShader = compile ps_2_0 PixelShaderFunction();
        }
    }
    And here are the rendering parts in XNA:
    public void RednerModel(Model model, Matrix world)
    {
        Matrix[] boneTransforms = new Matrix[model.Bones.Count];
        model.CopyAbsoluteBoneTransformsTo(boneTransforms);
        Game.GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        Game.GraphicsDevice.BlendState = BlendState.Opaque;
        Game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (ModelMeshPart meshPart in mesh.MeshParts)
            {
                GBufferEffect.Parameters["View"].SetValue(Camera.Instance.ViewMatrix);
                GBufferEffect.Parameters["Projection"].SetValue(Camera.Instance.ProjectionMatrix);
                GBufferEffect.Parameters["World"].SetValue(boneTransforms[mesh.ParentBone.Index] * world);
                GBufferEffect.Parameters["Texture"].SetValue(meshPart.Effect.Parameters["Texture"].GetValueTexture2D());
                GBufferEffect.Techniques[0].Passes[0].Apply();
                RenderMeshpart(mesh, meshPart);
            }
        }
    }
    private void RenderMeshpart(ModelMesh mesh, ModelMeshPart part)
    {
        Game.GraphicsDevice.SetVertexBuffer(part.VertexBuffer);
        Game.GraphicsDevice.Indices = part.IndexBuffer;
        Game.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, part.NumVertices, part.StartIndex, part.PrimitiveCount);
    }
    I import the model using the built-in content processor for FBX. The FBX is created in 3DS Max. I don't know the exact details of that export, but if you think it might be relevant, I will get them from my colleague who does them. What confuses me, though, is why the BasicEffect approach works... it seems the FBX shouldn't be a problem. Any thoughts? They will be greatly appreciated :)

    Read the article

  • How to inhibit suspend temporarily?

    - by Zorn
    I have searched around a bit for this and can't seem to find anything helpful. I have my PC running Ubuntu 12.10 set up to suspend after 30 minutes of inactivity. I don't want to change that, it works great most of the time. What I do want to do is disable the automatic suspend if a particular application is running. How can I do this? The closest thing I've found so far is to add a shell script in /usr/lib/pm-utils/sleep.d which checks if the application is running and returns 1 to indicate that suspend should be prevented. But it looks like the system then gives up on suspending automatically, instead of trying again after another 30 minutes. (As far as I can tell, if I move the mouse, that restarts the timer again.) It's quite likely the application will finish after a couple of hours, and I'd rather my PC then suspended automatically if I'm not using it at that point. (So I don't want to add a call to pm-suspend when the application finishes.) Is this possible? Any advice would be appreciated. Cheers. EDIT: As I noted in one of the comments below, what I actually wanted was to inhibit suspend when my PC was serving files over NFS; I just wanted to focus on the "suspend" part of the question because I already had an idea how to solve the NFS part. Using the 'xdotool' idea given in one of the answers, I have come up with the following script which I run from cron every few minutes. It's not ideal because it stops the screensaver kicking in as well, but it does work. I need to have a look at why 'caffeine' doesn't correctly re-enable suspend later on, then I could probably do better. Anyway, this does seem to work, so I'm including it here in case anyone else is interested. #!/bin/bash # If the output of this function changes between two successive runs of this # script, we inhibit auto-suspend. function check_activity() { /usr/sbin/nfsstat --server --list } # Prevent the automatic suspend from kicking in. function inhibit_suspend() { # Slightly jiggle the mouse pointer about; we do a small step and # reverse step to try to stop this being annoying to anyone using the # PC. TODO: This isn't ideal, apart from being a bit hacky it stops # the screensaver kicking in as well, when all we want is to stop # the PC suspending. Can 'caffeine' help? export DISPLAY=:0.0 xdotool mousemove_relative --sync -- 1 1 xdotool mousemove_relative --sync -- -1 -1 } LOG="$HOME/log/nfs-suspend-blocker.log" ACTIVITYFILE1="$HOME/tmp/nfs-suspend-blocker.current" ACTIVITYFILE2="$HOME/tmp/nfs-suspend-blocker.previous" echo "Started run at $(date)" >> "$LOG" if [ ! -f "$ACTIVITYFILE1" ]; then check_activity > "$ACTIVITYFILE1" exit 0; fi /bin/mv "$ACTIVITYFILE1" "$ACTIVITYFILE2" check_activity > "$ACTIVITYFILE1" if cmp --quiet "$ACTIVITYFILE1" "$ACTIVITYFILE2"; then echo "No activity detected since last run" >> "$LOG" else echo "Activity detected since last run; inhibiting suspend" >> "$LOG" inhibit_suspend fi

    Read the article

  • Personal Technology – Excel Tip: Comparing Excel Files

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions, from SQL Server 7.0 and Oracle 7.3 to other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice. I have been writing Excel tips on my blog and thought it would be great to share one interesting tip here as a guest post. Assume a situation where you want to compare multiple Excel files. Here is a typical scenario I have encountered as a common activity. Assume you are sending an Excel file with tons of data, formulae and multiple sheets. Now you are requesting your colleague to validate the file and, if required, change content for correctness. After receiving the file from your colleague, you now want to know what changes were made by this person to your document. Here is a cool new addition to Excel 2013 that can help you achieve this task. To get to this option, click the INQUIRE tab. In case you don't have the INQUIRE tab, check the Options using INQUIRE blog. In that post, we discuss all the other options of the INQUIRE tab. Once you are on the INQUIRE tab, select the "Compare Files" button as shown in the figure above. This brings up a dialog as below. If you are on Windows 8 or Windows 7, search for an application called "Spreadsheet Compare 2013". Ultimately both options lead us to the same application. If you are using the standalone app, once the app initializes, click the "Compare files" option on the toolbar. Make sure to give it two different Excel files, as shown in the figure above. After selecting the Excel sheets, you can see the Compare tool has a number of other options to play with. We will talk about some of them later in this post. Just below our toolbar is a colorful side-by-side comparison of both our Excel sheets. We can also see the various tabs from each file. There is a meaning to each of the color codings, which will be discussed next. As you saw above, the color coding has a meaning. For example, the bottom pane lists each of the color codings and, most importantly, each of the changes as compared side-by-side. The detailed information shown below can be exported using the "Export Results" option from the toolbar as a separate Excel workbook, or it can be copied to the clipboard to be used later. The final piece of the puzzle is to show a graphical view of these difference results based on each category. We cannot drill down per se, but this is a great way to know that the maximum changes seem to be based on "Cell Formats" and that a few "Calculated Values" have changed. The INQUIRE option and the Spreadsheet Compare 2013 tool are part of Excel 2013. So as you explore the new version of Excel, there are many such hidden features that are worth exploring. Do let us know if you enjoyed learning a new feature today, and I hope you will play around with this feature in your day-to-day challenges when working with Excel files. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Excel, Personal Technology

    Read the article

  • Part 9: EBS Customizations, how to track

    - by volker.eckardt(at)oracle.com
    In the previous blogs we were concentrating on the preparation tasks: we have defined standards, and we know about the tools and techniques we will start with. Additionally, we have defined the modification strategy and how to handle such topics best. Now we are ready to take the requirements! Such requirements come over in spreadsheets, Word files (like GAP documents), or in any other format. As we have to assign some attributes, we start numbering them all and assign a short name to each of these requirements (= CEMLI reference). We may also already have a Functional person assigned, we might involve someone from the tech team to estimate, and we'd like to assign a status such as 'planned', 'estimated' etc. All this data is usually kept in spreadsheets, but I would put it into a database (yes, I am from Oracle :). If you don't have any good-looking and centralized application already, please give Oracle APEX a try. It should be up and running in a day, and the imported sheets are then manageable concurrently! For one of my clients I have created this CEMLI-DB; it has since been enriched with a lot of additional functionality, but initially it was just a simple centralized CEMLI tracking application. Why am I pointing out the centralized method to manage such data again? Well, your data quality will dramatically increase if you let your project members see (and also review and update) "your" data. APEX allows you to filter, sort, print, and also export. And if you can spend some time defining proper value lists, everyone will gain from it. APEX allows you to work in 'agile' mode, meaning you can improve your application step by step. Let's say you'd like to reference a document, or even upload it: you can do that. Or, if you need to classify the CEMLIs by release, just add this release field; the same goes for business area or CEMLI type. One CEMLI record then carries all of these attributes together. Prepare one or two (online) reports, to be ready to present your "workload" to the project management. Use such extracts also when you work offline (to prioritize etc.), but as soon as you are connected again, feed the data back into the central application. Note: I have combined this application with an additional issue tracker. Here the most important element is the CEMLI reference, which acts as a link to any other application (if you are not using APEX as the issue tracker as well :). Please spend a minute to define such a reference (see blog Part 8: How to name Customizations). Summary: Building the bridge from gap analysis to development has to be done in a controlled way. Usually the information is provided in different forms, but it is suggested to collect all requirements centrally. Oracle APEX is a great solution to enter and maintain such information in a structured but flexible way. APEX helped me a lot to work with distributed development teams during the complete development cycle.
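    To make the idea concrete, here is a minimal sketch of what such a central CEMLI tracking table could look like as the basis of an APEX application. The table name, columns and sample values are illustrative assumptions for this post, not the actual CEMLI-DB design:

        -- Illustrative only: one row per requirement (CEMLI), extended step by step as needed
        CREATE TABLE cemli_requirements (
          cemli_ref        VARCHAR2(30) PRIMARY KEY,        -- short CEMLI reference (see Part 8)
          title            VARCHAR2(240) NOT NULL,
          cemli_type       VARCHAR2(30),                    -- e.g. Report, Interface, Extension
          business_area    VARCHAR2(60),
          release_label    VARCHAR2(20),
          status           VARCHAR2(20) DEFAULT 'planned',  -- 'planned', 'estimated', ...
          functional_owner VARCHAR2(80),
          technical_owner  VARCHAR2(80),
          estimate_days    NUMBER,
          gap_document     VARCHAR2(240)                    -- reference or link to the GAP document
        );

    Loading the existing spreadsheets into a table like this (for example via APEX's CSV data loading) gives every project member the same filtered, sortable and exportable view described above.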

    Read the article

  • Bash completion doesn't work, or is ignoring what I've typed; but works for commands

    - by Neil Traft
    Bash completion seems to be ignoring what I've typed (it tries to complete, but acts as if there's nothing under the cursor). I know I saw it work on this machine earlier today, but I'm not sure what has changed. Some examples: cd shows all directories under my current folder: $ cd co<tab><tab> cmake/ config/ doc/ examples/ include/ programs/ sandbox/ src/ .svn/ tests/ Commands like ls and less show all files and directories under my current folder: $ ls co<tab><tab> cmake/ config/ .cproject Doxyfile.in include/ programs/ README.txt src/ tests/ CMakeLists.txt COPYING.txt doc/ examples/ mainpage.dox .project sandbox/ .svn/ Even when I try to complete things from a different folder, it gives me only the results for my current folder (telling me that it is completely ignoring what I've typed): $ cd ~/D<tab><tab> cmake/ config/ doc/ examples/ include/ programs/ sandbox/ src/ .svn/ tests/ But it seems to be working fine for commands and variables: $ if<tab><tab> if ifconfig ifdown ifnames ifquery ifup $ echo $P<tab><tab> $PATH $PIPESTATUS $PPID $PS1 $PS2 $PS4 $PWD $PYTHONPATH I do have this bit in my .bashrc, and I have confirmed that my .bashrc is indeed getting sourced: if [ -f /etc/bash_completion ] && ! shopt -oq posix; then . /etc/bash_completion fi I've even tried manually executing that file, but it doesn't fix the problem: $ . /etc/bash_completion There was even one point in time where it was working for ls, but was not working for cd ... but I can't replicate that result now. Update: I also just discovered that I have terminals open from earlier that still work. I ran source .bashrc in one of them and afterwards completion was broken. Here is my .bashrc: # ~/.bashrc: executed by bash(1) for non-login shells. # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc) # for examples # # Modified by Neil Traft #source ~/.profile # Allow globs to expand hidden files shopt -s dotglob nullglob # If not running interactively, don't do anything [ -z "$PS1" ] && return # don't put duplicate lines or lines starting with space in the history. # See bash(1) for more options HISTCONTROL=ignoreboth # append to the history file, don't overwrite it shopt -s histappend # for setting history length see HISTSIZE and HISTFILESIZE in bash(1) HISTSIZE=1000 HISTFILESIZE=2000 # check the window size after each command and, if necessary, # update the values of LINES and COLUMNS. shopt -s checkwinsize # If set, the pattern "**" used in a pathname expansion context will # match all files and zero or more directories and subdirectories. #shopt -s globstar # make less more friendly for non-text input files, see lesspipe(1) [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)" # set variable identifying the chroot you work in (used in the prompt below) if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then debian_chroot=$(cat /etc/debian_chroot) fi # Color the prompt export PS1="\[$(tput setaf 2)\]\u@\h:\[$(tput setaf 5)\]\W\[$(tput setaf 2)\] $\[$(tput sgr0)\] " # enable color support of ls and also add handy aliases if [ -x /usr/bin/dircolors ]; then test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)" alias ls='ls --color=auto' #alias dir='dir --color=auto' #alias vdir='vdir --color=auto' alias grep='grep --color=auto' alias fgrep='fgrep --color=auto' alias egrep='egrep --color=auto' fi # Add an "alert" alias for long running commands. Use like so: # sleep 10; alert alias alert='notify-send --urgency=low -i "$([ $? 
= 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"' # Alias definitions. # You may want to put all your additions into a separate file like # ~/.bash_aliases, instead of adding them here directly. # See /usr/share/doc/bash-doc/examples in the bash-doc package. if [ -f ~/.bash_aliases ]; then . ~/.bash_aliases fi # enable programmable completion features (you don't need to enable # this, if it's already enabled in /etc/bash.bashrc and /etc/profile # sources /etc/bash.bashrc). if [ -f /etc/bash_completion ] && ! shopt -oq posix; then . /etc/bash_completion fi

    Read the article

  • Measuring Code Quality

    - by DotNetBlues
    Several months back, I was tasked with measuring the quality of code in my organization. Foolishly, I said, "No problem." I figured that Visual Studio has a built-in code metrics tool (Analyze -> Calculate Code Metrics) and that it would be a fine place to start. I was right, but also very wrong. Visual Studio calculates five primary metrics: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class Coupling, and Lines of Code. The first two are figured at the method level, the next two (primarily) at the class level, and the last is a simple count. The first question any reasonable person should ask is "Which one do I look at first?" The first question any manager is going to ask is, "What one number tells me about the whole application?" My answer to both, in a way, was "Maintainability Index." Why? Because each of the other numbers represents one element of quality, while MI is a composite number that includes Cyclomatic Complexity. I'd be lying if I said no consideration was given to the fact that it was abstract enough that it's harder for some surly developer (I've been known to resemble that remark) to start arguing why a high coupling or inheritance is no big deal or how complex requirements are to blame for complex code. I should also note that I don't think there is one magic-bullet metric that will tell you objectively how good a code base is. There are a ton of different metrics out there, and each one was created with a specific purpose in mind and has a pet theory behind it. When you've got a group of developers who aren't accustomed to measuring code quality, picking a 0-100 scale, non-controversial metric that can be easily generated by tools you already own really isn't a bad place to start. That sort of answers the question a developer would ask, but what about the management question: how do you dashboard this stuff when Visual Studio doesn't roll up the numbers to the solution level? Since VS does roll up the MI to the project level, I thought I could just figure out what sort of weighting Microsoft used to roll method scores up to the class level and then to the namespace and project levels. I was a bit surprised by the answer: there is no weighting. That means that a class with one 1300-line method (which will score a 0 MI) and one empty constructor (which will score a 100 MI) will have an overall MI of a respectable 50. Throw in a couple of DTOs that are nothing more than getters and setters (which tend to score 95 or better) and the project ends up looking really, really healthy. The next poor bastard who has to work on the application is probably not going to be singing the praises of its maintainability, though. For the record, that 1300-line method isn't a hypothetical, either. So, what does one do with that? Well, I decided to weight the average by the Lines of Code per method. For our above example, the formula for the class's MI becomes ((1300 * 0) + (1 * 100)) / 1301 = 0.077, rounded to 0. Sounds about right. Continue the pattern for namespace, project, solution, and even multi-solution application MI scores. This can be done relatively easily by using the "export to Excel" button and running a quick formula against the data. On the short list of follow-up questions would be, "How do I improve my application's score?" That's an answer for another time, though.
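    As a sketch of that "quick formula": assuming the exported metrics sheet holds the method-level Maintainability Index in column C and Lines of Code in column D (the actual column letters depend on how the export is laid out, so treat them as placeholders), a LOC-weighted MI for any block of method rows can be computed in Excel with something like

        =SUMPRODUCT(D2:D500, C2:C500) / SUM(D2:D500)

    which reproduces the worked example above: (1300 * 0 + 1 * 100) / 1301 is roughly 0.077. Pointing the two ranges at the rows belonging to a given class, namespace, or project yields the rolled-up score at that level.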

    Read the article

< Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >