Search Results

Search found 10492 results on 420 pages for 'online backups'.


  • The Politics of Junk Filtering

    - by mikef
    If the national postal service, such as the Royal Mail in the UK, were to go through your letters and throw away all the stuff it considered to be junk instead of delivering it to you, you might be rather pleased until you discovered that it took too liberal a view of what was junk. Catalogs you'd asked for? Junk! Requests from charities? Who needs them! Parcels from competing carriers? Toss them away! The possibility of abuse by an agency in a monopolistic position is just too scary to tolerate. After all, the postal service could charge 'consultancy fees' to any sender who wanted to guarantee that his stuff got delivered, or they could even farm this out to other companies. Because Microsoft Outlook is just about the only email client used by the international business community in the west, its spam filter is the final arbiter as to what gets read. My Outlook 2007, set to the default settings, junks all the perfectly innocent email newsletters that I subscribe to. Whereas Google Mail, Yahoo, and LIVE are all pretty accurate in detecting spam, Outlook makes all sorts of silly mistakes. The documentation speaks techno-babble about 'advanced heuristics', but the result boils down to an inaccurate mess. The more that Microsoft fiddles with it, the stickier the mess. To make matters worse, it still lets through obvious spam. The filter is occasionally updated along with the other automatic 'security' updates, if you opt for automatic updates. As an editor for a popular online publication that provides a newsletter service, I find this an obvious source of frustration. We follow all the best practices we know about. We ensure that it is a trivial task to opt out of receiving it. We format the newsletter to the requirements of the Service Providers. We follow up on, and resolve, every complaint. As a result, it gets delivered. It is galling to discover that, after all that effort, Outlook then often judges the contents to be junk on a whim, so you don't get to see it. A few days ago, Microsoft published the PST file format specification, under pressure from a European Union interoperability investigation by ECIS (the European Committee for Interoperable Systems). The objective was that other applications could then access existing PST files so as to migrate from existing Outlook installations to other solutions. Joaquín Almunia, the current competition commissioner, should now turn his attention to the more subtle problems of Microsoft Outlook. The Junk problem seems to have come from clumsy implementation of client-side spam filtering rather than from deliberate exploitation of a monopoly on the desktop email client for businesses, but it is a growing problem nonetheless. Cheers, Michael Francis

    Read the article

  • How to Format a USB Drive in Ubuntu Using GParted

    - by Trevor Bekolay
    If a USB hard drive or flash drive is not properly formatted, then it will not show up in the Ubuntu Places menu, making it hard to interact with. We’ll show you how to format a USB drive using the tool GParted. Note: Formatting a USB drive will destroy any data currently stored on it. If you think that your USB drive is already properly formatted, but Ubuntu just isn’t picking it up, try unplugging it and plugging it back into a different USB port, or restarting your machine with the device plugged in on start-up. Open a terminal by clicking on Applications in the top-left of the screen, then Accessories > Terminal. GParted should be installed by default, but we’ll make sure it’s installed by entering the following command in the terminal: sudo apt-get install gparted To open GParted, enter the following command in the terminal: sudo gparted Find your USB drive in the drop-down box at the top right of the GParted window. The drive should be unallocated – if it has a valid partition on it, then you may be looking at the wrong drive. Note: Make sure you’re on the correct drive, as making changes to the wrong hard drive with GParted can delete all of its data! Assuming you’re on the right drive, right-click on the unallocated grey block and click New. In the window that pops up, change the File System to fat32 for USB Flash Drives, NTFS for USB Hard Drives that will be used in Windows, or ext3/ext4 for USB Hard Drives that will be used exclusively in Linux. Add a label if you’d like, and then click Add. Click the green checkmark and then the Apply button to apply the changes. GParted will now format your drive. If you’re formatting a large USB Hard Drive, this can take some time. Once the process is done, you can close GParted, and the drive will now show up in the Places menu. Clicking on the drive will mount it and open it in a File Browser window. It will also add a shortcut to the drive on the Desktop by default. Your USB drive is now ready to store your files!
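    For readers who prefer to stay in the terminal, the same job can be done with parted and mkfs. This is only a rough sketch – the device name /dev/sdX is a placeholder, so run sudo fdisk -l first and make absolutely sure you have the right disk, because these commands will wipe it:

      # List disks and identify the USB drive by its size/model (placeholder: /dev/sdX)
      sudo fdisk -l
      # Create a new partition table with a single primary partition spanning the drive
      sudo parted --script /dev/sdX mklabel msdos mkpart primary fat32 1MiB 100%
      # Put a FAT32 filesystem on the new partition (swap in mkfs.ntfs or mkfs.ext4 if you prefer)
      sudo mkfs.vfat -F 32 -n MYUSB /dev/sdX1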

    Read the article

  • Rapid Planning: Next Generation MRP

    - by john.bermudez
    MRP has been a mainstay of manufacturing systems for 40 years. MRP evolved from simple inventory planning systems to become the heart of the MRPII systems which eventually became ERP. While the applications surrounding it have become broader, more sophisticated and web-based, MRP continues to operate in the loneliness of the Saturday night batch window, quietly exploding bills of materials and logging exceptions for hours. During this same 40 years, manufacturing business processes have seen countless changes and improvements including JIT, TQM, Six Sigma, Flow Manufacturing, Lean Manufacturing and Supply Chain Management. Although much logic has been added to MRP to deal with new manufacturing processes, it has not been able to keep up with the real-time pace of today's supply chain. As a result, planners have devised ingenious ways to trick MRP into handling new processes, but often need to dump the output into spreadsheets of their own design in the hope of wrestling thousands of exceptions to ground. Oracle's new Rapid Planning application is just what companies still running MRP have been waiting for! The newest member of the Value Chain Planning product line, Rapid Planning is designed to empower planners with comprehensive supply planning that runs online in minutes, not hours. It enables a planner to simulate the incremental impact of a new order or re-run an entire plan in a separate sandbox. Rapid Planning does a complete multi-level bill of material explosion like MRP, but plans orders considering material and capacity constraints. Considering material and capacity constraints in planning can help you quickly reduce inventory and improve on-time shipments. Rapid Planning is an APS application that leverages years of Oracle development experience and customer feedback. Rather than rely exclusively on black-box heuristics, Rapid Planning is designed to give planners the computing power to use their industry experience and business knowledge to improve MRP. For example, Rapid Planning has a powerful worksheet user interface with built-in query capability that allows the planner to locate the orders she is interested in and use a mass update function to make quick work of large changes. The planner can save these queries and user interface settings to personalize their planning environment. Most importantly, Rapid Planning is designed to do supply planning in today's dynamic supply chain environment. It can be used to supplement MRP or replace MRP entirely. It generates plans that provide order-by-order details with aggregate key performance indicators that enable planners to quickly assess the overall business impact of a plan. To find out more about how Rapid Planning can help improve your MRP, please contact me at [email protected] or your Oracle Account Manager.

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently” I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.   This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database, these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.     -- Create a DeploymentTargets database and a Servers table CREATE DATABASE DeploymentTargets GO USE DeploymentTargets GO CREATE TABLE [dbo].[Servers]( [id] [int] IDENTITY(1,1) NOT NULL, [serverName] [nvarchar](50) NULL, [environment] [nvarchar](50) NULL, [databaseName] [nvarchar](50) NULL, CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC) ) GO -- Now insert your target server and database details INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1') INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2') Here’s the PowerShell script you can adapt for yourself as well. # We're holding the server names and database names that we want to deploy to in a database table. # We need to connect to that server to read these details $serverName = "" $databaseName = "DeploymentTargets" $authentication = "Integrated Security=SSPI" #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication. # Path to the scripts folder we want to deploy to the databases $scriptsPath = "SimpleTalk" # Path to SQLCompare.exe $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" # Create SQL connection string, and connection $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication" $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString); # Create a Dataset to hold the DataTable $dataSet = new-object "System.Data.DataSet" "ServerList" # Create a query $query = "SET NOCOUNT ON;" $query += "SELECT serverName, environment, databaseName " $query += "FROM dbo.Servers; " # Create a DataAdapter to populate the DataSet with the results $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection) $dataAdapter.Fill($dataSet) | Out-Null # Close the connection $ServerConnection.Close() # Populate the DataTable $dataTable = new-object "System.Data.DataTable" "Servers" $dataTable = $dataSet.Tables[0] #For every row in the DataTable $dataTable | FOREACH-OBJECT { "Server Name: $($_.serverName)" "Database Name: $($_.databaseName)" "Environment: $($_.environment)" # Compare the scripts folder to the database and synchronize the database to match # NB. 
I've set SQL Compare to abort on medium level warnings. $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety, write-host $arguments & $SQLComparePath $arguments "Exit Code: $LASTEXITCODE" # Some interesting variations # Check that every database matches a folder. # For example this might be a pre-deployment step to validate everything is at the same baseline state. # Or a post deployment script to validate the deployment worked. # An exit code of 0 means the databases are identical. # # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database. # For example use this after the above to generate upgrade scripts for each database # Examine the warnings and the HTML diff report to understand how the script will change objects # #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") } It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse to multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.  You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings.  See the commented out example in the PowerShell: #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") There is a drawback to running a pre-generated deployment script; it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe Exit Code 79), a drift report is outputted instead of executing the deployment script.  See the commented out example. # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment.
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or with any further questions.)
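    As a rough illustration of that drift-check idea, the per-database loop in the script above could be extended along these lines (a sketch only, reusing the same $scriptsPath, $SQLComparePath and row variables, and only the switches already shown in this post):

      # Check the target against the expected state before deploying
      $checkArgs = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")
      & $SQLComparePath $checkArgs
      if ($LASTEXITCODE -eq 79) {
          # Differences found - the target has drifted, so report rather than deploy
          write-host "Drift detected on $($_.serverName).$($_.databaseName) - skipping deployment"
      } else {
          # Target matches the expected state, so go ahead and synchronize it
          $deployArgs = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium", "/sync")
          & $SQLComparePath $deployArgs
      }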

    Read the article

  • Set a Video as Your Desktop Wallpaper with VLC

    - by DigitalGeekery
    Are you tired of static desktop wallpapers and want something a bit more entertaining? Today we’ll take a look at setting a video as wallpaper in VLC media player. Download and install VLC player. You’ll find the download link below. Open VLC and select Tools > Preferences. On the Preferences window, select the Video button on the left. Under Video Settings, select DirectX video output from the Output dropdown list. Click Save before exiting and then restart VLC. Next, select a video and begin playing it with VLC. Right-click on the screen, select Video, then DirectX Wallpaper.   You can achieve the same result by selecting Video from the Menu and clicking DirectX Wallpaper.   If you’re using Windows Aero Themes, you may get the warning message below and your theme will switch automatically to a basic theme.   After the Wallpaper is enabled, minimize VLC player and enjoy the show as you work.     When you are ready to switch back to your normal wallpaper, click Video, and then close out of VLC.   Occasionally we had to manually change our wallpaper back to normal. You can do that by right-clicking on the desktop and selecting your theme.   Conclusion This might not make the most productive desktop environment, but it is pretty cool. It’s definitely not the same old boring wallpaper! Download VLC

    Read the article

  • Windows Azure Recipe: High Performance Computing

    - by Clint Edmonson
    One of the most attractive ways to use a cloud platform is for parallel processing. Commonly known as high-performance computing (HPC), this approach relies on executing code on many machines at the same time. On Windows Azure, this means running many role instances simultaneously, all working in parallel to solve some problem. Doing this requires some way to schedule applications, which means distributing their work across these instances. To allow this, Windows Azure provides the HPC Scheduler. This service can work with HPC applications built to use the industry-standard Message Passing Interface (MPI). Software that does finite element analysis, such as car crash simulations, is one example of this type of application, and there are many others. The HPC Scheduler can also be used with so-called embarrassingly parallel applications, such as Monte Carlo simulations. Whatever problem is addressed, the value this component provides is the same: It handles the complex problem of scheduling parallel computing work across many Windows Azure worker role instances. Drivers Elastic compute and storage resources Cost avoidance Solution Here’s a sketch of a solution using our Windows Azure HPC SDK: Ingredients Web Role – this hosts an HPC scheduler web portal to allow web-based job submission and management. It also exposes an HTTP web service API to allow other tools (including Visual Studio) to post jobs as well. Worker Role – typically multiple worker roles are enlisted, including at least one head node that schedules jobs to be run among the remaining compute nodes. Database – stores state information about the job queue and resource configuration for the solution. Blobs, Tables, Queues, Caching (optional) – many parallel algorithms persist intermediate and/or permanent data as a result of their processing. These fast, highly reliable, parallelizable storage options are all available to all the jobs being processed. Training Here is a link to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure HPC Scheduler (3 labs)  The Windows Azure HPC Scheduler includes modules and features that enable you to launch and manage high-performance computing (HPC) applications and other parallel workloads within a Windows Azure service. The scheduler supports parallel computational tasks such as parametric sweeps, Message Passing Interface (MPI) processes, and service-oriented architecture (SOA) requests across your computing resources in Windows Azure. With the Windows Azure HPC Scheduler SDK, developers can create Windows Azure deployments that support scalable, compute-intensive, parallel applications. See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
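    For readers who haven’t met MPI before, the canonical “hello world” below (plain C, and not taken from the HPC Scheduler SDK – it is just a generic MPI sketch) shows the model the scheduler is distributing: every process runs the same program and discovers its rank within the communicator.

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          int rank, size;
          MPI_Init(&argc, &argv);                 /* start the MPI runtime */
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
          MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in total? */
          printf("Hello from rank %d of %d\n", rank, size);
          MPI_Finalize();
          return 0;
      }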

    Read the article

  • What to do if I am working on a language that I don't like

    - by Sayem Ahmed
    Hi there, I really don't know if this is the right place to ask this question, but if it isn't, then I guess someone will notify me. Anyway, I am working in a software development firm which is currently using PowerBuilder to develop a mid-size ERP solution. The work environment and company management are so great that they may be the best in the whole of Bangladesh. The only problem is the technology currently being used, which is PowerBuilder. Now I am a guy who tends to prefer modern development technologies, like DI containers, ORM, TDD, JQuery etc. PowerBuilder is a great tool too, but I don't like the application techniques used to build PB applications. These techniques are so inheritance-dependent that they often create a great deal of suffering. I remember two days ago I had to change some processing logic in a core user object and as a result I had to test and re-test all the forms that the application has (apparently, there are almost 20 forms, each of them with 3-4 kinds of functionality). Also, learning PB is tough, because online material on it is very, very scarce. I can't afford to read all the documentation that PB provides because I have hard deadlines on the work that I have to do. Another thing with PB is that applications tend to rely on business logic that is implemented in the database, which makes debugging a nightmare. As a result, I don't feel motivated enough to work in this IDE/System/Framework (or whatever) anymore. My productivity has greatly decreased, and I am not delivering quality code. I think I have the following options available to me: (1) remain in the current job, keep delivering worse code and let my productivity decrease day by day, taking salary and bonuses but not delivering quality code or doing my job the way I should; or (2) search for a new job. At this point number 2 seems like a good option, but there are also some issues. As I mentioned before, our management may be the best in the country. Our company owner is himself a software developer with 24 years of experience in software development. He is currently our Team Leader and System Analyst. He is by far the greatest manager and boss I have ever seen. He understands a developer's mentality very well (as he IS himself a developer). He is also a great, kind and generous guy. Our company is only a start-up with 10 developers. Among them, only 3-4 people know about the business logic behind the ERP, and I am one of them. If I switch jobs, it may hamper the development of this product, which I really don't want. I couldn't decide what to do in this situation, so I turned to the community for advice.

    Read the article

  • How to include an apache library with my opensource code?

    - by OscarRyz
    I have this open-source code with an MIT license that uses an Apache 2.0 licensed library. I want to include this in my project, so it can be built right away. Point 4 of that license explains how to redistribute it: excerpt: 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: You must give any other recipients of the Work or Derivative Works a copy of this License; and You must cause any modified files to carry prominent notices stating that You changed the files; and You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. I'm not creating a derivative work (I plan to provide it as it is). I don't have a NOTICE file, just my own LICENSE.txt file. Question: Where should I put something along the lines of: "This project uses Xyz library distributed under Apache 2.0 ..."? What's recommended? Should I provide the Apache license file too? Or would it be enough if I just say "Find the license online here...http://www.apache.org/licenses/LICENSE-2.0.html"? I hope someone who has done this in the past may shed some light on the matter.

    Read the article

  • How to select the first occurrence in the auto-completion menu by pressing Enter?

    - by janoChen
    Every time there's is a pop up menu. I select the first occurrence and press enter but nothing happens (the word is not completed with he selected occurrence). The only way is to press Tab until you reach the term for a second time. Is there a way of selecting the first occurrence pressing Enter (or other Vim hotkey)? My .vimrc: " SHORTCUTS nnoremap <F4> :set filetype=html<CR> nnoremap <F5> :set filetype=php<CR> nnoremap <F3> :TlistToggle<CR> " press space to turn off highlighting and clear any message already displayed. nnoremap <silent> <Space> :nohlsearch<Bar>:echo<CR> " set buffers commands nnoremap <silent> <M-F8> :BufExplorer<CR> nnoremap <silent> <F8> :bn<CR> nnoremap <silent> <S-F8> :bp<CR> " open NERDTree with start directory: D:\wamp\www nnoremap <F9> :NERDTree /home/alex/www<CR> " open MRU nnoremap <F10> :MRU<CR> " open current file (silently) nnoremap <silent> <F11> :let old_reg=@"<CR>:let @"=substitute(expand("%:p"), "/", "\\", "g")<CR>:silent!!cmd /cstart <C-R><C-R>"<CR><CR>:let @"=old_reg<CR> " open current file in localhost (default browser) nnoremap <F12> :! start "http://localhost" file:///"%:p""<CR> " open Vim's default Explorer nnoremap <silent> <F2> :Explore<CR> nnoremap <C-F2> :%s/\.html/.php/g<CR> " REMAPPING " map leader to , let mapleader = "," " remap ` to ' nnoremap ' ` nnoremap ` ' " remap increment numbers nnoremap <C-kPlus> <C-A> " COMPRESSION function Js_css_compress () let cwd = expand('<afile>:p:h') let nam = expand('<afile>:t:r') let ext = expand('<afile>:e') if -1 == match(nam, "[\._]src$") let minfname = nam.".min.".ext else let minfname = substitute(nam, "[\._]src$", "", "g").".".ext endif if ext == 'less' if executable('lessc') cal system( 'lessc '.cwd.'/'.nam.'.'.ext.' &') endif else if filewritable(cwd.'/'.minfname) if ext == 'js' && executable('closure-compiler') cal system( 'closure-compiler --js '.cwd.'/'.nam.'.'.ext.' > '.cwd.'/'.minfname.' &') elseif executable('yuicompressor') cal system( 'yuicompressor '.cwd.'/'.nam.'.'.ext.' > '.cwd.'/'.minfname.' &') endif endif endif endfunction autocmd FileWritePost,BufWritePost *.js :call Js_css_compress() autocmd FileWritePost,BufWritePost *.css :call Js_css_compress() autocmd FileWritePost,BufWritePost *.less :call Js_css_compress() " GUI " taglist right side let Tlist_Use_Right_Window = 1 " hide tool bar set guioptions-=T "remove scroll bars set guioptions+=LlRrb set guioptions-=LlRrb " set the initial size of window set lines=46 columns=180 " set default font set guifont=Monospace " set guifont=Monospace\ 10 " show line number set number " set default theme colorscheme molokai-2 " encoding set encoding=utf-8 setglobal fileencoding=utf-8 bomb set fileencodings=ucs-bom,utf-8,latin1 " SCSS syntax highlight au BufRead,BufNewFile *.scss set filetype=scss " LESS syntax highlight syntax on au BufNewFile,BufRead *.less set filetype=less " Haml syntax highlight "au! BufRead,BufNewFile *.haml "setfiletype haml " Sass syntax highlight "au! BufRead,BufNewFile *.sass "setfiletype sass " set filetype indent filetype indent on " for snipMate to work filetype plugin on " show breaks set showbreak=-----> " coding format set tabstop=4 set shiftwidth=4 set linespace=1 " CONFIG " set location of ctags let Tlist_Ctags_Cmd='D:\ctags58\ctags.exe' " keep the buffer around when left set hidden " enable matchit plugin source $VIMRUNTIME/macros/matchit.vim " folding set foldmethod=marker set foldmarker={,} let g:FoldMethod = 0 map <leader>ff :call ToggleFold()<cr> fun! 
ToggleFold() if g:FoldMethod == 0 exe 'set foldmethod=indent' let g:FoldMethod = 1 else exe 'set foldmethod=marker' let g:FoldMethod = 0 endif endfun " save and restore folds when a file is closed and re-opened "au BufWrite ?* mkview "au BufRead ?* silent loadview " auto-open NERDTree everytime Vim is invoked au VimEnter * NERDTree /home/alex/www " set omnicomplete autocmd FileType python set omnifunc=pythoncomplete#Complete autocmd FileType javascript set omnifunc=javascriptcomplete#CompleteJS autocmd FileType html set omnifunc=htmlcomplete#CompleteTags autocmd FileType css set omnifunc=csscomplete#CompleteCSS autocmd FileType xml set omnifunc=xmlcomplete#CompleteTags autocmd FileType php set omnifunc=phpcomplete#CompletePHP autocmd FileType c set omnifunc=ccomplete#Complete " Remove trailing white-space once the file is saved au BufWritePre * silent g/\s\+$/s/// " Use CTRL-S for saving, also in Insert mode noremap <C-S> :update!<CR> vnoremap <C-S> <C-C>:update!<CR> inoremap <C-S> <C-O>:update!<CR> " DEFAULT set nocompatible source $VIMRUNTIME/vimrc_example.vim "source $VIMRUNTIME/mswin.vim "behave mswin " disable creation of swap files set noswapfile " no back ups wwhile editing set nowritebackup " disable creation of backups set nobackup " no file change pop up warning autocmd FileChangedShell * echohl WarningMsg | echo "File changed shell." | echohl None set diffexpr=MyDiff() function MyDiff() let opt = '-a --binary ' if &diffopt =~ 'icase' | let opt = opt . '-i ' | endif if &diffopt =~ 'iwhite' | let opt = opt . '-b ' | endif let arg1 = v:fname_in if arg1 =~ ' ' | let arg1 = '"' . arg1 . '"' | endif let arg2 = v:fname_new if arg2 =~ ' ' | let arg2 = '"' . arg2 . '"' | endif let arg3 = v:fname_out if arg3 =~ ' ' | let arg3 = '"' . arg3 . '"' | endif let eq = '' if $VIMRUNTIME =~ ' ' if &sh =~ '\<cmd' let cmd = '""' . $VIMRUNTIME . '\diff"' let eq = '"' else let cmd = substitute($VIMRUNTIME, ' ', '" ', '') . '\diff"' endif else let cmd = $VIMRUNTIME . '\diff' endif silent execute '!' . cmd . ' ' . opt . arg1 . ' ' . arg2 . ' > ' . arg3 . eq endfunction
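    A commonly used mapping for exactly this situation – it is not taken from the vimrc above, so treat it as a suggestion to adapt – is to make Enter confirm the highlighted entry only while the completion popup is visible:

      " If the completion popup is visible, <CR> accepts the selected entry (like CTRL-Y);
      " otherwise it inserts a normal line break.
      inoremap <expr> <CR> pumvisible() ? "\<C-y>" : "\<CR>"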

    Read the article

  • Naming Convention for Dedicated Thread Locking objects

    - by Chris Sinclair
    A relatively minor question, but I haven't been able to find official documentation or even blog opinion/discussions on it. Simply put: when I have a private object whose sole purpose is to serve for private lock, what do I name that object? class MyClass { private object LockingObject = new object(); void DoSomething() { lock(LockingObject) { //do something } } } What should we name LockingObject here? Also consider not just the name of the variable but how it looks in-code when locking. I've seen various examples, but seemingly no solid go-to advice: Plenty of usages of SyncRoot (and variations such as _syncRoot). Code Sample: lock(SyncRoot), lock(_syncRoot) This appears to be influenced by VB's equivalent SyncLock statement, the SyncRoot property that exists on some of the ICollection classes and part of some kind of SyncRoot design pattern (which arguably is a bad idea) Being in a C# context, not sure if I'd want to have a VBish naming. Even worse, in VB naming the variable the same as the keyword. Not sure if this would be a source of confusion or not. thisLock and lockThis from the MSDN articles: C# lock Statement, VB SyncLock Statement Code Sample: lock(thisLock), lock(lockThis) Not sure if these were named minimally purely for the example or not Kind of weird if we're using this within a static class/method. Several usages of PadLock (of varying casing) Code Sample: lock(PadLock), lock(padlock) Not bad, but my only beef is it unsurprisingly invokes the image of a physical "padlock" which I tend to not associate with the abstract threading concept. Naming the lock based on what it's intending to lock Code Sample: lock(messagesLock), lock(DictionaryLock), lock(commandQueueLock) In the VB SyncRoot MSDN page example, it has a simpleMessageList example with a private messagesLock object I don't think it's a good idea to name the lock against the type you're locking around ("DictionaryLock") as that's an implementation detail that may change. I prefer naming around the concept/object you're locking ("messagesLock" or "commandQueueLock") Interestingly, I very rarely see this naming convention for locking objects in code samples online or on StackOverflow. Question: What's your opinion generally about naming private locking objects? Recently, I've started naming them ThreadLock (so kinda like option 3), but I'm finding myself questioning that name. I'm frequently using this locking pattern (in the code sample provided above) throughout my applications so I thought it might make sense to get a more professional opinion/discussion about a solid naming convention for them. Thanks!

    Read the article

  • Simple heart container script for 2D game (Unity)?

    - by N1ghtshade3
    I'm attempting to create a simple mobile game (C#) that involves a simple three-heart life system. After searching for hours online, many of the solutions use OnGUI (which is apparently horrible for performance) and the rest are too complicated for me to understand and add to my code. The other solutions involve using a single texture and just hiding part of it when damage is taken. In my game, however, the player should be able to go over three hearts (for example, every 100 points). Sebastian Lague's Zelda-Style Health is what I'm looking for, but even though it's a tutorial there is way too much going on that I don't need or can't customize to fit in mine. What I have so far is a script called HealthScript.cs which contains a variable lives. I have another script, PlayerPhysics.cs which calls HealthScript and subtracts a life when an enemy is hit. The part I don't get is actually drawing the hearts. I think I understand what needs to happen, I just am not experienced enough with Unity to know how. The Start function should draw three (or whatever lives is set to) hearts in the top right corner. Since the game should be resolution-independent to accommodate the various sizes of Android devices, I'd rather use scaling rather than PixelInset. When the player hits an enemy as detected by PlayerPhysics.cs, it should subtract from lives. I think that I have this working using this.GetComponent<HealthScript>().lives -= 1 but I'm not sure if it actually works. This should trigger a redraw of the hearts so that there are now two hearts. The same principle would apply for adding hearts when a score is reached, except when lives > maxHeartsPerRow, the new hearts should be drawn below the old ones. I realise I don't have much code to show but believe me; I've tried for quite some time to figure this out and have little to show for it. Any help at all would be welcome; it seems like it shouldn't take that much code to put an image on the screen for each life there is, but I haven't found anything yet. Thanks!
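    One way to sketch the approach described above – the field, prefab and container names here are hypothetical, not taken from the poster's project – is to keep a heart image prefab plus an empty container object inside a screen-space canvas, and rebuild the icons whenever lives changes:

      using UnityEngine;

      // Minimal sketch: draws one heart icon per life and redraws on every change.
      public class HealthScript : MonoBehaviour
      {
          public int lives = 3;                 // current life count
          public GameObject heartPrefab;        // a UI Image of a heart, assigned in the Inspector
          public Transform heartContainer;      // empty object with a layout group, anchored top-right

          void Start()
          {
              RedrawHearts();
          }

          public void ChangeLives(int delta)
          {
              lives += delta;                   // negative when hit, positive every 100 points
              RedrawHearts();
          }

          void RedrawHearts()
          {
              // Remove the old icons (Destroy is deferred, so iterating here is safe)...
              foreach (Transform child in heartContainer)
                  Destroy(child.gameObject);

              // ...then draw one icon per remaining life.
              for (int i = 0; i < lives; i++)
              {
                  GameObject heart = (GameObject)Instantiate(heartPrefab);
                  heart.transform.SetParent(heartContainer, false);
              }
          }
      }

    PlayerPhysics.cs would then call GetComponent<HealthScript>().ChangeLives(-1) on a hit, and a HorizontalLayoutGroup (or a GridLayoutGroup once you pass maxHeartsPerRow) on the container plus a Canvas Scaler keeps the row positioned and resolution-independent.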

    Read the article

  • View Images and Videos in 3D in Firefox

    - by Asian Angel
    Different websites have their own formats for viewing images and videos, but they may not be a lot of fun to use. The Cooliris extension for Firefox lets you view those same images and videos in a dynamic 3D format. Before For our example we conducted a search for nature photos at Flickr. You could view them in a static format or even as a slideshow, but what about something more dynamic looking? After As soon as the extension has finished installing, you will notice a new toolbar button used for launching the Cooliris tab. When you launch the Cooliris tab you will have an expandable menu system in the upper left corner. A speed dial setup in the center. And a small toolbar in the lower right corner. Before going further you should check and make any desired adjustments in the preferences to enhance your viewing experience. In the upper right corner you can start your search by selecting from the available sources. The same search for nature images is more focused and clean looking this time. Clicking on an image will bring it forward and enlarge it. You can use the slider tool at the bottom of the tab to browse left or right through the images and videos. And when you find one that interests you, click on the popout button to open it in a new tab. Conclusion The Cooliris extension makes viewing images and videos fun and interactive with its 3D-style format. Links Download the Cooliris extension (Mozilla Add-ons) Download Cooliris for Firefox, Internet Explorer, Safari (Mac Only), & Chrome

    Read the article

  • Six Unusual Blogs I Like

    - by Bill Graziano
    I subscribe to and read over 100 SQL Server blogs every day.  I link to posts that I think are interesting.  I also read a fair number of non-SQL Server blogs.  Here are a few that I think are interesting. danah boyd. She is a researcher with Microsoft and writes about privacy, social media and teenagers.  I discovered her blog while looking for strategies to keep my personal and professional life separate.  (I haven’t found a good solution to that yet.)  Her stories of how teenagers use Facebook and other social media tools are fascinating. Clayton’s Web Snacks.  Steve Clayton works at Microsoft and has a variety of blogs out there.  This one focuses on … hmmm.  His latest posts are on graffiti, infographics, paper tweets, cartoons and slow motion videos.  It’s mostly visual and you never really know what you’ll get.  It’s always interesting though and I like what he posts.  It’s good creative stuff. Seth Godin.  Seth writes about Marketing.  I read him for motivation to get off my butt and get things done.  He’s a great motivator who encourages you to think big.  And do something! Ask the Pilot.  Patrick Smith is a commercial airline pilot writing about the airline industry.  He’s a great debunker of myths (no they don’t reduce oxygen in the cabin to keep you docile).  My favorite topics include the TSA, flying myths, airport reviews and flight delays. My old favorite flight blog used to be enplaned.  No one knew who wrote it.  It focused on the economics of the airline industry.  It was fascinating stuff.  One day it was gone.  The entire blog was deleted.  Someone tracked down some partial archives and put them online. The Agent’s Journal.  Jack Bechta is an NFL agent.  He writes about the business side of the NFL, the draft and free agency.  Lately he’s been writing about the potential lockout.  He has a distinct lack of hype which I find very refreshing.  xkcd.  I call this the comic for smart people.  A little math, some IT and internet privacy thrown in all make an unusual comic. Funny and intelligent.

    Read the article

  • AdventureWorks 2014 Sample Databases Are Now Available

    - by aspiringgeek
      Where in the World is AdventureWorks? Recently, SQL Community feedback from twitter prompted me to look in vain for SQL Server 2014 versions of the AdventureWorks sample databases we’ve all grown to know & love. I searched Codeplex, then used the bing & even the google in an effort to locate them, yet all I could find were samples on different sites highlighting specific technologies, an incomplete collection inconsistent with the experience we users had learned to expect.  I began pinging internally & learned that an update to AdventureWorks wasn’t even on the road map.  Fortunately, SQL Marketing manager Luis Daniel Soto Maldonado (t) lent a sympathetic ear & got the update ball rolling; his direct report Darmodi Komo recently announced the release of the shiny new sample databases for OLTP, DW, Tabular, and Multidimensional models to supplement the extant In-Memory OLTP sample DB.  What Success Looks Like In my correspondence with the team, here’s how I defined success: 1. Sample AdventureWorks DBs hosted on Codeplex showcasing SQL Server 2014’s latest-&-greatest features, including:  In-Memory OLTP (aka Hekaton) Clustered Columnstore Online Operations Resource Governor IO 2. Where it makes sense to do so, consolidate the DBs (e.g., showcasing Columnstore likely involves a separate DW DB) 3. Documentation to support experimenting with these features As Microsoft Senior SDE Bonnie Feinberg (b) stated, “I think it would be great to see an AdventureWorks for SQL 2014.  It would be super helpful for third-party book authors and trainers.  It also provides a common way to share examples in blog posts and forum discussions, for example.”  Exactly.  We’ve established a rich & robust tradition of sample databases on Codeplex.  This is what our community & our customers expect.  The prompt response achieves what we all aim to do, i.e., manifests the Service Design Engineering mantra of “delighting the customer”.  Kudos to Luis’s team in SQL Server Marketing & Kevin Liu’s team in SQL Server Engineering for doing so. Download AdventureWorks 2014 Download your copies of SQL Server 2014 AdventureWorks sample databases here.
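    As a quick taste of one item on that feature list, a clustered columnstore index in SQL Server 2014 is a one-liner – the table name below is made up for illustration rather than taken from the new samples:

      -- Convert a large fact table to (updatable) clustered columnstore storage
      CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
          ON dbo.FactSales;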

    Read the article

  • Given the presentation model pattern, is the view, presentation model, or model responsible for adding child views to an existing view at runtime?

    - by Ryan Taylor
    I am building a Flex 4 based application using the presentation model design pattern. This application will have several different components to it as shown in the image below. The MainView and DashboardView will always be visible and they each have corresponding presentation models and models as necessary. These views are easily created by declaring their MXML in the application root. <s:HGroup width="100%" height="100%"> <MainView width="75%" height="100%"/> <DashboardView width="25%" height="100%"/> </s:HGroup> There will also be many WidgetViewN views that can be added to the DashboardView by the user at runtime through a simple drop down list. This will need to be accomplished via ActionScript. The drop down list should always show which WidgetViewN views have already been added to the DashboardView. Therefore some state about which WidgetViewN's have been created needs to be stored. Since the list of available WidgetViewN views and which ones are added to the DashboardView also need to be accessible from other components in the system, I think this needs to be stored in a Model object. My understanding of the presentation model design pattern is that the view is very lean. It contains as close to zero logic as is practical. The view communicates/binds to the presentation model which contains all the necessary view logic. The presentation model is effectively an abstract representation of the view which supports low coupling and eases testability. The presentation model may have one or more models injected in order to display the necessary information. The models themselves contain no view logic whatsoever. So I have several questions around this design. Who should be responsible for creating the WidgetViewN components and adding these to the DashboardView? Is this the responsibility of the DashboardView, DashboardPresentationModel, DashboardModel or something else entirely? It seems like the DashboardPresentationModel would be responsible for creating/adding/removing any child views from its display, but how do you do this without passing the DashboardView in to the DashboardPresentationModel? The list of available and visible WidgetViewN components needs to be accessible to a few other components as well. Is it okay for a reference to a WidgetViewN to be stored/referenced in a model? Are there any good examples of the presentation model pattern online in Flex that also include creating child views at runtime?

    Read the article

  • links for 2011-02-18

    - by Bob Rhubart
    VirtualBox: Pre-Built Developer VMs "Learning your way around a new software stack is challenging enough without having to spend multiple cycles on the install process. Instead, we have packaged such stacks into pre-built Oracle VM VirtualBox appliances that you can download, install, and experience as a single unit." (tags: oracle virtualization virtualbox) Java Space on Parleys (The Java Source) "'Oracle partnered with Stephan Janssen, founder of Parleys to make this happen. Parleys website offers a user friendly experience to view online content. You can download some of the talks to your desktop or watch them on the go on mobile devices." (tags: oracle java parleys) Why ADF Developers Should Attend ODTUG This Year (Shay Shmeltzer's Weblog) Shay says: "A new track called the "Fusion Middleware" track has been formed and it has lots of sessions for any level of ADF developer. The track is run by several Oracle ACEs who are also involved in the ADF Enterprise Methodology Group." (tags: oracle otn odtug fusionmiddleware) Wrapping up an Exciting Mobile World Congress (The Java Source) "One of the more popular topics in our booth was the use of Java in the Smart Grid. In our booth we were showing off some of the work of the Hydra Consortium whose goal it is to leverage the emerging smart grid infrastructure to securely enable the delivery of personal health data..." (tags: oracle java smartgrid) How to Audit and Monitor BI Publisher Reports Access? (Oracle BI Publisher Blog) "Do you know who is accessing to which report at what time at your reporting environment ? As you delivered the BI Publisher reports to the production environment and your users start using them as part of their daily business operations you might wonder such questions." (tags: oracle otn businessintelligence) Oracle VM VirtualBox 4.0.4 Released! (Oracle's Virtualization Blog) Fat Bloke says: "Oracle made a maintenance update release of Oracle VM VirtualBox version 4.0.4 today. You can Download it now, or read about the changes in the ChangeLog." (tags: oracle otn virtualization virtualbox) Obama says Cloud and Data Center Consolidation Will Help Curb IT Costs | WHIR Web Hosting Industry News "In the report, he estimated that the federal government could reallocate some $20 billion of IT spending to cloud computing technologies and reduce 'data center infrastructure expenditure by approximately 30 percent' through cloud computing." (tags: cloud obama datacenter) Chris Muir: ADF BC: Creating an "EXISTS" View Criteria Oracle ACE Director Chris Muir shares some ADF tips. (tags: oracle otn oracleace adf) Translation and Multiple Languages with Oracle UCM | Bex Huff Bex says: "Last year, I gave a presentation at Oracle Open World about Creating and Maintaining an Internationalized Web Site. Well, I'm happy to announce that one of the several add-ons to UCM is now available for purchase!" (tags: oracle otn enterprise2.0 ecm oracleace) ORACLENERD: Design Documentation Oracle ACE Chet "ORACLENERD" Justice makes a pledge. (tags: oracle otn oracleace database)

    Read the article

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we’re talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or too fast for us to effectively process in real time. Collectively, we call these the 4 big data V’s: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by a conventional Relational Database Management System (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends.  The key is knowing when and how to leverage these “No SQL” tools combined with traditional business tools such as SQL-based relational databases, warehouses and other business intelligence tools. Drivers Petabyte scale data collection and storage Business intelligence and insight Solution The sketch below shows one of many big data solutions using Hadoop’s unique highly scalable storage and parallel processing capabilities combined with Microsoft Office’s Business Intelligence Components to access the data in the cluster. Ingredients Hadoop – this big data industry heavyweight provides both large scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment: Pig - a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. Mahout - a machine learning library with algorithms for clustering, classification and batch based collaborative filtering that are implemented on top of Apache Hadoop using the map/reduce paradigm. Hive - data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage (a short example follows this excerpt). Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver. Pegasus - a Peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and that provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components. Sqoop - a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases. Flume - a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS. Database – directly accessible to Hadoop via the Sqoop based Microsoft SQL Server Connector for Apache Hadoop, data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs. Reporting – provides easily consumable reporting when combined with a database being fed from the Hadoop environment. Training These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. Hadoop Learning Resources (20+ tutorials and labs) Huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure Ecosystems SQL Azure (7 labs) Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data. 
See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
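    To make the Hive ingredient above a little more concrete, here is the flavour of query it supports – the table, columns and path are purely illustrative, not taken from any of the training labs:

      -- Define an external table over delimited files already sitting in the cluster
      CREATE EXTERNAL TABLE clickstream (
        user_id  STRING,
        url      STRING,
        hit_time STRING
      )
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
      LOCATION '/data/clickstream';

      -- A familiar-looking aggregate that Hive compiles into map-reduce jobs behind the scenes
      SELECT url, COUNT(*) AS hits
      FROM clickstream
      GROUP BY url
      ORDER BY hits DESC
      LIMIT 10;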

    Read the article

  • To Serve Man?

    - by Dave Convery
    Since the announcement of Windows 8 and its 'Metro' interface, the .NET community has wondered if the skills they've spent so long developing might be swept aside,in favour of HTML5 and JavaScript. Mercifully, that only seems to be true of SilverLight (as Simon Cooper points out), but it did leave me thinking how easy it is to impose a technology upon people without directly serving their needs. Case in point: QR codes. Once, probably, benign in purpose, they seem to have become a marketer's tool for determining when someone has engaged with an advert in the real world, with the same certainty as is possible online. Nobody really wants to use QR codes - it's far too much hassle. But advertisers want that data - they want to know that someone actually read their billboard / poster / cereal box, and so this flawed technology is suddenly everywhere, providing little to no value to the people who are actually meant to use it. What about 3D cinema? Profits from the film industry have been steadily increasing throughout the period that digital piracy and mass sharing has been possible, yet the industry cinema chains have forced 3D films upon a broadly uninterested audience, as a way of providing more purpose to going to a cinema, rather than watching it at home. Despite advances in digital projection, 3D cinema is scarcely more immersive to us than were William Castle's hoary old tricks of skeletons on wires and buzzing chairs were to our grandparents. iTunes - originally just a piece of software that catalogued and ripped music for you, but which is now multi-purpose bloatware; a massive, system-hogging behemoth. If it was being built for the people that used it, it would have been split into three or more separate pieces of software long ago. But as bloatware, it serves Apple primarily rather than us, stuffed with Music, Video, Various stores and phone / iPad management all bolted into one. Why? It's because, that way, you're more likely to bump into something you want to buy. You can't even buy a new laptop without finding that a significant chunk of your hard drive has been sold to 'select partners' - advertisers, suppliers of virus-busting software, and endless bloatware-flogging pop-ups that make using a new laptop without reformatting the hard drive like stepping back in time. The product you want is not the one you paid for. This is without even looking at services like Facebook and Klout, who provide a notional service with the intention of slurping up as much data about you as possible (in Klout's case, whether you create an account with them or not). What technologies do you find annoying or intrusive, and who benefits from keeping them around?

    Read the article

  • Partner Blog: aurionPro SENA - Mobile Application Convenience, Flexibility & Innovation Delivered

    - by Darin Pendergraft
    About the Writer: Des Powley is Director of Product Management for aurionPro SENA Inc., the leading global Oracle Identity and Access Management specialist delivery and product development partner. In October 2012, aurionPro SENA announced the release of the Mobile IDM application, which delivers key Identity Management functions from any mobile device. The move towards an always-on, globally interconnected world is shifting businesses and consumers alike away from traditional PC-based enterprise application access and more and more towards an ‘any device, same experience’ world. It is estimated that within five years, in many developing regions of the world, the PC will be obsolete, replaced entirely by cheaper mobile and tablet devices. This will give a vast number of new entrants to the Internet their first experience of the online world, and it will only be via these newer, mobile access channels. Designed to address this shift in working and social environments, and released in October 2012, the aurionPro SENA Mobile IDM application directly addresses this emerging market and requirement by enhancing administrators’, consumers’ and managers’ Identity Management (IDM) experience, delivering a mobile application that provides rapid access to frequently used IDM services from any mobile device. Built on the aurionPro SENA Identity Service platform, the mobile application uses Oracle’s Cloud, Mobile and Social capabilities and Oracle’s Identity Governance Suite for its core functions. The application has been developed using standards-based APIs to ensure seamless integration with a client’s on-premise IDM implementation, or equally seamlessly with the aurionPro SENA Hosted Identity Service. The solution delivers multi-platform support, including iOS, Android and BlackBerry, and provides many key features, including:
    • an easy-to-access view of all of a user’s own access privileges;
    • the ability for managers to approve and track requests;
    • simple raising of requests for new applications, roles and entitlements through the service catalogue.
    This application has been designed and built with convenience and security in mind. We protect access to critical applications by enforcing PIN-based authentication, while also providing the user with mobile single sign-on capability. This is just one of the many highly innovative products and services that aurionPro SENA is developing for our clients as we continually strive to enhance the value of their investment in Oracle’s class-leading 11g R2 Identity and Access Management suite. The Mobile IDM application is a key component of our Identity Services Suite, which also includes Managed, Hosted and Cloud Identity Services. The Identity Services Suite has been designed and built specifically to break the barriers to delivering Enterprise, Mobile and Social Identity Management services from the Cloud. aurionPro SENA - building next-generation Identity Services for modern enterprises. To view the app please visit http://youtu.be/btNgGtKxovc For more information please contact [email protected]

    Read the article

  • Getting developers and support to work together

    - by Matt Watson
    Agile development has ushered in the norm of rapid iterations and change within products. One of the biggest challenges for agile development is educating the rest of the company. At my last company, our biggest challenge was trying to continually train 100 employees in our customer support and training departments. It's easy to write release notes and email them to everyone, but for complex software products, release notes usually don't provide enough detail. You really have to educate your employees on the WHO, WHAT, WHERE, WHY, WHEN of every item. If you don't do this, you end up with customer service people who know less about your product than your users do. Ever call a company and feel like you know more about their product than their customer service people do? Yeah. I'm talking about that problem.
    WHO does the change affect?
    WHAT was the actual change?
    WHERE do I find the change in the product?
    WHY was the change made? (It's hard to support something if you don't know why it was done.)
    WHEN will the change be released?
    One thing I want to stress is the importance of WHY something was done. For customer support people to be really good at their job, they need to understand the product and how people use it. Knowing how to enable a feature is one thing. Knowing why someone would want to enable it is a whole different thing, and the difference in good customer service. Another challenge is getting support people to better test and document potential bugs before escalating them to development. Trying to fix bugs without examples is always fun... NOT. They might as well say "The sky is falling, please fix it!"
    We need to over-train the support staff about product changes and continually stress how they document and test potential product bugs. You also have to train the sales staff and the marketing team. Then there is updating sales materials, your website, product documentation and other items that are always out of date. Every product release causes this vicious circle of trying to educate the rest of the company about the changes.
    Do we need to record a simple video explaining the changes and email it to everyone? Maybe we should use a simple online training app to help with this problem. Ultimately the struggle is taking the time to do the training, but it is time well spent. It may save you a lot of time answering questions and fixing bugs later. How do we efficiently transfer key product knowledge from developers and product owners to the rest of the company? How have you solved these issues at your company?

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing their code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once for the experience.
    Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin, or setting light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal.
    Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.
    The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user-interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines (we switched to a Helvetica font to look more ‘Bauhaus’). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic.
    In IT, we have had mixed experiences with complete re-writes. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro, and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible. The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision.
    I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or of the occasional squaddie that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again. There was a time when one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God.
    An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to hack into one’s own source control system and destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, having run all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project, because we are far too prone to cling to our existing source code.

    Read the article

  • The Spotlight is on You

    - by Claudia McDonald
    On the field or off the field, in ballet slippers or singing your heart out on stage, offering a stellar performance every time is key to holding the attention of your audience and having them come back hungry for more. Similarly, showing up to a new business meeting wearing pink tights and a tutu might be one way of holding the attention of your customer, but offering them an unmatched and ground-breaking software solution certainly will get their attention! Simply put, the Oracle Exastack program enables both ISVs and OEMs to rapidly build and deliver faster, more reliable applications. It comes as no surprise that the success of the Oracle Exastack program is centered on establishing Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud as the highest performance, lowest cost platforms available in the industry today. But here is where the real standing-ovation-worthy facts come in. The Oracle Exadata Database Machine is the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) workloads, making it the ideal platform for consolidating onto private clouds. The Oracle Exalogic Elastic Cloud, meanwhile, is an engineered hardware and software system tested and tuned by Oracle to provide the best foundation for cloud computing, while allowing Java applications, Oracle Applications and other enterprise applications to run with extreme performance. – And the crowd goes wild, ladies and gentlemen! In just four months alone, our partners have already achieved over 150 Oracle Exastack Ready milestones for Oracle Solaris, Oracle Linux, Oracle Database and Oracle WebLogic Server. As Judson has said, “With the Oracle Exastack program, Oracle is helping partners test, tune and optimize their applications to deliver optimal performance and reliability, accelerating innovation and delivering superior value to customers.” And get this: not only are their applications running faster and more efficiently, they are actually being delivered at a lower cost to customers than ever before – extreme performance well deserving of 3 consecutive arabesques! If you haven’t already, check out what some of our partners are saying about the Oracle Exastack program in this video, and find out all that is available to you today. By participating in the Oracle Exastack program, partners now have the ability to achieve Oracle Exadata Optimized, Oracle Exalogic Optimized, Oracle Exadata Ready and Oracle Exalogic Ready status for their solutions. New Oracle Exastack labs provide OPN members with access to Oracle technical resources, on-premise facilities and remote lab environments. With Oracle Exastack Optimized, partners deliver faster and more reliable applications that run on the Oracle Exadata Database Machine, as well as the long-awaited Oracle Exalogic Elastic Cloud. Savvy OPN members are leveraging the Oracle Exastack Optimized program toward their advancement to Platinum or Diamond level in OPN. Partners achieving Oracle Exadata Ready and Oracle Exalogic Ready status gain a competitive advantage and signal to customers that their applications readily support Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud to deliver extreme performance. Get your dancing shoes on, The OPN Communications Team

    Read the article

  • Ubuntu 12.10 Trackpoint not detected Thinkpad X130e

    - by killerknives
    I just did a fresh install of 12.10 and now only my touchpad works. The trackpoint and the left/right buttons below the trackpoint do not work. The trackpoint worked fine as of 12.04. I've searched online, and there was a hack that said that disabling the touchpad would enable the trackpoint. WRONG! You'll end up having to use the keyboard =/. I don't know what is needed, so I'll just dump some stuff.
    lspci:
    00:00.0 Host bridge: Advanced Micro Devices [AMD] Family 14h Processor Root Complex
    00:01.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Wrestler [Radeon HD 6310]
    00:01.1 Audio device: Advanced Micro Devices [AMD] nee ATI Wrestler HDMI Audio [Radeon HD 6250/6310]
    00:05.0 PCI bridge: Advanced Micro Devices [AMD] Family 14h Processor Root Port
    00:06.0 PCI bridge: Advanced Micro Devices [AMD] Family 14h Processor Root Port
    00:07.0 PCI bridge: Advanced Micro Devices [AMD] Family 14h Processor Root Port
    00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode]
    00:12.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    00:12.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
    00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
    00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 42)
    00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) (rev 40)
    00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller (rev 40)
    00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge (rev 40)
    00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 0 (rev 43)
    00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 1
    00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 2
    00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 3
    00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 4
    00:18.5 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 6
    00:18.6 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 5
    00:18.7 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 7
    01:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter (rev 01)
    02:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)
    I've installed gpointing-device-settings and it only lists the touchpad. No trackpoint. What should I do? What data do you need?

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-22

    - by Bob Rhubart
    Guide to integration architecture | Stephanie Mann
    "The landscape of integration architecture is shifting as service-oriented and cloud-based architecture take the fore," says Stephanie Mann. "To ensure success, enterprise architects and developers are turning to lighter-weight infrastructure to support more complex integration projects."
    FY13 Oracle PartnerNetwork Kickoff - Tues June 26, 2012
    Join us for a one-hour live online event hosted by the Oracle PartnerNetwork team as we kick off FY13. Other dates/times for EMEA/LAD/JAPAN/APAC. Click the link for details.
    Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6? | Ricardo Ferreira
    Okay, you would expect an Oracle guy to make this argument. But Ferreira takes a very deep, very detailed technical dive into the issue. So hear the man out, will ya?
    Hibernate4 and Coherence | Rene van Wijk
    According to Oracle ACE Rene van Wijk, "there are two ways to integrate Hibernate and Coherence." In this post he illustrates one of them.
    Simple Made Easy | Rich Hickey
    Rich Hickey discusses simplicity, why it is important, how to achieve it in design, and how to recognize its absence in the tools, language constructs and libraries, in this presentation from QCon London 2012.
    Starting a cluster | Mark Nelson
    Fusion Middleware A-Team blogger Mark Nelson looks at Oracle SOA Suite, Oracle BPM, and Oracle Coherence, three products that are "commonly clustered, and which have somewhat different requirements."
    Why building SaaS well means giving up your servers | GigaOM
    The biggest benefit of PaaS, reports GigaOM's Derrick Harris, "might be a better product because the company is able to focus on building the app rather than managing servers."
    Personas - what, why & how | Mascha van Oosterhout
    "To be able to create a successful, user-friendly website or application," says Mascha van Oosterhout, "every decision you take, whether you are part of the marketing team, the design team or the development team, should be based on what you know about the user."
    Thought for the Day
    "Machines take me by surprise with great frequency." — Alan Turing (June 23, 1912 - June 7, 1954)
    Source: Brainy Quote

    Read the article

  • JCP EC Nomination Materials for 2012

    - by heathervc
    The nomination period of the 2012 Annual JCP EC Elections will begin at the end of September 2012. The JCP will accept self-nominations for 2 seats on what will become the merged JCP EC, starting 28 September, with the nomination period ending on Thursday, 11 October. JCP Members (JSPA 2 primary contacts) will receive messages with instructions for nominating and their login credentials via email. You will need this credential information to log in and complete the nomination. The JCP EC Special Election schedule is posted online in the JCP calendar; highlights are below:
    Nominations for elected seats: 28 September - 11 October
    Ballot (ratified and elected): 16 - 29 October
    New members take office: 13 November
    The ballot with nominees for ratified and nominated seats begins on 16 October. The results will also be available on jcp.org on 30 October. If you are attending JavaOne 2012 in San Francisco, there are several events happening that you may be interested in attending, in particular the following BOF session.
    Meet the JCP Executive Committee Candidates
    Session ID: BOF6307
    Location: Hilton San Francisco - Golden Gate 3/4/5
    Date and Time: 10/2/12, 4:30 PM - 5:15 PM
    We will also be hosting a call for all of the candidates following the nomination period. The following information is required for self-nomination.
    1) Contact information/Biography
    Each EC seat is represented by two people - a primary and an alternate representative. Provide the following information for each representative: Name, Title, Email Address, Mailing Address, Phone Number, Fax, a brief biography (3-5 sentences/~100-200 words) for the primary contact, and a photograph (jpg format preferred, head shot only) for the primary contact.
    Bios and photos for the EC members are posted here:
    http://jcp.org/en/press/news/ec-feature_ME
    http://jcp.org/en/press/news/ec-feature_SE
    2) Qualification Statement
    A brief (2-3 paragraph) description of your qualifications for an EC seat; this is a Qualification Statement for the organization you represent. It should include the value and perspective you bring to the EC, your interests in the JCP program, and a summary of your organization's current or planned participation in the JCP program--JSRs, participation on Expert Groups, meetings/events attended, etc. This statement will appear on the ballot and should convince community members to vote for you, so please include relevant information about your experience within the JCP program and your investments in Java technology. A few sample qualification statements are available here.
    3) Position Paper
    One of the pieces of information we make available to the JCP membership for voting purposes is a position paper. If you would like to provide this type of information for the ballot, please prepare it in PDF format for posting. It should give more detail on the areas you would focus on during your tenure on the JCP EC.
    You can read more about some of the topics under discussion in the EC here, including links to JCP.Next materials. If you have an interest in participating in the JCP EC, please start preparing these materials now. We look forward to a successful election process.

    Read the article
