Search Results

Search found 2822 results on 113 pages for 'scheduled backups'.


  • Is my hard drive about to die?

    - by Hristo Deshev
    I have two hard drives set up as a RAID 1 array on my server (Linux, software RAID using mdadm) and one of them just got me this "present" in syslog:

        Nov 23 02:05:29 h2 kernel: [7305215.338153] ata1.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
        Nov 23 02:05:29 h2 kernel: [7305215.338178] ata1.00: irq_stat 0x40000008
        Nov 23 02:05:29 h2 kernel: [7305215.338197] ata1.00: failed command: READ FPDMA QUEUED
        Nov 23 02:05:29 h2 kernel: [7305215.338220] ata1.00: cmd 60/08:00:d8:df:da/00:00:3a:00:00/40 tag 0 ncq 4096 in
        Nov 23 02:05:29 h2 kernel: [7305215.338221] res 41/40:08:d8:df:da/00:00:3a:00:00/00 Emask 0x409 (media error) <F>
        Nov 23 02:05:29 h2 kernel: [7305215.338287] ata1.00: status: { DRDY ERR }
        Nov 23 02:05:29 h2 kernel: [7305215.338305] ata1.00: error: { UNC }
        Nov 23 02:05:29 h2 kernel: [7305215.358901] ata1.00: configured for UDMA/133
        Nov 23 02:05:32 h2 kernel: [7305218.269054] ata1.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
        Nov 23 02:05:32 h2 kernel: [7305218.269081] ata1.00: irq_stat 0x40000008
        Nov 23 02:05:32 h2 kernel: [7305218.269101] ata1.00: failed command: READ FPDMA QUEUED
        Nov 23 02:05:32 h2 kernel: [7305218.269125] ata1.00: cmd 60/08:00:d8:df:da/00:00:3a:00:00/40 tag 0 ncq 4096 in
        Nov 23 02:05:32 h2 kernel: [7305218.269126] res 41/40:08:d8:df:da/00:00:3a:00:00/00 Emask 0x409 (media error) <F>
        Nov 23 02:05:32 h2 kernel: [7305218.269196] ata1.00: status: { DRDY ERR }
        Nov 23 02:05:32 h2 kernel: [7305218.269215] ata1.00: error: { UNC }
        Nov 23 02:05:32 h2 kernel: [7305218.341565] ata1.00: configured for UDMA/133
        Nov 23 02:05:35 h2 kernel: [7305221.193342] ata1.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
        Nov 23 02:05:35 h2 kernel: [7305221.193368] ata1.00: irq_stat 0x40000008
        Nov 23 02:05:35 h2 kernel: [7305221.193386] ata1.00: failed command: READ FPDMA QUEUED
        Nov 23 02:05:35 h2 kernel: [7305221.193408] ata1.00: cmd 60/08:00:d8:df:da/00:00:3a:00:00/40 tag 0 ncq 4096 in
        Nov 23 02:05:35 h2 kernel: [7305221.193409] res 41/40:08:d8:df:da/00:00:3a:00:00/00 Emask 0x409 (media error) <F>
        Nov 23 02:05:35 h2 kernel: [7305221.193474] ata1.00: status: { DRDY ERR }
        Nov 23 02:05:35 h2 kernel: [7305221.193491] ata1.00: error: { UNC }
        Nov 23 02:05:35 h2 kernel: [7305221.388404] ata1.00: configured for UDMA/133
        Nov 23 02:05:38 h2 kernel: [7305224.426316] ata1.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
        Nov 23 02:05:38 h2 kernel: [7305224.426343] ata1.00: irq_stat 0x40000008
        Nov 23 02:05:38 h2 kernel: [7305224.426363] ata1.00: failed command: READ FPDMA QUEUED
        Nov 23 02:05:38 h2 kernel: [7305224.426387] ata1.00: cmd 60/08:00:d8:df:da/00:00:3a:00:00/40 tag 0 ncq 4096 in
        Nov 23 02:05:38 h2 kernel: [7305224.426388] res 41/40:08:d8:df:da/00:00:3a:00:00/00 Emask 0x409 (media error) <F>
        Nov 23 02:05:38 h2 kernel: [7305224.426459] ata1.00: status: { DRDY ERR }
        Nov 23 02:05:38 h2 kernel: [7305224.426478] ata1.00: error: { UNC }
        Nov 23 02:05:38 h2 kernel: [7305224.498133] ata1.00: configured for UDMA/133
        Nov 23 02:05:41 h2 kernel: [7305227.400583] ata1.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
        Nov 23 02:05:41 h2 kernel: [7305227.400608] ata1.00: irq_stat 0x40000008
        Nov 23 02:05:41 h2 kernel: [7305227.400627] ata1.00: failed command: READ FPDMA QUEUED
        Nov 23 02:05:41 h2 kernel: [7305227.400649] ata1.00: cmd 60/08:00:d8:df:da/00:00:3a:00:00/40 tag 0 ncq 4096 in
        Nov 23 02:05:41 h2 kernel: [7305227.400650] res 41/40:08:d8:df:da/00:00:3a:00:00/00 Emask 0x409 (media error) <F>
        Nov 23 02:05:41 h2 kernel: [7305227.400716] ata1.00: status: { DRDY ERR }
        Nov 23 02:05:41 h2 kernel: [7305227.400734] ata1.00: error: { UNC }
        Nov 23 02:05:41 h2 kernel: [7305227.472432] ata1.00: configured for UDMA/133

    From what I read so far, I am not sure if read errors mean that a hard drive is dying on me (no write errors so far). I've had hard drive errors in the past and those always had errors about failing to write to specific sectors in the logs. Not this time. Should I be replacing the drive? Could something else be causing the problem? I've scheduled a smartctl -t long test that will finish in a couple of hours. I hope this will give me some more info.
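    While waiting for the long self-test, the drive's SMART counters can be read directly. A minimal sketch, assuming the failing disk is /dev/sda (substitute the actual member of the array that maps to ata1.00):

        sudo smartctl -H /dev/sda           # overall health self-assessment
        sudo smartctl -A /dev/sda           # attribute table; watch IDs 5 (Reallocated), 197 (Pending), 198 (Offline Uncorrectable)
        sudo smartctl -l selftest /dev/sda  # results of the scheduled long test once it completes

    Rising pending or reallocated sector counts alongside UNC read errors would support replacing the drive.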

    Read the article

  • The Birth of a Method - Where did OUM come from?

    - by user702549
    It seemed fitting to start this blog entry with the OUM vision statement. The vision for the Oracle® Unified Method (OUM) is to support the entire Enterprise IT lifecycle, including support for the successful implementation of every Oracle product. Well, it’s that time of year again; we just finished testing and packaging OUM 5.6. It will be released for general availability to qualifying customers and partners this month. Because of this, I’ve been reflecting on how the birth of Oracle’s Unified Method (OUM) came about.

    As the Release Director of OUM, I’ve been honored to package every method release. Maybe you’d say that’s not so special; after all, anyone can use packaging software to create an .exe file. But to me it is pretty special, because so many people work together to make each release come about. The rich content that results is what makes OUM’s history worth talking about.

    To me, professionally speaking, working on OUM has been a labor of love. My youngest child was just 8 years old when OUM was born, and she’s now in high school! Watching her grow and change has been fascinating; if you ask her, she’s grown up hearing about OUM. My son would often walk into my home office and ask, “How is OUM today, Mom?” I am one of many people who take care of OUM, and I have watched the method “mature” over these last six years. Maybe that makes me a "Method Mom" (someone in one of my classes last year actually said this out loud), but there are so many others who collaborate on and care about OUM development.

    I’ve thought about writing this blog entry for a long time, just to reflect on how far the method has come. Each release, as I prepare the OUM Contributors list, I see how many people’s experience and ideas it has taken to create this wealth of knowledge, process and task guidance, as well as templates and examples. If you’re wondering how many people, just go into OUM, select the Resources button on the top of most pages of the method, and on that Resources page click the ABOUT link.

    So now back to my nostalgic moment as I finished the release 5.6 packaging. I reflected on all the things that happened to make OUM not just a dream but a reality. Here are some key conditions that make each release of the method possible: a vision to have one method instead of many methods, thereby focusing on deeper, richer content; people within Oracle’s consulting organization willing to contribute to OUM, providing subject matter experts who are willing to write down and share what they know; Oracle’s continued acquisition of software companies, and the need to assimilate high-quality existing materials from those companies; and the need to bring together people from very different backgrounds and provide a common language to support Oracle product implementations that often involve multiple product families.

    What came first, and then what was the strategy? Initially OUM 4.0 was based on Oracle’s J2EE Custom Development Method (JCDM). It was a good “backbone” (work breakdown structure): it was Unified Process based, and had good content around UML as well as custom software development. But it needed to be extended in order to achieve the OUM vision. What happened after that was to take in the “best of the best”: the legacy and acquired methods were scheduled for assimilation into OUM, one release after another. We built OUM incrementally.

    We didn’t want to lose any of the expertise that was reflected in AIM (Oracle’s legacy Application Implementation Method), Compass (PeopleSoft’s application implementation method), and so many more.

    When was OUM born? OUM 4.1 was published on April 30, 2006. This release allowed Oracle’s Advanced Technology groups to begin the very first implementations of Fusion Middleware. In the early days of the method we would prepare several releases a year. Our iterative release development cycle began then and continues to be refined with each method release. Now we typically see one major release each year.

    The OUM release development cycle is not unlike many Oracle implementation projects in that we need to gather requirements, prioritize, prepare the content, test, package, and then go to production. Typically we develop an OUM release MoSCoW list (must have, should have, could have, and won’t have) right after the prior release goes out. These are the high-level requirements. We break the timeframe into increments, with frequent checkpoints that help us assess the content and measure progress. We work as a team to prioritize what should be done in each increment. Yes, the team provides the estimates for what can be done within a particular increment. We sometimes hold method development workshops (physical or virtual) to accelerate content development on a particular subject area; that is where the best content results.

    As the written content nears the final stages, it goes through editing and evaluation through peer reviews, and then moves into the release staging environment. Then content freeze and testing of the method pack take place. This iterative cycle is run using the OUM artifacts that make sense, “fit for purpose”: project plans, MoSCoW lists, and test plans are just a few of the OUM work products we use on a method release project.

    In 2007, OUM 4.3, 4.4, and 4.5 were published. With the release of 4.5, our custom BI method (Data Warehouse Method FastTrack) was assimilated into OUM. These early releases helped us align Oracle’s Unified Method with other industry standards. Then in 2008 we made significant changes to the OUM “backbone” to support Applications Implementation projects; those went into the OUM 5.0 release.

    Now things started to get really interesting. Next we had some major developments in the Envision focus area around Enterprise Architecture. We acquired some really great content from the former BEA’s Liquid Enterprise Method (LEM), along with some SMEs who were willing to work at bringing this content into OUM. The Service Oriented Architecture content in OUM is extensive and can help support the successful implementation of Fusion Middleware, as well as Fusion Applications. Of course, we’ve also developed a wealth of OUM training materials; that work also helps to improve the method content. It is one thing to write “how to”, and quite another to be able to teach people how to use the materials to improve the success of their projects. I’ve learned so much by teaching people how to use OUM.

    What's next? So here toward the end of 2012, what’s in store for OUM 5.6? Well, I’m sure you won’t be surprised: the answer is Cloud Computing. More details to come in the next couple of weeks! The best part of being involved in the development of OUM is to see how many people have “adopted” OUM over these six years: clients, partners, and Oracle consultants. The content just gets better with each release.
I’d love to hear your comments on how OUM has evolved, and ideas for new content you’d like to see in the upcoming releases.

    Read the article

  • Self-signed certificates for a known community

    - by costlow
    Recently announced changes scheduled for Java 7 update 51 (January 2014) have established that the default security slider will require code signatures and the Permissions manifest attribute. Code signatures are a common practice recommended in the industry because they help determine that the code your computer will run is the same code that the publisher created. This post is written to help users that need to use self-signed certificates without involving a public Certificate Authority.

    The role of self-signed certificates within a known community

    You may still use self-signed certificates within a known community. The difference between self-signed and purchased-from-CA is that your users must import your self-signed certificate to indicate that it is valid, whereas Certificate Authorities are already trusted by default. This works for known communities where people will trust that my certificate is mine, but does not scale widely where I cannot actually contact or know the systems that will need to trust my certificate. Public Certificate Authorities are widely trusted already because they abide by many different requirements and frequent checks. Examples of known communities would be students in a university class sharing their public certificates on a mailing list or web page, employees publishing on the intranet, or a system administrator rolling certificates out to end-users. Managed machines help this because you can automate the rollout, but they are not required – the major point is simply that people will trust and import your certificate.

    How to distribute self-signed certificates for a known community

    There are several steps required to distribute a self-signed certificate to users so that they will properly trust it. These steps are:

        1. Creating a public/private key pair for signing.
        2. Exporting your public certificate for others.
        3. Importing your certificate onto machines that should trust you.
        4. Verifying the work on a different machine.

    Creating a public/private key pair for signing

    Having a public/private key pair will give you the ability both to sign items yourself and to issue a Certificate Signing Request (CSR) to a certificate authority. Create your public/private key pair by following the instructions for creating key pairs. Every Certificate Authority that I looked at provided similar instructions, but for the sake of cohesiveness I will include the commands that I used here. Generate the key pair:

        keytool -genkeypair -alias erikcostlow -keyalg EC -keysize 571 -validity 730 -keystore javakeystore_keepsecret.jks

    Provide a good password for this file. The alias "erikcostlow" is my name and therefore easy to remember; substitute your name or something like "mykey." The keyalg of EC (Elliptic Curve) and keysize of 571 will give your key a good strong lifetime. All keys are set to expire; two years or 730 days is a reasonable compromise between not-long-enough and too-long. Most public Certificate Authorities will sign something for one to five years. You will be placing your keys in javakeystore_keepsecret.jks – this file will contain private keys and therefore should not be shared. If someone else gets these private keys, they can impersonate your signature. Please be cautious about automated cloud backup systems and private key stores.

    Answer all the questions. It is important to provide good answers because you will stick with them for the "-validity" days that you specified above:

        What is your first and last name?
          [Unknown]:  First Last
        What is the name of your organizational unit?
          [Unknown]:  Line of Business
        What is the name of your organization?
          [Unknown]:  MyCompany
        What is the name of your City or Locality?
          [Unknown]:  City Name
        What is the name of your State or Province?
          [Unknown]:  CA
        What is the two-letter country code for this unit?
          [Unknown]:  US
        Is CN=First Last, OU=Line of Business, O=MyCompany, L=City, ST=CA, C=US correct?
          [no]:  yes
        Enter key password for <erikcostlow>
                (RETURN if same as keystore password):

    Verify your work:

        keytool -list -keystore javakeystore_keepsecret.jks

    You should see your new key pair.

    Exporting your public certificate for others

    Public Key Infrastructure relies on two simple concepts: the public key may be made public and the private key must be private. By exporting your public certificate, you are able to share it with others who can then import the certificate to trust you.

        keytool -exportcert -keystore javakeystore_keepsecret.jks -alias erikcostlow -file erikcostlow.cer

    To verify this, you can open the .cer file by double-clicking it on most operating systems. It should show the information that you entered during the creation prompts. This is the file that you will share with others. They will use this certificate to prove that artifacts signed by this certificate came from you. If you do not manage machines directly, place the certificate file in an area that people within the known community should trust, such as an intranet page.

    Importing the certificate onto machines that should trust you

    In order to trust the certificate, people within your known network must import your certificate into their keystores. The first step is to verify that the certificate is actually yours, which can be done out of band: email, phone, in person, etc. Known networks can usually do this.

    Determine the right keystore:

        - For an individual user looking to trust another, the correct file is within that user’s directory, e.g. USER_HOME\AppData\LocalLow\Sun\Java\Deployment\security\trusted.certs
        - For system-wide installations, Java’s Certificate Authorities are in JAVA_HOME, e.g. C:\Program Files\Java\jre8\lib\security\cacerts

    File paths for Mac and Linux are included in the link above. Follow the instructions to import the certificate into the keystore:

        keytool -importcert -keystore THEKEYSTOREFROMABOVE -alias erikcostlow -file erikcostlow.cer

    In this case, I am still using my name for the alias because it’s easy for me to remember. You may also use an alias of your company name.

    Scaling distribution of the import

    The easiest way to apply your certificate across many machines is to just push the .certs or cacerts file onto them. When doing this, watch out for any changes that people would have made to this file on their machines.

        - trusted.certs: When publishing into user directories, your file will overwrite any keys that the user has added since the last update.
        - cacerts: It is best to re-run the import command with each installation rather than just overwriting the file. If you just keep the same cacerts file between upgrades, you will overwrite any CAs that have been added or removed. By re-importing, you stay up to date with changes.

    Verify work on a different machine

    Verification is a way of checking on the client machine to ensure that it properly trusts signed artifacts after you have added your signing certificate. Many people have started using deployment rule sets. You can validate the deployment rule set by:

        1. Creating and signing the deployment rule set on the computer that holds the private key.
        2. Copying the deployment rule set onto the different machine where you have imported the signing certificate.
        3. Verifying that the Java Control Panel’s Security tab shows your deployment rule set.

    Verifying an individual JAR file or multiple JAR files

    You can test a certificate chain by using the jarsigner command:

        jarsigner -verify filename.jar

    If the output does not say "jar verified" then run the following command to see why:

        jarsigner -verify -verbose -certs filename.jar

    Check the output for the term “CertPath not validated.”
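    As a small aside (an addition, not part of the original steps): before running the import, users can confirm that the certificate really is yours by comparing fingerprints out of band. keytool can print them from the exported file:

        keytool -printcert -file erikcostlow.cer

    Reading the fingerprint over the phone, or checking it against a value published on the trusted intranet page, guards against a swapped certificate.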

    Read the article

  • How to select the first occurrence in the auto-completion menu by pressing Enter?

    - by janoChen
    Every time a completion pop-up menu appears, I select the first occurrence and press Enter, but nothing happens (the word is not completed with the selected occurrence). The only way is to press Tab until you reach the term a second time. Is there a way of selecting the first occurrence by pressing Enter (or another Vim hotkey)? My .vimrc:

        " SHORTCUTS
        nnoremap <F4> :set filetype=html<CR>
        nnoremap <F5> :set filetype=php<CR>
        nnoremap <F3> :TlistToggle<CR>
        " press space to turn off highlighting and clear any message already displayed.
        nnoremap <silent> <Space> :nohlsearch<Bar>:echo<CR>
        " set buffers commands
        nnoremap <silent> <M-F8> :BufExplorer<CR>
        nnoremap <silent> <F8> :bn<CR>
        nnoremap <silent> <S-F8> :bp<CR>
        " open NERDTree with start directory: D:\wamp\www
        nnoremap <F9> :NERDTree /home/alex/www<CR>
        " open MRU
        nnoremap <F10> :MRU<CR>
        " open current file (silently)
        nnoremap <silent> <F11> :let old_reg=@"<CR>:let @"=substitute(expand("%:p"), "/", "\\", "g")<CR>:silent!!cmd /cstart <C-R><C-R>"<CR><CR>:let @"=old_reg<CR>
        " open current file in localhost (default browser)
        nnoremap <F12> :! start "http://localhost" file:///"%:p""<CR>
        " open Vim's default Explorer
        nnoremap <silent> <F2> :Explore<CR>
        nnoremap <C-F2> :%s/\.html/.php/g<CR>
        " REMAPPING
        " map leader to ,
        let mapleader = ","
        " remap ` to '
        nnoremap ' `
        nnoremap ` '
        " remap increment numbers
        nnoremap <C-kPlus> <C-A>
        " COMPRESSION
        function Js_css_compress ()
          let cwd = expand('<afile>:p:h')
          let nam = expand('<afile>:t:r')
          let ext = expand('<afile>:e')
          if -1 == match(nam, "[\._]src$")
            let minfname = nam.".min.".ext
          else
            let minfname = substitute(nam, "[\._]src$", "", "g").".".ext
          endif
          if ext == 'less'
            if executable('lessc')
              cal system( 'lessc '.cwd.'/'.nam.'.'.ext.' &')
            endif
          else
            if filewritable(cwd.'/'.minfname)
              if ext == 'js' && executable('closure-compiler')
                cal system( 'closure-compiler --js '.cwd.'/'.nam.'.'.ext.' > '.cwd.'/'.minfname.' &')
              elseif executable('yuicompressor')
                cal system( 'yuicompressor '.cwd.'/'.nam.'.'.ext.' > '.cwd.'/'.minfname.' &')
              endif
            endif
          endif
        endfunction
        autocmd FileWritePost,BufWritePost *.js :call Js_css_compress()
        autocmd FileWritePost,BufWritePost *.css :call Js_css_compress()
        autocmd FileWritePost,BufWritePost *.less :call Js_css_compress()
        " GUI
        " taglist right side
        let Tlist_Use_Right_Window = 1
        " hide tool bar
        set guioptions-=T
        " remove scroll bars
        set guioptions+=LlRrb
        set guioptions-=LlRrb
        " set the initial size of window
        set lines=46 columns=180
        " set default font
        set guifont=Monospace
        " set guifont=Monospace\ 10
        " show line number
        set number
        " set default theme
        colorscheme molokai-2
        " encoding
        set encoding=utf-8
        setglobal fileencoding=utf-8 bomb
        set fileencodings=ucs-bom,utf-8,latin1
        " SCSS syntax highlight
        au BufRead,BufNewFile *.scss set filetype=scss
        " LESS syntax highlight
        syntax on
        au BufNewFile,BufRead *.less set filetype=less
        " Haml syntax highlight
        "au! BufRead,BufNewFile *.haml
        "setfiletype haml
        " Sass syntax highlight
        "au! BufRead,BufNewFile *.sass
        "setfiletype sass
        " set filetype indent
        filetype indent on
        " for snipMate to work
        filetype plugin on
        " show breaks
        set showbreak=----->
        " coding format
        set tabstop=4
        set shiftwidth=4
        set linespace=1
        " CONFIG
        " set location of ctags
        let Tlist_Ctags_Cmd='D:\ctags58\ctags.exe'
        " keep the buffer around when left
        set hidden
        " enable matchit plugin
        source $VIMRUNTIME/macros/matchit.vim
        " folding
        set foldmethod=marker
        set foldmarker={,}
        let g:FoldMethod = 0
        map <leader>ff :call ToggleFold()<cr>
        fun! ToggleFold()
          if g:FoldMethod == 0
            exe 'set foldmethod=indent'
            let g:FoldMethod = 1
          else
            exe 'set foldmethod=marker'
            let g:FoldMethod = 0
          endif
        endfun
        " save and restore folds when a file is closed and re-opened
        "au BufWrite ?* mkview
        "au BufRead ?* silent loadview
        " auto-open NERDTree everytime Vim is invoked
        au VimEnter * NERDTree /home/alex/www
        " set omnicomplete
        autocmd FileType python set omnifunc=pythoncomplete#Complete
        autocmd FileType javascript set omnifunc=javascriptcomplete#CompleteJS
        autocmd FileType html set omnifunc=htmlcomplete#CompleteTags
        autocmd FileType css set omnifunc=csscomplete#CompleteCSS
        autocmd FileType xml set omnifunc=xmlcomplete#CompleteTags
        autocmd FileType php set omnifunc=phpcomplete#CompletePHP
        autocmd FileType c set omnifunc=ccomplete#Complete
        " Remove trailing white-space once the file is saved
        au BufWritePre * silent g/\s\+$/s///
        " Use CTRL-S for saving, also in Insert mode
        noremap <C-S> :update!<CR>
        vnoremap <C-S> <C-C>:update!<CR>
        inoremap <C-S> <C-O>:update!<CR>
        " DEFAULT
        set nocompatible
        source $VIMRUNTIME/vimrc_example.vim
        "source $VIMRUNTIME/mswin.vim
        "behave mswin
        " disable creation of swap files
        set noswapfile
        " no backups while editing
        set nowritebackup
        " disable creation of backups
        set nobackup
        " no file change pop up warning
        autocmd FileChangedShell * echohl WarningMsg | echo "File changed shell." | echohl None
        set diffexpr=MyDiff()
        function MyDiff()
          let opt = '-a --binary '
          if &diffopt =~ 'icase' | let opt = opt . '-i ' | endif
          if &diffopt =~ 'iwhite' | let opt = opt . '-b ' | endif
          let arg1 = v:fname_in
          if arg1 =~ ' ' | let arg1 = '"' . arg1 . '"' | endif
          let arg2 = v:fname_new
          if arg2 =~ ' ' | let arg2 = '"' . arg2 . '"' | endif
          let arg3 = v:fname_out
          if arg3 =~ ' ' | let arg3 = '"' . arg3 . '"' | endif
          let eq = ''
          if $VIMRUNTIME =~ ' '
            if &sh =~ '\<cmd'
              let cmd = '""' . $VIMRUNTIME . '\diff"'
              let eq = '"'
            else
              let cmd = substitute($VIMRUNTIME, ' ', '" ', '') . '\diff"'
            endif
          else
            let cmd = $VIMRUNTIME . '\diff'
          endif
          silent execute '!' . cmd . ' ' . opt . arg1 . ' ' . arg2 . ' > ' . arg3 . eq
        endfunction
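    For reference, a commonly used mapping (not part of the .vimrc above) that makes Enter accept the currently highlighted entry in the completion pop-up. pumvisible() guards ordinary Enter behaviour, and <C-y> confirms the selection:

        " accept the highlighted completion with Enter when the popup menu is visible
        inoremap <expr> <CR> pumvisible() ? "\<C-y>" : "\<CR>"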

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently”, I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line.

    SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy. This is where SQL Compare’s command line comes into its own.

    I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database; these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

        -- Create a DeploymentTargets database and a Servers table
        CREATE DATABASE DeploymentTargets
        GO
        USE DeploymentTargets
        GO
        CREATE TABLE [dbo].[Servers](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [serverName] [nvarchar](50) NULL,
            [environment] [nvarchar](50) NULL,
            [databaseName] [nvarchar](50) NULL,
            CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
        )
        GO
        -- Now insert your target server and database details
        INSERT INTO dbo.Servers ( serverName , environment , databaseName)
        VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1')
        INSERT INTO dbo.Servers ( serverName , environment , databaseName)
        VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2')

    Here’s the PowerShell script you can adapt for yourself as well.

        # We're holding the server names and database names that we want to deploy to in a database table.
        # We need to connect to that server to read these details
        $serverName = ""
        $databaseName = "DeploymentTargets"
        $authentication = "Integrated Security=SSPI"
        #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

        # Path to the scripts folder we want to deploy to the databases
        $scriptsPath = "SimpleTalk"

        # Path to SQLCompare.exe
        $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

        # Create SQL connection string, and connection
        $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
        $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString);

        # Create a Dataset to hold the DataTable
        $dataSet = new-object "System.Data.DataSet" "ServerList"

        # Create a query
        $query = "SET NOCOUNT ON;"
        $query += "SELECT serverName, environment, databaseName "
        $query += "FROM dbo.Servers; "

        # Create a DataAdapter to populate the DataSet with the results
        $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
        $dataAdapter.Fill($dataSet) | Out-Null

        # Close the connection
        $ServerConnection.Close()

        # Populate the DataTable
        $dataTable = new-object "System.Data.DataTable" "Servers"
        $dataTable = $dataSet.Tables[0]

        # For every row in the DataTable
        $dataTable | FOREACH-OBJECT {
            "Server Name: $($_.serverName)"
            "Database Name: $($_.databaseName)"
            "Environment: $($_.environment)"

            # Compare the scripts folder to the database and synchronize the database to match
            # NB. We have set SQL Compare to abort on medium level warnings.
            # The 'sync' parameter is commented out for safety.
            $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync")
            write-host $arguments
            & $SQLComparePath $arguments
            "Exit Code: $LASTEXITCODE"

            # Some interesting variations
            # Check that every database matches a folder.
            # For example this might be a pre-deployment step to validate everything is at the same baseline state,
            # or a post-deployment script to validate the deployment worked.
            # An exit code of 0 means the databases are identical.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

            # Generate a report of the difference between the folder and each database, and a SQL update script for each database.
            # For example use this after the above to generate upgrade scripts for each database.
            # Examine the warnings and the HTML diff report to understand how the script will change objects.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical")
        }

    It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse to multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script. You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings; see the second commented-out example in the PowerShell above.

    There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is output instead of executing the deployment script; see the first commented-out example above.

    Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment. You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.

    You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free. (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions.)
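    As a rough sketch of the pre-generated-script alternative mentioned above (the file name update.sql and the reuse of $dataTable are illustrative assumptions, not part of the original script), the same Servers table could drive sqlcmd:

        # Hypothetical: run one reviewed, pre-generated script against every target row
        $dataTable | FOREACH-OBJECT {
            # -E = Windows authentication, -b = abort the batch on the first error
            sqlcmd -S "$($_.serverName)" -d "$($_.databaseName)" -E -b -i "update.sql"
            "Exit Code: $LASTEXITCODE"
        }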

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again.

    The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas.

    What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once for the experience.

    Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically into the bin or setting light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal.

    Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.

    The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (We switched to the Helvetica font to look more ‘Bauhaus’.) The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic.

    In IT, we have had mixed experiences with complete re-writes. Lotus 123 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro, and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible. The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by a young Linus Torvalds, or the rewrite of BitKeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision.

    I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or of the occasional squaddie that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again. There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God.

    An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to hack into one’s own source control system and destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipe dream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source-control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?

    Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.

    Read the article

  • Wifi hotspot disconnected after some time

    - by Rohit Bansal
    I am trying to use my Ubuntu system as a Wifi hotspot, but for some reason the hotspot gets disconnected on its own. Searching for the solution, I found this help: Why is my ethernet connection connecting and disconnecting repeatedly? Reading through the above article I used the following command:

        sudo killall dnsmasq

    As a result I managed to keep the hotspot up for around 5-10 seconds before it disconnected, as against immediately. Here's the system log, in case needed (tail -f /var/log/syslog):

        Apr 1 23:31:42 NetworkManager[901]: <info> Starting dnsmasq...
        Apr 1 23:31:42 NetworkManager[901]: <info> (wlan0): device state change: ip-config -> activated (reason 'none') [70 100 0]
        Apr 1 23:31:42 dnsmasq[4159]: started, version 2.57 cachesize 150
        Apr 1 23:31:42 dnsmasq[4159]: compile time options: IPv6 GNU-getopt DBus I18N DHCP TFTP IDN
        Apr 1 23:31:42 dnsmasq-dhcp[4159]: DHCP, IP range 10.42.43.10 -- 10.42.43.100, lease time 1h
        Apr 1 23:31:42 dnsmasq[4159]: reading /etc/resolv.conf
        Apr 1 23:31:42 dnsmasq[4159]: using nameserver 220.226.6.104#53
        Apr 1 23:31:42 dnsmasq[4159]: using nameserver 220.226.100.40#53
        Apr 1 23:31:42 dnsmasq[4159]: cleared cache
        Apr 1 23:31:42 NetworkManager[901]: <info> Activation (wlan0) successful, device activated.
        Apr 1 23:31:42 NetworkManager[901]: <info> Activation (wlan0) Stage 5 of 5 (IP Configure Commit) complete.
        Apr 1 23:31:42 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP4 Configure Get) complete.
        Apr 1 23:31:42 dbus[885]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
        Apr 1 23:31:42 dbus[885]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'

    Connection established at this point... now disconnecting after 10 sec...

        Apr 1 23:31:52 ntpdate[4194]: adjust time server 91.189.94.4 offset -0.011589 sec
        Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): IP6 addrconf timed out or failed.
        Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP6 Configure Timeout) scheduled...
        Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP6 Configure Timeout) started...
        Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 5 of 5 (IP Configure Commit) started...
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 53 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 53 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 67 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 67 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --jump REJECT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --out-interface wlan0 --jump REJECT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --out-interface wlan0 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --source 10.42.43.0/255.255.255.0 --in-interface wlan0 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --destination 10.42.43.0/255.255.255.0 --out-interface wlan0 --match state --state ESTABLISHED,RELATED --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table nat --insert POSTROUTING --source 10.42.43.0/255.255.255.0 ! --destination 10.42.43.0/255.255.255.0 --jump MASQUERADE
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 53 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 53 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 67 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 67 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --jump REJECT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --out-interface wlan0 --jump REJECT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --out-interface wlan0 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --source 10.42.43.0/255.255.255.0 --in-interface wlan0 --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --destination 10.42.43.0/255.255.255.0 --out-interface wlan0 --match state --state ESTABLISHED,RELATED --jump ACCEPT
        Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table nat --insert POSTROUTING --source 10.42.43.0/255.255.255.0 ! --destination 10.42.43.0/255.255.255.0 --jump MASQUERADE
        Apr 1 23:32:01 NetworkManager[901]: <info> Starting dnsmasq...
        Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 5 of 5 (IP Configure Commit) complete.
        Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP6 Configure Timeout) complete.
        Apr 1 23:32:01 NetworkManager[901]: <warn> dnsmasq died with signal 9
        Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): device state change: activated -> failed (reason 'sharing-start-failed') [100 120 18]
        Apr 1 23:32:01 dnsmasq[4235]: started, version 2.57 cachesize 150
        Apr 1 23:32:01 dnsmasq[4235]: compile time options: IPv6 GNU-getopt DBus I18N DHCP TFTP IDN
        Apr 1 23:32:01 dnsmasq-dhcp[4235]: DHCP, IP range 10.42.43.10 -- 10.42.43.100, lease time 1h
        Apr 1 23:32:01 NetworkManager[901]: <warn> Activation (wlan0) failed for access point (Reppify Ubuntu)
        Apr 1 23:32:01 dnsmasq[4235]: reading /etc/resolv.conf
        Apr 1 23:32:01 dnsmasq[4235]: using nameserver 220.226.6.104#53
        Apr 1 23:32:01 dnsmasq[4235]: using nameserver 220.226.100.40#53
        Apr 1 23:32:01 dnsmasq[4235]: cleared cache
        Apr 1 23:32:01 NetworkManager[901]: <warn> Activation (wlan0) failed.
        Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): device state change: failed -> disconnected (reason 'none') [120 30 0]
        Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): deactivating device (reason 'none') [0]
        Apr 1 23:32:01 dbus[885]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
        Apr 1 23:32:01 dbus[885]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
        Apr 1 23:32:01 NetworkManager[901]: <error> [1333303321.565351] [nm-device-wifi.c:1815] nm_device_wifi_set_mode(): (wlan0): error setting mode 2
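    One diagnostic worth running (a suggestion, not part of the original post): the log ends with "dnsmasq died with signal 9", so it may help to check whether a stray dnsmasq instance is still bound to port 53 when NetworkManager spawns its own copy:

        pgrep -a dnsmasq                  # list every running dnsmasq with its full command line
        sudo netstat -lnp | grep ':53 '   # show which process currently owns the DNS port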

    Read the article

  • Merge replication stopping without errors in SQL 2008 R2

    - by Rob Farley
    A non-SQL MVP friend of mine, who also happens to be a client, asked me for some help again last week. I was planning on writing this up even before Rob Volk (@sql_r) listed his T-SQL Tuesday topic for this month. Earlier in the year, I (well, LobsterPot Solutions, although I’d been the person mostly involved) had helped out with a merge replication problem. The Merge Agent on the subscriber was just stopping every time, shortly after it started. With no errors anywhere – not in the Windows Event Log, the SQL Agent logs, not anywhere. We’d managed to get the system working again, but didn’t have a good reason for what had happened, and last week, the problem occurred again. I asked him about writing up the experience in a blog post, largely because of the red herrings that we encountered. It was an interesting experience for me, also because I didn’t end up touching my computer the whole time – just tapping on my phone via Twitter and Live Msgr.

    You see, the thing with replication is that a useful troubleshooting option is to reinitialise the thing. We’d done that last time, and it had started to work again – eventually. I say eventually, because the link being used between the sites is relatively slow, and it took a long while for the initialisation to finish. Meanwhile, we’d been doing some investigation into what the problem could be, and were suitably pleased when the problem disappeared.

    So I got a message saying that a replication problem had occurred again. Reinitialising wasn’t going to be an option this time either. In this scenario, the subscriber having the problem happened to be in a different domain to the publisher. The other subscribers (within the domain) were fine; just this one in a different domain had the problem.

    Part of the problem seemed to be a log file that wasn’t being backed up properly. They’d been trying to back up to a backup device that had a corruption, and the log file was growing. It turned out this wasn’t related to the problem, but of course, any time you’re troubleshooting and you see something untoward, you wonder.

    Having got past that problem, my next thought was that perhaps there was a problem with the account being used. But the other subscribers were using the same account, without any problems. The client pointed out that it was almost exactly six months since the last failure (later shown to be a complete red herring). It sounded like something might’ve expired. Checking through certificates and trusts showed no sign of anything, and besides, there wasn’t a problem running a command-prompt window using the account in question, from the subscriber box. ...except that when he ran the sqlcmd -E -S servername command I recommended, it failed with a Named Pipes error. I’ve seen problems with firewalls rejecting connections via Named Pipes but letting TCP/IP through, so I got him to look into SQL Configuration Manager to see what kind of connection was being preferred... Everything seemed fine. And strangely, he could connect via Management Studio. It turned out he had a typo in the servername of the sqlcmd command. That particular red herring must’ve been reflected in his cheeks as he told me.

    During this time, I also pinged a friend of mine to find out who I should ask, and Ted Kruger (@onpnt)’s name came up. Ted (and thanks again, Ted – really) reconfirmed some of my thoughts around the idea of an account expiring, and also suggested bumping up the logging to level 4 (2 is Verbose, 4 is undocumented ridiculousness).
    I’d just told the client to push the logging up to level 2, but the log file wasn’t appearing. Checking permissions showed that the user did have permission on the folder, but still no file was appearing. Then it was noticed that the user had been switched earlier as part of the troubleshooting, and switching it back to the real user caused the log file to appear. Still no errors. A lot more information being pushed out, but still no errors.

    Ted suggested making sure the FQDNs were okay from both ends, in case the servers were unable to talk to each other. DNS problems can lead to hassles which can stop replication from working. No luck there either – it was all working fine.

    Another server started to report a problem as well. These two boxes were both SQL 2008 R2 (SP1), while the others, still working, were SQL 2005. Around this time, the client tried an idea that I’d shown him a few years ago – using a Profiler trace to see what was being called on the servers. It turned out that the last call being made on the publisher was sp_MSenumschemachange. A quick interwebs search on that showed a problem that exists in SQL Server 2008 R2, when stored procedures have more than 4000 characters. Running that stored procedure (with the same parameters) manually on SQL 2005 listed three stored procedures, the first of which did indeed have more than 4000 characters. Still no error though, and the problem as listed at http://support.microsoft.com/kb/2539378 describes an error that should occur in the Event log. However, this problem is the type of thing that is fixed by a reinitialisation (because it doesn’t need to send the procedure change across as a transaction). And a look in the change history of the long stored procs (you all keep them, right?) showed that the problem from six months earlier could well have been down to this too.

    Applying SP2 (with sufficient paranoia about backups and how to get back out again if necessary) fixed the problem. The stored proc changes went through immediately after the service pack was applied, and it’s been running happily since.

    The funny thing is that I didn’t solve the problem. He had put the Profiler trace on the server, and had done the search that found a forum post pointing at this particular problem. I’d asked Ted too, and although he’d given some useful information, nothing that he’d come up with had actually been the solution either. Sometimes, asking for help is the most useful thing you can do. Often though, you don’t end up getting the help from the person you asked – the sounding board is actually what you need.

    @rob_farley
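    For anyone chasing the same symptom, here is a hedged T-SQL sketch (an editor-style aside, not from the post itself) that lists stored procedures whose definitions exceed the 4000-character threshold involved in the KB2539378 issue:

        -- Candidate procs for the sp_MSenumschemachange problem on a publisher
        -- (SQL Server 2008 R2 prior to SP2)
        SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
               OBJECT_NAME(m.object_id)        AS procedure_name,
               LEN(m.definition)               AS definition_length
        FROM sys.sql_modules AS m
        WHERE OBJECTPROPERTY(m.object_id, 'IsProcedure') = 1
          AND LEN(m.definition) > 4000
        ORDER BY definition_length DESC;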

    Read the article

  • Overriding with a new Ubuntu installation

    - by tkoomzaaskz
    I've got a ubuntu 11.10 which has lost its support in May 2013, now I'd like to reintall up to the most up-to-date LTS, which is 12.04. My question is regarding my current partitions and doing backups. Is there a safe way to backup my data on some local partitions instead of copying files into DVDs/external drives (this is very uncormortable in my situation). Following are system commands shoing my disk: $ lsblk NAME MAJ:MIN RM SIZE RO MOUNTPOINT sda 8:0 0 232,9G 0 +-sda1 8:1 0 48,8G 0 +-sda2 8:2 0 63G 0 +-sda3 8:3 0 1K 0 +-sda4 8:4 0 53,7G 0 / +-sda5 8:5 0 18,6G 0 +-sda6 8:6 0 25,5G 0 +-sda7 8:7 0 23,3G 0 [SWAP] sr0 11:0 1 1024M 0 and $ sudo fdisk -l [sudo] password for xyz: Disk /dev/sda: 250.1 GB, 250059350016 bytes glowic: 255, sektorów/sciezke: 63, cylindrów: 30401, w sumie sektorów: 488397168 Jednostka = sektorów, czyli 1 * 512 = 512 bajtów Rozmiar sektora (logiczny/fizyczny) w bajtach: 512 / 512 Rozmiar we/wy (minimalny/optymalny) w bajtach: 512 / 512 Identyfikator dysku: 0xc3ffc3ff Device Boot Beginning End Blocks ID System /dev/sda1 * 2048 102402047 51200000 7 HPFS/NTFS/exFAT /dev/sda2 215044096 347080703 66018304 7 HPFS/NTFS/exFAT /dev/sda3 347082750 488392064 70654657+ 5 Extended /dev/sda4 102402048 215042047 56320000 83 Linux /dev/sda5 395905923 434975939 19535008+ 83 Linux /dev/sda6 434976003 488392064 26708031 83 Linux /dev/sda7 347082752 395905023 24411136 82 Linux swap / Solaris In the beginning I had Windows Vista pre-installed with the machine when it was bought (damn!) and I installed linux (the one I have now). The windows-program in master boot record has been overriden by grub and now I can boot with both Windows and Linux. This is list of mounted devices: $ mount /dev/sda4 on / type ext4 (rw,errors=remount-ro,commit=0) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) fusectl on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev) gvfs-fuse-daemon on /home/tomasz/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tomasz) It's strange (I don't remember such thing) that my current linux uses only one partition (/dev/sda4). But, anyway, it seems like that. My final question is: am I able to use one of the existing linux partitions for a backup and install ubuntu 12.04 without removing neither windows nor ubuntu 11.04? I mean - will grub automatically accept both old windows vista and 2 linuxes (old 11.10 and "new" 12.04)? Is there any hidden operation done while installation that could harm my custom-backup-partition while installing? my fstab file: proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda4 during installation UUID=d44e89f5-9da2-48eb-83b3-887652ec95d2 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda7 during installation UUID=bbe50535-ba57-434a-9272-211d859f0e00 none swap sw 0 0 sda5 and sda6 are trash partitions created during unsuccessful linux installation (this was linux installation before my current installation), I didn't delete these partitions, but I have access to them (and I can use them as backup partitions). 
edit: a second question is: why does lsblk show /dev/sda having 232,9G while fdisk shows 250.1 GB? Where does the difference come from?
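    (A quick unit check, added for clarity rather than taken from the original post: the two tools are simply using different gigabytes, and the byte count printed by fdisk reconciles them.)

    250059350016 bytes / 10^9   = 250.1 GB  (decimal gigabytes, as printed by fdisk)
    250059350016 bytes / 1024^3 = 232.9 GiB (binary gibibytes, as printed by lsblk)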

    Read the article

  • Create and Backup Multiple Profiles in Google Chrome

    - by Asian Angel
    Other browsers such as Firefox and SeaMonkey allow you to have multiple profiles but not Chrome…at least not until now. If you want to use multiple profiles and create backups for them then join us as we look at Google Chrome Backup. Note: There is a paid version of this program available but we used the free version for our article. Google Chrome Backup in Action During the installation process you will run across this particular window. It will have a default user name filled in as shown here…you will not need to do anything except click on Next to continue installing the program. When you start the program for the first time this is what you will see. Your default Chrome Profile will already be visible in the window. A quick look at the Profile Menu… In the Tools Menu you can go ahead and disable the Start program at Windows Startup setting…the only time that you will need the program running is if you are creating or restoring a profile. When you create a new profile the process will start with this window. You can access an Advanced Options mode if desired but most likely you will not need it. Here is a look at the Advanced Options mode. It is mainly focused on adding Switches to the new Chrome Shortcut. The drop-down menu for the Switches available… To create your new profile you will need to choose: A profile location A profile name (as you type/create the profile name it will automatically be added to the Profile Path) Make certain that the Create a new shortcut to access new profile option is checked For our example we decided to try out the Disable plugins switch option… Click OK to create the new profile. Once you have created your new profile, you will find a new shortcut on the Desktop. Notice that the shortcut’s name will be Google Chrome + profile name that you chose. Note: On our system we were able to move the new shortcut to the “Start Menu” without problems. Clicking on our new profile’s shortcut opened up a fresh and clean looking instance of Chrome. Just out of curiosity we did decide to check the shortcut to see if the Switch set up correctly. Unfortunately it did not in this instance…so your mileage with the Switches may vary. This was just a minor quirk and nothing to get excited or upset over…especially considering that you can create multiple profiles so easily. After opening up our default profile of Chrome you can see the individual profile icons (New & Default in order) sitting in the Taskbar side-by-side. And our two profiles open at the same time on our Desktop… Backing Profiles Up For the next part of our tests we decided to create a backup for each of our profiles. Starting the wizard will allow you to choose between creating or restoring a profile. Note: To create or restore a backup click on Run Wizard. When you reach the second part of the process you can go with the Backup default profile option or choose a particular one from a drop-down list using the Select a profile to backup option. We chose to backup the Default Profile first… In the third part of the process you will need to select a location to save the profile to. Once you have selected the location you will see the Target Path as shown here. You can choose your own name for the backup file…we decided to go with the default name instead since it contained the backup’s calendar date. A very nice feature is the ability to have the cache cleared before creating the backup. We clicked on Yes…choose the option that best suits your needs. 
Once you have chosen either Yes or No the backup will then be created. Click Finish to complete the process. The backup file for our Default Profile at 14.0 MB in size. And the backup file for our Chrome Fresh Profile…2.81 MB. Restoring Profiles For the final part of our tests we decided to do a Restore. Select Restore and click Next to get the process started. In the second step you will need to browse for the Profile Backup File (and select the desired profile if you have created multiples). For our example we decided to overwrite the original Default Profile with the Chrome Fresh Profile. The third step lets you choose where to restore the chosen profile to…you can go with the Default Profile or choose one from the drop-down list using the Restore to a selected profile option. The final step will get you on your way to restoring the chosen profile. The program will conduct a check regarding the previous/old profile and ask if you would like to proceed with overwriting it. Definitely nice in case you change your mind at the last moment. Clicking Yes will finish the restoration. The only other odd quirk that we noticed while using the program was that the Next Button did not function after restoring the profile. You can easily get around the problem by clicking to close the window. Which one is which? After the restore process we had identical twins. Conclusion If you have been looking for a way to create multiple profiles in Google Chrome, then you might want to add this program to your system. Links Download Google Chrome Backup
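    If you would rather skip the helper program entirely, the multi-profile trick it automates is Chrome's standard --user-data-dir command-line switch; the install path and profile folder below are hypothetical, and --disable-plugins is the switch we experimented with above. A shortcut target along these lines launches a separate profile:

    "C:\Program Files\Google\Chrome\Application\chrome.exe" --user-data-dir="C:\ChromeProfiles\Fresh" --disable-plugins

    Each distinct --user-data-dir folder gets its own bookmarks, cache, and cookies, which is also what makes the folder-level backups shown above possible.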

    Read the article

  • SBS2003 to SBS2011 Migration - Installation Error

    - by Shawn Gradwell
    Microsoft Small Business Server 2003 to 2011 Migration. I followed the Migration Guide from Microsoft and the source server had no errors when running the various tests prior to the migration. I have completed the destination server setup using the Answer File and the server is up and running. It all looks good, I can access Exchange and AD and the only problem is the error message when you log in stating that the setup did not complete and to check the logs. Because all looks good I am continuing the migration to the destination server. I also have to state that this client does not use Sharepoint at all. Do I have to redo everything? Herewith the logs: [4992] 121016.225454.5905: Task: Starting Add User or Group access VSS registry. [4992] 121016.225454.7645: TaskManagement: In TaskScheduler.RunTasks(): The "ConfigureSharePointVSSRegistryTask" Task threw an Exception during the Run() call:System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) at System.Security.Principal.NTAccount.Translate(Type targetType) at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified) at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl) at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink) at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName) [4992] 121016.225454.7655: Setup: An error was encountered on the TME thread: System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) at System.Security.Principal.NTAccount.Translate(Type targetType) at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified) at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl) at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink) at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName) at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e) [4956] 121016.225455.0685: Setup: _UnhandledExceptionHandler: Setup encountered an error: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Reflection.TargetInvocationException: The TME thread failed (see the inner exception). ---> System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. 
at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) at System.Security.Principal.NTAccount.Translate(Type targetType) at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified) at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl) at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink) at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName) at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e) at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) --- End of inner exception stack trace --- at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter.TasksCompleted(Object sender, RunWorkerCompletedEventArgs e) --- End of inner exception stack trace --- at System.RuntimeMethodHandle._InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeType typeOwner) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks) at System.Delegate.DynamicInvokeImpl(Object[] args) at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme) at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme) at System.Windows.Forms.Control.InvokeMarshaledCallbacks() at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardChainEngine.Launch() at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass._LaunchWizard() at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.RealMain(String[] args) at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.Main(String[] args) [4956] 121016.225455.0865: Setup: Removed the password. 
[4956] 121016.225455.0905: Setup: Deleting scheduled task at path Microsoft\Windows\Windows Small Business Server 2011 Standard with name Setup [4956] 121016.225455.8055: Setup: Removed SBSSetup from the RunOnce.
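    For context on what the trace is saying (general .NET behaviour, not taken from the SBS logs themselves): IdentityNotMappedException is what NTAccount.Translate throws when a Windows account or group name cannot be resolved to a SID, so ConfigureSharePointVSSRegistryTask is most likely looking up an account or group that does not exist after the migration. A minimal sketch of the failing pattern (the MYDOMAIN\NoSuchGroup name is made up):

    using System;
    using System.Security.Principal;

    class TranslateDemo
    {
        static void Main()
        {
            // A resolvable name translates to a SID without trouble.
            var sid = new NTAccount(@"BUILTIN\Administrators")
                          .Translate(typeof(SecurityIdentifier));
            Console.WriteLine(sid);   // S-1-5-32-544

            try
            {
                // A name missing from the directory throws the same
                // exception the SBS 2011 setup task is logging.
                new NTAccount(@"MYDOMAIN\NoSuchGroup")
                    .Translate(typeof(SecurityIdentifier));
            }
            catch (IdentityNotMappedException ex)
            {
                Console.WriteLine(ex.Message);
                // "Some or all identity references could not be translated."
            }
        }
    }

    If that is what is happening here, the usual direction is to compare the accounts and groups the task expects (SharePoint- and VSS-related ones) against what actually migrated from the SBS 2003 domain.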

    Read the article

  • H1 Visa interview tips–What you must know before attending the interview?

    - by Gopinath
    USA’s H1 visa allows highly qualified professionals from other countries to work in America. Many IT professionals in India aspire to go to the USA on an H1 and work for their clients. Recently I had a chance to study the H1 visa process to help one of my friends, and I would like to share what I learned. With the assumption that your H1 petition is approved and you have an interview scheduled at the US Embassy for your visa stamping, here are the tips you must know before attending the interview. Dress Code – Formals Say no to casuals or any fancy dress when you attend the interview. It’s not a party or a friend’s home you are visiting. Consider the H1 visa interview as a job interview and dress up in formals. There is no option B for you: you must be in formals. A plain formal shirt with matching trousers is suggested for men. A tie and suit are not required, but if you are a professional at management level you can consider wearing a suit. Women can wear either a formal salwar or a formal pant-shirt. Avoid heavy jewellery; wear only what is a must as per your tradition or culture. Body Language – Smile on your face Your body language reflects what you are and what’s going on in your mind. Don’t be nervous or restless; be relaxed and wear a beautiful smile on your face. A smile is a curve that sets everything straight. When you are called for the interview, greet the interviewer with a beautiful smile. Say Good Morning/Afternoon/Evening depending on the time you are visiting them. Whenever appropriate, say Thank You. Generally American professionals are very friendly people and they reciprocate your greetings. Make sure that you make them comfortable to start the interview. Carry original documents in a separate folder I don’t want to talk much about the documents that are required for your H1B interview, as it’s a big subject on its own and requires a separate post. I assume that your consultant or employer helped you in gathering all the required documents like – petition, DS 160 forms, education & job related documents, resume, interview call letters, client letters, etc. For all the documents you are going to submit at the interview, make sure that you have the originals in a separate folder. If required, the interviewer may ask you to show the original of any document you submitted for visa processing. Don’t mix the original documents with the documents you need to submit for the interview; have a separate folder for them. Those going for stamping along with their spouse and children need to carry a few extra original documents like – marriage certificate, marriage photos (30 numbers)/album, birth certificates, passports, and the education and profession related certificates of the spouse and children. Know your role & responsibilities The interviewer will ask you questions on your roles and responsibilities at the client location. Be clear about your day to day tasks at the client place and be prepared to face detailed questions on them. When asked, explain clearly, and also make sure what you say is in line with what is mentioned in your petition and client invitation letter. At times they may ask you questions specific to the project/technology you are going to work on, so doing some homework in this area will help you answer the questions easily. Failing to answer basic questions on your role & responsibilities may result in rejection.
    You work for your Employer at the Client location but NOT FOR THE CLIENT One of the important things to keep in mind is that you work for your employer and are being deputed to the client location on a work visa. Your employer is going to be solely responsible for your salary, work, promotion, pay hikes or whatsoever during your stay in the USA. Your client will not be responsible for anything. Let’s say you are employed with Company X in India and they are applying for an H1B for you to work at your client (e.g. Microsoft) in the USA: you must keep in mind that Microsoft is not your employer. Microsoft will not pay your salary or be responsible for any employment related activities. Company X will be solely responsible for all your employment related activities. If you don’t get this right and tell the visa interviewer that your client is responsible, then you may get into trouble. Know your client It’s always good to know the clients with whom you are going to work in the USA and their business. If your client is a well-known organisation then you may not get many questions from the interviewer; otherwise you need to be well prepared to provide details like – nature of business, location, size of the organisation, etc. Get to know the basic details about your client and be confident while providing those details to the interviewer. Also make sure that you never talk about any confidential details of your client’s projects and business. Revealing confidential details of your client may land your job itself in the soup. Make sure that your spouse is also in sync with you If you’ve applied for an H4 visa for your spouse along with your H1, make sure that your spouse is in sync with you. Your spouse should know the basic details of your job, your employer, your client and the location where you will be travelling. Your spouse should also be prepared to answer questions related to marriage, their profession (if working), kids, education, etc. Interviewers will try to assess your spouse’s communication skills and whereabouts while staying in the USA, and whether they would prefer to work in the USA or not. H4 is a dependent visa: your spouse is not allowed to work in the USA, and at no point should your spouse show any intention of searching for work in the USA. Less luggage, more comfort You will definitely have heard that there are a lot of restrictions on what you can carry along with you to a US Embassy while attending the interview. To be frank, "many restrictions" is an understatement: there are a hell of a lot of them, and it’s for the safety of everyone. You are not allowed to carry mobile phones, CD/DVDs, USBs, bank cards, cameras, cosmetics, food (except baby food), water, wallets, backpacks, sealed covers, etc. Trust me, most of the things we carry with us every day are not allowed inside. As there are hundreds of restrictions, it is easier to understand what you can carry along with you and carry just that. Ask your employer/consultant to provide you a checklist of items that you can carry. Most of what you would require is:
    H1B related documents provided by the employer/consultant
    Photographs
    All original documents supporting your H1B
    Passports
    Some cash for your travel expenses (avoid coins)
    Any important phone numbers / details written on paper (like your cab driver’s number, etc.)
    If you carry restricted items, you will be stopped at the security checks and will have to find someone who can safely keep them for you.
Due to the heavy restrictions in and around the US Embassy you will not find any place to keep your luggage, so carry just the bare minimum and you will feel much more comfortable. Useful Links THE U.S. NON IMMIGRANT VISA APPLICATION PROCESS U.S VISA SECURITY REGULATIONS GENERAL FAQS Hope this information is helpful to you, and best of luck for your interview. Creative Commons image credit: Flickr/alexfrance, vinothchandar, hughelectronic, architratan, striatic

    Read the article

  • Windows 7 sometimes boots in VGA mode

    - by TuxRug
    I have an Asus G50VT-x5 laptop with nVidia GeForce 9800M-GS graphics. Most of the time Windows boots normally, but about 20% of the time (a rough estimate) it will boot with the fallback VGA driver, maxing out at 800x600 with no Aero. I've checked the system logs and there is nothing indicating an error loading the nVidia driver. The logs even state that the Nvidia Display Driver service started successfully, even though the system has booted in safe graphics mode. This has been happening for a while, but it's happening a little more often now than it was before. Since the first time my system exhibited this behavior, I have updated my graphics driver a handful of times. I used System Information for Windows (SIW) to check for problems there, but the only thing that stood out was the following: Core Temperature 4486449 °C (8075639 °F) Shaders Temperature 1171513530 °C (2108724330 °F) I know this reading is incorrect, because my laptop is nowhere near the surface of the sun and my desk has not burst into flames. When it's operating normally, I get a sane reading like [Core Temperature 58 °C (136 °F)] with no Shaders Temperature listed. All I have to do to resolve the issue is reboot. I have seen no stability issues with the graphics or anything else. A long time ago, I had an issue with this computer where my framerate would suddenly drop during a 3D game from 40fps to <1fps; after looking at the temperature readout immediately after quitting a game, I removed the bottom panel and blew the dust out of the vent and heatsink. Since then I have had no drops in framerate in any situation. I have uploaded a zip containing the SIW reports for when the problem is occurring and when the computer is operating normally. I don't have a paid account, so it can only be downloaded 10 times; please only download the reports if you think you can use them. If you try to download the reports and they are no longer available, please comment and I will re-upload them. If you want to look at the files, they are on Rapidshare. EDIT It happened again, and I looked a little deeper into the System logs. When this happens, there are a lot of errors about other device drivers being unable to start. All of these errors are for PnP drivers. Also, my USB keyboard and mouse take a few moments before they actually start working, although this sometimes happens on a normal boot as well. I am quite sure this is related, so I am adding the pnp tag. Also, CHKDSK will not run on boot. Even if a check is scheduled or a volume is manually set as dirty, CHKDSK will be skipped entirely, not even leaving an entry in the System logs. I tried running CHKNTFS /D, which did not work. I then manually changed my HKLM\System\CurrentControlSet\Control\Session Manager BootExecute value to the default listed on Microsoft's website. That did not work either. I ended up booting to repair mode and running CHKDSK there, which found a number of minor inconsistencies on my system drive, but none on my data drive. I have no idea if this is related. Some more information for those who don't download my SIW report file: my antivirus and firewall are ESET Smart Security, and I have three different virtualization programs installed: VMware Player, Windows Virtual PC, and VirtualBox. The network adapters for these show up in the log of failed device starts. EDIT 2 I tried running sfc /scannow, which reported that it found corrupted files that could not be fixed. The CBS log is extremely cryptic.
I tried booting to my install disk, launching repair mode, and doing an offline sfc from there, which produced the same result.
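    For anyone comparing notes, these are the standard (documented) Windows commands and the stock registry default touched on above; nothing here is specific to this machine:

    rem Is the volume's dirty bit set?
    fsutil dirty query C:

    rem Is C: scheduled for an autochk at the next boot?
    chkntfs C:

    rem Restore the default boot-time checking behaviour (CHKNTFS /D)
    chkntfs /d

    rem The default BootExecute value under
    rem HKLM\SYSTEM\CurrentControlSet\Control\Session Manager is:
    rem     autocheck autochk *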

    Read the article

  • CodePlex Daily Summary for Saturday, June 16, 2012

    CodePlex Daily Summary for Saturday, June 16, 2012
    Popular Releases
    Cosmos (C# Open Source Managed Operating System): Release 92560: Prerequisites Visual Studio 2010 - Any version including Express. Express users must also install Visual Studio 2010 Integrated Shell runtime VMWare - Cosmos can run on real hardware as well as other virtualization environments but our default debug setup is configured for VMWare. VMWare Player (Free). or Workstation VMWare VIX API 1.11
    AutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.1: Release Notes *New feature added that allows user to select remind later interval.
    Sumzlib: API document: API document
    Microsoft SQL Server Product Samples: Database: AdventureWorks 2008 OLTP Script: Install AdventureWorks2008 OLTP database from script The AdventureWorks database can be created by running the instawdb.sql DDL script contained in the AdventureWorks 2008 OLTP Script.zip file. The instawdb.sql script depends on two path environment variables: SqlSamplesDatabasePath and SqlSamplesSourceDataPath. The SqlSamplesDatabasePath environment variable is set to the default Microsoft ® SQL Server 2008 path. You will need to change the SqlSamplesSourceDataPath environment variable to th...
    HigLabo: HigLabo_20120613: Bug fix HigLabo.Mail Decode header encoded by CP1252
    Jasc (just another script compressor): 1.3.1: Updated Ajax Minifier to 4.55.
    WipeTouch, a jQuery touch plugin: 1.2.0: Changes since 1.1.0: New: wipeMove event, triggered while moving the mouse/finger. New: added "source" to the result object. Bug fix: sometimes vertical wipe events would not trigger correctly. Bug fix: improved tapToClick handler. General code refactoring. Windows Phone 7 is not supported, yet! Its behaviour is completely broken and would require some special tricks to make it work. Maybe in the future...
    Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3026 (June 2012): Fixes: round( 0.0 ) local TimeZone name TimeZone search compiling multi-script-assemblies PhpString serialization DocDocument::loadHTMLFile() token_get_all() parse_url()
    BlackJumboDog: Ver5.6.4: 2012.06.13 Ver5.6.4 (1) Web???????、???POST??????????????????
    Yahoo! UI Library: YUI Compressor for .Net: Version 2.0.0.0 - Ferret: - Merging both 3.5 and 2.0 codebases to a single .NET 2.0 assembly. - MSBuild Task. - NAnt Task.
    Bumblebee: Version 0.3.1: Changed default config values to decent ones. Restricted visibility of Hive.fs to internal. Added some XML documentation. Added Array.shuffle utility. The dll is also available on NuGet My apologies, the initial source code referenced was missing one file which prevented it from building The source code contains two examples, one in C#, one in F#, illustrating the usage of the framework on the Travelling Salesman Problem: Source Code
    SharePoint XSL Templates: SPXSLT 0.0.9: Added new template FixAmpersands. Fixed the contents of the MultiSelectValueCheck.xsl file, which was missing the stylesheet wrapper.
    ExcelFileEditor: .CS File: nothing
    BizTalk Scheduled Task Adapter: Release 4.0: Works with BizTalk Server 2010. Compiled in .NET Framework 4.0. This new version contains small improvements compared to the current version (3.0). We can highlight the following improvements or changes: 24 hours support in “start time” property. Previous versions had an issue with setting the start time, as it showed a 12-hour clock with no AM/PM. Daily scheduler review. Solved a small bug on Daily Properties: unable to switch between “Every day” and “on these days” Installation e...
    Weapsy - ASP.NET MVC CMS: 1.0.0 RC: - Upgrade to Entity Framework 4.3.1 - Added AutoMapper custom version (by nopCommerce Team) - Added missed model properties and localization resources of Plugin Definitions - Minor changes - Fixed some bugs
    Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0 Beta: Catalog and Publication reviews and ratings Store language packs in data base Improve reporting system Improve Import/Export system A lot of WebAdmin app UI improvements Initial implementation of the WebForum app DB indexes Improve and simplify architecture Less abstractions Modernize architecture Improve, simplify and unify API Simplify and improve testing A lot of new unit tests Codebase refactoring and ReSharpering Utilize Castle Windsor Utilize NHibernate ORM ...
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.55: Properly handle IE extension to CSS3 grammar that allows for multiple parameters to functional pseudo-class selectors. add new switch -braces:(new|same) that affects where opening braces are placed in multi-line output. The default, "new" puts them on their own new line; "same" outputs them at the end of the previous line. add new optional values to the -inline switch: -inline:(force|noforce), which can be combined with the existing boolean value via comma-separators; value "force" (which...
    Microsoft Media Platform: Player Framework: MMP Player Framework 2.7 (Silverlight and WP7): Additional Downloads: SMFv2.7 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simply editing a configuration file - no need to co...
    Liberty: v3.2.1.0 Release 10th June 2012: Change Log -Added -Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. -Reach Mass biped max health and shield changer -Fixed -H3/ODST Fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2) -Reach Made some tag names clearer and more consistent between m...
    Media Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! Major effort has been for fixing trailer downloads, plus a little bit of work for episode guide tag in TV show NFOs.
    New Projects
    .NinJa (dotNinja): An extensive JavaScript Framework revolving around principles found in .NET and aiming to integrate full Intellisense support.
    bab-rizg: solve unemployment problem
    BizTalk Multi-part Message Attachments Zipper Pipeline Component: This pipeline component replaces all attachments of a multi-part message, in a send pipeline, for its zipped equivalent.
    Boggle.Net: A basic implementation of Boggle for WPF.
    CFScript: CFScript is an ANT-like scripting system for Compact Framework. Tasks like copying files, setting registry values or installing CAB files can be done with CFScript.
    Diablo3: Diablo3
    Dygraphs.NET: Dygraphs.NET
    Dynamics CRM plugin for nopCommerce: This plugin is a bridge between nopCommerce and Dynamics CRM.
    nms.gaming: Place holder
    Project Bright Star: Project Bright Star. Deal with it.
    RDFSharp: RDFSharp is a library designed to ease the development of .NET applications based on the RDF and Semantic Web data model.
    SlamCMS: An application framework that allows you to build content managed sites leveraging SharePoint 2010 for publishing with tools to query and manifest your data.
    test02: no

    Read the article

  • How to Recover From a Virus Infection: 3 Things You Need to Do

    - by Chris Hoffman
    If your computer becomes infected with a virus or another piece of malware, removing the malware from your computer is only the first step. There’s more you need to do to ensure you’re secure. Note that not every antivirus alert is an actual infection. If your antivirus program catches a virus before it ever gets a chance to run on your computer, you’re safe. If it catches the malware later, you have a bigger problem. Change Your Passwords You’ve probably used your computer to log into your email, online banking websites, and other important accounts. Assuming you had malware on your computer, the malware could have logged your passwords and uploaded them to a malicious third party. With just your email account, the third party could reset your passwords on other websites and gain access to almost any of your online accounts. To prevent this, you’ll want to change the passwords for your important accounts — email, online banking, and whatever other important accounts you’ve logged into from the infected computer. You should probably use another computer that you know is clean to change the passwords, just to be safe. When changing your passwords, consider using a password manager to keep track of strong, unique passwords and two-factor authentication to prevent people from logging into your important accounts even if they know your password. This will help protect you in the future. Ensure the Malware Is Actually Removed Once malware gets access to your computer and starts running, it has the ability to do many more nasty things to your computer. For example, some malware may install rootkit software and attempt to hide itself from the system. Many types of Trojans also “open the floodgates” after they’re running, downloading many different types of malware from malicious web servers to the local system. In other words, if your computer was infected, you’ll want to take extra precautions. You shouldn’t assume it’s clean just because your antivirus removed what it found. It’s probably a good idea to scan your computer with multiple antivirus products to ensure maximum detection. You may also want to run a bootable antivirus program, which runs outside of Windows. Such bootable antivirus programs will be able to detect rootkits that hide themselves from Windows and even the software running within Windows. avast! offers the ability to quickly create a bootable CD or USB drive for scanning, as do many other antivirus programs. You may also want to reinstall Windows (or use the Refresh feature on Windows 8) to get your computer back to a clean state. This is more time-consuming, especially if you don’t have good backups and can’t get back up and running quickly, but this is the only way you can have 100% confidence that your Windows system isn’t infected. It’s all a matter of how paranoid you want to be. Figure Out How the Malware Arrived If your computer became infected, the malware must have arrived somehow. You’ll want to examine your computer’s security and your habits to prevent more malware from slipping through in the same way. Windows is complex. For example, there are over 50 different types of potentially dangerous file extensions that can contain malware to keep track of. We’ve tried to cover many of the most important security practices you should be following, but here are some of the more important questions to ask: Are you using an antivirus? – If you don’t have an antivirus installed, you should. 
If you have Microsoft Security Essentials (known as Windows Defender on Windows 8), you may want to switch to a different antivirus like the free version of avast!. Microsoft’s antivirus product has been doing very poorly in tests. Do you have Java installed? – Java is a huge source of security problems. The majority of computers on the Internet have an out-of-date, vulnerable version of Java installed, which would allow malicious websites to install malware on your computer. If you have Java installed, uninstall it. If you actually need Java for something (like Minecraft), at least disable the Java browser plugin. If you’re not sure whether you need Java, you probably don’t. Are any browser plugins out-of-date? – Visit Mozilla’s Plugin Check website (yes, it also works in other browsers, not just Firefox) and see if you have any critically vulnerable plugins installed. If you do, ensure you update them — or uninstall them. You probably don’t need older plugins like QuickTime or RealPlayer installed on your computer, although Flash is still widely used. Are your web browser and operating system set to automatically update? – You should be installing updates for Windows via Windows Update when they appear. Modern web browsers are set to automatically update, so they should be fine — unless you went out of your way to disable automatic updates. Using out-of-date web browsers and Windows versions is dangerous. Are you being careful about what you run? – Watch out when downloading software to ensure you don’t accidentally click sketchy advertisements and download harmful software. Avoid pirated software that may be full of malware. Don’t run programs from email attachments. Be careful about what you run and where you get it from in general. If you can’t figure out how the malware arrived because everything looks okay, there’s not much more you can do. Just try to follow proper security practices. You may also want to keep an extra-close eye on your credit card statement for a while if you did any online-shopping recently. As so much malware is now related to organized crime, credit card numbers are a popular target.     

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically into the bin or setting a light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user-interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (We switched to the Helvetica font to look more ‘Bauhaus’.) The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences of complete re-writes. Lotus 123 never really recovered from a complete rewrite from assembler into C; Borland made the same mistake with Arago and Quattro Pro; and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible. The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to some motorcyclists’ idea that they are operating on infinite lives, or the occasional squaddie’s belief that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again. There was a time when one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to hack into one’s own source control system and destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?
Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source-code.

    Read the article

  • Reviewing Retail Predictions for 2011

    - by David Dorf
    I've been busy thinking about what 2012 and beyond will look like for retail, and I have some interesting predictions to share. But before I go there, let’s first review this year’s predictions before making new ones for 2012. 1. Alternate Payments We've seen several alternate payment schemes emerge over the last two years, and 2011 may be the year one of them takes hold. Any competition that can drive down fees will be good for everyone. I'm betting that Apple will add NFC chips to their next version of the iPhone, then enable payments in stores using iTunes accounts on the backend. Paypal will continue to make inroads, and Isis will announce a pilot. The iPhone 4S did not contain an NFC chip, so we’ll have to continue waiting for the iPhone 5. PayPal announced it’s moving into in-store payments, and Google launched its wallet in selected cities. Overall I think the payment scene is heating up and that trend will continue. 2. Engineered Systems The industry is moving toward purpose-built appliances that are optimized across the entire stack. Oracle calls these "engineered systems" and the first two examples are Exadata and Exalogic, but there are other examples from other vendors. These are particularly important to the retail industry because of the volume of data that must be processed. There should be continued adoption in 2011. Oracle reports that Exadata is its fastest-growing product, and at the recent OpenWorld it announced the SuperCluster and Exalytics products, both continuing the engineered systems trend. SAP’s HANA continues to receive attention, and IBM also seems to be moving in this direction. 3. Social Analytics There are lots of tools that provide insight into how a brand is perceived across popular internet sites, but as far as I know, these tools are not industry specific. The next step needs to mine the data and determine how it should influence retail operations. The data needs to help retailers determine how they create promotions, which products to stock, and how to keep consumers engaged. Social data alone does not provide the answers, but it’s one more data point that will help retailers make better decisions. Look for some vendor consolidation to help make this happen. In March, Salesforce.com acquired leading social monitoring vendor Radian6 and followed up with acquisitions of Heroku and Model Metrics. The notion of Social CRM seems to be going more mainstream now. 4. 2-D Barcodes Look for more QRCodes on shelf-tags, in newspaper circulars, and on billboards. It's a great portal from the physical world into the digital one that buys us time until augmented reality matures further. Nobody wants to type "www", backslash, and ".com" on their phones. QRCodes are everywhere. ‘Nuff said. 5.
In the words of Microsoft, "To the Cloud!" My favorite "cloud application" is Evernote. If you take notes on your work laptop, you will inevitably need those notes on your home PC. And if you manage to solve that problem, you'll need to access them from your mobile phone. Evernote stores your notes in the cloud and provides easy ways to access them. Being able to access a service from anywhere and not having to worry about backups, upgrades, etc. is great. Retailers will start to rely on cloud services, both public and private, in the coming year. There was no shortage of announcements in this area: Amazon’s cloud-based Kindle Fire, Apple’s iCloud, Oracle’s Public Cloud, etc. I saw an interesting presentation showing how BevMo moved their systems to the cloud. It seems like retailers are starting to consider the cloud for specific uses. 6. F-Commerce Move over "E" and "M" so we can introduce "F-Commerce," which should go mainstream in 2011. Already several retailers have created small stores on Facebook, and it won't be long before Facebook becomes a full-fledged channel in the omni-channel world of retail. The battle between Facebook and Google will heat up over retail, where both stand to make lots of money. JCPenney and ASOS both put their entire catalogs on Facebook, and lots of other retailers have connected Facebook to their e-commerce sites. I still think selling from the newsfeed is the best approach, and several retailers are trying that approach as well. I just don’t see Google+ as a threat to Facebook, so I think that battle is over. I called 2011 The Year of F-Commerce, and that was probably accurate. It’s good to look back at predictions, but we also have to think about what was missed. I didn't see Amazon entering the tablet business with such a splash, although in hindsight it was obvious. Nor did I think HP would fall so far so fast. Look for my 2012 predictions coming soon.

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data Have you been hearing about “big data”, “map reduce” and other large scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a “test” Hadoop environment on your machine? More recently, I have read about some of Microsoft’s work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell, might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment. Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.
    C:> winrm quickconfig
    WinRM is not set up to receive requests on this machine.
    The following changes must be made:
    Set the WinRM service type to auto start.
    Start the WinRM service.
    Make these changes [y/n]? y
    Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node. Invoke-MapRedux Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:
    $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose
    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined. Map Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows.
    A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.
    $aMap =
    {
        Param
        (
            [PsObject] $dataset
        )
        # Indicate the job is running on the remote node.
        Write-Host ($env:computername + "::Map");
        # The hashtable to return
        $list = @{};
        # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
        # ... The $dataset has a single 'Data' property which contains an array of data rows
        #     which is a subset of the originally submitted data set.
        # Return the hashtable (Key, PSObject)
        Write-Output $list;
    }
    Reduce Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.
    $aReduce =
    {
        Param
        (
            [object] $key,
            [PSObject] $dataset
        )
        Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
        # The hashtable to return
        $redux = @{};
        # Return
        Write-Output $redux;
    }
    All Together Now When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started:
    # Import the MapRedux and SQL Server providers
    Import-Module "MapRedux"
    Import-Module "sqlps" -DisableNameChecking

    # Query the database for a dataset
    Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
    $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
    Write-Host "Query: $query"
    $dataset = Invoke-SqlCmd -query $query

    # Build the Map function
    $MyMap =
    {
        Param
        (
            [PsObject] $dataset
        )
        Write-Host ($env:computername + "::Map");
        $list = @{};
        foreach($row in $dataset.Data)
        {
            # Write-Host ("Key: " + $row.MyKey.ToString());
            if($list.ContainsKey($row.MyKey) -eq $true)
            {
                $s = $list.Item($row.MyKey);
                $s.Sum += $row.Value1;
                $s.Count++;
            }
            else
            {
                $s = New-Object PSObject;
                $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                $s | Add-Member -type NoteProperty -Name Sum -Value $row.Value1;
                $list.Add($row.MyKey, $s);
            }
        }
        Write-Output $list;
    }

    $MyReduce =
    {
        Param
        (
            [object] $key,
            [PSObject] $dataset
        )
        Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
        $redux = @{};
        $count = 0;
        foreach($s in $dataset.Data)
        {
            $sum += $s.Sum;
            $count += 1;
        }
        # Reduce
        $redux.Add($s.MyKey, $sum / $count);
        # Return
        Write-Output $redux;
    }

    # Create the item data
    $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce
    # Array of processing nodes...
    $MyNodes = ("node1", "node2", "node3", "node4", "localhost")
    # Run the Map Reduce routine...
    $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose
    # Show the results
    Set-Location C:\
    $MyMrResults | Out-GridView
    Conclusion I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
Conclusion

I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done, and PowerShell could prove to be the easiest platform on which to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.

Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading! Daniel

    Read the article

  • md/raid:md2: cannot start dirty degraded array, kernel panic

    - by nl-x
After having made use of a remote power switch, my server did not come back online. When I went to the datacenter and rebooted the computer on the spot, I saw the server booting (the CentOS progress bar ran almost all the way to the end) and eventually giving the following messages:

    md/raid:md2: cannot start dirty degraded array.
    md/raid:md2: failed to run raid set.
    md: pers->run() failed ...
    md/raid:md2: cannot start dirty degraded array.
    md/raid:md2: failed to run raid set.
    md: pers->run() failed ...
    Kernel panic - not syncing: Attempted to kill init!
    Pid: 1, comm: init not tainted 2.6.32-279.1.1.el6.i686 #1
    Call Trace:
    [<c083bfbc>] ? panic+0x68/0x11c
    [<c045a501>] ? do_exit+0x741/0x750
    [<c045a54c>] ? do_group_exit+0x3c/0xa0
    [<c045a5c1>] ? sys_exit_group+0x11/0x20
    [<c083eba4>] ? syscall_call+0x7/0xb
    [<c083007b>] ? cmos_wake_setup+0x62/0x112

The server runs CentOS and has software RAID, and I don't have backups of the RAID settings. The only backup I have is of /home and the database dumps. (Glad to at least have those, though.) Since the server is an old Dell PowerEdge 1750 with no CD-ROM drive, I have no way of booting the machine from a boot disk. I also remember from the past that the server wouldn't boot from a bootable USB disk either. So the only way I know how to boot the server is to go to the datacenter, pick up the server and take it to the office, screw open the server, attach a CD-ROM drive to an IDE slot on the motherboard, and then boot it. I am hoping you guys could help me avoid this.

I have looked a bit through the boot options and found the following entries when CentOS is about to boot and I interrupt the boot countdown:

    CentOS (2.6.32-279.1.1.el63.i686)
    CentOS Linux (2.6.32-71.29.1.el6.i686)
    centos (2.6.32-71.el6.i686)

I think the first configuration is the default one, because choosing it gets me to the above-mentioned kernel panic. The other ones end with something like "Sleeping forever". I can press 'e' to edit boot commands, 'a' to modify kernel arguments, and 'c' for a grub command line. The command line gives a grub prompt, but I have no idea how to get the system to boot without (trying to) access the dirty partitions. What I want to do is of course:

- boot the machine
- check the hard drive for errors
- mark the drive as clean
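For reference, a hedged sketch of the usual way out of this (device names are hypothetical, and this is untested against this exact machine): the md driver has a boot parameter that allows a dirty degraded array to start, which can be appended to the kernel arguments from the grub menu (press 'a'):

    md-mod.start_dirty_degraded=1

Alternatively, from any rescue environment, the array can usually be force-assembled and left to resync:

    # Force-assemble the dirty, degraded array from its member partitions
    mdadm --stop /dev/md2
    mdadm --assemble --force --run /dev/md2 /dev/sda3 /dev/sdb3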

    Read the article

  • Scheduling thread tiles with C++ AMP

    - by Daniel Moth
This post assumes you are totally comfortable with, what some of us call, the simple model of C++ AMP, i.e. you could write your own matrix multiplication. We are now ready to explore the tiled model, which builds on top of the non-tiled one.

Tiling the extent

We know that when we pass a grid (which is just an extent under the covers) to the parallel_for_each call, it determines the number of threads to schedule and their index values (including dimensionality). For the single-, two-, and three-dimensional cases you can go a step further and subdivide the threads into what we call tiles of threads (others may call them thread groups). So here is a single-dimensional example:

    extent<1> e(20);   // 20 units in a single dimension with indices from 0-19
    grid<1> g(e);      // same as extent
    tiled_grid<4> tg = g.tile<4>();

…on the 3rd line we subdivided the single-dimensional space into 5 single-dimensional tiles each having 4 elements, and we captured that result in a concurrency::tiled_grid (a new class in amp.h).

Let's move on swiftly to another example, in pictures, this time 2-dimensional. We start on the left with a grid of a 2-dimensional extent which has 8*6=48 threads. We then have two different examples of tiling. In the first case, in the middle, we subdivide the 48 threads into tiles where each has 4*3=12 threads, hence we have 2*2=4 tiles. In the second example, on the right, we subdivide the original input into tiles where each has 2*2=4 threads, hence we have 4*3=12 tiles. Notice how you can play with the tile size and achieve a different number of tiles. The numbers you pick must be such that the original total number of threads (in our example 48) remains the same, and every tile must have the same size. Of course, you still have no clue why you would do that, but stick with me. First, we should see how we can use this tiled_grid, since the parallel_for_each function that we know expects a grid.

Tiled parallel_for_each and tiled_index

It turns out that we have additional overloads of parallel_for_each that accept a tiled_grid instead of a grid. However, those overloads also expect that the lambda you pass in accepts a concurrency::tiled_index (new in amp.h), not an index<N>. So how is a tiled_index different to an index? A tiled_index object can have only 1 or 2 or 3 dimensions (matching exactly the tiled_grid), and consists of 4 index objects that are accessible via properties: global, local, tile_origin, and tile. The global index is the same as the index we know and love: the global thread ID. The local index is the local thread ID within the tile. The tile_origin index returns the global index of the thread that is at position 0,0 of this tile, and the tile index is the position of the tile in relation to the overall grid. Confused? Here is an example accompanied by a picture that hopefully clarifies things:

    array_view<int, 2> data(8, 6, p_my_data);
    parallel_for_each(data.grid.tile<2,2>(), [=] (tiled_index<2,2> t_idx) restrict(direct3d)
    {
        /* todo */
    });

Given the code above and the picture on the right, what are the values of each of the 4 index objects that the t_idx variable exposes when the lambda is executed by T (highlighted in the picture on the right)? If you can't work it out yourselves, the solution follows:

    t_idx.global      = index<2>(6,3)
    t_idx.local       = index<2>(0,1)
    t_idx.tile_origin = index<2>(6,2)
    t_idx.tile        = index<2>(3,1)

Don't move on until you are comfortable with this… the picture really helps, so use it.
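If it helps, here is my own summary (not from the original post) of how those four properties relate, componentwise, for the 2x2 tiling of the 8x6 grid above:

    // Taking thread T at global index (6,3) with tile dimensions <2,2>:
    //   tile        = global / tile_dims   -> (6/2, 3/2) = (3,1)  which tile
    //   local       = global % tile_dims   -> (6%2, 3%2) = (0,1)  position within the tile
    //   tile_origin = tile * tile_dims     -> (3*2, 1*2) = (6,2)  global index of the tile's (0,0)
    //   global      = tile_origin + local  -> (6+0, 2+1) = (6,3)  back where we started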
Tiled Matrix Multiplication Example – part 1

Let's paste here the C++ AMP matrix multiplication example, bolding the lines we are going to change (can you guess what the changes will be?):

    01: void MatrixMultiplyTiled_Part1(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
    02: {
    03:
    04:   array_view<const float,2> a(M, W, vA);
    05:   array_view<const float,2> b(W, N, vB);
    06:   array_view<writeonly<float>,2> c(M, N, vC);
    07:   parallel_for_each(c.grid,
    08:     [=](index<2> idx) restrict(direct3d) {
    09:
    10:       int row = idx[0]; int col = idx[1];
    11:       float sum = 0.0f;
    12:       for(int i = 0; i < W; i++)
    13:         sum += a(row, i) * b(i, col);
    14:       c[idx] = sum;
    15:     });
    16: }

To turn this into a tiled example, first we need to decide our tile size. Let's say we want each tile to be 16*16 (which assumes that we'll have at least 256 threads to process, that c.grid.extent.size() is divisible by 256, and moreover that c.grid.extent[0] and c.grid.extent[1] are divisible by 16). So we insert at line 03 the tile size (which must be a compile-time constant):

    03: static const int TS = 16;

...then we need to tile the grid to have tiles where each one has 16*16 threads, so we change line 07 to be as follows:

    07: parallel_for_each(c.grid.tile<TS,TS>(),

...that means that our index now has to be a tiled_index with the same characteristics as the tiled_grid, so we change line 08:

    08: [=](tiled_index<TS, TS> t_idx) restrict(direct3d) {

...which means, without changing our core algorithm, we need to be using the global index that the tiled_index gives us access to, so we insert line 09 as follows:

    09: index<2> idx = t_idx.global;

...and now this code just works and it is tiled! (The fully assembled listing appears at the end of this post.)

Closing thoughts on part 1

The process we followed just shows the mechanical transformation that can take place from the simple model to the tiled model (think of this as step 1). In fact, when we wrote the matrix multiplication example originally, the compiler was doing this mechanical transformation under the covers for us (and it has additional smarts to deal with the cases where the total number of threads scheduled cannot be divided evenly by the tile size). The point is that the thread scheduling is always tiled, even when you use the non-tiled model. But with this mechanical transformation, we haven't gained anything… Hint: our goal with explicitly using the tiled model is to gain even more performance. In the next post, we'll evolve this further (beyond what the compiler can automatically do for us, in this first release), so you can see the full usage of the tiled model and its benefits…

Comments about this post by Daniel Moth welcome at the original blog.
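For reference, here is the listing with the four edits above applied – a straight assembly of the changes described in this post, nothing more:

    void MatrixMultiplyTiled_Part1(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
    {
      static const int TS = 16;                               // inserted line 03
      array_view<const float,2> a(M, W, vA);
      array_view<const float,2> b(W, N, vB);
      array_view<writeonly<float>,2> c(M, N, vC);
      parallel_for_each(c.grid.tile<TS,TS>(),                 // changed line 07
        [=](tiled_index<TS, TS> t_idx) restrict(direct3d) {   // changed line 08
          index<2> idx = t_idx.global;                        // inserted line 09
          int row = idx[0]; int col = idx[1];
          float sum = 0.0f;
          for(int i = 0; i < W; i++)
            sum += a(row, i) * b(i, col);
          c[idx] = sum;
        });
    }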

    Read the article

  • Disaster, or Migration?

    - by Rob Farley
This post is in two parts – technical and personal. And I should point out that it's prompted in part by this month's T-SQL Tuesday, hosted by Allen Kinsel.

First, the technical:

I've had a few conversations with people recently about migration – moving a SQL Server database from one box to another (sometimes, but not primarily, involving an upgrade). One question that tends to come up is that of downtime. Obviously there will be some period of time between the old server being available and the new one. The way that most people seem to think of migration is this:

- Build a new server.
- Stop people from using the old server.
- Take a backup of the old server.
- Restore it on the new server.
- Reconfigure the client applications (or alternatively, configure the new server to use the same address as the old).
- Bring the new server online.

There are other things involved, such as testing, of course. But this is essentially the process that people tell me they're planning to follow. The bit that I want to look at today (as you've probably guessed from my title) is the "backup and restore" section.

If a SQL database is using the Simple Recovery Model, then the only restore option is the last database backup. This backup could be full or differential. The transaction log never gets backed up in the Simple Recovery Model. Instead, it truncates regularly to stay small. One that's using the Full Recovery Model (or Bulk-Logged) won't truncate its log – the log must be backed up regularly. This provides the benefit of having a lot more options available for restores. It's a requirement for most systems of High Availability, because if you're making sure that a spare box is up-and-running, ready to take over, then you have to be interested in the logs that are happening on the current box, rather than truncating them all the time.

A High Availability system such as Mirroring, Replication or Log Shipping will initialise the spare machine by restoring a full database backup (and maybe a differential backup if available), and then any subsequent log backups. Once the secondary copy is close, transactions can be applied to keep the two in sync. The main aspect of any High Availability system is to have a redundant system that is ready to take over.

So the similarity for migration should be obvious. If you need to move a database from one box to another, then introducing a High Availability mechanism can help. By turning on the Full Recovery Model and then taking a backup (so that the now-interesting logs have some context), logs start being kept, and are therefore available for getting the new box ready (even if it's an upgraded version). When the migration is ready to occur, a failover can be done, letting the new server take over the responsibility of the old, just as if a disaster had happened. Except that this is a planned failover, not a disaster at all.

There's a fine line between a disaster and a migration. Failovers can be useful in patching, upgrading, maintenance, and more. Hopefully, even an unexpected disaster can be seen as just another failover, and there can be an opportunity there – perhaps to get some work done on the principal server to increase robustness. And if I've just set up a High Availability system for even the simplest of databases, it's not necessarily a bad thing. :)
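To make the backup/restore flavour of this concrete, here is a minimal T-SQL sketch (database and path names are hypothetical; Log Shipping and friends automate variations of this same sequence):

    -- On the old server: switch to full recovery, then anchor the log chain
    ALTER DATABASE MyDb SET RECOVERY FULL;
    BACKUP DATABASE MyDb TO DISK = N'\\backupshare\MyDb_full.bak';

    -- While users keep working, ship log backups across at regular intervals
    BACKUP LOG MyDb TO DISK = N'\\backupshare\MyDb_log_001.trn';

    -- On the new server: restore WITH NORECOVERY so later logs can still be applied
    RESTORE DATABASE MyDb FROM DISK = N'\\backupshare\MyDb_full.bak' WITH NORECOVERY;
    RESTORE LOG MyDb FROM DISK = N'\\backupshare\MyDb_log_001.trn' WITH NORECOVERY;

    -- At cutover: stop users, take one final log backup on the old server,
    -- restore it WITH NORECOVERY, then bring the new copy online:
    RESTORE DATABASE MyDb WITH RECOVERY;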
So now the personal:

It's been an interesting time recently... June has been somewhat odd. A court case with which I was involved got resolved (through mediation). I can't go into details, but my lawyers tell me that I'm allowed to say how I feel about it. The answer is 'lousy'. I don't regret pursuing it as long as I did – but in the end I had to make a decision regarding the commerciality of letting it continue, and I'm going to look forward to the days when the kind of money I spent on my lawyers is small change. Mind you, if I had a similar situation with an employer, I'd do the same again, but that doesn't really stop me feeling frustrated about it.

The following day I had to fly to country Victoria to see my grandmother, who wasn't expected to last the weekend. She's still around a week later as I write this, but her 92-year-old body has basically given up on her. She's been a Christian all her life, and is looking forward to eternity. We'll all miss her though, and it's hard to see my family grieving.

Then on Tuesday, I was driving back to the airport with my family to come home, when something really bizarre happened. We were travelling down the freeway, just pulled out to go past a truck (farm-truck sized, not a semi-trailer), when a car-sized mass of metal fell off it. It was something like an industrial air-conditioner, but from where I was sitting, it was just a mass of spinning metal, like something out of a movie (one friend described it as "holidays by Michael Bay"). Somehow, and I really don't know how, the part of it nearest us bounced high enough to clear the car, and there wasn't even a scratch. We pulled over to check, and I was just thanking God that we'd changed lanes when we had, and that we remained unharmed. I had all kinds of thoughts about what could've happened if we'd had something that size land on the windscreen...

All this has drilled home that while I feel that I haven't provided as well for the family as I could've done (like by pursuing an expensive legal case), I shouldn't even consider that I have proper control over things. I get to live life, and make decisions based on what I feel is right at the time. But I'm not going to get everything right, and there will be things that feel like disasters, some which could've been in my control and some which are very much beyond my control. The case feels like something I could've pursued differently, a disaster that could've been avoided in some way. Gran dying is lousy of course. An accident on the freeway would have been awful.

I need to recognise that the worst disasters are ones that I can't affect, and that I need to look at things in context – perhaps seeing everything that happens as a migration instead. Life is never the same from one day to the next. Every event has a before and an after – sometimes it's clearly positive, sometimes it's not. I remember good events in my life (such as my wedding), and bad (such as the loss of my father when I was ten, or the back injury I had eight years ago). I'm not suggesting that I know how to view everything from the "God works all things for good" perspective, but I am trying to look at last week as a migration of sorts. Those things are behind me now, and the future is in God's hands. Hopefully I've learned things, and will be able to live accordingly. I've come through this time now, and even though I'll miss Gran, I'll see her again one day, and the future is bright.

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
PLEASE READ UPDATE AT THE BOTTOM. THANKS! ;)

Environment Info (all Windows):
- 2 sites
- 30 servers at site #1 (3TB of backup data)
- 5 servers at site #2 (1TB of backup data)
- MPLS backbone tunnel connecting site #1 and site #2

Current Backup Process:

Online Backup (disk-to-disk): Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks.

Off-site Backup (tape): Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, which gets picked up by our off-site storage company. Obviously we rotate two tape libraries; one is always here and one is always there.

Requirements:
- Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa.
- Software-based solution, as hardware options have been too pricey (e.g., SonicWall, Arkeia).
- Agents for Exchange, SharePoint, and SQL.

Some Ideas So Far:

Storage: DroboPro at each site with an initial 8TB of storage (these are expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too.

Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication.

Server: Because there is no more need for a SCSI adapter (for the tape drive), we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.

Problems: When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this, each of those huge files will go across the wire every week, because they change every day.

And Finally, the Question: Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long.

UPDATE: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. Prefer a GUI-based product, and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists, or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone.

UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out and whoever has the most votes will get the 100 rep points. Thanks again!

    Read the article

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one LVM physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both home and root filesystems show similar problems. Checking a backup superblock doesn't help.

    e2fsck 1.41.12 (17-May-2010)
    lithe_root was not cleanly unmounted, check forced.
    Resize inode not valid.  Recreate? no
    Pass 1: Checking inodes, blocks, and sizes
    Root inode is not a directory.  Clear? no
    Root inode has dtime set (probably due to old mke2fs).  Fix? no
    Inode 2 is in use, but has dtime set.  Fix? no
    Inode 2 has a extra size (4730) which is invalid
    Fix? no
    Inode 2 has compression flag set on filesystem without compression support.  Clear? no
    Inode 2 has INDEX_FL flag set but is not a directory.
    Clear HTree index? no
    HTREE directory inode 2 has an invalid root node.
    Clear HTree index? no
    Inode 2, i_size is 9581392125871137995, should be 0.  Fix? no
    Inode 2, i_blocks is 40456527802719, should be 0.  Fix? no
    Reserved inode 3 (<The ACL index inode>) has invalid mode.  Clear? no
    Inode 3 has compression flag set on filesystem without compression support.  Clear? no
    Inode 3 has INDEX_FL flag set but is not a directory.
    Clear HTree index? no
    ....

Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, and neither does the kernel log. Running a write-mode badblocks across the swap LV doesn't find problems either. So the disk may be failing, but not in an obvious way.

At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug...

I don't recall that this machine crashed last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?
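In case it helps anyone in a similar spot, here is a hedged sketch of the "save an image before rebuilding" step mentioned above (device and mount-point names are hypothetical):

    # Image the opened (unencrypted) LVM volume before experimenting further
    dd if=/dev/mapper/vg-root of=/mnt/usb/root.img bs=4M conv=noerror,sync

    # Destructive write-mode surface test, safe only on the swap LV (wipes it)
    badblocks -wsv /dev/mapper/vg-swap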

    Read the article

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform. Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform. In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud. But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.

In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes. But as a developer, it can still be difficult to find systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated. We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones). Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them. Sounds like pretty much every traditional IT shop, right? Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing. As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to both provide more (and more efficient) development and test resources for the organization as well as a test environment for Solaris OpenStack.

At this point, let's acknowledge one fact: deploying OpenStack is hard. It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files. The web UI, Horizon, doesn't often do a good job of providing detailed errors. Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps. I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started, so I at least came to this job with some appreciation for what I was taking on. The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment. I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment. For some situations, it may in fact be all you ever need. If so, you don't need to read the rest of this series of posts!
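As a concrete example of that debug messaging (the endpoint and credentials here are hypothetical), the nova command-line client will dump every REST request and response it makes, JSON payloads included, when --debug is enabled:

    export OS_AUTH_URL=http://controller.example.com:5000/v2.0
    export OS_USERNAME=demo
    export OS_PASSWORD=secret
    export OS_TENANT_NAME=engineering

    # --debug prints each HTTP request/response the client sends and receives
    nova --debug list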
In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide. We need to support both SPARC and x86 VM's, and we have hundreds of developers, so we want to be able to scale to support thousands of VM's, though we're going to build to that scale over time, not immediately. We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development, so that we can work out any upgrade issues before release. One thing we don't have is a requirement for extremely high availability, at least at this point. We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones. Thus I didn't need to spend effort on trying to get high availability everywhere.

The diagram below shows our initial deployment design. We're using six systems, most of which are x86 because we had more of those immediately available. All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there). A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks.

One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler as well as the Horizon console. We're curious how this will perform and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.

I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack. The other node with lots of disks provides the Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.

There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests. We don't have any need for firewalling in this deployment so we're not doing so. We presently have only two tenants defined: one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris. Each tenant has one VxLAN defined initially, but we can of course add more. Right now we have just a single /24 network for the floating IP's; once we get demand up to where we need more, then we'll add them.

Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2. We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.

My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes. After that we'll get into the specifics of configuring and running OpenStack itself.

    Read the article

  • Feedback on meeting of the Linux User Group of Mauritius

Once upon a time in a country far far away... Okay, actually it's not that bad, but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past but nothing ever really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates, and a small attendee survey on the preferred option. All the pre-work was nicely executed.

At first, I wasn't sure whether it would be possible to attend. Luckily I got some additional information, like that children could come too, and I was sold on this community gathering. According to other long-term members of the LUGM it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great!

[Photo: LUGM user group meeting on 15.06.2013 in L'Escalier]

Quick overview of Linux & the LUGM

With a little bit of delay the LUGM meeting officially started with a quick overview and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity over the island some years ago, but unfortunately it had been quiet in recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have their back-end infrastructure running on Red Hat Linux systems, but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office.

Following the introduction of the LUGM, Ajay joined the session and it quickly changed into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience either on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendees' opinions and activities, especially considering that I've been using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune.

OpenELEC Mediacenter

Next, Selven gave us a decent introduction to OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes.

I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR) as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave back in Germany years ago. It was a great piece of hardware and software: a self-built PC in a standard HiFi-sized (43cm) black desktop casing with two full-featured Hauppauge DVB-S cards, an old-fashioned Voodoo graphics card, a WiFi card, a Pioneer slot-in DVD drive, and fully remote controlled via infra-red thanks to Debian, VDR and LIRC. With an EPG, scheduled recordings and general multimedia-centre duties it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future.
[Photo: LUGM - our next generation of Linux users (15.06.2013)]

Project Evil Genius (PEG)

Don't be scared by the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius; it's because of the time of day when he was scripting down his ideas to be able to build, package and provide software applications to various Linux distributions. The main influence came from openSUSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experiences he has had thanks to other Linux users from all over the world. Ish promised to put his script on GitHub within the next couple of days... Looking forward to that. Check out Ish's personal blog over at hacklog.in - highly recommended reading. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times less than for a Mauritian domain (.mu).

[Photo: Exploring the beach of L'Escalier after the meeting]

'After-party' at the beach of L'Escalier

Phew, after such interesting sessions, ideas around Linux, and good conversation during the breaks and over lunch, it was time for a little break-out. Selven suggested that we all head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields, and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic.

Resumé & outlook

It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well chosen, with enough space for everyone, parking spaces, and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and preparation of lunch. I'm kind of sure that this was an exceptional meeting of the LUGM and I'm really looking forward to the next gathering of Linux geeks. Hopefully, soon.

All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

< Previous Page | 104 105 106 107 108 109 110 111 112 113  | Next Page >