Search Results

Search found 7637 results on 306 pages for 'cd drives'.


  • Why does ffmpeg stop randomly in the middle of a process?

    - by acidzombie24
    ffmpeg seems to be taking a long time. I then look at my output file and I see it stops between 6 and 8 MB. A fully encoded file is about 14 MB. Why does ffmpeg stop? My code locks up on StandardOutput.ReadToEnd();. I had to kill the process (after seeing it make no progress for more than 10 seconds, when previously I saw it update every second); then I get the results of stdout and stderr. stdout is "", stderr is below. The output message shows where the file size ended. I also see a drop in my CPU usage when it stops. I copied the argument from Visual Studio, cd'd to the same working directory, ran the command (bin/ffmpeg) with the pasted argument, and it was able to complete.

        int soundProcess(string infn, string outfn) {
            string aa, aa2;
            aa = aa2 = "DEAD";
            var app = new Process();
            app.StartInfo.UseShellExecute = false;
            app.StartInfo.RedirectStandardOutput = true;
            app.StartInfo.RedirectStandardError = true; //*/
            app.StartInfo.FileName = @"bin\ffmpeg.exe";
            app.StartInfo.Arguments = string.Format(@"-i ""{0}"" -ab 192k -y {2} ""{1}""", infn, outfn, param);
            app.Start();
            try {
                app.PriorityClass = ProcessPriorityClass.BelowNormal;
            } catch (Exception ex) {
                if (!Regex.IsMatch(ex.Message, @"Cannot process request because the process .*has exited"))
                    throw ex;
            }
            aa = app.StandardOutput.ReadToEnd();
            aa2 = app.StandardError.ReadToEnd();
            app.WaitForExit();
            if (aa2.IndexOf("could not find codec parameters") != -1) return 1;
            else if (aa == "DEAD" || aa2 == "DEAD") return -1;
            else if (aa2.Length != 0) return -2;
            else return 0;
        }

    The output of stderr (stdout is empty):

        FFmpeg version SVN-r15815, Copyright (c) 2000-2008 Fabrice Bellard, et al.
        configuration: --enable-memalign-hack --enable-postproc --enable-swscale --enable-gpl --enable-libfaac --enable-libfaad --enable-libgsm --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libx264 --enable-libxvid --disable-ffserver --disable-vhook --enable-avisynth --enable-pthreads
        libavutil 49.12. 0 / 49.12. 0
        libavcodec 52. 3. 0 / 52. 3. 0
        libavformat 52.23. 1 / 52.23. 1
        libavdevice 52. 1. 0 / 52. 1. 0
        libswscale 0. 6. 1 / 0. 6. 1
        libpostproc 51. 2. 0 / 51. 2.
0 built on Nov 13 2008 10:28:29, gcc: 4.2.4 (TDM-1 for MinGW) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\dev\src\trunk\prjname\prjname\App_Data/temp/m/o/6304266424778814852': Duration: 00:12:53.36, start: 0.000000, bitrate: 154 kb/s Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16 Output #0, ipod, to 'C:\dev\src\trunk\prjname\prjname\App_Data\temp\m\o\2.m4a': Stream #0.0(und): Audio: libfaac, 44100 Hz, stereo, s16, 192 kb/s Stream mapping: Stream #0.0 -> #0.0 Press [q] to stop encoding size= 87kB time=4.74 bitrate= 150.7kbits/s size= 168kB time=9.06 bitrate= 151.9kbits/s size= 265kB time=14.28 bitrate= 151.8kbits/s size= 377kB time=20.29 bitrate= 152.1kbits/s size= 487kB time=26.22 bitrate= 152.1kbits/s size= 594kB time=32.02 bitrate= 152.1kbits/s size= 699kB time=37.64 bitrate= 152.1kbits/s size= 808kB time=43.54 bitrate= 152.0kbits/s size= 930kB time=50.09 bitrate= 152.2kbits/s size= 1058kB time=57.05 bitrate= 152.0kbits/s size= 1193kB time=64.23 bitrate= 152.1kbits/s size= 1329kB time=71.63 bitrate= 152.0kbits/s size= 1450kB time=78.16 bitrate= 152.0kbits/s size= 1578kB time=85.05 bitrate= 152.0kbits/s size= 1706kB time=92.00 bitrate= 152.0kbits/s size= 1836kB time=98.94 bitrate= 152.0kbits/s size= 1971kB time=106.25 bitrate= 151.9kbits/s size= 2107kB time=113.57 bitrate= 152.0kbits/s size= 2214kB time=119.33 bitrate= 152.0kbits/s size= 2345kB time=126.39 bitrate= 152.0kbits/s size= 2479kB time=133.56 bitrate= 152.0kbits/s size= 2611kB time=140.76 bitrate= 152.0kbits/s size= 2745kB time=147.91 bitrate= 152.1kbits/s size= 2880kB time=155.20 bitrate= 152.0kbits/s size= 3013kB time=162.40 bitrate= 152.0kbits/s size= 3146kB time=169.58 bitrate= 152.0kbits/s size= 3277kB time=176.61 bitrate= 152.0kbits/s size= 3412kB time=183.90 bitrate= 152.0kbits/s size= 3540kB time=190.80 bitrate= 152.0kbits/s size= 3670kB time=197.81 bitrate= 152.0kbits/s size= 3805kB time=205.08 bitrate= 152.0kbits/s size= 3932kB time=211.93 bitrate= 152.0kbits/s size= 4052kB time=218.38 bitrate= 152.0kbits/s size= 4171kB time=224.82 bitrate= 152.0kbits/s size= 4277kB time=230.55 bitrate= 152.0kbits/s size= 4378kB time=235.96 bitrate= 152.0kbits/s size= 4486kB time=241.79 bitrate= 152.0kbits/s size= 4592kB time=247.50 bitrate= 152.0kbits/s size= 4698kB time=253.21 bitrate= 152.0kbits/s size= 4804kB time=258.95 bitrate= 152.0kbits/s size= 4906kB time=264.41 bitrate= 152.0kbits/s size= 5012kB time=270.09 bitrate= 152.0kbits/s size= 5118kB time=275.85 bitrate= 152.0kbits/s size= 5234kB time=282.10 bitrate= 152.0kbits/s size= 5331kB time=287.39 bitrate= 151.9kbits/s size= 5445kB time=293.55 bitrate= 152.0kbits/s size= 5555kB time=299.40 bitrate= 152.0kbits/s size= 5665kB time=305.37 bitrate= 152.0kbits/s size= 5766kB time=310.80 bitrate= 152.0kbits/s size= 5876kB time=316.70 bitrate= 152.0kbits/s size= 5984kB time=322.50 bitrate= 152.0kbits/s size= 6094kB time=328.49 bitrate= 152.0kbits/s size= 6212kB time=334.76 bitrate= 152.0kbits/s size= 6327kB time=340.99 bitrate= 152.0kbits/s
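
    This lockup is consistent with the well-known deadlock that Process exhibits when both StandardOutput and StandardError are redirected and read synchronously: the parent blocks in ReadToEnd() on one stream while ffmpeg blocks writing to the other, already-full pipe. A minimal sketch of the usual fix, draining both streams asynchronously (the ffmpeg path and argument handling follow the question's code; the class and method names are illustrative):

        // Sketch: read stdout and stderr via events so neither pipe can fill
        // up and stall the child process. Not a drop-in replacement.
        using System.Diagnostics;
        using System.Text;

        static class FfmpegRunner
        {
            public static int Run(string arguments, out string stdout, out string stderr)
            {
                var outBuf = new StringBuilder();
                var errBuf = new StringBuilder();
                using (var app = new Process())
                {
                    app.StartInfo.FileName = @"bin\ffmpeg.exe";
                    app.StartInfo.Arguments = arguments;
                    app.StartInfo.UseShellExecute = false;
                    app.StartInfo.RedirectStandardOutput = true;
                    app.StartInfo.RedirectStandardError = true;
                    // Handlers receive output line by line as it arrives.
                    app.OutputDataReceived += (s, e) => { if (e.Data != null) outBuf.AppendLine(e.Data); };
                    app.ErrorDataReceived += (s, e) => { if (e.Data != null) errBuf.AppendLine(e.Data); };
                    app.Start();
                    app.BeginOutputReadLine();  // start async reads right after Start()
                    app.BeginErrorReadLine();
                    app.WaitForExit();
                    stdout = outBuf.ToString();
                    stderr = errBuf.ToString();
                    return app.ExitCode;
                }
            }
        }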

  • Embedding mercurial revision information in Visual Studio c# projects automatically

    - by Mark Booth
    Original Problem: In building our projects, I want the mercurial id of each repository to be embedded within the product(s) of that repository (the library, application or test application). I find it makes it so much easier to debug an application being run by customers 8 timezones away if you know precisely what went into building the particular version of the application they are using. As such, every project (application or library) in our systems implements a way of getting at the associated revision information. I also find it very useful to be able to see if an application has been compiled with clean (un-modified) changesets from the repository. 'Hg id' usefully appends a + to the changeset id when there are uncommitted changes in a repository, so this allows us to easily see if people are running a clean or a modified version of the code. My current solution is detailed below, and fulfills the basic requirements, but there are a number of problems with it.

    Current Solution: At the moment, to each and every Visual Studio solution, I add the following "Pre-build event command line" commands:

        cd $(ProjectDir)
        HgID

    I also add an HgID.bat file to the Project directory:

        @echo off
        type HgId.pre > HgId.cs
        For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cs set /p = @"%%a"
        echo ; >> HgId.cs
        echo } >> HgId.cs
        echo } >> HgId.cs

    along with an HgId.pre file, which is defined as:

        namespace My.Namespace {
            /// <summary> Auto generated Mercurial ID class. </summary>
            internal class HgID {
                /// <summary> Mercurial version ID [+ is modified] [Named branch]</summary>
                public const string Version =

    When I build my application, the pre-build event is triggered on all libraries, creating a new HgId.cs file (which is not kept under revision control) and causing the library to be re-compiled with the new 'hg id' string in 'Version'.

    Problems with the current solution: The main problem is that since HgId.cs is re-created at each pre-build, every time we need to compile anything, all projects in the current solution are re-compiled. Since we want to be able to easily debug into our libraries, we usually keep many libraries referenced in our main application solution. This can result in build times which are significantly longer than I would like. Ideally I would like the libraries to compile only if the contents of the HgId.cs file have actually changed, as opposed to having been re-created with exactly the same contents. The second problem with this method is its dependence on specific behaviour of the Windows shell. I've already had to modify the batch file several times, since the original worked under XP but not Vista, the next version worked under Vista but not XP, and finally I managed to make it work with both. Whether it will work with Windows 7, however, is anyone's guess, and as time goes on I see it more likely that contractors will expect to be able to build our apps on their Windows 7 boxen. Finally, I have an aesthetic problem with this solution: batch files and bodged-together template files feel like the wrong way to do this.

    My actual questions: How would you solve/how are you solving the problem I'm trying to solve? What better options are out there than what I'm currently doing?

    Rejected Solutions to these problems: Before I implemented the current solution, I looked at Mercurial's Keyword extension, since it seemed like the obvious solution. However, the more I looked at it and read people's opinions, the more I came to the conclusion that it wasn't the right thing to do. I also remember the problems that keyword substitution has caused me in projects at previous companies (just the thought of ever having to use Source Safe again fills me with a feeling of dread *8'). Also, I don't particularly want to have to enable Mercurial extensions to get the build to complete. I want the solution to be self-contained, so that it isn't easy for the application to be accidentally compiled without the embedded version information just because an extension isn't enabled or the right helper software hasn't been installed. I also thought of writing this in a better scripting language, one where I would only write the HgId.cs file if the content had actually changed, but all of the options I could think of would require my co-workers, contractors and possibly customers to install software they might not otherwise want (for example cygwin). Any other options people can think of would be appreciated.

    Update - Partial solution: Having played around with it for a while, I've managed to get the HgId.bat file to only overwrite the HgId.cs file if it changes:

        @echo off
        type HgId.pre > HgId.cst
        For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cst set /p = @"%%a"
        echo ; >> HgId.cst
        echo } >> HgId.cst
        echo } >> HgId.cst
        fc HgId.cs HgId.cst >NUL
        if %errorlevel%==0 goto :ok
        copy HgId.cst HgId.cs
        :ok
        del HgId.cst

    Problems with this solution: Even though HgId.cs is no longer being re-created every time, Visual Studio still insists on compiling everything every time. I've tried looking for solutions and tried checking "Only build startup projects and dependencies on Run" in Tools|Options|Projects and Solutions|Build and Run, but it makes no difference. The second problem also remains, and now I have no way to test whether it will work with Vista, since that contractor is no longer with us. If anyone can test this batch file on a Windows 7 and/or Vista box, I would appreciate hearing how it went. Finally, my aesthetic problem with this solution is even stronger than it was before, since the batch file is more complex and thus there is now more to go wrong. If you can think of any better solutions, I would love to hear about them.
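
    One self-contained direction (a sketch, not from the post: it keeps the post's HgId.pre/HgId.cs convention but replaces the batch logic) is a tiny C# console tool, compiled once and invoked from the pre-build event, that rewrites HgId.cs only when the hg id output differs from what is already on disk. The file's timestamp then only changes when the revision does; whether Visual Studio actually skips the rebuild in that case is exactly the open question in the update above.

        // Sketch of a pre-build helper replacing HgID.bat: regenerates HgId.cs
        // only when the embedded `hg id` string changes. File names follow the
        // post; error handling is deliberately minimal.
        using System.Diagnostics;
        using System.IO;

        class HgIdGen
        {
            static void Main()
            {
                var psi = new ProcessStartInfo("hg", "id")
                {
                    UseShellExecute = false,
                    RedirectStandardOutput = true
                };
                string id;
                using (var p = Process.Start(psi))
                {
                    id = p.StandardOutput.ReadToEnd().Trim();
                    p.WaitForExit();
                }

                // Assemble the same file the batch builds: template + id literal.
                string source = File.ReadAllText("HgId.pre") + "@\"" + id + "\";\n}\n}\n";

                // Only touch the file (and its timestamp) if the content changed.
                if (!File.Exists("HgId.cs") || File.ReadAllText("HgId.cs") != source)
                    File.WriteAllText("HgId.cs", source);
            }
        }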

  • Inconsistent results in R with RNetCDF - why?

    - by sarcozona
    I am having trouble extracting data from NetCDF data files using RNetCDF. The data files each have 3 dimensions (longitude, latitude, and a date) and 3 variables (latitude, longitude, and a climate variable). There are four datasets, each with a different climate variable. Here is some of the output from print.nc(p8m.tmax) for clarity. The other datasets are identical except for the specific climate variable.

        dimensions:
            month = UNLIMITED ; // (1368 currently)
            lat = 3105 ;
            lon = 7025 ;
        variables:
            float lat(lat) ;
                lat:long_name = "latitude" ;
                lat:standard_name = "latitude" ;
                lat:units = "degrees_north" ;
            float lon(lon) ;
                lon:long_name = "longitude" ;
                lon:standard_name = "longitude" ;
                lon:units = "degrees_east" ;
            short tmax(lon, lat, month) ;
                tmax:missing_value = -9999 ;
                tmax:_FillValue = -9999 ;
                tmax:units = "degree_celsius" ;
                tmax:scale_factor = 0.01 ;
                tmax:valid_min = -5000 ;
                tmax:valid_max = 6000 ;

    I am getting behavior I don't understand when I use the var.get.nc function from the RNetCDF package. For example, when I attempt to extract 82 values beginning at stval from the maximum temperature data (p8m.tmax <- open.nc(tmaxdataset.nc)) with

        > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval), count=c(1,1,82))

    (where lon_val and lat_val specify the location in the dataset of the coordinates I'm interested in, and stval is set to which(time_vec==200201), which in this case equaled 1285), I get

        Error: Invalid argument

    But after successfully extracting 80 and 81 values

        > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval), count=c(1,1,80))
        > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval), count=c(1,1,81))

    the command with 82 works:

        > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval), count=c(1,1,82))
        [1] 444 866 1063 ... [output snipped]

    The same problem occurs in the identically structured tmin file, but at 36 instead of 82:

        > var.get.nc(p8m.tmin,'tmin', start=c(lon_val, lat_val, stval), count=c(1,1,36))

    produces Error: Invalid argument, but after repeating with counts of 30, 31, etc.,

        > var.get.nc(p8m.tmin,'tmin', start=c(lon_val, lat_val, stval), count=c(1,1,36))

    works. These examples make it seem like the function is failing at the last count, but that actually isn't the case. In the first example, var.get.nc gave Error: Invalid argument after I asked for 84 values. I then narrowed the failure down to the 82nd count by varying the starting point in the dataset and asking for only 1 value at a time. The particular number the problem occurs at also varies: I can close and reopen the dataset and have the problem occur at a different location. In the particular examples above, lon_val and lat_val are 1595 and 1751, respectively, identifying the location in the dataset along the lat and lon dimensions for the latitude and longitude I'm interested in. The 1595th latitude and 1751st longitude are not the problem, however; the problem occurs with all other latitudes and longitudes I've tried. Varying the starting location in the dataset along the climate variable dimension (stval), and/or specifying it differently (as a number in the command instead of the object stval), also does not fix the problem. This problem doesn't always occur. I can run identical code three times in a row (clearing all objects in between runs) and get a different outcome each time. The first run may choke on the 7th entry I'm trying to get, the second might work fine, and the third run might choke on the 83rd entry. I'm absolutely baffled by such inconsistent behavior.
    The open.nc function has also started to fail with the same Error: Invalid argument. Like the var.get.nc problems, it occurs inconsistently. Does anyone know what causes the initial failure to extract the variable, and how I might prevent it? Could it have to do with the size of the data files (~60GB each) and/or the fact that I'm accessing them through networked drives? This was also asked here: https://stat.ethz.ch/pipermail/r-help/2011-June/281233.html

        > sessionInfo()
        R version 2.13.0 (2011-04-13)
        Platform: i386-pc-mingw32/i386 (32-bit)
        locale:
        [1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252
        [3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
        [5] LC_TIME=English_United States.1252
        attached base packages:
        [1] stats graphics grDevices utils datasets methods base
        other attached packages:
        [1] reshape_0.8.4 plyr_1.5.2 RNetCDF_1.5.2-2
        loaded via a namespace (and not attached):
        [1] tools_2.13.0
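
    This is no explanation for the failures themselves, but given that repeating an identical call usually succeeds, a blunt mitigation is to retry the read. A hedged sketch in R (the wrapper and its retry limit are assumptions of mine, wrapping the same var.get.nc call as above):

        # Sketch: retry wrapper for the flaky var.get.nc calls described above.
        # Purely a mitigation; the retry limit of 5 is arbitrary.
        get.nc.retry <- function(nc, var, start, count, retries = 5) {
          for (i in seq_len(retries)) {
            result <- tryCatch(
              var.get.nc(nc, var, start = start, count = count),
              error = function(e) NULL)
            if (!is.null(result)) return(result)
          }
          stop("var.get.nc failed after ", retries, " attempts")
        }

        # Usage, mirroring the post:
        # vals <- get.nc.retry(p8m.tmax, "tmax", c(lon_val, lat_val, stval), c(1, 1, 82))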

  • Why do load and SSD IOPS increase but CPU iowait decrease?

    - by mq44944
    There is a strange thing on my server which has a mysql running on it. The QPS is more than 4000 but TPS is less than 20. The server load is more than 80 and cpu usr is more than 86% but iowait is less than 8%. The disk iops is more than 16000 and util of disk is more than 99%. When the QPS decreases, the load decreases, the cpu iowait increases. I can't catch this! root@mypc # dmidecode | grep "Product Name" Product Name: PowerEdge R510 Product Name: 084YMW root@mypc # megacli -PDList -aALL |grep "Inquiry Data" Inquiry Data: SEAGATE ST3600057SS ES656SL316PT Inquiry Data: SEAGATE ST3600057SS ES656SL30THV Inquiry Data: ATA INTEL SSDSA2CW300362CVPR201602A6300EGN Inquiry Data: ATA INTEL SSDSA2CW300362CVPR2044037K300EGN Inquiry Data: ATA INTEL SSDSA2CW300362CVPR204402PX300EGN Inquiry Data: ATA INTEL SSDSA2CW300362CVPR204403WN300EGN Inquiry Data: ATA INTEL SSDSA2CW300362CVPR202000HU300EGN Inquiry Data: ATA INTEL SSDSA2CW300362CVPR202001E7300EGN Inquiry Data: ATA INTEL SSDSA2CW300362CVPR204402WE300EGN Inquiry Data: ATA INTEL SSDSA2CW300362CVPR204404E5300EGN Inquiry Data: ATA INTEL SSDSA2CW300362CVPR204401QF300EGN Inquiry Data: ATA INTEL SSDSA2CW300362CVPR20450001300EGN the mysql data files lie on the ssd disks which are organizaed using RAID 10. root@mypc # megacli -LDInfo -L1 -a0 Adapter 0 -- Virtual Drive Information: Virtual Disk: 1 (Target Id: 1) Name: RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0 Size:1427840MB State: Optimal Stripe Size: 64kB Number Of Drives:2 Span Depth:5 Default Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU Access Policy: Read/Write Disk Cache Policy: Disk's Default Exit Code: 0x00 -------- -----load-avg---- ---cpu-usage--- ---swap--- -------------------------io-usage----------------------- -QPS- -TPS- -Hit%- time | 1m 5m 15m |usr sys idl iow| si so| r/s w/s rkB/s wkB/s queue await svctm %util| ins upd del sel iud| lor hit| 09:05:29|79.80 64.49 42.00| 82 7 6 5| 0 0|16421.1 10.6262705.9 85.2 8.3 0.5 0.1 99.5| 0 0 0 3968 0| 495482 96.58| 09:05:30|79.80 64.49 42.00| 79 7 8 6| 0 0|15907.4 230.6254409.7 6357.5 8.4 0.5 0.1 98.5| 0 0 0 4195 0| 496434 96.68| 09:05:31|81.34 65.07 42.31| 81 7 7 5| 0 0|16198.7 8.6259029.2 99.8 8.1 0.5 0.1 99.3| 0 0 0 4220 0| 508983 96.70| 09:05:32|81.34 65.07 42.31| 82 7 5 5| 0 0|16746.6 8.7267853.3 92.4 8.5 0.5 0.1 99.4| 0 0 0 4084 0| 503834 96.54| 09:05:33|81.34 65.07 42.31| 81 7 6 5| 0 0|16498.7 9.6263856.8 92.3 8.0 0.5 0.1 99.3| 0 0 0 4030 0| 507051 96.60| 09:05:34|81.34 65.07 42.31| 80 8 7 6| 0 0|16328.4 11.5261101.6 95.8 8.1 0.5 0.1 98.3| 0 0 0 4119 0| 504409 96.63| 09:05:35|81.31 65.33 42.52| 82 7 6 5| 0 0|16374.0 8.7261921.9 92.5 8.1 0.5 0.1 99.7| 0 0 0 4127 0| 507279 96.66| 09:05:36|81.31 65.33 42.52| 81 8 6 5| 0 0|16496.2 8.6263832.0 84.5 8.5 0.5 0.1 99.2| 0 0 0 4100 0| 505054 96.59| 09:05:37|81.31 65.33 42.52| 82 8 6 4| 0 0|16239.4 9.6259768.8 84.3 8.0 0.5 0.1 99.1| 0 0 0 4273 0| 510621 96.72| 09:05:38|81.31 65.33 42.52| 81 7 6 5| 0 0|16349.6 8.7261439.2 81.4 8.2 0.5 0.1 98.9| 0 0 0 4171 0| 510145 96.67| 09:05:39|81.31 65.33 42.52| 82 7 6 5| 0 0|16116.8 8.7257667.6 96.5 8.0 0.5 0.1 99.1| 0 0 0 4348 0| 513093 96.74| 09:05:40|79.60 65.24 42.61| 79 7 7 7| 0 0|16154.2 242.9258390.4 6388.4 8.5 0.5 0.1 99.0| 0 0 0 4033 0| 507244 96.70| 09:05:41|79.60 65.24 42.61| 79 7 8 6| 0 0|16583.1 21.2265129.6 173.5 8.2 0.5 0.1 99.1| 0 0 0 3995 0| 501474 96.57| 09:05:42|79.60 65.24 42.61| 81 8 6 5| 0 0|16281.0 9.7260372.2 
69.5 8.3 0.5 0.1 98.7| 0 0 0 4221 0| 509322 96.70| 09:05:43|79.60 65.24 42.61| 80 7 7 6| 0 0|16355.3 8.7261515.5 104.3 8.2 0.5 0.1 99.6| 0 0 0 4087 0| 502052 96.62| -------- -----load-avg---- ---cpu-usage--- ---swap--- -------------------------io-usage----------------------- -QPS- -TPS- -Hit%- time | 1m 5m 15m |usr sys idl iow| si so| r/s w/s rkB/s wkB/s queue await svctm %util| ins upd del sel iud| lor hit| 09:05:44|79.60 65.24 42.61| 83 7 5 4| 0 0|16469.4 11.6263387.0 138.8 8.2 0.5 0.1 98.7| 0 0 0 4292 0| 509979 96.65| 09:05:45|79.07 65.37 42.77| 80 7 6 6| 0 0|16659.5 9.7266478.7 85.0 8.4 0.5 0.1 98.5| 0 0 0 3899 0| 496234 96.54| 09:05:46|79.07 65.37 42.77| 78 7 7 8| 0 0|16752.9 8.7267921.8 97.1 8.4 0.5 0.1 98.9| 0 0 0 4126 0| 508300 96.57| 09:05:47|79.07 65.37 42.77| 82 7 6 5| 0 0|16657.2 9.6266439.3 84.3 8.3 0.5 0.1 98.9| 0 0 0 4086 0| 502171 96.57| 09:05:48|79.07 65.37 42.77| 79 8 6 6| 0 0|16814.5 8.7268924.1 77.6 8.5 0.5 0.1 99.0| 0 0 0 4059 0| 499645 96.52| 09:05:49|79.07 65.37 42.77| 81 7 6 5| 0 0|16553.0 6.8264708.6 42.5 8.3 0.5 0.1 99.4| 0 0 0 4249 0| 501623 96.60| 09:05:50|79.63 65.71 43.01| 79 7 7 7| 0 0|16295.1 246.9260475.0 6442.4 8.7 0.5 0.1 99.1| 0 0 0 4231 0| 511032 96.70| 09:05:51|79.63 65.71 43.01| 80 7 6 6| 0 0|16568.9 8.7264919.7 104.7 8.3 0.5 0.1 99.7| 0 0 0 4272 0| 517177 96.68| 09:05:53|79.63 65.71 43.01| 79 7 7 6| 0 0|16539.0 8.6264502.9 87.6 8.4 0.5 0.1 98.9| 0 0 0 3992 0| 496728 96.52| 09:05:54|79.63 65.71 43.01| 79 7 7 7| 0 0|16527.5 11.6264363.6 92.6 8.5 0.5 0.1 98.8| 0 0 0 4045 0| 502944 96.59| 09:05:55|79.63 65.71 43.01| 80 7 7 6| 0 0|16374.7 12.5261687.2 134.9 8.6 0.5 0.1 99.2| 0 0 0 4143 0| 507006 96.66| 09:05:56|76.05 65.20 42.96| 77 8 8 8| 0 0|16464.9 9.6263314.3 111.9 8.5 0.5 0.1 98.9| 0 0 0 4250 0| 505417 96.64| 09:05:57|76.05 65.20 42.96| 79 7 6 7| 0 0|16460.1 8.8263283.2 93.4 8.3 0.5 0.1 98.8| 0 0 0 4294 0| 508168 96.66| 09:05:58|76.05 65.20 42.96| 80 7 7 7| 0 0|16176.5 9.6258762.1 127.3 8.3 0.5 0.1 98.9| 0 0 0 4160 0| 509349 96.72| 09:05:59|76.05 65.20 42.96| 75 7 9 10| 0 0|16522.0 10.7264274.6 93.1 8.6 0.5 0.1 97.5| 0 0 0 4034 0| 492623 96.51| -------- -----load-avg---- ---cpu-usage--- ---swap--- -------------------------io-usage----------------------- -QPS- -TPS- -Hit%- time | 1m 5m 15m |usr sys idl iow| si so| r/s w/s rkB/s wkB/s queue await svctm %util| ins upd del sel iud| lor hit| 09:06:00|76.05 65.20 42.96| 79 7 7 7| 0 0|16369.6 21.2261867.3 262.5 8.4 0.5 0.1 98.9| 0 0 0 4305 0| 494509 96.59| 09:06:01|75.33 65.23 43.09| 73 6 9 12| 0 0|15864.0 209.3253685.4 6238.0 10.0 0.6 0.1 98.7| 0 0 0 3913 0| 483480 96.62| 09:06:02|75.33 65.23 43.09| 73 7 8 12| 0 0|15854.7 12.7253613.2 93.6 11.0 0.7 0.1 99.0| 0 0 0 4271 0| 483771 96.64| 09:06:03|75.33 65.23 43.09| 75 7 9 9| 0 0|16074.8 8.7257104.3 81.7 8.1 0.5 0.1 98.5| 0 0 0 4060 0| 480701 96.55| 09:06:04|75.33 65.23 43.09| 76 7 8 9| 0 0|16221.7 9.7259500.1 139.4 8.1 0.5 0.1 97.6| 0 0 0 3953 0| 486774 96.56| 09:06:05|74.98 65.33 43.24| 78 7 8 8| 0 0|16330.7 8.7261166.5 85.3 8.2 0.5 0.1 98.5| 0 0 0 3957 0| 481775 96.53| 09:06:06|74.98 65.33 43.24| 75 7 9 9| 0 0|16093.7 11.7257436.1 93.7 8.2 0.5 0.1 99.2| 0 0 0 3938 0| 489251 96.60| 09:06:07|74.98 65.33 43.24| 75 7 5 13| 0 0|15758.9 19.2251989.4 188.2 14.7 0.9 0.1 99.7| 0 0 0 4140 0| 494738 96.70| 09:06:08|74.98 65.33 43.24| 69 7 10 15| 0 0|16166.3 8.7258474.9 81.2 8.9 0.5 0.1 98.7| 0 0 0 3993 0| 487162 96.58| 09:06:09|74.98 65.33 43.24| 74 7 9 10| 0 0|16071.0 8.7257010.9 93.3 8.2 0.5 0.1 99.2| 0 0 0 4098 0| 491557 96.61| 09:06:10|70.98 64.66 43.14| 71 7 9 
12| 0 0|15549.6 216.1248701.1 6188.7 8.3 0.5 0.1 97.8| 0 0 0 3879 0| 480832 96.66| 09:06:11|70.98 64.66 43.14| 71 7 10 13| 0 0|16233.7 22.4259568.1 257.1 8.2 0.5 0.1 99.2| 0 0 0 4088 0| 493200 96.62| 09:06:12|70.98 64.66 43.14| 78 7 8 7| 0 0|15932.4 10.6254779.5 108.1 8.1 0.5 0.1 98.6| 0 0 0 4168 0| 489838 96.63| 09:06:13|70.98 64.66 43.14| 71 8 9 12| 0 0|16255.9 11.5259902.3 103.9 8.3 0.5 0.1 98.0| 0 0 0 3874 0| 481246 96.52| 09:06:14|70.98 64.66 43.14| 60 6 16 18| 0 0|15621.0 9.7249826.1 81.9 8.0 0.5 0.1 99.3| 0 0 0 3956 0| 480278 96.65|
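
    Part of the answer is an accounting detail: iowait is only counted while a CPU would otherwise be idle with I/O outstanding. Once user-space work saturates the cores (usr above 86% here), time that would have been charged to iowait is absorbed into usr/sys, so iowait falls as load rises even though the disks stay near 99% util. A hedged sketch of the usual per-CPU and per-process follow-up views (sysstat tools, assumed to be installed):

        # Per-CPU breakdown: look for individual cores saturated in %usr or %soft
        mpstat -P ALL 1
        # Per-device latency and utilisation, to compare against the table above
        iostat -x 1
        # Which processes are actually issuing the I/O
        pidstat -d 1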

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server also running Windows Server 2003 that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes.

    Observations: When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen. There is absolutely zero useful Event Viewer evidence of a problem: the logs go from routine logging to an Unexplained Shutdown Event the next morning when I have to hard reset the server to get it to boot. 90% of the time the server does not boot cleanly; it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs all I can do is hard reset it and try again. Even after a successful boot and chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't come back up cleanly.

    The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up and switched all my users to the temporary. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour every day for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure-to-boot-cleanly behavior.

    This weekend I decided to monitor the file server through the entire backup job. I RDP'd into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was still getting real-time updates about CPU and memory usage - both nearly 0%, which is unusual; backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: the screen was refreshing and I was getting real-time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up.
    In truth, I think the server had already locked up; the video card just hadn't figured it out yet. I went back into my bag of tricks: driving to the office and hard resetting the server over and over again when it hangs at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I had exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid that as soon as I try to back that data up again I will be back at square one.

    So let me sum things up. Here is what I've done so far to troubleshoot this server:

    - Deleted and recreated the RAID 5 sets. Initialized the drives.
    - Reloaded the server with a fresh Server 2003 install.
    - Confirmed with Dell that I have installed the latest, Dell approved BIOS and NIC drivers.
    - Uninstalled / reinstalled the Backup Exec Remote Agent.
    - Uninstalled the Trend Micro A/V client.
    - Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up.
    - Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem.

    Help confirm or deny the following assumptions:

    - There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup.
    - This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation.
    - This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.

    Any help is appreciated. The irony is almost too much to bear: backing up my data is what is jeopardizing it.

  • problems mounting an external IDE drive via USB in ubuntu

    - by Roy Rico
    I am having a problem connecting a specific IDE drive to my linux box. It's an old drive from which I just want to get about 3 GB of files.

    INFO: I am trying to connect a 200GB IDE Maxtor Drive, internally and externally.

    Externally: I am using a self-powered USB IDE external drive enclosure which I have used to connect various drives, under Ubuntu and Windows, in the past. The other posts stated this could be a problem: I think I may have formatted the /dev/sdc device instead of the /dev/sdc1 partition when I originally formatted the drive.

    Internally: I only have one machine left that has an internal IDE interface, and it's got XP on it. I plugged this drive internally into this machine with Windows XP and used the ext2/ext3 drivers to mount this drive, but some files have question marks (?) in the file names, which is messing up my copy process in Windows. I can't delete the files under Windows. Ubuntu Linux will not install on my only remaining machine that has an IDE controller.

    I have tried the suggestions in the questions below:

    http://superuser.com/questions/88182/mount-an-external-drive-in-ubuntu
    http://superuser.com/questions/23210/ubuntu-fails-to-mount-usb-drive

    It looks like I can see the drive in /proc/partitions:

        $ cat /proc/partitions
        major minor  #blocks  name
           8     0   78125000 sda
           8     1   74894998 sda1
           8     2          1 sda2
           8     5    3229033 sda5
           8    16  199148544 sdb   <-- could be my drive?

    but it's not listed under fdisk -l:

        $ fdisk -l
        Disk /dev/sda: 80.0 GB, 80000000000 bytes
        255 heads, 63 sectors/track, 9726 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xd0f4738c
        Device Boot Start End Blocks Id System
        /dev/sda1 * 1 9324 74894998+ 83 Linux
        /dev/sda2 9325 9726 3229065 5 Extended
        /dev/sda5 9325 9726 3229033+ 82 Linux swap / Solaris

    and here is my log from /var/log/messages, with a bunch of weird output; can someone let me know what that weird output is?

        Mar 3 19:49:40 mala kernel: [687455.112029] usb 1-7: new high speed USB device using ehci_hcd and address 3
        Mar 3 19:49:41 mala kernel: [687455.248576] usb 1-7: configuration #1 chosen from 1 choice
        Mar 3 19:49:41 mala kernel: [687455.267450] Initializing USB Mass Storage driver...
        Mar 3 19:49:41 mala kernel: [687455.269180] scsi4 : SCSI emulation for USB Mass Storage devices
        Mar 3 19:49:41 mala kernel: [687455.269410] usbcore: registered new interface driver usb-storage
        Mar 3 19:49:41 mala kernel: [687455.269416] USB Mass Storage support registered.
Mar 3 19:49:46 mala kernel: [687460.270917] scsi 4:0:0:0: Direct-Access Maxtor 6 Y200P0 YAR4 PQ: 0 ANSI: 2 Mar 3 19:49:46 mala kernel: [687460.271485] sd 4:0:0:0: Attached scsi generic sg2 type 0 Mar 3 19:49:46 mala kernel: [687460.278858] sd 4:0:0:0: [sdb] 398297088 512-byte logical blocks: (203 GB/189 GiB) Mar 3 19:49:46 mala kernel: [687460.280866] sd 4:0:0:0: [sdb] Write Protect is off Mar 3 19:50:16 mala kernel: [687460.283784] sdb: Mar 3 19:50:16 mala kernel: [687491.112020] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:50:47 mala kernel: [687522.120030] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:51:18 mala kernel: [687553.112034] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:51:49 mala kernel: [687584.116025] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:52:02 mala kernel: [687596.170632] type=1505 audit(1267671122.035:31): operation="profile_replace" pid=8426 name=/usr/lib/cups/backend/cups-pdf Mar 3 19:52:02 mala kernel: [687596.171551] type=1505 audit(1267671122.035:32): operation="profile_replace" pid=8426 name=/usr/sbin/cupsd Mar 3 19:52:06 mala kernel: [687600.908056] async/0 D c08145c0 0 7655 2 0x00000000 Mar 3 19:52:06 mala kernel: [687600.908062] e5601d38 00000046 e5774000 c08145c0 e4c2a848 c08145c0 d203973a 0002713d Mar 3 19:52:06 mala kernel: [687600.908072] c08145c0 c08145c0 e4c2a848 c08145c0 00000000 0002713d c08145c0 f0a98c00 Mar 3 19:52:06 mala kernel: [687600.908079] e4c2a5b0 c20125c0 00000002 e5601d80 e5601d44 c056f3be e5601d78 e5601d4c Mar 3 19:52:06 mala kernel: [687600.908087] Call Trace: Mar 3 19:52:06 mala kernel: [687600.908099] [<c056f3be>] io_schedule+0x1e/0x30 Mar 3 19:52:06 mala kernel: [687600.908107] [<c01b2cf5>] sync_page+0x35/0x40 Mar 3 19:52:06 mala kernel: [687600.908111] [<c056f8f7>] __wait_on_bit_lock+0x47/0x90 Mar 3 19:52:06 mala kernel: [687600.908115] [<c01b2cc0>] ? sync_page+0x0/0x40 Mar 3 19:52:06 mala kernel: [687600.908121] [<c020f390>] ? blkdev_readpage+0x0/0x20 Mar 3 19:52:06 mala kernel: [687600.908125] [<c01b2ca9>] __lock_page+0x79/0x80 Mar 3 19:52:06 mala kernel: [687600.908130] [<c015c130>] ? wake_bit_function+0x0/0x50 Mar 3 19:52:06 mala kernel: [687600.908135] [<c01b459f>] read_cache_page_async+0xbf/0xd0 Mar 3 19:52:06 mala kernel: [687600.908139] [<c01b45c2>] read_cache_page+0x12/0x60 Mar 3 19:52:06 mala kernel: [687600.908144] [<c0232dca>] read_dev_sector+0x3a/0x80 Mar 3 19:52:06 mala kernel: [687600.908148] [<c0233d3e>] adfspart_check_ICS+0x1e/0x160 Mar 3 19:52:06 mala kernel: [687600.908152] [<c023339f>] ? disk_name+0xaf/0xc0 Mar 3 19:52:06 mala kernel: [687600.908157] [<c0233d20>] ? adfspart_check_ICS+0x0/0x160 Mar 3 19:52:06 mala kernel: [687600.908161] [<c02334de>] check_partition+0x10e/0x180 Mar 3 19:52:06 mala kernel: [687600.908165] [<c02335f6>] rescan_partitions+0xa6/0x330 Mar 3 19:52:06 mala kernel: [687600.908171] [<c0312472>] ? kobject_get+0x12/0x20 Mar 3 19:52:06 mala kernel: [687600.908175] [<c0312472>] ? kobject_get+0x12/0x20 Mar 3 19:52:06 mala kernel: [687600.908180] [<c039fc43>] ? get_device+0x13/0x20 Mar 3 19:52:06 mala kernel: [687600.908185] [<c03c263f>] ? sd_open+0x5f/0x1b0 Mar 3 19:52:06 mala kernel: [687600.908189] [<c020fda0>] __blkdev_get+0x140/0x310 Mar 3 19:52:06 mala kernel: [687600.908194] [<c020f0ac>] ? 
bdget+0xec/0x100 Mar 3 19:52:06 mala kernel: [687600.908198] [<c020ff7a>] blkdev_get+0xa/0x10 Mar 3 19:52:06 mala kernel: [687600.908202] [<c0232f30>] register_disk+0x120/0x140 Mar 3 19:52:06 mala kernel: [687600.908207] [<c0308b4d>] ? blk_register_region+0x2d/0x40 Mar 3 19:52:06 mala kernel: [687600.908211] [<c03084f0>] ? exact_match+0x0/0x10 Mar 3 19:52:06 mala kernel: [687600.908216] [<c0308cf0>] add_disk+0x80/0x140 Mar 3 19:52:06 mala kernel: [687600.908221] [<c03084f0>] ? exact_match+0x0/0x10 Mar 3 19:52:06 mala kernel: [687600.908225] [<c0308860>] ? exact_lock+0x0/0x20 Mar 3 19:52:06 mala kernel: [687600.908230] [<c03c53df>] sd_probe_async+0xff/0x1c0
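
    If the filesystem really was created on the whole /dev/sdc device rather than on a partition, there is no partition table for the kernel to parse, which would fit both the empty fdisk -l output for this disk and the hang inside the partition-scanning code (check_partition in the trace above). A hedged sketch of what to try; /dev/sdb follows the kernel log above, and the paths are placeholders:

        # If mkfs was run on the whole device, mount the whole device:
        sudo mkdir -p /mnt/olddrive
        sudo mount -t ext3 /dev/sdb /mnt/olddrive

        # Given the repeated USB resets in the log, consider imaging the disk
        # first and working from the copy (gddrescue is the Ubuntu package):
        sudo apt-get install gddrescue
        sudo ddrescue -n /dev/sdb /some/path/drive.img /some/path/drive.map
        sudo mount -o loop -t ext3 /some/path/drive.img /mnt/olddrive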

  • How to stop an ICMP attack?

    - by cumhur onat
    We are under a heavy ICMP flood attack. Tcpdump shows the result below. Although we have blocked ICMP with iptables, tcpdump still prints ICMP packets. I've also attached the iptables configuration and "top" output. Is there anything I can do to completely stop ICMP packets?

        [root@server downloads]# tcpdump icmp -v -n -nn
        tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
        03:02:47.810957 IP (tos 0x0, ttl 49, id 16007, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36
        IP (tos 0x0, ttl 124, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp]
        03:02:47.811559 IP (tos 0x0, ttl 49, id 16010, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36
        IP (tos 0x0, ttl 52, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp]
        03:02:47.811922 IP (tos 0x0, ttl 49, id 16012, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36
        IP (tos 0x0, ttl 122, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp]
        03:02:47.812485 IP (tos 0x0, ttl 49, id 16015, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36
        IP (tos 0x0, ttl 126, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp]
        03:02:47.812613 IP (tos 0x0, ttl 49, id 16016, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36
        IP (tos 0x0, ttl 122, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp]
        03:02:47.812992 IP (tos 0x0, ttl 49, id 16018, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36
        IP (tos 0x0, ttl 122, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp]
        03:02:47.813582 IP (tos 0x0, ttl 49, id 16020, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36
        IP (tos 0x0, ttl 52, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp]
        03:02:47.814092 IP (tos 0x0, ttl 49, id 16023, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36
        IP (tos 0x0, ttl 120, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp]
        03:02:47.814233 IP (tos 0x0, ttl 49, id 16024, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36
        IP (tos 0x0, ttl 120, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp]
        03:02:47.815579 IP (tos 0x0, ttl 49, id 16025, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36
        IP (tos 0x0, ttl 50, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 >
94.201.175.188: [|icmp] 03:02:47.815726 IP (tos 0x0, ttl 49, id 16026, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 50, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp] 03:02:47.815890 IP (tos 0x0, ttl 49, id 16027, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 iptables configuration: [root@server etc]# iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ofis tcp -- anywhere anywhere tcp dpt:mysql ofis tcp -- anywhere anywhere tcp dpt:ftp DROP icmp -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination DROP icmp -- anywhere anywhere Chain ofis (2 references) target prot opt source destination ACCEPT all -- OUR_OFFICE_IP anywhere DROP all -- anywhere anywhere top: top - 03:12:19 up 400 days, 15:43, 3 users, load average: 1.49, 1.67, 2.61 Tasks: 751 total, 3 running, 748 sleeping, 0 stopped, 0 zombie Cpu(s): 8.2%us, 1.0%sy, 0.0%ni, 87.9%id, 2.1%wa, 0.1%hi, 0.7%si, 0.0%st Mem: 32949948k total, 26906844k used, 6043104k free, 4707676k buffers Swap: 10223608k total, 0k used, 10223608k free, 14255584k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 36 root 39 19 0 0 0 R 100.8 0.0 17:03.56 ksoftirqd/11 10552 root 15 0 11408 1460 676 R 5.7 0.0 0:00.04 top 7475 lighttpd 15 0 304m 22m 15m S 3.8 0.1 0:05.37 php-cgi 1294 root 10 -5 0 0 0 S 1.9 0.0 380:54.73 kjournald 3574 root 15 0 631m 11m 5464 S 1.9 0.0 0:00.65 node 7766 lighttpd 16 0 302m 19m 14m S 1.9 0.1 0:05.70 php-cgi 10237 postfix 15 0 52572 2216 1692 S 1.9 0.0 0:00.02 scache 1 root 15 0 10372 680 572 S 0.0 0.0 0:07.99 init 2 root RT -5 0 0 0 S 0.0 0.0 0:16.72 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.06 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 1:10.46 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:01.11 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 2:36.15 migration/2 9 root 34 19 0 0 0 S 0.0 0.0 0:00.19 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 3:48.91 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.20 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 uname -a [root@server etc]# uname -a Linux thisis.oursite.com 2.6.18-238.19.1.el5 #1 SMP Fri Jul 15 07:31:24 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux arp -an [root@server downloads]# arp -an ? (77.92.136.194) at 00:25:90:04:F0:90 [ether] on eth0 ? (192.168.0.2) at 00:25:90:04:F0:91 [ether] on eth1 ? (77.92.136.193) at 00:23:9C:0B:CD:01 [ether] on eth0
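
    One thing worth knowing: tcpdump taps the interface below netfilter, so it keeps printing packets that the INPUT chain is already dropping; seeing them in tcpdump does not mean the DROP rule is failing. Check the rule counters instead, and since these are ICMP redirects, the kernel can also be told to ignore them outright. A hedged sketch (rule placement and the rate limit are assumptions):

        # Verify the existing DROP rule is actually matching:
        iptables -L INPUT -v -n | grep -i icmp

        # Ignore ICMP redirects at the kernel level:
        sysctl -w net.ipv4.conf.all.accept_redirects=0
        sysctl -w net.ipv4.conf.all.secure_redirects=0
        sysctl -w net.ipv4.conf.all.send_redirects=0

        # Or rate-limit ICMP instead of dropping it all:
        iptables -I INPUT 1 -p icmp -m limit --limit 10/second -j ACCEPT
        iptables -A INPUT -p icmp -j DROP

    Dropping locally still costs the interrupt handling visible in the ksoftirqd line of the top output above, so truly stopping the flood needs filtering upstream at the provider.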

  • Sendmail to local domain ignoring MX records (part 2)

    - by FractalizeR
    Hello. I have exactly the problem described in this post: http://serverfault.com/questions/25068/sendmail-to-local-domain-ignoring-mx-records

    I am also using an email provider like GMail For Your Domain (which stores your mail and manages it). I am sending mail from my server directly, but receiving mail is done via Yandex (the email provider). Since the server hosts a forum, I prefer to send mail directly from it, because using another mail provider can slow things down. Also, when I send 300.000 emails to my subscribers, the email provider will surely block me, thinking I send spam. My DNS zone now is:

        ;
        ; GSMFORUM.RU
        ;
        $TTL 1H
        gsmforum.ru.     SOA    ns1.hc.ru. support.hc.ru. (
                                2009122268 ; Serial
                                1H         ; Refresh
                                30M        ; Retry
                                1W         ; Expire
                                1H )       ; Minimum
        gsmforum.ru.     NS     ns1.hc.ru.
        gsmforum.ru.     NS     ns2.hc.ru.
        @                A      79.174.68.223
        *.gsmforum.ru.   CNAME  @
        ns1              A      79.174.68.223
        ns2              A      79.174.68.224
        @                MX 10  mx.yandex.ru.
        mail             CNAME  domain.mail.yandex.net.
        yamail-xxxxxxxxx CNAME  mail.yandex.ru.

    The server hostname is server.gsmforum.ru. Maybe this is the cause? Can someone explain the reason for this behaviour (the rules that make sendmail consider a domain local)? Can I easily change *.gsmforum.ru. CNAME @ into *.gsmforum.ru. A 79.174.68.224 to solve this problem?

        [root@server ~]# cat /etc/mail/local-host-names
        localhost
        localhost.localdomain

    This server hosts gsmforum.ru, so I cannot put it into another domain like David Mackintosh suggests. Putting the domain in mailertable doesn't solve the problem either: sendmail -bt still shows that the address is local. DontProbeInterfaces is also set to true in the sendmail config. The M4 file follows: divert(-1)dnl dnl # dnl # This is the sendmail macro config file for m4. If you make changes to dnl # /etc/mail/sendmail.mc, you will need to regenerate the dnl # /etc/mail/sendmail.cf file by confirming that the sendmail-cf package is dnl # installed and then performing a dnl # dnl # make -C /etc/mail dnl # include(`/usr/share/sendmail-cf/m4/cf.m4')dnl VERSIONID(`setup for linux')dnl OSTYPE(`linux')dnl dnl # dnl # Do not advertize sendmail version. dnl # dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl dnl # dnl # default logging level is 9, you might want to set it higher to dnl # debug the configuration dnl # dnl define(`confLOG_LEVEL', `9')dnl dnl # dnl # Uncomment and edit the following line if your outgoing mail needs to dnl # be sent out through an external mail server: dnl # dnl define(`SMART_HOST', `smtp.your.provider')dnl dnl # define(`confDEF_USER_ID', ``8:12'')dnl dnl define(`confAUTO_REBUILD')dnl define(`confTO_CONNECT', `1m')dnl define(`confTRY_NULL_MX_LIST', `True')dnl define(`confDONT_PROBE_INTERFACES',`True') define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl define(`ALIAS_FILE', `/etc/aliases')dnl define(`STATUS_FILE', `/var/log/mail/statistics')dnl define(`UUCP_MAILER_MAX', `2000000')dnl define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl define(`confAUTH_OPTIONS', `A')dnl dnl # dnl # The following allows relaying if the user authenticates, and disallows dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links dnl # dnl define(`confAUTH_OPTIONS', `A p')dnl dnl # dnl # PLAIN is the preferred plaintext authentication method and used by dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do dnl # use LOGIN. Other mechanisms should be used if the connection is not dnl # guaranteed secure. dnl # Please remember that saslauthd needs to be running for AUTH.
dnl # dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl dnl # dnl # Rudimentary information on creating certificates for sendmail TLS: dnl # cd /usr/share/ssl/certs; make sendmail.pem dnl # Complete usage: dnl # make -C /usr/share/ssl/certs usage dnl # dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl dnl # dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's dnl # slapd, which requires the file to be readble by group ldap dnl # dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl dnl # dnl define(`confTO_QUEUEWARN', `4h')dnl dnl define(`confTO_QUEUERETURN', `5d')dnl dnl define(`confQUEUE_LA', `12')dnl dnl define(`confREFUSE_LA', `18')dnl define(`confTO_IDENT', `0')dnl dnl FEATURE(delay_checks)dnl FEATURE(`no_default_msa', `dnl')dnl FEATURE(`smrsh', `/usr/sbin/smrsh')dnl FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl FEATURE(redirect)dnl FEATURE(always_add_domain)dnl FEATURE(use_cw_file)dnl FEATURE(use_ct_file)dnl dnl # dnl # The following limits the number of processes sendmail can fork to accept dnl # incoming messages or process its message queues to 20.) sendmail refuses dnl # to accept connections once it has reached its quota of child processes. dnl # dnl define(`confMAX_DAEMON_CHILDREN', `20')dnl dnl # dnl # Limits the number of new connections per second. This caps the overhead dnl # incurred due to forking new sendmail processes. May be useful against dnl # DoS attacks or barrages of spam. (As mentioned below, a per-IP address dnl # limit would be useful but is not available as an option at this writing.) dnl # dnl define(`confCONNECTION_RATE_THROTTLE', `3')dnl dnl # dnl # The -t option will retry delivery if e.g. the user runs over his quota. dnl # FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl FEATURE(`blacklist_recipients')dnl EXPOSED_USER(`root')dnl dnl # dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery uncomment dnl # the following 2 definitions and activate below in the MAILER section the dnl # cyrusv2 mailer. dnl # dnl define(`confLOCAL_MAILER', `cyrusv2')dnl dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl dnl # dnl # The following causes sendmail to only listen on the IPv4 loopback address dnl # 127.0.0.1 and not on any other network devices. Remove the loopback dnl # address restriction to accept email from the internet or intranet. dnl # DAEMON_OPTIONS(`Name=MTA,Port=smtp') dnl # dnl # The following causes sendmail to additionally listen to port 587 for dnl # mail from MUAs that authenticate. Roaming users who can't reach their dnl # preferred sendmail daemon due to port 25 being blocked or redirected find dnl # this useful. dnl # dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl dnl # dnl # The following causes sendmail to additionally listen to port 465, but dnl # starting immediately in TLS mode upon connecting. Port 25 or 587 followed dnl # by STARTTLS is preferred, but roaming clients using Outlook Express can't dnl # do STARTTLS on ports other than 25. 
Mozilla Mail can ONLY use STARTTLS dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1. dnl # dnl # For this to work your OpenSSL certificates must be configured. dnl # dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl dnl # dnl # The following causes sendmail to additionally listen on the IPv6 loopback dnl # device. Remove the loopback address restriction listen to the network. dnl # dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl dnl # dnl # enable both ipv6 and ipv4 in sendmail: dnl # dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6') dnl # dnl # We strongly recommend not accepting unresolvable domains if you want to dnl # protect yourself from spam. However, the laptop and users on computers dnl # that do not have 24x7 DNS do need this. dnl # FEATURE(`accept_unresolvable_domains')dnl dnl # dnl FEATURE(`relay_based_on_MX')dnl dnl # dnl # Also accept email sent to "localhost.localdomain" as local email. dnl # LOCAL_DOMAIN(`localhost.localdomain')dnl dnl # dnl # The following example makes mail from this host and any additional dnl # specified domains appear to be sent from mydomain.com dnl # dnl MASQUERADE_AS(`mydomain.com')dnl dnl # dnl # masquerade not just the headers, but the envelope as well dnl # dnl FEATURE(masquerade_envelope)dnl dnl # dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as well dnl # dnl FEATURE(masquerade_entire_domain)dnl dnl # dnl MASQUERADE_DOMAIN(localhost)dnl dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl dnl MASQUERADE_DOMAIN(mydomain.lan)dnl MAILER(smtp)dnl MAILER(procmail)dnl dnl MAILER(cyrusv2)dnl FEATURE(`dnsbl',`zen.spamhaus.org',`Rejected - your IP is blacklisted by http://www.spamhaus.org')
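
    For tracing why sendmail considers the domain local: addresses whose domain is in class w (built from the hostname, /etc/mail/local-host-names and, unless DontProbeInterfaces is set, interface addresses) are delivered locally. Address test mode can show the class and the parse directly; a hedged sketch (the test address is an example):

        # Dump class w (the names sendmail treats as local), then trace a parse:
        printf '$=w\n3,0 user@gsmforum.ru\n' | sendmail -bt

    If gsmforum.ru appears in $=w, whatever put it there is the culprit; if it does not, the local delivery decision is coming from somewhere else, which the 3,0 parse output should reveal. Remember to rebuild sendmail.cf from the .mc and restart before testing, so the running configuration matches the file shown above.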

    Read the article

  • problems mounting an external IDE drive via USB in ubuntu

    - by Roy Rico
    I am having a problem connecting a specific IDE drive to my linux box. It's an old drive which I just want to get about 3 GB of files off of. INFO I am trying to connect a 200GB IDE Maxtor Drive, internally and externally... externally: I am using an self powered USB IDE external drive enclosure which I have used to connect various drives, under ubuntu and windows, in the past. The other posts stated it coudl be a problem I think i may have formatted the /dev/sdc partition instead of /dev/sdc1 partition when i originally formatted the drive. internally: I only have one machine left that has an internal IDE interface, and it's got XP on it. I plugged this drive internally into this machine with windows XP and used the ext2/ext3 drivers to mount this drive, but some files have question marks (?) in the file names which is messing up my copy process in windows. I can't delete the files under windows. Ubuntu Linux will not install on my only remaining machine that has IDE controller. I have tried the suggestions in the questions below http://superuser.com/questions/88182/mount-an-external-drive-in-ubuntu http://superuser.com/questions/23210/ubuntu-fails-to-mount-usb-drive it looks like i can see the drive in /proc/partitions $ cat /proc/partitions major minor #blocks name 8 0 78125000 sda 8 1 74894998 sda1 8 2 1 sda2 8 5 3229033 sda5 8 16 199148544 sdb <-- could be my drive? but it's not listed under fdisk -l $ fdisk -l Disk /dev/sda: 80.0 GB, 80000000000 bytes 255 heads, 63 sectors/track, 9726 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xd0f4738c Device Boot Start End Blocks Id System /dev/sda1 * 1 9324 74894998+ 83 Linux /dev/sda2 9325 9726 3229065 5 Extended /dev/sda5 9325 9726 3229033+ 82 Linux swap / Solaris and here is my log of /var/log/messages. with a bunch of weird output, can someone let me know what that weird output is? Mar 3 19:49:40 mala kernel: [687455.112029] usb 1-7: new high speed USB device using ehci_hcd and address 3 Mar 3 19:49:41 mala kernel: [687455.248576] usb 1-7: configuration #1 chosen from 1 choice Mar 3 19:49:41 mala kernel: [687455.267450] Initializing USB Mass Storage driver... Mar 3 19:49:41 mala kernel: [687455.269180] scsi4 : SCSI emulation for USB Mass Storage devices Mar 3 19:49:41 mala kernel: [687455.269410] usbcore: registered new interface driver usb-storage Mar 3 19:49:41 mala kernel: [687455.269416] USB Mass Storage support registered. 
Mar 3 19:49:46 mala kernel: [687460.270917] scsi 4:0:0:0: Direct-Access Maxtor 6 Y200P0 YAR4 PQ: 0 ANSI: 2
Mar 3 19:49:46 mala kernel: [687460.271485] sd 4:0:0:0: Attached scsi generic sg2 type 0
Mar 3 19:49:46 mala kernel: [687460.278858] sd 4:0:0:0: [sdb] 398297088 512-byte logical blocks: (203 GB/189 GiB)
Mar 3 19:49:46 mala kernel: [687460.280866] sd 4:0:0:0: [sdb] Write Protect is off
Mar 3 19:50:16 mala kernel: [687460.283784] sdb:
Mar 3 19:50:16 mala kernel: [687491.112020] usb 1-7: reset high speed USB device using ehci_hcd and address 3
Mar 3 19:50:47 mala kernel: [687522.120030] usb 1-7: reset high speed USB device using ehci_hcd and address 3
Mar 3 19:51:18 mala kernel: [687553.112034] usb 1-7: reset high speed USB device using ehci_hcd and address 3
Mar 3 19:51:49 mala kernel: [687584.116025] usb 1-7: reset high speed USB device using ehci_hcd and address 3
Mar 3 19:52:02 mala kernel: [687596.170632] type=1505 audit(1267671122.035:31): operation="profile_replace" pid=8426 name=/usr/lib/cups/backend/cups-pdf
Mar 3 19:52:02 mala kernel: [687596.171551] type=1505 audit(1267671122.035:32): operation="profile_replace" pid=8426 name=/usr/sbin/cupsd
Mar 3 19:52:06 mala kernel: [687600.908056] async/0 D c08145c0 0 7655 2 0x00000000
Mar 3 19:52:06 mala kernel: [687600.908062] e5601d38 00000046 e5774000 c08145c0 e4c2a848 c08145c0 d203973a 0002713d
Mar 3 19:52:06 mala kernel: [687600.908072] c08145c0 c08145c0 e4c2a848 c08145c0 00000000 0002713d c08145c0 f0a98c00
Mar 3 19:52:06 mala kernel: [687600.908079] e4c2a5b0 c20125c0 00000002 e5601d80 e5601d44 c056f3be e5601d78 e5601d4c
Mar 3 19:52:06 mala kernel: [687600.908087] Call Trace:
Mar 3 19:52:06 mala kernel: [687600.908099] [<c056f3be>] io_schedule+0x1e/0x30
Mar 3 19:52:06 mala kernel: [687600.908107] [<c01b2cf5>] sync_page+0x35/0x40
Mar 3 19:52:06 mala kernel: [687600.908111] [<c056f8f7>] __wait_on_bit_lock+0x47/0x90
Mar 3 19:52:06 mala kernel: [687600.908115] [<c01b2cc0>] ? sync_page+0x0/0x40
Mar 3 19:52:06 mala kernel: [687600.908121] [<c020f390>] ? blkdev_readpage+0x0/0x20
Mar 3 19:52:06 mala kernel: [687600.908125] [<c01b2ca9>] __lock_page+0x79/0x80
Mar 3 19:52:06 mala kernel: [687600.908130] [<c015c130>] ? wake_bit_function+0x0/0x50
Mar 3 19:52:06 mala kernel: [687600.908135] [<c01b459f>] read_cache_page_async+0xbf/0xd0
Mar 3 19:52:06 mala kernel: [687600.908139] [<c01b45c2>] read_cache_page+0x12/0x60
Mar 3 19:52:06 mala kernel: [687600.908144] [<c0232dca>] read_dev_sector+0x3a/0x80
Mar 3 19:52:06 mala kernel: [687600.908148] [<c0233d3e>] adfspart_check_ICS+0x1e/0x160
Mar 3 19:52:06 mala kernel: [687600.908152] [<c023339f>] ? disk_name+0xaf/0xc0
Mar 3 19:52:06 mala kernel: [687600.908157] [<c0233d20>] ? adfspart_check_ICS+0x0/0x160
Mar 3 19:52:06 mala kernel: [687600.908161] [<c02334de>] check_partition+0x10e/0x180
Mar 3 19:52:06 mala kernel: [687600.908165] [<c02335f6>] rescan_partitions+0xa6/0x330
Mar 3 19:52:06 mala kernel: [687600.908171] [<c0312472>] ? kobject_get+0x12/0x20
Mar 3 19:52:06 mala kernel: [687600.908175] [<c0312472>] ? kobject_get+0x12/0x20
Mar 3 19:52:06 mala kernel: [687600.908180] [<c039fc43>] ? get_device+0x13/0x20
Mar 3 19:52:06 mala kernel: [687600.908185] [<c03c263f>] ? sd_open+0x5f/0x1b0
Mar 3 19:52:06 mala kernel: [687600.908189] [<c020fda0>] __blkdev_get+0x140/0x310
Mar 3 19:52:06 mala kernel: [687600.908194] [<c020f0ac>] ? bdget+0xec/0x100
Mar 3 19:52:06 mala kernel: [687600.908198] [<c020ff7a>] blkdev_get+0xa/0x10
Mar 3 19:52:06 mala kernel: [687600.908202] [<c0232f30>] register_disk+0x120/0x140
Mar 3 19:52:06 mala kernel: [687600.908207] [<c0308b4d>] ? blk_register_region+0x2d/0x40
Mar 3 19:52:06 mala kernel: [687600.908211] [<c03084f0>] ? exact_match+0x0/0x10
Mar 3 19:52:06 mala kernel: [687600.908216] [<c0308cf0>] add_disk+0x80/0x140
Mar 3 19:52:06 mala kernel: [687600.908221] [<c03084f0>] ? exact_match+0x0/0x10
Mar 3 19:52:06 mala kernel: [687600.908225] [<c0308860>] ? exact_lock+0x0/0x20
Mar 3 19:52:06 mala kernel: [687600.908230] [<c03c53df>] sd_probe_async+0xff/0x1c0
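
If the filesystem really was created on the whole /dev/sdc device rather than on a partition, that would explain exactly this picture: the kernel sees the disk (the 199148544-block sdb in /proc/partitions) but fdisk finds no partition table on it. A quick way to test that theory (a sketch only, assuming the drive is sdb and the filesystem is ext2/ext3):

# Does the raw device itself start with a filesystem superblock?
sudo file -s /dev/sdb
# For ext2/ext3, print the superblock header to confirm
sudo dumpe2fs -h /dev/sdb
# If that looks sane, mount the whole device read-only and copy the 3 GB off
sudo mkdir -p /mnt/old
sudo mount -t ext3 -o ro /dev/sdb /mnt/old

If dumpe2fs complains about a bad magic number instead, the filesystem probably does live inside a partition, and the missing or corrupt partition table is the thing to chase.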


  • Why did my flash drive become "read only" and (how) can I fix it?

    - by Bob
I have a brand new flash drive (one week old) that has become marked as read only, by Windows, Kubuntu and a bootable partitioner. Why did this happen? Is it fixable? If it is, how can I fix this?

The problem

Firstly, this drive is new. It's certainly not been used enough to die from normal wear and tear, though I would not discount defective components. The drive itself has somehow become locked in a read only state. Windows' Disk management: Diskpart:

Generic Flash Disk USB Device
Disk ID: 33FA33FA
Type : USB
Status : Online
Path : 0
Target : 0
LUN ID : 0
Location Path : UNAVAILABLE
Current Read-only State : Yes
Read-only : No
Boot Disk : No
Pagefile Disk : No
Hibernation File Disk : No
Crashdump Disk : No
Clustered Disk : No

What really confuses me is Current Read-only State : Yes and Read-only : No.

Attempted solutions

So far, I've tried:

Formatting it in Windows (in Disk management, the format options are greyed out when right clicking).
DiskPart Clean (CLEAN - Clear the configuration information, or all information, off the disk.):
DISKPART> clean
DiskPart has encountered an error: The media is write protected.
See the System Event Log for more information.
There was nothing in the event log.
Windows command line format:
>format G:
Insert new disk for drive G:
and press ENTER when ready...
The type of the file system is FAT32.
Verifying 7740M
Cannot format. This volume is write protected.
Windows chkdsk: see below for details.
Kubuntu fsck (through VirtualBox USB passthrough): see below for details.
Acronis True Image to format, to convert to GPT, to destroy and rebuild the MBR, basically anything: failed (could not write to MBR).

Details (and a nice story)

Background

This was a brand new, generic, 8GB flash drive I wanted to create a multiboot flash drive with. It came formatted as FAT32, though oddly a little larger than most 8 GIGAbyte flash drives I've come across. Approximately 127MB was listed as "used" by Windows. I never discovered why. The end usable space was about what I normally expect from an 8GB drive (approx 7.4 GIBIbytes). I had thrown quite a few Linux distros on, along with a copy of Hiren's. They would all boot perfectly. They were put on with YUMI. When I tried to put the Knoppix DVD on, YUMI added an odd video option to its boot command which caused Knoppix to boot with a black screen on X. ttys 1 through 6 still worked as text only interfaces. A few days later, I took some time to take that odd video option off, making the boot command match the one that comes with Knoppix. On the next attempt to boot, Knoppix reported some form of LZMA corruption.

Leading up to the current issue

I was thinking the Knoppix files may have been corrupted somehow, so I tried reloading it. The drive was nearly full (45MB free), so I deleted a generic ISO that also was not booting. That went fine. I then went through YUMI to 'uninstall' Knoppix, i.e. delete the files and remove it from the menus. The files went first, then the menus were cleared successfully. However, the free space was stuck at about 700MB, same as it was before removing Knoppix. In the old Knoppix folder, there was a 0 byte file named KNOPPIX that could not be deleted. I tried reinserting the drive to delete this file - without safely removing, if that made a difference (hey, first time for everything). Running the standard Windows chkdsk scan without /r or /f reported errors found. Running with /r just got it stuck.
I decided to give fsck a shot, so I loaded up my Kubuntu VM and attached the drive to it with VirtualBox's USB 2.0 passthrough. I umounted it (/dev/sda1) and ran a fsck. There are differences between boot sector and its backup. I chose No action. It told me FATs differ and asked me to select either the first or second FAT. Whichever I selected, I got a notice of Free cluster summary wrong. If I chose Correct, it gave a list of incorrect file names. To try to fix something, at least, I ran it with the -p option. Halfway through fixing the files, the VM froze - I ended its process about ten minutes later. Cause? My next attempt was to use YUMI, again, to rebuild the whole drive. I used YUMI's built in reformat (to FAT32) option and installed a Kubuntu ISO (700MB). The format was successful, however, the extract and copy of Kubuntu (which YUMI uses a 7zip binary for) froze at about 60% done. After waiting for about fifteen minutes (longer than the 3.5GB Knoppix ISO took last time), I pulled the drive out. The drive at this point was already formatted, SYSLINUX already installed, just waiting on the unpacking of an ISO and the modifying of the boot menus. Plugging it back in, it came up as normal - however, any write action would fail. Disk management reported it as read only. On reconnect, it would come up as normal but a write operation would cause it to go read only again. After a few attempts, it started coming up as read only on insertion. Attempts to fix This is when I ran through the attempts listed above, to try and reformat it in case of a faulty format. However the inability to do so even on a bootable disk indicated something more serious is wrong. chkdsk now reports nothing is wrong, and fsck still reports MBR inconsistencies, but now always chooses first FAT automatically after telling me FATs differ. It still does the same Free cluster summary wrong afterwards. I cannot run with -p anymore because it is now marked as read only. It also managed to corrupt my VM's disk somehow on the first attempt (yes, I'm sure I chose sda, which is mapped to a 7.4GB drive - I triple checked). Thank god for snapshots? I'm just about out of ideas. To my inexperienced mind it looks like something in the drive's firmware set it to read only "permanently" somehow - is there any way to reset this? I don't particularly care about keeping data, considering I've reformatted it twice. Also, fixes that keep me in Windows are better; it reduces the risk of me accidentally nuking my main hard drive. Update 1: I pulled apart the drive out of curiosity. As you can see, there are no obvious write protect switches. There is an IC on the other side, ALCOR branded labelled AU6989HL, if that matters. If there appears to be no way to fix this, I'll probably pull out the (glued down) card and put it in a card reader to check if it's the card or the controller that died. Update 2: I've pulled the card off, Windows detects the drive as a card reader now. The contacts on the card don't appear to be used, and there are several rows of holes on the card itself. Putting it into the card reader only detects about 30MB total, RAW. It's probably either the reader incorrectly reporting the card as faulty (as if a real SD card's write protect was switched on) or a bad contact somewhere. If nothing else, I have a spare 8GB Micro SD card now... as soon as I figure out how to format it as 8GB.
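
Linux exposes the same two flags Diskpart is reporting, so it's possible to confirm from Kubuntu whether the kernel (as opposed to the filesystem) considers the device read-only. A minimal sketch, assuming the stick again shows up as /dev/sda inside the VM as it did during the fsck runs:

# 1 means the kernel has the block device marked read-only, 0 means writable
sudo blockdev --getro /dev/sda
# Show the drive's read-only flag, then try to clear it
sudo hdparm -r /dev/sda
sudo hdparm -r0 /dev/sda

If the flag pops back on by itself after a replug or a write attempt, the write protection is being asserted by the controller firmware, consistent with the dying-controller theory above, and no software-side toggle will clear it.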


  • What is the RSA SecurID packet format?

    - by bmatthews68
I am testing a client application that authenticates using RSA SecurID hardware tokens. The authentication is failing and I am not finding any useful information in the log files. I am using Authentication Manager 8.0 and the Java SDK. I have a capture of the traffic to and from port 5500 on the authentication agent which I would like to analyze with Wireshark, but I can't find the packet format by searching the internet or the RSA SecurCare knowledge base. Can anybody direct me to the packet format?

Here is an extract from the rsa_api_debug.log file which dumps the UDP payload of the request and the response:

[2013-11-06 15:11:08,602] main - b.a():? - Sending 508 bytes to 192.168.10.121; contents:
5c 5 0 3 3 5 0 0 2 0 0 0 0 0 1 ea 71 ee 50 6e 45 83 95 8 39 4 72 e 55 cf cc 62 6d d5 a4 10 79 89 13 d5 23 6a c1 ab 33 8 c3 a1 91 92 93 4f 1e 4 8d 2a 22 2c d0 c3 7 fc 96 5f ba bf 0 80 60 60 9d 1d 9c b9 f3 58 4b 43 18 5f e0 6d 5e f5 f4 5d df bf 41 b9 9 ae 46 a0 a9 66 2d c7 6 f6 d7 66 f1 4 f8 ad 8a 9f 4d 7e e5 9c 45 67 16 15 33 70 f0 1 d5 c0 38 39 f5 fd 5e 15 4f e3 fe ea 70 fa 30 c9 e0 18 ab 64 a9 fe 2c 89 78 a2 96 b6 76 3e 2e a2 ae 2e e0 69 80 8d 51 9 56 80 f4 1a 73 9a 70 f3 e7 c1 49 49 c3 41 3 c6 ce 3e a8 68 71 3f 2 b2 9b 27 8e 63 ce 59 38 64 d1 75 b7 b7 1f 62 eb 4d 1d de c7 21 e0 67 85 b e6 c3 80 0 60 54 47 e ef 3 f9 33 7b 78 e2 3e db e4 8e 76 73 45 3 38 34 1e dd 43 3e 72 a7 37 72 5 34 8e f4 ba 9d 71 6c e 45 49 fa 92 a f6 b bf 5 b 4f dc bd 19 0 7e d2 ef 94 d 3b 78 17 37 d9 ae 19 3a 7e 46 7d ea e4 3a 8c e1 e5 9 50 a2 eb df f2 57 97 bc f2 c3 a7 6f 19 7f 2c 1a 3f 94 25 19 4b b2 37 ed ce 97 f ae f ec c9 f5 be f0 8f 72 1c 34 84 1b 11 25 dd 44 8b 99 75 a4 77 3d e1 1d 26 41 58 55 5f d5 27 82 c d3 2a f8 4 aa 8d 5e e4 79 0 49 43 59 27 5e 15 87 a f4 c4 57 b6 e1 f8 79 3b d3 20 69 5e d0 80 6a 6b 9f 43 79 84 94 d0 77 b6 fc f 3 22 ca b9 35 c0 e8 7b e9 25 26 7f c9 fb e4 a7 fc bb b7 75 ac 7b bc f4 bb 4f a8 80 9b 73 da 3 94 da 87 e7 94 4c 80 b3 f1 2e 5b d8 2 65 25 bb 92 f4 92 e3 de 8 ee 2 30 df 84 a4 69 a6 a1 d0 9c e7 8e f 8 71 4b d0 1c 14 ac 7c c6 e3 2a 2e 2a c2 32 bc 21 c4 2f 4d df 9a f3 10 3e e5 c5 7f ad e4 fb ae 99 bf 58 0 20 0 0 0 0 0 0 0 0 0 0
[2013-11-06 15:11:08,602] main - b.b():? - Enterring getResponse
[2013-11-06 15:11:08,618] main - b.a():? - Enterring getTimeoutValue(AceRequest AceAuthV4Request[AbstractAceRequest[ hdr=AcePacketHeader[Type=92 Ver=5 AppID=3 Enc=ENCRYPT Hi-Proto=5 Opt=0 CirID=0] created=1383750668571 trailer=AcePackeTrailer[nonce=39e7a607b517c4dd crc=722833884]] user=bmatthews node-sec-req=0 wpcodes=null resp-mac=0 m-resp-mac=0 client=192.168.10.3 passcode==ZTmY|? sec-sgmt=AceSecondarySegments[ cnt=3] response=none])
[2013-11-06 15:11:08,618] main - b.a():? - acm base timeout: 5
[2013-11-06 15:11:08,618] main - b.b():? - Timeout is 5000
[2013-11-06 15:11:08,618] main - b.b():? - Current retries: 0
[2013-11-06 15:11:10,618] main - b.b():?
- Received 508 bytes from 192.168.10.121; contents: 6c 5 0 3 3 6 0 0 0 0 0 1 4d 18 55 ca 18 df 84 49 70 ee 24 4a a5 c3 1c 4e 36 d8 51 ad c7 ef 49 89 6e 2e 23 b4 7e 49 73 4 15 d f4 d5 c0 bf fc 72 5b be d1 62 be e0 de 23 56 bf 26 36 7f b f0 ba 42 61 9b 6f 4b 96 88 9c e9 86 df c6 82 e5 4c 36 ee dc 1e d8 a1 0 71 65 89 dc ca ee 87 ae d6 60 c 86 1c e8 ef 9f d9 b9 4c ed 7 55 77 f3 fc 92 61 f9 32 70 6f 32 67 4d fc 17 4e 7b eb c3 c7 8c 64 3f d0 d0 c7 86 ad 4e 21 41 a2 80 dd 35 ba 31 51 e2 a0 ef df 82 52 d0 a8 43 cb 7c 51 c 85 4 c5 b2 ec 8f db e1 21 90 f5 d7 1b d7 14 ca c0 40 c5 41 4e 92 ee 3 ec 57 7 10 45 f3 54 d7 e4 e6 6e 79 89 9a 21 70 7a 3f 20 ab af 68 34 21 b7 1b 25 e1 ab d 9f cd 25 58 5a 59 b1 b8 98 58 2f 79 aa 8a 69 b9 4c c1 7d 36 28 a3 23 f5 cc 2b ab 9e f a1 79 ab 90 fd 5f 76 9f d9 86 d1 fc 4c 7a 4 24 6d de 64 f1 53 22 b0 b7 91 9a 7c a2 67 2a 35 68 83 74 6a 21 ac eb f8 a2 29 53 21 2f 5a 42 d6 26 b8 f6 7f 79 96 5 3b c2 15 3a b d0 46 42 b7 74 4e 1f 6a ad f5 73 70 46 d3 f8 e a3 83 a3 15 29 6e 68 2 df 56 5c 88 8d 6c 2f ab 11 f1 5 73 58 ec 4 5f 80 e3 ca 56 ce 8 b9 73 7c 79 fc 3 ff f1 40 97 bb e3 fb 35 d1 8d ba 23 fc 2d 27 5b f7 be 15 de 72 30 b e d6 5c 98 e8 44 bd ed a4 3d 87 b8 9b 35 e9 64 80 9a 2a 3c a2 cf 3e 39 cb f6 a2 f4 46 c7 92 99 bc f7 4a de 7e 79 9d 9b d9 34 7f df 27 62 4f 5b ef 3a 4c 8d 2e 66 11 f7 8 c3 84 6e 57 ba 2a 76 59 58 78 41 18 66 76 fd 9d cb a2 14 49 e1 59 4a 6e f5 c3 94 ae 1a ba 51 fc 29 54 ba 6c 95 57 6b 20 87 cc b8 dc 5f 48 72 9c c0 2c dd 60 56 4e 4c 6c 1d 40 bd 4 a1 10 4e a4 b1 87 83 dd 1c f2 df 4c [2013-11-06 15:11:10,618] main - a.a():? - Response status is: 1 [2013-11-06 15:11:10,618] main - a.a():? - Authenticaton failed for bmatthews ! [2013-11-06 15:11:10,618] main - AuthSessionFactory.shutdown():? - RSA Authentication API shutdown invoked [2013-11-06 15:11:10,618] main - AuthSessionFactory.shutdown():? - RSA Authentication API shutdown successful
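
There is no public dissector for this protocol as far as I know, but tshark (Wireshark's command-line twin) can at least line the raw payloads up against the SDK log. A sketch, where the interface name is a placeholder:

# Capture everything to and from the agent's authentication port
tshark -i eth0 -f "udp port 5500" -w securid.pcap
# Print each packet as a hex dump for side-by-side comparison with rsa_api_debug.log
tshark -r securid.pcap -x

Note that everything after the packet header is encrypted (the log above shows Enc=ENCRYPT in AcePacketHeader), so even a custom dissector could only interpret the header fields, not the passcode block.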


  • Linux servers going into Halt when pressing Control-D in putty or exit in the shell

    - by Itai Ganot
Since today at noon, a number of our Linux CentOS servers go into halt whenever I type exit or use Control-D to close the PuTTY window. Has anyone encountered this weird behavior before? I've checked the alias list on the servers and there is no alias involving the halt command. After a server came back online I checked the history and saw a "logout" command there, but nothing related to halt. At first I thought it happened only from my computer, but later I realized that it happens to everyone who types exit, logout or Control-D. Two of these servers are our main iptables firewalls, so this is super critical; your assistance is much appreciated. It looks like this, and it only happens on servers with active iptables:

[root@srv1 bin]# ssh srv2
root@srv2's password:
Last login: Sun Nov 11 17:19:41 2012 from 192.168.12.98
[root@srv2 ~]# vim /etc/crontab
[root@srv2 ~]# exit
logout

Broadcast message from root (pts/1) (Tue Nov 13 10:44:04 2012):

The system is going down for system halt NOW!
Connection to srv2 closed.
[root@srv1 bin]#

While troubleshooting I came across the command strace, so I opened two bash windows to one of the problematic servers and ran strace -p PID_of_bash. When I typed exit in the first shell the server did go to halt; attached is the strace output. If you can check it out and tell me whether you see anything suspicious I'd be more than thankful.

RER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGALRM, {0x4484f0, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM SYS], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTSTP, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTOU, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTIN, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGWINCH, {0x448370, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c410, [], SA_RESTORER|SA_RESTART, 0x2b0e45a8f2f0}, 8) = 0 socket(PF_NETLINK, SOCK_RAW, 9) = 3 sendmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(2)=[{"\25\0\0\0d\4\1\0\0\0\0\0\0\0\0\0", 16}, {"exit\0", 5}], msg_controllen=0, msg_flags=0}, 0) = 21 close(3) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 write(2, "logout\n", 7) = 7 write(2, "There are stopped jobs.\n", 24) = 24 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0 pipe([3, 4]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x2b0e45db6fe0) = 23717 setpgid(23717, 23717) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 close(3) = 0 close(4) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [CHLD], 8) = 0 ioctl(255, TIOCSPGRP, [23717]) = 0 rt_sigprocmask(SIG_SETMASK, [CHLD], NULL, 8) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WSTOPPED|WCONTINUED, NULL) = 23717 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [CHLD], 8) = 0 ioctl(255, TIOCSPGRP, [20458]) = 0 rt_sigprocmask(SIG_SETMASK, [CHLD], NULL, 8) = 0 ioctl(255, SNDCTL_TMR_TIMEBASE
or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(255, TIOCGWINSZ, {ws_row=53, ws_col=211, ws_xpixel=0, ws_ypixel=0}) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- wait4(-1, 0x7fff395da984, WNOHANG|WSTOPPED|WCONTINUED, NULL) = 0 rt_sigreturn(0x11) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [], 8) = 0 ioctl(255, TIOCSPGRP, [20458]) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigprocmask(SIG_BLOCK, [INT], [], 8) = 0 ioctl(0, TIOCGWINSZ, {ws_row=53, ws_col=211, ws_xpixel=0, ws_ypixel=0}) = 0 ioctl(0, TIOCSWINSZ, {ws_row=53, ws_col=211, ws_xpixel=0, ws_ypixel=0}) = 0 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(0, SNDCTL_TMR_STOP or TCSETSW, {B38400 opost isig -icanon -echo ...}) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigprocmask(SIG_BLOCK, [INT QUIT ALRM TSTP TTIN TTOU], [], 8) = 0 rt_sigaction(SIGINT, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTERM, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTERM, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGQUIT, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGALRM, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x4484f0, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM SYS], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTSTP, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTSTP, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTOU, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTOU, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTIN, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTIN, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigaction(SIGWINCH, {0x47c410, [], SA_RESTORER|SA_RESTART, 0x2b0e45a8f2f0}, {0x448370, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 write(2, "[root@g2-lga ~]# ", 17) = 17 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "e", 1) = 1 write(2, "e", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "x", 1) = 1 write(2, "x", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "i", 1) = 1 write(2, "i", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "t", 1) = 1 write(2, "t", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "\r", 1) = 1 write(2, "\n", 1) = 1 rt_sigprocmask(SIG_BLOCK, [INT], [], 8) = 0 ioctl(0, SNDCTL_TMR_STOP or TCSETSW, {B38400 opost isig icanon echo ...}) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTERM, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], 
SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGALRM, {0x4484f0, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM SYS], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTSTP, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTOU, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTIN, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGWINCH, {0x448370, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c410, [], SA_RESTORER|SA_RESTART, 0x2b0e45a8f2f0}, 8) = 0 socket(PF_NETLINK, SOCK_RAW, 9) = 3 sendmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(2)=[{"\25\0\0\0d\4\1\0\0\0\0\0\0\0\0\0", 16}, {"exit\0", 5}], msg_controllen=0, msg_flags=0}, 0) = 21 close(3) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 write(2, "logout\n", 7) = 7 open("/root/.bash_logout", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=24, ...}) = 0 read(3, "# ~/.bash_logout\n\nclear\n", 24) = 24 close(3) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 stat(".", {st_mode=S_IFDIR|0750, st_size=12288, ...}) = 0 stat("/usr/kerberos/sbin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/usr/kerberos/bin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/usr/local/sbin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/usr/local/bin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/sbin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/bin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/usr/sbin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/usr/bin/clear", {st_mode=S_IFREG|0755, st_size=12712, ...}) = 0 access("/usr/bin/clear", X_OK) = 0 access("/usr/bin/clear", R_OK) = 0 stat("/usr/bin/clear", {st_mode=S_IFREG|0755, st_size=12712, ...}) = 0 access("/usr/bin/clear", X_OK) = 0 access("/usr/bin/clear", R_OK) = 0 rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0 pipe([3, 4]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x2b0e45db6fe0) = 23726 setpgid(23726, 23726) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 close(3) = 0 close(4) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [CHLD], 8) = 0 ioctl(255, TIOCSPGRP, [23726]) = 0 rt_sigprocmask(SIG_SETMASK, [CHLD], NULL, 8) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 wait4(-1, Broadcast message from root (pts/0) (Wed Nov 14 12:41:44 2012): The system is going down for system halt NOW! 
[{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WSTOPPED|WCONTINUED, NULL) = 23726 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [CHLD], 8) = 0 ioctl(255, TIOCSPGRP, [20458]) = 0 rt_sigprocmask(SIG_SETMASK, [CHLD], NULL, 8) = 0 ioctl(255, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(255, TIOCGWINSZ, {ws_row=53, ws_col=211, ws_xpixel=0, ws_ypixel=0}) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- wait4(-1, 0x7fff395da634, WNOHANG|WSTOPPED|WCONTINUED, NULL) = 0 rt_sigreturn(0x11) = 0 open("/etc/bash.bash_logout", O_RDONLY) = -1 ENOENT (No such file or directory) rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 stat("/root/.bash_history", {st_mode=S_IFREG|0600, st_size=28900, ...}) = 0 open("/root/.bash_history", O_WRONLY|O_APPEND) = 3 write(3, "cd /etc/profile.d/\nls\nls -alrt\ng"..., 1120) = 1120 close(3) = 0 open("/root/.bash_history", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0600, st_size=30020, ...}) = 0 read(3, "history \nping g1-lga\nping g1-lga"..., 30020) = 30020 close(3) = 0 open("/root/.bash_history", O_WRONLY|O_TRUNC) = 3 write(3, "grep \"216.18\" *\nhistory \nexit\nvi"..., 27609) = 27609 close(3) = 0 kill(4294965658, SIGTERM) = 0 kill(4294965658, SIGCONT) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- wait4(-1, [{WIFSIGNALED(s) && WTERMSIG(s) == SIGTERM}], WNOHANG|WSTOPPED|WCONTINUED, NULL) = 1638 wait4(-1, 0x7fff395dac34, WNOHANG|WSTOPPED|WCONTINUED, NULL) = -1 ECHILD (No child processes) rt_sigreturn(0x11) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- wait4(-1, 0x7fff395dac34, WNOHANG|WSTOPPED|WCONTINUED, NULL) = -1 ECHILD (No child processes) rt_sigreturn(0xffffffffffffffff) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [], 8) = 0 ioctl(255, TIOCSPGRP, [20458]) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 setpgid(0, 20458) = -1 EPERM (Operation not permitted) exit_group(1) = ? Process 20458 detached [root@g2-lga ~]#
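
For what it's worth, the trace itself looks like a normal bash exit: it runs /root/.bash_logout, saves the history, and signals its process group; nothing in it invokes halt directly, which suggests something outside the shell is reacting to the logout. A hedged checklist for hunting a planted shutdown hook (the paths are the usual suspects, not an exhaustive list):

# Look for halt/shutdown/poweroff planted in shell startup/exit hooks
grep -rn -e halt -e shutdown -e poweroff \
    /etc/profile /etc/profile.d /etc/bashrc /root/.bashrc \
    /root/.bash_logout /etc/bash.bash_logout 2>/dev/null
# Any traps set to fire when the login shell exits?
trap -p
# Verify init and the core shells haven't been replaced (CentOS package names assumed)
rpm -V SysVinit coreutils bash

Given that only the iptables-running firewalls are affected, it would also be worth checking /var/log/secure and the cron entries on those two machines for anything that reacts to session close.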


  • What other tool is using my hotkey?

    - by Sammy
I use Greenshot for screenshots, and it's been nagging about some other software tool using the same hotkey. I started receiving this warning message about two days ago. It shows up each time I reboot and log on to Windows.

The hotkey(s) "Ctrl + Shift + PrintScreen" could not be registered. This problem is probably caused by another tool claiming usage of the same hotkey(s)! You could either change your hotkey settings or deactivate/change the software making use of the hotkey(s).

What's this all about? The only software I recently installed is:

CPU-Z
Core Temp
Speed Fan
HD Tune
Epson Print CD
NetStress

What I would like to know is how to find out what other tool is causing this conflict. Do I really have to uninstall each program, one by one, until there is no conflict anymore? I see no option for customizing any hotkeys in CPU-Z, and according to the docs there are only a few keyboard shortcuts. These are F5 through F9, but they are not hotkeys. There is nothing in Core Temp, and from what I can see... nothing in Speed Fan. Is any of these programs known to use the Ctrl + Shift + PrintScreen hotkey for screenshots?

I am actually suspecting the Dropbox client. I think I saw a warning recently coming from the Dropbox program, something to do with hotkeys or keyboard shortcuts. I see that it has an option for sharing screenshots under the Preferences menu, but I see no option for hotkeys. Core Temp actually also has an option for taking screenshots (F9) but it's just that - a keyboard shortcut, not a hotkey. And again, there's no option for changing this setting in the Options/Settings menu. How do you resolve this type of conflict? Are there any general methods you can use to pinpoint the second conflicting software? Like... is there some Windows registry key that holds the hotkeys? Or is it just down to mere luck and trial and error?

Addendum

I forgot to mention, when I do use the Ctrl + Shift + PrintScreen hotkey, what happens is that the Greenshot context menu shows up, asking me where I want to save the screenshot. So it appears to be working. But I am still getting the darn warning every time I reboot and log on to Windows?! I actually tried changing the key bindings in Greenshot preferences, but after a reboot it seems to have reverted to the settings I had previously.

Update

I can't see any hotkey conflicts in the Windows Hotkey Explorer. The aforementioned hotkey is reserved by Greenshot, and I don't see any other program using the same hotkey binding. But when I went into Greenshot preferences, this is what I discovered. As you can see, it's Greenshot itself that uses the same hotkey twice! I guess that's why no other program was listed above as using this hotkey. But how can Greenshot be so stupid as to use the same hotkey more than once? I didn't do this! It's not my fault... I'm not that stupid. This is what it's set to right now:

Capture full screen: Ctrl + Skift + Prntscrn
Capture window: Alt + Prntscrn
Capture region: Ctrl + Prntscrn
Capture last region: Skift + Prntscrn
Capture Internet Explorer: Ctrl + Skift + Prntscrn

And this is my preferred setting:

Capture full screen: Prntscrn
Capture window: Alt + Prntscrn
Capture region: Ctrl + Prntscrn
Capture last region:
Capture Internet Explorer:

I don't use any hotkey for "last region" and IE. But when I set this to my liking, as listed here, Greenshot gives me the same warning message, even as I tab through the hotkey entry fields. Sometimes it even gives me the warning when I just click the Cancel button. This is really crazy!
On a side note... You might have noticed that I have "update check" set to 0 (zero). This is because, in my experience, Greenshot changes all or only some of my preferences back to default settings whenever it automatically updates to a new version. So I opted to stay off updates to get rid of the problem. It has done so for the past three updates or so. I hoped to receive a new update that would fix the issue, but I think it still reverts back to default settings after each update to a new version, including setting default hotkeys.

Update 2

I'll give you just one example of how Greenshot behaves. This is the dialog I have in front of me right now. As you can see, I have removed the last two hotkeys and changed the first one to my own liking. While I was clicking in the fields and removing the two hotkeys I was getting the warning message. So let's say I click in the "capture last region" field. Then I get this:

Note that none of the entries include "Ctrl + Shift + PrintScreen", which is what it's warning about. Now I will change all the hotkeys so I get something like this:

So now I'm using QWERTY letters for the bindings, like Ctrl+Alt+Q, Ctrl+Alt+W and so on. As far as I know no Windows program is using these. While I was clicking through the different fields it was giving me the warning. Now when I try to click OK to save the changes, it once again gives me a warning about "ctrl + shift + printscreen".

Update 3

After setting the above key bindings (QWERTY) and saving the changes, and then rebooting, the conflict seems to have been resolved. I was then able to set the following key bindings:

Capture full screen: Prntscrn
Capture window: Alt + Prntscrn
Capture region: Ctrl + Prntscrn

I was not prompted with the warning message this time. Perhaps changing a key binding requires a system reboot? Sounds far-fetched, but that appears to be the case. I'm still not sure what caused this conflict, but I know for sure that it started after installing the aforementioned programs. It might just have to do with Greenshot itself, and not some other program. Like I said, I know from experience that Greenshot likes to mess with users' settings after each update. I wouldn't be surprised if it was actually silently updated, even though I have specified not to check for updates, then changed the key bindings back to defaults and caused a conflict with the hotkeys that were registered with the operating system previously. I rarely reboot the system, so that could have added to the conflict. Next time if I see this I will run Hotkey Explorer immediately and see if there is another program causing the conflict.


  • Alternate way to create a clone of a UNIX System

    - by Spirit
THE STORY: (If you don't like to read much, down below is the question :) )

Where I work we have two HP RP2470 servers: same hardware, same number of hard drives, same everything :). One of them is a production server and runs HP-UX 11.00. The poor ba***rd hasn't been turned off for years and now I have to make a clone of it on the other server - just in case, for redundancy. The problem is simple (or not so simple): I have to make the other server exactly the same. However, the old version of the OS (UX 11.00 is history now) and the old software running on it have made my task almost impossible. On the production server there is also a cloning/recovery utility, Ignite-UX. I tried many times to create a recovery tape with it. Then when I load the tape on the backup server, the loading of the tape succeeds (no errors, no warnings) but on the next restart it fails to load the OS :S and drops into HP's ISL prompt.

THE QUESTION: Is there an alternate way to create a clone of the Unix system?

The environment is:
1. 2x HP RP2470 servers (non-Intel), same hardware, same number of HDDs (two each of them), same everything.
2. OS running: HP-UX 11.00

The production server has to be cloned without downtime - sadly :( as I hope that they will reconsider on this one. For example (like on Windows platforms), if you copy an entire HDD with Windows inside onto another HDD, and then put that HDD into another PC, it will still work, as long as the hardware is the same. Can I do something like that with a Unix system? Can I somehow COPY the contents of the entire HDD, put those on another HDD, and then just load the HDD into the other server? (If you haven't read the story: the servers are exactly the same.) Will it work? Can it be done with ordinary commands like cp or dump or something like that? Does anyone have a similar experience?

UPDATE: 26.01.2012

NOTE: The update is related to "The Story". If you haven't read that part then you can skip this update. This is just a short update on the recovery logs from the Ignite tape... someone with more experience might notice something...

--- READING CONTENTS OF THE IGNITE TAPE ---
--- OUTPUT OMITTED ---
...
x ./configure3, 413696 bytes, 808 tape blocks
x ./monitor_bpr, 20480 bytes, 40 tape blocks
* Download_mini-system: Complete
* Loading_software: Begin
* Installing boot area on disk.
* Enabling swap areas.
* Backing up LVM configuration for "vg00".
* Processing the archive source (Recovery Archive).
* Wed Jan 25 15:27:32 EST 2012: Starting archive load of the source (Recovery Archive).
* Positioning the tape (/dev/rmt/0mn).
* Archive extraction from tape is beginning. Please wait.
* Wed Jan 25 15:39:52 EST 2012: Completed archive load of the source (Recovery Archive).
* Executing user specified script: "/opt/ignite/data/scripts/os_arch_post_l".
* Running in recovery mode (os_arch_post_l).
* Running the ioinit command ("/sbin/ioinit -c")
* Creating device files via the insf command.
insf: Installing special files for sdisk instance 0 address 0/0/1/1.15.0
insf: Installing special files for sdisk instance 1 address 0/0/2/0.1.0
insf: Installing special files for sdisk instance 2 address 0/0/2/1.15.0
insf: Installing special files for stape instance 0 address 0/0/1/0.3.0
insf: Installing special files for btlan instance 0 address 0/0/0/0
insf: Installing special files for btlan instance 1 address 0/2/0/0
insf: Installing special files for pseudo driver dlpi
insf: Installing special files for pseudo driver kepd
insf: Installing special files for pseudo driver framebuf
insf: Installing special files for pseudo driver sad
* Running "/opt/upgrade/bin/tlinstall -v" and correcting transition link permissions.
* Constructing the bootconf file.
* Setting primary boot path to "0/0/1/1.15.0".
* Executing: "/var/adm/sw/products/PHSS_20146/pfiles/iux_postload".
* Executing: "/var/adm/sw/products/PHSS_25982/pfiles/iux_postload".
NOTE: tlinstall is searching filesystem - please be patient
NOTE: Successfully completed
* Loading_software: Complete
* Build_Kernel: Begin
NOTE: Since the /stand/vmunix kernel is already in place, the kernel will not be re-built. Note that no mod_kernel directives will be processed.
* Build_Kernel: Complete
* Boot_From_Client_Disk: Begin
* Rebooting machine as expected.
NOTE: Rebooting system.
sync'ing disks (0 buffers to flush): 0 buffers not flushed 0 buffers still dirty
Closing open logical volumes... Done
Console reset done.
Boot device reset done.

********** VIRTUAL FRONT PANEL **********
System Boot detected
*****************************************
LEDs: RUN ATTENTION FAULT REMOTE POWER
      FLASH OFF OFF ON ON
LED State: Running non-OS code. (i.e. Boot or Diagnostics)
...
--- SERVER IS PERFORMING POST SEQUENCE HERE ---
--- OUTPUT OMITTED ---
...
*****************************************
************ EARLY BOOT VFP *************
End of early boot detected
*****************************************
Firmware Version 43.50
Duplex Console IO Dependent Code (IODC) revision 1
------------------------------------------------------------------------------
(c) Copyright 1995-2002, Hewlett-Packard Company, All rights reserved
------------------------------------------------------------------------------
Processor  Speed    State                 CoProcessor State  Cache Size
Number                                    State              Inst    Data
---------  -------- --------------------- -----------------  ------------
0          650 MHz  Active                Functional         750 KB  1.5 MB
1          650 MHz  Idle                  Functional         750 KB  1.5 MB

Central Bus Speed (in MHz) : 120
Available Memory : 2097152 KB
Good Memory Required : 16140 KB

Primary boot path: 0/0/1/1.15
Alternate boot path: 0/0/2/1.15
Console path: 0/0/4/1.643
Keyboard path: 0/0/4/0.0

Processor is starting autoboot process.
To discontinue, press any key within 10 seconds.
10 seconds expired.
Proceeding...

Trying Primary Boot Path
------------------------
Booting...
Boot IO Dependent Code (IODC) revision 1

HARD Booted.

ISL Revision A.00.38 OCT 26, 1994

ISL booting hpux

ISL>
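
On the headline question: yes, in principle. Because the two RP2470s are identical, a raw block-level copy of the boot disk should be bootable on the second machine, the same trick as the Windows example. A rough sketch over the network using dd and remsh (device paths are examples only, check yours with ioscan -fnC disk; and note that imaging a live, mounted vg00 this way can produce an inconsistent copy, which is why Ignite or a downtime window is normally preferred):

# On the production server: stream the raw boot disk to the backup server,
# writing it onto the backup server's (unmounted) boot disk
dd if=/dev/rdsk/c1t15d0 bs=1024k | remsh backupserver "dd of=/dev/rdsk/c1t15d0 bs=1024k"

Given that the Ignite tape restores cleanly but then stops at ISL, it may also be worth trying to boot the restored system manually from that prompt (ISL> hpux /stand/vmunix) before re-imaging anything; if that boots, the problem is just the autoboot configuration rather than the archive.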


  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe Irony

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server also running Windows Server 2003 that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes. Observations: When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen. There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown Event the next morning when I have to hard reset the server to get it to boot. 90% of the time the server does not boot cleanly, it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs all I can do is hard reset it and try again. Even after a successful boot and chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't back up again cleanly. The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up and switched all my users to the temporary. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour every day for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure to boot cleanly behavior. This weekend I decided to monitor the file server through the entire backup job. I RDPd into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was getting real time updates about CPU and memory usage still - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: The screen was refreshing and I was getting real time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. 
In truth, I think the server had already locked up; the video card just hadn't figured it out yet. I went back into my bag of tricks: driving to the office and hard resetting the server over and over again when it hangs at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid as soon as I try to back that data up again I will be back at square one.

So let me sum things up. Here is what I've done so far to troubleshoot this server:

Deleted and recreated the RAID 5 sets. Initialized the drives.
Reloaded the server with a fresh Server 2003 install.
Confirmed with Dell that I have installed the latest, Dell approved BIOS and NIC drivers.
Uninstalled / reinstalled the Backup Exec Remote Agent.
Uninstalled the Trend Micro A/V client.
Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up.
Run chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem.

Help confirm or deny the following assumptions:

There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup.
This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation.
This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.

Any help is appreciated. The irony is almost too much to bear. Backing up my data is what is jeopardizing it.
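
One experiment that might split those two problems apart: if the lockup is triggered by the kind of sustained sequential read a full backup generates, it should be reproducible with no Backup Exec, no agent, and no Windows in the picture at all. A sketch, assuming the PowerEdge will boot a Linux live CD and the RAID 5 volume appears as /dev/sda (an assumption):

# Read the entire array end-to-end without writing anything;
# a hard lockup here points at controller/firmware/hardware, not software
dd if=/dev/sda of=/dev/null bs=1M
# In a second console, watch for I/O stalls while it runs
vmstat 5

If the machine survives a full pass at backup-like throughput, hardware looks less likely and suspicion shifts back to the Windows-side stack (remote agent, filter drivers, A/V).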


  • Network throughput issue (ARP-related)

    - by Joel Coel
The small college where I work is having some very strange network issues. I'm looking for any advice or ideas here. We were fine over the summer, but the trouble began a few days after students returned to campus in force for the fall term.

Symptoms

The main symptom is that internet access will work, but it's very slow... often to the point of timeouts. As an example, a typical result from Speedtest.net will return .4Mbps download, but allow 3 to 8 Mbps upload speed. Lesser symptoms may include severely limited performance transferring data to and from our file server, or even in some cases the inability to log in to the computer (cannot reach the domain controller). The issue crosses multiple vlans, and has affected devices on nearly every vlan we operate. The issue does not impact all machines on the network. An unaffected machine will typically see at least 11Mbps download from speedtest.net, and perhaps much more depending on larger campus traffic patterns at the time.

There is one variation on the larger issue. We have one vlan where users were unable to log into nearly all of the machines at all. IT staff would log in using a local administrator account (or in some cases cached credentials), and from there a release/renew or pinging the gateway would allow the machine to work... for a while. Complicating this issue is that this vlan covers our computer labs, which use software called Deep Freeze to completely reset the hard drives after a reboot. It could just be the same issue manifesting differently because of stale data on machines that have not permanently altered low-level info for weeks. We were able to solve this, however, by creating a new vlan and moving the labs over to the new vlan wholesale.

Instigations

Eventually we noticed that the affected machines all had recent dhcp leases. We can predict when a machine will become "slow" by watching when a dhcp lease comes up for renewal. We played with setting the lease time very short for a test vlan, but all that did was remove our ability to predict when the machine would become slow. Machines with static IPs have pretty much always worked normally. Manually releasing/renewing an address will never cause a machine to become slow. In fact, in some cases this process has fixed a machine in that state. Most of the time, though, it doesn't help. We also noticed that mobile machines like laptops are likely to become slow when they cross to new vlans. Wireless on campus is divided up into "zones", where each zone maps to a small set of buildings. Moving to a new building can place you in a new zone, thereby causing you to get a new address. A machine resuming from sleep mode is also very likely to be slow.

Mitigations

Sometimes, but not always, clearing the arp cache on an affected machine will allow it to work normally again. As already mentioned, releasing/renewing a local machine's IP address can fix that machine, but it's not guaranteed. Pinging the default gateway can also sometimes help with a slow machine. What seems to help most to mitigate the issue is clearing the arp cache on our core layer-3 switch. This switch is used for our dhcp system as the default gateway on all vlans, and it handles inter-vlan routing. The model is a 3Com 4900SX. To try to mitigate the issue, we have the cache timeout set on the switch all the way down to the lowest possible time, but it hasn't helped. I also put together a script that runs every few minutes to automatically connect to the switch and reset the cache.
Unfortunately, this does not always work, and can even cause some machines to end up in the slow state for a short time (though these seem to correct themselves after a few minutes). We currently have a scheduled job that runs every 10 minutes to force the core switch to clear its ARP cache, but this is far from perfect or desirable.

Reproduction

We now have a test machine that we can force into the slow state at will. It is connected to a switch with ports set up for each of our vlans. We make the machine slow by connecting to different vlans, and after a new connection or two it will be slow. It's also worth noting in this section that this has happened before at the start of prior terms, but in the past the problem has gone away on its own after a few days. It solved itself before we had a chance to do much diagnostic work... hence why we've allowed it to drag so long into the term this time 'round; the expectation was this would be a short-lived situation.

Other Factors

It's worth mentioning that we have had about half a dozen switches just outright fail over the last year. These are mainly 2003/2004-era 3Coms (mostly 4200's) that were all put in at about the same time. They should still be covered under warranty, but HP has made getting service somewhat difficult. Mostly it is power supplies that have failed, but in a couple of cases we have used a power supply from a switch with a failed mainboard to bring a switch with a failed power supply back to life. We do have UPS devices on all but three or four switches now, but that was not the case when I started two and a half years ago. Severe budget constraints (we were on the Dept. of Ed's financially challenged institutions list a couple years back) have forced me to look to the likes of Netgear and TrendNet for replacements, but so far these low-end models seem to be holding their own. It's also worth mentioning that the big change on our network this summer was migrating from a single cross-campus wireless SSID to the zoned approach mentioned earlier. I don't think this is the source of the issue, as like I've said: we've seen this before. However, it's possible this is exacerbating the issue, and may be much of the reason it's been so hard to isolate.

Diagnosis

At first it seemed clear to us, given the timing and persistent nature of the problem, that the source of the issue was an infected (or malicious) student machine doing ARP cache poisoning. However, repeated attempts to isolate the source have failed. Those attempts include numerous wireshark packet traces, and even taking entire buildings offline for brief periods. We have not been able even to find a smoking-gun bad ARP entry. My current best guess is an overloaded or failing core switch, but I'm not sure how to test for this, and the cost of replacing it blindly is steep. Again, any ideas appreciated.
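
Given that clearing the core switch's ARP cache is the one reliable mitigation, it might tighten the diagnosis to compare what a slow machine believes about the gateway against a healthy one, while logging ARP on a mirror of the core switch's port. A sketch, with the interface name and gateway address as placeholders:

# On a monitoring station attached to a mirror/SPAN port:
# log every ARP reply claiming the gateway's IP, and watch the MAC
tcpdump -n -e -i eth0 arp and host 10.0.0.1

# On a machine the moment it goes slow: what does it think the gateway is?
arp -an | grep 10.0.0.1

A wrong MAC on the slow machine points at poisoning after all; a correct MAC paired with the gateway simply not answering (or answering slowly) points at the 4900SX's own ARP table, which would fit the overloaded-core-switch guess.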


  • Content Management for WebCenter Installation Guide

    - by Gary Niu
Overview

As we know, there are two ways to install Content Management for WebCenter. One way is to install it with the WebCenter installer wizard; the other is to use its own installer. This guide covers the latter. For SSO purposes, I also describe how to configure an OID identity store for Content Management for WebCenter.

Content Management for WebCenter (10.1.3.5.1)
Oracle Enterprise Linux R5U4

Basic Installation

-bash-3.2$ ./setup.sh
Please select your locale from the list.
          1. Chinese-Simplified
          2. Chinese-Traditional
          3. Deutsch
         *4. English-US
          5. English-UK
          6. Español
          7. Français
          8. Italiano
          9. Japanese
         10. Korean
         11. Nederlands
         12. Português-Brazil
Choice?

Throughout the install, when entering a text value, you can press Enter to accept the default that appears between square brackets ([]). When selecting from a list, you can select the choice followed by an asterisk by pressing Enter.

Select installation type from the list.
        *1. Install new server
         2. Update a server
Choice?

Content Server Installation Directory
Please enter the full pathname to the installation directory.
Content Server Core Folder [/oracle/ucm/server]:/opt/oracle/ucm/server
Create Directory
        *1. yes
         2. no
Choice?

Java virtual machine
        *1. Sun Java 1.5.0_11 JDK
         2. Specify a custom Java virtual machine
Choice?
Installing with Java version 1.5.0_11.

Enter the location of the native file repository. This directory contains the native files checked in by contributors.
Content Server Native Vault Folder [/opt/oracle/ucm/server/vault/]:
Create Directory
        *1. yes
         2. no
Choice?

Enter the location of the web-viewable file repository. This directory contains files that can be accessed through the web server.
Content Server Weblayout Folder [/opt/oracle/ucm/server/weblayout/]:
Create Directory
        *1. yes
         2. no
Choice?

This server can be configured to manage its own authentication or to allow another master to act as an authentication proxy.
Configure this server as a master or proxied server.
        *1. Configure as a master server.
         2. Configure as server proxied by a local master server.
Choice?

During installation, an admin server can be installed and configured to manage this server. If there is already an admin server on this system, you can have the installer configure it to administrate this server instead.
Select admin server configuration.
        *1. Install an admin server to manage this server.
         2. Configure an existing admin server to manage this server.
         3. Don't configure an admin server.
Choice?

Enter the location of an executable to start your web browser. This browser will be used to display the online help.
Web Browser Path [/usr/bin/firefox]:

Content Server System locale
          1. Chinese-Simplified
          2. Chinese-Traditional
          3. Deutsch
         *4. English-US
          5. English-UK
          6. Español
          7. Français
          8. Italiano
          9. Japanese
         10. Korean
         11. Nederlands
         12. Português-Brazil
Choice?

Please select the region for your timezone from the list.
        *1. Use the timezone setting for your operating system
         2. Pacific
         3. America
         4. Atlantic
         5. Europe
         6. Africa
         7. Asia
         8. Indian
         9. Australia
Choice?
Please enter the port number that will be used to connect to the Content Server. This port must be otherwise unused. Content Server Port [4444]: Please enter the port number that will be used to connect to the Admin Server. This port must be otherwise unused. Admin Server Port [4440]: Enter a security filter for the server port. Hosts which are allowed to communicate directly with the server port may access any resources managed by the server. Insure that hosts which need access are included in the filter. See the installation guide for more details. Incoming connection address filter [127.0.0.1]:*.*.*.* *** Content Server URL Prefix The URL prefix specified here is used when generating HTML pages that refer to the contents of the weblayout directory within the installation. This prefix must be mapped in the web server Additional Document Directories section of the Content Management administration menu to the physical location of the weblayout directory. For example, "/idc/" would be used in your installation to refer to the URL http://ucm.company.com/idc which would be mapped in the web server to the physical location /oracle/ucm/server/weblayout. Web Server Relative Root [/idc/]: Enter the name of the local mail server. The server will contact this system to deliver email. Company Mail Server [mail]: Enter the e-mail address for the system administrator. Administrator E-Mail Address [sysadmin@mail]: *** Web Server Address Many generated HTML pages refer to the web server you are using. The address specified here will be used when generating those pages. The address should include the host and domain name in most cases. If your webserver is running on a port other than 80, append a colon and the port number. Examples: www.company.com, ucm.company.com:90 Web Server HTTP Address [yekki]:yekki.cn.oracle.com:7777 Enter the name for this instance. This name should be unique across your entire enterprise. It may not contain characters other than letters, numbers, and underscores. Server Instance Name [idc]: Enter a short label for this instance. This label is used on web pages to identify this instance. It should be less than 12 characters long. Server Instance Label [idc]: Enter a long description for this instance. Server Description [Content Server idc]: Web Server         *1. Apache          2. Sun ONE          3. Configure manually Choice? Please select a database from the list below to use with the Content Server. Content Server Database         *1. Oracle          2. Microsoft SQL Server 2005          3. Microsoft SQL Server 2000          4. Sybase          5. DB2          6. Custom JDBC settings          7. Skip database configuration Choice? Manually configure JDBC settings for this database          1. yes         *2. no Choice? Oracle Server Hostname [localhost]: Oracle Listener Port Number [1521]: *** Database User ID The user name is used to log into the database used by the content server. Oracle User [user]:YEKKI_OCSERVER *** Database Password The password is used to log into the database used by the content server. Oracle Password []:oracle Oracle Instance Name [ORACLE]:orcl Configure the JVM to find the JDBC driver in a specific jar file          1. yes         *2. no Choice? The installer can attempt to create the database tables or you can manually create them. If you choose to manually create the tables, you should create them now. Attempt to create database tables          1. yes         *2. no Choice? Select components to install.          1. 
ContentFolios: Collect related items in folios          2. Folders_g: Organize content into hierarchical folders          3. LinkManager8: Hypertext link management support          4. OracleTextSearch: External Oracle 11g database as search indexer support          5. ThreadedDiscussions: Threaded discussion management Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: 1,2,3,4,5         *1. ContentFolios: Collect related items in folios         *2. Folders_g: Organize content into hierarchical folders         *3. LinkManager8: Hypertext link management support         *4. OracleTextSearch: External Oracle 11g database as search indexer support         *5. ThreadedDiscussions: Threaded discussion management Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: F Checking configuration. . . Configuration OK. Review install settings. . . Content Server Core Folder: /opt/oracle/ucm/server Java virtual machine: Sun Java 1.5.0_11 JDK Content Server Native Vault Folder: /opt/oracle/ucm/server/vault/ Content Server Weblayout Folder: /opt/oracle/ucm/server/weblayout/ Proxy authentication through another server: no Install admin server: yes Web Browser Path: /usr/bin/firefox Content Server System locale: English-US Content Server Port: 4444 Admin Server Port: 4440 Incoming connection address filter: *.*.*.* Web Server Relative Root: /idc/ Company Mail Server: mail Administrator E-Mail Address: sysadmin@mail Web Server HTTP Address: yekki.cn.oracle.com:7777 Server Instance Name: idc Server Instance Label: idc Server Description: Content Server idc Web Server: Apache Content Server Database: Oracle Manually configure JDBC settings for this database: false Oracle Server Hostname: localhost Oracle Listener Port Number: 1521 Oracle User: YEKKI_OCSERVER Oracle Password: 6GP1gBgzSyKa4JW10U8UqqPznr/lzkNn/Ojf6M8GJ8I= Oracle Instance Name: orcl Configure the JVM to find the JDBC driver in a specific jar file: false Attempt to create database tables: no Components: ContentFolios,Folders_g,LinkManager8,OracleTextSearch,ThreadedDiscussions Proceed with install         *1. Proceed          2. Change configuration          3. Recheck the configuration          4. Abort installation Choice? Finished install type Install with warnings at 4/2/10 12:32 AM. Run Scripts -bash-3.2$ ./wc_contentserverconfig.sh /opt/oracle/ucm/server /mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/CS10gR35UpdateBundle.zip' Service 'DELETE_DOC' Extended Service 'DELETE_BYREV_REVISION' Extended Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/ContentAccess/ContentAccess-linux.zip' (internal)      04.02 00:40:38.019      main    updateDocMetaDefinitionV11: adding decimal column Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/Folders_g.zip' Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/FusionLibraries.zip' Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/JpsUserProvider.zip' Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/WcConfigure.zip' Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential WARNING: A password credential is expected; instead found . 
Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
WARNING: The credential with map JPS and key ldap.credential does not exist.
Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential
WARNING: A password credential is expected; instead found .
Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
WARNING: The credential with map JPS and key ldap.credential does not exist.
Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential
WARNING: A password credential is expected; instead found .
Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore
WARNING: The credential with map JPS and key ldap.credential does not exist.

Restart Content Server to apply updates.

Configuring Apache Web Server

Append the following line to httpd.conf:

include "/opt/oracle/ucm/server/data/users/apache22/apache.conf"

Configuring the Identity Store (Optional)

1. Stop Oracle Content Server and the Admin Server.

2. Update the Oracle Content Server's JPS configuration file, jps-config.xml:

a. Add a service instance:

<serviceInstance provider="idstore.ldap.provider" name="idstore.oid">
  <property name="subscriber.name" value="dc=cn,dc=oracle,dc=com"/>
  <property name="idstore.type" value="OID"/>
  <property name="security.principal.key" value="ldap.credential"/>
  <property name="security.principal.alias" value="JPS"/>
  <property name="ldap.url" value="ldap://yekki.cn.oracle.com:3060"/>
  <extendedProperty>
    <name>user.search.bases</name>
    <values>
      <value>cn=users,dc=cn,dc=oracle,dc=com</value>
    </values>
  </extendedProperty>
  <extendedProperty>
    <name>group.search.bases</name>
    <values>
      <value>cn=groups,dc=cn,dc=oracle,dc=com</value>
    </values>
  </extendedProperty>
  <property name="username.attr" value="uid"/>
  <property name="user.login.attr" value="uid"/>
  <property name="groupname.attr" value="cn"/>
</serviceInstance>

b. Ensure that the <jpsContext> entry in the jps-config.xml file refers to the new serviceInstance, that is, idstore.oid and not idstore.ldap:

<jpsContext name="default">
  <serviceInstanceRef ref="idstore.oid"/>

3. Run the new script to set up the credentials for idstore.oid in the credential store:

cd CONTENT_SERVER_HOME/custom/FusionLibraries/tools
-bash-3.2$ ./run_credtool.sh
Buildfile: ./../tools/credtool.xml
    [input] skipping input as property action has already been set.
    [input] Alias: [JPS]
    [input] Key: [ldap.credential]
    [input] User Name: cn=orcladmin
    [input] Password: welcome1
    [input] JPS Config: [/opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml]
manage-creds:
     [echo] @@@ Help: run 'ant manage-creds' command to see the detailed usage
     [java] Using default context in /opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml file for credential store.
     [java] Credential store location : /opt/oracle/ucm/server/config
     [java] Credential with map JPS key ldap.credential stored successfully!
     [java]
     [java]
     [java]     Credential for map JPS and key ldap.credential is:
     [java]             PasswordCredential name : cn=orcladmin
     [java]             PasswordCredential password : welcome1
BUILD SUCCESSFUL
Total time: 1 minute 27 seconds

Testing
1. Access http://yekki.cn.oracle.com:7777/idc.
2. Log in with an OID user, for example: orcladmin/welcome1.
3. Make sure your JpsUserProvider status is "good".

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
This morning we released a huge set of updates to Windows Azure. These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately. Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location, with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secure and can't be decrypted by anyone but you). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New->Data Services->Recovery Services->Backup Vault to do this. Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it. Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block-level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block-level changes and only transferring the changed blocks, reducing storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed-up data is also automatically checked for integrity once the backup is done. As a result, any corruption which may arise due to data transfer can be easily identified and fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I'm excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business-critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in Hyper-V Recovery Manager scenarios always travels over your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks, etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines. 
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation. Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM. You had to manually delete these yourself from the storage account. With today's update we've added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM. We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the "Delete" button. Warnings on Availability Sets with Only One Virtual Machine in Them. One of the nice features that Windows Azure Virtual Machines supports is the concept of "Availability Sets". An "availability set" allows you to define a tier/role (e.g., web frontends, database servers, etc.) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a highly available way. One issue we've seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server AlwaysOn. SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable this simply by checking the "Direct Server Return" checkbox. You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements. This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single sign-on (SSO) support against SaaS applications (including Office 365, Salesforce, Workday, Box, Google Apps, GitHub, etc.) as well as LOB-based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications. Earlier this month we published an update on dates and pricing for when the service will be released in general availability form. 
In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can single sign on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting. Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their Active Directory, ensuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, ensuring data security and minimizing IP loss. User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd-party SaaS applications like Box, Salesforce.com, GoToMeeting, Dropbox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them, and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device, anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure. Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in to and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked. 
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward. 4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory. As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, in the screenshot below you can see that I have a "scottgu" directory that my subscriptions are hosted within. Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign into Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com-based Microsoft ID, which is now mapped to a "scottgu" active directory that was created for me. By default everything will continue to work just like it used to before. Manage your Directory. You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab in the left-hand side of the portal. This will list all of the directories in your account. Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it. You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g., mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab. Note that users within a directory by default do not have admin rights to log in to or manage Windows Azure-based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. 
Sign-In Integration within Visual Studio. If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g., Windows Live ID) or an Active Directory-based Organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using. No downloading of management certificates required. All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories. If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it. If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription. Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account. Windows Azure SDK 2.2. Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support, Integrated Windows Azure Sign-In support within Visual Studio, Remote Debugging Cloud Services with Visual Studio, Firewall Management support within Visual Studio for SQL Databases, Visual Studio 2013 RTM VM Images for MSDN Subscribers, Windows Azure Management Libraries for .NET, and Updated Windows Azure PowerShell Cmdlets and ScriptCenter. I'll post a follow-up blog shortly with more details about all of the above. 
Additional Updates. In addition to the above enhancements, today's release also includes a number of additional improvements: AutoScale: Richer time- and date-based scheduling support (set different rules on different dates). AutoScale: Ability to scale to zero Virtual Machines (very useful for Dev/Test scenarios). AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules. Operation Logs: Auditing support for Service Bus management operations. Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary. Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • An Introduction to Meteor

    - by Stephen.Walther
    The goal of this blog post is to give you a brief introduction to Meteor which is a framework for building Single Page Apps. In this blog entry, I provide a walkthrough of building a simple Movie database app. What is special about Meteor? Meteor has two jaw-dropping features: Live HTML – If you make any changes to the HTML, CSS, JavaScript, or data on the server then every client shows the changes automatically without a browser refresh. For example, if you change the background color of a page to yellow then every open browser will show the new yellow background color without a refresh. Or, if you add a new movie to a collection of movies, then every open browser will display the new movie automatically. With Live HTML, users no longer need a refresh button. Changes to an application happen everywhere automatically without any effort. The Meteor framework handles all of the messy details of keeping all of the clients in sync with the server for you. Latency Compensation – When you modify data on the client, these modifications appear as if they happened on the server without any delay. For example, if you create a new movie then the movie appears instantly. However, that is all an illusion. In the background, Meteor updates the database with the new movie. If, for whatever reason, the movie cannot be added to the database then Meteor removes the movie from the client automatically. Latency compensation is extremely important for creating a responsive web application. You want the user to be able to make instant modifications in the browser and the framework to handle the details of updating the database without slowing down the user. Installing Meteor Meteor is licensed under the open-source MIT license and you can start building production apps with the framework right now. Be warned that Meteor is still in the “early preview” stage. It has not reached a 1.0 release. According to the Meteor FAQ, Meteor will reach version 1.0 in “More than a month, less than a year.” Don’t be scared away by that. You should be aware that, unlike most open source projects, Meteor has financial backing. The Meteor project received an $11.2 million round of financing from Andreessen Horowitz. So, it would be a good bet that this project will reach the 1.0 mark. And, if it doesn’t, the framework as it exists right now is still very powerful. Meteor runs on top of Node.js. You write Meteor apps by writing JavaScript which runs both on the client and on the server. You can build Meteor apps on Windows, Mac, or Linux (Although the support for Windows is still officially unofficial). If you want to install Meteor on Windows then download the MSI from the following URL: http://win.meteor.com/ If you want to install Meteor on Mac/Linux then run the following CURL command from your terminal: curl https://install.meteor.com | /bin/sh Meteor will install all of its dependencies automatically including Node.js. However, I recommend that you install Node.js before installing Meteor by installing Node.js from the following address: http://nodejs.org/ If you let Meteor install Node.js then Meteor won’t install NPM which is the standard package manager for Node.js. If you install Node.js and then you install Meteor then you get NPM automatically. Creating a New Meteor App To get a sense of how Meteor works, I am going to walk through the steps required to create a simple Movie database app. Our app will display a list of movies and contain a form for creating a new movie. 
The first thing that we need to do is create our new Meteor app. Open a command prompt/terminal window and execute the following command:

meteor create MovieApp

After you execute this command, you should see something like the following. Follow the instructions: execute cd MovieApp to change to your MovieApp directory, and run the meteor command. Executing the meteor command starts Meteor on port 3000. Open up your favorite web browser and navigate to http://localhost:3000 and you should see the default Meteor Hello World page. Open up your favorite development environment to see what the Meteor app looks like. Open the MovieApp folder which we just created. Here's what the MovieApp looks like in Visual Studio 2012. Notice that our MovieApp contains three files named MovieApp.css, MovieApp.html, and MovieApp.js. In other words, it contains a Cascading Style Sheet file, an HTML file, and a JavaScript file. Just for fun, let's see how the Live HTML feature works. Open up multiple browsers and point each browser at http://localhost:3000. Now, open the MovieApp.html page and modify the text "Hello World!" to "Hello Cruel World!" and save the change. The text in all of the browsers should update automatically without a browser refresh. Pretty amazing, right? Controlling Where JavaScript Executes. You write a Meteor app using JavaScript. Some of the JavaScript executes on the client (the browser), some of the JavaScript executes on the server, and some of the JavaScript executes in both places. For a super simple app, you can use the Meteor.isServer and Meteor.isClient properties to control where your JavaScript code executes. For example, the following JavaScript contains a section of code which executes on the server and a section of code which executes in the browser:

if (Meteor.isClient) {
  console.log("Hello Browser!");
}

if (Meteor.isServer) {
  console.log("Hello Server!");
}

console.log("Hello Browser and Server!");

When you run the app, the message "Hello Browser!" is written to the browser JavaScript console. The message "Hello Server!" is written to the command/terminal window where you ran Meteor. Finally, the message "Hello Browser and Server!" is executed on both the browser and the server, and the message appears in both places. For simple apps, using Meteor.isClient and Meteor.isServer to control where JavaScript executes is fine. For more complex apps, you should create separate folders for your server and client code. Here are the folders which you can use in a Meteor app:
· client – This folder contains any JavaScript which executes only on the client.
· server – This folder contains any JavaScript which executes only on the server.
· common – This folder contains any JavaScript code which executes on both the client and server.
· lib – This folder contains any JavaScript files which you want to execute before any other JavaScript files.
· public – This folder contains static application assets such as images.
For the Movie App, we need the client, server, and common folders. Delete the existing MovieApp.js, MovieApp.html, and MovieApp.css files. We will create new files in the right locations later in this walkthrough. Combining HTML, CSS, and JavaScript Files. Meteor combines all of your JavaScript files, all of your Cascading Style Sheet files, and all of your HTML files automatically. If you want to create one humongous JavaScript file which contains all of the code for your app, then that is your business. 
However, if you want to build a more maintainable application, then you should break your JavaScript files into many separate JavaScript files and let Meteor combine them for you. Meteor also combines all of your HTML files into a single file. HTML files are allowed to have the following top-level elements:
<head> — All <head> sections are combined into a single <head> and served with the initial page load.
<body> — All <body> sections are combined into a single <body> and served with the initial page load.
<template> — All <template> files are compiled into JavaScript templates.
Because you are creating a single page app, a Meteor app typically will contain a single HTML file for the <head> and <body> content. However, a Meteor app typically will contain several template files. In other words, all of the interesting stuff happens within the <template> files. Displaying a List of Movies. Let me start building the Movie App by displaying a list of movies. In order to display a list of movies, we need to create the following four files:
· client\movies.html – Contains the HTML for the <head> and <body> of the page for the Movie app.
· client\moviesTemplate.html – Contains the HTML template for displaying the list of movies.
· client\movies.js – Contains the JavaScript for supplying data to the moviesTemplate.
· server\movies.js – Contains the JavaScript for seeding the database with movies.
After you create these files, your folder structure should look like this. Here's what the client\movies.html file looks like:

<head>
  <title>My Movie App</title>
</head>
<body>
  <h1>Movies</h1>
  {{> moviesTemplate }}
</body>

Notice that it contains <head> and <body> top-level elements. The <body> element includes the moviesTemplate with the syntax {{> moviesTemplate }}. The moviesTemplate is defined in the client/moviesTemplate.html file:

<template name="moviesTemplate">
  <ul>
    {{#each movies}}
      <li>{{title}}</li>
    {{/each}}
  </ul>
</template>

By default, Meteor uses the Handlebars templating library. In the moviesTemplate above, Handlebars is used to loop through each of the movies using {{#each}}…{{/each}} and display the title for each movie using {{title}}. The client\movies.js JavaScript file is used to bind the moviesTemplate to the Movies collection on the client. Here's what this JavaScript file looks like:

// Declare client Movies collection
Movies = new Meteor.Collection("movies");

// Bind moviesTemplate to Movies collection
Template.moviesTemplate.movies = function () {
  return Movies.find();
};

The Movies collection is a client-side proxy for the server-side Movies database collection. Whenever you want to interact with the collection of Movies stored in the database, you use the Movies collection instead of communicating back to the server. The moviesTemplate is bound to the Movies collection by assigning a function to the Template.moviesTemplate.movies property. The function simply returns all of the movies from the Movies collection. The final file which we need is the server-side server\movies.js file:

// Declare server Movies collection
Movies = new Meteor.Collection("movies");

// Seed the movie database with a few movies
Meteor.startup(function () {
  if (Movies.find().count() == 0) {
    Movies.insert({ title: "Star Wars", director: "Lucas" });
    Movies.insert({ title: "Memento", director: "Nolan" });
    Movies.insert({ title: "King Kong", director: "Jackson" });
  }
});

The server\movies.js file does two things. First, it declares the server-side Meteor Movies collection. 
When you declare a server-side Meteor collection, a collection is created in the MongoDB database associated with your Meteor app automatically (Meteor uses MongoDB as its database automatically). Second, the server\movies.js file seeds the Movies collection (MongoDB collection) with three movies. Seeding the database gives us some movies to look at when we open the Movies app in a browser. Creating New Movies. Let me modify the Movies Database App so that we can add new movies to the database of movies. First, I need to create a new template file – named client\movieForm.html – which contains an HTML form for creating a new movie:

<template name="movieForm">
  <fieldset>
    <legend>Add New Movie</legend>
    <form>
      <div>
        <label>
          Title: <input id="title" />
        </label>
      </div>
      <div>
        <label>
          Director: <input id="director" />
        </label>
      </div>
      <div>
        <input type="submit" value="Add Movie" />
      </div>
    </form>
  </fieldset>
</template>

In order for the new form to show up, I need to modify the client\movies.html file to include the movieForm.html template. Notice that I added {{> movieForm }} to the client\movies.html file:

<head>
  <title>My Movie App</title>
</head>
<body>
  <h1>Movies</h1>
  {{> moviesTemplate }}
  {{> movieForm }}
</body>

After I make these modifications, our Movie app will display the form. The next step is to handle the submit event for the movie form. Below, I've modified the client\movies.js file so that it contains a handler for the submit event raised when you submit the form contained in the movieForm.html template:

// Declare client Movies collection
Movies = new Meteor.Collection("movies");

// Bind moviesTemplate to Movies collection
Template.moviesTemplate.movies = function () {
  return Movies.find();
};

// Handle movieForm events
Template.movieForm.events = {
  'submit': function (e, tmpl) {
    // Don't postback
    e.preventDefault();

    // Create the new movie
    var newMovie = {
      title: tmpl.find("#title").value,
      director: tmpl.find("#director").value
    };

    // Add the movie to the db
    Movies.insert(newMovie);
  }
};

The Template.movieForm.events property contains an event map which maps event names to handlers. In this case, I am mapping the form submit event to an anonymous function which handles the event. In the event handler, I am first preventing a postback by calling e.preventDefault(). This is a single page app; no postbacks are allowed! Next, I am grabbing the new movie from the HTML form. I'm taking advantage of the template find() method to retrieve the form field values. Finally, I am calling Movies.insert() to insert the new movie into the Movies collection. Here, I am explicitly inserting the new movie into the client-side Movies collection. Meteor inserts the new movie into the server-side Movies collection behind the scenes. When Meteor inserts the movie into the server-side collection, the new movie is added to the MongoDB database associated with the Movies app automatically. If server-side insertion fails for whatever reason – for example, your internet connection is lost – then Meteor will remove the movie from the client-side Movies collection automatically. In other words, Meteor takes care of keeping the client Movies collection and the server Movies collection in sync. If you open multiple browsers, and add movies, then you should notice that all of the movies appear in all of the open browsers automatically. You don't need to refresh individual browsers to update the client-side Movies collection. Meteor keeps everything synchronized between the browsers and server for you. 
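To see how little code this reactivity requires, here is a minimal sketch of one possible refinement — my own illustration, not part of the original walkthrough — that returns the movies sorted by title. Because find() returns a reactive cursor, every connected browser re-renders the list automatically whenever the collection changes:

// client\movies.js — a sketch, assuming the same Movies collection as above
Movies = new Meteor.Collection("movies");

// Return a reactive cursor sorted alphabetically by title. Any insert,
// update, or remove on the collection re-runs this function and re-renders
// the moviesTemplate in every open browser, with no refresh needed.
Template.moviesTemplate.movies = function () {
  return Movies.find({}, { sort: { title: 1 } });
};

No other change is needed; the template markup stays exactly as shown earlier.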
Removing the Insecure Module. To make it easier to develop and debug a new Meteor app, by default you can modify the database directly from the client. For example, you can delete all of the data in the database by opening up your browser console window and executing multiple Movies.remove() commands. Obviously, enabling anyone to modify your database from the browser is not a good idea in a production application. Before you make a Meteor app public, you should first run the meteor remove insecure command from a command/terminal window:

meteor remove insecure

Running meteor remove insecure removes the insecure package from the Movie app. Unfortunately, it also breaks our Movie app. We'll get an "Access denied" error in our browser console whenever we try to insert a new movie. No worries. I'll fix this issue in the next section. Creating Meteor Methods. By taking advantage of Meteor Methods, you can create methods which can be invoked on both the client and the server. Meteor Methods let you: 1. Perform form validation on both the client and the server. For example, even if an evil hacker bypasses your client code, you can still prevent the hacker from submitting an invalid value for a form field by enforcing validation on the server. 2. Simulate database operations on the client but actually perform the operations on the server. Let me show you how we can modify our Movie app so it uses Meteor Methods to insert a new movie. First, we need to create a new file named common\methods.js which contains the definition of our Meteor Methods:

Meteor.methods({
  addMovie: function (newMovie) {
    // Perform form validation
    if (newMovie.title == "") {
      throw new Meteor.Error(413, "Missing title!");
    }
    if (newMovie.director == "") {
      throw new Meteor.Error(413, "Missing director!");
    }

    // Insert movie (simulate on client, do it on server)
    return Movies.insert(newMovie);
  }
});

The addMovie() method is called from both the client and the server. This method does two things. First, it performs some basic validation. If you don't enter a title or you don't enter a director then an error is thrown. Second, the addMovie() method inserts the new movie into the Movies collection. When called on the client, inserting the new movie into the Movies collection just updates the collection. When called on the server, inserting the new movie into the Movies collection causes the database (MongoDB) to be updated with the new movie. You must add the common\methods.js file to the common folder so it will get executed on both the client and the server. Our folder structure now looks like this. We actually call the addMovie() method within our client code in the client\movies.js file. Here's what the updated file looks like:

// Declare client Movies collection
Movies = new Meteor.Collection("movies");

// Bind moviesTemplate to Movies collection
Template.moviesTemplate.movies = function () {
  return Movies.find();
};

// Handle movieForm events
Template.movieForm.events = {
  'submit': function (e, tmpl) {
    // Don't postback
    e.preventDefault();

    // Create the new movie
    var newMovie = {
      title: tmpl.find("#title").value,
      director: tmpl.find("#director").value
    };

    // Add the movie to the db
    Meteor.call(
      "addMovie",
      newMovie,
      function (err, result) {
        if (err) {
          alert("Could not add movie " + err.reason);
        }
      }
    );
  }
};

The addMovie() method is called – on both the client and the server – by calling the Meteor.call() method. This method accepts the following parameters:
· The string name of the method to call.
· The data to pass to the method (you can actually pass multiple params for the data if you like).
· A callback function to invoke after the method completes.
In the JavaScript code above, the addMovie() method is called with the new movie retrieved from the HTML form. The callback checks for an error. If there is an error then the error reason is displayed in an alert (please don't use alerts for validation errors in a production app because they are ugly!). Summary. The goal of this blog post was to provide you with a brief walkthrough of a simple Meteor app. I showed you how you can create a simple Movie Database app which enables you to display a list of movies and create new movies. I also explained why it is important to remove the Meteor insecure package from a production app. I showed you how to use Meteor Methods to insert data into the database instead of doing it directly from the client. I'm very impressed with the Meteor framework. Support for Live HTML and Latency Compensation is required for many real-world Single Page Apps, but implementing these features by hand is not easy. Meteor makes it easy.
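Although the walkthrough stops at inserts, the same Meteor Method pattern extends naturally to other operations once the insecure package is removed. As a hedged sketch — the removeMovie name and its validation are my own illustration, not from the post — a delete could be validated and latency-compensated the same way addMovie() is:

// common\methods.js — hypothetical companion to addMovie()
Meteor.methods({
  removeMovie: function (movieId) {
    // Validate on both client and server, just like addMovie()
    if (!movieId) {
      throw new Meteor.Error(413, "Missing movie id!");
    }
    // Simulated instantly on the client, performed for real on the server
    return Movies.remove(movieId);
  }
});

// Called from client code (movieId would come from the clicked list item),
// mirroring the Meteor.call() pattern shown above:
Meteor.call("removeMovie", movieId, function (err) {
  if (err) { alert("Could not remove movie " + err.reason); }
});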

    Read the article

  • Toorcon14

    - by danx
Toorcon 2012 Information Security Conference, San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012. It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I reviewed only some of the talks below—those I attended and that interested me. MalAndroid—the Crux of Android Infections, Aditya K. Sood Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro Privacy at the Handset: New FCC Rules?, Valkyrie Hacking Measured Boot and UEFI, Dan Griffin You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold Accessibility and Security, Anna Shubina Stop Patching, for Stronger PCI Compliance, Adam Brand McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall. MalAndroid—the Crux of Android Infections. Aditya K. Sood, IOActive, Michigan State PhD candidate. Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, and mobility. Android had no "proper" encryption before Android 3.0. There is no built-in protection against social engineering and web tricks. Alternative Android app markets are unsafe—simply visiting some markets can infect Android. Aditya classified Android malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location. Type K—Kernel. Exploits underlying Linux libraries or the kernel. Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors. What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded"—that is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime. 
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons, then deleting themselves. For example, the DKF Bootkit (China). Android app problems: no code signing (everything is self-signed); native code execution; the permission sandbox is all or none; alternate marketplaces; no robust Android malware detection at the network level; and a delayed patch process. Programming Weird Machines with ELF Metadata. Rebecca "bx" Shapiro, Dartmouth College, NH, https://github.com/bx/elf-bf-tools, @bxsays on Twitter. Definitions: "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap. Bx then talked about using ELF metadata as an (unintended) "weird machine". Some ELF background: A compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Table—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory). For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions (add, move, jump if not 0 (jnz)); symbol table entries act as "registers", with the symbol table value as the "contents"; immediate values are constants; direct values are addresses (e.g., 0xdeadbeef). A move instruction is a relocation table entry, an add instruction is a relocation table "addend" entry, and a jnz instruction takes multiple relocation table entries. The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and three registers. Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so it remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

$ whoami
bx
$ ping localhost -t backdoor.sh    # executes backdoor
$ whoami
root
$

The modified code increased from 285948 bytes to 290209 bytes. 
A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data. Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate). Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon suggests, as an aggregation workaround, "recollating" data with other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result, iPhones may be more regulated. Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control, for those who want it, by being held liable and responsible for breaches on their watch. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon; enforcement will be a problem, but consumers will still see some benefit. Hacking Measured Boot and UEFI. Dan Griffin, JWSecure, Inc., Seattle, @JWSdan. Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM is a hardware security device to store hashes and hardware-protected keys. "Secure boot" can control at the firmware level which boot images can boot. "Measured boot" is an OS feature that tracks hashes (from the BIOS, boot loader, kernel, and early drivers). "Remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source implementation of UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines. UEFI Weaknesses. 
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: it assumes the user is an ally; it trusts the TPM implicitly, and that it is attached to the computer; the hibernate file is unprotected (disk encryption protects against this); protection is migrating from hardware to firmware; there are delays in patching and whitelist updates; and will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support)? You Can't Buy Security: Building the Open Source InfoSec Program. Boris Sverdlik, ISDPodcast.com co-host. Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security—IT has a conflict of interest. There's no magic bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—soft skills. Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team), and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., SLE = AV × EF). Learn the different business units and legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn what the sensitive information is (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security. Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access: MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc. Application security: Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls. Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe." Operational controls: Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free. Security awareness: The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox exercise, and have it integrated into the culture. 
- Pen test by crowdsourcing—test with logging.
COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project

What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat; Jason Leopold, Truthout.org

The difference between hackers and investigative journalists: for hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills. J-School in 60 seconds—the generic formula: a person or issue of public interest, new info, or a new angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists are becoming extremely aware of hackers, with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch). Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow; the California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. You often need to sue (especially the FBI). CPRA is faster, and requests can be vague. And there are dumps and leaks (a la Wikileaks). Journalists want: leads; protecting themselves and their sources; and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College

Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers, for our purpose, to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first, and maybe the only, sanity check done with parsing. Its authors have no choice, because someone may want to read what you write. Google, for example, is very particular about which web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring a mouseover to display content. PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc.—15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all. The "halting problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says that complicated data formats are hard to parse, and that the problem cannot be solved in general because of the halting problem. (A tiny demonstration follows below.)
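(A tiny illustration of that parsing point—mine rather than the speaker's: regular expressions recognize regular languages, so they cannot track nesting depth, which is why a regex "checker" for a recursive format like HTML is a false solution. A sketch in Python:)

import re

# A naive "validator" that accepts anything containing <b>...</b>.
# Regular expressions can't count nesting, so this check is strictly
# weaker than the recursive grammar it pretends to enforce.
BOLD_RE = re.compile(r"<b>.*</b>", re.DOTALL)

well_formed = "<b>hello</b>"
malformed = "<b><b>hello</b>"   # unbalanced: the outer <b> never closes

for doc in (well_formed, malformed):
    verdict = "accepted" if BOLD_RE.search(doc) else "rejected"
    print(f"{doc!r}: {verdict}")

# Both documents are "accepted", even though the second is malformed --
# exactly the checker/format mismatch langsec warns about.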
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust." Not much help, though, except for "Robust," but here are some gems:
- all information should be parsable (paraphrasing)
- if not parsable, it cannot be converted to alternate formats
- maximize compatibility in new document formats
Executable web pages are bad for security and accessibility. They say it's for a better web experience, but is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions:
- accessibility and security problems come from the same source
- expose the "better user experience" myth
- keep your corner of the Internet parsable
- remember the halting problem—recognize false solutions (checking and verifying tools)

Stop Patching, for Stronger PCI Compliance Adam Brand, Protiviti @adamrbrand, http://www.picfun.com/

Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is the overuse of vulnerability scanners—it gives a warm fuzzy and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—the more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports). Patching myths:
- Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
- Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead.
- Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities.
Adam says good recommendations come from NIST 800-40. Use sane patching and focus on what's really important. From NIST 800-40:
- Proactive: use a proactive vulnerability management process: change control, configuration management, file integrity monitoring.
- Monitor: start with the NVD and other vulnerability alerts, not scanner results.
- Evaluate: public-facing system? workstation? internal server? (risk ranking)
- Decide: on action and timeline.
- Test: pre-test patches (stability, functionality, rollback) for change control.
- Install: notify, change control, tickets.

McAfee Secure & Trustmarks — a Hacker's Best Friend Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

"McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes McAfee's remote scanning. The problem is that the removal of a trustmark acts as a flag that you're vulnerable, and the status change is easy to spot by watching the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, the site is no longer certified. Their scripts take deltas of the scans to see what changed daily (a sketch of the same idea appears below). The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing when you're vulnerable.
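(A minimal sketch of that delta-scanning idea—my Python, not Jay and Shane's Perl, and the seal URL pattern is hypothetical. It fetches each site's seal daily and flags the day the image bytes change, e.g. when the seal is swapped for a 1x1 placeholder:)

import hashlib
import json
import urllib.request

# Hypothetical seal URL pattern; real trustmark URLs differ per vendor.
SEAL_URL = "https://seals.trustmark.example.com/seal/{domain}.gif"
STATE_FILE = "seal_hashes.json"

def fetch_seal(domain):
    with urllib.request.urlopen(SEAL_URL.format(domain=domain), timeout=10) as r:
        return r.read()

def check(domains):
    try:
        with open(STATE_FILE) as f:
            previous = json.load(f)
    except FileNotFoundError:
        previous = {}
    for domain in domains:
        digest = hashlib.sha256(fetch_seal(domain)).hexdigest()
        if domain in previous and previous[domain] != digest:
            # Seal bytes changed -- possibly replaced by a 1x1 image,
            # i.e. the site may have just failed its scan.
            print(f"[!] seal changed for {domain}")
        previous[domain] = digest
    with open(STATE_FILE, "w") as f:
        json.dump(previous, f)

if __name__ == "__main__":
    check(["shop.example.com"])    # run once a day, e.g. from cron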

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

Introduction to Toorcon 15 (2013)
A Tale of One Software Bypass of MS Windows 8 Secure Boot
Breaching SSL, One Byte at a Time
Running at 99%: Surviving an Application DoS
Security Response in the Age of Mass Customized Attacks
x86 Rewriting: Defeating RoP and other Shinanighans
Clowntown Express: interesting bugs and running a bug bounty program
Active Fingerprinting of Encrypted VPNs
Making Attacks Go Backwards
Mask Your Checksums—The Gorry Details
Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Introduction to Toorcon 15 (2013)

Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

A Tale of One Software Bypass of MS Windows 8 Secure Boot Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak

Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

MS Windows 8 Secure Boot Overview: UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. Replacing the bootloader is trivial. Today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s); the OS loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx): X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) variables allow adding/deleting keys (they can't be accessed once the OS starts—which uses Run-Time (RT) variables). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. Key variables:
- SecureBoot — enable/disable image signature checks
- SetupMode — update keys, self-signed keys, and secure boot variables
- CustomMode — allows updating keys
(A sketch for reading the first two from Linux appears below.) Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.

Attacking MS Windows 8 Secure Boot: Secure Boot does NOT protect from physical access—it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
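(A practical aside from me, not the speakers: on a Linux box booted under UEFI you can read the SecureBoot and SetupMode variables straight out of efivarfs. Each variable file starts with four bytes of attribute flags, followed by the data. A minimal sketch:)

import os

# GUID of the EFI global variable namespace, where SecureBoot and
# SetupMode live in efivarfs.
EFI_GLOBAL_GUID = "8be4df61-93ca-11d2-aa0d-00e098032b8c"
EFIVARS = "/sys/firmware/efi/efivars"

def read_efi_var(name):
    path = os.path.join(EFIVARS, f"{name}-{EFI_GLOBAL_GUID}")
    with open(path, "rb") as f:
        return f.read()[4:]          # skip the 4 attribute-flag bytes

if __name__ == "__main__":
    # SecureBoot == 1 means signature checks are enforced; SetupMode == 1
    # means the platform is in setup mode (keys updatable, Secure Boot
    # effectively off) -- the state the PK corruption attack described
    # below tries to force.
    for var in ("SecureBoot", "SetupMode"):
        try:
            print(f"{var} = {read_efi_var(var)[0]}")
        except FileNotFoundError:
            print(f"{var}: not present (legacy BIOS boot?)")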
For that, implementations must:
- allow only signed UEFI firmware updates
- protect UEFI firmware from direct modification in flash memory
- protect the firmware update components
- program the SPI controller securely
- protect Secure Boot policy settings in NVRAM
- protect the runtime API
- disable the Compatibility Support Module, which allows unsigned legacy boot
One attack is to corrupt the Platform Key (PK) EFI root certificate variable in SPI flash: if the PK is not found, the firmware enters setup mode with Secure Boot turned off. The TPM can be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, but they found a bug that they can exploit from user mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates; protect UEFI firmware in ROM; protect the EFI variable store in ROM.

Breaching SSL, One Byte at a Time Angelo Prado and Yoel Gluck, Salesforce.com

CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the headers. CRIME sends requests with every possible character and measures the ciphertext length; it looks for the plaintext that compresses the most, recovering the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy; Huffman coding replaces common byte sequences with shorter codes. US-CERT thought the SSL compression problem was fixed, but it wasn't; the speakers convinced CERT of this, and a CVE was issued.

BREACH (breachattack.com) exploits the SSL response body (Accept-Encoding, Content-Encoding). It takes advantage of the fact that the response body is still compressed at the HTTP layer (gzip), even where TLS-level compression has been disabled. BREACH needs fairly "stable" pages that are static for ~30 seconds, and it needs attacker-supplied content (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret. Compression lets it guess the contents one byte at a time: for example, with "Supersecret" already in the body, the wrong guess "SupersecreX" compresses by 10 bytes while the correct guess "Supersecret" compresses by 11, so each character can be found by trying every candidate and keeping the shortest ciphertext. To start the guessing, BREACH needs at least three known initial characters in the response sequence. The compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:
- lookahead (guess 2 or 3 characters at a time instead of 1)—expensive
- roll back to the last known conflict
- check the compression ratio
- brute-force the first 3 "bootstrap" characters, if needed (expensive)
- block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size
Mitigations: for length, use variable padding; for secrets, use dynamic CSRF tokens per request, change the secret over time, and separate the secret out to input-less servlets. Future work: understanding DEFLATE/GZIP further, and HTTPS extensions. (A toy demonstration of the underlying length leak follows.)
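(The length leak is easy to reproduce offline. A toy sketch from me—zlib standing in for the DEFLATE the server applies, with a made-up token—that recovers a secret one byte at a time purely from compressed lengths; the real attack has to cope with the tie/noise roadblocks noted above:)

import string
import zlib

# A "response body" the attacker partially controls: the guess is
# reflected into the same body as the secret token being stolen.
SECRET = "csrf=9f8a2c"

def oracle(reflected):
    # Length of the compressed response: all a network observer sees.
    body = f"<html>{reflected} ... {SECRET} ...</html>".encode()
    return len(zlib.compress(body, 9))

def recover(prefix="csrf=", alphabet=string.ascii_lowercase + string.digits):
    known = prefix
    while len(known) < len(SECRET):
        # The guess extending the true secret matches a longer LZ77
        # string, so it compresses best (shortest output wins).
        known += min(alphabet, key=lambda c: oracle(known + c))
    return known

print(recover())   # prints "csrf=9f8a2c", recovered byte by byte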
Running at 99%: Surviving an Application DoS Ryan Huber, Risk I/O

Ryan first discussed various ways to do a denial-of-service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx. How to identify malicious hosts:
- short, sudden web requests
- an obvious user-agent (curl, python)
- the same URL requested repeatedly
- no web page referer (not normal)
- hidden links: hide a link and see if a bot follows it
- restricted access if not your geo IP (unless the website is global)
- missing common headers in the request
- regular timing
- IP first seen at the beginning of the attack
- request counts per host (usually a very large number)
(A toy version of a few of these heuristics appears below.) Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer: Bouncer is software written by Ryan around netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it's not nice about it—no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to get clean data for Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzipped collection data, documentation, a consumer library, multitasking, and logging of destroyed connections. Takeaways: DoS mitigation is easier with a complete picture; Bouncer is designed to make it easier to detect and defend against DoS—it is not a complete cure.
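(A toy version of a few of those indicators—mine, not Bouncer; the log pattern and threshold are made up, and any single signal is weak on its own:)

import re
from collections import Counter

# Common-log-format-ish line:
# host ... "GET /path HTTP/1.1" status size "referer" "user-agent"
LOG_RE = re.compile(
    r'(?P<host>\S+) .*?"(?:GET|POST) (?P<path>\S+)[^"]*" \d+ \d+ '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"')

SCRIPTED_AGENTS = ("curl", "python", "wget", "libwww")
REQ_THRESHOLD = 500              # per-host request count; made-up value

def suspicious_hosts(lines):
    per_host = Counter()
    flagged = {}
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        per_host[m["host"]] += 1
        if any(tool in m["agent"].lower() for tool in SCRIPTED_AGENTS):
            flagged.setdefault(m["host"], set()).add("scripted user-agent")
        if m["referer"] in ("", "-"):
            flagged.setdefault(m["host"], set()).add("no referer")
    for host, count in per_host.items():
        if count > REQ_THRESHOLD:
            flagged.setdefault(host, set()).add(f"{count} requests")
    return flagged

with open("access.log") as f:
    for host, reasons in sorted(suspicious_hosts(f).items()):
        print(host, "->", ", ".join(sorted(reasons)))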
Security Response in the Age of Mass Customized Attacks Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School: differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world; or you design your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware, they infer edicts to:
- support all versions of Reader
- support all versions of Windows
- support all versions of Flash
- support all browsers
- write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)
Exploits have "loose coupling" across multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is mass malware with mass zero-day attacks, resulting in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans Richard Wartell

The attack vector being addressed here: first, some malware causes a buffer overflow. The attacker has no access to the program itself, only to its input, and overflows code onto the stack. Later, the stack became non-executable; the workaround malware used was to write a bogus return address onto the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-oriented Programming: RoP attacks use the program's own code, writing return addresses onto the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes where modules load, but not the layout of the "gadgets" within them. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory: STIR writes new code sections with copies of the basic blocks in randomized locations, the old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of the "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is rewriting code to enforce security policies—for example, don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program Collin Greene, Facebook

Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing security on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and they are paid only for success. A bug bounty is a better use of a fixed amount of time and money than just code review or static code analysis. The bounty program started in July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else)—the top paid submitters earn 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also surface bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs Anna Shubina, Dartmouth Institute for Security, Technology, and Society

(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if you are far away from the network. More exploration is needed of how TCP works in real life with respect to timing.

Making Attacks Go Backwards FuzzyNop, Mandiant

This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for the low-hanging fruit first:
- malware "hidden" in the temp, recycle, or root directories
- creation of unnamed scheduled tasks
- obvious names of files and syscalls ("ClearEventLog")
- uncleared event logs; clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system
Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), as do timestamp printf strings. Look for these in files (a quick scanning sketch follows below). Presence is not proof they are used; absence is not proof they are not used. Java exploits: you can parse a jar file with idxparser.py and decompile the Java files. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware; malware also typically needs command and control.
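(Those tip-off strings are easy to sweep for in bulk. A quick sketch from me, with an illustrative marker list—and, per the caveat above, a hit proves nothing by itself:)

import os

# Strings a custom key logger often embeds so non-printable keys can
# be logged readably, plus a printf-style timestamp format. Presence
# is not proof of malware; absence is not proof of its absence.
MARKERS = [b"[Ctrl]", b"[RightShift]", b"[Backspace]", b"%02d:%02d:%02d"]

def scan_tree(root="."):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue
            hits = [m.decode() for m in MARKERS if m in data]
            if hits:
                print(f"{path}: {', '.join(hits)}")

if __name__ == "__main__":
    scan_tree(".")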
Application of Artificial Intelligence in Ad-Hoc Static Code Analysis John Ashaman, Security Innovation

Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard—for example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information, see his posts on the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details Eric (XlogicX) Davisson

Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover the same data, so you can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If you get hundreds of possible results (16 possibilities for each unknown masked nibble), you can do other things to narrow them down, such as looking at the packet contents for domain or geo information. With hundreds of results, you can import them as CSV into a spreadsheet and correlate with geo data to see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached: he was able to find the exact IP address, given the geo and university of the sender. The point is, if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if you do it at all, so as not to create a false impression of security. (A small brute-force sketch of the checksum leak follows.)
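(The leak is just arithmetic over the header. A small sketch from me—all field values made up—that brute-forces two masked source-IP octets against an unmasked IP header checksum; since the two octets share one 16-bit word of the one's-complement sum, the checksum pins them down almost uniquely:)

import struct

def ipv4_checksum(header):
    # One's-complement sum of 16-bit words, checksum field zeroed.
    total = 0
    for (word,) in struct.iter_unpack("!H", header):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

# Made-up 20-byte IPv4 header: src 10.1.x.y, dst 192.0.2.7. A "masked"
# report hides the last two source octets but leaves the checksum.
def build_header(oct3, oct4, checksum=0):
    return struct.pack("!BBHHHBBH4B4B",
                       0x45, 0, 40, 0x1234, 0, 64, 6, checksum,
                       10, 1, oct3, oct4, 192, 0, 2, 7)

leaked = ipv4_checksum(build_header(137, 201))   # checksum left in report

# Attacker side: try all 65,536 candidates, keep the consistent ones.
candidates = [(a, b) for a in range(256) for b in range(256)
              if ipv4_checksum(build_header(a, b)) == leaked]
print(len(candidates), "candidate(s);", (137, 201) in candidates)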
Adventures with weird machines thirty years after "Reflections on Trusting Trust" Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs," bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author—can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not?
- Input is not well defined/recognized (the code's assumptions about "checked" input will be violated—a bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive.
- Input is well formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
- Input is seen differently by different pieces of the program or toolchain.
Any input is a program: input "executes" on the input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler is either a "recognizer" for the inputs as a well-defined language (see langsec.org) or a "virtual machine" for inputs to drive into pwn-age. ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the halting problem—undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata." @bxsays did the same thing with Mach-O bytecode. Or take DWARF exception-handling data: .eh_frame + glibc == a Turing machine. The x86 MMU (IDT, GDT, TSS) is another: address translation was used to create a Turing machine. The page handler reads and writes memory (on page faults), using a page table, which can be used as Turing machine byte code. There's an example on GitHub using this TM to fly a glider across the screen. Next, Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs (certificate signing requests) are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—there are several parsers in the OS toolchain, which are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC. Conclusions:
- trusting computers is not only about bugs! Bugs are part of the problem, but by far not all of it
- complex data formats mean bugs
- there's no "chain of trust" in Babylon! (that is, with parser differentials)
- we need to squeeze complexity out of data until data stops being "code equivalent"
Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.

    Read the article

  • Unable to boot Windows 7 after installing Ubuntu

    - by Devendra
    I have Windows 7 on my machine and then installed Ubuntu 12.04 using a live CD. I can see both Windows 7 and Ubuntu in the GRUB menu, but when I select Windows 7 it shows a black screen for about 2 seconds and then returns to the GRUB menu. If I select Ubuntu, it works fine. This is the contents of the boot-repair log: Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info November 20th 2012] ============================= Boot Info Summary: =============================== => Grub2 (v2.00) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for (,msdos6)/boot/grub. sda1: __________________________________________________________________________ File system: ntfs Boot sector type: Grub2 (v1.99-2.00) Boot sector info: Grub2 (v2.00) is installed in the boot sector of sda1 and looks at sector 388911128 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for (,msdos6)/boot/grub. No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda2: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda3: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda4: __________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: According to the info in the boot sector, sda5 starts at sector 2048. 
Operating System: Boot files: sda6: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.10 Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/i386-pc/core.img sda7: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 206,848 146,802,687 146,595,840 7 NTFS / exFAT / HPFS /dev/sda2 147,007,488 293,623,807 146,616,320 7 NTFS / exFAT / HPFS /dev/sda3 293,623,808 332,820,613 39,196,806 7 NTFS / exFAT / HPFS /dev/sda4 332,822,526 1,465,145,343 1,132,322,818 f W95 Extended (LBA) /dev/sda5 461,342,720 1,465,145,343 1,003,802,624 7 NTFS / exFAT / HPFS /dev/sda6 332,822,528 453,171,199 120,348,672 83 Linux /dev/sda7 453,173,248 461,338,623 8,165,376 82 Linux swap / Solaris "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/sda1 F6AE2C13AE2BCB47 ntfs /dev/sda2 DC2273012272DFC6 ntfs /dev/sda3 1E76E43376E40D79 ntfs New Volume /dev/sda5 5ED60ACDD60AA57D ntfs /dev/sda6 9e70fd16-b48b-4f88-adcf-e443aef83124 ext4 /dev/sda7 52f3dd94-6be7-4a7b-a3ae-f43eb8810483 swap ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sda6 / ext4 (rw,errors=remount-ro) =========================== sda6/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi export menuentry_id_option if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi } if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi font="/usr/share/grub/unicode.pf2" fi if loadfont $font ; then set gfxmode=auto load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_IN insmod gettext fi 
terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=10 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9e70fd16-b48b-4f88-adcf-e443aef83124' { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash $vt_handoff initrd /boot/initrd.img-3.5.0-17-generic } submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-9e70fd16-b48b-4f88-adcf-e443aef83124' { menuentry 'Ubuntu, with Linux 3.5.0-17-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-17-generic-advanced-9e70fd16-b48b-4f88-adcf-e443aef83124' { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi echo 'Loading Linux 3.5.0-17-generic ...' linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.5.0-17-generic } menuentry 'Ubuntu, with Linux 3.5.0-17-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-17-generic-recovery-9e70fd16-b48b-4f88-adcf-e443aef83124' { recordfail insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi echo 'Loading Linux 3.5.0-17-generic ...' linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.5.0-17-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry 'Windows 7 (loader) (on /dev/sda1)' --class windows --class os $menuentry_id_option 'osprober-chain-F6AE2C13AE2BCB47' { insmod part_msdos insmod ntfs set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 F6AE2C13AE2BCB47 else search --no-floppy --fs-uuid --set=root F6AE2C13AE2BCB47 fi chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/30_uefi-firmware ### ### END /etc/grub.d/30_uefi-firmware ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda6/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sda6 during installation UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda7 during installation UUID=52f3dd94-6be7-4a7b-a3ae-f43eb8810483 none swap sw 0 0 -------------------------------------------------------------------------------- =================== sda6: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 162.831275940 = 174.838751232 boot/grub/grub.cfg 1 163.036647797 = 175.059267584 boot/initrd.img-3.5.0-17-generic 1 206.871749878 = 222.126850048 boot/vmlinuz-3.5.0-17-generic 1 163.036647797 = 175.059267584 initrd.img 1 163.036647797 = 175.059267584 initrd.img.old 1 206.871749878 = 222.126850048 vmlinuz 1 =============================== StdErr Messages: =============================== cat: write error: Broken pipe cat: write error: Broken pipe ADDITIONAL INFORMATION : =================== log of boot-repair 2012-12-11__00h59 =================== boot-repair version : 3.195~ppa28~quantal boot-sav version : 3.195~ppa28~quantal glade2script version : 3.2.2~ppa45~quantal boot-sav-extra version : 3.195~ppa28~quantal boot-repair is executed in installed-session (Ubuntu 12.10, quantal, Ubuntu, x86_64) CPU op-mode(s): 32-bit, 64-bit BOOT_IMAGE=/boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash vt.handoff=7 =================== os-prober: /dev/sda6:The OS now in use - Ubuntu 12.10 CurrentSession:linux /dev/sda1:Windows 7 (loader):Windows:chain =================== blkid: /dev/sda1: UUID="F6AE2C13AE2BCB47" TYPE="ntfs" /dev/sda2: UUID="DC2273012272DFC6" TYPE="ntfs" /dev/sda3: LABEL="New Volume" UUID="1E76E43376E40D79" TYPE="ntfs" /dev/sda5: UUID="5ED60ACDD60AA57D" TYPE="ntfs" /dev/sda6: UUID="9e70fd16-b48b-4f88-adcf-e443aef83124" TYPE="ext4" /dev/sda7: UUID="52f3dd94-6be7-4a7b-a3ae-f43eb8810483" TYPE="swap" 1 disks with OS, 2 OS : 1 Linux, 0 MacOS, 1 Windows, 0 unknown type OS. Warning: extended partition does not start at a cylinder boundary. DOS and Linux will interpret the contents differently. =================== /etc/default/grub : # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 #GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) 
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" =================== /etc/grub.d/ : drwxr-xr-x 2 root root 4096 Oct 17 20:29 grub.d total 72 -rwxr-xr-x 1 root root 7541 Oct 14 23:06 00_header -rwxr-xr-x 1 root root 5488 Oct 4 15:00 05_debian_theme -rwxr-xr-x 1 root root 10891 Oct 14 23:06 10_linux -rwxr-xr-x 1 root root 10258 Oct 14 23:06 20_linux_xen -rwxr-xr-x 1 root root 1688 Oct 11 19:40 20_memtest86+ -rwxr-xr-x 1 root root 10976 Oct 14 23:06 30_os-prober -rwxr-xr-x 1 root root 1426 Oct 14 23:06 30_uefi-firmware -rwxr-xr-x 1 root root 214 Oct 14 23:06 40_custom -rwxr-xr-x 1 root root 216 Oct 14 23:06 41_custom -rw-r--r-- 1 root root 483 Oct 14 23:06 README =================== UEFI/Legacy mode: This installed-session is not in EFI-mode. EFI in dmesg. Please report this message to [email protected] [ 0.000000] ACPI: UEFI 00000000bafe7000 0003E (v01 DELL QA09 00000002 PTL 00000002) [ 0.000000] ACPI: UEFI 00000000bafe6000 00042 (v01 PTL COMBUF 00000001 PTL 00000001) [ 0.000000] ACPI: UEFI 00000000bafe3000 00256 (v01 DELL QA09 00000002 PTL 00000002) SecureBoot disabled. =================== PARTITIONS & DISKS: sda6 : sda, not-sepboot, grubenv-ok grub2, grub-pc , update-grub, 64, with-boot, is-os, not--efi--part, fstab-without-boot, fstab-without-efi, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, apt-get, grub-install, with--usr, fstab-without-usr, not-sep-usr, standard, farbios, . sda1 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, is-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, haswinload, no-recov-nor-hid, bootmgr, is-winboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, not-far, /mnt/boot-sav/sda1. sda2 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda2. sda3 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda3. sda5 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda5. 
sda : not-GPT, BIOSboot-not-needed, has-no-EFIpart, not-usb, has-os, 2048 sectors * 512 bytes =================== parted -l: Model: ATA WDC WD7500BPKT-7 (scsi) Disk /dev/sda: 750GB Sector size (logical/physical): 512B/4096B Partition Table: msdos Number Start End Size Type File system Flags 1 106MB 75.2GB 75.1GB primary ntfs boot 2 75.3GB 150GB 75.1GB primary ntfs 3 150GB 170GB 20.1GB primary ntfs 4 170GB 750GB 580GB extended lba 6 170GB 232GB 61.6GB logical ext4 7 232GB 236GB 4181MB logical linux-swap(v1) 5 236GB 750GB 514GB logical ntfs =================== parted -lm: BYT; /dev/sda:750GB:scsi:512:4096:msdos:ATA WDC WD7500BPKT-7; 1:106MB:75.2GB:75.1GB:ntfs::boot; 2:75.3GB:150GB:75.1GB:ntfs::; 3:150GB:170GB:20.1GB:ntfs::; 4:170GB:750GB:580GB:::lba; 6:170GB:232GB:61.6GB:ext4::; 7:232GB:236GB:4181MB:linux-swap(v1)::; 5:236GB:750GB:514GB:ntfs::; =================== mount: /dev/sda6 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755) gvfsd-fuse on /run/user/dev/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=dev) /dev/sda1 on /mnt/boot-sav/sda1 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda2 on /mnt/boot-sav/sda2 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda3 on /mnt/boot-sav/sda3 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda5 on /mnt/boot-sav/sda5 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) =================== ls: /sys/block/sda (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro sda1 sda2 sda3 sda4 sda5 sda6 sda7 size slaves stat subsystem trace uevent /sys/block/sr0 (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro size slaves stat subsystem trace uevent /dev (filtered): alarm ashmem autofs binder block bsg btrfs-control bus cdrom cdrw char console core cpu cpu_dma_latency disk dri dvd dvdrw ecryptfs fb0 fb1 fd full fuse hpet input kmsg kvm log mapper mcelog mei mem net network_latency network_throughput null oldmem port ppp psaux ptmx pts random rfkill rtc rtc0 sda sda1 sda2 sda3 sda4 sda5 sda6 sda7 sg0 sg1 shm snapshot snd sr0 stderr stdin stdout uinput urandom v4l vga_arbiter vhost-net video0 zero ls /dev/mapper: control =================== df -Th: Filesystem Type Size Used Avail Use% Mounted on /dev/sda6 ext4 57G 2.7G 51G 6% / udev devtmpfs 1.9G 12K 1.9G 1% /dev tmpfs tmpfs 770M 892K 769M 1% /run none tmpfs 5.0M 0 5.0M 0% /run/lock none tmpfs 1.9G 260K 1.9G 1% /run/shm none tmpfs 100M 44K 100M 1% /run/user /dev/sda1 fuseblk 70G 36G 35G 51% /mnt/boot-sav/sda1 /dev/sda2 fuseblk 70G 66G 4.8G 94% /mnt/boot-sav/sda2 /dev/sda3 fuseblk 19G 87M 19G 1% /mnt/boot-sav/sda3 /dev/sda5 fuseblk 479G 436G 44G 92% /mnt/boot-sav/sda5 =================== fdisk -l: Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 
sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x1dc69d0b Device Boot Start End Blocks Id System /dev/sda1 * 206848 146802687 73297920 7 HPFS/NTFS/exFAT /dev/sda2 147007488 293623807 73308160 7 HPFS/NTFS/exFAT /dev/sda3 293623808 332820613 19598403 7 HPFS/NTFS/exFAT /dev/sda4 332822526 1465145343 566161409 f W95 Ext'd (LBA) Partition 4 does not start on physical sector boundary. /dev/sda5 461342720 1465145343 501901312 7 HPFS/NTFS/exFAT /dev/sda6 332822528 453171199 60174336 83 Linux /dev/sda7 453173248 461338623 4082688 82 Linux swap / Solaris Partition table entries are not in disk order =================== Recommended repair Recommended-Repair This setting will reinstall the grub2 of sda6 into the MBR of sda. Additional repair will be performed: unhide-bootmenu-10s grub-install (GRUB) 2.00-7ubuntu11,grub-install (GRUB) 2. Reinstall the GRUB of sda6 into the MBR of sda Installation finished. No error reported. grub-install /dev/sda: exit code of grub-install /dev/sda:0 update-grub Generating grub.cfg ... Found linux image: /boot/vmlinuz-3.5.0-17-generic Found initrd image: /boot/initrd.img-3.5.0-17-generic Found memtest86+ image: /boot/memtest86+.bin Found Windows 7 (loader) on /dev/sda1 Unhide GRUB boot menu in sda6/boot/grub/grub.cfg Boot successfully repaired. You can now reboot your computer. The boot files of [The OS now in use - Ubuntu 12.10] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, >200MB, start of the disk). This can be performed via tools such as gParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair]. (https://help.ubuntu.com/community/BootPartition)

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

Contents
- Why this is the most important part...
- Understanding the classification and standard rights model
- Identifying business use cases
- Creating an effective IRM classification model
- One single classification across the entire business
- A context for each and every possible granular use case
- What makes a good context?
- Deciding on the use of roles in the context
- Reviewing the features and security for context roles
- Summary

Why this is the most important part... Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, to have IRM technology protect your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure—be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports—and no matter what type of user is going to access the information—employees, contractors or customers—there are common goals you are always trying to achieve:
- Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly, and therefore results in a more secure deployment.
- K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
- Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents, and inform them of the value of the technology to both your business and them.
Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple-to-use IRM deployments.
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, and he's still delivering IRM consulting with customers; he can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions.

Understanding the classification and standard rights model

The goal of any successful IRM deployment is to balance the increase in security the technology brings without overcomplicating the way people use secured content, and to avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently, and use authentication methods such that users can open newly secured content initially unaware the document is any different from an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application, or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use, but that also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:
- a group of related documents
- the people that use the documents
- the roles that these people perform
- the rights that these people need to perform their role
The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places into the document information about the location of the IRM server that sealed it, the context applied to the document, and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all of those documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts against which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

Identifying business use cases

Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed, and how the recipients should interact with them.
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article). After letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

Creating an effective IRM classification model

Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within that use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

One single classification across the entire business

Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this.

• Document security classification decisions are simple. You only have one context to choose from!
• User provisioning is simple: just make sure everyone has a role in the only context in the business.
• Administration overhead is very low. If you assign rights to groups from the business user repository you probably never have to touch IRM administration again.

There are however some obvious downsides to this model.

• All users have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is.
• You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
• Changing a user's role affects every single document ever secured.

Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

A context for each and every possible granular use case

Now let's think about the total opposite of a single context design. What if you created a context for each and every defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group.

Q1FY2010 Restricted Internal - Engineering Group 1 - Research
Q1FY2010 Restricted Internal - Engineering Group 1 - Design
Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Restricted External - Engineering Group 1 - Research
Q1FY2010 Restricted External - Engineering Group 1 - Design
Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential Internal - Engineering Group 1 - Research
Q1FY2010 Confidential Internal - Engineering Group 1 - Design
Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential External - Engineering Group 1 - Research
Q1FY2010 Confidential External - Engineering Group 1 - Design
Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret External - Engineering Group 1 - Research
Q1FY2010 Top Secret External - Engineering Group 1 - Design
Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

That is 18 contexts for a single engineering group in a single quarter; multiply by the 6 engineering groups and you have 108. And because the rights model is reviewed every financial quarter, each group creates or reviews another 18 contexts every 3 months, so after a year a single group alone has accumulated 72 contexts. What would be the advantages of such a complex classification model?

• You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
• Your business may have very complex rights requirements, and mapping these directly to IRM may seem an obvious exercise.

The disadvantages of such a classification model are significant...

• Huge administrative overhead. Someone in the business must manage, review and administer each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
• From an end user's perspective, life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
They may be able to print content from one but not another, and be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
• Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all of their offline rights.
• Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, being asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts: too many contexts are unmanageable, and too few contexts do not give fine enough granularity.

What makes a good context?

Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created or exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:

• What is the topic?
• Who are the legitimate contributors on this topic?
• Who is the authorized readership?

If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure, and as such they cannot leak to your competitors; it is therefore better to seal to a broad context than not to seal at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine it as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it, and then take what you have learned into the next use case; refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

Deciding on the use of roles in the context

Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
For each context, identify:

Administrative roles
• Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
• Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
• Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

Document related roles
• Contributors: the people who create and edit documents in this context.
• Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
• Readers: the people who read documents from this context.

Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles map directly to roles available in Oracle IRM.

Reviewing the features and security for context roles

At this point we have decided on a classification of information, and we understand what roles people in the business will play when administering this classification and how they will interact with content. The final piece of the puzzle in gathering the information for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

The big print issue...

Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights.

• Oracle IRM sealed documents can contain watermarks that expose information about the user, the time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
• Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk.
• Print activity is audited, so you can monitor and react to users abusing print rights.

Summary

In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires that all information in the group be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introduce the complexity needed to deliver them at the right level. In another article I'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.


  • NTFS Corruption: Files created in Linux corrupted when Windows Boots

    - by Logan Mayfield
I'm getting some file loss and corruption on my Win7/Ubuntu 12.04 dual boot setup. I have a large shared NTFS partition. I have my Windows Docs/Music/etc. directories on that partition and have the comparable directories in Linux set up as symbolic links. I'm using ntfs-3g on the Linux side of things to manage the NTFS partition. The shared partition is on a logical partition along with my Linux /home, / and swap partitions. The NTFS partition is mounted at boot time via fstab with the following options:

ntfs-3g users,nls=utf8,locale=en_US.UTF-8,exec,rw

The problem seems to be confined to newly created and recently edited files. I have not seen data loss or corruption when creating/editing files in Windows and then moving over to Ubuntu. I've been using the sync command aggressively in Ubuntu to try to ensure everything is getting written to the HDD. I do not use hibernate in Windows, so I know it's not the usual missing-files-due-to-hibernation problem. I'm not seeing any mount related issues in dmesg.

Most recently I had a set of files related to a LaTeX document go bad. Some of them show up in Ubuntu but I am unable to delete them. In the GUI file browser they are given thumbnails associated with files I created on my last boot of Windows. To be more specific: I created a few png files in Windows. The files corrupted by that Windows boot are associated with running PdfLatex on a file and are not image files. However, two of the corrupted files show up with the thumbnail image of one of the previously mentioned png files. The png files are not in the same directory as the LaTeX files, but they are both within the Documents folder tree. I've had success with using NTFS for shared data in the past and am hoping there's some quirk here I'm missing and it's not just bad luck. On one hand this appears to be some kind of Windows problem, as data loss occurs when I boot to Windows after having worked in Ubuntu for a while. However, I'm assuming it's more on the Ubuntu end as it requires the special NTFS drivers.

Edit for more info: This is a Lenovo Thinkpad L430, purchased new in the last month, so it's a fairly fresh install. Many of the files on the shared partition were copied over from a previous NTFS formatted shared partition on another HDD. As requested, here's a sample chkdsk log. Some of the files it's mentioning were files that got deleted off the partition while in Ubuntu. Others were created/edited but not deleted. Checking file system on D: Volume dismounted. All opened handles to this volume are now invalid. Volume label is Files. CHKDSK is verifying files (stage 1 of 3)... Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x789f47 for possibly 0x21 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x42 is already in use. Deleting corrupt attribute record (128, "") from file record segment 66. 86496 file records processed. File verification completed. 385 large file records processed. 0 bad file records processed. 0 EA records processed. 0 reparse records processed. CHKDSK is verifying indexes (stage 2 of 3)... Deleted invalid filename Screenshot from 2012-09-09 09:51:27.png (72) in directory 46. The NTFS file name attribute in file 0x48 is incorrect. 53 00 63 00 72 00 65 00 65 00 6e 00 73 00 68 00 S.c.r.e.e.n.s.h. 6f 00 74 00 20 00 66 00 72 00 6f 00 6d 00 20 00 o.t. .f.r.o.m. . 32 00 30 00 31 00 32 00 2d 00 30 00 39 00 2d 00 2.0.1.2.-.0.9.-. 30 00 39 00 20 00 30 00 39 00 3a 00 35 00 31 00 0.9. .0.9.:.5.1. 
3a 00 32 00 37 00 2e 00 70 00 6e 00 67 00 0d 00 :.2.7...p.n.g... 00 00 00 00 00 00 90 94 49 1f 5e 00 00 80 d4 00 ......I.^.... File 72 has been orphaned since all its filenames were invalid Windows will recover the file in the orphan recovery phase. Correcting minor file name errors in file 72. Index entry found.000 of index $I30 in file 0x5 points to unused file 0x11. Deleting index entry found.000 in index $I30 of file 5. Index entry found.001 of index $I30 in file 0x5 points to unused file 0x16. Deleting index entry found.001 in index $I30 of file 5. Index entry found.002 of index $I30 in file 0x5 points to unused file 0x15. Deleting index entry found.002 in index $I30 of file 5. Index entry DOWNLO~1 of index $I30 in file 0x28 points to unused file 0x2b6. Deleting index entry DOWNLO~1 in index $I30 of file 40. Unable to locate the file name attribute of index entry Screenshot from 2012-09-09 09:51:27.png of index $I30 with parent 0x2e in file 0x48. Deleting index entry Screenshot from 2012-09-09 09:51:27.png in index $I30 of file 46. An index entry of index $I30 in file 0x32 points to file 0x151e8 which is beyond the MFT. Deleting index entry latexsheet.tex in index $I30 of file 50. An index entry of index $I30 in file 0x58bc points to file 0x151eb which is beyond the MFT. Deleting index entry D8CZ82PK in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151f7 which is beyond the MFT. Deleting index entry EGA4QEAX in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151e9 which is beyond the MFT. Deleting index entry NGTB469M in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151fb which is beyond the MFT. Deleting index entry WU5RKXAB in index $I30 of file 22716. Index entry comp220-lab3.synctex.gz of index $I30 in file 0xda69 points to unused file 0xd098. Deleting index entry comp220-lab3.synctex.gz in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp220-numberGrammars.aux of index $I30 with parent 0xda69 in file 0xa276. Deleting index entry comp220-numberGrammars.aux in index $I30 of file 55913. The file reference 0x500000000cd43 of index entry comp220-numberGrammars.out of index $I30 with parent 0xda69 is not the same as 0x600000000cd43. Deleting index entry comp220-numberGrammars.out in index $I30 of file 55913. The file reference 0x500000000cd45 of index entry comp220-numberGrammars.pdf of index $I30 with parent 0xda69 is not the same as 0xc00000000cd45. Deleting index entry comp220-numberGrammars.pdf in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15290 which is beyond the MFT. Deleting index entry gram.aux in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15291 which is beyond the MFT. Deleting index entry gram.out in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15292 which is beyond the MFT. Deleting index entry gram.pdf in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp230-quiz1.synctex.gz of index $I30 with parent 0xda6f in file 0xd183. Deleting index entry comp230-quiz1.synctex.gz in index $I30 of file 55919. An index entry of index $I30 in file 0xf3cc points to file 0x15283 which is beyond the MFT. Deleting index entry require-transform.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf3cc points to file 0x15284 which is beyond the MFT. 
Deleting index entry set.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf497 points to file 0x15280 which is beyond the MFT. Deleting index entry logger.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15281 which is beyond the MFT. Deleting index entry misc.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15282 which is beyond the MFT. Deleting index entry more-scheme.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf5bf points to file 0x15285 which is beyond the MFT. Deleting index entry core-layout.rkt in index $I30 of file 62911. An index entry of index $I30 in file 0xf5e0 points to file 0x15286 which is beyond the MFT. Deleting index entry ref.scrbl in index $I30 of file 62944. An index entry of index $I30 in file 0xf6f0 points to file 0x15287 which is beyond the MFT. Deleting index entry base-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15288 which is beyond the MFT. Deleting index entry html-properties.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15289 which is beyond the MFT. Deleting index entry html-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528b which is beyond the MFT. Deleting index entry latex-prefix.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528c which is beyond the MFT. Deleting index entry latex-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528e which is beyond the MFT. Deleting index entry scribble.tex in index $I30 of file 63216. An index entry of index $I30 in file 0xf717 points to file 0x1528a which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63255. An index entry of index $I30 in file 0xf721 points to file 0x1528d which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63265. An index entry of index $I30 in file 0xf764 points to file 0x1528f which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63332. An index entry of index $I30 in file 0x14261 points to file 0x15270 which is beyond the MFT. Deleting index entry fddff3ae9ae2221207f144821d475c08ec3d05 in index $I30 of file 82529. An index entry of index $I30 in file 0x14621 points to file 0x15268 which is beyond the MFT. Deleting index entry FETCH_HEAD in index $I30 of file 83489. An index entry of index $I30 in file 0x14650 points to file 0x15272 which is beyond the MFT. Deleting index entry 86 in index $I30 of file 83536. An index entry of index $I30 in file 0x14651 points to file 0x15266 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.idx in index $I30 of file 83537. An index entry of index $I30 in file 0x14651 points to file 0x15265 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.pack in index $I30 of file 83537. An index entry of index $I30 in file 0x146f1 points to file 0x15275 which is beyond the MFT. Deleting index entry master in index $I30 of file 83697. An index entry of index $I30 in file 0x146f6 points to file 0x15276 which is beyond the MFT. Deleting index entry remotes in index $I30 of file 83702. An index entry of index $I30 in file 0x1477d points to file 0x15278 which is beyond the MFT. Deleting index entry pad.rkt in index $I30 of file 83837. 
An index entry of index $I30 in file 0x14797 points to file 0x1527c which is beyond the MFT. Deleting index entry pad1.rkt in index $I30 of file 83863. An index entry of index $I30 in file 0x14810 points to file 0x1527d which is beyond the MFT. Deleting index entry cm.rkt in index $I30 of file 83984. An index entry of index $I30 in file 0x14926 points to file 0x1527e which is beyond the MFT. Deleting index entry multi-file-search.rkt in index $I30 of file 84262. An index entry of index $I30 in file 0x149ef points to file 0x1527f which is beyond the MFT. Deleting index entry com.rkt in index $I30 of file 84463. An index entry of index $I30 in file 0x14b47 points to file 0x15202 which is beyond the MFT. Deleting index entry COMMIT_EDITMSG in index $I30 of file 84807. An index entry of index $I30 in file 0x14b47 points to file 0x15279 which is beyond the MFT. Deleting index entry index in index $I30 of file 84807. An index entry of index $I30 in file 0x14b4c points to file 0x15274 which is beyond the MFT. Deleting index entry master in index $I30 of file 84812. An index entry of index $I30 in file 0x14b61 points to file 0x1520b which is beyond the MFT. Deleting index entry 02 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1525a which is beyond the MFT. Deleting index entry 28 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15208 which is beyond the MFT. Deleting index entry 29 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521f which is beyond the MFT. Deleting index entry 2c in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15261 which is beyond the MFT. Deleting index entry 2e in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f0 which is beyond the MFT. Deleting index entry 45 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1523e which is beyond the MFT. Deleting index entry 47 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151e5 which is beyond the MFT. Deleting index entry 49 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15214 which is beyond the MFT. Deleting index entry 58 in index $I30 of file 84833. Index entry 6e of index $I30 in file 0x14b61 points to unused file 0xd182. Deleting index entry 6e in index $I30 of file 84833. Unable to locate the file name attribute of index entry a0 of index $I30 with parent 0x14b61 in file 0xd29c. Deleting index entry a0 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521b which is beyond the MFT. Deleting index entry cd in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15249 which is beyond the MFT. Deleting index entry d6 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15242 which is beyond the MFT. Deleting index entry df in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15227 which is beyond the MFT. Deleting index entry ea in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1522e which is beyond the MFT. Deleting index entry f3 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f2 which is beyond the MFT. Deleting index entry ff in index $I30 of file 84833. 
An index entry of index $I30 in file 0x14b62 points to file 0x15254 which is beyond the MFT. Deleting index entry 1ed39b36ad4bd48c91d22cbafd7390f1ea38da in index $I30 of file 84834. An index entry of index $I30 in file 0x14b75 points to file 0x15224 which is beyond the MFT. Deleting index entry 96260247010fe9811fea773c08c5f3a314df3f in index $I30 of file 84853. An index entry of index $I30 in file 0x14b79 points to file 0x15219 which is beyond the MFT. Deleting index entry 8f689724ca23528dd4f4ab8b475ace6edcb8f5 in index $I30 of file 84857. An index entry of index $I30 in file 0x14b7c points to file 0x15223 which is beyond the MFT. Deleting index entry 1df17cf850656be42c947cba6295d29c248d94 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15217 which is beyond the MFT. Deleting index entry 31db8a3c72a3e44769bbd8db58d36f8298242c in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15267 which is beyond the MFT. Deleting index entry 8e1254d755ff1882d61c07011272bac3612f57 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b82 points to file 0x15246 which is beyond the MFT. Deleting index entry f959bfaf9643c1b9e78d5ecf8f669133efdbf3 in index $I30 of file 84866. An index entry of index $I30 in file 0x14b88 points to file 0x151fe which is beyond the MFT. Deleting index entry 7e9aa15b1196b2c60116afa4ffa613397f2185 in index $I30 of file 84872. An index entry of index $I30 in file 0x14b8a points to file 0x151ea which is beyond the MFT. Deleting index entry 73cb0cd248e494bb508f41b55d862e84cdd6e0 in index $I30 of file 84874. An index entry of index $I30 in file 0x14b8e points to file 0x15264 which is beyond the MFT. Deleting index entry bd555d9f0383cc14c317120149e9376a8094c4 in index $I30 of file 84878. An index entry of index $I30 in file 0x14b96 points to file 0x15212 which is beyond the MFT. Deleting index entry 630dba40562d991bc6cbb6fed4ba638542e9c5 in index $I30 of file 84886. An index entry of index $I30 in file 0x14b99 points to file 0x151ec which is beyond the MFT. Deleting index entry 478be31ca8e538769246e22bba3330d81dc3c8 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b99 points to file 0x15258 which is beyond the MFT. Deleting index entry 66c60c0a0f3253bc9a5112697e4cbb0dfc0c78 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b9c points to file 0x15238 which is beyond the MFT. Deleting index entry 1c7ceeddc2953496f9ffbfc0b6fb28846e3fe3 in index $I30 of file 84892. An index entry of index $I30 in file 0x14b9c points to file 0x15247 which is beyond the MFT. Deleting index entry ae6e32ffc49d897d8f8aeced970a90d3653533 in index $I30 of file 84892. An index entry of index $I30 in file 0x14ba0 points to file 0x15233 which is beyond the MFT. Deleting index entry f71c7d874e45179a32e138b49bf007e5bbf514 in index $I30 of file 84896. Index entry 2e04fefbd794f050d45e7a717d009e39204431 of index $I30 in file 0x14ba7 points to unused file 0xd097. Deleting index entry 2e04fefbd794f050d45e7a717d009e39204431 in index $I30 of file 84903. An index entry of index $I30 in file 0x14baa points to file 0x15241 which is beyond the MFT. Deleting index entry 0dda7dec1c635cd646dfef308e403c2843d5dc in index $I30 of file 84906. An index entry of index $I30 in file 0x14baa points to file 0x151fc which is beyond the MFT. Deleting index entry 98151e654dd546edcfdec630bc82d90619ac8e in index $I30 of file 84906. 
An index entry of index $I30 in file 0x14bb1 points to file 0x151e9 which is beyond the MFT. Deleting index entry 1997c5be62ffeebc99253cced7608415e38e4e in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x1521d which is beyond the MFT. Deleting index entry 6bf3aedefd3ac62d9c49cad72d05e8c0ad242c in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x151f4 which is beyond the MFT. Deleting index entry 907b755afdca14c00be0010962d0861af29264 in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb3 points to file 0x15218 which is beyond the MFT. Deleting index entry
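For reference, the mount configuration described above corresponds to an /etc/fstab entry along the following lines. This is only an illustrative sketch: the device UUID and mount point are placeholders, and only the option string is taken from the question itself.

# /etc/fstab (device UUID and mount point are placeholders)
UUID=0123456789ABCDEF  /media/shared  ntfs-3g  users,nls=utf8,locale=en_US.UTF-8,exec,rw  0  0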


  • WLS MBeans

    - by Jani Rautiainen
WLS provides a set of Managed Beans (MBeans) to configure, monitor and manage WLS resources. We can use the WLS MBeans to automate some of the tasks related to the configuration and maintenance of a WLS instance. The MBeans can be accessed in a number of ways: through various UIs, and programmatically using Java or WLST Python scripts. For customization development we can use these features to, for example, manage deployed customizations in MDS, control logging levels and automate the deployment of dependent libraries. This article is an introduction on how to access and use the WLS MBeans. The goal is to illustrate the various access methods in a single article; the details of the features are left to the linked documentation. This article covers a Windows based environment; the steps for Linux would be similar, with some differences such as how the file paths are defined.

MBeans

The WLS MBeans can be categorized into runtime and configuration MBeans. The runtime MBeans can be used to access runtime information about the server and its resources. The data from runtime beans is only available while the server is running. The runtime beans can be used to e.g. check the state of the server or of a deployment. The configuration MBeans contain information about the configuration of servers and resources. The configuration of the domain is stored in the config.xml file, and the configuration MBeans can be used to access and modify this configuration data. For more information on the WLS MBeans refer to:

• Understanding WebLogic Server MBeans
• WLS MBean reference

Java Management Extensions (JMX)

We can use the JMX APIs to access the WLS MBeans. This allows us to create Java programs to configure, monitor and manage WLS resources. In order to use the WLS MBeans we need to add the following library to the classpath:

WL_HOME\lib\wljmxclient.jar

Connecting to a WLS MBean server

The WLS MBeans are contained in an MBean server. Depending on the requirement we can connect to (MBean server / JNDI name):

• Domain Runtime MBean Server / weblogic.management.mbeanservers.domainruntime
• Runtime MBean Server / weblogic.management.mbeanservers.runtime
• Edit MBean Server / weblogic.management.mbeanservers.edit

To connect to the WLS MBean server, first we need to create a map containing the credentials:

Hashtable<String, String> param = new Hashtable<String, String>();
param.put(Context.SECURITY_PRINCIPAL, "weblogic");
param.put(Context.SECURITY_CREDENTIALS, "weblogic1");
param.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");

These define the user, the password and the package containing the protocol. Next we create the connection:

JMXServiceURL serviceURL =
    new JMXServiceURL("t3", "127.0.0.1", 7101,
        "/jndi/weblogic.management.mbeanservers.domainruntime");
JMXConnector connector = JMXConnectorFactory.connect(serviceURL, param);
MBeanServerConnection connection = connector.getMBeanServerConnection();

With the connection we can now access the MBeans for the WLS instance. For a complete example see Appendix A of this post. For more details refer to Accessing WebLogic Server MBeans with JMX.

Accessing WLS MBeans

The WLS MBeans are structured hierarchically; in order to access content we need to know the path to the MBean we are interested in. An MBean attribute is read using the MBeanServerConnection.getAttribute API.
WLS provides entry points to the hierarchy allowing us to navigate all the WLS MBeans (MBean server / JMX object name):

• Domain Runtime MBean Server / com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean
• Runtime MBean Server / com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean
• Edit MBean Server / com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean

For example, we can access the Domain Runtime MBean using:

ObjectName service = new ObjectName(
    "com.bea:Name=DomainRuntimeService," +
    "Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");

The same syntax works for any "child" WLS MBean, e.g. to find all application deployments we can:

ObjectName domainConfig = (ObjectName)connection.getAttribute(service, "DomainConfiguration");
ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig, "AppDeployments");

Alternatively we could access the same MBean using the full syntax:

ObjectName domainConfig = new ObjectName("com.bea:Location=DefaultDomain,Name=DefaultDomain,Type=Domain");
ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig, "AppDeployments");

For more details refer to Accessing WebLogic Server MBeans with JMX.

Invoking operations on WLS MBeans

WLS MBean operations can be invoked with the MBeanServerConnection.invoke API; in the following example we query the state of the "AppsLoggerService" application:

ObjectName appRuntimeStateRuntime = new ObjectName("com.bea:Name=AppRuntimeStateRuntime,Type=AppRuntimeStateRuntime");
Object[] parameters = { "AppsLoggerService", "DefaultServer" };
String[] signature = { "java.lang.String", "java.lang.String" };
String result = (String)connection.invoke(appRuntimeStateRuntime, "getCurrentState", parameters, signature);

The result returned should be "STATE_ACTIVE", assuming the "AppsLoggerService" application is up and running.

WebLogic Scripting Tool (WLST)

The WebLogic Scripting Tool (WLST) is a command-line scripting environment through which we can access the same WLS MBeans. The tool is located under:

$MW_HOME\oracle_common\common\bin\wlst.bat

Do note that there are several instances of the wlst script under $MW_HOME. Each of them works, but the commands available vary, so we want to use the one under "oracle_common". The tool starts in offline mode. In offline mode we can access and manipulate the domain configuration; in online mode we can access the runtime information. We connect to the Administration Server with:

connect("weblogic", "weblogic1", "t3://127.0.0.1:7101")

In both online and offline modes we can navigate the WLS MBeans using commands like "ls" to print content and "cd" to navigate between objects, for example cd('Servers') followed by ls(). All the commands available can be obtained with:

help('all')

For details of the tool refer to WebLogic Scripting Tool, and for the commands available, the WLST Command and Variable Reference. Also do note that the WLST tool can be invoked from Java code in Embedded Mode.

Running Scripts

The WLST tool allows us to automate tasks using Python scripts in Script Mode. The script can be manually created or recorded by the WLST tool.
Example commands for recording a script:

startRecording("c:/temp/recording.py")
<commands that we want to record>
stopRecording()

We can run the script from WLST:

execfile("c:/temp/recording.py")

We can also run the script from the command line:

C:\apps\Oracle\Middleware\oracle_common\common\bin\wlst.cmd c:/temp/recording.py

Various sample scripts are provided with the WLS instance.

UI to Access the WLS MBeans

There are various UIs through which we can access the WLS MBeans:

• Oracle Enterprise Manager Fusion Middleware Control
• Oracle WebLogic Server Administration Console
• Fusion Middleware Control MBean Browser

In the integrated JDeveloper environment only the Oracle WebLogic Server Administration Console is available to us. For more information refer to the documentation; one noteworthy feature in the console is the ability to record WLST scripts based on the navigation. In addition to the UIs above, the JConsole included in the JDK can be used to access the WLS MBeans. JConsole needs to be started with specific parameters to force the WLS objects to be used and with the WLS jar files in the classpath:

"C:\apps\Oracle\Middleware\jdk160_24\bin\jconsole" -J-Djava.class.path=C:\apps\Oracle\Middleware\jdk160_24\lib\jconsole.jar;C:\apps\Oracle\Middleware\jdk160_24\lib\tools.jar;C:\apps\Oracle\Middleware\wlserver_10.3\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote

For more details refer to Accessing Custom MBeans from JConsole.

Summary

In this article we have covered various ways to access and use the WLS MBeans in the context of the integrated WLS in JDeveloper, for Fusion Applications customization development.

References

• Developing Custom Management Utilities With JMX for Oracle WebLogic Server
• Accessing WebLogic Server MBeans with JMX
• WebLogic Server MBean Reference
• WebLogic Scripting Tool
• WLST Command and Variable Reference

Appendix A

package oracle.apps.test;

import java.io.IOException;
import java.net.MalformedURLException;
import java.util.Hashtable;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

/**
 * This class contains simple examples on how to access WLS MBeans using JMX.
 */
public class BlogExample {

    /**
     * Connection to the WLS MBeans
     */
    private MBeanServerConnection connection;

    /**
     * Constructor that takes in the connection information for the
     * domain and obtains the resources from WLS MBeans using JMX.
     * @param hostName host name to connect to for the WLS server
     * @param port port to connect to for the WLS server
     * @param userName user name to connect to for the WLS server
     * @param password password to connect to for the WLS server
     */
    public BlogExample(String hostName, String port, String userName,
                       String password) {
        super();
        try {
            initConnection(hostName, port, userName, password);
        } catch (Exception e) {
            throw new RuntimeException("Unable to connect to the domain " +
                                       hostName + ":" + port);
        }
    }

    /**
     * Default constructor.
     * Tries to create a connection with default values. A runtime exception
     * will be thrown if the default values are not used in the local instance.
     */
    public BlogExample() {
        this("127.0.0.1", "7101", "weblogic", "weblogic1");
    }

    /**
     * Initializes the JMX connection to the WLS MBeans
     * @param hostName host name to connect to for the WLS server
     * @param port port to connect to for the WLS server
     * @param userName user name to connect to for the WLS server
     * @param password password to connect to for the WLS server
     * @throws IOException error connecting to the WLS MBeans
     * @throws MalformedURLException error connecting to the WLS MBeans
     * @throws MalformedObjectNameException error connecting to the WLS MBeans
     */
    private void initConnection(String hostName, String port, String userName,
                                String password)
        throws IOException, MalformedURLException,
               MalformedObjectNameException {
        String protocol = "t3";
        String jndiroot = "/jndi/";
        String mserver = "weblogic.management.mbeanservers.domainruntime";
        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostName, Integer.valueOf(port),
                              jndiroot + mserver);
        Hashtable<String, String> h = new Hashtable<String, String>();
        h.put(Context.SECURITY_PRINCIPAL, userName);
        h.put(Context.SECURITY_CREDENTIALS, password);
        h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
              "weblogic.management.remote");
        JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);
        connection = connector.getMBeanServerConnection();
    }

    /**
     * Main method used to invoke the logic for testing
     * @param args arguments passed to the program
     */
    public static void main(String[] args) {
        BlogExample blogExample = new BlogExample();
        blogExample.testEntryPoint();
        blogExample.testDirectAccess();
        blogExample.testInvokeOperation();
    }

    /**
     * Example of using an entry point to navigate the WLS MBean hierarchy.
     */
    public void testEntryPoint() {
        try {
            System.out.println("testEntryPoint");
            ObjectName service =
                new ObjectName("com.bea:Name=DomainRuntimeService,Type=" +
                    "weblogic.management.mbeanservers.domainruntime." +
                    "DomainRuntimeServiceMBean");
            ObjectName domainConfig =
                (ObjectName)connection.getAttribute(service,
                                                    "DomainConfiguration");
            ObjectName[] appDeployments =
                (ObjectName[])connection.getAttribute(domainConfig,
                                                      "AppDeployments");
            for (ObjectName appDeployment : appDeployments) {
                String resourceIdentifier =
                    (String)connection.getAttribute(appDeployment,
                                                    "SourcePath");
                System.out.println(resourceIdentifier);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /**
     * Example of accessing a WLS MBean directly with a full reference.
     * This does the same thing as testEntryPoint in a slightly different way.
     */
    public void testDirectAccess() {
        try {
            System.out.println("testDirectAccess");
            ObjectName appDeployment =
                new ObjectName("com.bea:Location=DefaultDomain," +
                               "Name=AppsLoggerService,Type=AppDeployment");
            String resourceIdentifier =
                (String)connection.getAttribute(appDeployment, "SourcePath");
            System.out.println(resourceIdentifier);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /**
     * Example of invoking an operation on a WLS MBean.
     */
    public void testInvokeOperation() {
        try {
            System.out.println("testInvokeOperation");
            ObjectName appRuntimeStateRuntime =
                new ObjectName("com.bea:Name=AppRuntimeStateRuntime," +
                               "Type=AppRuntimeStateRuntime");
            String identifier = "AppsLoggerService";
            String serverName = "DefaultServer";
            Object[] parameters = { identifier, serverName };
            String[] signature = { "java.lang.String", "java.lang.String" };
            String result =
                (String)connection.invoke(appRuntimeStateRuntime,
                                          "getCurrentState",
                                          parameters, signature);
            System.out.println("State of " + identifier + " = " + result);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
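To compile and run the appendix example outside JDeveloper, wljmxclient.jar needs to be on the classpath as noted earlier. A minimal sketch, assuming the same Windows install paths used in the JConsole example above; adjust them for your own installation:

javac -cp C:\apps\Oracle\Middleware\wlserver_10.3\server\lib\wljmxclient.jar oracle\apps\test\BlogExample.java
java -cp C:\apps\Oracle\Middleware\wlserver_10.3\server\lib\wljmxclient.jar;. oracle.apps.test.BlogExample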

