Search Results

Search found 23832 results on 954 pages for 'home drive'.

Page 268/954 | < Previous Page | 264 265 266 267 268 269 270 271 272 273 274 275  | Next Page >

  • Partition Configuration to avoid reinstalling applications after update

    - by nightcrawler
    The major bane when I update Ubuntu (which is way more frequent than Windows) is that I lose all installed applications. To be precise, I do a lot of Maple, Matlab & Geogebra work, and for all of those I install the Java platform, which isn't very straightforward either, plus the license management things, which really give me grief. I don't install applications in /home (they are to be made available to all users), thus a separate /home partition is meaningless. Can we circumvent this problem somehow, so that Java-dependent applications along with the JDK don't get blown away after an update, maybe by a separate partition (just like /home) where only custom applications (other than those provided by the Ubuntu Software Center) reside? Further: I use a specific binary of Java (Java 6 update 32); it's an important requirement for me, so I don't want to let it crash or be overwritten or similar.
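
    A sketch of the separate-partition idea (the partition, UUID and paths below are hypothetical): keep custom software on its own partition mounted outside /, e.g. at /opt, and tell the installer to reuse (not format) that partition on the next upgrade, exactly as one would with /home:

        # /etc/fstab - hypothetical entry for a partition holding custom software
        UUID=0a1b2c3d-...   /opt   ext4   defaults   0   2

        # example: keep the specific JDK under /opt and reference it explicitly
        sudo ln -s /opt/jdk1.6.0_32/bin/java /usr/local/bin/java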

    Read the article

  • About partition sizes

    - by Lassi
    I am going to install Ubuntu on a new computer, but I'm not quite sure how big each partition should be. If I create only root, home and swap partitions, on what partition will programs be installed? Will they go to /home or to root? Basically does it make sense for instance to have following partitions: / - 6GB /home - 80GB /swap - 4GB Is 6GB large enough for my root partition? Also are these 3 partitions a good choice, or is there a better configuration? I have at the moment 3 operating systems installed, and I do make changes quite often.
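
    For what it's worth, packaged programs install to the root filesystem (/usr, /var, etc.), not to /home. A quick way to sanity-check whether 6GB is enough for / is to look at what an existing install actually consumes (a sketch, run on any current system):

        df -h /                        # overall root usage
        sudo du -sh /usr /var /opt     # the usual large consumers outside /home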

    Read the article

  • run .profile function as cron job

    - by Don
    In the .profile file of the root user I have defined a function, e.g.

        function printDate() {
          date
        }

    I want to run this function every minute and append the output to cron.log. I tried adding the following crontab entry:

        * * * * * printDate > $HOME/cron.log 2>&1

    But it doesn't work. The cron.log file gets created, but it's empty. I guess this is because .profile isn't read by cron, so any functions/aliases defined therein are unavailable to it. So I tried changing the crontab entry to:

        * * * * * source $HOME/.profile;printDate >> $HOME/cron.log 2>&1

    But this doesn't work either. It seems cron still doesn't have access to the printDate function, because I see the following in cron.log:

        /bin/sh: printDate: not found
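
    Cron runs jobs with /bin/sh (dash on Ubuntu), where both the source builtin and the function keyword are bashisms, so the function never becomes available. A sketch that forces bash instead (assuming .profile is bash-compatible; note also that a one-line definition needs a semicolon before the closing brace, i.e. function printDate() { date; }):

        SHELL=/bin/bash
        * * * * * . $HOME/.profile; printDate >> $HOME/cron.log 2>&1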

    Read the article

  • Change administrator username

    - by Fazlan
    I accidentally typed my name wrong when I created the administrator account. Although I managed to change the user name at the login screen, I am unable to rename /home/oldusername to /home/newusername. I tried most of the online tutorials, and they failed. The command I tried was this:

        usermod -l newusername -m -d /home/newusername oldusername

    But the output is:

        cannot lock /etc/passwd; try again later.

    How can I fix this issue, change the folder to newusername, and have all the applications work as before?
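
    usermod must run as root and refuses to touch an account that is in use; the lock message can also come from a stale lock file left by an interrupted attempt. A sketch, run from a different admin account, a text console (Ctrl+Alt+F1) or recovery mode, with no process of oldusername still running:

        ls /etc/passwd.lock /etc/shadow.lock 2>/dev/null    # stale locks from a crashed run; remove if present
        sudo usermod -l newusername oldusername
        sudo usermod -d /home/newusername -m newusername
        sudo groupmod -n newusername oldusername            # rename the matching group too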

    Read the article

  • How to mount a NAS on a laptop?

    - by deckoff
    So, I bought a NAS, which I configured successfully in /etc/fstab on my Kubuntu 10.10 Thinkpad X40. It works just fine when I am at home. A few days ago I went out with my laptop, and the problem is that when I am not at home, both the suspend and hibernate functions take forever to complete. I commented out the entry in fstab and the laptop started to work as expected. I played with autofs, but it just seems to die at some point and I cannot access anything; it works for some time and then goes off. Is there any consistent way to make my laptop access the drive when at home and work OK when away? Probably a script that runs at startup, checks if the mount is available and mounts it if so... or a script that unmounts the drive at suspend/hibernate and mounts it back afterwards. Any useful ideas?
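
    One workable pattern: mark the share noauto in fstab so boot and suspend never block on it, then mount/unmount from a small reachability check (host name, mount point and filesystem type below are hypothetical). On 10.10 the same script can be dropped into /etc/pm/sleep.d so it unmounts before suspend:

        # /etc/fstab - noauto keeps boot/suspend from blocking on the NAS
        mynas:/export/data   /mnt/nas   nfs   noauto,soft   0   0

        #!/bin/sh
        # mount-nas-if-home.sh - a sketch
        if ping -c1 -W2 mynas >/dev/null 2>&1; then
            mountpoint -q /mnt/nas || mount /mnt/nas
        else
            mountpoint -q /mnt/nas && umount -l /mnt/nas
        fi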

    Read the article

  • How to automatically mount a Windows shared folder on every boot up?

    - by Zabba
    I am able to access a Windows shared folder from Ubuntu 10.10 Nautilus like so: type into the location bar:

        smb://box/projects

    Now I can see the folder in Nautilus and create/read files in it. Also, on the desktop I get a folder called "projects on box". But that folder on the desktop goes away when I reboot. So I thought I could automount the Windows shared Projects folder by adding this to my fstab:

        //box/Projects /home/base/Projects smbfs rw,user,username=jack,password=www222,fmask=666,dmask=777 0 0

    (base is my user name on Ubuntu.) Now I get a folder called "Projects" in my home folder after boot-up, but it is empty (I cannot see the same files that I can see in Nautilus). What am I doing wrong? Some more detail: this is what I see of the Projects folder when I do ls -l in my home folder:

        ...
        drwxr-xr-x 2 root root 4096 2011-01-01 10:22 Projects
        drwxr-xr-x 2 base base 4096 2011-01-01 09:06 Public
        ...

    Note the two "roots". Is that somehow the problem?
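
    The root-owned, empty directory is the bare mount point: most likely the mount itself never happened, since the smbfs type was already deprecated by 10.10. A sketch using cifs instead (the credentials file and uid/gid mapping are assumptions):

        //box/Projects  /home/base/Projects  cifs  rw,credentials=/home/base/.smbcredentials,uid=base,gid=base,iocharset=utf8  0  0

        # /home/base/.smbcredentials (chmod 600; keeps the password out of fstab)
        username=jack
        password=www222

    If it still comes up empty, check dmesg | tail after sudo mount /home/base/Projects for the actual mount error.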

    Read the article

  • How to enable a symlink in this case

    - by Bragboy
    I cannot categorize this question under ubuntu since it has nothing to do with it, but I know people here can definitely answer it. I log in to one of my deployment boxes using SSH (no Ubuntu here). I am working with a tool called TeamCity, which uses a folder called ".BuildServer" under the home directory of the user. This folder may grow in size as the application runs, but the current user is only given a limited amount of space. The good thing is that I have access to a folder outside /home/deploy (deploy being the user here). I now want to link this .BuildServer inside the /home/deploy directory to the other folder I have permission for (meaning all the files should be re-routed to that directory). Hope my question was clear; please help.
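
    The standard trick is to move the directory and leave a symlink in its place, so TeamCity keeps using the same path (the target path below is hypothetical; stop TeamCity first so nothing writes mid-move):

        mv /home/deploy/.BuildServer /data/teamcity/BuildServer
        ln -s /data/teamcity/BuildServer /home/deploy/.BuildServer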

    Read the article

  • How to copy files via terminal?

    - by Levan
    This might sound silly to some people, but I'm new to Linux and don't know how to use it as well as other people; yes, I read about copying files with the terminal, but these examples will help me a lot. So here is what I want to do. Examples: I have a file /home/levan/kdenlive/untitelds.mpg and I want to copy this file to /media/sda3/SkyDrive without deleting anything in the SkyDrive directory. I have a file /media/sda3/SkyDrive/untitelds.mpg and I want to copy this file to /home/levan/kdenlive without deleting anything in the kdenlive directory. I want to copy a folder from my home directory to sda3 without deleting anything in the sda3 directory, and the opposite way round. And I want to cut a folder/file and copy it to another place without deleting the files in the directory I cut it into.
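
    A sketch covering each case (cp and mv never remove other files in the target directory; they only overwrite a file of the same name, so add -i to be prompted before overwriting):

        # 1. copy a file, home -> SkyDrive
        cp /home/levan/kdenlive/untitelds.mpg /media/sda3/SkyDrive/

        # 2. copy a file, SkyDrive -> home
        cp /media/sda3/SkyDrive/untitelds.mpg /home/levan/kdenlive/

        # 3. copy a whole folder (either direction)
        cp -r /home/levan/myfolder /media/sda3/

        # 4. "cut" = move
        mv /home/levan/myfolder /media/sda3/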

    Read the article

  • Why do I get this error trying to compile libxml2?

    - by bfaskiplar
    Although I have installed libxml2 and reinstalled it a few times, I cannot compile C source code because the compiler cannot find the header file. I am able to locate it (in the folder where I downloaded the tar.gz package), but I have a feeling in my gut that this package isn't installed correctly, because when I try sudo make install it says:

        /bin/bash: /home/bfaskiplar/Downloads/tar.gz: No such file or directory
        make[2]: *** [install-libLTLIBRARIES] Error 127
        make[2]: Leaving directory `/home/bfaskiplar/Downloads/tar.gz packages/libxml2-2.8.0'
        make[1]: *** [install-am] Error 2
        make[1]: Leaving directory `/home/bfaskiplar/Downloads/tar.gz packages/libxml2-2.8.0'
        make: *** [install-recursive] Error 1

    That's why I installed the Synaptic package manager and reinstalled it that way, but in that case, isn't it supposed to put the header files in the default directory where gcc normally searches? Currently I am able to compile with the -I option, but I wonder why I have to copy headers manually even though I used Synaptic for the installation, and why I am getting Error 1 and Error 2 when trying to install the package manually. Thanks in advance.
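
    Judging from the error, the space in the "tar.gz packages" directory name is probably what breaks the build: the path gets split at the space, hence "/home/bfaskiplar/Downloads/tar.gz: No such file or directory". A sketch (the new directory name is arbitrary):

        mv "$HOME/Downloads/tar.gz packages" "$HOME/Downloads/tarballs"
        cd "$HOME/Downloads/tarballs/libxml2-2.8.0"
        ./configure && make && sudo make install

    For the headers, the packaged route is simpler than building from source:

        sudo apt-get install libxml2-dev
        gcc myprog.c $(xml2-config --cflags --libs)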

    Read the article

  • In the end there will be a Firefox application on the iPhone, but it will not be a browser: Mozill

    In the end there will be a Firefox application on the iPhone, but it will not be a browser. On a first (too) quick reading of the title of the Mozilla Foundation's blog post, you think "this is it!!!", Tristan Nitot has been telling us fibs, and there will be a Firefox for the iPhone after all. In fact, there will indeed be a Firefox, but it will not be the browser. It will be a service named Firefox Home, hence the title of Mozilla's post: "Firefox Home Coming Soon to the iPhone". "Firefox Home [offers], on devices or platforms where we are not able to provide a 'full' browser (for techni...

    Read the article

  • Why isn't gsettings enough to reset my Nautilus settings?

    - by berdario
    I'm trying to narrow down a bug (to eventually report it, or give up if it turns out that resetting a simple setting is enough to get rid of it). I noticed that with a brand-new user the bug doesn't pop up, so I tried to reset my user's Nautilus config: I renamed .config/nautilus, I renamed .nautilus2 (strange name... why the 2?), and I did a

        gsettings reset-recursively org.gnome.nautilus

    Strangely enough, it didn't work. I had previously set my home as the desktop folder, and the key has now correctly been reset:

        $ gsettings get org.gnome.nautilus.preferences desktop-is-home-dir
        false

    and yet on my desktop I still see the contents of my home folder (meaning the settings have not been reset). I know that with the transition to GTK3 there have been lots of changes (before, I should have used gconf... but now there's dconf and gsettings), so there are quite a few questions about Nautilus... but these unfortunately now seem to be outdated.
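
    One thing worth ruling out (an assumption; the excerpt doesn't say whether Nautilus was restarted): Nautilus keeps drawing the desktop with the settings it read at startup, so a reset only shows up after a restart. Some pre-GTK3 settings may also still live in gconf rather than dconf:

        nautilus -q                                    # quit; it respawns with fresh settings
        gconftool-2 --recursive-unset /apps/nautilus   # clear any leftover gconf keys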

    Read the article

  • Unable to start GUI app from upstart

    - by novice
    As part of the post-start of my app, say "mydaemon", I want to launch a GUI app, say "mygui", but I am unable to do this. I have verified the user permissions using xhost, and the DISPLAY variable is set correctly. The conf file in /etc/init/ is given below:

        me@ubuntu:~/term$ cat /etc/init/agentd.conf
        description "my daemon"
        author "me"

        start on runlevel [2345]
        stop on runlevel [016]
        console output
        kill timeout 60
        respawn
        respawn limit 3 15

        # Allow some clean up time
        post-stop script
            env DISPLAY=:0.0
            cd /home/me
            ./mygui
            sleep 1
        end script

        script
            cd /home/me
            ./myapp
        end script

        post-start script
            env DISPLAY=:0.0
            cd /home/me
            ./mygui
        end script

    Any suggestions?
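
    One visible problem: inside a script stanza those lines are plain shell, so env DISPLAY=:0.0 on a line of its own runs env(1) (printing the environment) rather than setting DISPLAY for ./mygui. A sketch of the post-start stanza (assuming the X session really is :0.0 and belongs to user "me"):

        post-start script
            export DISPLAY=:0.0
            cd /home/me
            sudo -u me ./mygui &
        end script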

    Read the article

  • RuneScape - Ubuntu 12.04 - Jagex cache folder

    - by user214179
    Does anyone here know how to hide the jagexcache folder, the jagexappletviewer file and the jagex_cl_runescape_LIVE file from the home folder? The problem is this: every time you play RuneScape (on both Windows and Linux), it creates those files and folders. On Windows you can hide them and everything is fine. On Linux, if you prefix the folder or file name with a dot (.) to hide it, then when you play RuneScape the files and folders are created all over again, because the game cannot find them. As far as I know the game is Java based (it needs the Java IcedTea plugin and Java 7), so is there any way of changing the directory where the game puts all those files, like /home/documents instead of just /home? Thanks!
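
    Since the files are created relative to the Java user.home property, one thing to try when launching the client yourself is pointing user.home somewhere else (a sketch; whether the Jagex loader honors the override is an assumption):

        java -Duser.home=$HOME/.jagex -jar jagexappletviewer.jar ...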

    Read the article

  • Exclude a sub directory in a protected directory

    - by user1351358
    I need to exclude one folder inside a password-protected directory from the protection with .htaccess. I put the .htaccess here: /home/mysite/public_html/new/administrator/.htaccess. The directory that needs to be excluded from protection is: /home/mysite/public_html/new/administrator/components/com_phocagallery/. My .htaccess file:

        AuthUserFile "/home/mysite/.htpasswds/public_html/new/administrator/passwd"
        AuthType Basic
        AuthName "admin"
        require valid-user
        SetEnvIf Request_URI "(/components/com_phocagallery/)$" allow
        Order allow,deny
        Allow from env=allow
        Satisfy any

    I tried this, but it does not work for my purpose. I suspect my path to the excluded directory may have a mistake in it. Please advise me. Thanks.
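
    The trailing $ anchors the pattern, so only requests ending exactly in /components/com_phocagallery/ set the variable; everything deeper in that folder stays protected. A sketch without the anchor, so the whole subtree matches (Apache 2.2 syntax, matching the directives above):

        AuthUserFile "/home/mysite/.htpasswds/public_html/new/administrator/passwd"
        AuthType Basic
        AuthName "admin"
        require valid-user
        SetEnvIf Request_URI "/components/com_phocagallery/" allow
        Order allow,deny
        Allow from env=allow
        Satisfy any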

    Read the article

  • gfortran in Ubuntu 12.10

    - by user115334
    I hope my message gets read soon and somebody gives me a solution. I use Fortran for simulations, and gfortran is the compiler I use. Recently I migrated from Ubuntu 10.10 to 12.10. After installing gfortran I tried to compile and run my Fortran programs, and that is when the problem started: I can compile a program successfully, but I am unable to execute it. (I work in a directory on a shared partition, not in my HOME directory.) When I compile the program and run it within the HOME directory, everything works fine. On Ubuntu 10.10 I was able to compile and execute Fortran programs from everywhere, not only within the HOME directory. This is what I do to compile and execute a Fortran program:

        gfortran hello.f90 -o hello   # to compile it
        ./hello                       # to execute it

    I'm blind about PATH and anything like it (I suspect this has to do with it), so please give me directions.
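
    PATH is probably not the culprit, since ./hello bypasses PATH entirely. A more likely cause (an assumption; the mount options aren't shown) is that the shared partition is mounted noexec, or is FAT/NTFS mounted without execute permission. A quick check and a sketch of a fix (the mount point is hypothetical):

        mount | grep media                           # look for noexec, or missing exec, in the options
        sudo mount -o remount,exec /media/shared

    For a permanent fix, add exec (and, for NTFS/FAT, a suitable umask/fmask) to the partition's options in /etc/fstab.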

    Read the article

  • System.getProperty("user.dir") does not return my project root path, but the path where my Eclipse is located

    - by facebook-100005613813158
    As the title says, I have a class named GetException.java; inside it I read an XML file in a static code block (because this document is shared), like:

        static {
            ...
            document = db.parse(new File(System.getProperty("user.dir") + "/src/exception/ExceptionCode.xml"));
            ...
        }

    To test whether the file path is correct, I wrote a main function inside GetException.java; it proves that the path is correct and the XML file can be read successfully. My project root dir is "/home/wuchang/workspace/MongodbI". But when this class is loaded from another class, such as when I call one of its static functions, it reports this error message:

        /home/mrs/??/eclipse/src/exception/ExceptionCode.xml (No such file or directory)

    /home/mrs/??/eclipse/ is actually my Eclipse installation directory. So I wonder: why did System.getProperty("user.dir") return the Eclipse installation directory to me instead of my project root directory?
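
    user.dir is simply the JVM's working directory at launch, so its value depends on how (and from where) the program is started, not on where the project lives. Launching from the project root makes the relative path valid again (the class and output directory names below are assumptions):

        cd /home/wuchang/workspace/MongodbI
        java -cp bin exception.GetException

    A more robust fix is to load the file from the classpath, e.g. GetException.class.getResourceAsStream("/exception/ExceptionCode.xml"), so the working directory stops mattering.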

    Read the article

  • Samba login before requesting list of shares?

    - by user69359
    I'm relatively inexperienced at this and am starting to get a little frustrated. I have an Ubuntu server with several user accounts. The Samba file server hosts a number of public shares plus the home directory of each user as a private share. When connecting to the server using my OSX machine, I am prompted to enter my login data and get an overview of all the public shares plus my account's home directory share. Exactly the way I want it. However, when I connect to the server from an Ubuntu machine, I am never prompted to enter any login data and can only see the public folders. If a user wants to connect to their home directory on the server, they have to go through "Connect to Server" and mount their specific private share. Is there any way to configure Samba or the Ubuntu client so that the user experience is more similar to how it is on my OSX box? Thank you for your time!
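
    This matches the default [homes] behavior: home shares are usually browseable = no, and the GNOME file browser lists shares anonymously before any login, so only public shares appear; OSX asks for credentials up front, which is why the home share shows there. Two sketches (whether they match your smb.conf is an assumption):

        # on the client: browse your own share with explicit credentials
        nautilus smb://youruser@server/youruser

        # on the server: the usual private-home-share stanza in smb.conf
        [homes]
           comment = Home Directories
           valid users = %S
           browseable = no
           read only = no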

    Read the article

  • Delphi Pascal - Using SetFilePointerEx and GetFileSizeEx, Getting Physical Media exact size when reading as a file

    - by SuicideClutchX2
    I am having trouble understanding how to declare GetFileSizeEx and SetFilePointerEx in Delphi 2009 so that I can use them, since they are not in the RTL or Windows.pas. I was able to compile with the following:

        function GetFileSizeEx(hFile: THandle; lpFileSizeHigh: Pointer): DWORD; external 'kernel32';

    Then I used GetFileSizeEx(PD, Pointer(DriveSize)); to get the size, but could not get it to work. The disk handle I am using is valid, and I have had no problem reading the data or working under the 2 GB mark with the older APIs. GetFileSize, of course, returns 4294967295. I have had greater trouble trying to use SetFilePointerEx with the data types it uses. The overall project needs to read the data from a flash card, which is not a problem at all; I can do this. My problem is that I cannot find the length or size of the media I will be reading. I have code I have used in the past to do this with media under 2 GB, but now that I need to read media over 2 GB it is a problem. If you still don't understand: I am dumping a card with all data, including the boot record, etc. This is the code I would normally use to read from the physical disk to grab, say, the boot record and dump it to a file:

        SetFilePointer(PD, 0, nil, FILE_BEGIN);
        SetLength(Buffer, 512);
        ReadFile(PD, Buffer[0], 512, BytesReturned, nil);

    I just need to figure out how to find the end of an 8 GB card and so on, as well as being able to set a file pointer beyond the 2 GB barrier. I guess any help with the external declarations, as well as understanding the values that SetFilePointerEx uses (I do not understand the whole high/low thing), would be of great help.

        var
          Form1: TForm1;

        function GetFileSizeEx(hFile: THandle; var FileSize: Int64): DWORD; stdcall; external 'kernel32';

        implementation

        {$R *.dfm}

        function GetLD(Drive: Char): Cardinal;
        var
          Buffer: String;
        begin
          Buffer := Format('\\.\%s:', [Drive]);
          Result := CreateFile(PChar(Buffer), GENERIC_READ or GENERIC_WRITE, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
          if Result = INVALID_HANDLE_VALUE then
            Result := CreateFile(PChar(Buffer), GENERIC_READ, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
        end;

        function GetPD(Drive: Byte): Cardinal;
        var
          Buffer: String;
        begin
          if Drive = 0 then
          begin
            Result := INVALID_HANDLE_VALUE;
            Exit;
          end;
          Buffer := Format('\\.\PHYSICALDRIVE%d', [Drive]);
          Result := CreateFile(PChar(Buffer), GENERIC_READ or GENERIC_WRITE, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
          if Result = INVALID_HANDLE_VALUE then
            Result := CreateFile(PChar(Buffer), GENERIC_READ, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
        end;

        function GetPhysicalDiskNumber(Drive: Char): Byte;
        var
          LD: DWORD;
          DiskExtents: PVolumeDiskExtents;
          DiskExtent: TDiskExtent;
          BytesReturned: Cardinal;
        begin
          Result := 0;
          LD := GetLD(Drive);
          if LD = INVALID_HANDLE_VALUE then Exit;
          try
            DiskExtents := AllocMem(Max_Path);
            DeviceIOControl(LD, IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS, nil, 0, DiskExtents, Max_Path, BytesReturned, nil);
            if DiskExtents^.NumberOfDiskExtents > 0 then
            begin
              DiskExtent := DiskExtents^.Extents[0];
              Result := DiskExtent.DiskNumber;
            end;
          finally
            CloseHandle(LD);
          end;
        end;

        procedure TForm1.Button1Click(Sender: TObject);
        var
          PD: DWORD;
          BytesReturned: Cardinal;
          Buffer: array of Byte;
          myFile: File;
          DriveSize: Int64;
        begin
          PD := GetPD(GetPhysicalDiskNumber(Edit1.Text[1]));
          if PD = INVALID_HANDLE_VALUE then Exit;
          try
            GetFileSizeEx(PD, DriveSize);
            //SetFilePointer(PD, 0, nil, FILE_BEGIN);
            //SetLength(Buffer, 512);
            //ZeroMemory(@Buffer, SizeOf(Buffer));
            //ReadFile(PD, Buffer[0], 512, BytesReturned, nil);
            //AssignFile(myFile, 'StickDump.bin');
            //ReWrite(myFile, 512);
            //BlockWrite(myFile, Buffer[0], 1);
            //CloseFile(myFile);
          finally
            CloseHandle(PD);
          end;
        end;
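
    For the declarations themselves, the Win32 prototypes map to Delphi like this (a sketch based on the MSDN signatures: both return BOOL, both are stdcall, and SetFilePointerEx takes the 64-bit distance by value, so no high/low splitting is needed):

        function GetFileSizeEx(hFile: THandle; var lpFileSize: Int64): BOOL; stdcall;
          external 'kernel32' name 'GetFileSizeEx';

        function SetFilePointerEx(hFile: THandle; liDistanceToMove: Int64;
          lpNewFilePointer: PInt64; dwMoveMethod: DWORD): BOOL; stdcall;
          external 'kernel32' name 'SetFilePointerEx';

        // usage sketch: seek to an absolute 64-bit offset
        // if not SetFilePointerEx(PD, Offset, nil, FILE_BEGIN) then RaiseLastOSError;

    Note that GetFileSizeEx is documented for file handles; on \\.\PHYSICALDRIVEn handles it commonly fails, which is why the follow-up question below switches to DeviceIoControl with IOCTL_DISK_GET_LENGTH_INFO.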

    Read the article

  • Delphi - Using DeviceIoControl passing IOCTL_DISK_GET_LENGTH_INFO to get flash media physical size (Not Partition)

    - by SuicideClutchX2
    Alright, this is the result of a couple of other questions. It appears I was doing something wrong with the suggestions, and at this point I have come up with an error when using the suggested API to get the media size. For those new to my problem: I am working at the physical disk level, not within the confines of a partition or file system. Here is the pastebin code for the main unit (Delphi 2009): http://clutchx2.pastebin.com/iMnq8kSx. Here is the application source and executable, with a form built to output the status of what's going on: http://www.mediafire.com/?js8e6ci8zrjq0de. It's probably easier to use the download, unless you're just looking for problems within the code. I will also paste the code here.

        unit Main;

        interface

        uses
          Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
          Dialogs, StdCtrls;

        type
          TfrmMain = class(TForm)
            edtDrive: TEdit;
            lblDrive: TLabel;
            btnMethod1: TButton;
            btnMethod2: TButton;
            lblSpace: TLabel;
            edtSpace: TEdit;
            lblFail: TLabel;
            edtFail: TEdit;
            lblError: TLabel;
            edtError: TEdit;
            procedure btnMethod1Click(Sender: TObject);
          private
            { Private declarations }
          public
            { Public declarations }
          end;

          TDiskExtent = record
            DiskNumber: Cardinal;
            StartingOffset: Int64;
            ExtentLength: Int64;
          end;
          DISK_EXTENT = TDiskExtent;
          PDiskExtent = ^TDiskExtent;

          TVolumeDiskExtents = record
            NumberOfDiskExtents: Cardinal;
            Extents: array[0..0] of TDiskExtent;
          end;
          VOLUME_DISK_EXTENTS = TVolumeDiskExtents;
          PVolumeDiskExtents = ^TVolumeDiskExtents;

        var
          frmMain: TfrmMain;

        const
          FILE_DEVICE_DISK = $00000007;
          METHOD_BUFFERED = 0;
          FILE_ANY_ACCESS = 0;
          IOCTL_DISK_BASE = FILE_DEVICE_DISK;
          IOCTL_VOLUME_BASE = DWORD('V');
          IOCTL_DISK_GET_LENGTH_INFO = $80070017;
          IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS = ((IOCTL_VOLUME_BASE shl 16) or
            (FILE_ANY_ACCESS shl 14) or (0 shl 2) or METHOD_BUFFERED);

        implementation

        {$R *.dfm}

        function GetLD(Drive: Char): Cardinal;
        var
          Buffer: String;
        begin
          Buffer := Format('\\.\%s:', [Drive]);
          Result := CreateFile(PChar(Buffer), GENERIC_READ or GENERIC_WRITE, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
          if Result = INVALID_HANDLE_VALUE then
            Result := CreateFile(PChar(Buffer), GENERIC_READ, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
        end;

        function GetPD(Drive: Byte): Cardinal;
        var
          Buffer: String;
        begin
          if Drive = 0 then
          begin
            Result := INVALID_HANDLE_VALUE;
            Exit;
          end;
          Buffer := Format('\\.\PHYSICALDRIVE%d', [Drive]);
          Result := CreateFile(PChar(Buffer), GENERIC_READ or GENERIC_WRITE, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
          if Result = INVALID_HANDLE_VALUE then
            Result := CreateFile(PChar(Buffer), GENERIC_READ, FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
        end;

        function GetPhysicalDiskNumber(Drive: Char): Byte;
        var
          LD: DWORD;
          DiskExtents: PVolumeDiskExtents;
          DiskExtent: TDiskExtent;
          BytesReturned: Cardinal;
        begin
          Result := 0;
          LD := GetLD(Drive);
          if LD = INVALID_HANDLE_VALUE then Exit;
          try
            DiskExtents := AllocMem(Max_Path);
            DeviceIOControl(LD, IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS, nil, 0, DiskExtents, Max_Path, BytesReturned, nil);
            if DiskExtents^.NumberOfDiskExtents > 0 then
            begin
              DiskExtent := DiskExtents^.Extents[0];
              Result := DiskExtent.DiskNumber;
            end;
          finally
            CloseHandle(LD);
          end;
        end;

        procedure TfrmMain.btnMethod1Click(Sender: TObject);
        var
          PD: DWORD;
          CardSize: Int64;
          BytesReturned: DWORD;
          CallSuccess: Boolean;
        begin
          PD := GetPD(GetPhysicalDiskNumber(edtDrive.Text[1]));
          if PD = INVALID_HANDLE_VALUE then
          begin
            ShowMessage('Invalid Physical Disk Handle');
            Exit;
          end;
          CallSuccess := DeviceIoControl(PD, IOCTL_DISK_GET_LENGTH_INFO, nil, 0,
            @CardSize, SizeOf(CardSize), BytesReturned, nil);
          if not CallSuccess then
          begin
            edtError.Text := IntToStr(GetLastError());
            edtFail.Text := 'True';
          end
          else
            edtFail.Text := 'False';
          CloseHandle(PD);
        end;

        end.

    I placed a second method button on the form so I can write a different set of code into the app if I feel like it. Only minimal error handling and safeguards are there; nothing that wasn't necessary for debugging this via source. I tried this on a Sony Memory Stick, using a PSP as the reader, because I can't find the adapter for using a Duo in my machine. The target is an MS, and half of my users use a PSP for a reader, half don't. However, this should work fine on SD cards too, and that is a secondary target for my work. I tried it on a USB memory card reader and several SD cards. Now that I have fixed my attempt, I get an error returned: 50, ERROR_NOT_SUPPORTED ("The request is not supported"). I have found an application that uses this API, as well as a lot of related functions, for what I am trying to do, and I am getting ready to look into it; the application is called DriveImage and its source is here: http://sourceforge.net/projects/diskimage/. The only thing I have really noticed from that application is its use of TFileStream to get a handle on the physical disk.
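
    Error 50 (ERROR_NOT_SUPPORTED) from a USB card reader often means the USB bridge simply doesn't implement IOCTL_DISK_GET_LENGTH_INFO. A hedged fallback is IOCTL_DISK_GET_DRIVE_GEOMETRY, which such devices usually do support; it derives the size from C/H/S geometry, so it can slightly under-report the true capacity:

        const
          IOCTL_DISK_GET_DRIVE_GEOMETRY = $00070000;

        type
          TDiskGeometry = record              // mirrors the Win32 DISK_GEOMETRY struct
            Cylinders: Int64;
            MediaType: DWORD;
            TracksPerCylinder: DWORD;
            SectorsPerTrack: DWORD;
            BytesPerSector: DWORD;
          end;

        function GetDiskSizeViaGeometry(PD: THandle): Int64;
        var
          Geometry: TDiskGeometry;
          BytesReturned: DWORD;
        begin
          Result := 0;
          if DeviceIoControl(PD, IOCTL_DISK_GET_DRIVE_GEOMETRY, nil, 0,
            @Geometry, SizeOf(Geometry), BytesReturned, nil) then
            Result := Geometry.Cylinders * Geometry.TracksPerCylinder *
              Geometry.SectorsPerTrack * Geometry.BytesPerSector;
        end;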

    Read the article

  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
    What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g, but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing:

        RDA: Remote Diagnostic Agent
        DFW: Diagnostic Framework
        Selective Tracing
        DMS: Dynamic Monitoring Service
        ODL: Oracle Diagnostic Logging
        ADR: Automatic Diagnostics Repository
        ADRCI: Automatic Diagnostics Repository Command Interpreter
        WLDF: WebLogic Diagnostic Framework

    This overview is not meant to be a comprehensive guide on using all of these tools; however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable to Fusion Middleware as a whole, but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post, but there are more specific resources in the sections below. In this post, when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control.

    RDA (Remote Diagnostic Agent)

    RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA you configure it to use the SOA Suite module as described in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc.

    When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, capturing environment snapshots to baseline your configuration, or to diagnose an issue on your own.

    How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs, and these may contain information from Selective Tracing sessions.

    Examples of what it currently collects (for details please see the links in the Resources section):

        Diagnostic Logs (ODL)
        Diagnostic Framework Incidents (DFW)
        SOA MDS Deployment Descriptors
        SOA Repository Summary Statistics
        Thread Dumps
        Complete Domain Configuration

    RDA Resources:

        Webcast Recording: Using RDA with Oracle SOA Suite 11g
        Blog Post: Diagnose SOA Suite 11g Issues Using RDA
        Download RDA
        How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1]
        How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1]
        Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1]

    DFW (Diagnostic Framework)

    DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW:

        Diagnostic Dumps: Specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps.
        Incident: A collection of Diagnostic Dumps associated with a particular problem.
        Log Conditions: An Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified then an Incident will be created.
        WLDF Watch: The WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW; however, it can be a source of DFW Incident creation through the use of a 'Watch'.
        WLDF Notification: A Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available to you out of the box to allow for DFW notification of Watch execution.
        Rule: Defines a WLDF Watch or Log Condition for which we want to associate a set of Diagnostic Dumps. When triggered, the specified dumps will be collected and added to the Incident.
        Rule Action: Defines the specific Diagnostic Dumps to collect for a particular rule.
        ADR: Automatic Diagnostics Repository; defined for every server in a domain. This is where Incidents are stored.

    Now let's walk through a simple flow:

        1. Oracle Web Services error message OWS-04086 (SOAP Fault) is generated on managed server 1.
        2. The DFW Log Condition for OWS-04086 evaluates to TRUE.
        3. DFW creates a new Incident in the ADR for managed server 1.
        4. DFW executes the specified Diagnostic Dumps and adds the output to the Incident. In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail.

    When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger, or when you want to manually collect the information. In either case it can be readily uploaded to Oracle Support through the Service Request.

    How is it related to the other tools? DFW generates Incidents, which are collections of Diagnostic Dumps. One of the system level Diagnostic Dumps collects the current server diagnostic log, which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default, and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated; the resources below go into more detail.

    A simpler approach to leveraging at least the Diagnostic Dumps is through WLST (WebLogic Scripting Tool), where there are commands to do the following (a WLST sketch follows at the end of this overview):

        Create an Incident
        Execute a single Diagnostic Dump
        Describe a Diagnostic Dump
        List the available Diagnostic Dumps

    The WLST option offers greater control in what is generated and when, and it can be a great help when collecting information for Support. There are overlaps with RDA; however, DFW is geared towards collecting specific runtime information when an issue occurs, while existing Incidents are collected by RDA.

    There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds, so they have the potential to create multiple Incidents if these conditions persist. The Incidents generated by these Watches will only contain system level Diagnostic Dumps. These same system level Diagnostic Dumps will be included in any application scoped Incident as well.

    Starting in 11.1.1.6, SOA Suite includes its own set of application scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident, such as in the earlier example using the error code OWS-04086:

        soa.config: MDS configuration files and deployed-composites.xml
        soa.composite: All artifacts related to the deployed composite
        soa.wsdl: Summary of endpoints configured for the composite
        soa.edn: EDN configuration summary if applicable
        soa.db: Summary DB information for the SOA repository
        soa.env: Coherence cluster configuration summary
        soa.composite.trail: Partial audit trail information for the running composite

    The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases, along with enhancements to DFW itself.

    DFW Resources:

        Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework
        Diagnostic Framework Documentation
        DFW WLST Command Reference
        Documentation for SOA Diagnostic Dumps in 11.1.1.6

    Selective Tracing

    Selective Tracing is a facility, available starting in version 11.1.1.4, that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that only increases the log level for one composite, only one logger, limited to one server in the cluster, and for a preset period of time. In an environment where dozens of composites are deployed this can dramatically reduce the volume and overhead of the logging without sacrificing relevance. Selective Tracing can be administered either from Enterprise Manager or through WLST. WLST provides a bit more flexibility in terms of exactly where the tracing is run.

    When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criterion, and increasing the log level globally would result in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager.

    How is it related to the other tools? Selective Tracing output is written to the server diagnostic log. This log can be collected by a system level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session.

    Available Context Criteria:

        Application Name
        Client Address
        Client Host
        Composite Name
        User Name
        Web Service Name
        Web Service Port

    Selective Tracing Resources:

        Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues
        How to Use Selective Tracing for SOA [ID 1367174.1]
        Selective Tracing WLST Reference

    DMS (Dynamic Monitoring Service)

    DMS exposes runtime information for monitoring. This information can be monitored in two ways: through the DMS servlet, and as exposed MBeans. The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS). Every Noun instance in the runtime is exposed as an MBean instance. As such, they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out-of-the-box DFW notification type to notify DFW to create an Incident.

    When would you use it? When you want to monitor a metric or set of metrics either manually or through an automated system, or when you want to trigger a WLDF Watch based on a metric exposed through DMS.

    How is it related to the other tools? DMS metrics can be monitored with WLDF Watches, which can in turn notify DFW to create an Incident.

    DMS Resources:

        How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
        How to Reset a SOA 11g DMS Metric
        DMS Documentation

    ODL (Oracle Diagnostic Logging)

    ODL is the primary facility for most Fusion Middleware applications to log what they are doing. Whenever you change a logging level through Enterprise Manager it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception is WebLogic Server, which uses its own log format/file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry:

        [2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[

    When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log.

    How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA. Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW Log Conditions are triggered by ODL log events.

    ODL Resources:

        ODL Documentation

    ADR (Automatic Diagnostics Repository)

    ADR is not a tool in and of itself, but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location, which can be found under <SERVER_HOME>/adr. This is referred to as the ADR 'Base' location. ADR also has what are known as 'Home' locations. Example: You have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer', and here is where you'll find the 'adr' directory, which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill through a few levels: diag/ofm/myDomain/. In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW, and this ADR home contains SOA Suite related Incidents. Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored.

    When would you use it? It's a good idea to check on these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working on with Oracle, you can simply zip it up and provide it.

    How does it relate to the other tools? ADR is obviously very important for DFW since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default, and ADRCI relies on the ADR locations to help manage the contents.

    ADRCI (Automatic Diagnostics Repository Command Interpreter)

    ADRCI is a command line tool for packaging and managing Incidents.

    When would you use it? When purging Incidents from an ADR Home location, or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support.

    How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System, or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6, IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path; otherwise it will give a warning and package only the Incident files.

    ADRCI Resources:

        How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1]
        ADRCI Documentation

    WLDF (WebLogic Diagnostic Framework)

    WLDF is functionality available in WebLogic Server since version 9. Starting with FMW 11g, a link has been added between WLDF and the pre-existing DFW: the WLDF Watch Notification. Let's take a closer look at the flow:

        1. There is a need to monitor the performance of your SOA Suite message processing.
        2. A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance.
        3. The out-of-the-box DFW Notification (the Notification is called FMWDFW-notification) is added to the Watch. Under the covers this notification is of type JMX.
        4. The Watch is triggered when the threshold is exceeded and fires the Notification.
        5. DFW has a listener that picks up the Notification and evaluates it according to its rules, etc.

    When it comes to automatic Incident creation, WLDF is a key component, with capabilities that will grow over time.

    When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered.

    How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification.

    WLDF Resources:

        How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
        How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1]
        WLDF Documentation
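
    As referenced in the DFW section above, a WLST session for working with Diagnostic Dumps looks roughly like this (a sketch: host, credentials and the chosen dump name are assumptions, and exact command availability depends on your 11g installation):

        $ $MW_HOME/oracle_common/common/bin/wlst.sh
        wls:/offline> connect('weblogic', '<password>', 't3://adminhost:7001')
        wls:/mydomain/serverConfig> listDumps()                        # list the available Diagnostic Dumps
        wls:/mydomain/serverConfig> describeDump(name='jvm.threads')   # what a given dump collects
        wls:/mydomain/serverConfig> executeDump(name='jvm.threads', outputFile='/tmp/threads.txt')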

    Read the article

  • Best choice for a personal "online backup" in Europe

    - by marc_s
    I'm looking for an online backup solution for personal use. Besides all the usual requirements (like not too expensive, since it's for personal use), I'd like to add two requirements: the data center should be in Europe (I don't want my personal data stored in the US, when the next crazed president comes along and wants to confiscate and rifle through everybody's files.....), and the online backup store should be accessible through a drive letter in cmd.exe. So far, I've looked at a few services, but none have totally convinced me. Dropbox is looking OK, but they insist on creating a silly "My Dropbox" directory in my data path, and there's no way I can choose that name. Sorry, "My everything" is for dummies; I like to name my files and folders according to my liking. LiveDrive is OK, too: they offer European storage, drive letters and all, but those drive letters are only available in Windows Explorer, not on the cmd.exe command line :-( and since I do 99% of my work on the command line, this is a major drawback..... Any other services I haven't looked at that are worth checking out? Marc

    Read the article

  • Using "offline files sync" to sync with a local resource [closed]

    - by Kije
    Possible Duplicate: Which is the best application to Sync two folders?

    I have been trying to get my machine (XP Pro SP3) to sync files to a local USB drive in the same way as I can with mapped network drives. I particularly want the sync to happen automatically when the USB drive is connected, in the same way that offline files sync when the network drive comes online. I can get a folder on the USB drive mounted as a network drive, but cannot get XP to offer the offline-files functionality. I have tried MS's SyncToy: it works as advertised and will do scheduled and ad-hoc syncs, but does not seem to offer automatic syncs. I suspect I could do this by plugging my USB drive into another machine on my network, but that seems heavy-handed. All insight appreciated. If you know this cannot be done, please say so.
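
    In the absence of true offline-files support for local drives, a scheduled robocopy job gets close (a sketch: the drive letter and paths are hypothetical, robocopy ships in the Windows Resource Kit for XP, and /MIR mirrors deletions, so test on copies first). Run it from Task Scheduler at logon or on a short interval:

        @echo off
        rem sync-usb.cmd - mirror C:\Data onto the USB drive only when it is plugged in
        if exist E:\Sync\nul robocopy C:\Data E:\Sync /MIR /R:1 /W:1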

    Read the article

  • Can't re-install OS on Asus Eee PC 1018P

    - by cornerback84
    My friend has an Asus Eee PC 1018P. It has no CD/DVD drive (and neither does he have a USB CD/DVD drive). The OS wasn't working properly, so we decided to restore the system using the provided OS backup on the hard drive. But midway through, the installation was interrupted and the computer restarted (not a hardware or software issue; we did it). Now we cannot run the backup, and we have also tried to install Windows 7 from USB, but as soon as the OS install starts, it says that some device driver is missing. It turns out that the OS needs a USB device driver to continue. The machine has USB 3.0; maybe that's why the OS needs the driver. I tried disabling 3.0 and enabling 2.0, but then it does not boot from the USB drive for some reason. We are stuck with this: the backup doesn't run, and when booting from USB it says it needs a device driver. Anyone have any idea what we could do?

    Read the article

  • What’s the password for my FileVault 2 boot volume?

    - by cbowns
    Apple's support document for FileVault 2 (a.k.a. "full disk encryption" or "FDE") has lots of information about enabling FDE and what it means for booting the machine. However, it doesn't cover one very important thing I'm trying to do: mount the drive in the Recovery HD environment to reinstall OS X on it. The Recovery HD environment asks me for the volume passphrase so it can mount my drive and try to install OS X onto it. If this were an external drive on which I'd manually enabled FDE with diskutil, or an external Time Machine volume, I'd know the passphrase, because it makes you pick one (just like a regular login password), but FileVault 2 never asked me for a volume passphrase (I assume it selects one behind the scenes). I've tried my main user's password, but that doesn't work, and neither does the recovery key set for the volume. Keychain Access doesn't have anything that I could find. How do I unlock this volume?
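
    For context, FileVault 2 wraps the volume key with each enabled user's login password and with the recovery key, so one of those should unlock it; if the GUI prompt keeps refusing, the Terminal in the Recovery environment gives more explicit errors. A sketch (the UUID is whatever the list command shows for the encrypted Logical Volume):

        diskutil corestorage list
        diskutil corestorage unlockVolume <logical-volume-UUID> -stdinpassphrase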

    Read the article

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit’s end with File Server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian Knot of a technology that seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a logical solution. My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will be acting as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file system VM. Please note, these each are the depreciated nodes of a cluster, the main nodes will be on the most powerful first machine. My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-geebies, too bad. It has a 6TB (usable) RAID-10 array, and will (in the end) hold the primary nodes of both aforementioned clusters, but is right now holding all VMs. This is, right now: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually, I will be adding additional VMs to handle Exchange, Sharepoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, “wait, what about a SAN or a NAS for the file servers?”, well too bad. What exists on the main machine is what I have to deal with. I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM-host of the main machine, and attached one to each file server VM. The end result is that I want my FS01 to sit on the heavy lifter, along with its iSCSI “drive”, and FS02 will sit on the secondary machine with its own iSCSI “drive” there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. As such, the clustered FS will utterly duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or the FS VM) goes toes-up, the other has got a full copy of the data on its own iSCSI drive. My problem occurs when I try to apply the file server role within the failover cluster manager. Actually, it is even before that -- it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the entire process fails for a random disk of the pair. That is, one will get online, but the other will remain offline because it does not have the correct “owner node”. I mean, really -- WTF? Of course it doesn’t have the right owner node, both drives are showing the same node name!! I cannot seem to have one drive show up with one node name as owner, and the other drive show up with the other node name as owner. And because both drives are not “online”, I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place! 
I’ve got more to add, but my work is closing for the day and I have to wrap things up. I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, the storage on each machine, but a transparent failover in case one physical machine fails. Essentially, a failover FS that doesn’t care which machine fails -- the storage contents are replicated equally on each machine. Am I even heading in the right direction?

    Read the article
