Search Results

Search found 867 results on 35 pages for 'ed kirk'.

Page 22/35 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • how to make python load dylib on osx

    - by navicore
    Hi, I'm trying to load a shared library out of the current ('.') directory in a unit test on OS X. On Linux and NetBSD this works with a symlink _mymodule.so --> ../.libs/libmymodule.so, but on OS X Python's import mymodule won't find _mymodule.dylib --> ../.libs/libmymodule.dylib. I've tried adding export DYLD_LIBRARY_PATH=.:$DYLD_LIBRARY_PATH to the script environment, with no luck. Any help appreciated. -Ed

    Read the article

  • Ubuntu hard disk problem

    - by Henadzy
    Hello! I am getting errors from a hard disk on Ubuntu 9.10. It slows down my system, and applications stop responding for long periods. But when I mount and use the filesystem on this hard disk from another computer, it works properly.
    disk: SAMSUNG HD161HJ (SATA)
    syslog:
    Apr 25 00:28:25 vare6gin kernel: [ 885.773839] ata3.00: exception Emask 0x1 SAct 0x1e SErr 0x0 action 0x6 frozen
    Apr 25 00:28:25 vare6gin kernel: [ 885.773845] ata3.00: Ata error. fis:0x21
    Apr 25 00:28:25 vare6gin kernel: [ 885.773861] ata3.00: cmd 60/08:08:3f:00:ad/00:00:10:00:00/40 tag 1 ncq 4096 in
    Apr 25 00:28:25 vare6gin kernel: [ 885.773864] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error)
    Apr 25 00:28:25 vare6gin kernel: [ 885.773871] ata3.00: status: { DRDY ERR }
    Apr 25 00:28:25 vare6gin kernel: [ 885.773877] ata3.00: error: { UNC }
    Apr 25 00:28:25 vare6gin kernel: [ 885.773890] ata3.00: cmd 60/18:10:9f:6b:ed/00:00:0e:00:00/40 tag 2 ncq 12288 in
    Apr 25 00:28:25 vare6gin kernel: [ 885.773893] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error)
    Apr 25 00:28:25 vare6gin kernel: [ 885.773900] ata3.00: status: { DRDY ERR }
    Apr 25 00:28:25 vare6gin kernel: [ 885.773904] ata3.00: error: { UNC }
    Apr 25 00:28:25 vare6gin kernel: [ 885.773918] ata3.00: cmd 60/08:18:3f:5f:ed/00:00:0e:00:00/40 tag 3 ncq 4096 in
    Apr 25 00:28:25 vare6gin kernel: [ 885.773921] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error)
    Apr 25 00:28:25 vare6gin kernel: [ 885.773927] ata3.00: status: { DRDY ERR }
    Apr 25 00:28:25 vare6gin kernel: [ 885.773932] ata3.00: error: { UNC }
    Apr 25 00:28:25 vare6gin kernel: [ 885.773946] ata3.00: cmd 60/08:20:67:c8:91/00:00:05:00:00/40 tag 4 ncq 4096 in
    Apr 25 00:28:25 vare6gin kernel: [ 885.773948] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error)
    Apr 25 00:28:25 vare6gin kernel: [ 885.773955] ata3.00: status: { DRDY ERR }
    Apr 25 00:28:25 vare6gin kernel: [ 885.773960] ata3.00: error: { UNC }
    Apr 25 00:28:25 vare6gin kernel: [ 885.773970] ata3: hard resetting link
    Apr 25 00:28:25 vare6gin kernel: [ 885.773974] ata3: nv: skipping hardreset on occupied port
    Apr 25 00:28:25 vare6gin kernel: [ 886.240073] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Apr 25 00:28:25 vare6gin kernel: [ 886.256277] ata3.00: configured for UDMA/133
    Apr 25 00:28:25 vare6gin kernel: [ 886.256305] ata3: EH complete
    Apr 25 00:28:27 vare6gin kernel: [ 888.176088] ata3: EH in SWNCQ mode,QC:qc_active 0xF sactive 0xF
    Apr 25 00:28:27 vare6gin kernel: [ 888.176099] ata3: SWNCQ:qc_active 0xF defer_bits 0x0 last_issue_tag 0x3
    Apr 25 00:28:27 vare6gin kernel: [ 888.176102] dhfis 0xF dmafis 0x1 sdbfis 0x0
    Apr 25 00:28:27 vare6gin kernel: [ 888.176109] ata3: ATA_REG 0x51 ERR_REG 0x40
    Apr 25 00:28:27 vare6gin kernel: [ 888.176113] ata3: tag : dhfis dmafis sdbfis sacitve
    Apr 25 00:28:27 vare6gin kernel: [ 888.176120] ata3: tag 0x0: 1 1 0 1
    Apr 25 00:28:27 vare6gin kernel: [ 888.176126] ata3: tag 0x1: 1 0 0 1
    Apr 25 00:28:27 vare6gin kernel: [ 888.176131] ata3: tag 0x2: 1 0 0 1
    Apr 25 00:28:27 vare6gin kernel: [ 888.176136] ata3: tag 0x3: 1 0 0 1

    Read the article

  • Apache and file permissions

    - by Matthew
    I'm running LAMP on Ubuntu 8.04. Apache's username and group are www-data. I put my connection details and AES key in a file in a directory that's not web served. I chown-ed the files to www-data:www-data and set the permissions to 700. Still, the script that require()s these files will only run if I chmod the files to 755. What am I missing?

    Read the article

  • /usr/bin/ld: cannot find -llibeststring.a

    - by Peeyush
    I am using the Festival TTS C++ API in my program. I have downloaded all the files from http://www.cstr.ed.ac.uk/downloads/festival/2.0.95/ and installed festival and speech_tools successfully on my Ubuntu 10.04. Now when I compile my C++ program, gcc gives this error:
    g++ -L/usr/lib -L/home/peeyush/Desktop/festival/src/lib -L/home/peeyush/Desktop/speech_tools/lib -o"peeyush" ./src/peeyush.o -llibeststring.a -llibestbase.a -llibestools.a -llibFestival.a
    /usr/bin/ld: cannot find -llibeststring.a
    collect2: ld returned 1 exit status
    make: *** [peeyush] Error 1
    Please help me sort out this error. -Thanks Peeyush Chandel (INDIA)

    Read the article

  • Process limit for user in Linux

    - by BrainCore
    This is the standard question, "How do I set a process limit for a user account in Linux to prevent fork-bombing," with an additional twist. The running program originates as a root-owned Python process, which then setuids/setgids itself as a regular user. As far as I know, at this point, any limits set in /etc/security/limits.conf do not apply; the setuid-ed process may now fork bomb. Any ideas how to prevent this?

    Read the article

  • How to see if an object is an array?

    - by edbras
    How can I see in Java if an Object is an array without using reflection? And how can I iterate through all its items without using reflection? I use Google GWT, so I am not allowed to use reflection :( I would love to implement the following methods without using reflection: private boolean isArray(final Object obj) { ??.. } private String toString(final Object arrayObject) { ??.. } BTW: I don't want to use JavaScript either, so that I can use this in non-GWT environments. Ed
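    Since GWT supports instanceof but not reflection, one possible approach (a sketch, not necessarily the asker's eventual solution) is to test the object against Object[] plus the eight primitive array types explicitly:

        public final class ArrayUtil {

            private ArrayUtil() { }

            // True if obj is an array of any type: Object[] matches every array of
            // reference types, and the eight primitive array types are listed explicitly.
            public static boolean isArray(final Object obj) {
                return obj instanceof Object[]
                    || obj instanceof boolean[] || obj instanceof byte[]
                    || obj instanceof short[]   || obj instanceof char[]
                    || obj instanceof int[]     || obj instanceof long[]
                    || obj instanceof float[]   || obj instanceof double[];
            }

            // Renders a reference array as "[a, b, c]"; each primitive array type would
            // need an analogous branch, and non-arrays fall back to String.valueOf.
            public static String toString(final Object arrayObject) {
                if (arrayObject instanceof Object[]) {
                    final Object[] arr = (Object[]) arrayObject;
                    final StringBuilder sb = new StringBuilder("[");
                    for (int i = 0; i < arr.length; i++) {
                        if (i > 0) {
                            sb.append(", ");
                        }
                        sb.append(arr[i]);
                    }
                    return sb.append("]").toString();
                }
                return String.valueOf(arrayObject);
            }
        }

    One limitation worth noting: int[] and the other primitive array types are not Object[], which is why each needs its own instanceof branch.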

    Read the article

  • Alternatives to fread and fwrite for use with structured data

    - by forest58
    The book Beginning Linux Programming (3rd ed) says "Note that fread and fwrite are not recommended for use with structured data. Part of the problem is that files written with fwrite are potentially nonportable between different machines." What does that mean exactly? What calls should I use if I want to write a portable structured data reader or writer? Direct system calls?
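    The question is about C, but the underlying fix is language-independent: write and read each field explicitly, with a fixed size and byte order, instead of dumping the in-memory struct (whose padding, endianness, and type sizes differ between machines and compilers). As a small illustration of that idea, here is a hedged sketch in Java, whose DataOutputStream/DataInputStream always use big-endian, fixed-width encodings; the Record class is hypothetical:

        import java.io.*;

        public class PortableRecordIO {

            // A hypothetical record with a 32-bit id and a 64-bit timestamp.
            static final class Record {
                int id;
                long timestamp;
            }

            // Each field is written explicitly: 4 bytes big-endian, then 8 bytes big-endian.
            static void write(Record r, DataOutputStream out) throws IOException {
                out.writeInt(r.id);
                out.writeLong(r.timestamp);
            }

            // Reading mirrors the writes field by field, in the same order.
            static Record read(DataInputStream in) throws IOException {
                Record r = new Record();
                r.id = in.readInt();
                r.timestamp = in.readLong();
                return r;
            }

            public static void main(String[] args) throws IOException {
                Record r = new Record();
                r.id = 42;
                r.timestamp = 1234567890L;
                try (DataOutputStream out = new DataOutputStream(new FileOutputStream("record.bin"))) {
                    write(r, out);
                }
                try (DataInputStream in = new DataInputStream(new FileInputStream("record.bin"))) {
                    Record back = read(in);
                    System.out.println(back.id + " " + back.timestamp);
                }
            }
        }

    The C equivalent would be a pair of functions that write and read each field with a defined width and byte order (for example via htonl/ntohl), or a text-based format.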

    Read the article

  • special character in UNIX

    - by Happy Mittal
    I want to add a backspace character literally to my file named junk. So I did the following:
    $ ed
    a
    my name is happy\b   (here \b means I typed a backspace, so the \ disappeared and the cursor sat after the y)
    .
    w junk
    q
    But when I do $ od -cb junk it doesn't show a backspace.

    Read the article

  • error in encryption program

    - by Raja
    #include<iostream>
    #include<math.h>
    #include<string>
    using namespace std;

    int gcd(int n, int m)
    {
        if (m <= n && n % m == 0)
            return m;
        if (n < m)
            return gcd(m, n);
        else
            return gcd(m, n % m);
    }

    int REncryptText(char m)
    {
        int p = 11, q = 3;
        int e = 3;
        int n = p * q;
        int phi = (p - 1) * (q - 1);
        int check1 = gcd(e, p - 1);
        int check2 = gcd(e, q - 1);
        int check3 = gcd(e, phi);
        //
        // Compute d such that ed = 1 (mod phi)
        // i.e. compute d = e^-1 mod phi = 3^-1 mod 20
        // i.e. find a value for d such that phi divides (ed-1)
        // i.e. find d such that 20 divides 3d-1.
        // Simple testing (d = 1, 2, ...) gives d = 7
        // double d = Math.Pow(e, -1) % phi;
        int d = 7;
        // public key = (n,e)
        // (33,3)
        // private key = (n,d)
        // (33,7)
        double g = pow(m, e);
        int ciphertext = g % n;
        // Now say we want to encrypt the message m = 7, c = m^e mod n = 7^3 mod 33 = 343 mod 33 = 13.
        // Hence the ciphertext c = 13.
        // double decrypt = Math.Pow(ciphertext, d) % n;
        return ciphertext;
    }

    int main()
    {
        char plaintext[80], str[80];
        cout << " enter the text you want to encrpt";
        cin.get(plaintext, 79);
        int l = strlen(plaintext);
        for (int i = 0; i < l; i++)
        {
            char s = plaintext[i];
            str[i] = REncryptText(s);
        }
        for (int i = 0; i < l; i++)
        {
            cout << "the encryption of string" << endl;
            cout << str[i];
        }
        return 0;
    }

    Read the article

  • Eclipse with JSR 250 (annotation) yields "Access Restriction" errors.

    - by edstafford
    Hi, Hopefully someone has come across this before. I'm running Spring STS 2.3.0 and when attempting to use the @Resource annotation from javax.annotations.Resource I get "Access restriction: The type Resource is not accessible due to restriction on required library". I'm using the JDK 6u18. I've tried changing the JDK Compliance to 1.5 and 1.6 and both yield the same error. Cheers, -Ed

    Read the article

  • which header do i need to send to browser when responding with flash file

    - by user63898
    Hello all, I built a simple single-threaded web server that I embedded into my application in Qt C++. This server responds fine with simple HTML pages, but when I try to respond with a Flash file embedded inside the HTML, the whole HTML string just gets printed to the browser. My question is: what headers and HTTP responses do I need so that I can serve Flash content to the browser? Thanks
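    The excerpt doesn't show the poster's response code, but the symptom (markup printed as plain text) usually means the Content-Type header is missing or wrong. The fix is the same in any language: send a Content-Type that matches the resource, for example text/html for the page and application/x-shockwave-flash for the .swf file, plus a Content-Length. A minimal sketch using the JDK's built-in HttpServer, purely to illustrate the headers (the poster's actual server is Qt/C++, and the "www" document root here is a made-up example):

        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpServer;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class FlashServer {

            // Pick a Content-Type from the file extension; the browser relies on this
            // header to decide whether to render HTML, hand the data to the Flash
            // plugin, or just display it as text.
            static String contentTypeFor(String name) {
                if (name.endsWith(".html") || name.endsWith(".htm")) return "text/html; charset=utf-8";
                if (name.endsWith(".swf")) return "application/x-shockwave-flash";
                return "application/octet-stream";
            }

            public static void main(String[] args) throws IOException {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
                server.createContext("/", (HttpExchange exchange) -> {
                    // Hypothetical document root; adjust as needed.
                    Path file = Paths.get("www", exchange.getRequestURI().getPath().substring(1));
                    byte[] body = Files.readAllBytes(file);
                    exchange.getResponseHeaders().set("Content-Type", contentTypeFor(file.toString()));
                    exchange.sendResponseHeaders(200, body.length);
                    try (OutputStream out = exchange.getResponseBody()) {
                        out.write(body);
                    }
                });
                server.start();
            }
        }

    In the Qt server the equivalent is simply writing the same header lines (Content-Type and Content-Length) before the blank line and the body of each response.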

    Read the article

  • JarOutputStream put parent folder before my-wanted folder

    - by adhitya kristanto
    I tried to make a jar with the code from "How to use JarOutputStream to create a JAR file?", but that code always puts a new parent folder above my input file/folder before it is inserted into the .jar. Folder path that I want added to the jar: C:/Trial/MyFolder. Folder that I want inside MyJar.jar: MyFolder. But the folder that ends up in MyJar.jar: Trial. Does it have to be done that way? Thanks. Here is the code:

    import EditorXML.GlobalStatus.GlobalStatus;
    import java.io.BufferedInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.jar.JarEntry;
    import java.util.jar.JarOutputStream;

    /**
     *
     * @author Photosphere
     */
    public class JarCreatingProgram {

        public JarCreatingProgram() {
        }

        public void proceedNow() throws IOException {
            File ed = new File("C:/Trial/EditingDini");
            File meta = new File("C:/Trial/META-INF");
            File net = new File("C:/Trial/net");
            File org = new File("C:/Trial/org");
            JarOutputStream target = new JarOutputStream(new FileOutputStream("D:/EditingDiniApp.jar"));
            add(ed, target);
            add(meta, target);
            add(net, target);
            add(org, target);
            target.close();
        }

        private static void add(File source, JarOutputStream target) throws IOException {
            BufferedInputStream in = null;
            try {
                if (source.isDirectory()) {
                    String name = source.getPath().replace("\\", "/");
                    if (!name.isEmpty()) {
                        if (!name.endsWith("/"))
                            name += "/";
                        JarEntry entry = new JarEntry(name);
                        entry.setTime(source.lastModified());
                        target.putNextEntry(entry);
                        System.out.println("ENTRY DALAM IF: " + entry.toString());
                        target.closeEntry();
                    }
                    for (File nestedFile : source.listFiles())
                        add(nestedFile, target);
                    return;
                }
                JarEntry entry = new JarEntry(source.getPath().replace("\\", "/"));
                entry.setTime(source.lastModified());
                target.putNextEntry(entry);
                in = new BufferedInputStream(new FileInputStream(source));
                System.out.println("ENTRY: " + entry.toString());
                byte[] buffer = new byte[1024];
                while (true) {
                    int count = in.read(buffer);
                    if (count == -1)
                        break;
                    target.write(buffer, 0, count);
                }
                target.closeEntry();
            } finally {
                if (in != null)
                    in.close();
            }
        }
    }
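    One way to get MyFolder rather than Trial into the jar (a sketch, not the poster's final code) is to compute every entry name relative to a chosen base directory instead of using the file's full path, for example with URI.relativize:

        import java.io.BufferedInputStream;
        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.jar.JarEntry;
        import java.util.jar.JarOutputStream;

        public class RelativeJarWriter {

            public static void main(String[] args) throws IOException {
                File base = new File("C:/Trial");          // parent folder to strip from entry names
                File source = new File(base, "MyFolder");  // folder that should appear in the jar
                try (JarOutputStream target = new JarOutputStream(new FileOutputStream("D:/MyJar.jar"))) {
                    add(base, source, target);
                }
            }

            private static void add(File base, File source, JarOutputStream target) throws IOException {
                // Entry name relative to the base directory, with forward slashes.
                String name = base.toURI().relativize(source.toURI()).getPath();
                if (source.isDirectory()) {
                    if (!name.isEmpty()) {
                        JarEntry dirEntry = new JarEntry(name.endsWith("/") ? name : name + "/");
                        dirEntry.setTime(source.lastModified());
                        target.putNextEntry(dirEntry);
                        target.closeEntry();
                    }
                    for (File nested : source.listFiles()) {
                        add(base, nested, target);
                    }
                    return;
                }
                JarEntry entry = new JarEntry(name);
                entry.setTime(source.lastModified());
                target.putNextEntry(entry);
                try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(source))) {
                    byte[] buffer = new byte[1024];
                    int count;
                    while ((count = in.read(buffer)) != -1) {
                        target.write(buffer, 0, count);
                    }
                }
                target.closeEntry();
            }
        }

    Relativizing against C:/Trial means the entry for C:/Trial/MyFolder/file.txt is written as MyFolder/file.txt, so the Trial folder never appears inside the jar.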

    Read the article

  • Asp.Net Scraping Grid Pages

    - by SH
    I need code in C#. I am trying to post to the search.aspx page, which contains an ASP.NET grid. When the grid is rendered it loads the very first page on the screen, and there are a number of pages listed in the grid header. I scrape the first page, and now I want to move on to the next page. All this is being done using the following code:
    HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create("http://pubrec3.hillsclerk.com/oncore/search.aspx?" + param);
    myRequest.Method = "GET";
    myRequest.KeepAlive = false;
    HttpWebResponse webresponse;
    try
    {
        webresponse = (HttpWebResponse)myRequest.GetResponse();
        Encoding enc = System.Text.Encoding.GetEncoding(1252);
        StreamReader loResponseStream = new StreamReader(webresponse.GetResponseStream(), enc);
        string r = loResponseStream.ReadToEnd();
        loResponseStream.Close();
        webresponse.Close();
        //if (GetRecordCount(r))
        ExtractResultTable(bd, ed, r);
    }
    catch (Exception ex)
    {
    }
    The above code grabs the first page, and now I have to move on to the next page. This is the link which produces a grid with 3 pages: http://pubrec3.hillsclerk.com/oncore/search.aspx?bd=01/01/2008&ed=12/31/2008&bt=O&lb=1000000&ub=1000000000&d=5/6/2010&pt=-1&dt=D,%20MTG&st=consideration Using the above code I need to load the 2nd page with the same search criteria but with the records found on the 2nd page, and so on. I know there is a trick to navigate through grid pages: you can pass __EVENTTARGET and __EVENTARGUMENT in the query string. I used it, but it does not work on this page. I am desperately searching for a way to cope with this website; I really want this done. I do not want code, just a way to navigate through the grid using the query string, if that works at all. Otherwise, please be specific to the problem.

    Read the article

  • Page Fault Interrupt Problems

    - by Vikas
    This is a statement referring to a problem caused by a page fault (from Silberschatz, 7th ed., p. 310, last paragraph): 'We can't simply restart instructions when an instruction modifies several different locations.' For example, when an instruction moves 256 bytes from source to destination and either the source or the destination straddles a page boundary, then, after a partial move, if a page fault occurs, 'we can't simply restart the instruction.' My question is: why not? Simply restart the instruction and do the same copy again once the page is in. Is there any problem with that?
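    The case the textbook is worried about is an overlapping move: once the instruction has partially executed, it has already overwritten part of its own source, so re-running it from the beginning after the page fault no longer produces the same result as one uninterrupted execution. A small simulation of that effect (an illustration of the argument, not OS code):

        import java.util.Arrays;

        public class PartialMoveDemo {

            // Byte-by-byte forward move of bytes from src to dst, for indices
            // [from, to); stopping early models a page fault in mid-instruction.
            static void move(byte[] mem, int src, int dst, int from, int to) {
                for (int i = from; i < to; i++) {
                    mem[dst + i] = mem[src + i];
                }
            }

            public static void main(String[] args) {
                byte[] once = {1, 2, 3, 4, 5, 6, 7, 8};
                byte[] retried = once.clone();
                int src = 2, dst = 0, len = 6;   // source and destination overlap

                // One uninterrupted execution of the move instruction.
                move(once, src, dst, 0, len);

                // The same move, "faulting" halfway through and then naively
                // restarted from the beginning after the page is brought in.
                move(retried, src, dst, 0, len / 2);
                move(retried, src, dst, 0, len);

                System.out.println(Arrays.toString(once));     // [3, 4, 5, 6, 7, 8, 7, 8]
                System.out.println(Arrays.toString(retried));  // [5, 4, 5, 6, 7, 8, 7, 8]
            }
        }

    The two results differ because the partial move already replaced some source bytes before the restart re-read them. Commonly cited workarounds are to touch all the pages involved before the move begins, or to save the overwritten values in temporary registers so the old state can be restored before the trap.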

    Read the article

  • How to find padding space around the label text?

    - by Ganesh
    I am adding label controls at run time. I can add or modify the text in a label and can change the font as well. Now my problem is: when I change the font of the text in a label, the alignment is not proper, due to padding. Is there any way to find how much padding is ocured [ed: obscured?] based on the font?

    Read the article

  • Navigate through .Net Grid

    - by SH
    Is there a way to navigate through the grid found on this page by passing parameters in the query string? http://pubrec3.hillsclerk.com/oncore/search.aspx?bd=01/01/2008&ed=12/31/2008&bt=O&lb=1000000&ub=1000000000&d=5/6/2010&pt=-1&dt=D,%20MTG&st=consideration Or any code suggestion? Any example would ideally be in C#.

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2, In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, Let’s start by understanding the components of Azure we’ll be making use of followed by manually putting them together to create the test rig, so… let’s get down dirty start setting up the Test Rig.  What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we’ll make use of Windows Azure Compute and to enable communication between Test Controller and Test Agents we’ll make use of Windows Azure Connect.  Azure Connect The Test Controller is on premise and the Test Agents are in the cloud (How will they talk?). To enable communication between the two, we’ll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure connect. A very useful video explaining everything you wanted to know about Windows Azure connect.  Azure Compute Windows Azure compute provides developers a platform to host and manage applications in Microsoft’s data centres across the globe. A Windows Azure application is built from one or more components called ‘roles.’ Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role, we’ll be using the Worker role to set up the Test Agents. A very nice blog post discussing the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server... Virtual Machine Size CPU Cores Memory Cost Per Hour Extra Small Shared 768 MB $0.04 Small 1 1.75 GB $0.12 Medium 2 3.50 GB $0.24 Large 4 7.00 GB $0.48 Extra Large 8 14.00 GB $0.96   You might want to review the Windows Azure Pricing FAQ. Let’s Get Started building the Test Rig… Configuration Machine Role Comments VM – 1 Domain Controller for Playpit.com On Premise VM – 2 TFS, Test Controller On Premise VM – 3 Test Agent Cloud   In this blog post I would assume that you have the domain, Team Foundation Server and Test Controller Installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog, Brian also has some great hands on Labs on TFS 2010 that you may want to explore. I. Lets start building VM – 3: The Test Agent Download the Windows Azure SDK and Tools Open Visual Studio and create a new Windows Azure Project using the Cloud Template                   Choose the Worker Role for reasons explained in the earlier post         The WorkerRole.cs implements the Run() and OnStart() methods, no code changes required. You should be able to compile the project and run it in the compute emulator (The compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.                   
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure connect module in the Test Agent configuration, now the connect end points need to be enabled on the on premise machines, let’s have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller Log on to the Windows Azure Management Portal and click on Virtual Network Click on Virtual Network, if you already have a subscription you should see the below screen shot, if not, you would be asked to complete the subscription first        Click on Install Local Endpoints from the top left on the panel and you get a url appended with a token id in it, remember the token i showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file.        Copy the url to the clip board and paste it in IE explorer (important, the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later, you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure connect icon in the system tray.                         Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure connect icon in the system tray, a bit of binging with Google revealed that the azure connect icon is only shown when the ‘Windows Azure Connect Endpoint’ Service is started. So go to services.msc and make sure that the service is started, if not start it, unfortunately again, the service did not start for me on a manual start and i realised that one of the dependant services was disabled, you can look at the service dependencies and start them and then start windows azure connect. Bottom line, you need to start Windows Azure connect service before you can proceed. Please refer here on MSDN for more on Troubleshooting Windows Azure connect. (Follow the next step as well)   Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, lets call it ‘Test Rig’. Make sure you add the VM – 2 (the TFS Server VM where you just installed the endpoint).       Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’ you will notice that the disconnected status of the icon should change to ready for connection. III. Importing Certificate in to Windows Azure Management Portal But before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on ‘Hosted Services, Storage Accounts & CDN’ and then ‘Management Certificates’ followed by Add Certificates as shown in the screen shot below        Browse to the location where you saved the certificate earlier, remember… Refer to Step I in case you forgot.        Now you should be able to see the imported certificate here, make sure the thumbprint of the certificate matches the one you inserted in the config files        IV. Publish Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio and right click the Windows Azure project and select Publish. Verify the infomration in the wizard, from the advanced settings tab, you can also enabled capture of intellitrace or profiling information.         Click Next and Click Publish! 
From the view menu bar select the Windows Azure Activity Log window.       Now you should be able to see the deployment progress in real time.             In the Windows Azure Management Portal, you should also be able to see the progress of creation of a new Worker Role.       Once the deployment is complete you should be able to RDP (go to run prompt type mstsc and in the pop up the machine name) in to the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the connect module, you can connect to the Test Agent machine using Windows Azure Management Portal and troubleshoot the reason for failure, you will be able to log in with the user name and password you specified in the config file for the keys ‘RemoteAccess.AccountUsername, RemoteAccess.EncryptedPassword (just that enter the password unencrypted)’, fix it or manually join the machine to the domain. Once you have managed to Join the Test Agent VM to the Domain move to the next step.      So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, the machine is connected to the domain and the connect service is successfully running. If yes, give your self a pat on the back, you are 80% mission accomplished!         Go to the Windows Azure Management Portal and click on Virtual Network, click on Groups and Roles and click on Test Rig, click Edit Group, the edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check the ‘Allow connections between endpoints in the group’ with this you will enable to communication between test controller and test agents and test agents/test agents. Click Save.      Now, you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller. V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller Log in to the Worker Role Test Agent VM that you have just successfully deployed, make sure you log in with the domain administrator account. Download the All Agents software from MSDN, ‘en_visual_studio_agents_2010_x86_x64_dvd_509679.iso’, extract the iso and navigate to where you have extracted the iso. In my case, i have extracted the iso to “C:\Resources\Temp\VsAgentSetup”. Open the Test Agent folder and double click on setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing TFS Test Agent on the VM, refer to the walkthrough on MSDN.       Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run as a different user. i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block something's otherwise).        In the run options, you can select ‘service’ you do not need to run the agent as interactive un less you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, i would never do that, i would create a separate test user service account for this purpose. 
But for the blog post, we are using the most powerful user so that any policies or restrictions don’t block you.        Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve and proceed. As per my experience, you may run in to either a permission or a firewall blocking communication issue.        And now the moment of truth! Go to VM –2 open up Visual Studio and from the Test Menu select Manage Test Controller       Mission Accomplished! You should be able to see the Test Agent that you have just configured here,         VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig I have various blog posts on Performance Testing with Visual Studio Ultimate, you can follow the links and videos below, Blog Posts: - Part 1 – Performance Testing using Visual Studio 2010 Ultimate - Part 2 – Performance Testing using Visual Studio 2010 Ultimate - Part 3 – Performance Testing using Visual Studio 2010 Ultimate Videos: - Test Tools Configuration & Settings in Visual Studio - Why & How to Record Web Performance Tests in Visual Studio Ultimate - Goal Driven Load Testing using Visual Studio Ultimate Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig, create a new Test settings file, and change the Test Execution method to ‘Remote Execution’ and select the test controller you have configured the Worker Role Test Agent against in our case VM – 2 So, go on, fire off a test run and see the results of the test being executed on the Azur-ed Test Rig. Review and What’s next? A quick recap of the benefits of running the Test Rig in the cloud and what i will be covering in the next blog post AND I would love to hear your feedback! Advantages Utilizing the power of Azure compute to run a heavy virtual user load. Benefiting from the Azure flexibility, destroy Test Agents when not in use, takes < 25 minutes to spin up a new Test Agent. Most important test Network Latency, (network latency and speed of connection are two different things – usually network latency is very hard to test), by placing the Test Agents in Microsoft Data centres around the globe, one can actually test the lag in transferring the bytes not because of a slow connection but because the page has been requested from the other side of the globe. Next Steps The process of spinning up the Test Agents in windows Azure is not 100% automated. I am working on the Worker process and power shell scripts to make the role deployment, unattended install of test agent software and registration of the test agent to the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. See you in Part III.   Share this post : CodeProject

    Read the article

  • Issue in setting up VPN connection (IKEv1) using android (ICS vpn client) with Strongswan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues in setting up VPN connection(IKEv1) using android (ICS vpn client) and Strongswan 4.5.0 server. Below is the set up: Strongswan server is running on ubuntu linux machine which is connected to some wifi hotspot. Using the steps in this guide link, I generated CA, server and client certificate. Once certificates are generated, following (clientCert.p12 and caCert.pem) are sent to mobile via mail and installed on android device. Below are the ip addresses assigned to various interfaces Linux server wlan0 interface ip where server is running: 192.168.43.212, android device eth0 interface ip address: 192.168.43.62; Android device is also attached with the same wifi hotspot. On the Android device, I uses IPsec Xauth RSA option for setting up VPN authentication configuration. I am using the following ipsec.conf configuration: # basic configuration config setup plutodebug=all # crlcheckinterval=600 # strictcrlpolicy=yes # cachecrls=yes nat_traversal=yes # charonstart=yes plutostart=yes # Add connections here. # Sample VPN connections conn ios1 keyexchange=ikev1 authby=xauthrsasig xauth=server left=%defaultroute leftsubnet=0.0.0.0/0 leftfirewall=yes leftcert=serverCert.pem right=192.168.43.62 rightsubnet=10.0.0.0/24 rightsourceip=10.0.0.2 rightcert=clientCert.pem pfs=no auto=add      With the above configurations when I enable VPN on android device, VPN connection is not successful and it gets timed out in Authentication phase. I ran wireshark on both the android device and strongswan server, from the tcpdump below are the observations. Initially Identity Protection (Main mode) exchanges happens between device and server and all are successful. After all successful Identity Protection (Main mode) exchanges server is sending Transaction (Config mode) to device. In reply android device is sending Informational message instead of Transaction (Config mode) message. Further server is keep on sending Transaction (Config mode) message and device is again sending Identity Protection (Main mode) messages. Finally timeout happens and connection fails. I also capture Strongswan server logs and below are the snippets from the server logs which also verifies the same(described above). Apr 27 21:09:40 Linux pluto[12105]: | **parse ISAKMP Message: Apr 27 21:09:40 Linux pluto[12105]: | initiator cookie: Apr 27 21:09:40 Linux pluto[12105]: | 06 fd 61 b8 86 82 df ed Apr 27 21:09:40 Linux pluto[12105]: | responder cookie: Apr 27 21:09:40 Linux pluto[12105]: | 73 7a af 76 74 f0 39 8b Apr 27 21:09:40 Linux pluto[12105]: | next payload type: ISAKMP_NEXT_HASH Apr 27 21:09:40 Linux pluto[12105]: | ISAKMP version: ISAKMP Version 1.0 Apr 27 21:09:40 Linux pluto[12105]: | exchange type: ISAKMP_XCHG_INFO Apr 27 21:09:40 Linux pluto[12105]: | flags: ISAKMP_FLAG_ENCRYPTION Apr 27 21:09:40 Linux pluto[12105]: | message ID: a2 80 ad 82 Apr 27 21:09:40 Linux pluto[12105]: | length: 92 Apr 27 21:09:40 Linux pluto[12105]: | ICOOKIE: 06 fd 61 b8 86 82 df ed Apr 27 21:09:40 Linux pluto[12105]: | RCOOKIE: 73 7a af 76 74 f0 39 8b Apr 27 21:09:40 Linux pluto[12105]: | peer: c0 a8 2b 3e Apr 27 21:09:40 Linux pluto[12105]: | state hash entry 25 Apr 27 21:09:40 Linux pluto[12105]: | state object not found Apr 27 21:09:40 Linux pluto[12105]: packet from 192.168.43.62:500: Informational Exchange is for an unknown (expired?) SA Apr 27 21:09:40 Linux pluto[12105]: | next event EVENT_RETRANSMIT in 10 seconds for #9 Can anyone please provide update on this issue. 
    Why does the VPN connection get timed out, and why are the ISAKMP exchanges not proper between Android and the strongswan server?

    Read the article

  • Is this valid JFS partition?

    - by Coolmax
    This is my first question on StackExchange. My teacher gave my his laptop (with Fedora 16 on it) and compact flash card with data. He want to have access to files on card, but he couldn't get access to it. The problem is Linux don't know what type of partion is. I suppose there is JFS: root@debian:~# dmesg |grep sdc [ 9066.908223] sd 3:0:0:1: [sdc] 3940272 512-byte logical blocks: (2.01 GB/1.87 GiB) [ 9066.962307] sd 3:0:0:1: [sdc] Write Protect is off [ 9066.962310] sd 3:0:0:1: [sdc] Mode Sense: 03 00 00 00 [ 9066.962312] sd 3:0:0:1: [sdc] Assuming drive cache: write through [ 9067.028420] sd 3:0:0:1: [sdc] Assuming drive cache: write through [ 9067.028637] sdc: unknown partition table [ 9067.097065] sd 3:0:0:1: [sdc] Assuming drive cache: write through [ 9067.097281] sd 3:0:0:1: [sdc] Attached SCSI removable disk and some of data: root@debian:~# hexdump -Cn 65536 /dev/sdc 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00008000 4a 46 53 31 01 00 00 00 48 63 0e 00 00 00 00 00 |JFS1....Hc......| 00008010 00 10 00 00 0c 00 03 00 00 02 00 00 09 00 00 00 |................| 00008020 00 20 00 00 00 09 20 10 02 00 00 00 00 00 00 00 |. .... .........| 00008030 04 00 00 00 26 00 00 00 02 00 00 00 24 00 00 00 |....&.......$...| 00008040 41 03 00 00 16 00 00 00 00 02 00 00 a0 cc 01 00 |A...............| 00008050 37 00 00 00 69 cc 01 00 b6 d8 ac 4b 00 00 00 00 |7...i......K....| 00008060 32 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 |2...............| 00008070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00008080 00 00 00 00 00 00 00 00 90 15 e5 5f e3 c4 45 fa |..........._..E.| 00008090 9d 6a 5c b5 4f da 62 1a 00 00 00 00 00 00 00 00 |.j\.O.b.........| 000080a0 00 00 00 00 00 00 00 00 c3 c9 01 00 ed 81 00 00 |................| 000080b0 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000080c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * [cut] * 0000f000 4a 46 53 31 01 00 00 00 48 63 0e 00 00 00 00 00 |JFS1....Hc......| 0000f010 00 10 00 00 0c 00 03 00 00 02 00 00 09 00 00 00 |................| 0000f020 00 20 00 00 00 09 20 10 00 00 00 00 00 00 00 00 |. .... .........| 0000f030 04 00 00 00 26 00 00 00 02 00 00 00 24 00 00 00 |....&.......$...| 0000f040 41 03 00 00 16 00 00 00 00 02 00 00 a0 cc 01 00 |A...............| 0000f050 37 00 00 00 69 cc 01 00 b6 d8 ac 4b 00 00 00 00 |7...i......K....| 0000f060 32 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 |2...............| 0000f070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 0000f080 00 00 00 00 00 00 00 00 90 15 e5 5f e3 c4 45 fa |..........._..E.| 0000f090 9d 6a 5c b5 4f da 62 1a 00 00 00 00 00 00 00 00 |.j\.O.b.........| 0000f0a0 00 00 00 00 00 00 00 00 c3 c9 01 00 ed 81 00 00 |................| 0000f0b0 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 0000f0c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00010000 I'm total newbie to filesystems. I googled and found that JFS superblock may starts on 0x8000 offset. But what next? How to mount this card? If there would be normal partition table I would expect 55 AA on 510th and 511th byte, but first 8000 bytes are clear. Any help would be greatly appreciated. And sorry for my bad english :) Kind regards.

    Read the article

  • Keeping track of File System Utilization in Ops Center 12c

    - by S Stelting
    Enterprise Manager Ops Center 12c provides significant monitoring capabilities, combined with very flexible incident management. These capabilities even extend to monitoring the file systems associated with Solaris or Linux assets. Depending on your needs you can monitor and manage incidents, or you can fine tune alert monitoring rules to specific file systems. This article will show you how to use Ops Center 12c to Track file system utilization Adjust file system monitoring rules Disable file system rules Create custom monitoring rules If you're interested in this topic, please join us for a WebEx presentation! Date: Thursday, November 8, 2012 Time: 11:00 am, Eastern Standard Time (New York, GMT-05:00) Meeting Number: 598 796 842 Meeting Password: oracle123 To join the online meeting ------------------------------------------------------- 1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&RT=MiMxMQ%3D%3D 2. If requested, enter your name and email address. 3. If a password is required, enter the meeting password: oracle123 4. Click "Join". To view in other time zones or languages, please click the link: https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&ORT=MiMxMQ%3D%3D   Monitoring File Systems for OS Assets The Libraries tab provides basic, device-level information about the storage associated with an OS instance. This tab shows you the local file system associated with the instance and any shared storage libraries mounted by Ops Center. More detailed information about file system storage is available under the Analytics tab under the sub-tab named Charts. Here, you can select and display the individual mount points of an OS, and export the utilization data if desired: In this example, the OS instance has a basic root file partition and several NFS directories. Each file system mount point can be independently chosen for display in the Ops Center chart. File Systems and Incident  Reporting Every asset managed by Ops Center has a "monitoring policy", which determines what represents a reportable issue with the asset. The policy is made up of a bunch of monitoring rules, where each rule describes An attribute to monitor The conditions which represent an issue The level or levels of severity for the issue When the conditions are met, Ops Center sends a notification and creates an incident. By default, OS instances have three monitoring rules associated with file systems: File System Reachability: Triggers an incident if a file system is not reachable NAS Library Status: Triggers an incident for a value of "WARNING" or "DEGRADED" for a NAS-based file system File System Used Space Percentage: Triggers an incident when file system utilization grows beyond defined thresholds You can view these rules in the Monitoring tab for an OS: Of course, the default monitoring rules is that they apply to every file system associated with an OS instance. As a result, any issue with NAS accessibility or disk utilization will trigger an incident. This can cause incidents for file systems to be reported multiple times if the same shared storage is used by many assets, as shown in this screen shot: Depending on the level of control you'd like, there are a number of ways to fine tune incident reporting. Note that any changes to an asset's monitoring policy will detach it from the default, creating a new monitoring policy for the asset. 
If you'd like, you can extract a monitoring policy from an asset, which allows you to save it and apply the customized monitoring profile to other OS assets. Solution #1: Modify the Reporting Thresholds In some cases, you may want to modify the basic conditions for incident reporting in your file system. The changes you make to a default monitoring rule will apply to all of the file systems associated with your operating system. Selecting the File Systems Used Space Percentage entry and clicking the "Edit Alert Monitoring Rule Parameters" button opens a pop-up dialog which allows you to modify the rule. The first screen lets you decide when you will check for file system usage, and how long you will wait before opening an incident in Ops Center. By default, Ops Center monitors continuously and reports disk utilization issues which exist for more than 15 minutes. The second screen lets you define actual threshold values. By default, Ops Center opens a Warning level incident is utilization rises above 80%, and a Critical level incident for utilization above 95% Solution #2: Disable Incident Reporting for File System If you'd rather not report file system incidents, you can disable the monitoring rules altogether. In this case, you can select the monitoring rules and click the "Disable Alert Monitoring Rule(s)" button to open the pop-up confirmation dialog. Like the first solution, this option affects all file system monitoring. It allows you to completely disable incident reporting for NAS library status or file system space consumption. Solution #3: Create New Monitoring Rules for Specific File Systems If you'd like to have the greatest flexibility when monitoring file systems, you can create entirely new rules. Clicking the "Add Alert Monitoring Rule" (the icon with the green plus sign) opens a wizard which allows you to define a new rule.  This rule will be based on a threshold, and will be used to monitor operating system assets. We'd like to add a rule to track disk utilization for a specific file system - the /nfs-guest directory. To do this, we specify the following attribute FileSystemUsages.name=/nfs-guest.usedSpacePercentage The value of name in the attribute allows us to define a specific NFS shared directory or file system... in the case of this OS, we could have chosen any of the values shown in the File Systems Utilization chart at the beginning of this article. usedSpacePercentage lets us define a threshold based on the percentage of total disk space used. There are a number of other values that we could use for threshold-based monitoring of FileSystemUsages, including freeSpace freeSpacePercentage totalSpace usedSpace usedSpacePercentage The final sections of the screen allow us to determine when to monitor for disk usage, and how long to wait after utilization reaches a threshold before creating an incident. The next screen lets us define the threshold values and severity levels for the monitoring rule: If historical data is available, Ops Center will display it in the screen. Clicking the Apply button will create the new monitoring rule and active it in your monitoring policy. If you combine this with one of the previous solutions, you can precisely define which file systems will generate incidents and notifications. For example, this monitoring policy has the default "File System Used Space Percentage" rule disabled, but the new rule reports ONLY on utilization for the /nfs-guest directory. Stay Connected: Twitter |  Facebook |  YouTube |  Linkedin |  Newsletter

    Read the article

  • CDN on Hosted Service in Windows Azure

    - by Shaun
    Yesterday I told Wang Tao, an annoying colleague sitting beside me, about how to make the static content enable the CDN in his website which had just been published on Windows Azure. The approach would be Move the static content, the images, CSS files, etc. into the blob storage. Enable the CDN on his storage account. Change the URL of those static files to the CDN URL. I think these are the very common steps when using CDN. But this morning I found that the new Windows Azure SDK 1.4 and new Windows Azure Developer Portal had just been published announced at the Windows Azure Blog. One of the new features in this release is about the CDN, which means we can enabled the CDN not only for a storage account, but a hosted service as well. Within this new feature the steps I mentioned above would be turned simpler a lot.   Enable CDN for Hosted Service To enable the CDN for a hosted service we just need to log on the Windows Azure Developer Portal. Under the “Hosted Services, Storage Accounts & CDN” item we will find a new menu on the left hand side said “CDN”, where we can manage the CDN for storage account and hosted service. As we can see the hosted services and storage accounts are all listed in my subscriptions. To enable a CDN for a hosted service is veru simple, just select a hosted service and click the New Endpoint button on top. In this dialog we can select the subscription and the storage account, or the hosted service we want the CDN to be enabled. If we selected the hosted service, like I did in the image above, the “Source URL for the CDN endpoint” will be shown automatically. This means the windows azure platform will make all contents under the “/cdn” folder as CDN enabled. But we cannot change the value at the moment. The following 3 checkboxes next to the URL are: Enable CDN: Enable or disable the CDN. HTTPS: If we need to use HTTPS connections check it. Query String: If we are caching content from a hosted service and we are using query strings to specify the content to be retrieved, check it. Just click the “Create” button to let the windows azure create the CDN for our hosted service. The CDN would be available within 60 minutes as Microsoft mentioned. My experience is that about 15 minutes the CDN could be used and we can find the CDN URL in the portal as well.   Put the Content in CDN in Hosted Service Let’s create a simple windows azure project in Visual Studio with a MVC 2 Web Role. When we created the CDN mentioned above the source URL of CDN endpoint would be under the “/cdn” folder. So in the Visual Studio we create a folder under the website named “cdn” and put some static files there. Then all these files would be cached by CDN if we use the CDN endpoint. The CDN of the hosted service can cache some kind of “dynamic” result with the Query String feature enabled. We create a controller named CdnController and a GetNumber action in it. The routed URL of this controller would be /Cdn/GetNumber which can be CDN-ed as well since the URL said it’s under the “/cdn” folder. In the GetNumber action we just put a number value which specified by parameter into the view model, then the URL could be like /Cdn/GetNumber?number=2. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Web; 5: using System.Web.Mvc; 6:  7: namespace MvcWebRole1.Controllers 8: { 9: public class CdnController : Controller 10: { 11: // 12: // GET: /Cdn/ 13:  14: public ActionResult GetNumber(int number) 15: { 16: return View(number); 17: } 18:  19: } 20: } And we add a view to display the number which is super simple. 1: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<int>" %> 2:  3: <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> 4: GetNumber 5: </asp:Content> 6:  7: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> 8:  9: <h2>The number is: <% 1: : Model.ToString() %></h2> 10:  11: </asp:Content> Since this action is under the CdnController the URL would be under the “/cdn” folder which means it can be CDN-ed. And since we checked the “Query String” the content of this dynamic page will be cached by its query string. So if I use the CDN URL, http://az25311.vo.msecnd.net/GetNumber?number=2, the CDN will firstly check if there’s any content cached with the key “GetNumber?number=2”. If yes then the CDN will return the content directly; otherwise it will connect to the hosted service, http://aurora-sys.cloudapp.net/Cdn/GetNumber?number=2, and then send the result back to the browser and cached in CDN. But to be notice that the query string are treated as string when used by the key of CDN element. This means the URLs below would be cached in 2 elements in CDN: http://az25311.vo.msecnd.net/GetNumber?number=2&page=1 http://az25311.vo.msecnd.net/GetNumber?page=1&number=2 The final step is to upload the project onto azure. Test the Hosted Service CDN After published the project on azure, we can use the CDN in the website. The CDN endpoint we had created is az25311.vo.msecnd.net so all files under the “/cdn” folder can be requested with it. Let’s have a try on the sample.htm and c_great_wall.jpg static files. Also we can request the dynamic page GetNumber with the query string with the CDN endpoint. And if we refresh this page it will be shown very quickly since the content comes from the CDN without MCV server side process. We style of this page was missing. This is because the CSS file was not includes in the “/cdn” folder so the page cannot retrieve the CSS file from the CDN URL.   Summary In this post I introduced the new feature in Windows Azure CDN with the release of Windows Azure SDK 1.4 and new Developer Portal. With the CDN of the Hosted Service we can just put the static resources under a “/cdn” folder so that the CDN can cache them automatically and no need to put then into the blob storage. Also it support caching the dynamic content with the Query String feature. So that we can cache some parts of the web page by using the UserController and CDN. For example we can cache the log on user control in the master page so that the log on part will be loaded super-fast. There are some other new features within this release you can find here. And for more detailed information about the Windows Azure CDN please have a look here as well.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

< Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >