Search Results

Search found 32453 results on 1299 pages for 'osr wls oracle service re'.

  • Windows 7 to search a network drive

    - by John
    Is there any way to have Windows 7 clients go to Start and type in the name of a file that is located on a network drive? I have read that this is possible through indexing, but to get through the indexing steps I need to make the files available offline. The network drive I speak of has about 2 TB of files on it. How in the heck can I keep all of that straight? I imagine there would be syncing errors everywhere if I made all of those files available offline, not to mention files being out of date due to the sheer number of them. Anyone have suggestions?

    Read the article

  • After setting ulimit to unlimited, I am not able to log in to the machine

    - by user419534
    As part of a requirement, I had to set the ulimit on one of my machines to unlimited. For this I changed /etc/security/limits.conf as follows:

        # End of file
        oracle soft nofile unlimited
        oracle hard nofile unlimited
        oracle soft nproc 131072
        oracle hard nproc 131072
        oracle soft core unlimited
        oracle hard core unlimited
        oracle soft memlock 50000000
        oracle hard memlock 50000000
        * soft nofile unlimited
        * hard nofile unlimited

    and changed /etc/profile:

        if [ $USER = "oracle" ]; then
            if [ $SHELL = "/bin/ksh" ]; then
                ulimit -p unlimited
                ulimit -n unlimited
            else
                ulimit -u unlimited -n unlimited
            fi
        fi

    I logged out, and now I am not able to connect to the machine at all. Could someone please help me with this?
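    One likely culprit, for what it's worth: on Linux the nofile limit does not accept the value "unlimited", and an invalid limits.conf entry can make pam_limits fail during login, which locks users out. A sketch of a corrected file, where the finite cap of 65536 is an illustrative number rather than something from the original post:

        # nofile must be a finite number; "unlimited" is not a valid value for it
        oracle soft nofile 65536
        oracle hard nofile 65536
        * soft nofile 65536
        * hard nofile 65536

    Reverting /etc/security/limits.conf and /etc/profile from a root shell that is still open (or from single-user mode) is the usual way back into the box.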

    Read the article

  • Network speeds being reported as 4x higher than actual in Windows 7 SP1

    - by Synetech
    Ever since installing Windows 7 SP1, I have noticed that all programs that display my network transfer rate have been exactly 4x higher than they actually are. For example, when I download something from a high-bandwidth web site or through torrents with lots of sources, the download rate indicated is ~5MBps (~40Mbps) even though my Internet connection has a maximum of only 1.5MBps (12Mbps). It is the same situation with the upstream bandwidth: the connection maximum is 64KBps, but I'm seeing up to 256KBps.

    I have tried several different programs for monitoring bandwidth throughput and they all give the same results. I also tried different times and different days, and they always show the rate as being four times too high. My initial thought was that my ISP had increased the speeds (without my noticing), which they have done before. However, I checked my ISP's site and they have not increased the speeds. Moreover, when I look at the speeds in the program actually doing the transfer (e.g., Chrome, µTorrent, etc.), the numbers are in line with the expected values at the same time that the bandwidth monitoring programs are showing the high numbers.

    The only significant change (and pretty much the only change at all) that has occurred to my system since the change was the installation of SP1 for Windows 7. As such, it is my belief that some sort of change exists in SP1 whereby software that accesses the bandwidth via a specific API receives (erroneously?) high numbers, while software with access to the raw data continues to receive the correct values. I booted into Windows XP and downloaded some things via HTTP and torrent, and in both cases the numbers were as expected (like they were in Windows 7 before installing SP1). I then booted back into 7 SP1 and once again the numbers were four times higher than possible. Therefore it is definitely something in SP1 that has changed how local bandwidth is calculated/returned.

    There is definitely something wonky with Windows 7 SP1's network speed calculation. I tried Googling this but (for multiple reasons) have had a difficult time finding anything relevant. Has anybody else noticed this behavior? Does anybody know of any bugs or changes in SP1 that could account for it?

    Read the article

  • Problems installing W2K8 SP2

    - by zagman76
    Hello - we are having difficulty installing SP2 on a W2K8 SP1 x64 server. When we try to do this through Windows Update, it runs for about one minute and then says that it completed successfully (when clearly it did not). When we run the file manually via the download, the install fails with the following error code: 0x800f0823. I have tried running the System Update Readiness Tool (KB947821), but that did not work. Additional info about the server: it is a domain controller, also running MS-Exchange 2007 SP1 (v8.01.0336.000). Any assistance that could be provided would be greatly appreciated. If you need any more info about the server, please let me know! Thanks!

    Read the article

  • Use svcutil to map multiple namespaces when generating WCF service proxies

    - by Pratik
    I want to use svcutil to map multiple WSDL namespaces to CLR namespaces when generating service proxies. I use strong versioning of namespaces, so the generated CLR namespaces are awkward, and a change in the wsdl/xsd namespace version can mean many client-side code changes. A code example shows better what I want.

        // Service code
        namespace TestService.StoreService
        {
            [DataContract(Namespace = "http://mydomain.com/xsd/Model/Store/2009/07/01")]
            public class Address
            {
                [DataMember(IsRequired = true, Order = 0)]
                public string street { get; set; }
            }

            [ServiceContract(Namespace = "http://mydomain.com/wsdl/StoreService-v1.0")]
            public interface IStoreService
            {
                [OperationContract]
                List<Customer> GetAllCustomersForStore(int storeId);

                [OperationContract]
                Address GetStoreAddress(int storeId);
            }

            public class StoreService : IStoreService
            {
                public List<Customer> GetAllCustomersForStore(int storeId) { throw new NotImplementedException(); }
                public Address GetStoreAddress(int storeId) { throw new NotImplementedException(); }
            }
        }

        namespace TestService.CustomerService
        {
            [DataContract(Namespace = "http://mydomain.com/xsd/Model/Customer/2009/07/01")]
            public class Address
            {
                [DataMember(IsRequired = true, Order = 0)]
                public string city { get; set; }
            }

            [ServiceContract(Namespace = "http://mydomain.com/wsdl/CustomerService-v1.0")]
            public interface ICustomerService
            {
                [OperationContract]
                Customer GetCustomer(int customerId);

                [OperationContract]
                Address GetStoreAddress(int customerId);
            }

            public class CustomerService : ICustomerService
            {
                public Customer GetCustomer(int customerId) { throw new NotImplementedException(); }
                public Address GetStoreAddress(int customerId) { throw new NotImplementedException(); }
            }
        }

        namespace TestService.Shared
        {
            [DataContract(Namespace = "http://mydomain.com/xsd/Model/Shared/2009/07/01")]
            public class Customer
            {
                [DataMember(IsRequired = true, Order = 0)]
                public int CustomerId { get; set; }

                [DataMember(IsRequired = true, Order = 1)]
                public string FirstName { get; set; }
            }
        }

    1. svcutil - without namespace mapping

        svcutil.exe /t:metadata TestSvcUtil\bin\debug\TestService.CustomerService.dll TestSvcUtil\bin\debug\TestService.StoreService.dll
        svcutil.exe /t:code *.wsdl *.xsd /o:TestClient\WebServiceProxy.cs

    The generated proxy looks like:

        namespace mydomain.com.xsd.Model.Shared._2009._07._011 { public partial class Customer {} }
        namespace mydomain.com.xsd.Model.Customer._2009._07._011 { public partial class Address {} }
        namespace mydomain.com.xsd.Model.Store._2009._07._011 { public partial class Address {} }

    The client classes fall outside any of my namespaces. Any change to an xsd namespace would mean changing all the using statements in my client code, and every build would break.

    2. svcutil - with wildcard namespace mapping

        svcutil.exe /t:metadata TestSvcUtil\bin\debug\TestService.CustomerService.dll TestSvcUtil\bin\debug\TestService.StoreService.dll
        svcutil.exe /t:code *.wsdl *.xsd /n:*,MyDomain.ServiceProxy /o:TestClient\WebServicesProxy2.cs

    The generated proxy looks like:

        namespace MyDomain.ServiceProxy
        {
            public partial class Customer {}
            public partial class Address {}
            public partial class Address1 {}
            public partial class CustomerServiceClient {}
            public partial class StoreServiceClient {}
        }

    Notice that svcutil has automatically renamed one of the Address classes to Address1. I don't like this, and all the client classes also end up inside the same namespace.

    What I want is something like this:

        svcutil.exe /t:code *.wsdl *.xsd /n:"http://mydomain.com/xsd/Model/Shared/2009/07/01, MyDomain.Model.Shared;http://mydomain.com/xsd/Model/Customer/2009/07/01, MyDomain.Model.Customer;http://mydomain.com/wsdl/CustomerService-v1.0, MyDomain.CustomerServiceProxy;http://mydomain.com/xsd/Model/Store/2009/07/01, MyDomain.Model.Store;http://mydomain.com/wsdl/StoreService-v1.0, MyDomain.StoreServiceProxy" /o:TestClient\WebServiceProxy3.cs

    This way I can logically group the CLR namespaces, and any change to a wsdl/xsd namespace is handled in the proxy generation only, without affecting the rest of the client-side code. Right now this is not possible: svcutil allows mapping only one namespace or all of them, not a list of mappings. I can do a single mapping as shown below, but not multiple:

        svcutil.exe /t:code *.wsdl *.xsd /n:"http://mydomain.com/xsd/Model/Store/2009/07/01, MyDomain.Model.Address" /o:TestClient\WebServiceProxy4.cs

    Is there any solution? svcutil is not magic; it is written in .NET and generates the proxies programmatically. Has anyone written an alternative to svcutil, or can you point me in a direction so that I can write one?
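    For what it's worth, the generation machinery svcutil wraps is public, so an alternative tool is feasible: System.ServiceModel.Description.ServiceContractGenerator exposes a NamespaceMappings dictionary that takes one entry per WSDL namespace, and xsd (data contract) namespaces can be mapped through XsdDataContractImporter's ImportOptions. A minimal hedged sketch; the metadata URL and the particular mappings are assumptions for illustration:

        // Hedged sketch of an svcutil alternative; compile against
        // System.ServiceModel, System.Runtime.Serialization and System.dll.
        using System;
        using System.CodeDom.Compiler;
        using System.IO;
        using System.Runtime.Serialization;
        using System.ServiceModel.Description;

        class ProxyGenerator
        {
            static void Main()
            {
                var mexClient = new MetadataExchangeClient(
                    new Uri("http://localhost:8080/StoreService?wsdl"), // assumed metadata endpoint
                    MetadataExchangeClientMode.HttpGet);
                MetadataSet metadata = mexClient.GetMetadata();

                var importer = new WsdlImporter(metadata);

                // Map xsd (data contract) namespaces to CLR namespaces, one entry each.
                var xsdImporter = new XsdDataContractImporter { Options = new ImportOptions() };
                xsdImporter.Options.Namespaces.Add(
                    "http://mydomain.com/xsd/Model/Shared/2009/07/01", "MyDomain.Model.Shared");
                xsdImporter.Options.Namespaces.Add(
                    "http://mydomain.com/xsd/Model/Store/2009/07/01", "MyDomain.Model.Store");
                importer.State.Add(typeof(XsdDataContractImporter), xsdImporter);

                // Map wsdl (service contract) namespaces to CLR namespaces.
                var generator = new ServiceContractGenerator();
                generator.NamespaceMappings.Add(
                    "http://mydomain.com/wsdl/StoreService-v1.0", "MyDomain.StoreServiceProxy");

                foreach (ContractDescription contract in importer.ImportAllContracts())
                    generator.GenerateServiceContractType(contract);

                using (var writer = new StreamWriter(@"TestClient\WebServiceProxy3.cs"))
                    CodeDomProvider.CreateProvider("C#").GenerateCodeFromCompileUnit(
                        generator.TargetCompileUnit, writer, new CodeGeneratorOptions());
            }
        }

    Registering the XsdDataContractImporter in the WsdlImporter's State collection is what routes the data contract mappings through the import, so both kinds of namespace get their own CLR namespace from code, even though the /n: switch only takes a single pair as the question describes.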

    Read the article

  • Small performance test on a web service

    - by vtortola
    Hi, I'm trying to develop a small application that tests how many requests per second my service can support, but I think I'm doing something wrong. The service is in an early development stage, but I'd like to have this test handy in order to check from time to time that I'm not doing something that decreases performance. The problem is that I cannot get the web server or the database server to go to 100% CPU.

    I'm using three different computers: one runs the web server (WinSrv Standard 2008 x64, IIS7), another the database (Win 2K, SQL Server 2005), and the last is my computer (Win7 x64 Ultimate), where I run the test. The computers are connected through a 100 Mbit Ethernet switch. The request POST is 9 bytes and the response is 842 bytes.

    The test launches several threads, each with a while loop; every iteration creates a WebRequest object, performs a call, increments a common counter, waits between 1 and 5 milliseconds, and then does it again:

        static Int32 counter = 0;

        static void Main(string[] args)
        {
            ServicePointManager.DefaultConnectionLimit = 250;
            Console.WriteLine("Ready. Press any key...");
            Console.ReadKey();
            Console.WriteLine("Running...");
            String localhost = "localhost";
            String linuxmono = "192.168.1.74";
            String server = "192.168.1.5:8080";
            DateTime start = DateTime.Now;
            Random r = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < 50; i++)
            {
                new Thread(new ParameterizedThreadStart(Test)).Start(server);
                Thread.Sleep(r.Next(1, 3));
            }
            Thread.Sleep(2000);
            while (true)
            {
                Console.WriteLine("Requests per second: " + counter / DateTime.Now.Subtract(start).TotalSeconds);
                Thread.Sleep(3000);
            }
        }

        public static void Test(Object ip)
        {
            Guid guid = Guid.NewGuid();
            Random r = new Random(DateTime.Now.Millisecond);
            while (true)
            {
                String test = "<lalala/>";
                WebRequest req = WebRequest.Create("http://" + (String)ip + "/WebApp/" + guid.ToString() + "/Data/Tables=whatever");
                req.Method = "POST";
                req.ContentType = "application/xml";
                req.Credentials = new NetworkCredential("aaa", "aaa", "domain");
                Byte[] array = Encoding.UTF8.GetBytes(test);
                req.ContentLength = array.Length;
                using (Stream reqStream = req.GetRequestStream())
                {
                    reqStream.Write(array, 0, array.Length);
                    reqStream.Close();
                }
                using (Stream responseStream = req.GetResponse().GetResponseStream())
                {
                    String response = new StreamReader(responseStream).ReadToEnd();
                    if (response.Length != 842)
                        Console.Write(" EEEE ");
                }
                Interlocked.Increment(ref counter);
                Thread.Sleep(r.Next(1, 5));
            }
        }

    If I run the test, neither of the computers shows excessive CPU usage. Let's say I get X requests per second; if I run the console application twice at the same time, I get X/2 requests per second in each one... but the web server is still at 30% CPU and the database server at 25%. I've tried removing the Thread.Sleep in the loop, but it doesn't make a big difference. I'd like to push the machines to the maximum, to check how many requests per second they can provide. I guessed that I could do it this way... but apparently I'm missing something here. What is the problem? Kind regards.
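    For what it's worth, with HttpWebRequest the bottleneck in a harness like this is often the client itself rather than the servers: a couple of System.Net defaults add latency to every small POST. A hedged sketch of settings worth trying at the top of Main (these are real ServicePointManager properties, but whether they explain this particular plateau is an assumption):

        // Client-side knobs that commonly throttle HttpWebRequest load generators.
        ServicePointManager.DefaultConnectionLimit = 250; // already set in the harness above
        ServicePointManager.Expect100Continue = false;    // skip the 100-continue round trip added to each POST
        ServicePointManager.UseNagleAlgorithm = false;    // avoid Nagle delays on the tiny 9-byte request body

    If the rate scales up after this, the servers were never the limit; otherwise the next suspects are the per-iteration Thread.Sleep calls and the fixed thread count.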

    Read the article

  • ClickOnce installation fails after adding a WCF service project

    - by Ant
    So I have a WinForms solution, deployed via ClickOnce. Everything worked fine until I added a WCF project (see the error parsing the manifest file at the end of this post). Now I notice that MSBuild compiles the service into a _PublishedWebsites directory. I don't know what the need for this is, but I suspect it is the cause of the problem. The WCF project references some other projects within the solution. I am actually hosting the WCF service within the application, so I don't really need MSBuild to do all this for me. Any ideas?

    =====================================================================================

    PLATFORM VERSION INFO
        Windows                 : 5.1.2600.131072 (Win32NT)
        Common Language Runtime : 2.0.50727.3603
        System.Deployment.dll   : 2.0.50727.3053 (netfxsp.050727-3000)
        mscorwks.dll            : 2.0.50727.3603 (GDR.050727-3600)
        dfdll.dll               : 2.0.50727.3053 (netfxsp.050727-3000)
        dfshim.dll              : 2.0.50727.3053 (netfxsp.050727-3000)

    SOURCES
        Deployment url : file:///C:/applications/abc/dev/abc.Application.application

    IDENTITIES
        Deployment Identity : Flow Management System.app, Version=1.4.0.0, Culture=neutral, PublicKeyToken=8453086392175e0f, processorArchitecture=msil

    APPLICATION SUMMARY
        * Installable application.
        * Trust url parameter is set.

    ERROR SUMMARY
        Below is a summary of the errors; details of these errors are listed later in the log.
        * Activation of C:\applications\abc\dev\abc.Application.application resulted in exception. Following failure messages were detected:
            + Exception reading manifest from file:///C:/applications/abc/dev/1.4.0.0/abc.Application.exe.manifest: the manifest may not be valid or the file could not be opened.
            + Parsing and DOM creation of the manifest resulted in error. Following parsing errors were noticed:
                - HRESULT: 0x80070c81  Start line: 0  Start column: 0  Host file:
            + Exception from HRESULT: 0x80070C81

    COMPONENT STORE TRANSACTION FAILURE SUMMARY
        No transaction error was detected.

    WARNINGS
        There were no warnings during this operation.

    OPERATION PROGRESS STATUS
        * [12/03/2010 6:33:53 PM] : Activation of C:\applications\abc\dev\abc.Application.application has started.
        * [12/03/2010 6:33:53 PM] : Processing of deployment manifest has successfully completed.
        * [12/03/2010 6:33:53 PM] : Installation of the application has started.

    ERROR DETAILS
        Following errors were detected during this operation.
        * [12/03/2010 6:33:53 PM] System.Deployment.Application.InvalidDeploymentException (ManifestParse)
            - Exception reading manifest from file:///C:/applications/abc/dev/1.4.0.0/abc.Application.exe.manifest: the manifest may not be valid or the file could not be opened.
            - Source: System.Deployment
            - Stack trace:
                at System.Deployment.Application.ManifestReader.FromDocument(String localPath, ManifestType manifestType, Uri sourceUri)
                at System.Deployment.Application.DownloadManager.DownloadManifest(Uri& sourceUri, String targetPath, IDownloadNotification notification, DownloadOptions options, ManifestType manifestType, ServerInformation& serverInformation)
                at System.Deployment.Application.DownloadManager.DownloadApplicationManifest(AssemblyManifest deploymentManifest, String targetDir, Uri deploymentUri, IDownloadNotification notification, DownloadOptions options, Uri& appSourceUri, String& appManifestPath)
                at System.Deployment.Application.ApplicationActivator.DownloadApplication(SubscriptionState subState, ActivationDescription actDesc, Int64 transactionId, TempDirectory& downloadTemp)
                at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc)
                at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
                at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)
            --- Inner Exception ---
            System.Deployment.Application.InvalidDeploymentException (ManifestParse)
            - Parsing and DOM creation of the manifest resulted in error. Following parsing errors were noticed: -HRESULT: 0x80070c81  Start line: 0  Start column: 0  Host file:
            - Source: System.Deployment
            - Stack trace:
                at System.Deployment.Application.Manifest.AssemblyManifest.LoadCMSFromStream(Stream stream)
                at System.Deployment.Application.Manifest.AssemblyManifest..ctor(FileStream fileStream)
                at System.Deployment.Application.ManifestReader.FromDocument(String localPath, ManifestType manifestType, Uri sourceUri)
            --- Inner Exception ---
            System.Runtime.InteropServices.COMException
            - Exception from HRESULT: 0x80070C81
            - Source: System.Deployment
            - Stack trace:
                at System.Deployment.Internal.Isolation.IsolationInterop.CreateCMSFromXml(Byte[] buffer, UInt32 bufferSize, IManifestParseErrorCallback Callback, Guid& riid)
                at System.Deployment.Application.Manifest.AssemblyManifest.LoadCMSFromStream(Stream stream)

    COMPONENT STORE TRANSACTION DETAILS
        No transaction information is available.

    Read the article

  • REST web service keeps first POST parameters

    - by Diego
    I have a web service in REST, designed with Java and deployed on Tomcat. This is the web service structure:

        @Path("Personas")
        public class Personas {

            @Context
            private UriInfo context;

            /** Creates a new instance of ServiceResource */
            public Personas() {
            }

            @GET
            @Produces("text/html")
            public String consultarEdad(@QueryParam("nombre") String nombre) {
                ConectorCliente c = new ConectorCliente("root", "cafe.sql", "test");
                int edad = c.consultarEdad(nombre);
                if (edad == Integer.MIN_VALUE)
                    return "-1";
                return String.valueOf(edad);
            }

            @POST
            @Produces("text/html")
            public String insertarPersona(@QueryParam("nombre") String msg, @QueryParam("edad") int edad) {
                ConectorCliente c = new ConectorCliente("usr", "passwd", "dbname");
                c.agregar(msg, edad);
                return "listo";
            }
        }

    where the ConectorCliente class is the MySQL connector and querying class. I had tested this with the @GET method actually doing the POST's work: user-entered data and information from my JavaFX app went straight into the web service's database. I then changed it so the CREATE operation was performed through a web service responding to an actual POST HTTP request. However, when I run the client and add some info, the first request's parameters go through OK, but the next time I enter different parameters it posts the same ones again. I have run this several times and I can't see the reason for it. This is the client's code:

        public class WebServicePersonasConsumer {

            private WebTarget webTarget;
            private Client client;
            private static final String BASE_URI = "http://localhost:8080/GetSomeRest/serviciosweb/";

            public WebServicePersonasConsumer() {
                client = javax.ws.rs.client.ClientBuilder.newClient();
                webTarget = client.target(BASE_URI).path("Personas");
            }

            public <T> T insertarPersona(Class<T> responseType, String nombre, String edad) throws ClientErrorException {
                String[] queryParamNames = new String[]{"nombre", "edad"};
                String[] queryParamValues = new String[]{nombre, edad};
                javax.ws.rs.core.Form form = getQueryOrFormParams(queryParamNames, queryParamValues);
                javax.ws.rs.core.MultivaluedMap<String, String> map = form.asMap();
                for (java.util.Map.Entry<String, java.util.List<String>> entry : map.entrySet()) {
                    java.util.List<String> list = entry.getValue();
                    String[] values = list.toArray(new String[list.size()]);
                    webTarget = webTarget.queryParam(entry.getKey(), (Object[]) values);
                }
                return webTarget.request().post(null, responseType);
            }

            public <T> T consultarEdad(Class<T> responseType, String nombre) throws ClientErrorException {
                String[] queryParamNames = new String[]{"nombre"};
                String[] queryParamValues = new String[]{nombre};
                javax.ws.rs.core.Form form = getQueryOrFormParams(queryParamNames, queryParamValues);
                javax.ws.rs.core.MultivaluedMap<String, String> map = form.asMap();
                for (java.util.Map.Entry<String, java.util.List<String>> entry : map.entrySet()) {
                    java.util.List<String> list = entry.getValue();
                    String[] values = list.toArray(new String[list.size()]);
                    webTarget = webTarget.queryParam(entry.getKey(), (Object[]) values);
                }
                return webTarget.request(javax.ws.rs.core.MediaType.TEXT_HTML).get(responseType);
            }

            private Form getQueryOrFormParams(String[] paramNames, String[] paramValues) {
                Form form = new javax.ws.rs.core.Form();
                for (int i = 0; i < paramNames.length; i++) {
                    if (paramValues[i] != null) {
                        form = form.param(paramNames[i], paramValues[i]);
                    }
                }
                return form;
            }

            public void close() {
                client.close();
            }
        }

    And this is the code where I perform the operations in a JavaFX app:

        String nombre = nombreTextField.getText();
        String edad = edadTextField.getText();
        String insertToDatabase = consumidor.insertarPersona(String.class, nombre, edad);

    Since the parameters are taken from TextFields, it is quite odd that the second, third, fourth and subsequent POSTs all post the SAME values.

    Read the article

  • iPhone web service access

    - by malleswar
    Hi, I have .NET web service methods login and summary. After getting the result from login, I need to show a second view and call the summary method. I am following this tutorial: http://icodeblog.com/2008/11/03/iphone-programming-tutorial-intro-to-soap-web-services/ I created two new classes, loginaccess.h and loginaccess.m:

        @implementation LoginAccess

        @synthesize ResultString, webData, soapResults, xmlParser;

        - (NSString *)LoginCheck:(NSString *)userName :(NSString *)pwd
        {
            // (The SOAP envelope markup did not survive in this excerpt;
            // only the format-string skeleton remains.)
            NSString *soapMessage = [NSString stringWithFormat:
                @"\n" "\n" "\n" "\n" "%@" "%@" "" "\n" "\n", userName, pwd];
            NSLog(soapMessage);

            NSURL *url = [NSURL URLWithString:@"http://www.XXXXXXXXX.com/service.asmx"];
            NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:url];
            NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMessage length]];
            [theRequest addValue:@"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"];
            [theRequest addValue:@"http://XXXXXXXXm/Login" forHTTPHeaderField:@"SOAPAction"];
            [theRequest addValue:msgLength forHTTPHeaderField:@"Content-Length"];
            [theRequest setHTTPMethod:@"POST"];
            [theRequest setHTTPBody:[soapMessage dataUsingEncoding:NSUTF8StringEncoding]];

            NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];
            if (theConnection) {
                webData = [[NSMutableData data] retain];
            } else {
                NSLog(@"theConnection is NULL");
            }
            //[nameInput resignFirstResponder];
            return ResultString;
        }

        - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response
        {
            [webData setLength:0];
        }

        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
        {
            [webData appendData:data];
        }

        - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error
        {
            NSLog(@"ERROR with theConnection");
            [connection release];
            [webData release];
        }

        - (void)connectionDidFinishLoading:(NSURLConnection *)connection
        {
            NSLog(@"DONE. Received Bytes: %d", [webData length]);
            NSString *theXML = [[NSString alloc] initWithBytes:[webData mutableBytes]
                                                        length:[webData length]
                                                      encoding:NSUTF8StringEncoding];
            NSLog(theXML);
            [theXML release];

            if (xmlParser) {
                [xmlParser release];
            }
            xmlParser = [[NSXMLParser alloc] initWithData:webData];
            [xmlParser setDelegate:self];
            [xmlParser setShouldResolveExternalEntities:YES];
            [xmlParser parse];
            [connection release];
            [webData release];
        }

        - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
          namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
            attributes:(NSDictionary *)attributeDict
        {
            if ([elementName isEqualToString:@"LoginResult"]) {
                if (!soapResults) {
                    soapResults = [[NSMutableString alloc] init];
                }
                recordResults = TRUE;
            }
        }

        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string
        {
            if (recordResults) {
                [soapResults appendString:string];
            }
        }

        - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
          namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
        {
            if ([elementName isEqualToString:@"LoginResult"]) {
                recordResults = FALSE;
                ResultString = soapResults;
                NSLog(@"Login");
                [VariableStore setStr:ResultString];
                NSLog(soapResults);
                [soapResults release];
                soapResults = nil;
            }
        }

        @end

    I am calling the LoginCheck method and, based on the result, I want to show the second view. But didEndElement is only entered after the button's touch-down event has finished, so I always get a nil value here. If I use the same code in the controller it works fine, as I push the second view controller in didEndElement. Please give me some samples of how to place the web service calls in a different class and how to call them from view controllers. Regards, Malleswar

    Read the article

  • How to create a text file in a Windows service

    - by angel ansari
    Hi, I have an XML file:

        <config>
          <ServiceName>autorunquery</ServiceName>
          <DBConnection>
            <server>servername</server>
            <user>xyz</user>
            <password>klM#2bs</password>
            <initialcatelog>TEST</initialcatelog>
          </DBConnection>
          <Log>
            <logfilename>d:\testlogfile.txt</logfilename>
          </Log>
          <Frequency>
            <value>10</value>
            <unit>minute</unit>
          </Frequency>
          <CheckQuery>select * from credit_debit1 where station='Corporate'</CheckQuery>
          <Queries total="3">
            <Query id="1">Update credit_debit1 set station='xxx' where id=2</Query>
            <Query id="2">Update credit_debit1 set station='xxx' where id=4</Query>
            <Query id="3">Update credit_debit1 set station='xxx' where id=9</Query>
          </Queries>
        </config>

    and this service code:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Diagnostics;
        using System.Linq;
        using System.ServiceProcess;
        using System.Text;
        using System.IO;
        using System.Xml;

        namespace Service1
        {
            public partial class Service1 : ServiceBase
            {
                XmlTextReader reader = null;
                string path = null;
                FileStream fs = null;
                StreamWriter sw = null;

                public Service1()
                {
                    InitializeComponent();
                }

                protected override void OnStart(string[] args)
                {
                    timer1.Enabled = true;
                    timer1.Interval = 10000;
                    timer1.Start();
                    logfile("start service");
                }

                protected override void OnStop()
                {
                    timer1.Enabled = false;
                    timer1.Stop();
                    logfile("stop service");
                }

                private void logfile(string content)
                {
                    try
                    {
                        reader = new XmlTextReader("queryconfig.xml"); // xml file name, expected in the current directory
                        if (reader.ReadToFollowing("logfilename"))
                        {
                            path = reader.ReadElementContentAsString();
                        }
                        fs = new FileStream(path, FileMode.Append, FileAccess.Write);
                        sw = new StreamWriter(fs);
                        sw.Write(content);
                        sw.WriteLine(DateTime.Now.ToString());
                    }
                    catch (Exception ex)
                    {
                        sw.Write(ex.ToString());
                        throw;
                    }
                    finally
                    {
                        if (reader != null) reader.Close();
                        if (sw != null) sw.Close();
                        if (fs != null) fs.Close();
                    }
                }
            }
        }

    My problem is that the file is not created.
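    A likely cause, for what it's worth: a Windows service starts with its working directory set to %WinDir%\System32, so the relative path "queryconfig.xml" does not resolve to the service's install folder, and the XmlTextReader throws before the log file is ever opened (the catch block then writes to the still-null sw, masking the original error). A sketch of anchoring the path to the service binary instead, reusing the names from the code above:

        // Resolve queryconfig.xml next to the service executable rather than
        // the process working directory (System32 for a Windows service).
        string baseDir = AppDomain.CurrentDomain.BaseDirectory;
        reader = new XmlTextReader(Path.Combine(baseDir, "queryconfig.xml"));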

    Read the article

  • 2-way SSL between SOA and OSB

    - by Johnny Shum
    If you have a need to use 2-way SSL between SOA composites and external partner links, you can follow these steps:

    1. Create the identity keystores, trust keystores, and server certificates.
    2. Set up keystores and SSL on WebLogic.
    3. Set up the server to use 2-way SSL.
    4. Configure your SOA composite's partner link to use 2-way SSL.
    5. Configure the SOA engine for 2-way SSL.

    In this case, I use SOA and OSB for the test. I started with separate OSB and SOA domains. I deployed two SOAP-based proxies on OSB and two composites on SOA. In SOA, one composite invokes an OSB proxy service and the other is invoked by OSB. Similarly, in OSB, one proxy invokes a SOA composite and the other is invoked by SOA.

    1. Create the identity keystores, trust keystores and the server certificates

    Since this is a development environment, I use the JDK's keytool to create the stores and use a self-signed certificate. For a production environment, you should use certificates from a trusted certificate authority like Verisign. I created the script below to show what is needed in this step. The only requirement is that when creating the SOA identity certificate, you MUST use the alias mykey.

        STOREPASS=welcome1
        KEYPASS=welcome1

        # generate identity keystores for soa and osb.  Note: For SOA, you MUST use alias mykey
        echo "creating stores"
        keytool -genkey -alias mykey -keyalg "RSA" -sigalg "SHA1withRSA" -dname "CN=soa, C=US" -keystore soa-default-keystore.jks -storepass $STOREPASS -keypass $KEYPASS
        keytool -genkey -alias osbkey -keyalg "RSA" -sigalg "SHA1withRSA" -dname "CN=osb, C=US" -keystore osb-default-keystore.jks -storepass $STOREPASS -keypass $KEYPASS

        # listing keystore contents
        echo "listing stores contents"
        keytool -list -alias mykey -keystore soa-default-keystore.jks -storepass $STOREPASS
        keytool -list -alias osbkey -keystore osb-default-keystore.jks -storepass $STOREPASS

        # exporting certs from stores
        echo "export certs from stores"
        keytool -exportcert -alias mykey -keystore soa-default-keystore.jks -storepass $STOREPASS -file soacert.der
        keytool -exportcert -alias osbkey -keystore osb-default-keystore.jks -storepass $STOREPASS -file osbcert.der

        # import certs to trust stores
        echo "import certs"
        keytool -importcert -alias osbkey -keystore soa-trust-keystore.jks -storepass $STOREPASS -file osbcert.der -keypass $KEYPASS
        keytool -importcert -alias mykey -keystore osb-trust-keystore.jks -storepass $STOREPASS -file soacert.der -keypass $KEYPASS

    SOA Suite uses the JDK's SSL implementation for outbound traffic instead of WebLogic's implementation, so you will need to import the partner's public cert into the trusted keystore used by SOA. The default trusted keystore for SOA is DemoTrust.jks, located in $MW_HOME/wlserver_10.3/server/lib (this is set in the startup script with -Djavax.net.ssl.trustStore). If you use your own trusted keystore, import the certificate into that keystore instead:

        keytool -importcert -alias osbkey -keystore $MW_HOME/wlserver_10.3/server/lib/DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase -file osbcert.der -keypass $KEYPASS

    If you do not perform this step, you will encounter this exception at runtime when SOA invokes an OSB service using 2-way SSL:

        Message send failed: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

    2. Set up keystores and SSL on WebLogic

    First, log in to the WebLogic console and navigate to the server's Configuration -> Keystores tab. Change the keystores type to Custom Identity and Custom Trust and enter the rest of the fields. Then navigate to the SSL tab, enter the fields in the identity section and expand the Advanced section. Since I am using a self-signed cert in my VM environment, I disabled hostname verification; in a real production system this should not be the case. I also enabled the option "Use Server Certs", so that the application uses the server cert to initiate HTTPS traffic (it is important to enable this in OSB). Last, enable the SSL listening port in the server's Configuration -> General tab.

    3. Set up the server to use 2-way SSL

    As in the previous step, in the Server -> Configuration -> SSL -> Advanced section there is an option for Two Way Client Cert Behavior; set this to Client Certs Requested and Enforced. Repeat steps 2 and 3 on OSB. After all these configurations, you have to restart all the servers.

    4. Configure your SOA composite's partner link to use 2-way SSL

    You do this by modifying the composite.xml in your project: locate the partner's link reference and add the property oracle.soa.two.way.ssl.enabled.

        <reference name="callosb" ui:wsdlLocation="helloword.wsdl">
          <interface.wsdl interface="http://www.examples.com/wsdl/HelloService.wsdl#wsdl.interface(Hello_PortType)"/>
          <binding.ws port="http://www.examples.com/wsdl/HelloService.wsdl#wsdl.endpoint(Hello_Service/Hello_Port)"
                      location="helloword.wsdl" soapVersion="1.1">
            <property name="weblogic.wsee.wsat.transaction.flowOption"
                      type="xs:string" many="false">WSDLDriven</property>
            <property name="oracle.soa.two.way.ssl.enabled">true</property>
          </binding.ws>
        </reference>

    In OSB, you should have checked the HTTPS Required flag in the proxy's transport configuration. After this, rebuild the composite jar file, ready to deploy in the EM console later.

    5. Configure the SOA engine for 2-way SSL

    Oracle SOA Suite uses both the Oracle WebLogic Server and Sun Secure Socket Layer (SSL) stacks for 2-way SSL configurations. For the inbound web service bindings, Oracle SOA Suite uses the Oracle WebLogic Server infrastructure and, therefore, the Oracle WebLogic Server libraries for SSL; this is already covered by steps 2 and 3 above. For the outbound web service bindings, Oracle SOA Suite uses JRF HttpClient and, therefore, the Sun JDK libraries for SSL. You configure this for the SOA engine in the Enterprise Manager console: select soa-infra -> SOA Administration -> Common Properties, click the link at the bottom of the page, "More SOA Infra Advanced Infrastructure Configuration Properties", and enter the full path of the SOA identity keystore in the value field of the KeyStoreLocation attribute. Click Apply and Return, then navigate to the domain -> Security -> Credentials. Here you provide the password to the keystore. Note: the alias of the certificate must be mykey as described in step 1, so you only need to provide the password to the identity keystore. You accomplish this as follows:

    - Click Create Map.
    - In the Map Name field, enter SOA, and click OK.
    - Click Create Key.
    - Enter the details, where the password is the password for the SOA identity keystore.

    6. Test and troubleshooting

    Once the setup is complete and the servers are restarted, you can deploy the composite in the EM console and test it. In case of error, you can read the server log file to determine the cause. For example, if you have not set up step 5 and you test 2-way SSL, you will see this in the log when invoking OSB from BPEL:

        java.lang.Exception: oracle.sysman.emSDK.webservices.wsdlapi.SoapTestException: oracle.fabric.common.FabricInvocationException: Unable to access the following endpoint(s): https://localhost.localdomain:7002/default/helloword

        ####<Sep 22, 2012 2:07:37 PM CDT> <Error> <oracle.soa.bpel.engine.ws> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0AFDAEF20610F8FD89C5> ............ <11d1def534ea1be0:-4034173:139ef56d9f0:-8000-00000000000002ec> <1348340857956> <BEA-000000> <got FabricInvocationException sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target>

    If you have not enabled WebLogic SSL to use the server certificate in the console and you invoke a SOA composite from OSB using 2-way SSL, you will see this error:

        ####<Sep 22, 2012 2:07:37 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-00000000000000e2> <1348340857776> <BEA-090485> <CERTIFICATE_UNKNOWN alert was received from localhost.localdomain - 127.0.0.1. The peer has an unspecified issue with the certificate. SSL debug tracing should be enabled on the peer to determine what the issue is.>
        ####<Sep 22, 2012 2:07:37 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-00000000000000e4> <1348340857786> <BEA-090485> <CERTIFICATE_UNKNOWN alert was received from localhost.localdomain - 127.0.0.1. The peer has an unspecified issue with the certificate. SSL debug tracing should be enabled on the peer to determine what the issue is.>
        ####<Sep 22, 2012 2:27:21 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-0000000000000124> <1348342041926> <BEA-090497> <HANDSHAKE_FAILURE alert received from localhost - 127.0.0.1. Check both sides of the SSL configuration for mismatches in supported ciphers, supported protocol versions, trusted CAs, and hostname verification settings.>

    References:
    - http://docs.oracle.com/cd/E23943_01/admin.1111/e10226/soacompapp_secure.htm#CHDCFABB (section 5.6.4)
    - http://docs.oracle.com/cd/E23943_01/web.1111/e13707/ssl.htm#i1200848

    Read the article

  • On automating a split-mirror ASM backup with EMC TimeFinder ...

    - by [email protected]
    Hi clerks,

    Offloading the backup operation to another host using disk cloning can really improve performance on highly busy databases (24x7, zero downtime and all this stuff...). There are well-known white papers on this subject, ASM included, but today I'm showing you a nice way to automate the procedure using shell scripting with EMC TimeFinder technologies.

    Assumptions:

        ASM diskgroup names:
            +data_${db_name} : ASM data diskgroup
            +fra_${db_name}  : ASM FRA diskgroup

        EMC TimeFinder sync group names:
            rac_${DB_NAME}_data_tf : data group
            rac_${DB_NAME}_fra_tf  : FRA group

    There are two scripts: one located on the production box (bck_database.sh) and the other on the backup server node (bck_database_mirror.sh). The second one is remotely executed from the production host. There are a bunch of variables along the code with self-explanatory names, I guess; anyway, let me know if you want some help.

        #!/bin/ksh
        ###
        ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved.
        ###
        ###    NAME
        ###      bck_database.sh
        ###
        ###    DESCRIPTION
        ###      Database backup on third mirror
        ###
        ###    RETURNS
        ###
        ###    NOTES
        ###
        ###    MODIFIED          (DD/MM/YY)
        ###    Oracle            28/01/10    - Creacion
        ###

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        V_FICH_LOG=`dirname $0`/trace_dir_location/`basename $0`.${V_DATE}.log
        exec 4>&1
        tee ${V_FICH_LOG} >&4 |&
        exec 1>&p 2>&1

        ADMIN_DIR=`dirname $0`
        # This script should set the instance vars like Oracle Home, Sid, db_name ...
        . ${ADMIN_DIR}/setenv_instance.sh
        if [ $? -ne 0 ]
        then
          echo "Error when setting the environment."
          exit 1
        fi

        echo "${V_DATE} ####################################################"
        echo "Executing database backup: ${DB_NAME}"
        echo "####################################################################"

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Sync asm data diskgroups ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_data_tf establish -noprompt
        if [ $? -ne 0 ]
        then
          echo "Error when syncing asm data diskgroups"
          exit 2
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Verifying asm data disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_data_tf -i 30 verify
        if [ $? -ne 0 ]
        then
          echo "Error when verifying asm data diskgroups"
          exit 3
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Sync asm fra diskgroups ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_fra_tf establish -noprompt
        if [ $? -ne 0 ]
        then
          echo "Error when syncing asm fra diskgroups"
          exit 4
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Verifying asm fra disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_fra_tf -i 30 verify
        if [ $? -ne 0 ]
        then
          echo "Error when verifying asm fra diskgroups"
          exit 5
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "ASM sync successfully completed!"
        echo "####################################################################"

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Updating status ${DB_NAME} to BEGIN BACKUP ..."
        echo "####################################################################"
        sqlplus -s /nolog <<-!
          whenever sqlerror exit 1
          connect / as sysdba
          whenever sqlerror exit
          alter system archive log current;
          alter database ${DB_NAME} begin backup;
        !
        if [ $? -ne 0 ]
        then
          echo "Error when updating database status to BEGIN backup"
          exit 6
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Splitting asm data disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_data_tf split -noprompt
        if [ $? -ne 0 ]
        then
          echo "Error when splitting asm data disks"
          exit 7
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Updating status ${DB_NAME} to END BACKUP ..."
        echo "####################################################################"
        sqlplus -s /nolog <<-!
          whenever sqlerror exit 1
          connect / as sysdba
          whenever sqlerror exit
          alter database ${DB_NAME} end backup;
          alter system archive log current;
        !
        if [ $? -ne 0 ]
        then
          echo "Error when updating database status to END backup"
          exit 8
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Generating controlfile copies ..."
        echo "####################################################################"
        rman <<-!
        connect target /
        run {
          allocate channel ch1 type DISK;
          copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_mount.ctl';
          copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';
        }
        !
        if [ $? -ne 0 ]
        then
          echo "Error generating controlfile copies"
          exit 9
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Resync RMAN catalog ..."
        echo "####################################################################"
        rman <<-!
        connect target /
        connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
        resync catalog;
        !
        if [ $? -ne 0 ]
        then
          echo "Error when resyncing RMAN catalog"
          exit 10
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Splitting asm fra disks ..."
        echo "####################################################################"
        sudo symmir -g rac_${DB_NAME}_fra_tf split -noprompt
        if [ $? -ne 0 ]
        then
          echo "Error when splitting asm fra disks"
          exit 11
        fi

        echo "WARNING!: Calling bck_database_mirror.sh host ${NODE_BCK_SERVER}..."
        ssh ${NODO_BCK_SERVER} ${ADMIN_DIR_BCK}/bck_database_mirror.sh
        if [ $? -ne 0 ]
        then
          echo "Error when remote executing the backup"
          exit 12
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Cleaning the archived redo logs already copied to tape ..."
        echo "####################################################################"
        rman <<-!
        connect target /
        connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
        run {
          resync catalog;
          delete noprompt archivelog all backed up 1 times to device type sbt;
        }
        !
        if [ $? -ne 0 ]
        then
          echo "Error when cleaning the archived redo logs"
          exit 13
        fi

        echo "${V_DATE} ####################################################"
        echo "Backup successfully executed!!"
        echo "####################################################################"
        exit 0

        ------------------------------------------------------------------------------
        ------------------------ ** BACKUP SERVER NODE ** ---------------------------
        ------------------------------------------------------------------------------

        #!/bin/ksh
        ###
        ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved.
        ###
        ###    NAME
        ###      bck_database_mirror.sh
        ###
        ###    DESCRIPTION
        ###      Backup @ backup server
        ###
        ###    RETURNS
        ###
        ###    NOTES
        ###
        ###    MODIFIED          (DD/MM/YY)
        ###    Oracle            28/01/10    - Creacion
        ###

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Starting ASM instance ..."
        echo "####################################################################"
        # This script is supposed to start the ASM instance in the backup server
        ${V_ADMIN_DIR}/start_asm.sh
        if [ $? -ne 0 ]
        then
          echo "Error when trying to start ASM instance."
          exit 1
        fi

        # This script is supposed to set the env. variables of the ASM instance
        . ${V_ADMIN_DIR}/setenv_asm.sh
        if [ $? -ne 0 ]
        then
          echo "Error when setting the ASM environment"
          exit 1
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "The asm diskgroups/disks detected are the following ..."
        echo "####################################################################"
        sqlplus /nolog <<-!
          whenever sqlerror exit 1
          connect / as sysdba
          whenever sqlerror exit
          SET LINES 200
          COL PATH FORMAT A25
          SELECT DISK.MOUNT_STATUS, DISK.PATH, DISK.NAME, DISK_GROUP.NAME, DISK_GROUP.TOTAL_MB
            FROM V\$ASM_DISK DISK, V\$ASM_DISKGROUP DISK_GROUP
           WHERE DISK.GROUP_NUMBER=DISK_GROUP.GROUP_NUMBER;
        !

        V_ADMIN_DIR=`dirname $0`
        # This script is supposed to set the env. variables of the database instance
        . ${V_ADMIN_DIR}/setenv_instance.sh
        if [ $? -ne 0 ]
        then
          echo "Error when setting the database instance environment"
          exit 1
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Starting ${DB_NAME} in MOUNT mode..."
        echo "####################################################################"
        # This script is supposed to do a startup mount
        ${V_ADMIN_DIR}/start_instance_mount.sh
        if [ $? -ne 0 ]
        then
          echo "Error starting ${DB_NAME} in MOUNT mode"
          exit 1
        fi

        V_DATE=`/bin/date +%Y%m%d_%H%M%S`
        echo "${V_DATE} ####################################################"
        echo "Executing RMAN backup..."
        echo "####################################################################"
        rman <<-!
        connect target /
        connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}
        run {
          # TDPO Media Library
          allocate channel ch1 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
          crosscheck archivelog all;
          backup tag BCK_CONTROLFILE_ST_${DB_NAME} format 'ctl_%d_%s__%p_%t' controlfilecopy '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';
          backup tag BCK_DATAFILE_ST_${DB_NAME} full format 'db_%d_%s_%p_%t' database;
          backup tag BCK_ARCHLOG_ST_${DB_NAME} format 'al_%d_%s_%p_%t' archivelog all;
          release channel ch1;
        }
        !
        if [ $? -ne 0 ]
        then
          echo "Error executing the RMAN backup"
          exit 1
        fi

        # Shut down the database instance and the ASM instance (immediate)
        ${V_ADMIN_DIR}/stop_instance_immediate.sh
        ${ADMIN_DIR}/stop_asm_immediate.sh

        exit 0

    Hope it helps someone! --L

    Read the article

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is a powerful tool for integrating on-demand and interactive publishing applications with the Oracle Documaker framework. It's also no secret that there are many details with respect to the configuration of Docupresentment that can elude even the most erudite of techies. To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file. But where's the adventure in that?

    With this inaugural post to That's The Way, I'm going to introduce myself and what my aim is with this blog. If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative years of 1998. Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology. I recall running a BBS from my parents' basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud! Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was a kid! This is the stuff of War Games and King's Quest I and the demise of the TI-99/4A. Exciting times.

    So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing, working for the best (IMHO) software company on the planet. With That's The Way I hope to shed a little light and peek under the covers of some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration, to name a few. I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind), but I hope you'll glean at least a tidbit of usefulness with each post. Feel free to comment, as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part...

    So, back to our regularly-scheduled post, already in progress. By this time you've visited Oracle's E-Delivery site and acquired your properly-licensed version of Oracle Documaker. Wait -- you didn't find it? Understandable -- navigating the voluminous download library within Oracle can be a daunting task. It's pretty simple once you've done it a few times. Log in to the e-delivery site and accept the license terms and restrictions. Then you'll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you'll see a list of applicable products; click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4). Finally, click the Download button next to Docupresentment (again, version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe.

    At this time, I'd like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms. One of the reasons Skywire Software was appealing to Oracle was our use of many acronyms, including the occasional use of multiple acronyms with the same meaning. I apologize in advance and will try to point these out along the way. Here's your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment.

    Once you've completed the installation, you'll have a shiny new Docupresentment server and client; if you installed to the default location it will be living in c:\docserv. Unix users, I'm one of you! You'll find it by default in ~/docupresentment/docserv. Forging onward, the meat of this post is learning about some special configuration options. By now you've read the documentation included with the download (specifically ids_book.pdf), which goes into some detail of the rubric of the configuration file; in fact there's even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation). But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand!

    I shall now proceed with the standard Information Technology Under the Hood Disclaimer: please remember to back up any files before you make changes. I am not responsible for any havoc you may wreak!

    Go to your installation directory and locate your docserv.xml file. Open it in your favorite XML editor. I happen to be fond of Notepad++ with the XML Tools plugin. Almost immediately you will behold the splendor of the configuration file. Just take a moment and let that sink in. Ok - moving on. If you reviewed the documentation, you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings. Let's take a look at <section name="DocumentServer">. There are a few entries I'd like to discuss.

    First, <entry name="StartCommand">. This should be pretty self-explanatory; it's the name of the executable that's run when you fire up Docupresentment. Immediately following that is <entry name="StartArguments">, and as you might imagine these are the arguments passed to the executable. A few things to point out:

    - The -Dids.configuration=docserv.xml parameter specifies the name of your configuration file.
    - The -Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j, so bone up on that before you delve here).
    - The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd-party Java libraries can be located for use with Docupresentment. More on that in another post.

    The <entry name="Instances"> setting allows you to specify the number of instances of Docupresentment that will be started. By default this is two, and generally two instances per CPU is adequate; however, you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions. You may have many, many more instances than 2.

    Time for a sidebar on instances. An instance is nothing more than a separate process of Docupresentment. The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes. Each of these acts independently from the others, so if one crashes, it does not affect any others. In the case of a crashed process, the watchdog will start up another instance so the configured number of instances is always running. Bottom line: instance = Docupresentment process.

    And now, finally, to the settings which gave me pause on a not-too-long-ago implementation! Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes. You can configure the time that Docupresentment waits between checks of these files using the setting <entry name="FileWatchTimeMillis">. By default the number is 12000 ms, or 12 seconds. You can save yourself a few CPU cycles by extending this time, or by disabling the check altogether by setting the value to 0. This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don't want to bring down an entire set of processes just to accept a new configuration value, so it's best to leave this somewhere between 12 seconds and a few minutes. Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment, the Master Resource Library (MRL) files and INI options are cached, and if you need to effect a change, you'll have to "restart" Docupresentment. Touching the docserv.xml file is an easy way to do this (other methods include using the RSS request, but that's another post).

    The next item up: <entry name="FilePurgeTimeSeconds">. You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system. What you may not know is how those files are cleaned up. There are many rules in Docupresentment that cause the creation of temporary files. When these files are created, Docupresentment writes an entry into a properties file called the file cache. This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment. Periodically, Docupresentment will check the file cache to determine if there are files past their expiration time, not unlike that block of cheese festering away in the back of my refrigerator. However, unlike my 'fridge-cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time. You, my friend, have the power to control how often Docupresentment inspects the file cache: simply set the value for <entry name="FilePurgeTimeSeconds"> to the number of seconds appropriate for your requirements and you're set. Note that file purging happens on a separate thread from normal request processing, so this shouldn't interfere with response times unless the CPU happens to be really taxed at the point of cache processing.

    Finally, after all of this, we get to the final setting I'm going to address in this post: <entry name="FilePurgeList">. The default is "filecache.properties". This establishes the root name for the Docupresentment file cache mentioned previously. Docupresentment creates a separate cache file for each instance based on this setting; if you have two instances, you'll see two files created: filecache.properties.1 and filecache.properties.2. Feel free to open these up and check them out.

    I hope you've enjoyed this first foray into the configuration file of Docupresentment. If you did enjoy it, feel free to drop a comment; I welcome feedback. If you have ideas for other posts you'd like to see, please do let me know. You can reach me at [email protected]. 'Til next time! ###
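    To put the settings above in one place, here is a hedged sketch of a DocumentServer section; the entry names are the ones discussed in this post, while the values shown are illustrative rather than a verbatim copy of a shipping docserv.xml:

        <configuration>
          <section name="DocumentServer">
            <entry name="StartCommand">java</entry> <!-- illustrative; your install's actual command may differ -->
            <entry name="StartArguments">-Dids.configuration=docserv.xml -Dlogging.configuration=logconf.xml -Djava.endorsed.dirs=lib/endorsed</entry>
            <entry name="Instances">2</entry>                <!-- default discussed above -->
            <entry name="FileWatchTimeMillis">12000</entry>  <!-- 12 seconds; 0 disables the watch -->
            <entry name="FilePurgeTimeSeconds">3600</entry>  <!-- illustrative value -->
            <entry name="FilePurgeList">filecache.properties</entry>
          </section>
        </configuration>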

    Read the article

  • SharePoint 2010 Hosting :: HTTP Error 503. The Service is Unavailable.

    - by mbridge
    Error Message: HTTP Error 503. The service is unavailable. The application pool is shut down, and the Event Viewer shows:

    Log Name:      Application
    Source:        Microsoft-Windows-User Profiles Service
    Date:          4/24/2010 4:58:28 PM
    Event ID:      1500
    Task Category: None
    Level:         Error
    Keywords:
    User:          GUERILLA\spapppool
    Computer:      SPS2010RTM.guerilla.local
    Description:   Windows cannot log you on because your profile cannot be loaded. Check that you are connected to the network, and that your network is functioning correctly. DETAIL – Unspecified error

    Solution:
    Option 1 – Log on locally with the service account once to create a local profile for it.
    Option 2 – Modify the application pool in IIS > Application Pools > right-click the offending app pool > Advanced Settings > set “Load User Profile” to False.

    Give it a try!! Good luck!!
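    If you prefer the command line for Option 2, here is a hedged equivalent using appcmd; the app pool name is a placeholder for your actual offending pool, and the command should be run from an elevated prompt:

        rem Set "Load User Profile" to False on the pool (pool name is an assumption)
        %windir%\system32\inetsrv\appcmd.exe set apppool "SharePoint - 80" /processModel.loadUserProfile:false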

    Read the article

  • How can I unit test a class which requires a web service call?

    - by Chris Cooper
    I'm trying to test a class which calls some Hadoop web services. The code is pretty much of the form:

    method() {
        // use Jersey client to create WebResource
        // make request
        // do something with response
    }

    For example, there is a create directory method, a create folder method, etc.

    Given that the code is dealing with an external web service that I don't have control over, how can I unit test this? I could try to mock the web service client/responses, but that breaks a guideline I've seen a lot recently: "Don't mock objects you don't own." I could set up a dummy web service implementation – would that still constitute a "unit test", or would it then be an integration test? Is it just not possible to unit test at this low a level? How would a TDD practitioner go about this?
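    One common way a TDD practitioner handles this is to wrap the Jersey calls behind a small interface the codebase owns, and unit test against a hand-rolled fake of that interface. A minimal sketch, assuming JUnit 4; HadoopGateway and DirectoryService are hypothetical names, not anything from Jersey or Hadoop:

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;

        import org.junit.Test;

        public class DirectoryServiceTest {

            // The "port" we own: the only thing the class under test knows about.
            interface HadoopGateway {
                int createDirectory(String path); // returns the HTTP status code
            }

            // Class under test: holds the logic that is worth unit testing.
            static class DirectoryService {
                private final HadoopGateway gateway;
                DirectoryService(HadoopGateway gateway) { this.gateway = gateway; }

                boolean ensureDirectory(String path) {
                    // Logic we want covered: normalize the path, interpret the status.
                    String normalized = path.startsWith("/") ? path : "/" + path;
                    return gateway.createDirectory(normalized) == 200;
                }
            }

            // Hand-rolled fake of an interface we own: no Jersey, no network,
            // and no mocking of someone else's types.
            static class FakeGateway implements HadoopGateway {
                String lastPath;
                int status = 200;
                public int createDirectory(String path) { lastPath = path; return status; }
            }

            @Test
            public void normalizesThePathBeforeCallingTheService() {
                FakeGateway fake = new FakeGateway();
                assertTrue(new DirectoryService(fake).ensureDirectory("data/in"));
                assertEquals("/data/in", fake.lastPath);
            }

            @Test
            public void reportsFailureOnANon200Status() {
                FakeGateway fake = new FakeGateway();
                fake.status = 500;
                assertFalse(new DirectoryService(fake).ensureDirectory("/data/in"));
            }
        }

    The Jersey-facing implementation of the gateway interface is then covered separately by an integration test against a real or dummy service, which is where the "don't mock what you don't own" guideline points anyway.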

    Read the article

  • How to structure a project that supports multiple versions of a service?

    - by Nick Canzoneri
    I'm hoping for some tips on creating a project (ASP.NET MVC, but I guess it doesn't really matter) against multiple versions of a service (in this case, actually multiple sets of WCF services). Right now, the web app uses only some of the services, but the eventual goal is to use the features of all of them. The code used to implement a service feature would likely be very similar between versions in most cases (but, of course, everything varies). So, how would you structure a project like this?

    - Separate source control branches for each version? I'm shying away from this because I don't feel branch merging should be something we do really often.
    - Different project/solution files in the same branch? That way we could easily link the same shared projects.
    - An abstraction layer on top of the services (see the sketch below), so that no matter which service version is being used, it looks the same to the web application?
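    For the third option, a hedged sketch of the usual shape: one interface the web app codes against, one adapter per service version, and a factory that picks the adapter. The names here are invented for illustration, and it is shown in Java purely as a language-neutral pattern sketch; in the WCF case each adapter would wrap that version's generated client:

        interface ReportingService {
            String fetchReport(String reportId);
        }

        // One adapter per service version; each wraps that version's client.
        class ReportingServiceV1 implements ReportingService {
            public String fetchReport(String reportId) {
                // delegate to the V1 client here and map its response
                return "v1-report:" + reportId;
            }
        }

        class ReportingServiceV2 implements ReportingService {
            public String fetchReport(String reportId) {
                // V2 may expose a richer call; adapt it to the common shape
                return "v2-report:" + reportId;
            }
        }

        // The web app asks the factory for an adapter and never sees the version.
        class ReportingServiceFactory {
            static ReportingService forVersion(int version) {
                return version >= 2 ? new ReportingServiceV2() : new ReportingServiceV1();
            }
        }

    The upside is that version differences stay contained in the adapters; the downside is that the common interface tends to flatten version-specific features, so it works best when the versions really are as similar as described.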

    Read the article

  • How to decide whether to implement an operation as Entity operation vs Service operation in Domain Driven Design?

    - by Louis Rhys
    I am reading Evans's Domain-Driven Design. The book says that there are entities and there are services. If I were to implement an operation, how do I decide whether I should add it as a method on an entity or put it in a service class? E.g., myEntity.DoStuff() or myService.DoStuffOn(myEntity)?

    Does it depend on whether other entities are involved? If it involves other entities, implement it as a service operation? But entities can have associations and can be traversed from there too, right?

    Does it depend on whether it's stateless or not? But a service can also access an entity's variables, right? For example, myService.DoStuffOn could contain code like if (myEntity.IsX) doSomething(); which means it depends on the state.

    Or does it depend on complexity? How do you define complex operations?
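    A common rule of thumb (a reading of the book, not a quote from it): behavior that needs only the entity's own state belongs on the entity, while behavior that coordinates several entities, or that no single entity is a natural home for, belongs in a domain service. A minimal sketch in Java; Account and TransferService are invented names:

        class Account {
            private long balanceCents;

            Account(long balanceCents) { this.balanceCents = balanceCents; }

            // Entity operations: they read and change only this entity's state.
            void deposit(long cents) { balanceCents += cents; }

            void withdraw(long cents) {
                if (cents > balanceCents) {
                    throw new IllegalStateException("insufficient funds");
                }
                balanceCents -= cents;
            }

            long balance() { return balanceCents; }
        }

        class TransferService {
            // Domain service: the operation spans two entities, and putting
            // transfer() on Account would force one account to reach into
            // and mutate another.
            void transfer(Account from, Account to, long cents) {
                from.withdraw(cents);
                to.deposit(cents);
            }
        }

    On this reading, statelessness describes the service itself (TransferService holds no state of its own, even though it touches stateful entities), and complexity matters less than where the operation naturally belongs.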

    Read the article

  • Hotmail co-founder wants to make SMS free for everyone; a new service is born: JaxtrSMS

    Hotmail's co-founder plans to make SMS free for all mobile users, with a new service: JaxtrSMS. Sabeer Bhatia, often described as a "serial entrepreneur", claims to have found the best idea of the century since the creation of Hotmail. As a reminder, he co-founded that service, which was sold to Microsoft for 400 million dollars in 1998. His latest idea, an application called JaxtrSMS, is claimed to provide a free, unlimited, international SMS service. The application would complement the VoIP offerings of Jaxtr, a company known for its attractive pricing...

    Read the article

  • Does Windows 8 mark the end of Service Packs for Windows? Windows 7 reportedly won't get an SP2

    The end of Service Packs for Windows? Windows 7 reportedly won't get an SP2. Will Windows 8 put an end to the traditional release of Service Packs that Microsoft has accustomed users to? We already know that Microsoft's new OS will receive, as soon as it becomes generally available tomorrow, a cumulative update that the company has already published. For previous versions of the OS, an update like this would usually have been released as part of a Service Pack 1. According to sources close to a Windows team cited by The Register, developing Service Packs for Windows reportedly no longer figures in Microsoft's plans for...

    Read the article

  • Microsoft adds a new Build service to the hosted version of Team Foundation Server, enabling compilation on Windows Azure

    Microsoft adds a new Build service to the hosted version of Team Foundation Server, making it possible to compile source code on Windows Azure. Microsoft has updated the hosted version of Team Foundation Server on Windows Azure. Team Foundation Server is a collaboration solution covering source control, builds, work item tracking, planning, and performance analysis. A year ago, Microsoft ported the solution to its Windows Azure cloud hosting platform. The company has now announced that a new Build service has been added to Team Foundation. This service reduces the...

    Read the article

  • Google updates the Google Maps API, letting Android applications take full advantage of the mapping service

    Google updates the Google Maps API, which now lets Android applications take full advantage of the mapping service. Google has just announced, in a blog post, an update to its Google Maps API. As a reminder, the Google Maps API is a library for embedding a Google Maps map in an application and using its features, such as zoom, markers, and many others. This API update mainly concerns developers building applications for Android devices. Google Maps Android API v2 gains several new features for taking full advantage of the Google Maps service...

    Read the article

  • Browser connects to WCF service but not my WCF client. What can be the reason?

    - by Slauma
    On a production server (Windows Server 2003 SP2) I can connect to a remote WCF service with Internet Explorer 8: when I browse to the URL http://www.domain.com/Service.svc (where my service listens), I get the expected info page of the service. Connection settings in Internet Explorer only specify "auto detect"; proxy settings are disabled.

    If I start a console application (built with WCF in .NET 4.0) on the same server which also tries to connect to the same WCF service, it fails, telling me that no endpoint was available listening on http://www.domain.com/Service.svc.

    Configuration of the WCF client:

    <configuration>
      <system.serviceModel>
        <bindings>
          <wsHttpBinding>
            <binding name="WSHttpBinding_IMyService" closeTimeout="00:01:00" openTimeout="00:01:00"
                     receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false"
                     transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                     maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                     messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true"
                     allowCookies="false">
              <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                            maxBytesPerRead="4096" maxNameTableCharCount="16384" />
              <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
              <security mode="None">
                <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                <message clientCredentialType="Windows" negotiateServiceCredential="true" />
              </security>
            </binding>
          </wsHttpBinding>
        </bindings>
        <client>
          <endpoint address="http://www.domain.com/Service.scv" binding="wsHttpBinding"
                    bindingConfiguration="WSHttpBinding_IMyService" contract="Service.IMyService"
                    name="WSHttpBinding_IMyService" />
        </client>
      </system.serviceModel>
    </configuration>

    With these settings I can communicate successfully with the remote service from my development machine. Looking around for other options, I found that I can tell the client to use the Internet Explorer proxy settings with:

    <system.net>
      <defaultProxy>
        <proxy usesystemdefault="true" />
      </defaultProxy>
    </system.net>

    It didn't work, and I am not sure I understood this setting correctly. (My hope was that the WCF client would adopt the "auto detect" setting of Internet Explorer and then connect to the service the same way the installed IE does.) I also toggled the useDefaultWebProxy setting in the binding configuration between true and false, with no success.

    Now I am asking for help: what can I do? Which settings might be wrong or missing? What could I test, and how can I get more detailed error messages to better identify the problem? Thank you in advance!

    Read the article
