Search Results

Search found 21364 results on 855 pages for 'service bus'.


  • 13.10 upgrade dropping wifi [on hold]

    - by Daryl
    Almost a complete newb here. After my last upgrade from 12.04 to 13.10 my wifi now randomly drops. The only way I can get a signal back is a shutdown and restart; otherwise it shows no network is even available to connect to. Had no problems until the upgrade. Any help would be appreciated.

        H/W path            Device      Class       Description
        ====================================================
                                        system      h8-1534 (H2N64AA#ABA)
        /0                              bus         2AC8
        /0/0                            memory      64KiB BIOS
        /0/4                            processor   AMD FX(tm)-6200 Six-Core Processor
        /0/4/5                          memory      288KiB L1 cache
        /0/4/6                          memory      6MiB L2 cache
        /0/4/7                          memory      8MiB L3 cache
        /0/d                            memory      10GiB System Memory
        /0/d/0                          memory      DIMM Synchronous [empty]
        /0/d/1                          memory      4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
        /0/d/2                          memory      2GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
        /0/d/3                          memory      4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
        /0/100                          bridge      RD890 PCI to PCI bridge (external gfx0 port B)
        /0/100/0.2                      generic     RD990 I/O Memory Management Unit (IOMMU)
        /0/100/2                        bridge      RD890 PCI to PCI bridge (PCI express gpp port B)
        /0/100/2/0                      display     Turks PRO [Radeon HD 7570]
        /0/100/2/0.1                    multimedia  Turks/Whistler HDMI Audio [Radeon HD 6000 Series]
        /0/100/5                        bridge      RD890 PCI to PCI bridge (PCI express gpp port E)
        /0/100/5/0                      bus         TUSB73x0 SuperSpeed USB 3.0 xHCI Host Controller
        /0/100/11                       storage     SB7x0/SB8x0/SB9x0 SATA Controller [RAID5 mode]
        /0/100/12                       bus         SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        /0/100/12.2                     bus         SB7x0/SB8x0/SB9x0 USB EHCI Controller
        /0/100/13                       bus         SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        /0/100/13.2                     bus         SB7x0/SB8x0/SB9x0 USB EHCI Controller
        /0/100/14                       bus         SBx00 SMBus Controller
        /0/100/14.2                     multimedia  SBx00 Azalia (Intel HDA)
        /0/100/14.3                     bridge      SB7x0/SB8x0/SB9x0 LPC host controller
        /0/100/14.4                     bridge      SBx00 PCI to PCI Bridge
        /0/100/14.5                     bus         SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
        /0/100/15                       bridge      SB700/SB800/SB900 PCI to PCI bridge (PCIE port 0)
        /0/100/15.1                     bridge      SB700/SB800/SB900 PCI to PCI bridge (PCIE port 1)
        /0/100/15.2                     bridge      SB900 PCI to PCI bridge (PCIE port 2)
        /0/100/15.2/0       wlan0       network     RT3290 Wireless 802.11n 1T/1R PCIe
        /0/100/15.2/0.1                 generic     RT3290 Bluetooth
        /0/100/15.3                     bridge      SB900 PCI to PCI bridge (PCIE port 3)
        /0/100/15.3/0       eth0        network     RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
        /0/100/16                       bus         SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        /0/100/16.2                     bus         SB7x0/SB8x0/SB9x0 USB EHCI Controller
        /0/101                          bridge      Family 15h Processor Function 0
        /0/102                          bridge      Family 15h Processor Function 1
        /0/103                          bridge      Family 15h Processor Function 2
        /0/104                          bridge      Family 15h Processor Function 3
        /0/105                          bridge      Family 15h Processor Function 4
        /0/106                          bridge      Family 15h Processor Function 5
        /0/1                scsi0       storage
        /0/1/0.0.0          /dev/sda    disk        1TB WDC WD1002FAEX-0
        /0/1/0.0.0/1                    volume      189MiB Windows FAT volume
        /0/1/0.0.0/2        /dev/sda2   volume      244MiB data partition
        /0/1/0.0.0/3        /dev/sda3   volume      931GiB LVM Physical Volume
        /0/2                scsi2       storage
        /0/2/0.0.0          /dev/cdrom  disk        DVD A DH16ACSHR
        /0/3                scsi6       storage
        /0/3/0.0.0          /dev/sdb    disk        SCSI Disk
        /0/3/0.0.1          /dev/sdc    disk        SCSI Disk
        /0/3/0.0.2          /dev/sdd    disk        SCSI Disk
        /0/3/0.0.3          /dev/sde    disk        MS/MS-Pro
        /0/3/0.0.3/0        /dev/sde    disk
        /1                              power       Standard Efficiency

    I apologize for my newbness. I hope this is enough info for the hardware. Thanks Bruno for pointing out I needed to add more info. If I am lacking anything else please let me know and I'll post it.

    Read the article

  • Oracle Enterprise Manager 12c Testing-as-a-Service Solution

    - by user810030
    With organizations spending as much as 50 percent of their QA time on non-test-related activities like setting up hardware and deploying applications and test tools, the cloud will bring obvious benefits. A key component of Oracle Enterprise Manager, our current Application Quality Management products have been helping our customers with application load testing, functional testing and test process management, but also test data management, data masking and real application testing. These products enable customers to thoroughly test applications and their underlying infrastructure to help ensure the best quality, scalability and availability prior to deployment. Today, Oracle announced the Oracle Enterprise Manager 12c Testing-as-a-Service Solution. This solution will allow users to significantly decrease the time needed to set up a complete test environment, while enhancing testing efficiency. Please read the Press Release mentioned above and join us in our Enterprise Manager LinkedIn Group discussion on this topic (you need to be a member). Or visit our booth this week during the EuroSTAR Software Testing conference in Amsterdam, where we can demo this solution. I hope you find this helpful. Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Advice needed on Process - Service Design

    - by user99314
    Need some advice from experts on designing a flow. Create a service that will read a CSV file which may contain over 6000 rows of individual ids, as shown in the sample below. The service needs to read that file, go to the Oracle database to fetch a vname, vnumber and vid for each id in the CSV, then go to the document repository (Oracle UCM) and download all documents matching that vname, vnumber and vid - there can be zero or more documents per vname/vnumber/vid - and save them on a file system. UCM exposes a web service to download the documents. Finally, create a new CSV appending the filenames that were downloaded for each id. Need to keep track of any errors, but need to make sure to go over all the ids in the CSV and download the documents, skipping a row in case of errors rather than stopping. Need some advice on how to go about designing this: there may be over 6000 rows in a CSV file, and looping over it and hitting the database for each individual id and then hitting UCM may be a bit expensive, so I am open to any idea on how to design this solution. Wondering if messaging can be helpful here, or offloading the process of getting the vname, vnumber and vid to PL/SQL packages, creating staging tables, etc.

    Initial csv that contains ids:

        ID
        12345a
        12s345
        3456fr
        we9795
        we9797

    Final csv output:

        ID       Files Downloaded from UCM
        12345a   a.pdf,b.doc,d.txt
        12s345   a1.pdf,s2.pdf,f4.gif
        3456fr   b.xls
        we9795
        we9797   x.doc

    Thanks
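
    A minimal Java sketch of the flow described above - the table and column names in the SQL, the UcmClient wrapper around the UCM download web service, and the file names are all assumptions for illustration, not a definitive design:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.PrintWriter;
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.List;

        public class UcmBatchDownloader {

            // Hypothetical wrapper around the UCM download web service
            interface UcmClient {
                // Returns the file names it saved for the given vname/vnumber/vid (may be empty)
                List<String> downloadMatching(String vname, String vnumber, String vid, String targetDir) throws Exception;
            }

            static void run(Connection db, UcmClient ucm, String inCsv, String outCsv, String targetDir) throws Exception {
                String sql = "SELECT vname, vnumber, vid FROM some_table WHERE id = ?";   // assumed table/columns
                try (BufferedReader in = new BufferedReader(new FileReader(inCsv));
                     PrintWriter out = new PrintWriter(new FileWriter(outCsv));
                     PreparedStatement ps = db.prepareStatement(sql)) {

                    out.println("ID,Files Downloaded from UCM");
                    in.readLine();                                   // skip the header row
                    String id;
                    while ((id = in.readLine()) != null) {
                        id = id.trim();
                        if (id.isEmpty()) continue;
                        try {
                            ps.setString(1, id);
                            try (ResultSet rs = ps.executeQuery()) {
                                List<String> files = List.of();
                                if (rs.next()) {
                                    files = ucm.downloadMatching(rs.getString("vname"),
                                                                 rs.getString("vnumber"),
                                                                 rs.getString("vid"), targetDir);
                                }
                                // Append a row even when no documents matched
                                out.println(id + "," + String.join(",", files));
                            }
                        } catch (Exception e) {
                            // Log and keep going so one bad id does not stop the whole batch
                            System.err.println("Failed for id " + id + ": " + e.getMessage());
                            out.println(id + ",ERROR");
                        }
                    }
                }
            }
        }

    For a few thousand rows this sequential loop is often acceptable; if it proves too slow, the per-id lookup could be replaced by one bulk join against a staging table loaded from the CSV, and the UCM downloads fanned out to a small worker pool, along the lines the question itself suggests.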

    Read the article

  • Web-Service: How much would you charge? [closed]

    - by jacksbox
    I have to make an offer to a client who wants me to develop a web service for him, and I am having some trouble calculating the pricing. Can you help me? Here is a rough outline of the project: It is a portal on which various artists and entertainers can register and administer their profile (texts, galleries, some embedded videos, choice of categories and radius of offer, etc.). Other users can browse the artists by 3-4 levels of categories and by country/state. If someone wants to hire an artist, he can put them in a "shopping basket" and mail everyone in his basket. The artists can answer with a form on the website (they do not get the other person's email address). Users should be able to comment on and rate the artists, but if an artist wants these comments/ratings visible on his profile he must pay, and after that the admin of the site must activate the comments/ratings. So, with no more information given, what would you charge (approximately)? Edit: It will be developed with PHP/CodeIgniter. A .psd design is already there.

    Read the article

  • is appassembler plugin broken for java service wrapper on windows 64bit?

    - by Paul McKenzie
    Hi, I'm developing on 32bit windows and am using appassembler to create a java service wrapper assembly, and it works ok. But I need to also create a 64bit assembly for deployment to a dev server. In the following config I have substituted the 32bit platform with the 64bit, see the <includes> section. But it no longer places the wrapper jar and dll in the lib folder. If I omit the includes completely, I get linux, solaris, Mac OSX and Win32 libraries, but no win64. Anyone got this working?

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>appassembler-maven-plugin</artifactId>
          <version>1.1-SNAPSHOT</version>
          <configuration>
            <target>${project.build.directory}/appassembler</target>
            <repositoryLayout>flat</repositoryLayout>
            <defaultJvmSettings>
              <initialMemorySize>256M</initialMemorySize>
              <maxMemorySize>1024M</maxMemorySize>
            </defaultJvmSettings>
            <daemons>
              <daemon>
                <id>MyApp</id>
                <mainClass>com.foo.AppMain</mainClass>
                <platforms>
                  <platform>jsw</platform>
                </platforms>
                <generatorConfigurations>
                  <generatorConfiguration>
                    <generator>jsw</generator>
                    <includes>
                      <include>windows-x86-64</include>
                    </includes>
                    <configuration>
                      <property>
                        <name>set.default.REPO_DIR</name>
                        <value>../../repo</value>
                      </property>
                    </configuration>
                  </generatorConfiguration>
                </generatorConfigurations>
              </daemon>
            </daemons>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>generate-daemons</goal>
                <goal>create-repository</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

    Read the article

  • More efficient way of updating UI from Service than intents?

    - by Donal Rafferty
    I currently have a Service in Android that is a sample VOIP client, so it listens out for SIP messages and, if it receives one, starts up an Activity screen with UI components. The following SIP messages then determine what the Activity is to display on the screen. For example, if it is an incoming call it will display Answer or Reject, or for an outgoing call it will show a dialling screen. At the minute I use Intents to let the Activity know what state it should display. An example is as follows:

        Intent i = new Intent();
        i.setAction(SIPEngine.SIP_TRYING_INTENT);
        i.putExtra("com.net.INCOMING", true);
        sendBroadcast(i);

        Intent x = new Intent();
        x.setAction(CallManager.SIP_INCOMING_CALL_INTENT);
        sendBroadcast(x);
        Log.d("INTENT SENT", "INTENT SENT INCOMING CALL AFTER PROCESSINVITE");

    So the activity will have a broadcast receiver registered for these intents and will switch its state according to the last intent it received. Sample code as follows:

        SipCallListener = new BroadcastReceiver(){
            @Override
            public void onReceive(Context context, Intent intent) {
                String action = intent.getAction();
                if(SIPEngine.SIP_RINGING_INTENT.equals(action)){
                    Log.d("cda ", "Got RINGING action SIPENGINE");
                    ringingSetup();
                }
                if(CallManager.SIP_INCOMING_CALL_INTENT.equals(action)){
                    Log.d("cda ", "Got PHONE RINGING action");
                    incomingCallSetup();
                }
            }
        };

        IntentFilter filter = new IntentFilter(CallManager.SIP_INCOMING_CALL_INTENT);
        filter.addAction(CallManager.SIP_RINGING_CALL_INTENT);
        registerReceiver(SipCallListener, filter);

    This works, however it does not seem very efficient: the Intents get broadcast system wide, and having Intents fire for different states could become inefficient the more of them I have to include, as well as adding complexity. So I was wondering if there is a different, more efficient and cleaner way to do this? Is there a way to keep Intents broadcasting only inside an application? Would callbacks be a better idea? If so, why, and in what way should they be implemented?
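
    One option for keeping the broadcasts in-process is LocalBroadcastManager from the Android support library (android.support.v4.content.LocalBroadcastManager). A minimal sketch, reusing the same SIPEngine/CallManager action constants and SipCallListener as above - this is an illustration of the approach rather than a drop-in change:

        // In the Service: deliver the state change only within this application's process
        LocalBroadcastManager lbm = LocalBroadcastManager.getInstance(this);
        Intent i = new Intent(CallManager.SIP_INCOMING_CALL_INTENT);
        i.putExtra("com.net.INCOMING", true);
        lbm.sendBroadcast(i);

        // In the Activity: register/unregister the same receiver against the local manager
        @Override
        protected void onResume() {
            super.onResume();
            IntentFilter filter = new IntentFilter(CallManager.SIP_INCOMING_CALL_INTENT);
            filter.addAction(SIPEngine.SIP_RINGING_INTENT);
            LocalBroadcastManager.getInstance(this).registerReceiver(SipCallListener, filter);
        }

        @Override
        protected void onPause() {
            LocalBroadcastManager.getInstance(this).unregisterReceiver(SipCallListener);
            super.onPause();
        }

    A bound service exposing a listener/callback interface is the other common route; it avoids Intents entirely at the cost of managing the binding lifecycle yourself.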

    Read the article

  • WCF Endpoints & Binding Configuration Issues

    - by CodeAbundance
    I am running into a very strange issue here, folks. For simplicity I created a project for the sole purpose of testing the issue outside the framework of a larger application, and still encountered what is either a bug in WCF within Visual Studio 2010 or something related to my WCF newbie skill set : ) Here is the issue: I have a WCF endpoint I created running inside of an MVC3 project called "SimpleMethod". The method runs inside of a .svc file on the root of the application and it returns a bool. Using the "WCF Service Configuration Editor" I have added the endpoint to my Web.config along with a binding called "LargeImageBinding". Here is the service:

        [OperationContract]
        public bool SimpleMethod()
        {
            return true;
        }

    And the Web.config generated by the Config Tool:

        <system.serviceModel>
          <bindings>
            <wsHttpBinding>
              <binding name="LargeImageBinding" closeTimeout="00:10:00" />
            </wsHttpBinding>
          </bindings>
          <services>
            <service name="WCFEndpoints.ServiceTestOne">
              <endpoint address="/ServiceTestOne.svc" binding="wsHttpBinding"
                        bindingConfiguration="LargeImageBinding"
                        contract="WCFEndpoints.IServiceTestOne" />
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />

    The service renders fine and you can see the endpoint when you navigate to: http://localhost:57364/ServiceTestOne.svc - Now the issue occurs when I create a separate project to consume the service. I add a service reference to a running instance of the above project and point it to: http://localhost:57364/ServiceTestOne.svc Here is the weird part: the service reference generates just fine, but in the Web.config the endpoint that is generated looks like this:

        <client>
          <endpoint address="http://localhost:57364/ServiceTestOne.svc/ServiceTestOne.svc"
                    binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IServiceTestOne"
                    contract="ServiceTestOne.IServiceTestOne" name="WSHttpBinding_IServiceTestOne">

    As you can see, it lists the "ServiceTestOne.svc" portion of the address twice! When I make a call to the service I get the following error: The remote server returned an error: (404) Not Found. I tried removing the extra "/ServiceTestOne.svc" at the end of the endpoint address in the above config, and I get the same exact error. Now what DOES work is if I go back to the WCF application and remove the custom endpoint and binding references in the Web.config (everything in the "services" and "bindings" tags), then go back to the consumer application, update the reference to the service and make the call to SimpleMethod()....BOOM, works like a charm and I get back a bool set to true. The thing is, I need to make custom binding configurations in order to allow for access to the service outside of the defaults, and from what I can tell, any attempt to create custom bindings makes the endpoints seem to run fine, but fail when an actual method call is made. Can anyone see any flaw in how I am putting this together? Thank you for your time - I have been running in circles with this for about a week!

    Read the article

  • Why does this service refuse to start on Windows server 2003?

    - by PenguinCoder
    We have a Windows 2003 server with Cebos MQ1 (ver. 7 and ver. GRI) products installed that have been operational for years. After installing the Microsoft 2010 C++ Redistributable package needed for other development, the MQ1 GRI service now fails to start. Event logs showed that two additional updates (.NET 4 and the 2010 C++ Redistributable SP2) were installed by the redistributable as well. As soon as we discovered the MQ1 service was not starting properly, we removed these three installed packages. However the service still does not start; the dialog that pops up states 'The service started then stopped.' Event logs when we attempt to start the service show nothing; i.e. no errors, crashes, failures, or other information related to this service. Executing MQ1Serv.exe directly reports 'Missing command line operation, must specify install, uninstall and company abbreviation.' sc query MQ1Service(GRI) shows a clean exit for the Win32ExitCode of 0x0. Attempting to reinstall the client or server software gives an error of 'The procedure entry point ReInitializeCriticalSection could not be located in the dynamic link library KERNEL32.dll.' at the 'Registering Libraries' stage. At this point, further research stated that the required function is in URL.dll and to verify the library is not corrupted. Running sfc /scannow on the server replaced a few DLLs, including URL.dll, with versions from 2005. This actually broke other applications, which required a reinstall (one of them being IE 7). After the reinstall and updates, url.dll is version 7.0.5730.13 (2009) and kernel32.dll is version 5.2.3790.4480 (2009). The MQ1 GRI service still will not start, giving the same error as before, 'Service started then stopped'. Running a disassembler on kernel32.dll and url.dll shows no functions named ReinitializeCriticalSection. Attempting the reinstall of the MQ1 client and server, and starting the service again, fails once more. However, setting the compatibility mode on the MQ1 client install exe to 'Windows 95' actually gets the program to install; setting the compatibility mode on the MQ1 server service does not enable it to start. I have been researching this problem for nearly a week and, besides the advice to scan and replace url.dll, have come to no successful conclusions. This service was operational prior to the 2010 C++ install, without any additional parameters or settings; removing the C++ install and all service packs/updates it installed silently still does not correct the issue of the MQ1 GRI service not starting. Q: Has anyone else run into this or a similar issue while attempting to get a service initialized? What have I overlooked, or what else can I try in order to get this service started?

    Read the article

  • How to create item in SharePoint2010 document library using SharePoint Web service

    - by ybbest
    Today, I’d like to show you how to create an item in a SharePoint 2010 document library using the SharePoint web services. Originally, I thought I could use WebSvcLists (lists.asmx), which provides methods for working with lists and list data. However, after a bit of Googling, I realized that I need to use WebSvcCopy (copy.asmx). Here is the code used:

        private const string siteUrl = "http://ybbest";

        private static void Main(string[] args)
        {
            using (CopyWSProxyWrapper copyWSProxyWrapper = new CopyWSProxyWrapper(siteUrl))
            {
                copyWSProxyWrapper.UploadFile("TestDoc2.pdf",
                    new[] {string.Format("{0}/Shared Documents/TestDoc2.pdf", siteUrl)},
                    Resource.TestDoc, GetFieldInfos().ToArray());
            }
        }

        private static List<FieldInformation> GetFieldInfos()
        {
            var fieldInfos = new List<FieldInformation>();
            // The InternalName, DisplayName and FieldType are all required to make it work
            fieldInfos.Add(new FieldInformation
            {
                InternalName = "Title",
                Value = "TestDoc2.pdf",
                DisplayName = "Title",
                Type = FieldType.Text
            });
            return fieldInfos;
        }

    Here is the code for the proxy wrapper:

        public class CopyWSProxyWrapper : IDisposable
        {
            private readonly string siteUrl;

            public CopyWSProxyWrapper(string siteUrl)
            {
                this.siteUrl = siteUrl;
            }

            private readonly CopySoapClient proxy = new CopySoapClient();

            public void UploadFile(string testdoc2Pdf, string[] destinationUrls, byte[] testDoc,
                                   FieldInformation[] fieldInformations)
            {
                using (CopySoapClient proxy = new CopySoapClient())
                {
                    proxy.Endpoint.Address = new EndpointAddress(String.Format("{0}/_vti_bin/copy.asmx", siteUrl));
                    proxy.ClientCredentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials;
                    proxy.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;
                    CopyResult[] copyResults = null;
                    try
                    {
                        proxy.CopyIntoItems(testdoc2Pdf, destinationUrls, fieldInformations, testDoc, out copyResults);
                    }
                    catch (Exception e)
                    {
                        System.Console.WriteLine(e);
                    }
                    if (copyResults != null)
                        System.Console.WriteLine(copyResults[0].ErrorMessage);
                    System.Console.ReadLine();
                }
            }

            public void Dispose()
            {
                proxy.Close();
            }
        }

    You can download the source code here.

    ****** Update **********

    It seems to be a bug that you cannot set the content type when creating a document item using copy.asmx. In SP2007 the field type was Choice; however, in SP2010 it is actually Computed. I have tried using the Computed field type with no luck. I have also tried sending the ContentTypeId and this does not work. You might have to write your own web service to handle this. You can check my previous blog on how to get started with your own custom WCF in SP2010 here.

    References:
    SharePoint 2010 Web Services
    SharePoint 2007 Web Services
    SharePoint MSDN Forum

    Read the article

  • What’s new in IIS8, Perf, Indexing Service-Week 49

    - by OWScott
    You can find this week’s video here. After some delays in the publishing process, week 49 is finally live. This week I'm taking Q&A from viewers, starting with what's new in IIS8, a question on enable32BitAppOnWin64, performance settings for ASP.NET, the ARR Helper, and Indexing Services. Starting this week, for the remaining four weeks of the 52 week series, I'll be taking questions and answers from the viewers. Already a number of questions have come in. This week we look at five topics.

    Pre-topic: We take a look at the new features in IIS8. Last week Internet Information Services (IIS) 8 Beta was released to the public. This week's video touches on the upcoming features in the next version of IIS. Here’s a link to the blog post which was mentioned in the video.

    Question 1: In a number of places (http://learn.iis.net/page.aspx/201/32-bit-mode-worker-processes/, http://channel9.msdn.com/Events/MIX/MIX08/T06), I've seen that enable32BitAppOnWin64 is recommended for performance reasons. I'm guessing it has to do with memory usage... but I never could find a detailed explanation of why this is recommended (even Microsoft books are vague on this topic - they just say do it, but provide no reason why it should be done). Do you have any insight into this? (Predrag Tomasevic)

    Question 2: Do you have any recommendations on modifying aspnet.config and machine.config to deliver better performance when it comes to "high number of concurrent connections"? I've implemented recommendations for modifying machine.config from this article (http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx - ASP.NET Process Configuration Optimization section)... but I would gladly listen to more recommendations if you have them. (Predrag Tomasevic)

    Question 3: Could you share more of your experience with the ARR Helper? I'm specifically interested in configuring the ARR Helper (for example - how to accept X-Forwarded-For only from certain IPs (proxies you trust)). (Predrag Tomasevic)

    Question 4: What is the replacement for the Indexing Service to use in coding web search pages on a Windows 2008 R2 server? (Susan Williams) Here’s the link that was mentioned: http://technet.microsoft.com/en-us/library/ee692804.aspx

    This is now week 49 of a 52 week series for the web pro. You can view past and future weeks here: http://dotnetslackers.com/projects/LearnIIS7/ You can find this week’s video here.

    Read the article

  • 503.1 Service Unavailable Error Resolution

    - by Lee Brandt
    I was having a hell of a time tonight with IIS on my development laptop. I don’t remember doing anything to change the IIS settings, and I don’t use IIS that much on my dev machine. Usually Cassini is enough for testing my development efforts, but tonight I needed to replicate a problem that seems to stem from an x86 vs. x64 mismatch, so I went to create an IIS site pointed to my dev folder. When I did, I got a “503.1 Service Unavailable” error. The first thing I did was go over all my settings to make sure I didn’t screw something up when I set up the site. It was pointing to the right place, and the app pool settings seemed to be alright. However, when I got the 503.1 error and went back to my app pool list, I saw that the app pool I was using was stopped again. I must have started and run it a dozen times to verify that I wasn’t seeing things. After having a colleague look at it and not finding an answer, I started poking around Google. I came across a post from Phil Haack about the same error. His fix was not mine, however: when I ran his command on the CLI, I didn’t see the reserved routes for HTTP.SYS there. Finally, I looked in the event viewer (where I should have looked as soon as I saw that my app pool was stopping) and saw an error in there. For the IIS-W3SVC-WP source I saw: The worker process for application pool 'DefaultAppPool' encountered an error 'Cannot read configuration file due to insufficient permissions' trying to read configuration data from file '\\?\C:\Windows\Microsoft.NET\Framework64\v4.0.30319\CONFIG\machine.config', line number '0'. The data field contains the error code. So I went to that path and saw a little lock on the file icon. I opened up the security tab of the file properties and saw that the file was missing the IIS_IUSRS group. On a machine that was working correctly, I verified that it indeed had the IIS_IUSRS group set to allow Read and Read & Execute. So I set mine up the same and voila! Hopefully this helps somebody else, too.

    Read the article

  • Installing Yaws server on Ubuntu 12.04 (Using a cloud service)

    - by Lee Torres
    I'm trying to get a Yaws web server working on a cloud service (Amazon AWS). I've compiled and installed a local copy on the server. My problem is that I can't get Yaws to run on either port 8000 or port 80. I have the following configuration in yaws.conf:

        port = 8000
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    This produces the following successful launch/result:

        Eshell V5.8.5 (abort with ^G)
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Using config file /home/ubuntu/yaws.conf
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Ctlfile : /home/ubuntu/.yaws/yaws/default/CTL
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:8000 for <3> virtual servers:
        - http://domU-12-31-39-0B-1A-F6:8000 under /home/ubuntu/yaws/www/trial
        -
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:4443 for <1> virtual servers:
        -

    When I try to access the URL (http://ec2-72-44-47-235.compute-1.amazonaws.com), it never connects. I've tried using paping to check if port 80 or 8000 is open (http://code.google.com/p/paping/) and I get a "Host can not be resolved" error, so obviously something isn't working. I've also tried setting yaws.conf so it listens on port 80, like this:

        port = 80
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    and I get the following error:

        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Yaws: Failed to listen 0.0.0.0:80 : {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Can't listen to socket: {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =INFO REPORT==== 16-Sep-2012::17:24:47 ===
        application: yaws
        exited: {shutdown,{yaws_app,start,[normal,[]]}}
        type: permanent
        {"Kernel pid terminated",application_controller,"{application_start_failure,yaws,{shutdown,{yaws_app,start,[normal,[]]}}}"}

    I've also opened up port 80 using iptables. Running sudo iptables -L gives this output:

        Chain INPUT (policy ACCEPT)
        target   prot opt source                        destination
        ACCEPT   tcp  --  ip-192-168-2-0.ec2.internal   ip-192-168-2-16.ec2.internal  tcp dpt:http
        ACCEPT   tcp  --  0.0.0.0                       anywhere                      tcp dpt:http
        ACCEPT   all  --  anywhere                      anywhere                      ctstate RELATED,ESTABLISHED
        ACCEPT   tcp  --  anywhere                      anywhere                      tcp dpt:http
        ACCEPT   tcp  --  anywhere                      anywhere                      tcp dpt:http

        Chain FORWARD (policy ACCEPT)
        target   prot opt source                        destination

        Chain OUTPUT (policy ACCEPT)
        target   prot opt source                        destination

    In addition, I've gone to the security group panel in the Amazon AWS configuration area and added ports 80, 8000, and 8080 with IP source 0.0.0.0. Please note: if you try to access the URL of the virtual server now, it likely won't connect because I'm not currently running the yaws daemon. I've tested it when I've run yaws either through yaws or yaws -i. Thanks for the patience

    Read the article

  • Characteristics of a Web service that promote reusability and change

    Characteristics of a Web service that promote reusability and change:

    - Standardized data exchange formats (XML, JSON)
    - Standardized communication protocols (SOAP, REST)
    - Promotes loosely coupled systems

    Standardized Data Exchange Formats (XML, JSON)

    XML: W3.org defines Extensible Markup Language (XML) as a simplistic text format derived from SGML. XML was designed to solve challenges found in large-scale electronic publishing. In addition, XML is playing an important role in the exchange of data, primarily focusing on data exchange on the web.

    JSON: JavaScript Object Notation (JSON) is a human-readable, text-based standard designed for data interchange. This format is used for serializing and transmitting data over a network connection in a structured format. The primary use of JSON is to transmit data between a server and a web application. JSON is an alternative to XML.

    Standardized Communication Protocols (SOAP, REST)

    SOAP: W3Schools.com defines SOAP as a simple XML-based protocol. This protocol lets applications exchange data over HTTP. SOAP provides a way to communicate between applications running on different operating systems, with different technologies and programming languages.

    REST: In 2007, Stefan Tilkov defined Representational State Transfer (REST) as a set of principles that outlines how Web standards are supposed to be used. Using REST in an application will ensure that it exploits the Web's architecture to its benefit.

    Promotes Loosely Coupled Systems

    "Loose coupling as an approach to interconnecting the components in a system or network so that those components, also called elements, depend on each other to the least extent practicable. Coupling refers to the degree of direct knowledge that one element has of another." (TechTarget.com, 2007)

    "Loosely coupled system can be easily broken down into definable elements. The extent of coupling in a system can be measured by mapping the maximum number of element changes that can occur without adverse effects. Examples of such changes include adding elements, removing elements, renaming elements, reconfiguring elements, modifying internal element characteristics and rearranging the way in which elements are interconnected." (TechTarget.com, 2007)

    References:
    W3C. (2011). Extensible Markup Language (XML). Retrieved from W3.org: http://www.w3.org/XML/
    W3Schools.com. (2011). SOAP Introduction. Retrieved from W3Schools.com: http://www.w3schools.com/soap/soap_intro.asp
    Tilkov, Stefan. (2007). A Brief Introduction to REST. Retrieved from Infoq.com: http://www.infoq.com/articles/rest-introduction
    TechTarget.com. (2011). loose coupling. Retrieved from TechTarget.com: http://searchnetworking.techtarget.com/definition/loose-coupling
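
    A small Java illustration of these points, in the JAX-RS style already used elsewhere on this page; the Greeting type and the resource class are made up for the example, and the point is only that one loosely coupled resource can serve either standardized format depending on what the client asks for:

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;
        import javax.xml.bind.annotation.XmlRootElement;

        // Hypothetical data type: JAXB produces the XML form; a JSON provider produces the JSON form
        @XmlRootElement
        class Greeting {
            public String message = "Hello";
            public String name = "world";
        }

        // Hypothetical resource: one implementation, two standardized representations
        @Path("greeting")
        public class GreetingResource {

            @GET
            @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
            public Greeting get() {
                return new Greeting();
            }
        }

    A client that sends Accept: application/json gets something like {"message":"Hello","name":"world"}, while one that asks for application/xml gets the equivalent <greeting> document; neither client needs direct knowledge of how the resource is implemented, which is the loose coupling described above.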

    Read the article

  • Sharing one static ip for both ftp and www service

    - by user11496
    Trying to figure out how to update the zone record and configure the webserver so that one application on the webserver is accessible by the public. I'm completely not good at NS/DNS/NAT/firewall/routing/port forwarding/networking etc. "faraday" is the intranet name. Everyone within the local network can access all applications hosted on "faraday". The hostname for the webserver is "www", the FTP server is "ftpserver". Both servers run the RHEL4 OS. The goal is to allow anyone outside the company network (the public) to access only one of the many applications on "faraday". Hope somebody can help me with some of the questions below, if not all.

    1. From the ZoneEdit record, the static IP is used by FTP now. Can I use the same existing static IP - 219.95.10.100 - for the web service?
    2. Currently anyone who enters "http://www.abc.com.my" will be directed to "http://www.abc.com". I don't want this to change.
    3. Currently, no one else, except employees on the local network, can access the "faraday" web pages. How do I configure it so that when anyone types "http://thisapp.abc.com.my" in their web browser, the URL will lead them to "http://faraday/thisapp" (the application folder is /var/www/html/thisapp on the RHEL4 web server)? If possible, how do I set it so the URL will continue to show "http://thisapp.abc.com.my" instead of "http://faraday/thisapp"?
    4. How do I limit/restrict users (those who are not from the local network) so they only have access to "http://thisapp.abc.com.my", but not "http://faraday" or "http://faraday/anotherapp", etc.? What configuration changes are needed in /etc/httpd.conf on the web server?

    The company domain name is "abc.com.my". Following are the zone records on www.zoneedit.com:

        Subdomain   Type    IP
        sdsl        A       219.95.10.100
        ftp         CNAME   sdsl.abc.com.my
        @           NS      ns3.zoneedit.com
        @           NS      ns7.zoneedit.com

    WebForward record:

        New Domain        Destination          Cloaked
        www.abc.com.my    http://www.abc.com   N

    On my local DNS server, there are 2 zone files: abc.com.my and pnmy.abc.com.

        > cat abc.com.my.zone
        ftp         CNAME   ftp.pnmy.abc.com.
        sdsl        A       219.95.10.100

        > cat pnmy.abc.com.zone
        ftp         CNAME   ftpserver
        ftpserver   A       172.16.5.1
        faraday     CNAME   www
        www         A       172.16.5.2

    Read the article

  • SharePoint 2010 search crawl not working

    - by J. Hammond
    I have the following set up:

    - 1 x Windows Server 2008 R2 running SQL
    - 1 x Windows Server 2008 R2 running SharePoint 2010

    I have an issue with the search service application: the crawl appears to run for a never-ending amount of time with 0 successes and 0 failures. Checking the Search Application Topology, I find that "Query Component 0" is "Not Responding". I have tried the following:

    - I have ensured that the index directory has the right permissions applied to it and the search service account is in the right groups to consume those permissions.
    - I have re-created the search service application.
    - I have restarted the search service manually.
    - I have trawled the net as much as possible to find a solution but as of yet have not come across something that has resolved this issue.

    Any input will be very much appreciated

    Read the article

  • Start a screen through svcadm with Solaris 11

    - by Sephreph
    I am running into a problem when trying to start a detached screen through a Solaris 11 service. This service controls nginx. When I reboot the system, the screen doesn't start, but if I issue svcadm disable nginx and then svcadm enable nginx manually, it does. The rest of the init script functions correctly on a reboot (the nginx daemon starts, etc.). The part of the service that triggers the screen looks like this:

        case "$1" in
        start)
            echo "Starting Nginx Logger: \c"
            /usr/bin/screen -S nginxLogger -d -m /opt/php-5.3.10/bin/php $loggingProg
            LogRetVal=$?
            [ $LogRetVal -eq 0 ] && echo "ok" || echo "failed"
            ....

    The log (/var/svc/log/network-nginx\:default.log) shows that $LogRetVal is returning 0, and $loggingProg just points to a PHP script. If it matters, when I manually restart the service, I'm logged in as root. I'm unsure how to check if it's a permission issue (I'm new to Solaris; I've recently switched from CentOS/RHEL).

    Read the article

  • Windows Event Log wrong Source column value

    - by O.O
    In the Event Viewer in Windows 7 there is a Source column that is set by my Windows Service application. The value is set to TOS, and usually when a log entry is associated with my application it has TOS as the Source column value. However, when the service fails to start (or some other kind of error occurs) I get a Source of one of the following values:

    - Application Error
    - Service Control Manager
    - .NET Runtime

    I don't understand why the value is not always TOS. Also, is it possible to force it to use TOS every time?

    Read the article

  • Srvany Starts but Application doesn't

    - by mharran
    I used Instsrv and Srvany to create a service on W2008; the Srvany service starts okay but the application does not start. The application is TeamSpeak 3, btw. I don't think it's an issue with my W2008 setup, as I have a previous version of the application set up the exact same way and running perfectly. Also, I have no problem manually starting the application, even when I copy and paste into the 'Run' box the path used for the application by Srvany. I looked at Events but there is nothing there except notification that the service has entered the running state; I didn't really expect any errors, as the service has started even though the application hasn't. Any suggestions on what could be the problem?

    Read the article

  • Allow members of a group to be unlocked by a specific account on AD

    - by JohnLBevan
    Background: I'm creating a service to allow support staff to enable their firecall accounts out of hours (i.e. if there's an issue in the night and we can't get hold of someone with admin rights, another member of the support team can enable their personal firecall account on AD, which has previously been set up with admin rights). This service also logs a reason for the change, alerts key people, and a bunch of other bits to ensure that this change of access is audited and that these temporary admin rights are used in the proper way. To do this I need the service account which my service runs under to have permission to enable users on Active Directory. Ideally I'd like to lock this down so that the service account can only enable/disable users in a particular AD security group.

    Question: How do you grant an account access to enable/disable users who are members of a particular security group in AD?

    Backup question: If it's not possible to do this by security group, is there a suitable alternative? i.e. could it be done by OU, or would it be best to write a script to loop through all members of the security group and update the permissions on the objects (firecall accounts) themselves? Thanks in advance.

    Additional tags (I don't yet have access to create new tags here, so listing them below to help with keyword searches until the question can be tagged and this bit edited/removed): DSACLS, DSACLS.EXE, FIRECALL, ACCOUNT, SECURITY-GROUP

    Read the article

  • Create Hello World with RESTful web service and Jersey

    - by Harry Pham
    I am following the tutorial here on how to create a web service using RESTful web services and Jersey, and I am getting kind of stuck. The code is from HelloWorld3 in the tutorial I linked above. Here is the code. I use NetBeans 6.8 + GlassFish v3.

    RESTGreeting.java, created using JAXB. This class represents the HTML message in Java:

        package com.sun.rest;

        import javax.xml.bind.annotation.XmlRootElement;
        import javax.xml.bind.annotation.XmlElement;

        @XmlRootElement(name = "restgreeting")
        public class RESTGreeting {

            private String message;
            private String name;

            /** Creates a new instance of Greeting */
            public RESTGreeting() {
            }

            /** Creates a new instance of Greeting with parameters message and name */
            public RESTGreeting(String message, String name) {
                this.message = message;
                this.name = name;
            }

            /** Getter for message; returns the value of message */
            @XmlElement
            public String getMessage() {
                return message;
            }

            public void setMessage(String message) {
                this.message = message;
            }

            /** Getter for name; returns name */
            @XmlElement
            public String getName() {
                return name;
            }

            public void setName(String name) {
                this.name = name;
            }
        }

    HelloGreetingService.java creates a RESTful web service that returns an HTML message:

        package com.sun.rest;

        import javax.ws.rs.core.Context;
        import javax.ws.rs.core.UriInfo;
        import javax.ws.rs.Consumes;
        import javax.ws.rs.PUT;
        import javax.ws.rs.Path;
        import javax.ws.rs.GET;
        import javax.ws.rs.Produces;
        import javax.ws.rs.QueryParam;

        @Path("helloGreeting")
        public class HelloGreetingService {

            @Context
            private UriInfo context;

            /** Creates a new instance of HelloGreetingService */
            public HelloGreetingService() {
            }

            /**
             * Retrieves representation of an instance of com.sun.rest.HelloGreetingService
             * @return an instance of java.lang.String
             */
            @GET
            @Produces("text/html")
            public RESTGreeting getHtml(@QueryParam("name") String name) {
                return new RESTGreeting(getGreeting(), name);
            }

            private String getGreeting() {
                return "Hello ";
            }

            /**
             * PUT method for updating or creating an instance of HelloGreetingService
             * @param content representation for the resource
             * @return an HTTP response with content of the updated or created resource.
             */
            @PUT
            @Consumes("text/html")
            public void putHtml(String content) {
            }
        }

    However, when I deploy it on GlassFish and run it, it generates an exception. I tried to debug using NetBeans 6.8 and figured out that this line in HelloGreetingService.java causes the exception:

        return new RESTGreeting(getGreeting(), name);

    But I am not sure why.
    Here is the stack trace:

        javax.ws.rs.WebApplicationException
            at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:268)
            at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1029)
            at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:941)
            at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:932)
            at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:384)
            at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:451)
            at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:632)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
            at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
            at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641)
            at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97)
            at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185)
            at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:332)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:233)
            at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165)
            at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791)
            at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693)
            at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954)
            at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170)
            at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135)
            at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102)
            at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88)
            at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76)
            at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53)
            at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57)
            at com.sun.grizzly.ContextTask.run(ContextTask.java:69)
            at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330)
            at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309)
            at java.lang.Thread.run(Thread.java:637)

    Read the article

  • How to expose MEX when I need the service to have NTLM authentication

    - by Ram Amos
    I'm developing a WCF service that is both RESTful and SOAP, and both endpoints need to use NTLM authentication. I also want to expose a MEX endpoint so that others can easily reference the service and work with it. Now, when I set IIS to require Windows authentication I can use the REST service and make calls to the service successfully, but when I want to reference the service with SVCUTIL it throws an error saying that it requires anonymous access. Here's my web.config:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true"/>
          <bindings>
            <basicHttpBinding>
              <binding name="basicHttpBinding" maxReceivedMessageSize="214748563"
                       maxBufferSize="214748563" maxBufferPoolSize="214748563">
                <security mode="TransportCredentialOnly">
                  <transport clientCredentialType="Ntlm">
                  </transport>
                </security>
              </binding>
            </basicHttpBinding>
            <webHttpBinding>
              <binding name="webHttpBinding" maxReceivedMessageSize="214748563"
                       maxBufferSize="214748563" maxBufferPoolSize="214748563">
                <security mode="TransportCredentialOnly">
                  <transport clientCredentialType="Ntlm">
                  </transport>
                </security>
              </binding>
            </webHttpBinding>
            <mexHttpBinding>
              <binding name="mexHttpBinding"></binding>
            </mexHttpBinding>
          </bindings>
          <standardEndpoints>
            <webHttpEndpoint>
              <standardEndpoint name="" automaticFormatSelectionEnabled="true" helpEnabled="True">
              </standardEndpoint>
            </webHttpEndpoint>
          </standardEndpoints>
          <services>
            <service name="Intel.ResourceScheduler.Service" behaviorConfiguration="Meta">
              <clear />
              <endpoint address="soap" name="SOAP" binding="basicHttpBinding"
                        contract="Intel.ResourceScheduler.Service.IResourceSchedulerService"
                        listenUriMode="Explicit" />
              <endpoint address="" name="rest" binding="webHttpBinding" behaviorConfiguration="REST"
                        contract="Intel.ResourceScheduler.Service.IResourceSchedulerService" />
              <endpoint address="mex" name="mex" binding="mexHttpBinding" behaviorConfiguration=""
                        contract="IMetadataExchange" />
            </service>
          </services>
          <behaviors>
            <endpointBehaviors>
              <behavior name="REST">
                <webHttp />
              </behavior>
              <behavior name="WCFBehavior">
                <dataContractSerializer maxItemsInObjectGraph="2147483647" />
              </behavior>
            </endpointBehaviors>
            <serviceBehaviors>
              <behavior name="Meta">
                <serviceMetadata httpGetEnabled="true"/>
              </behavior>
              <behavior name="REST">
                <dataContractSerializer maxItemsInObjectGraph="2147483647" />
              </behavior>
              <behavior name="WCFBehavior">
                <serviceMetadata httpGetEnabled="true"/>
                <dataContractSerializer maxItemsInObjectGraph="2147483647" />
              </behavior>
              <behavior name="">
                <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
                <serviceMetadata httpGetEnabled="true" />
                <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>

    Any help will be appreciated.

    Read the article

  • Should all new web projects build their backend based on xml/json result sets?

    - by Blankman
    If you were building a new SaaS project, would it make sense to start with all of the backend services returning XML/JSON? These days you need to build for both the web and mobile devices, and with a backend that is built from the start to return XML and JSON you are ready to go mobile (all services contain the business logic, so you won't be repeating anything). The web side would then be MVC, so the controller would just route the request to your service backend and convert the JSON or XML to HTML. The obvious downside is that you have to build a backend, and then another web project that calls your backend. But this also works in your favor, as it forces you to separate your concerns and not leak business logic into your controller/view layer. Thoughts?
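
    As a rough illustration of the split being described, a thin web-tier handler that just calls a JSON backend and wraps the result in HTML might look like the Java sketch below; the backend URL, class names and the use of java.net.http are assumptions for the example, not a prescribed stack:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class ProductPageController {

            private final HttpClient http = HttpClient.newHttpClient();

            // The controller owns no business logic: it forwards the request to the
            // backend service and turns the JSON it gets back into HTML.
            public String show(String productId) throws Exception {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://backend.example.com/products/" + productId)) // assumed backend endpoint
                        .header("Accept", "application/json")
                        .build();

                HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());

                // A real app would parse the JSON and render a view; this only shows
                // where the translation to HTML happens. A mobile client would call
                // the same backend endpoint and use the JSON directly.
                return "<html><body><pre>" + response.body() + "</pre></body></html>";
            }
        }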

    Read the article

  • Verizon Business Delivers New Sales and Support Tools

    - by michael.seback
    Verizon Business Delivers New Sales and Support Tools and Improves System Performance by 35%

    Verizon Business, a unit of Verizon Communications, is a global leader in communications and IT solutions. With one of the world's most connected internet protocol networks, Verizon Business delivers communications, IT, security, and network solutions to many of the largest businesses and governments.

    ..."Our work with Accenture to upgrade our Oracle systems has improved system performance significantly. In a recent survey, 84% of users said performance was 'faster' or 'much faster.' Plus, our sales and support staff have new tools to improve productivity and customer service, which ultimately drives customer retention and revenue." - Rob Moore, Director, Verizon Business

    ...Read more.

    Read the article

  • Best design for a memory resident tool

    - by Andrew S.
    I apologize if this tends more toward design than programming, but here goes. What design would you recommend for a database that:

    - Is memory resident
    - Must run on Windows, Linux and (at a stretch) the Mac
    - Accepts multiple queries simultaneously
    - Has minimum overhead, since a search is expected to take <0.25s

    This program implements a domain-specific search. Think of it as a database, but one that takes advantage of domain-specific information to outperform a conventional database search (for example, with custom Oracle indexing). We have a custom data structure for our data. Our prototype is a simple exe that constructs the database in memory each time it is run. We were thinking that perhaps this program would suffice, but augmented with sockets so it can listen for queries. This database will be static; its contents will change infrequently. We expect queries, and the solution, to be delivered via a web service.
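
    A minimal Java sketch of the "existing exe plus a socket listener" idea, assuming the custom in-memory structure can sit behind a simple lookup; the port, the one-query-per-line protocol and the thread-pool size are illustrative assumptions only:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class InMemorySearchServer {

            // Stand-in for the custom domain-specific structure; built once at startup
            private final Map<String, String> index = new ConcurrentHashMap<>();

            public static void main(String[] args) throws Exception {
                InMemorySearchServer server = new InMemorySearchServer();
                server.index.put("example", "result-for-example");   // load the real data here

                ExecutorService pool = Executors.newFixedThreadPool(8);   // handles concurrent queries
                try (ServerSocket listener = new ServerSocket(9999)) {    // assumed port
                    while (true) {
                        Socket client = listener.accept();
                        pool.submit(() -> server.handle(client));
                    }
                }
            }

            // One query per line in, one answer per line out; reads never mutate the
            // static index, so no locking is needed on the hot path.
            private void handle(Socket client) {
                try (client;
                     BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String query;
                    while ((query = in.readLine()) != null) {
                        out.println(index.getOrDefault(query, "NOT_FOUND"));
                    }
                } catch (Exception e) {
                    // per-connection failures are swallowed so one bad client cannot take the server down
                }
            }
        }

    Since the data is static, the web-service front end can simply open a connection to this listener per request (or keep a small pool of them) and pass each query through.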

    Read the article

  • Which hosted chat solutions offer the following?

    - by David
    I am looking for a chat room solution similar to the one on StackExchange, to facilitate more responsive communication between the contributors on Open-Org.com. My criteria are the following:

    1. No Flash (this rules out more than half)
    2. Full history (meaning that it is possible to access all previous conversations for future reference)
    3. Very customizable
    4. No ugly IRC stuff filling up the chat view (I do not want to see who joined and who left, etc.)
    5. No private conversations possible (this is just not in the spirit of Open-org.com)
    6. A hosted solution with a reasonable price

    These criteria are so different from this question that this is not a duplicate question. The service which matches this the closest is Chatroll.com. However, at $199 per month their prices are outrageous.

    Read the article
