Search Results

Search found 21364 results on 855 pages for 'service bus'.


  • The bottlenecks of any computer, what to look for?

    - by WebDevHobo
    Whether it is a laptop or a desktop, any computer is made up of several pieces of hardware that communicate with each other, sending data back and forth to ensure that the user gets the desired results. I have seen some theoretical material on computers and hardware, but I wonder how it all comes together: CPU, RAM, graphics card, L1 cache, L2 cache, L3 cache, FSB... and all the other parts. Which is the biggest bottleneck? Why would a person not want or need a big value in one of those categories in certain situations? P.S.: when reading the specs of the i5 750 processor, I came across this description: "In place of the FSB, one or more high speed, point-to-point buses called Quick Path Interconnect (QPI) are used, formerly known as Common Serial Interconnect Bus or CSI. QPI features higher bandwidth than the traditional FSB and is better suited to system scaling." What is this, and how does it compare to the FSB? EDIT: I am not planning to buy a computer at all. The goal of this question is to understand the internal relations of the various hardware pieces, their specific functions, and how they work together. For instance, I have heard that a somewhat higher-than-usual amount of L2/L3 cache can help speed up your computer. What is behind that claim? Also, I forgot to mention hard-disk RPM.

    Read the article

  • @Autowire strange problem

    - by Javi
    Hello, I get strange behaviour when autowiring. I have code similar to this, and it works: @Controller public class Class1 { @Autowired private Class2 object2; ... } @Service @Transactional public class Class2 { ... } The problem is that I need Class2 to implement an interface, so I have only changed Class2; it now looks like: @Controller public class Class1 { @Autowired private Class2 object2; ... } @Service @Transactional public class Class2 implements IServiceReference<Class3, Long> { ... } public interface IServiceReference<T, PK extends Serializable> { public T reference(PK id); } With this code I get an org.springframework.beans.factory.NoSuchBeanDefinitionException: No matching bean of type for Class2. It seems that the @Transactional annotation is not compatible with the interface, because if I remove the @Transactional annotation or the "implements IServiceReference" the problem disappears and the bean is injected (though I need to have both in this class). It also happens if I put the @Transactional annotation on the methods instead of on the class. I use Spring 3.0.2 if this helps. Is the interface not compatible with the transactional annotation? Could it be a Spring bug? Thanks

    Read the article

  • Yahoo Account Has Been Closed

    - by VIVEK MISHRA
    My domain www.manumachu.com has been closed by Yahoo due to non-payment. I want to back-order it; is there any way to do that? This is the mail I received from Yahoo: This is an automated notice. Replies to this address will not be received. If you have questions, please contact Yahoo! Customer Care. For your protection, Yahoo! will never ask you to provide your billing information via email. Dear Ravi Reddy manumachu, This notice is to inform you that your Yahoo! GeoCities Pro account has been closed due to nonpayment. The Yahoo! ID associated with this account: ravi_manumachu The domain name for this account: manumachu.com For questions, please visit our online help center or call our toll-free number at (800) 318-0870 between 6 a.m. and 6 p.m. PT, Monday through Friday, excluding holidays. Best regards, The Yahoo! Billing team This is a service email related to your use of Yahoo! Small Business. To learn more about Yahoo!'s use of personal information, including the use of web beacons in HTML-based email, please read our privacy policy. Yahoo! is located at 701 First Avenue, Sunnyvale, CA 94089. Copyright Policy - Terms of Service - Additional Terms - Help

    Read the article

  • PCRE/PHP regex not matching last "item"

    - by superbarney
    Here's a regex that I've cobbled together: /(.*={76}\s)?\s*(.*?)\s\-\-\s(\d{2}\/\d{2}\-\d{2}\s\d{2}:\d{2})\s\s(.*?)\s(http:\/\/service.*?)\s(\-{76})/is and here's the text I will be parsing: http://p.linode.com/7015 and here's the replacement for the matched text: <item>\n\t<title>$2</title>\n\t<pubDate>$pubDate</pubDate>\n\t<description>$4</description>\n\t<link>$5</link>\n</item>\n\n I have almost come up with the regex needed to parse a block of text into RSS 2.0 XML markup. I've tested it with RegExr and RegexBuddy and it works perfectly except for the last "item", where there are no line breaks after the link (Line 269); in short, the problem is that the "iProperty" article in the text is not matched. Any regex gurus willing to help me troubleshoot what's wrong? I've tried /(.*={76}\s)?\s*(.*?)\s\-\-\s(\d{2}\/\d{2}\-\d{2}\s\d{2}:\d{2})\s\s(.*?)\s(http:\/\/service.*?)\s*(\-{76})/is, hoping the trailing newline would become optional, but that didn't work as I expected. Here's the output that I get: http://p.linode.com/7016 Thanks!

    Read the article

  • Setting my NIC to full duplex

    - by David
    I am trying to optimize the network speed of my Solaris x86 server, and have discovered that the Cisco 3548 it is connected to has issues with the NIC in my server. The NIC appears not to have been configured fully and is coming up at 100 half-duplex. The 3548 ports are all set to 100 full. Ideally I'd like to have the server set to 100 full, and have been attempting to configure it using ndd commands, but I have had no results. The following command reports: -bash-3.00# dladm show-dev rtls0 link: unknown speed: 100 Mbps duplex: unknown The NIC shows up as: pci bus 0x0001 cardnum 0x06 function 0x00: vendor 0x10ec device 0x8139 Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ which should be configurable. I have modified the configuration file from auto-config (5) to 100 fdx (4) to no avail. If there is no other choice, I could alter the Cisco 3548 to be 100 half-duplex; however, this solution causes a huge performance loss. Currently throughput is about 500 Kbps, when it should be around 40 Mbps.

    Read the article

  • Spring security with GAE

    - by xybrek
    I'm trying to implement Spring Security for my GAE application; however, I'm getting this error: No bean named 'springSecurityFilterChain' is defined I added this configuration to my application's web.xml: <filter> <filter-name>springSecurityFilterChain</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> </filter> <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> And in the servlet context: <!-- Configure security --> <security:http auto-config="true"> <security:intercept-url pattern="/**" access="ROLE_USER" /> </security:http> <security:authentication-manager alias="authenticationManager"> <security:authentication-provider> <security:user-service> <security:user name="jimi" password="jimi" authorities="ROLE_USER, ROLE_ADMIN" /> <security:user name="bob" password="bob" authorities="ROLE_USER" /> </security:user-service> </security:authentication-provider> </security:authentication-manager> What could be causing the error?

    Read the article

  • Testing a wide variety of computers with a small company

    - by Tom the Junglist
    Hello everyone, I work for a small dotcom which will soon be launching a reasonably complicated Windows program. We have uncovered a number of "WTF?" type scenarios as the program has been passed around to the various non-technical types, and we've been unable to replicate them. One of the biggest problems we're facing is testing: there are a total of three programmers -- only one working on this particular project, me -- no testers, and a handful of assorted other staff (sales, etc.). We are also geographically isolated. The "testing lab" consists of a handful of VMware and VPC images running sort-of fresh installs of Windows XP and Vista, which run on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to report problems most effectively, and the software itself sports a wide array of diagnostic features, but since they aren't computer nerds like us their reporting is only so useful, and arranging remote-control sessions to dig into the guts of their computers is time-consuming. I am looking for resources that allow us to amplify our testing abilities without having to put together an actual lab and hire beta testers. My boss mentioned rental VPS services and asked me to look into them; however, they are still largely self-service, and I was wondering if there were any better ways. How have you, or other companies in a similar situation, handled this sort of thing? EDIT: In the lingo, our goal here is to expand our systems-testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions of beefing up our unit/integration testing are going to help very much, as we are consistently hitting walls at the systems-testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2? Tom

    Read the article

  • Having trouble coming up with a good architecture for a client/server application

    - by rmw1985
    I am writing a remote backup service meant to support 1000+ users. It is going to use librsync to store reverse diffs (like rdiff-backup) and make data transfer efficient. My trouble is that I do not know the "best" way to implement the client/server model. I have thought of doing it the way rsync/rdiff-backup do, by having the client open an SSH connection, run a server executable, and communicate across pipes. Another alternative would be to write a server which would handle authentication and communicate with the client via SSL. The reason I have thought of this is that there is "state" information, like how many backup jobs are set up, etc., that must be maintained. Another alternative I have thought about is running a "web service" using Pylons or Django to handle the authentication, but I do not know how to bridge that to the "storage" side. Since I am using librsync, I cannot use "dumb" storage. Is there a way to pipe data through Pylons or Django to a server-side handler that would do the rsync calculation? This seems to me like maybe a dumb question, but I am sort of lost. Any tips or suggestions from more experienced developers would be extremely helpful.

    Read the article

  • Does a JAX-WS client distinguish between an empty collection and a null collection as a return value?

    - by snowflake
    Since JAX-WS relies on JAXB, and since I have looked at the code that unpacks the XML bean in the JAXB Reference Implementation, I guess the distinction is not made and that a JAX-WS client always returns an empty collection, even if the web service result was a null element: public T startPacking(BeanT bean, Accessor<BeanT, T> acc) throws AccessorException { T collection = acc.get(bean); if(collection==null) { collection = ClassFactory.create(implClass); if(!acc.isAdapted()) acc.set(bean,collection); } collection.clear(); return collection; } I agree that for best interoperability service contracts should be unambiguous and avoid such differences, but it seems that the JAX-WS service I'm invoking (hosted on a JBoss server with the JBossWS implementation) is returning, as expected, either null or an empty collection (tested with SoapUI). For my test I used code generated by wsimport. The return element is defined as: @XmlElement(name = "return", nillable = true) protected List<String> _return; I even tested changing the Response class getReturn method from: public List<String> getReturn() { if (_return == null) { _return = new ArrayList<String>(); } return this._return; } to public List<String> getReturn() { return this._return; } but without success. Any helpful information/comment regarding this problem is welcome!

    Read the article

  • WCF object parameter loses values

    - by Josh
    I'm passing an object to a WCF service and not getting anything back. I checked the variable as it gets passed to the method that actually does the work and noticed that none of the values are set on the object at that point. Here's the object: [DataContract] public class Section { [DataMember] public long SectionID { get; set; } [DataMember] public string Title { get; set; } [DataMember] public string Text { get; set; } [DataMember] public int Order { get; set; } } Here's the service code for the method: [OperationContract] public List<Section> LoadAllSections(Section s) { return SectionRepository.Instance().LoadAll(s); } The code that actually calls this method is this, located in the code-behind of a Silverlight XAML file: SectionServiceClient proxy = new SectionServiceClient(); proxy.LoadAllSectionsCompleted += new EventHandler<LoadAllSectionsCompletedEventArgs>(proxy_LoadAllSectionsCompleted); Section s = new Section(); s.SectionID = 4; proxy.LoadAllSectionsAsync(s); When the code finally gets into the method LoadAllSections(Section s), the parameter's SectionID is not set. I stepped through the code, and when it goes into the generated code that returns an IAsyncResult object, the object's properties are set. But when it actually calls the method, LoadAllSections, the parameter received is completely blank. Is there something I have to set to make the property values stick between method calls?
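    A minimal sketch of one way to rule out contract mismatches, assuming the client is using a proxy-generated copy of Section: pin an explicit namespace on the data and service contracts (the URI below is illustrative), and keep [OperationContract] on a [ServiceContract] interface so both sides serialize the type identically.

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // An explicit namespace makes the wire format unambiguous, so the
        // proxy-generated client type and the server type stay in sync.
        [DataContract(Namespace = "http://schemas.example.com/sections")]
        public class Section
        {
            [DataMember] public long SectionID { get; set; }
            [DataMember] public string Title { get; set; }
            [DataMember] public string Text { get; set; }
            [DataMember] public int Order { get; set; }
        }

        [ServiceContract(Namespace = "http://schemas.example.com/sections")]
        public interface ISectionService
        {
            [OperationContract]
            List<Section> LoadAllSections(Section s);
        }

    After a change like this, the service reference has to be regenerated so the Silverlight proxy picks up the same namespace.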

    Read the article

  • Gnome 3 gdm fails to start after preupgrade from fedora 14 to 15

    - by digital illusion
    I'm not able to boot Fedora 15 into runlevel 5. After all services start, when the login screen should appear, gdm just shows a waiting mouse cursor and keeps restarting itself. From /var/log/gdm/\:0-greeter.log: Gtk-Message: Failed to load module "pk-gtk-module" /usr/bin/gnome-session: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type /usr/libexec/gnome-setting-daemon: symbol lookup error: /usr/lib/gtk-3.0modules/libatk-bridge.so: undefined symbol: atk_plug_get_type Where should atk_plug_get_type be defined? Edit: Here is a better description of the error: (system-config-network-gui:2643): Gnome-WARNING **: Accessibility: failed to find module 'libgail-gnome' which is needed to make this application accessible /usr/bin/python: symbol lookup error: /usr/lib/gtk-2.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type Why are there still references to GTK 2? Did preupgrade fail? Attaching the upgrade log... it seems gdm was not added, but it is present in the users and groups list. May 26 11:25:52 sysimage sendmail[1076]: alias database /etc/aliases rebuilt by root May 26 11:25:52 sysimage sendmail[1076]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total May 26 11:46:09 sysimage useradd[1793]: failed adding user 'dbus', data deleted May 26 11:53:37 sysimage systemd-machine-id-setup[2443]: Initializing machine ID from D-Bus machine ID. May 26 11:55:28 sysimage useradd[2835]: failed adding user 'apache', data deleted May 26 11:55:38 sysimage useradd[2842]: failed adding user 'haldaemon', data deleted May 26 11:55:43 sysimage useradd[2848]: failed adding user 'smolt', data deleted May 26 11:57:32 sysimage sendmail[3032]: alias database /etc/aliases rebuilt by root May 26 11:57:32 sysimage sendmail[3032]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total May 26 11:57:46 sysimage groupadd[3066]: group added to /etc/group: name=cgred, GID=482 May 26 11:57:47 sysimage groupadd[3066]: group added to /etc/gshadow: name=cgred May 26 11:57:47 sysimage groupadd[3066]: new group: name=cgred, GID=482 May 26 11:58:42 sysimage useradd[3086]: failed adding user 'ntp', data deleted May 26 12:00:13 sysimage dbus: avc: received policyload notice (seqno=2) May 26 12:15:08 sysimage useradd[4950]: failed adding user 'gdm', data deleted May 26 12:24:39 sysimage dbus: avc: received policyload notice (seqno=3) May 26 12:25:24 sysimage useradd[5522]: failed adding user 'mysql', data deleted May 26 12:25:37 sysimage useradd[5533]: failed adding user 'rpcuser', data deleted May 26 12:26:31 sysimage useradd[5592]: failed adding user 'tcpdump', data deleted Any suggestions before I revert the installation to F14?

    Read the article

  • Extend RAID 1 (HP SmartArray P410i) running Linux

    - by Oliver
    I took over a fairly simple server setup with the following RAID 1 config running Ubuntu 11.10 (Kernel 3.0.0-12-server x86_64): => ctrl all show config Smart Array P410i in Slot 0 (Embedded) (sn: removed) array A (SAS, Unused Space: 1335535 MB) logicaldrive 1 (279.4 GB, RAID 1, OK) physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 1 TB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 1 TB, OK) Initially there were two 300GB disks that got replaced by 1TB disks, and I now have to extend the logical volume to use that extra space. However, when trying to do so I get the following warning: => ctrl slot=0 ld 1 modify size=max Warning: Extension may not be supported on certain operating systems. Performing extension on these operating systems can cause data to become inaccessible. See ACU documentation for details. Continue? (y/n) Is it safe to say yes, or am I at risk of corrupting the file system / losing data? Rearranging and extending the file system afterwards shouldn't be an issue, as I can take the server offline and boot from a GParted live disk. Here's the config of the RAID controller in use: => ctrl all show detail Smart Array P410i in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Serial Number: removed RAID 6 (ADG) Status: Disabled Controller Status: OK Hardware Revision: Rev C Firmware Version: 5.12 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Scan Mode: Idle Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 0 secs Cache Board Present: False Drive Write Cache: Disabled SATA NCQ Supported: True And the partition table: Number Start End Size Type File system Flags 1 1049kB 274GB 274GB primary ext4 boot 2 274GB 300GB 25.8GB extended 5 274GB 300GB 25.8GB logical linux-swap(v1)

    Read the article

  • How to get HTTP status code in HTTPService fault handler

    - by Ankur
    I am calling a server method through HTTPService from the client side. The server is a RESTful web service and it might respond with one of many HTTP error codes (say, 400 for one error, 404 for another and 409 for yet another). I have been trying to find a way to determine the exact error code sent by the server. I have walked the entire object tree of the FaultEvent populated in my fault handler, but nowhere does it tell me the error code. Is this missing functionality in Flex? My code looks like this: The HTTPService declaration: <mx:HTTPService id="myServerCall" url="myService" method="GET" resultFormat="e4x" result="myServerCallCallBack(event)" fault="faultHandler(event)"> <mx:request> <action>myServerCall</action> <docId>{m_sDocId}</docId> </mx:request> </mx:HTTPService> My fault handler code is like so: private function faultHandler(event : FaultEvent):void { Alert.show(event.statusCode.toString() + " / " + event.fault.message.toString()); }

    Read the article

  • Executing remote script - Architecture

    - by Bruno Lee
    Hi, I want to make an application that executes a remote script. The user can create a script (probably a Lua script) and store it on the server, and then use an API to execute it. I was thinking the API could be a web service. So my questions are: I need high performance to execute the script, so my first choice was a Lua script. Does someone have another suggestion? Because I need high performance, I am also wondering whether a web service is the best solution; maybe I could create a TCP/IP Windows service that handles the users' requests. It is important to say that I will have many users executing scripts at the same time, so I will have concurrency to deal with. My scripts will query a database. I will use Tokyo Cabinet or Tokyo Tyrant; I think Tokyo Tyrant is the only solution because I will have many requests. For performance, do I need connection pooling? Is there any way to share variables between web service requests? To build the web service or the Windows service I was thinking of using C++. Can someone help with these questions? Thanks

    Read the article

  • Implementing WS-Security within a WCF proxy

    - by harrisonmeister
    Hi, I have imported an Axis-based WSDL into a VS 2008 project as a service reference. I need to be able to pass security details such as username/password and nonce values to call the Axis-based service. I have looked into doing it with WSE, which I understand the world hates (no issues there). I have very little experience with WCF, but I have now worked out how to physically call the endpoint, thanks to SO; however, I have no idea how to set up the SOAP headers as the schema below shows: <S:Envelope xmlns:S="http://www.w3.org/2001/12/soap-envelope" xmlns:ws="http://schemas.xmlsoap.org/ws/2002/04/secext"> <S:Header> <ws:Security> <ws:UsernameToken> <ws:Username>aarons</ws:Username> <ws:Password>snoraa</ws:Password> </ws:UsernameToken> </ws:Security> ••• </S:Header> ••• </S:Envelope> Any help much appreciated Thanks, Mark
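    A minimal sketch of supplying the username/password from a WCF client, assuming the Axis service accepts a plain-text UsernameToken over HTTPS; WCF builds the WS-Security header from ClientCredentials, so nothing is written by hand (a digest/nonce token is not covered here, and AxisServiceClient stands in for the generated proxy name).

        using System.ServiceModel;

        public static class SecureClientFactory
        {
            // Builds a client whose requests carry a WS-Security UsernameToken.
            public static AxisServiceClient Create(string url, string user, string password)
            {
                var binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportWithMessageCredential);
                binding.Security.Message.ClientCredentialType = BasicHttpMessageCredentialType.UserName;

                var client = new AxisServiceClient(binding, new EndpointAddress(url));
                client.ClientCredentials.UserName.UserName = user;
                client.ClientCredentials.UserName.Password = password;
                return client;
            }
        }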

    Read the article

  • WCF, IIS6.0 (413) Request Entity Too Large.

    - by Andrew Kalashnikov
    Hello, guys. I've got an annoying problem. I've got a WCF service (basicHttpBinding with Transport security over HTTPS). This service implements a contract which consists of two methods: LoadData and GetData. GetData works OK: my client receives a package of ~2 MB without problems, and everything works correctly. But when I try to load data via bool LoadData(Stream data); (the method signature), I get (413) Request Entity Too Large. Stack Trace: Server stack trace: ? ServiceModel.Channels.HttpChannelUtilities.ValidateRequestReplyResponse(HttpWebRequest request, HttpWebResponse response, HttpChannelFactory factory, WebException responseException, ChannelBinding channelBinding) System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout) System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout) System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout) I tried this: http://blogs.msdn.com/jiruss/archive/2007/04/13/http-413-request-entity-too-large-can-t-upload-large-files-using-iis6.aspx, but it doesn't work! My server is 2003 with IIS 6.0. Please help.
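    A minimal sketch of the server-side binding settings that commonly clear a 413 for large uploads, assuming the quotas are set in code rather than config; the sizes are illustrative. On IIS 6.0 with SSL, the uploadReadAheadSize metabase setting discussed in the linked article may also need raising.

        using System.ServiceModel;

        public static class UploadBindingFactory
        {
            // Raise the quotas that usually trigger (413) Request Entity Too Large
            // and stream the request body instead of buffering it.
            public static BasicHttpBinding Create()
            {
                var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport)
                {
                    MaxReceivedMessageSize = 100L * 1024 * 1024, // 100 MB, illustrative
                    TransferMode = TransferMode.StreamedRequest
                };
                binding.ReaderQuotas.MaxArrayLength = 100 * 1024 * 1024;
                return binding;
            }
        }

    The client-side binding generally needs matching TransferMode and size settings.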

    Read the article

  • FilterSecurityInterceptor and SecurityMetadataSource implementation in Spring Security

    - by Mike
    Hi! I created a class that implements the FilterInvocationSecurityMetadataSource interface. I implemented it like this: public List<ConfigAttribute> getAttributes(Object object) { FilterInvocation fi = (FilterInvocation) object; Object principal = SecurityContextHolder.getContext().getAuthentication().getPrincipal(); Long companyId = ((ExtenededUser) principal).getCompany().getId(); String url = fi.getRequestUrl(); // String httpMethod = fi.getRequest().getMethod(); List<ConfigAttribute> attributes = new ArrayList<ConfigAttribute>(); FilterSecurityService service = (FilterSecurityService) SpringBeanFinder.findBean("filterSecurityService"); Collection<Role> roles = service.getRoles(companyId); for (Role role : roles) { for (View view : role.getViews()) { if (view.getUrl().equalsIgnoreCase(url)) attributes.add(new SecurityConfig(role.getName() + "_" + role.getCompany().getName())); } } return attributes; } when I debug my application I see it reaches this class, it only reaches getAllConfigAttributes method, that is empty as I said, and return null. after that it prints this warning: Could not validate configuration attributes as the SecurityMetadataSource did not return any attributes from getAllConfigAttributes(). My aplicationContext- security is like this: <beans:bean id="filterChainProxy" class="org.springframework.security.web.FilterChainProxy"> <filter-chain-map path-type="ant"> <filter-chain filters="sif,filterSecurityInterceptor" pattern="/**" /> </filter-chain-map> </beans:bean> <beans:bean id="filterSecurityInterceptor" class="org.springframework.security.web.access.intercept.FilterSecurityInterceptor"> <beans:property name="authenticationManager" ref="authenticationManager" /> <beans:property name="accessDecisionManager" ref="accessDecisionManager" /> <beans:property name="securityMetadataSource" ref="filterSecurityMetadataSource" /> </beans:bean> <beans:bean id="filterSecurityMetadataSource" class="com.mycompany.filter.FilterSecurityMetadataSource"> </beans:bean> what could be the problem?

    Read the article

  • WCF and streaming requests and responses

    - by Cheeso
    Is it correct that in WCF, I cannot have a service write to a stream that is received by the client? My understanding is that streaming is supported in WCF for requests, responses, or both. Is it true that in all cases, the receiver of the stream must invoke Read? I would like to support a scenario where the receiver of the stream can Write on it. Is this supported? Let me show it this way. The simplest example of streaming in WCF is the service returning a FileStream to a client. This is a streamed response. The server code is like this: [ServiceContract] public interface IStreamService { [OperationContract] Stream GetData(string fileName); } public class StreamService : IStreamService { public Stream GetData(string filename) { FileStream fs = new FileStream(filename, FileMode.Open); return fs; } } And the client code is like this: StreamDemo.StreamServiceClient client = new WcfStreamDemoClient.StreamDemo.StreamServiceClient(); Stream str = client.GetData(@"c:\path\to\myfile.dat"); do { b = str.ReadByte(); //read next byte from stream ... } while (b != -1); (example taken from http://blog.joachim.at/?p=33) Clear, right? The server returns the Stream to the client, and the client invokes Read on it. Is it possible for the client to provide a Stream, and the server to invoke Write on it? In other words, rather than a pull model - where the client pulls data from the server - it is a push model, where the client provides the "sink" stream and the server writes into it. Is this possible in WCF, and if so, how? What are the config settings required for the binding, interface, etc.? The analogy is the Response.OutputStream from an ASP.NET request. In ASP.NET, any page can invoke Write on the output stream, and the content is received by the client. Can I do something similar in WCF? Thanks.
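    For comparison, a minimal sketch of how the pull model above is usually adapted when the client owns the sink: the service still returns a Stream, and the client copies it into whatever stream it likes, which has the same net effect as the server writing to it (the proxy name is taken from the excerpt; everything else is illustrative).

        using System.IO;

        public static class StreamClientDemo
        {
            // Copies the streamed response into a client-provided sink stream.
            public static void DownloadTo(Stream sink, string fileName)
            {
                var client = new StreamDemo.StreamServiceClient(); // generated proxy from the excerpt
                using (Stream source = client.GetData(fileName))
                {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                        sink.Write(buffer, 0, read);
                }
            }
        }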

    Read the article

  • Testing fault tolerant code

    - by Robert
    I’m currently working on a server application where we have agreed to try to maintain a certain level of service. The level of service we want to guarantee is: if a request is accepted by the server and the server sends an acknowledgement to the client, we want to guarantee that the request will happen, even if the server crashes. As requests can be long running and the acknowledgement time needs to be short, we implement this by persisting the request, then sending an acknowledgement to the client, then carrying out the various actions to fulfill the request. As actions are carried out they too are persisted, so the server knows the state of a request on start-up, and there are also various reconciliation mechanisms with external systems to check the accuracy of our logs. This all seems to work fairly well, but we have difficulty saying so with any conviction, as we find it very difficult to test our fault-tolerant code. So far we’ve come up with two strategies, but neither is entirely satisfactory: have an external process watch the server process and try to kill it at what the external process thinks is an appropriate point in the test, or add code to the application that will cause it to crash at certain known critical points. My problem with the first strategy is that the external process cannot know the exact state of the application, so we cannot be sure we’re hitting the most problematic points in the code. My problem with the second strategy, although it gives more control over where the fault takes place, is that I do not like having fault-injection code within my application, even with optional compilation etc. I fear it would be too easy to overlook a fault-injection point and have it slip into a production environment.
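    A minimal sketch of the second strategy with the leakage risk reduced, assuming a test-only compilation symbol: the crash points compile away entirely unless FAULT_INJECTION is defined, and even then fire only when an environment variable names the active point (all names are illustrative).

        using System;
        using System.Diagnostics;

        public static class FaultInjector
        {
            // Calls to CrashPoint are stripped by the compiler unless the
            // FAULT_INJECTION symbol is defined for the test build.
            [Conditional("FAULT_INJECTION")]
            public static void CrashPoint(string name)
            {
                if (Environment.GetEnvironmentVariable("CRASH_AT") == name)
                    Environment.FailFast("Injected fault at: " + name);
            }
        }

        // Usage inside the request pipeline (illustrative):
        //   Persist(request);
        //   FaultInjector.CrashPoint("after-persist-before-ack");
        //   SendAck(client);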

    Read the article

  • VLC tv card streaming

    - by Franco
    I'm trying to stream the output of my desktop's TV card to my laptop using VLC, without success. I have Arch Linux installed on both PCs. I'm stuck here: $ cvlc v4l2:///dev/video0:norm=pal-nc:frequency=543250:size=640x480:channel=0:input-slave=alsa:///dev/dsp:audio=0 --sout '#transcode{vcodec=mp4v,acodec=mpga,vb=3000,ab=256,vt=800000,keyint=80,deinterlace}:standard{access=http,mux=ogg,dst=192.168.0.2:8080}' --ttl 12 VLC media player 1.1.4 The Luggage (revision exported) Blocked: call to unsetenv("DBUS_ACTIVATION_ADDRESS") Blocked: call to unsetenv("DBUS_ACTIVATION_BUS_TYPE") [0x1c9e480] inhibit interface error: Failed to connect to the D-Bus session daemon: /usr/bin/dbus-launch terminated abnormally with the following error: Autolaunch error: X11 initialization failed. [0x1c9e480] main interface error: no suitable interface module [0x1ca1500] main interface error: no suitable interface module [0x1bb3120] main libvlc error: interface "globalhotkeys,none" initialization failed [0x1c9f940] dummy interface: using the dummy interface module... [0x1ca4850] main access out: creating httpd [0x1ebb340] mux_ogg mux: Open And on my laptop: $ vlc http://192.168.0.2:8080 VLC media player 1.1.4.1 The Luggage (revision exported) Blocked: call to unsetenv("DBUS_ACTIVATION_ADDRESS") Blocked: call to unsetenv("DBUS_ACTIVATION_BUS_TYPE") Blocked: call to setlocale(6, "") Blocked: call to sigaction(17, 0xb25c7058, 0xb25c70e4) Warning: call to signal(13, 0x1) Warning: call to signal(13, 0x1) Blocked: call to setenv("ORBIT_SOCKETDIR", "/tmp/orbit-zf", 1) Warning: call to srand(1287690122) Warning: call to rand() Blocked: call to setlocale(6, "") (process:17933): Gtk-WARNING **: Locale not supported by C library. Using the fallback 'C' locale. Warning: call to signal(13, 0x1) Blocked: call to setlocale(6, "") [0x8af5f04] main stream error: cannot pre fill buffer Any idea why this isn't working?

    Read the article

  • Convert C# Silverlight App to the Azure Cloud Platform?

    - by Goober
    The Scenario I've been following Brad Abrams Silverlight tutorial on his blog.... I have tried following Brads "How to deploy your app to the Cloud" tutorial however i'm struggling with it, even though it is in the same context as the first tutorial.... The Question Is the application structure essentially the same as the original "non-cloud based version"!? If not, which parts are different? (I get that there is a Cloud Service project added to the solution) - but what else?! Connection String Issue In my "Non-Cloud based application", I make use of the ADO.Net Entity Framework to communicate with my database. The connection string in my web.config file looks like: <add name="InmZenEntities" connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=CHASEDIGITALWS3;Initial Catalog=InmarsatZenith;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /></connectionStrings> However However the connection string that I get from SQL AZURE looks like: Server=tcp:k12ioy1rsi.ctp.database.windows.net;Database=master;User ID=simongilbert;Password=myPassword;Trusted_Connection=False; So how do I go about merging the two when I move the "non-cloud based application" to THE CLOUD?! Any help regarding converting a silverlight application to a cloud service and deploying it would be greatly appreciated

    Read the article

  • Unable to use .xjb file inside wsdlc ant task

    - by Govind
    Hi, I have a requirement of customize the default conversion provided by JAXB. For the xs:date type we need to show only the date part(removing the time). I have created an .xjb file and used the xjc command to generate the required classes. This is working perfectly and I got the desired results. Since in our project we create the web service jars using ant I tried to include it inside the wsdlc ant task I get the error as: dateFormatter.xjb is not a xsd config file. <target name="generate-service-from-wsdl" depends="validate-weblogic, clean"> <taskdef name="wsdlc" classname="weblogic.wsee.tools.anttasks.WsdlcTask" /> <wsdlc srcWsdl="${sourceWsdl}/My_Gateway.wsdl" verbose="on" destJwsDir="${targetDir}" destImplDir="${targetDir}/impl" packageName="${servicePackage}" > <xsdConfig dir="wsdls/xjb" includes="dateFormatter.xjb"/> </wsdlc> </target> I am using Weblogic 9.2 and tried the using Weblogic 10.3 jar using the binding tag instead of xsdConfig. But I get the same error. Please let me know where am I making the mistake and how to correct it. Thanks, Govind.

    Read the article

  • What is the prefered or accepted method for testing proxy settings?

    - by Mike Webb
    I have a lot of trouble with the internet connectivity in the program I am working on, and it all seems to stem from some issue with the proxy settings. Most of the issues at this point are fixed, but the issue I am having now is that my method of testing the proxy settings makes some users wait for long periods of time. Here is what I do: System.Net.WebClient webClnt = new System.Net.WebClient(); webClnt.Proxy = proxy; webClnt.Credentials = proxy.Credentials; byte[] tempBytes; try { tempBytes = webClnt.DownloadData(url.Address); } catch { //Invalid proxy settings //Code to handle the exception goes here } This is the only way that I've found to test whether the proxy settings are correct. I tried making a call to our web service, but no proxy settings are needed when making the call; it will work even if I have bogus proxy settings. The above method, though, has no timeout member that I can find to set, and I use DownloadData as opposed to DownloadDataAsync because I need to wait until the method is done so that I know whether the settings are correct before continuing on in the program. Any suggestions on a better method or a workaround for this method are appreciated. Mike
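    A minimal sketch of one common workaround, assuming the long waits come from the default request timeout: WebClient has no Timeout property of its own, but GetWebRequest can be overridden to set one on the underlying WebRequest (the class name and value used are illustrative).

        using System;
        using System.Net;

        public class TimedWebClient : WebClient
        {
            private readonly int timeoutMs;

            public TimedWebClient(int timeoutMs)
            {
                this.timeoutMs = timeoutMs;
            }

            // Applies the timeout to every request this client makes,
            // including the DownloadData call used for the proxy test.
            protected override WebRequest GetWebRequest(Uri address)
            {
                WebRequest request = base.GetWebRequest(address);
                if (request != null)
                    request.Timeout = timeoutMs;
                return request;
            }
        }

    Swapping this in for System.Net.WebClient in the snippet above keeps DownloadData synchronous but bounds how long a user with bad proxy settings has to wait.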

    Read the article

  • ASP.NET 2.0 and COM Port Communication

    - by theaviator
    Hello guys, I have a managed DLL which communicates with devices attached to COM/serial ports. A desktop WinForms application sends requests on the ports and receives/stores data in memory; in the WinForms app I have added a reference to the DLL and use its methods, and this works well. Now there is a situation where I need to show this data from the serial/COM port on a web page, and users should also be able to send requests to the ports using this DLL. I have made a web app in ASP.NET 2.0 and added a reference to the DLL. I am able to use it: the DLL communicates on the COM port upon a button click on the web page, and the response is shown on the page. However, I strongly feel that this is a bad approach, and the development server crashes after 3-4 requests. What is the best approach in this scenario? If I use a Windows service, how would my ASP.NET app communicate with the Windows service? Or can this be done easily using WCF? I have not used WCF before, nor any .NET Remoting techniques. Please suggest the best architecture for this scenario. Thank you.
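    A minimal sketch of the ASP.NET-to-Windows-service hop over WCF, assuming the Windows service owns the COM ports and exposes them on a named pipe; all names and the address are illustrative.

        using System;
        using System.ServiceModel;

        // Contract shared by the Windows service (host) and the ASP.NET app (client).
        [ServiceContract]
        public interface IComPortService
        {
            [OperationContract]
            string SendRequest(string portName, string request);
        }

        public static class ComPortClientDemo
        {
            // Called from the ASP.NET side; the Windows service hosts the same
            // contract with a ServiceHost + NetNamedPipeBinding at this address.
            public static string Query(string portName, string request)
            {
                var factory = new ChannelFactory<IComPortService>(
                    new NetNamedPipeBinding(),
                    new EndpointAddress("net.pipe://localhost/comports"));
                IComPortService channel = factory.CreateChannel();
                try
                {
                    return channel.SendRequest(portName, request);
                }
                finally
                {
                    ((IClientChannel)channel).Close();
                }
            }
        }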

    Read the article

  • Visual Studio 2008: Can't connect to known good TFS 2010 beta 2

    - by p.campbell
    A freshly installed TFS 2010 Beta 2 is at http://serverX:8080/tfs. The client is a Windows 7 developer machine with VS 2008 Pro SP1 and VS 2008 Team Explorer (no SP). The TFS 2008 Service Pack 1 didn't work for me - "None of the products that are addressed by this software update are installed on this computer." The developer machine is able to browse the TFS site at the above URL. The issue is with trying to add the TFS server in the Team Explorer window in Visual Studio 2008. Here's the error from the screenshot: unable to connect to this Team Foundation Server. Possible reasons for failure include: The Team Foundation Server name, port number or protocol is incorrect. The Team Foundation Server is offline. Password is expired or incorrect. The TFS server is up and running properly, firewall ports are open, and the site is accessible via the browser on the dev machine. Question: how can you connect from VS 2008 Pro to a TFS 2010 Beta 2 server? Resolution: Here's how I solved this problem: installed VS 2008 Team Explorer as above; re-installed VS 2008 Service Pack 1; when adding a TFS server to Team Explorer, you MUST specify the URL as such: http://[tfsserver]:[port]/[vdir]/[projectCollection] - in my case, it was http://serverX:8080/tfs/AppDev-TestProject. You cannot simply add the TFS server name and have VS look for all project collections on the server; TFS 2010 has a new URL (by default) and VS 2008 doesn't know how to gather that list.

    Read the article
