Search Results

Search found 10615 results on 425 pages for 'resources sharing'.

  • Unable to access network resources through VPN

    - by fbueckert
    I'm currently attempting to connect one of our computers in the office to a client VPN. My development machine is running Windows 7, and can connect and see resources just fine. The problem computer is running Windows XP. They're both within the same network.

    Using the same credentials on both computers, the VPN connection (using the built-in Windows network connections) works just fine. So far, so good. An IP address is assigned, and comparing both machines shows they're still in the same subnet. The problem is that the XP machine cannot see ANY of the computers in the client network.

    I tried a tracert to a target machine on the Windows 7 box, and the first item that comes up is the .0 address. Pinging it gives responses. Trying it on the Windows XP machine, however, comes up with just timeouts. Trying to trace to www.google.com allows the address to resolve (probably part of the cached resolutions), but results in just timeouts.

    I double-checked to make sure that the Windows firewall was not on; trying to open the settings brings up a notification that the firewall service wasn't running, which leads me to believe that it's definitely not on.

    From my best guess, I've managed to connect the XP machine to a black hole of some sort. There's obviously something strange going on, but I'm not sure where I should be looking.
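    One way to compare reachability from the two machines side by side is a small ping sweep run on each. The sketch below is only an illustration of that idea (the 10.0.0.x subnet and host range are placeholders, not addresses from the question); run the same binary on the Windows 7 and XP boxes and compare which hosts answer.

        // Minimal reachability check: ping a range of addresses and report which reply.
        using System;
        using System.Net.NetworkInformation;

        class PingSweep
        {
            static void Main()
            {
                using (var ping = new Ping())
                {
                    for (int host = 1; host <= 20; host++)           // hypothetical host range
                    {
                        string address = "10.0.0." + host;           // placeholder client subnet
                        PingReply reply = ping.Send(address, 1000);  // 1 second timeout
                        Console.WriteLine("{0}: {1}", address, reply.Status);
                    }
                }
            }
        }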

    Read the article

  • iPad video input

    - by euphoria83
    Is it possible to send video to an iPad so that it can be used either for screen sharing, as an extra screen, or to watch videos that are being played on another machine, say my Mac? Thanks.

    Read the article

  • Virtual Network Interface and NAT disables localhost access for MySQL and Apache

    - by Interarticle
    I'm running an Ubuntu Server 12.04, and recently I configured it to do NAT for my laptop. Since the server has only one NIC, I followed instructions online to create a virtual network device (eth0:0) that has a LAN IP address, then further configured iptables and UFW to allow internet sharing.

    However, just a few days ago, I discovered that one of the PHP pages hosted on the server failed for no apparent reason. A little digging revealed that the MySQL server had started refusing connections from localhost. The same happened with a page (PhpMyAdmin) that was configured to be accessible only from localhost (in Apache2). The error, as shown by

        $ mysql --protocol=tcp -u root -p

    looks like:

        ERROR 1130 (HY000): Host '<host name of eth0>' is not allowed to connect to this MySQL server

    However, the funny thing is, I configured the MySQL server to allow root access from localhost (only). Moreover, the MySQL server listens only on 127.0.0.1:3306, as shown by:

        sudo netstat -npa | head
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1029/mysqld

    which means that the connection could only have come from 127.0.0.1. (Note that MySQL is working, because I can still connect to it via Unix domain sockets.)

    In effect, it seems that all TCP connections originating from 127.0.0.1 to 127.0.0.1 appear to any local daemon to come from the eth0 IP address. Indeed, Apache2 allowed me to access PhpMyAdmin after I added allow <eth0 IP address>.

    The following are my network configurations (redacted):

    /etc/hosts:

        127.0.0.1 localhost
        211.x.x.x <host name of eth0> <server name>
        #IPv6 Defaults follows ....

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 211.x.x.x
            netmask 255.255.255.0
            gateway 211.x.x.x
            dns-nameservers 8.8.8.8
            # dns-* options are implemented by the resolvconf package, if installed
            dns-search xxxxxxx.com
            hwaddress ether xx:xx:xx:xx:xx:xx

        auto eth0:0
        iface eth0:0 inet static
            address 192.168.57.254
            netmask 255.255.254.0
            broadcast 192.168.57.255
            network 192.168.57.0

    /etc/ufw/sysctl.conf:

        # Uncommented the following lines
        net/ipv4/ip_forward=1
        net/ipv6/conf/default/forwarding=1

    /etc/default/ufw:

        DEFAULT_FORWARD_POLICY="ACCEPT"   # Changed DROP to ACCEPT

    /etc/init/internet-sharing.conf (upstart script I wrote), section pre-start script:

        iptables -A FORWARD -o eth0 -i eth0:0 -s 192.168.57.22 -m conntrack --ctstate NEW -j ACCEPT
        iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        iptables -A POSTROUTING -t nat -j MASQUERADE

    Note again that my problem here is that programs cannot access localhost TCP services from the server itself, and that access is blocked because the services have access control allowing only 127.0.0.1. I have no problem connecting (as in TCP connections) to services via TCP, even if the services listen only on 127.0.0.1. I do NOT want to connect to the services from another computer.

    Read the article

  • Splitting an internet connection between multiple separate subnetworks

    - by pythonian4000
    Problem

    I have an internet connection that I want to split between four separate networks. My requirements are:

    - I need to be able to monitor the amount of bandwidth and data being used by each network, and notify or control as necessary.
    - The four networks should only be able to connect to the internet, not to each other.
    - My parents need to be able to operate it, so it needs a simple, preferably Windows-based GUI.

    Progress so far

    Server: I have a mini-ITX server with six Gigabit ethernet ports - one for the ethernet internet connection, one for each of the four networks, and one for remote access to the server for administration.

    Bandwidth control: I spent a long time researching solutions here. The majority of the control systems/software I found could control bandwidth usage via QoS, but could not monitor or control the amount of data being used. Eventually I found the SoftPerfect Bandwidth Manager, which has everything I need in terms of monitoring and control - per-interface quota management, usage statistics, a web interface for checking usage, and email notifications when quotas are exceeded. It is also Windows-based and has a simple GUI.

    Internet sharing: This is where I am having issues. I am currently using Windows XP Pro SP2 for the server (yes, I know this is far from ideal, but it's the only spare Windows OS I currently have). I can't use the built-in Internet Connection Sharing for several reasons:

    - The upstream internet router has an IP of 192.168.0.1, which ICS clashes with, and I cannot change the router settings.
    - ICS can only share an internet connection with a single interface, but I have four.

    I have tried bridging the four network cards, but then the Bandwidth Manager cannot see the four individual interfaces - it only sees the bridge. I have tried setting up Dual DHCP DNS Server (and am having issues getting DHCP offers to be received by clients), but that would still require gateway software of some sort, which I have been unable to find.

    My current attempt is to use OpenVPN, with a server for the internet NIC and a separate client for each of the four networks. My thought is that I could bridge the OpenVPN TAP devices to each NIC, meaning that the Bandwidth Manager would control traffic from the bridge instead of the interface. I have not made much progress here though - I've never used OpenVPN before.

    Questions

    - Is there a Windows software package that does everything I need? (Unlikely, I know.)
    - Is there a Windows software package that will share internet between multiple NICs without bridging?
    - Are either of my above attempts feasible?
    - Would it help to have a newer/server version of Windows?
    - Is there a non-Windows alternative that is easy to use?

    Read the article

  • Sync my files across multiple computers

    - by EnderMB
    I do a lot of work on my home computer, ranging from programming and writing stored procedures to writing documentation and reports. A lot of this work is university related, and constantly swapping files across several computers is annoying at best. I have a large final-year project coming up and I'm going to be sharing this work between home and university, so I require some kind of online storage that provides version control for my programs, as well as my Word documents, PDFs and saved academic papers. Are there any good solutions for my problem?

    Read the article

  • Improve file transfer speed between Windows PCs and servers

    - by Geotarget
    I've set up a server which I've connected to multiple PCs in my workplace. Sadly, data transfer speeds top out at about 3 MB/sec per connection, which works out slow for file transfers, especially when transferring large files. I'm using Windows file sharing; the server is running Windows Server 2008 (2 GHz CPU, 1 GB RAM) and the client PCs are mostly running Windows 7. How can I detect bottlenecks in my network and improve file sharing speed within the network?
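    One way to separate "the share is slow" from "the disks or the protocol are slow" is to measure raw copy throughput from a client. The C# sketch below is only a rough probe (the UNC path and test-file size are placeholders); comparing its MB/sec figure against what a plain large-file copy achieves gives a hint where the time is going.

        // Write a test file to the share and report effective throughput in MB/sec.
        using System;
        using System.Diagnostics;
        using System.IO;

        class ShareThroughputTest
        {
            static void Main()
            {
                const string target = @"\\server\share\speedtest.bin"; // placeholder UNC path
                const int blockSize = 1024 * 1024;                     // 1 MB blocks
                const int blockCount = 256;                            // 256 MB test file

                byte[] block = new byte[blockSize];
                new Random().NextBytes(block);

                var watch = Stopwatch.StartNew();
                using (var stream = new FileStream(target, FileMode.Create, FileAccess.Write))
                {
                    for (int i = 0; i < blockCount; i++)
                        stream.Write(block, 0, block.Length);
                }
                watch.Stop();

                double megabytes = (double)blockSize * blockCount / (1024 * 1024);
                Console.WriteLine("Wrote {0} MB in {1:F1} s = {2:F1} MB/sec",
                    megabytes, watch.Elapsed.TotalSeconds, megabytes / watch.Elapsed.TotalSeconds);

                File.Delete(target); // clean up the test file
            }
        }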

    Read the article

  • Unable to share a folder between Windows 7 and Ubuntu (running in VMWare)

    - by darthvader
    I have installed VMware Tools in Ubuntu (the guest OS). I tried to share a location from the settings of the virtual machine, but when I click OK, the following error is thrown on the host (Windows 7) OS: "Unable to update run-time folder sharing status: Unknown error." The location is not showing up in /mnt/. What could be the reason? P.S. I have the vmhgfs process running in my Ubuntu VM. I was following this method.

    Read the article

  • Using Dropbox as a cloud-based file share - does everyone need an upgraded account?

    - by aSkywalker
    We have a file share in our small office (3-5 users). We now need the files to be accessible outside of the office. I like the idea of Dropbox - we have been using it for small remote sharing. If we buy the upgraded account and move 30 to 70 GB of files to it, will every user have to have the Pro account? I have submitted this question to Dropbox, but thought that the advice of users here would also be valuable.

    Read the article

  • Is it necessary to share every underlying folder in a Dropbox shared folder?

    - by ErnstvdS
    I have one Dropbox (I suppose) shared between my business account / PC, my wife's account / PC running Windows XP, and a laptop with Windows 7. I created a folder and shared it with both (or all three) accounts. I then created an underlying folder (no need to share it, says the help), but it is not visible on the other PCs, so I've shared it with both accounts as well. Is this sharing necessary for every new subfolder?

    Read the article

  • How to troubleshoot port forwarding on Windows 7 (64 Bit) with ICS enabled?

    - by LearnCocos2D
    I want to forward some ports (1666 for Perforce, 8081 for Hudson) on my internet gateway machine. This machine is running Windows 7 (64-bit, legal, user account) and is connected to the internet via cable modem (it's not a router). The Windows machine is sharing its internet connection via ICS, and that works fine on all connected computers. I can access the services via the gateway's public IP (95.x.x.x) on the given ports if they are running on the gateway machine itself.

    I've added the ports and destination IP address (192.168.0.18) in the internet network adapter's Advanced Settings dialog (Sharing tab). That's the same dialog where you have a list of preconfigured services like HTTP, FTP and other incoming services. When I do that, I can't connect to the services anymore. For some reason port forwarding isn't working.

    I have uninstalled Bitdefender because I wanted to check if its firewall interferes. I've also disabled the Windows Firewall and Defender, to no avail. I tried a freeware tool that helps to set up port forwarding, but that didn't work either.

    The target machine is a Mac OS X computer whose firewall is disabled. Its IP is static. I can successfully connect to the services using the local IP address (192.168.0.18) from two different machines, including the gateway computer. So internally and externally it seems to me that the ports are open and not blocked, and the issue is with port forwarding itself. From what I understand, it should be enough to add an entry to the Advanced Settings dialog to enable port forwarding when there are no firewalls interfering.

    How can I troubleshoot why port forwarding isn't working for me? What steps should I follow to alleviate the issue? PS: I gladly accept command line solutions.

    Other things I've tried:

    - adding an Inbound Rule to Windows Firewall for the 1666 and 8081 ports
    - trying with Windows Firewall enabled and disabled
    - disabling/enabling the network adapter
    - double-checking that the IP addresses are correct
    - mapping a different incoming port to the service's actual port
    - following or checking the misc tips in this article

    What I haven't dared trying yet (let me know if it's worth a shot):

    - disable/enable ICS
    - remove all network adapters (via Control Panel), then re-install and re-configure them
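    One way to narrow down where the connection dies is to probe the same port from the LAN and from outside and compare the results. Below is a minimal C# sketch of such a probe (the LAN address and port mirror the question; the public IP is a placeholder): a timeout from outside but success from the LAN points at the forwarding step rather than at the service itself.

        // Try to open a TCP connection to each endpoint and report success or failure.
        using System;
        using System.Net.Sockets;

        class PortProbe
        {
            static void Main()
            {
                Check("192.168.0.18", 1666);  // service reached directly on the LAN
                Check("95.0.0.1", 1666);      // placeholder for the gateway's public IP
            }

            static void Check(string host, int port)
            {
                using (var client = new TcpClient())
                {
                    try
                    {
                        var result = client.BeginConnect(host, port, null, null);
                        bool connected = result.AsyncWaitHandle.WaitOne(3000) && client.Connected;
                        Console.WriteLine("{0}:{1} -> {2}", host, port, connected ? "open" : "timeout/refused");
                    }
                    catch (SocketException ex)
                    {
                        Console.WriteLine("{0}:{1} -> {2}", host, port, ex.SocketErrorCode);
                    }
                }
            }
        }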

    Read the article

  • Can a Windows virus downloaded in Linux be transferred to Windows?

    - by user219048
    I know that Linux is mostly safe from viruses. However, if you do download a Windows virus (i.e., through a drive-by download), will it just sit there on your computer and take up space? Is it unable to infect files because of the different operating system? If you transfer files between computers (by using a USB flash drive or through online file sharing), is there any risk that the virus could be transferred to Windows and activate?

    Read the article

  • Corosync :: Restarting some resources after LAN connectivity issue

    - by moebius_eye
    I am currently looking into Corosync to build a two-node cluster. I've got it working fine, and it does what I want it to do, which is:

    - Lost connectivity between the two nodes gives the first node, '10node', both failover WAN IPs (aka resources WanCluster100 and WanCluster101).
    - '11node' does nothing. He "thinks" he still has his failover WAN IP (aka WanCluster101).

    But it doesn't do this:

    - '11node' should restart the WanCluster101 resource when the connectivity with the other node is back.

    This is to prevent a condition where node10 simply dies (and thus does not get 11node's failover WAN IP), resulting in a situation where none of the nodes have 10node's failover IP, because:

    - 10node is down
    - 11node has "given back" his failover WAN IP

    Here's the current configuration I'm working on:

        node 10sch \
            attributes standby="off"
        node 11sch \
            attributes standby="off"
        primitive LanCluster100 ocf:heartbeat:IPaddr2 \
            params ip="172.25.0.100" cidr_netmask="32" nic="eth3" \
            op monitor interval="10s" \
            meta is-managed="true" target-role="Started"
        primitive LanCluster101 ocf:heartbeat:IPaddr2 \
            params ip="172.25.0.101" cidr_netmask="32" nic="eth3" \
            op monitor interval="10s" \
            meta is-managed="true" target-role="Started"
        primitive Ping100 ocf:pacemaker:ping \
            params host_list="192.0.2.1" multiplier="500" dampen="15s" \
            op monitor interval="5s" \
            meta target-role="Started"
        primitive Ping101 ocf:pacemaker:ping \
            params host_list="192.0.2.1" multiplier="500" dampen="15s" \
            op monitor interval="5s" \
            meta target-role="Started"
        primitive WanCluster100 ocf:heartbeat:IPaddr2 \
            params ip="192.0.2.100" cidr_netmask="32" nic="eth2" \
            op monitor interval="10s" \
            meta target-role="Started"
        primitive WanCluster101 ocf:heartbeat:IPaddr2 \
            params ip="192.0.2.101" cidr_netmask="32" nic="eth2" \
            op monitor interval="10s" \
            meta target-role="Started"
        primitive Website0 ocf:heartbeat:apache \
            params configfile="/etc/apache2/apache2.conf" options="-DSSL" \
            operations $id="Website-one" \
            op start interval="0" timeout="40" \
            op stop interval="0" timeout="60" \
            op monitor interval="10" timeout="120" start-delay="0" statusurl="http://127.0.0.1/server-status/" \
            meta target-role="Started"
        primitive Website1 ocf:heartbeat:apache \
            params configfile="/etc/apache2/apache2.conf.1" options="-DSSL" \
            operations $id="Website-two" \
            op start interval="0" timeout="40" \
            op stop interval="0" timeout="60" \
            op monitor interval="10" timeout="120" start-delay="0" statusurl="http://127.0.0.1/server-status/" \
            meta target-role="Started"
        group All100 WanCluster100 LanCluster100
        group All101 WanCluster101 LanCluster101
        location AlwaysPing100WithNode10 Ping100 \
            rule $id="AlWaysPing100WithNode10-rule" inf: #uname eq 10sch
        location AlwaysPing101WithNode11 Ping101 \
            rule $id="AlWaysPing101WithNode11-rule" inf: #uname eq 11sch
        location NeverLan100WithNode11 LanCluster100 \
            rule $id="RAND1083308" -inf: #uname eq 11sch
        location NeverPing100WithNode11 Ping100 \
            rule $id="NeverPing100WithNode11-rule" -inf: #uname eq 11sch
        location NeverPing101WithNode10 Ping101 \
            rule $id="NeverPing101WithNode10-rule" -inf: #uname eq 10sch
        location Website0NeedsConnectivity Website0 \
            rule $id="Website0NeedsConnectivity-rule" -inf: not_defined pingd or pingd lte 0
        location Website1NeedsConnectivity Website1 \
            rule $id="Website1NeedsConnectivity-rule" -inf: not_defined pingd or pingd lte 0
        colocation Never -inf: LanCluster101 LanCluster100
        colocation Never2 -inf: WanCluster100 LanCluster101
        colocation NeverBothWebsitesTogether -inf: Website0 Website1
        property $id="cib-bootstrap-options" \
            dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
            cluster-infrastructure="openais" \
            expected-quorum-votes="2" \
            no-quorum-policy="ignore" \
            stonith-enabled="false" \
            last-lrm-refresh="1408954702" \
            maintenance-mode="false"
        rsc_defaults $id="rsc-options" \
            resource-stickiness="100" \
            migration-threshold="3"

    I also have a less important question concerning this line:

        colocation NeverBothLans -inf: LanCluster101 LanCluster100

    How do I tell it that this colocation only applies to '11node'?

    Read the article

  • Error Cannot create an Instance of "ObjectName" in Designer when using <UserControl.Resources>

    - by Mike Bynum
    Hi all, I'm trying to bind a combobox ItemsSource to a static resource. I'm oversimplifying my example so that it's easy to understand what I'm doing. I have created a class:

        public class A : ObservableCollection<string>
        {
            public A()
            {
                IKBDomainContext Context = new IKBDomainContext();
                Context.Load(Context.GetIBOptionsQuery("2C6C1Q"), p =>
                {
                    foreach (var item in SkinContext.IKBOptions)
                    {
                        this.Add(item);
                    }
                }, null);
            }
        }

    So the class has a constructor that populates the collection using a domain context that gets data from a persisted database. I'm only doing reads on this list, so I don't have to worry about persisting back.

    In XAML I add a reference to the namespace of this class, then I add it to the page control's UserControl.Resources:

        <UserControl.Resources>
            <This:A x:Key="A"/>
        </UserControl.Resources>

    and then I use this static resource to bind to my combobox ItemsSource. In reality I have to use a DataTemplate to display this object properly, but I won't add that here.

        <ComboBox ItemsSource="{StaticResource A}"/>

    Now when I'm in the designer I get the error: Cannot create an instance of "A". If I compile and run the code, it runs just fine. This seems to only affect the editing of the XAML page. What am I doing wrong?
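    A common reason for "Cannot create an instance of X" on a resource like this is that the constructor throws while the designer instantiates it (here, creating the DomainContext and calling the service at design time). One hedged workaround, assuming this is Silverlight/RIA Services, is to guard the constructor with a design-time check; the sketch below is illustrative only, and LoadFromServer is a hypothetical stand-in for the Context.Load call shown above.

        // Sketch: skip the server call when the XAML designer instantiates the class.
        using System.Collections.ObjectModel;
        using System.ComponentModel;

        public class A : ObservableCollection<string>
        {
            public A()
            {
                // True when Blend / the Visual Studio designer is creating the instance
                // (Silverlight; WPF uses DesignerProperties.GetIsInDesignMode instead).
                if (DesignerProperties.IsInDesignTool)
                {
                    Add("Design-time sample item"); // placeholder data so the designer has something to show
                    return;
                }

                LoadFromServer(); // hypothetical helper wrapping the DomainContext call from the question
            }

            private void LoadFromServer()
            {
                // The original constructor's Context.Load(...) logic would go here at run time.
            }
        }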

    Read the article

  • IUsable: controlling resources in a better way than IDisposable

    - by Ilya Ryzhenkov
    I wish we have "Usable" pattern in C#, when code block of using construct would be passed to a function as delegate: class Usable : IUsable { public void Use(Action action) // implements IUsable { // acquire resources action(); // release resources } } and in user code: using (new Usable()) { // this code block is converted to delegate and passed to Use method above } Pros: Controlled execution, exceptions The fact of using "Usable" is visible in call stack Cons: Cost of delegate Do you think it is feasible and useful, and if it doesn't have any problems from the language point of view? Are there any pitfalls you can see? EDIT: David Schmitt proposed the following using(new Usable(delegate() { // actions here }) {} It can work in the sample scenario like that, but usually you have resource already allocated and want it to look like this: using (Repository.GlobalResource) { // actions here } Where GlobalResource (yes, I know global resources are bad) implements IUsable. You can rewrite is as short as Repository.GlobalResource.Use(() => { // actions here }); But it looks a little bit weird (and more weird if you implement interface explicitly), and this is so often case in various flavours, that I thought it deserve to be new syntactic sugar in a language.

    Read the article

  • Apache mod_rewrite RewriteCond to by-pass static resources not working

    - by d11wtq
    I can't for the life of me fathom out why this RewriteCond is causing every request to be sent to my FastCGI application when it should in fact be letting Apache serve up the static resources. I've added a hello.txt file to my DocumentRoot to demonstrate.

    The text file:

        ls /Sites/CioccolataTest.webapp/Contents/Resources/static
        hello.txt

    The VirtualHost and its rewrite rules:

        AppClass /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest -port 5065
        FastCgiExternalServer /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest.fcgi -host 127.0.0.1:5065
        <VirtualHost *:80>
            ServerName cioccolata-test.webdev
            DocumentRoot /Sites/CioccolataTest.webapp/Contents/Resources/static
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^/(.*)$ /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest.fcgi/$1 [QSA,L]
        </VirtualHost>

    Even with the !-f check, Apache is directing requests for this text file to my app (i.e. accessing http://cioccolata-test.webdev/hello.txt returns my app, not the text file).

    As a proof of concept I changed the RewriteCond to:

        RewriteCond %{REQUEST_URI} !^/hello.txt

    That made it serve the text file correctly and allowed every other request to hit the FastCGI application. Why doesn't my original version work? I need to tell Apache to serve every file in the DocumentRoot as a static resource, but if the file doesn't exist it should rewrite the request to my FastCGI application.

    NOTE: The running FastCGI application is at /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest (without the .fcgi suffix)... the .fcgi suffix is only being used to tell the fastcgi module to direct the request to the app.

    Read the article

  • Sharing storage between servers

    - by El Yobo
    I have a PHP based web application which is currently only using one webserver, but will shortly be scaling up to another. In most regards this is pretty straightforward, but the application also stores a lot of files on the filesystem. It seems that there are many approaches to sharing the files between the two servers, from the very simple to the reasonably complex. These are the options that I'm aware of:

    Simple network storage

    - NFS
    - SMB/CIFS

    Clustered filesystems

    - Lustre
    - GFS/GFS2
    - GlusterFS
    - Hadoop DFS
    - MogileFS

    What I want is for a file uploaded via one webserver to be immediately available if accessed through the other. The data is extremely important and absolutely cannot be lost, so whatever is implemented needs to a) never lose data and b) have very high availability (as good as, or better than, a local filesystem). It seems like the clustered filesystems will also provide faster data access than local storage (for large files), but that isn't of vital importance at the moment.

    What would you recommend? Do you have any suggestions to add, or anything specifically to look out for with the above options? Any suggestions on how to manage backup of data on the clustered filesystems?

    Read the article

  • ResourceFilterFactory and non-Path annotated Resources

    - by tousdan
    (I'm using Jersey 1.7.) I am attempting to add a ResourceFilterFactory to my project to select which filters are used per method, using annotations. The ResourceFilterFactory seems to be able to create filters for resources which are annotated with the Path annotation, but it would seem that it does not attempt to generate filters for the methods of the SubResourceLocator of the resources that are called.

        @Path("a")
        public class A {
            // sub resource locator?
            @Path("b")
            public B getB() {
                return new B();
            }

            @GET
            public void doGet() {}
        }

        public class B {
            @GET
            public void doOtherGet() { }

            @Path("c")
            public void doInner() { }
        }

    When run, the filter factory is only called for the following:

        AbstractResourceMethod(A#doGet)
        AbstractSubResourceLocator(A#getB)

    whereas I expected it to be called for every method of the sub-resource. I'm currently using the following options in my web.xml:

        <init-param>
            <param-name>com.sun.jersey.spi.container.ResourceFilters</param-name>
            <param-value>com.my.MyResourceFilterFactory</param-value>
        </init-param>
        <init-param>
            <param-name>com.sun.jersey.config.property.packages</param-name>
            <param-value>com.my.resources</param-value>
        </init-param>

    Is my understanding of the filter factory flawed?

    Read the article

  • Castle, sharing a transient component between a decorator and a decorated component

    - by Marius
    Consider the following example:

        public interface ITask
        {
            void Execute();
        }

        public class LoggingTaskRunner : ITask
        {
            private readonly ITask _taskToDecorate;
            private readonly MessageBuffer _messageBuffer;

            public LoggingTaskRunner(ITask taskToDecorate, MessageBuffer messageBuffer)
            {
                _taskToDecorate = taskToDecorate;
                _messageBuffer = messageBuffer;
            }

            public void Execute()
            {
                _taskToDecorate.Execute();
                Log(_messageBuffer);
            }

            private void Log(MessageBuffer messageBuffer) {}
        }

        public class TaskRunner : ITask
        {
            public TaskRunner(MessageBuffer messageBuffer) { }

            public void Execute() { }
        }

        public class MessageBuffer { }

        public class Configuration
        {
            public void Configure()
            {
                IWindsorContainer container = null;

                container.Register(
                    Component.For<MessageBuffer>()
                        .LifeStyle.Transient);

                container.Register(
                    Component.For<ITask>()
                        .ImplementedBy<LoggingTaskRunner>()
                        .ServiceOverrides(ServiceOverride.ForKey("taskToDecorate").Eq("task.to.decorate")));

                container.Register(
                    Component.For<ITask>()
                        .ImplementedBy<TaskRunner>()
                        .Named("task.to.decorate"));
            }
        }

    How can I make Windsor instantiate the "shared" transient component so that both "Decorator" and "Decorated" get the same instance?

    Edit: since the design is being critiqued, I am posting something closer to what is being done in the app. Maybe someone can suggest a better solution (if sharing the transient resource between a logger and the true task is considered a bad design).
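    To make the goal concrete, here is a hedged, container-free sketch of the object graph the registration above is meant to produce per resolution: one transient MessageBuffer shared by the decorator and the decorated task. The types are the ones from the question; the Compose helper is just an illustration, not a Windsor feature.

        // Hand-wired version of the intended graph (no container involved):
        // one transient MessageBuffer shared by decorator and decorated task.
        public static class ManualComposition
        {
            public static ITask Compose()
            {
                var buffer = new MessageBuffer();            // the "shared transient"
                ITask inner = new TaskRunner(buffer);        // decorated component
                return new LoggingTaskRunner(inner, buffer); // decorator sees the same buffer instance
            }
        }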

    Read the article

  • Sharing a COM port over TCP

    - by guinness
    What would be a simple design pattern for sharing a COM port over TCP with multiple clients? For example, a local GPS device that could transmit co-ordinates to remote hosts in real time. So I need a program that would open the serial port and accept multiple TCP connections, like:

        class Program
        {
            public static void Main(string[] args)
            {
                SerialPort sp = new SerialPort("COM4", 19200, Parity.None, 8, StopBits.One);
                Socket srv = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                srv.Bind(new IPEndPoint(IPAddress.Any, 8000));
                srv.Listen(20);
                while (true)
                {
                    Socket soc = srv.Accept();
                    new Connection(soc);
                }
            }
        }

    I would then need a class to handle the communication between connected clients, allowing them all to see the data and keeping it synchronized so client commands are received in sequence:

        class Connection
        {
            static object lck = new object();
            static List<Connection> cons = new List<Connection>();

            public Socket socket;
            public StreamReader reader;
            public StreamWriter writer;

            public Connection(Socket soc)
            {
                this.socket = soc;
                this.reader = new StreamReader(new NetworkStream(soc, false));
                this.writer = new StreamWriter(new NetworkStream(soc, true));
                new Thread(ClientLoop).Start();
            }

            void ClientLoop()
            {
                lock (lck)
                {
                    cons.Add(this); // note: the original snippet used an undeclared "connections" list here
                }
                while (true)
                {
                    lock (lck)
                    {
                        string line = reader.ReadLine();
                        if (String.IsNullOrEmpty(line))
                            break;
                        foreach (Connection con in cons)
                            con.writer.WriteLine(line);
                    }
                }
                lock (lck)
                {
                    cons.Remove(this);
                    socket.Close();
                }
            }
        }

    The problem I'm struggling to resolve is how to facilitate communication between the SerialPort instance and the threads. I'm not certain that the above code is the best way forward, so does anybody have another solution (the simpler the better)?
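    Since the sticking point is getting data from the SerialPort to the client connections, one hedged alternative shape is to let the serial port's DataReceived event do the broadcasting, so the TCP side only has to maintain the client list. The sketch below is an illustration under those assumptions, not a recommendation from the question; the COM settings and TCP port simply mirror the snippet above.

        // Minimal sketch: serial data is pushed to all connected TCP clients from DataReceived.
        using System;
        using System.Collections.Generic;
        using System.IO.Ports;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class SerialTcpBridge
        {
            static readonly object sync = new object();
            static readonly List<TcpClient> clients = new List<TcpClient>();

            static void Main()
            {
                var port = new SerialPort("COM4", 19200, Parity.None, 8, StopBits.One);
                port.DataReceived += (s, e) =>
                {
                    string line = port.ReadLine();   // e.g. one NMEA sentence per line
                    Broadcast(line + "\r\n");
                };
                port.Open();

                var listener = new TcpListener(IPAddress.Any, 8000);
                listener.Start();
                while (true)
                {
                    TcpClient client = listener.AcceptTcpClient();
                    lock (sync) clients.Add(client);
                }
            }

            static void Broadcast(string line)
            {
                byte[] data = Encoding.ASCII.GetBytes(line);
                lock (sync)
                {
                    clients.RemoveAll(c => !TrySend(c, data)); // drop clients that have gone away
                }
            }

            static bool TrySend(TcpClient client, byte[] data)
            {
                try { client.GetStream().Write(data, 0, data.Length); return true; }
                catch (Exception) { client.Close(); return false; }
            }
        }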

    Read the article

  • Polymorphic urls with singular resources

    - by Brendon Muir
    I'm getting strange output when using the following routing setup:

        resources :warranty_types do
          resources :decisions
        end

        resource :warranty_review, :only => [] do
          resources :decisions
        end

    I have many warranty_types but only one warranty_review (thus the singular route declaration). The decisions are polymorphically associated with both. I have just a single decisions controller and a single _form.html.haml partial to render the form for a decision. This is the view code:

        = simple_form_for @decision, :url => [@decision_tree_owner, @decision.becomes(Decision)] do |form|

    The warranty_type URL looks like this (for a new decision):

        /warranty_types/2/decisions

    whereas the warranty_review URL looks like this:

        /admin/warranty_review/decisions.1

    I think because the warranty_review id has nowhere to go, it's just getting appended to the end as an extension. Can someone explain what's going on here and how I might be able to fix it? I can work around it by detecting a warranty_review class and substituting @decision_tree_owner with :warranty_review, which generates the correct URL, but this is messy. I would have thought that the routing would be smart enough to realise that warranty_review is a singular resource and thus discard the id from the URL. This is Rails 3, by the way. :)

    Read the article

  • Sharing HDC between different processes

    - by Heinrich Ulbricht
    Hi, I am writing some IPC functionality and need to pass certain resources from one process to another. This works well for pipe handles etc., which can be duplicated via DuplicateHandle. Now I need to pass an HDC from one process to the other. Is this even possible? If yes: how? Sub-question: I am assuming passing window handles (HWND) from one process to the other is safe. Is this assumption correct? Thanks for your help!

    Read the article
