Search Results

Search found 14951 results on 599 pages for 'connect'.


  • IE / Facebook Issue: Why does the Facebook Like box not display in Internet Explorer 6 - IE8?

    - by Vaibhav Bhalke
    My Application.html file now contains: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/DTD/strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml"> <head> <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" /> </head> <body> <script type="text/javascript" language="javascript" src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php"></script> <script type="text/javascript"> FB_RequireFeatures(["Connect"], function(){ var x=1; } ); </script> <script src="http://static.ak.connect.facebook.com/connect.php/en_US" type="text/javascript"></script> </body> </html> My Java code for the Like box is as follows (FBPageFanWidget.java): class FBPageFanWidget extends Composite { public FBPageFanWidget() { VerticalPanel mainPanel = new VerticalPanel(); mainPanel.getElement().setInnerHTML("<script type='text/javascript' src='http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php/en_US'></script><script type='text/javascript'>FB.init('');</script><fb:fan profile_id=\"113106068709539\" stream=\"0\" connections=\"10\" logobar=\"0\" width=\"244\" height=\"240\" css='http://127.0.01:8080/webapplicationname/facebook.css?1'></fb:fan>"); initWidget(mainPanel); } } We are using the proper Facebook API_KEY and PAGE_ID. It is very important for us to show the Facebook Like box in our web application because most of our users are on IE. If we add FBPageFanWidget.java to our web application, our home page does not display in IE because of the Facebook Like box, so we changed FBPageFanWidget.java as follows: class FBPageFanWidget extends Composite { public FBPageFanWidget() { VerticalPanel mainPanel = new VerticalPanel(); if (!isIE()) { mainPanel.getElement().setInnerHTML("<script type='text/javascript' src='http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php/en_US'></script><script type='text/javascript'>FB.init('');</script><fb:fan profile_id=\"113106068709539\" stream=\"0\" connections=\"10\" logobar=\"0\" width=\"244\" height=\"240\" css='http://127.0.01:8080/webapplicationname/facebook.css?1'></fb:fan>"); } initWidget(mainPanel); } public native String getUserAgent() /*-{ return navigator.userAgent; }-*/; private boolean isIE() { return (getUserAgent().indexOf("MSIE") > -1); } } With these changes, the Facebook Like box displays in every browser except IE6 - IE8, and our home page now displays in IE8, but without the Like box. Does this mean there is a problem in IE, or what changes do I need to make in my HTML or Java files so that the Like box shows properly along with our home page? It is very important for us because most of our users are on IE. Please reply ASAP.
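
    One detail worth illustrating here: Internet Explorer does not execute <script> tags that are injected through innerHTML, which is consistent with the FeatureLoader script never running inside this widget in IE. The following GWT sketch is not the original widget; it loads the script through a real DOM element and keeps only the XFBML markup in innerHTML, and it assumes that a later FB_RequireFeatures/FB.init call (as in Application.html) will parse the <fb:fan> tag.

        // Minimal GWT sketch: inject the FeatureLoader script via the DOM so IE executes it.
        import com.google.gwt.dom.client.Document;
        import com.google.gwt.dom.client.ScriptElement;
        import com.google.gwt.user.client.ui.Composite;
        import com.google.gwt.user.client.ui.HTML;

        class FBPageFanWidgetSketch extends Composite {
            public FBPageFanWidgetSketch() {
                // Only the XFBML markup goes into the widget; no <script> tags in innerHTML.
                HTML fanBox = new HTML("<fb:fan profile_id=\"113106068709539\" stream=\"0\" "
                        + "connections=\"10\" logobar=\"0\" width=\"244\" height=\"240\"></fb:fan>");
                initWidget(fanBox);

                // Load the FeatureLoader script through a real DOM element.
                ScriptElement loader = Document.get().createScriptElement();
                loader.setSrc("http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php/en_US");
                Document.get().getBody().appendChild(loader);
            }
        }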

    Read the article

  • Mercurial hg clone on Windows via ssh with copSSH issue

    - by Kyle Tolle
    I have a Windows Server 2008 machine (IIS7) with CopSSH set up on it. To connect to it, I have a Windows 7 machine with Mercurial 1.5.1 (and TortoiseHg) installed. I can connect to the server using PuTTY with a non-standard ssh port and a .ppk file just fine, so I know the server can be SSH'd into. Next, I wanted to use the CLI to connect via hg clone to get a private repo. I've seen elsewhere that you need to have ssh configured in your mercurial.ini file, so my mercurial.ini has the line: ssh = plink.exe -ssh -C -l username -P #### -i "C:/Program Files/PuTTY/Key Files/KyleKey.ppk" Note: username is filled in with the username I set up via CopSSH, and #### is filled in with the non-standard ssh port I've defined for CopSSH. When I try the command hg clone ssh://inthom.com I get this error: remote: bash: inthom.com: command not found abort: no suitable response from remote hg! It looks like hg or plink parses the hostname such that it thinks inthom.com is a command instead of the server to ssh to. That's really odd. Next, I tried to use plink directly with plink -P #### ssh://inthom.com; I am then prompted for my username and then my password. I enter them both and get this error: bash: ssh://inthom.com: No such file or directory So now it looks like plink doesn't parse the hostname correctly. I fiddled around for a while trying to figure out how to call hg clone with an empty ssh:// host and eventually found that this command lets me reach the server and clone a test repo on the inthom.com server: hg clone ssh://!/Repos/test ! is the character I've found that lets me leave the hostname blank but still specify the repo folder to clone. What I really don't understand is how plink knows which server to ssh to at all: neither my mercurial.ini nor the command specifies a server. None of the hg clone examples I've seen use a ! character. They all use an address, which makes sense, so you can connect to any repo via ssh that you want to clone. My only guess was that it somehow defaults to the last server I used PuTTY to SSH to, but I SSH'd into another server, then tried to use plink to get to it, and plink still defaults to inthom.com (verified with the -v arg to plink). So I am at a loss as to how plink gets this server value at all. For "fun", I tried using TortoiseHg and can only clone a repo when I use ssh://!/Repos/test as the Source. As you can see, since plink doesn't parse the hostname correctly, I had to specify the port number and username in the mercurial.ini file instead of in the URL like username@inthom.com:#### as you'd expect. Trying to figure this out at first drove me insane, because I would get errors that the host couldn't be reached, which I knew shouldn't be the case. My question is: how can I configure my setup so that ssh://username@inthom.com:####/Repos/test is parsed correctly as the username, hostname, port number, and repo to clone? Is it something wrong with the version of plink that I'm using, or is there some setting I may have messed up? If it is plink's fault, is there an alternative tool I can use? I'm going to try to get a friend set up to connect to this same repo, so I'd like a clean solution instead of this ! business, especially since I have no idea how plink gets this default server and I'm not sure he'd even be able to reach inthom.com correctly. PS. I've had to use a ton of different tutorials just to get to this stage, so I haven't tried pushing any changes to the server yet.
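
    For reference, here is a hedged sketch of one common mercurial.ini arrangement for plink: keep only the connection flags and key in the ssh command and spell out the user, host and path in the clone URL. It is an illustration built from the values in the question (the #### port placeholder and the .ppk path are the poster's), not a verified fix for this particular setup.

        ; [ui] section of mercurial.ini; authentication details stay here,
        ; user/host/path go in the clone URL.
        [ui]
        ssh = "C:/Program Files/PuTTY/plink.exe" -ssh -C -P #### -i "C:/Program Files/PuTTY/Key Files/KyleKey.ppk"

    With the port and key kept in the ssh command, the clone would then be run as: hg clone ssh://username@inthom.com/Repos/test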
Hopefully I'll get this figured out and then I can try pushing changes to the repo.

    Read the article

  • Mercurial "hg clone" on Windows via ssh with plink issue

    - by Kyle Tolle
    I have a Windows Server 2008 machine (iis7) that has CopSSH set up on it. To connect to it, I have a Windows 7 machine with Mercurial 1.5.1 (and TortoiseHg) installed. I can connect to the server using PuTTY with a non-standard ssh port and a .ppk file just fine. So I know the server can be SSH'd into. Next, I wanted to use the CLI to connect via hg clone to get a private repo. I've seen elsewhere that you need to have ssh configured in your mercurial.ini file, so my mercurial.ini has a line: ssh = plink.exe -ssh -C -l username -P #### -i "C:/Program Files/PuTTY/Key Files/KyleKey.ppk" Note: username is filled in with the username I set up via copSSH. #### is filled in with the non-standard ssh port I've defined for copSSH. I try to do the command hg clone ssh://inthom.com but I get this error: remote: bash: inthom.com: command not found abort: no suitable response from remote hg! It looks like hg or plink parses the hostname such that it thinks that inthom.com is a command instead of the server to ssh to. That's really odd. Next, I tried to just use plink to connect by plink -P #### ssh://inthom.com, and I am then prompted for my username, and next password. I enter them both and then I get this error: bash: ssh://inthom.com: No such file or directory So now it looks like plink doesn't parse the hostname correctly. I fiddled around for a while trying to figure out how to do call hg clone with an empty ssh:// field and eventually figured out that this command allows me to reach the server and clone a test repo on the inthom.com server: hg clone ssh://!/Repos/test ! is the character I've found that let's me leave the hostname blank, but specify the repo folder to clone. What I really don't understand is how plink knows what server to ssh to at all. neither my mercurial.ini nor the command specify a server. None of the hg clone examples I've seen have a ! character. They all use an address, which makes sense, so you can connect to any repo via ssh that you want to clone. My only guess is that it somehow defaults to the last server I used PuTTY to SSH to, but I SSH'd into another server, and then tried to use plink to get to it, but plink still defaults to inthom.com (verified with the -v arg to plink). So I am at a loss as to how plink gets this server value at all. For "fun", I tried using TortoiseHg and can only clone a repo when I use ssh://!/Repos/test as the Source. Now, you can see that, since plink doesn't parse the hostname correctly, I had to specify the port number and username in the mercurial.ini file, instead of in the hostname like [email protected]:#### like you'd expect to. Trying to figure this out at first drove me insane, because I would get errors that the host couldn't be reached, which I knew shouldn't be the case. My question is how can I configure my setup so that ssh://[email protected]:####/Repos/test is parsed correctly as the username, hostname, port number, and repo to copy? Is it something wrong with the version of plink that I'm using, or is there some setting I may have messed up? If it is plink's fault, is there an alternative tool I can use? I'm going to try to get my friend set up to connect to this same repo, so I'd like to have a clean solution instead of this ! business. Especially when I have no idea how plink gets this default server, so I'm not sure if he'd even be able to get to inthom.com correctly. PS. I've had to use a ton of different tutorials to even get to this stage. Therefore, I haven't tried pushing any changes to the server yet. 
Hopefully I'll get this figured out and then I can try pushing changes to the repo.

    Read the article

  • Building a JMX client in a servlet installed on the Deployment Manager

    - by Trevor
    Hi guys, I'm building a monitoring application as a servlet running on my websphere 7 ND deployment manager. The tool uses JMX to query the deployment manager for various data. Global Security is enabled on the dmgr. I'm having problems getting this to work however. My first attempt was to use the websphere client code: String sslProps = "file:" + base +"/properties/ssl.client.props"; System.setProperty("com.ibm.SSL.ConfigURL", sslProps); String soapProps = "file:" + base +"/properties/soap.client.props"; System.setProperty("com.ibm.SOAP.ConfigURL", pp); Properties connectProps = new Properties(); connectProps.setProperty(AdminClient.CONNECTOR_TYPE, AdminClient.CONNECTOR_TYPE_SOAP); connectProps.setProperty(AdminClient.CONNECTOR_HOST, dmgrHost); connectProps.setProperty(AdminClient.CONNECTOR_PORT, soapPort); connectProps.setProperty(AdminClient.CONNECTOR_SECURITY_ENABLED, "true"); AdminClient adminClient = AdminClientFactory.createAdminClient(connectProps) ; This results in the following exception: Caused by: com.ibm.websphere.management.exception.ConnectorNotAvailableException: ADMC0016E: The system cannot create a SOAP connector to connect to host ssunlab10.apaceng.net at port 13903. at com.ibm.ws.management.connector.soap.SOAPConnectorClient.getUrl(SOAPConnectorClient.java:1306) at com.ibm.ws.management.connector.soap.SOAPConnectorClient.access$300(SOAPConnectorClient.java:128) at com.ibm.ws.management.connector.soap.SOAPConnectorClient$4.run(SOAPConnectorClient.java:370) at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118) at com.ibm.ws.management.connector.soap.SOAPConnectorClient.reconnect(SOAPConnectorClient.java:363) ... 22 more Caused by: java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:519) at java.net.Socket.connect(Socket.java:469) at java.net.Socket.<init>(Socket.java:366) at java.net.Socket.<init>(Socket.java:209) at com.ibm.ws.management.connector.soap.SOAPConnectorClient.getUrl(SOAPConnectorClient.java:1286) ... 26 more So, I then tried to do it via RMI, but adding in the sas.client.properties to the environment, and setting the connectort type in the code to CONNECTOR_TYPE_RMI. Now though I got a NameNotFoundException out of CORBA: Caused by: javax.naming.NameNotFoundException: Context: , name: JMXConnector: First component in name JMXConnector not found. [Root exception is org.omg.CosNaming.NamingContextPackage.NotFound: IDL:omg.org/CosNaming/NamingContext/NotFound:1.0] To see if it was an IBM issue, I tried using the standard JMX connector as well with the same result (substitute AdminClient for JMXConnector in the above error) * JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/JMXConnector"); Hashtable h = new Hashtable(); String providerUrl = "corbaloc:iiop:" + dmgrHost + ":" + rmiPort + "/WsnAdminNameService"; h.put(Context.PROVIDER_URL, providerUrl); // Specify the user ID and password for the server if security is enabled on server. String[] credentials = new String[] { "***", "***" }; h.put("jmx.remote.credentials", credentials); // Establish the JMX connection. JMXConnector jmxc = JMXConnectorFactory.connect(url, h); // Get the MBean server connection instance. 
mbsc = jmxc.getMBeanServerConnection(); At this point, in desperation, I wrote a wsadmin script to run both the RMI and SOAP methods. To my amazement, that works fine. So my question is: why does the code not work in a servlet installed on the dmgr? Regards, Trevor
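
    Since the servlet runs inside the deployment manager's own JVM, one option that sidesteps the remote SOAP/RMI connectors (and their security handshake) entirely is WebSphere's local AdminService. The following is a hedged sketch based on the public AdminServiceFactory API, not the original code; the MBean query shown is only an example.

        // Sketch: query the dmgr's MBean server in-process instead of via a remote connector.
        import java.util.Set;
        import javax.management.ObjectName;
        import com.ibm.websphere.management.AdminService;
        import com.ibm.websphere.management.AdminServiceFactory;

        public class LocalDmgrQuery {
            public Set queryServers() throws Exception {
                AdminService adminService = AdminServiceFactory.getAdminService();
                // Example query: all Server MBeans visible to the deployment manager.
                ObjectName pattern = new ObjectName("WebSphere:type=Server,*");
                return adminService.queryNames(pattern, null);
            }
        }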

    Read the article

  • IE / Facebook Issue: Why does the Facebook Like box not display in Internet Explorer 6 - IE8?

    - by Vaibhav Bhalke
    The Facebook Like box displays through my web application in every browser except IE6 - IE8. My Application.html file now contains: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/DTD/strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml"> <head> <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" /> </head> <body> <script type="text/javascript" language="javascript" src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php"></script> <script type="text/javascript"> FB_RequireFeatures(["Connect"], function(){ var x=1; } ); </script> <script src="http://static.ak.connect.facebook.com/connect.php/en_US" type="text/javascript"></script> </body> </html> My Java code for the Like box is as follows (FBPageFanWidget.java): class FBPageFanWidget extends Composite { public FBPageFanWidget() { VerticalPanel mainPanel = new VerticalPanel(); mainPanel.getElement().setInnerHTML("<script type='text/javascript' src='http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php/en_US'></script><script type='text/javascript'>FB.init('');</script><fb:fan profile_id=\"113106068709539\" stream=\"0\" connections=\"10\" logobar=\"0\" width=\"244\" height=\"240\" css='http://127.0.01:8080/webapplicationname/facebook.css?1'></fb:fan>"); initWidget(mainPanel); } } We are using the proper Facebook API_KEY and PAGE_ID. It is very important for us to show the Facebook Like box in our web application because most of our users are on IE. If we add FBPageFanWidget.java to our web application, our home page does not display in IE because of the Facebook Like box, so we changed FBPageFanWidget.java as follows: class FBPageFanWidget extends Composite { public FBPageFanWidget() { VerticalPanel mainPanel = new VerticalPanel(); if (!isIE()) { mainPanel.getElement().setInnerHTML("<script type='text/javascript' src='http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php/en_US'></script><script type='text/javascript'>FB.init('');</script><fb:fan profile_id=\"113106068709539\" stream=\"0\" connections=\"10\" logobar=\"0\" width=\"244\" height=\"240\" css='http://127.0.01:8080/webapplicationname/facebook.css?1'></fb:fan>"); } initWidget(mainPanel); } public native String getUserAgent() /*-{ return navigator.userAgent; }-*/; private boolean isIE() { return (getUserAgent().indexOf("MSIE") > -1); } } With these changes, the Facebook Like box displays in every browser except IE6 - IE8, and our home page now displays in IE8, but without the Like box. Does this mean there is a problem in IE, or what changes do I need to make in my HTML or Java files so that the Like box shows properly along with our home page? It is very important for us because most of our users are on IE. Please reply ASAP.

    Read the article

  • Qthread - trouble shutting down threads

    - by Bryan Greenway
    For the last few days, I've been trying out the new preferred approach for using QThreads without subclassing QThread. The trouble I'm having is when I try to shutdown a set of threads that I created. I regularly get a "Destroyed while thread is still running" message (if I'm running in Debug mode, I also get a Segmentation Fault dialog). My code is very simple, and I've tried to follow the examples that I've been able to find on the internet. My basic setup is as follows: I've a simple class that I want to run in a separate thread; in fact, I want to run 5 instances of this class, each in a separate thread. I have a simple dialog with a button to start each thread, and a button to stop each thread (10 buttons). When I click one of the "start" buttons, a new instance of the test class is created, a new QThread is created, a movetothread is called to get the test class object to the thread...also, since I have a couple of other members in the test class that need to move to the thread, I call movetothread a few additional times with these other items. Note that one of these items is a QUdpSocket, and although this may not make sense, I wanted to make sure that sockets could be moved to a separate thread in this fashion...I haven't tested the use of the socket in the thread at this point. Starting of the threads all seem to work fine. When I use the linux top command to see if the threads are created and running, they show up as expected. The problem occurs when I begin stopping the threads. I randomly (or it appears to be random) get the error described above. Class that is to run in separate thread: // Declaration class TestClass : public QObject { Q_OBJECT public: explicit TestClass(QObject *parent = 0); QTimer m_workTimer; QUdpSocket m_socket; Q_SIGNALS: void finished(); public Q_SLOTS: void start(); void stop(); void doWork(); }; // Implementation TestClass::TestClass(QObject *parent) : QObject(parent) { } void TestClass::start() { connect(&m_workTimer, SIGNAL(timeout()),this,SLOT(doWork())); m_workTimer.start(50); } void TestClass::stop() { m_workTimer.stop(); emit finished(); } void TestClass::doWork() { int j; for(int i = 0; i<10000; i++) { j = i; } } Inside my main app, code called to start the first thread (similar code exists for each of the other threads): mp_thread1 = new QThread(); mp_testClass1 = new TestClass(); mp_testClass1->moveToThread(mp_thread1); mp_testClass1->m_socket.moveToThread(mp_thread1); mp_testClass1->m_workTimer.moveToThread(mp_thread1); connect(mp_thread1, SIGNAL(started()), mp_testClass1, SLOT(start())); connect(mp_testClass1, SIGNAL(finished()), mp_thread1, SLOT(quit())); connect(mp_testClass1, SIGNAL(finished()), mp_testClass1, SLOT(deleteLater())); connect(mp_testClass1, SIGNAL(finished()), mp_thread1, SLOT(deleteLater())); connect(this,SIGNAL(stop1()),mp_testClass1,SLOT(stop())); mp_thread1->start(); Also inside my main app, this code is called when a stop button is clicked for a specific thread (in this case thread 1): emit stop1(); Sometimes it appears that threads are stopped and destroyed without issue. Other times, I get the error described above. Any guidance would be greatly appreciated. Thanks, Bryan
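
    One commonly recommended shutdown order for this worker-object pattern is to ask the worker to stop, then quit() and wait() on the QThread before anything deletes the thread object. The sketch below illustrates that order; it is built around the names in the question (TestClass and its stop() slot), it is not the poster's code, and it assumes the QThread and worker are not also being deleted elsewhere via deleteLater while this runs.

        // Shut down one worker/thread pair: stop the worker in its own thread,
        // then stop the thread's event loop and wait for it to finish.
        #include <QThread>
        #include <QMetaObject>

        void shutdownWorker(QThread *thread, QObject *worker)
        {
            // Blocking queued call so stop() (and the QTimer it owns) runs and
            // completes in the worker's thread before we continue.
            QMetaObject::invokeMethod(worker, "stop", Qt::BlockingQueuedConnection);

            thread->quit();   // ask the thread's event loop to exit
            thread->wait();   // block until the thread has really finished

            delete worker;          // that thread's event loop is gone, so delete directly
            thread->deleteLater();  // the QThread object itself lives in the GUI thread
        }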

    Read the article

  • How to support both HTTP and HTTPS channels in Flex/BlazeDS?

    - by digitalsanctum
    I've been trying to find the right configuration for supporting both http/s requests in a Flex app. I've read all the docs and they allude to doing something like the following: <default-channels> <channel ref="my-secure-amf"> <serialization> <log-property-errors>true</log-property-errors> </serialization> </channel> <channel ref="my-amf"> <serialization> <log-property-errors>true</log-property-errors> </serialization> </channel> This works great when hitting the app via https but get intermittent communication failures when hitting the same app via http. Here's an abbreviated services-config.xml: <channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel"> <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amf" class="flex.messaging.endpoints.AMFEndpoint"/> <properties> <!-- HTTPS requests don't work on IE when pragma "no-cache" headers are set so you need to set the add-no-cache-headers property to false --> <add-no-cache-headers>false</add-no-cache-headers> <!-- Use to limit the client channel's connect attempt to the specified time interval. --> <connect-timeout-seconds>10</connect-timeout-seconds> </properties> </channel-definition> <channel-definition id="my-secure-amf" class="mx.messaging.channels.SecureAMFChannel"> <!--<endpoint url="https://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure" class="flex.messaging.endpoints.SecureAMFEndpoint"/>--> <endpoint url="https://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure" class="flex.messaging.endpoints.AMFEndpoint"/> <properties> <add-no-cache-headers>false</add-no-cache-headers> <connect-timeout-seconds>10</connect-timeout-seconds> </properties> </channel-definition> I'm running with Tomcat 5.5.17 and Java 5. The BlazeDS docs say this is the best practice. Is there a better way? With this config, there seems to be 2-3 retries associated with each channel defined in the default-channels element so it always takes ~20s before the my-amf channel connects via a http request. Is there a way to override the 2-3 retries to say, 1 retry for each channel? Thanks in advance for answers.
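
    An alternative sometimes used instead of listing both channels in default-channels is to pick the channel at runtime based on the protocol the SWF was loaded over, which avoids the failover retries altogether. The ActionScript below is a hedged sketch of that idea, not code from the post; the destination name and the hard-coded endpoint URLs are placeholders that would need to match the real services-config.xml.

        // Sketch: build a ChannelSet that matches how the application was loaded.
        import mx.core.Application;
        import mx.messaging.ChannelSet;
        import mx.messaging.channels.AMFChannel;
        import mx.messaging.channels.SecureAMFChannel;
        import mx.rpc.remoting.RemoteObject;

        private function buildRemoteObject():RemoteObject {
            var appUrl:String = Application(Application.application).url;
            var channelSet:ChannelSet = new ChannelSet();

            if (appUrl.indexOf("https:") == 0) {
                channelSet.addChannel(new SecureAMFChannel("my-secure-amf",
                    "https://myserver.example.com/mycontext/messagebroker/amfsecure"));
            } else {
                channelSet.addChannel(new AMFChannel("my-amf",
                    "http://myserver.example.com/mycontext/messagebroker/amf"));
            }

            var ro:RemoteObject = new RemoteObject("myDestination");  // placeholder destination
            ro.channelSet = channelSet;
            return ro;
        }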

    Read the article

  • PHP5.3 is not working with MySQL5.1 IIS7 Times out

    - by Thorn007
    I have set up PHP 5.3, MySQL 5.1, and IIS7 on Windows 7, but PHP doesn't want to work with MySQL. I'm assuming it is a configuration error or an incomplete install on my part. MySQL 5.1 is working; PHP 5.3 is working (phpinfo() shows its info and that I have enabled MySQL); IIS is set up and uses the FastCGI module to run PHP; IIS registers php.ini updates; port 3306 is open in the firewall; php.ini is configured correctly; and I have added C:\php to the Windows system PATH. In the past I remember moving a file, libmysql.dll, to System32, but that doesn't appear to ship with PHP 5.3.1, as the driver is now built in: http://us3.php.net/manual/en/mysqlnd.install.php. (This has been giving me so much trouble that I have been documenting my findings on my blog at http://inteldesigner.com/2010/code/having-problems-getting-php5-3-to-work-with-mysql5-1 ) NEED: I need to install PHP manually (I don't want to use the quick installer or an older version), and I need to get PHP 5.3 to work with MySQL 5.1 so I can install WordPress 2.9 and Drupal 7 alpha. Any links or suggestions would be great; I have already done everything on the IIS web site and nothing is working. I'm guessing it has not been updated for the new software. BUGS/SOLUTION: The solution is here: http://bugs.php.net/bug.php?id=50172 (thanks go to don.raman on the iis.net forums: http://forums.iis.net/p/1164911/1933894.aspx ) SYMPTOMS: The PHP function mysql_connect() in conjunction with PHP 5.3 locks up the server and returns error 500 (IPv6 is the problem; see the link above). TEST CODE: <?php $con = mysql_connect("localhost","root","***"); if (!$con) { die('Could not connect: ' . mysql_error()); } // some code mysql_close($con); ?> ERRORS: From the browser: HTTP Error 500.0 - Internal Server Error C:\php\php-cgi.exe - The FastCGI process exceeded configured activity timeout When I run php -f c:\public_html\index.php from the command line I get: PHP Warning: mysql_connect(): [2002] A connection attempt failed because the connected party did not (trying to connect via tcp://localhost:3306) in C:\public_html\index.php on line 10 Warning: mysql_connect(): [2002] A connection attempt failed because the connected party did not (trying to connect via tcp://localhost:3306) in C:\public_html\index.php on line 10 PHP Warning: mysql_connect(): A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\public_html\index.php on line 10 Warning: mysql_connect(): A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\public_html\index.php on line 10 Could not connect: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. C:\Users\Kevin>
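
    The linked bug report points at IPv6: on Windows 7, "localhost" can resolve to ::1 first while MySQL listens only on the IPv4 loopback, so mysql_connect() waits until the FastCGI timeout. A minimal sketch of the usual workaround, assuming the default MySQL bind address and using placeholder credentials, is to connect to 127.0.0.1 explicitly:

        <?php
        // Connect to the IPv4 loopback directly so PHP 5.3 on Windows does not try ::1 first.
        $con = mysql_connect("127.0.0.1", "root", "password");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        // ... queries ...
        mysql_close($con);
        ?>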

    Read the article

  • python Socket.IO client for sending broadcast messages to TornadIO2 server

    - by Alp
    I am building a realtime web application. I want to be able to send broadcast messages from the server-side implementation of my python application. Here is the setup: socketio.js on the client-side TornadIO2 server as Socket.IO server python on the server-side (Django framework) I can succesfully send socket.io messages from the client to the server. The server handles these and can send a response. In the following i will describe how i did that. Current Setup and Code First, we need to define a Connection which handles socket.io events: class BaseConnection(tornadio2.SocketConnection): def on_message(self, message): pass # will be run if client uses socket.emit('connect', username) @event def connect(self, username): # send answer to client which will be handled by socket.on('log', function) self.emit('log', 'hello ' + username) Starting the server is done by a Django management custom method: class Command(BaseCommand): args = '' help = 'Starts the TornadIO2 server for handling socket.io connections' def handle(self, *args, **kwargs): autoreload.main(self.run, args, kwargs) def run(self, *args, **kwargs): port = settings.SOCKETIO_PORT router = tornadio2.TornadioRouter(BaseConnection) application = tornado.web.Application( router.urls, socket_io_port = port ) print 'Starting socket.io server on port %s' % port server = SocketServer(application) Very well, the server runs now. Let's add the client code: <script type="text/javascript"> var sio = io.connect('localhost:9000'); sio.on('connect', function(data) { console.log('connected'); sio.emit('connect', '{{ user.username }}'); }); sio.on('log', function(data) { console.log("log: " + data); }); </script> Obviously, {{ user.username }} will be replaced by the username of the currently logged in user, in this example the username is "alp". Now, every time the page gets refreshed, the console output is: connected log: hello alp Therefore, invoking messages and sending responses works. But now comes the tricky part. Problems The response "hello alp" is sent only to the invoker of the socket.io message. I want to broadcast a message to all connected clients, so that they can be informed in realtime if a new user joins the party (for example in a chat application). So, here are my questions: How can i send a broadcast message to all connected clients? How can i send a broadcast message to multiple connected clients that are subscribed on a specific channel? How can i send a broadcast message anywhere in my python code (outside of the BaseConnection class)? Would this require some sort of Socket.IO client for python or is this builtin with TornadIO2? All these broadcasts should be done in a reliable way, so i guess websockets are the best choice. But i am open to all good solutions.
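
    For the broadcast questions, one common pattern with TornadIO2 is to keep a class-level set of open connections and emit to each of them; because the set lives on the connection class, other server-side code running in the same Tornado process can reach it too. The sketch below is an illustration rather than code from the post; the participants set and the "joined" message are assumptions, and per-channel broadcasts would follow the same idea with one set per channel name.

        # Sketch: track open connections on the class and loop over them to broadcast.
        class BaseConnection(tornadio2.SocketConnection):
            participants = set()

            def on_open(self, info):
                self.participants.add(self)

            def on_close(self):
                self.participants.discard(self)

            @event
            def connect(self, username):
                self.emit('log', 'hello ' + username)
                # broadcast to every connected client
                for conn in self.participants:
                    conn.emit('log', username + ' joined')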

    Read the article

  • Connecting client (on VirtualBox) and server (on localhost) using CORBA - org.omg.CORBA.BAD_PARAM:

    - by yak
    I'm working on a simple GUI application in Java/C++ with CORBA. I want my client on VirtualBox to connect to a server on localhost. With a simple app, like the calculator I wrote about earlier, it works just fine. But when it comes to running a client that needs some args with Java's -cp option, I'm getting errors. (There's no such problem when I have both client and server on localhost!) My errors: WARNING: "IOP00100007: (BAD_PARAM) string_to_object conversion failed due to bad scheme name" org.omg.CORBA.BAD_PARAM: vmcid: OMG minor code: 7 completed: No at com.sun.corba.se.impl.logging.OMGSystemException.soBadSchemeName(Unknown Source) at com.sun.corba.se.impl.logging.OMGSystemException.soBadSchemeName(Unknown Source) at com.sun.corba.se.impl.resolver.INSURLOperationImpl.operate(Unknown Source) at com.sun.corba.se.impl.resolver.ORBInitRefResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.resolver.CompositeResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.resolver.CompositeResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.orb.ORBImpl.resolve_initial_references(Unknown Source) at ClientConnection.connect(ClientConnection.java:57) at Client.main(Client.java:295) Exception in thread "main" org.omg.CORBA.BAD_PARAM: vmcid: OMG minor code: 7 completed: No at com.sun.corba.se.impl.logging.OMGSystemException.soBadSchemeName(Unknown Source) at com.sun.corba.se.impl.logging.OMGSystemException.soBadSchemeName(Unknown Source) at com.sun.corba.se.impl.resolver.INSURLOperationImpl.operate(Unknown Source) at com.sun.corba.se.impl.resolver.ORBInitRefResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.resolver.CompositeResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.resolver.CompositeResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.orb.ORBImpl.resolve_initial_references(Unknown Source) at ClientConnection.connect(ClientConnection.java:57) at Client.main(Client.java:295) make[1]: *** [run] Error 1 ClientConnection.java:57 is the line objRef = clientORB.resolve_initial_references("NameService"); and Client.java:295 is the line ClientConnection.connect(args); The connect method is just ordinary client-connection CORBA code. I ran my example: 1) C:\Temp\Client>java -cp .:../Dir1:../Dir2 Client -ORBInitRef NameService=corbaloc::192.168.56.1:2809/NameService Error: Could not find or load main class Client so it didn't even run at all. 2) With the help of a Makefile: HOST = 192.168.56.1 PORT = 2809 NAMESERVICE = NameService run: java -cp .:../Dir1:../Dir2 Client -ORBInitRef NameService=corbaloc::$(HOST):$(PORT)/$(NAMESERVICE) By typing make run I then get the errors I posted earlier. What's wrong? The simple code works fine but the GUI version doesn't want to cooperate... is there a problem with the -cp option? I can't change my app's directory tree.
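
    One way to take the -ORBInitRef string (and its shell/Makefile quoting) out of the picture is to point the Sun ORB at the name service host and port programmatically. The sketch below is an illustration using the addresses from the question, not the poster's ClientConnection code.

        // Sketch: resolve the NameService by setting the ORB's initial host/port properties.
        import java.util.Properties;
        import org.omg.CORBA.ORB;

        public class ClientConnectionSketch {
            public static org.omg.CORBA.Object resolveNameService(String[] args) throws Exception {
                Properties props = new Properties();
                props.setProperty("org.omg.CORBA.ORBInitialHost", "192.168.56.1");
                props.setProperty("org.omg.CORBA.ORBInitialPort", "2809");
                ORB orb = ORB.init(args, props);
                return orb.resolve_initial_references("NameService");
            }
        }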

    Read the article

  • Sockets, Threads and Services in android, how to make them work together ?

    - by Spredzy
    Hi all, I am facing a problem with threads and sockets that I can't figure out; if someone can help me I would really appreciate it. Here are the facts: I have a service class NetworkService, and inside this class I have a Socket attribute. I would like it to stay connected for the whole lifecycle of the service. I connect the socket in a thread, so that if the server times out it does not block my UI thread. The problem is that inside the thread where I connect my socket everything is fine: it is connected and I can talk to my server. But once that thread is over and I try to reuse the socket in another thread, I get the error message "Socket is not connected". My questions are: Is the socket automatically disconnected at the end of the thread? Is there any way to pass a value back from a called thread to the caller? Thanks a lot. Here is my code: public class NetworkService extends Service { private Socket mSocket = new Socket(); private void _connectSocket(String addr, int port) { Runnable connect = new connectSocket(addr, port); new Thread(connect).start(); } private void _authentification() { Runnable auth = new authentification(); new Thread(auth).start(); } private INetwork.Stub mBinder = new INetwork.Stub() { @Override public int doConnect(String addr, int port) throws RemoteException { _connectSocket(addr, port); _authentification(); return 0; } }; class connectSocket implements Runnable { String addrSocket; int portSocket; int TIMEOUT=5000; public connectSocket(String addr, int port) { addrSocket = addr; portSocket = port; } @Override public void run() { SocketAddress socketAddress = new InetSocketAddress(addrSocket, portSocket); try { mSocket.connect(socketAddress, TIMEOUT); PrintWriter out = new PrintWriter(mSocket.getOutputStream(), true); out.println("test42"); Log.i("connectSocket()", "Connection Successful"); } catch (IOException e) { Log.e("connectSocket()", e.getMessage()); e.printStackTrace(); } } } class authentification implements Runnable { private String constructFirstConnectQuery() { String query = "toto"; return query; } @Override public void run() { BufferedReader in; PrintWriter out; String line = ""; try { in = new BufferedReader(new InputStreamReader(mSocket.getInputStream())); out = new PrintWriter(mSocket.getOutputStream(), true); out.println(constructFirstConnectQuery()); while (mSocket.isConnected()) { line = in.readLine(); Log.e("LINE", "[Current]- " + line); } } catch (IOException e) {e.printStackTrace();} } }
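
    The socket is not closed just because the thread that connected it ended, but note that in doConnect() the authentification runnable starts immediately after _connectSocket() returns, that is, usually before connect() has completed on the other thread, which by itself would produce "Socket is not connected". A hedged sketch of one simple arrangement is to run both steps in order on a single background thread inside NetworkService and report back through a Handler; the Handler usage and log tags below are assumptions, not the original code.

        // Inside NetworkService: connect and authenticate sequentially on one background thread.
        private final Handler mHandler = new Handler();  // created on the service's main thread

        private void _connectAndAuthenticate(final String addr, final int port) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        mSocket.connect(new InetSocketAddress(addr, port), 5000);
                        mHandler.post(new Runnable() {
                            @Override
                            public void run() {
                                Log.i("NetworkService", "socket connected");  // back on the main thread
                            }
                        });
                        // connect() has returned, so it is safe to run the auth exchange here
                        new authentification().run();
                    } catch (IOException e) {
                        Log.e("NetworkService", "connection failed", e);
                    }
                }
            }).start();
        }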

    Read the article

  • Custom dynamic error pages in Ruby on Rails not working

    - by PlanetMaster
    Hi, I'm trying to implement custom dynamic error pages following this post: http://www.perfectline.co.uk/blog/custom-dynamic-error-pages-in-ruby-on-rails I did exactly what the blog post says. I included config.action_controller.consider_all_requests_local = false in my environment.rb. But is not working. My browser shows: Routing Error No route matches "/555" with {:method=>:get} So, it looks like the rescues are not fired. I get the following in my log file: ActionController::RoutingError (No route matches "/555" with {:method=>:get}): Rendering rescues/layout (not_found) Is there some routing interfering with the code? I'm not sure what to look for. I'm running rails 2.3.5. Here is the routes.rb file: ActionController::Routing::Routes.draw do |map| # routing van property-url map.connect 'buy/:property_type_plural/:province/:city/:address/:house_number', :controller => 'properties' , :action => 'show', :id => 'whatever' map.myimmonatie 'myimmonatie' , :controller => 'myimmonatie/properties', :action => 'index' map.login "login", :controller => "user_sessions", :action => "create", :conditions => {:method => :post} map.login "login", :controller => "user_sessions", :action => "new" map.logout "logout", :controller => "user_sessions", :action => "destroy" map.buy "buy", :controller => 'buy' map.sell "sell", :controller => 'sell' map.home "home", :controller => 'home' map.disclaimer "disclaimer", :controller => 'disclaimer' map.sign_up "sign_up", :controller => 'users', :action => :new map.contact "contact", :controller => 'contact' map.resources :user_sessions map.resources :contact map.resources :password_resets map.resources :messages map.resources :users, :only => [:index,:new,:create,:activate,:edit,:profile,:password] map.resources :images map.resources :activation , :only => [:new,:resend] map.resources :email map.resources :properties, :except => [:index,:destroy] map.namespace :admin do |admin| admin.resources :users admin.resources :properties admin.resources :order_items, :as => :orders admin.resources :blog_posts, :as => :blog end map.connect 'myimmonatie/:action' , :controller => 'users', :id => 'current', :requirements => {:action => /(profile)|(password)|(email)/} map.namespace :myimmonatie do |myimmonatie| myimmonatie.resources :messages, :controller => 'messages' myimmonatie.resources :password, :as => "password", :controller => 'users', :action => 'password' myimmonatie.resources :properties , :controller => 'properties' myimmonatie.resources :orders , :only => [:index,:show,:create,:new] end map.root :controller => "home" map.connect ':controller/:action' map.connect ':controller/:action/:id' map.connect ':controller/:action/:id.:format' end ActionController::Routing::Translator.translate_from_file('config','i18n-routes.yml')
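
    The routing error is raised before any controller gets a chance to rescue it, so one common Rails 2.3 approach is a catch-all route at the very bottom of routes.rb that sends unmatched requests to a controller which renders the custom 404. The sketch below is not from the linked blog post; the ErrorsController name, action and template path are illustrative.

        # At the very bottom of routes.rb, after all existing routes:
        ActionController::Routing::Routes.draw do |map|
          # ... all existing routes from the question stay above ...
          map.connect '*path', :controller => 'errors', :action => 'routing'
        end

        # app/controllers/errors_controller.rb
        class ErrorsController < ApplicationController
          def routing
            render :template => 'errors/404', :status => 404
          end
        end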

    Read the article

  • How to refresh the textbox text when tabs are Changed in WPF

    - by StonedJesus
    Well in my WPF application I am using Tab Control which has around 5 tabs. The view of each tab is a user control which I add via a tool box. Main Xaml File: <Grid> <TabControl Height="Auto" HorizontalAlignment="Stretch" Margin="0" Name="tabControl1" VerticalAlignment="Stretch" Width="Auto"> <TabItem Header="Device Control" Name="Connect"> <ScrollViewer Height="Auto" Name="scrollViewer1" Width="Auto"> <my:ConnectView Name="connectView1" /> </ScrollViewer> </TabItem> <TabItem Header="I2C"> <ScrollViewer Height="Auto" Name="scrollViewer2" Width="Auto"> <my1:I2CControlView Name="i2CControlView1" /> </ScrollViewer> </TabItem> <TabItem Header="Voltage"> <ScrollViewer Height="Auto" Name="scrollViewer3" Width="Auto"> <my2:VoltageView Name="voltageView1" /> </ScrollViewer> </TabItem> </TabControl> </Grid> If you notice each view ie.e Connect, I2C and Voltage is a user control which has a view, viewmodel and model class :) Each of these views have set of textboxes in their respective xaml files. Connect.xaml: <Grid> <Textbox Text="{Binding Box}", Name="hello" /> // Some more textboxes </Grid> I2c.xaml: <Grid> <Textbox Text="{Binding I2CBox}", Name="helI2c" /> // Some more textboxes </Grid> Voltage.xaml: <Grid> <Textbox Text="{Binding VoltBox}", Name="heVoltllo" /> // Some more textboxes </Grid>** By default I have set the text of these textboxes to some value. Lets say "12" "13" "14" respectively in my view model classes. My main requirement is to set the text of these textboxes present in each user control to get refreshed when I change the tab. Description: Lets say Connect View is displayed: Value of Textbox is 12 and I edit it and change it to 16. Now I click on I2C tab and then I go back to Connect tab, I want the textbox value to get refreshed back to the initial value i.e. 12. To be precise, is their a method called visibilitychanged() which I can write in all my user control classes, where I can set the value of these Ui components whenever tabs are changed? Please help :)
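
    One way to get the reset-on-tab-change behaviour is to handle SelectionChanged on tabControl1 and have each tab's view model expose a reset method. The C# below is a hedged sketch: IResettable, Reset() and the default values are illustrative names, and it assumes each view's DataContext is its view model and that the bound properties raise PropertyChanged so the TextBoxes refresh.

        // Illustrative interface each tab's view model could implement.
        public interface IResettable
        {
            void Reset();   // restore defaults such as "12", "13", "14"
        }

        // In the window's code-behind, with SelectionChanged="tabControl1_SelectionChanged" on the TabControl.
        private void tabControl1_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            if (!(e.Source is TabControl)) return;   // ignore bubbled events from inner controls

            var tab = tabControl1.SelectedItem as TabItem;
            if (tab == null) return;
            var scroller = tab.Content as ScrollViewer;
            var view = scroller == null ? null : scroller.Content as FrameworkElement;
            var vm = view == null ? null : view.DataContext as IResettable;
            if (vm != null) vm.Reset();
        }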

    Read the article

  • Diving into OpenStack Network Architecture - Part 1

    - by Ronen Kofman
    Before we begin: OpenStack networking has very powerful capabilities but at the same time it is quite complicated. In this blog series we will review an existing OpenStack setup using the Oracle OpenStack Tech Preview and explain the different network components through use cases and examples. The goal is to show how the different pieces come together and provide a bigger-picture view of the network architecture in OpenStack. This can be very helpful to users making their first steps in OpenStack or to anyone who wishes to understand how networking works in this environment. We will go through the basics first and build up the examples as we go. According to the recent Icehouse user survey and the one before it, Neutron with the Open vSwitch plug-in is the most widely used network setup both in production and in POCs (in terms of number of customers), and so in this blog series we will analyze this specific OpenStack networking setup. As we know, there are many options for setting up OpenStack networking, and while Neutron + Open vSwitch is the most popular setup there is no claim that it is either the best or the most efficient option. Neutron + Open vSwitch is an example, one which provides a good starting point for anyone interested in understanding OpenStack networking. Even if you are using a different kind of network setup, such as a different Neutron plug-in or no Neutron at all, this will still be a good starting point for understanding the network architecture in OpenStack. The setup we are using for the examples is the one used in the Oracle OpenStack Tech Preview. Installing it is simple and it would be helpful to have it as a reference. In this setup we use eth2 on all servers for the VM network; all VM traffic will be flowing through this interface. The Oracle OpenStack Tech Preview is using VLANs for L2 isolation to provide tenant and network isolation. The following diagram shows how we have configured our deployment: This first post is a bit long and will focus on some basic concepts in OpenStack networking. The components we will be discussing are Open vSwitch, network namespaces, Linux bridge and veth pairs. Note that this is not meant to be a comprehensive review of these components; it is meant to describe each component only as much as needed to understand the OpenStack network architecture. All the components described here can be further explored using other resources.
Open vSwitch (OVS) In the Oracle OpenStack Tech Preview OVS is used to connect virtual machines to the physical port (in our case eth2) as shown in the deployment diagram. OVS contains bridges and ports, the OVS bridges are different from the Linux bridge (controlled by the brctl command) which are also used in this setup. To get started let’s view the OVS structure, use the following command: # ovs-vsctl show 7ec51567-ab42-49e8-906d-b854309c9edf     Bridge br-int         Port br-int             Interface br-int type: internal         Port "int-br-eth2"             Interface "int-br-eth2"     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2" ovs_version: "1.11.0" We see a standard post deployment OVS on a compute node with two bridges and several ports hanging off of each of them. The example above is a compute node without any VMs, we can see that the physical port eth2 is connected to a bridge called “br-eth2”. We also see two ports "int-br-eth2" and "phy-br-eth2" which are actually a veth pair and form virtual wire between the two bridges, veth pairs are discussed later in this post. When a virtual machine is created a port is created on one the br-int bridge and this port is eventually connected to the virtual machine (we will discuss the exact connectivity later in the series). Here is how OVS looks after a VM was launched: # ovs-vsctl show efd98c87-dc62-422d-8f73-a68c2a14e73d     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port br-int             Interface br-int type: internal         Port "qvocb64ea96-9f" tag: 1             Interface "qvocb64ea96-9f"     Bridge "br-eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"         Port "br-eth2"             Interface "br-eth2" type: internal         Port "eth2"             Interface "eth2" ovs_version: "1.11.0" Bridge "br-int" now has a new port "qvocb64ea96-9f" which connects to the VM and tagged with VLAN 1. Every VM which will be launched will add a port on the “br-int” bridge for every network interface the VM has. Another useful command on OVS is dump-flows for example: # ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=735.544s, table=0, n_packets=70, n_bytes=9976, idle_age=17, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL cookie=0x0, duration=76679.786s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop cookie=0x0, duration=76681.36s, table=0, n_packets=68, n_bytes=7950, idle_age=17, hard_age=65534, priority=1 actions=NORMAL As we see the port which is connected to the VM has the VLAN tag 1. However the port on the VM network (eth2) will be using tag 1000. OVS is modifying the vlan as the packet flow from the VM to the physical interface. In OpenStack the Open vSwitch agent takes care of programming the flows in Open vSwitch so the users do not have to deal with this at all. If you wish to learn more about how to program the Open vSwitch you can read more about it at http://openvswitch.org looking at the documentation describing the ovs-ofctl command. Network Namespaces (netns) Network namespaces is a very cool Linux feature can be used for many purposes and is heavily used in OpenStack networking. Network namespaces are isolated containers which can hold a network configuration and is not seen from outside of the namespace. 
A network namespace can be used to encapsulate specific network functionality or provide a network service in isolation as well as simply help to organize a complicated network setup. Using the Oracle OpenStack Tech Preview we are using the latest Unbreakable Enterprise Kernel R3 (UEK3), this kernel provides a complete support for netns. Let's see how namespaces work through couple of examples to control network namespaces we use the ip netns command: Defining a new namespace: # ip netns add my-ns # ip netns list my-ns As mentioned the namespace is an isolated container, we can perform all the normal actions in the namespace context using the exec command for example running the ifconfig command: # ip netns exec my-ns ifconfig -a lo        Link encap:Local Loopback           LOOPBACK  MTU:16436 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) We can run every command in the namespace context, this is especially useful for debug using tcpdump command, we can ping or ssh or define iptables all within the namespace. Connecting the namespace to the outside world: There are various ways to connect into a namespaces and between namespaces we will focus on how this is done in OpenStack. OpenStack uses a combination of Open vSwitch and network namespaces. OVS defines the interfaces and then we can add those interfaces to namespace. So first let's add a bridge to OVS: # ovs-vsctl add-br my-bridge Now let's add a port on the OVS and make it internal: # ovs-vsctl add-port my-bridge my-port # ovs-vsctl set Interface my-port type=internal And let's connect it into the namespace: # ip link set my-port netns my-ns Looking inside the namespace: # ip netns exec my-ns ifconfig -a lo        Link encap:Local Loopback           LOOPBACK  MTU:65536 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) my-port   Link encap:Ethernet HWaddr 22:04:45:E2:85:21           BROADCAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) Now we can add more ports to the OVS bridge and connect it to other namespaces or other device like physical interfaces. Neutron is using network namespaces to implement network services such as DCHP, routing, gateway, firewall, load balance and more. In the next post we will go into this in further details. Linux Bridge and veth pairs Linux bridge is used to connect the port from OVS to the VM. Every port goes from the OVS bridge to a Linux bridge and from there to the VM. The reason for using regular Linux bridges is for security groups’ enforcement. Security groups are implemented using iptables and iptables can only be applied to Linux bridges and not to OVS bridges. Veth pairs are used extensively throughout the network setup in OpenStack and are also a good tool to debug a network problem. Veth pairs are simply a virtual wire and so veths always come in pairs. Typically one side of the veth pair will connect to a bridge and the other side to another bridge or simply left as a usable interface. In this example we will create some veth pairs, connect them to bridges and test connectivity. 
This example is using regular Linux server and not an OpenStack node: Creating a veth pair, note that we define names for both ends: # ip link add veth0 type veth peer name veth1 # ifconfig -a . . veth0     Link encap:Ethernet HWaddr 5E:2C:E6:03:D0:17           BROADCAST MULTICAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) veth1     Link encap:Ethernet HWaddr E6:B6:E2:6D:42:B8           BROADCAST MULTICAST  MTU:1500 Metric:1           RX packets:0 errors:0 dropped:0 overruns:0 frame:0           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b) . . To make the example more meaningful this we will create the following setup: veth0 => veth1 => br-eth3 => eth3 ======> eth2 on another Linux server br-eth3 – a regular Linux bridge which will be connected to veth1 and eth3 eth3 – a physical interface with no IP on it, connected to a private network eth2 – a physical interface on the remote Linux box connected to the private network and configured with the IP of 50.50.50.1 Once we create the setup we will ping 50.50.50.1 (the remote IP) through veth0 to test that the connection is up: # brctl addbr br-eth3 # brctl addif br-eth3 eth3 # brctl addif br-eth3 veth1 # brctl show bridge name     bridge id               STP enabled     interfaces br-eth3         8000.00505682e7f6       no              eth3                                                         veth1 # ifconfig veth0 50.50.50.50 # ping -I veth0 50.50.50.51 PING 50.50.50.51 (50.50.50.51) from 50.50.50.50 veth0: 56(84) bytes of data. 64 bytes from 50.50.50.51: icmp_seq=1 ttl=64 time=0.454 ms 64 bytes from 50.50.50.51: icmp_seq=2 ttl=64 time=0.298 ms When the naming is not as obvious as the previous example and we don't know who are the paired veth interfaces we can use the ethtool command to figure this out. The ethtool command returns an index we can look up using ip link command, for example: # ethtool -S veth1 NIC statistics: peer_ifindex: 12 # ip link . . 12: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 Summary That’s all for now, we quickly reviewed OVS, network namespaces, Linux bridges and veth pairs. These components are heavily used in the OpenStack network architecture we are exploring and understanding them well will be very useful when reviewing the different use cases. In the next post we will look at how the OpenStack network is laid out connecting the virtual machines to each other and to the external world. @RonenKofman

    Read the article

  • ubuntu wifi disconnection & frustratingly connects to unavailable wifi

    - by ashishsony
    Hi, I have already posted this here: here. This also happened with the Ubuntu 9.10 Beta 2 build: my wifi disconnects if I am idle for 5 minutes, so I can't leave my laptop to download anything; I have to keep using it continuously. As soon as I leave it idle for about 5 minutes the wifi disconnects and the pop-up asking for the wifi password appears, with the password already filled in. I just click Connect and it connects again. So what is the use of asking for the password if the pre-filled password works correctly? This is happening on Ubuntu 10.04 Beta 2 too. The workaround is to open any menu, like the Applications menu in the taskbar, and keep it open; in that state Ubuntu's idle detection never kicks in and so the wifi never disconnects. I have confirmed this many times. Secondly, there is no clear way to report this bug to Ubuntu: launchpad.net talks about going through a bug-reporting process that is done against a specific package, but how is a user supposed to know which package is causing this error? There should be a clearer process for reporting such bugs to the Ubuntu team. Thirdly, the apport utility that reports crashing apps is useless on 10.04 Beta 2: it collects information and then reports that I cannot submit the report because I don't have 100 other packages updated. On a beta build packages are continuously being updated, so no system would ever count as fully updated, and so no practical apport reporting is possible. Please address these issues; all of this is really frustrating, and I am a big fan of Ubuntu, but these things really bug me. Fourthly, the suspend/hibernate feature has never worked on my Toshiba M70-113 laptop on any Ubuntu version; I always have to hard reboot after putting it into suspend or hibernate. On Windows this has never been the case, and I would really like to see Ubuntu beat Windows here too. Most importantly, when the router switches off and the wifi signal goes away, why does Ubuntu keep trying to connect to that very wifi, and when it cannot connect, show the prompt to connect manually with the wifi key already filled in? What is the use of saving the key if it still has to ask me whether to connect or not? If the network isn't available, just wait until it is available; my only option is to cancel, and if I cancel it won't auto-connect! As one can see in the image, it says "authentication required by wireless network" when there isn't any such network, as the router has gone down.

    Read the article

  • Windows Server 2003 RDP not listening on IPv6

    - by Ian Boyd
    i have a Windows Server 2003 machine; with IPv6 enabled: Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : newland.local IP Address. . . . . . . . . . . . : 192.168.1.244 Subnet Mask . . . . . . . . . . . : 255.255.0.0 IP Address. . . . . . . . . . . . : 2001:470:¦¦¦¦:¦¦¦¦:¦¦¦:¦¦¦¦:¦¦¦¦:¦¦¦¦ IP Address. . . . . . . . . . . . : fe80::224:1dff:fe86:fdf2%4 Default Gateway . . . . . . . . . : 192.168.1.1 fe80::250:bfff:fe91:955f%4 i can connect to IIS server using IPv, remotely and locally, using IPv4 and IPv6: > telnet 127.0.0.1 80 (connects) > telnet 192.168.1.244 80 (connects) > telnet ::1 80 (connects) > telnet fe80::224:1dff:fe86:fdf2 80 (connects) And TCPView shows that the server is listening on port 80: Note: This is useful to establish that Windows Server 2003 does have support for IPv6 services. And i can connect to terminal services, locally and remotely, using IPv4: > telnet 127.0.0.1 3389 (connects) > telnet 192.168.1.244 3389 (connects) But i cannot connect to RDP, locally or remotely, over IPv6: > telnet ::1 3389 (fails to connect) > telnet fe80::224:1dff:fe86:fdf2 3389 (fails to connect) We can see that the system is listening on 3389: Except why won't it listen on port 3389 (ipv6)? Unfortunately it's not the firewall. Aside from the fact that i'm connecting locally (in which case the firewall doesn't apply), the firewall doesn't apply:

    Read the article

  • Bind a Wijmo Grid to Salesforce.com Through the Salesforce OData Connector

    - by dataintegration
    This article will explain how to connect to any RSSBus OData Connector with Wijmo's data grid using JSONP. While the example will use the Salesforce Connector, the same process can be followed for any of the RSSBus OData Connectors. Step 1: Download and install both the Salesforce Connector from RSSBus and the Wijmo javascript library. Step 2: Next you will want to configure the Salesforce Connector to connect with your Salesforce account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide which will walk you through setting up the Salesforce Connector. Step 3: Once you have successfully configured the Salesforce Connector application, you will want to open a Wijmo sample grid file to edit. This example will use the overview.html grid found in the Samples folder. Step 4: First, we will wrap the jQuery document ready function in a callback function for the JSONP service. In this example, we will wrap this in function called fnCallback which will take a single object args. <script id="scriptInit" type="text/javascript"> function fnCallback(args) { $(document).ready(function () { $("#demo").wijgrid({ ... }); }); }; </script> Step 5: Next, we need to format the columns object in a format that Wijmo's data grid expects. This is done by adding the headerText: element for each column. <script id="scriptInit" type="text/javascript"> function fnCallback(args) { var columns = []; for (var i = 0; i < args.columnnames.length; i++){ var col = { headerText: args.columnnames[i]}; columns.push(col); } $(document).ready(function () { $("#demo").wijgrid({ ... }); }); }; </script> Step 6: Now the wijgrid parameters are ready to be set. In this example, we will set the data input parameter to the args.data object and the columns input parameter to our newly created columns object. The resulting javascript function should look like this: <script id="scriptInit" type="text/javascript"> function fnCallback(args) { var columns = []; for (var i = 0; i < args.columnnames.length; i++){ var col = { headerText: args.columnnames[i]}; columns.push(col); } $(document).ready(function () { $("#demo").wijgrid({ allowSorting: true, allowPaging: true, pageSize: 10, data: args.data, columns: columns }); }); }; </script> Step 7: Finally, we need to add the JSONP reference to our Salesforce Connector's data table. You can find this by clicking on the Settings tab of the Salesforce Connector. Once you have found the JSONP URL, you will need to supply a valid table name that you want to connect with Wijmo. In this example, we will connect to the Lead table. You will also need to add authentication options in this step. In the example we will append the authtoken of the user who has access to the Salesforce Connector using the @authtoken query string parameter. IMPORTANT: This is not secure and will expose the authtoken of the user whose authtoken you supply in this step. There are other ways to secure the user's authtoken, but this example uses a query string parameter for simplicity. <script src="http://localhost:8181/sfconnector/data/conn/Lead.rsd?@jsonp=fnCallback&sql:query=SELECT%20*%20FROM%20Lead&@authtoken=<myAuthToken>" type="text/javascript"></script> Step 8: Now, we are done. If you point your browser to the URL of the sample, you should see your Salesforce.com leads in a Wijmo data grid.

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using an SSD or a SAN / NAS based storage solution and sporadically observe SQL Server experiencing high IO wait times, or does your DAS / HDD from time to time become very slow according to SQL Server statistics? Read on… I need your help to up vote my connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries can take minutes or hours to complete when the CPU is busy. In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk (like transaction log records), the request / task will queue up the IO operation and wait for it to complete (task in suspended state; this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete, get suspended due to waits, or exhaust their quantum of 4ms (this is the signal wait time, which along with the resource wait time increases the overall wait time). When the CPU becomes free, the task will finally be run on the CPU (task in running state). The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then this query, after the resource becomes available, might wait up to a maximum of 5 x 4ms = 20ms in the runnable state (normally less, as other tasks might not use their full quantum). When CPU usage is high, say many CPU intensive queries are running on the instance, there is a possibility that IO operations that have completed at the hardware and operating system level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. In the case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server hundreds of milliseconds, for instance, to process the completed IO operation. For example, let's say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or a battery backed up controller that has write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition, you will notice a large number of WRITELOG waits, since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms, leading to more query delays. However, the Performance Monitor counter PhysicalDisk, Avg. Disk sec/Write will report very low latency times. Such delayed IO handling also occurs for read operations, with artificially very high PAGEIOLATCH_SH wait time (while the number of PAGEIOLATCH_SH waits remains the same). This problem will manifest more and more as customers start using SSD based storage for SQL Server, since they drive CPU usage to the limit with faster IOs. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.
We have a connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage – (with example scripts) to reproduce this behavior; please up vote the item so the issue will be addressed by the SQL Server product team soon. Thanks for your help and best regards, Ramesh Meyyappan. Home: www.sqlworkshops.com | LinkedIn: http://at.linkedin.com/in/rmeyyappan
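
    As a side note, not taken from the article: if you want to see how much of these wait types is signal wait (time spent waiting for CPU after the resource became available) versus resource wait, you can read sys.dm_os_wait_stats yourself. Below is a rough Python sketch using pyodbc; the driver name and connection string are assumptions, so adapt them to your environment:

        import pyodbc

        # Connection string is an assumption; adjust server, driver and authentication for your setup.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes;"
        )

        sql = """
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms,
               wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH')
        ORDER BY wait_time_ms DESC;
        """

        # Each row reports total waits and how much of the wait was spent queued for CPU.
        for row in conn.cursor().execute(sql):
            print(row.wait_type, row.waiting_tasks_count, row.wait_time_ms,
                  row.signal_wait_time_ms, row.resource_wait_time_ms)

    A signal_wait_time_ms that is large relative to the resource portion of WRITELOG or PAGEIOLATCH_SH is consistent with the CPU-pressure scenario described above.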

    Read the article

  • Why does my Messaging Menu code not work when split into functions?

    - by fluteflute
    Below are two python programs. They're exactly the same, except for one is split into two functions. However only the one that's split into two functions doesn't work - the second function doesn't work. Why would this be? Note the code is taken from this useful blog post. Without functions (works): import gtk def show_window_function(x, y): print x print y # get the indicate module, which does all the work import indicate # Create a server item mm = indicate.indicate_server_ref_default() # If someone clicks your server item in the MM, fire the server-display signal mm.connect("server-display", show_window_function) # Set the type of messages that your item uses. It's not at all clear which types # you're allowed to use, here. mm.set_type("message.im") # You must specify a .desktop file: this is where the MM gets the name of your # app from. mm.set_desktop_file("/usr/share/applications/nautilus.desktop") # Show the item in the MM. mm.show() # Create a source item mm_source = indicate.Indicator() # Again, it's not clear which subtypes you are allowed to use here. mm_source.set_property("subtype", "im") # "Sender" is the text that appears in the source item in the MM mm_source.set_property("sender", "Unread") # If someone clicks this source item in the MM, fire the user-display signal mm_source.connect("user-display", show_window_function) # Light up the messaging menu so that people know something has changed mm_source.set_property("draw-attention", "true") # Set the count of messages in this source. mm_source.set_property("count", "15") # If you prefer, you can set the time of the last message from this source, # rather than the count. (You can't set both.) This means that instead of a # message count, the MM will show "2m" or similar for the time since this # message arrived. # mm_source.set_property_time("time", time.time()) mm_source.show() gtk.mainloop() With functions (second function is executed but doesn't actually work): import gtk def show_window_function(x, y): print x print y # get the indicate module, which does all the work import indicate def function1(): # Create a server item mm = indicate.indicate_server_ref_default() # If someone clicks your server item in the MM, fire the server-display signal mm.connect("server-display", show_window_function) # Set the type of messages that your item uses. It's not at all clear which types # you're allowed to use, here. mm.set_type("message.im") # You must specify a .desktop file: this is where the MM gets the name of your # app from. mm.set_desktop_file("/usr/share/applications/nautilus.desktop") # Show the item in the MM. mm.show() def function2(): # Create a source item mm_source = indicate.Indicator() # Again, it's not clear which subtypes you are allowed to use here. mm_source.set_property("subtype", "im") # "Sender" is the text that appears in the source item in the MM mm_source.set_property("sender", "Unread") # If someone clicks this source item in the MM, fire the user-display signal mm_source.connect("user-display", show_window_function) # Light up the messaging menu so that people know something has changed mm_source.set_property("draw-attention", "true") # Set the count of messages in this source. mm_source.set_property("count", "15") # If you prefer, you can set the time of the last message from this source, # rather than the count. (You can't set both.) This means that instead of a # message count, the MM will show "2m" or similar for the time since this # message arrived. 
# mm_source.set_property_time("time", time.time()) mm_source.show() function1() function2() gtk.mainloop()
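
    One thing worth checking, purely as an assumption on my part and not something stated in the question: in the split version, mm and mm_source are local variables, so nothing holds a reference to them once function1() and function2() return, while in the single-script version they stay alive until gtk.mainloop(). A minimal sketch that keeps module-level references (the same calls as above, just returning the objects) would look like this:

        import gtk
        import indicate

        def show_window_function(x, y):
            print(x)
            print(y)

        def create_server():
            # Build the server item and return it so the caller keeps a reference.
            mm = indicate.indicate_server_ref_default()
            mm.connect("server-display", show_window_function)
            mm.set_type("message.im")
            mm.set_desktop_file("/usr/share/applications/nautilus.desktop")
            mm.show()
            return mm

        def create_source():
            # Build the source item and return it for the same reason.
            mm_source = indicate.Indicator()
            mm_source.set_property("subtype", "im")
            mm_source.set_property("sender", "Unread")
            mm_source.connect("user-display", show_window_function)
            mm_source.set_property("draw-attention", "true")
            mm_source.set_property("count", "15")
            mm_source.show()
            return mm_source

        # Keeping these references at module level prevents the objects from being collected.
        server = create_server()
        source = create_source()
        gtk.mainloop()

    If the split version starts behaving once the references are kept, garbage collection was the culprit; if not, this guess can be ruled out.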

    Read the article

  • SSH / SFTP connection issue using Tamir.SharpSsh

    - by jinsungy
    This is my code to connect and send a file to a remote SFTP server. public static void SendDocument(string fileName, string host, string remoteFile, string user, string password) { Scp scp = new Scp(); scp.OnConnecting += new FileTansferEvent(scp_OnConnecting); scp.OnStart += new FileTansferEvent(scp_OnProgress); scp.OnEnd += new FileTansferEvent(scp_OnEnd); scp.OnProgress += new FileTansferEvent(scp_OnProgress); try { scp.To(fileName, host, remoteFile, user, password); } catch (Exception e) { throw e; } } I can successfully connect, send and receive files using CoreFTP. Thus, the issue is not with the server. When I run the above code, the process seems to stop at the scp.To method. It just hangs indefinitely. Anyone know what might my problem be? Maybe it has something to do with adding the key to the a SSH Cache? If so, how would I go about this? EDIT: I inspected the packets using wireshark and discovered that my computer is not executing the Diffie-Hellman Key Exchange Init. This must be the issue. EDIT: I ended up using the following code. Note, the StrictHostKeyChecking was turned off to make things easier. JSch jsch = new JSch(); jsch.setKnownHosts(host); Session session = jsch.getSession(user, host, 22); session.setPassword(password); System.Collections.Hashtable hashConfig = new System.Collections.Hashtable(); hashConfig.Add("StrictHostKeyChecking", "no"); session.setConfig(hashConfig); try { session.connect(); Channel channel = session.openChannel("sftp"); channel.connect(); ChannelSftp c = (ChannelSftp)channel; c.put(fileName, remoteFile); c.exit(); } catch (Exception e) { throw e; } Thanks.
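
    For comparison only, and not part of the original post: the same connect-and-put flow written in Python with the paramiko library, skipping host-key verification much like the StrictHostKeyChecking workaround above, might look roughly like this (host, paths and credentials are placeholders):

        import paramiko

        def send_document(file_name, host, remote_file, user, password, port=22):
            # Open an SSH transport, authenticate, and upload a single file over SFTP.
            transport = paramiko.Transport((host, port))
            try:
                # No host key verification here; acceptable for testing only.
                transport.connect(username=user, password=password)
                sftp = paramiko.SFTPClient.from_transport(transport)
                sftp.put(file_name, remote_file)
                sftp.close()
            finally:
                transport.close()

        # Hypothetical values, for illustration only:
        # send_document("report.pdf", "sftp.example.com", "/upload/report.pdf", "user", "secret")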

    Read the article

  • Schema objects not visible in SQL Server Management Studio 2008

    - by Germ
    I'm experiencing a weird problem with a SQL login. When I connect to the server in Microsoft SQL Server Management Studio (2008) using this account, I cannot see any of the tables, stored procedures, etc. that this account should have access to on a particular database. When I connect to the same server within Visual Studio (2008) with the same account, everything is there. When I connect with the same account on a virtual machine, everything is there. I've also had a co-worker connect to the server using the same login and he's able to view everything as well. I use Microsoft SQL Server Management Studio all day, connecting to different servers and databases, and I've never experienced this problem. Does anyone have any suggestions on how I can diagnose this problem? I've checked to make sure I don't have any table filters, etc. There are several databases on this server, and I'm able to see the correct tables that this account has access to in the other databases just fine. Running this query lists the tables I'm expecting to see. SELECT * FROM INFORMATION_SCHEMA.TABLES

    Read the article

  • Help debugging c fifos code - stack smashing detected - open call not functioning - removing pipes

    - by nunos
    I have three bugs/questions regarding the source code pasted below: stack smashing deteced: In order to compile and not have that error I have addedd the gcc compile flag -fno-stack-protector. However, this should be just a temporary solution, since I would like to find where the cause for this is and correct it. However, I haven't been able to do so. Any clues? For some reason, the last open function call doesn't work and the programs just stops there, without an error, even though the fifo already exists. I want to delete the pipes from the filesystem after before terminating the processes. I have added close and unlink statements at the end, but the fifos are not removed. What am I doing wrong? Thanks very much in advance. P.S.: I am pasting here the whole source file for additional clarity. Just ignore the comments, since they are in my own native language. server.c: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <errno.h> #define MAX_INPUT_LENGTH 100 #define FIFO_NAME_MAX_LEN 20 #define FIFO_DIR "/tmp/" #define FIFO_NAME_CMD_CLI_TO_SRV "lrc_cmd_cli_to_srv" typedef enum { false, true } bool; bool background = false; char* logfile = NULL; void read_from_fifo(int fd, char** var) { int n_bytes; read(fd, &n_bytes, sizeof(int)); *var = (char *) malloc (n_bytes); read(fd, *var, n_bytes); printf("read %d bytes '%s'\n", n_bytes, *var); } void write_to_fifo(int fd, char* data) { int n_bytes = (strlen(data)+1) * sizeof(char); write(fd, &n_bytes, sizeof(int)); //primeiro envia o numero de bytes que a proxima instrucao write ira enviar write(fd, data, n_bytes); printf("writing %d bytes '%s'\n", n_bytes, data); } int main(int argc, char* argv[]) { //CRIA FIFO CMD_CLI_TO_SRV, se ainda nao existir char* fifo_name_cmd_cli_to_srv; fifo_name_cmd_cli_to_srv = (char*) malloc ( (strlen(FIFO_NAME_CMD_CLI_TO_SRV) + strlen(FIFO_DIR) + 1) * sizeof(char) ); strcpy(fifo_name_cmd_cli_to_srv, FIFO_DIR); strcat(fifo_name_cmd_cli_to_srv, FIFO_NAME_CMD_CLI_TO_SRV); int n = mkfifo(fifo_name_cmd_cli_to_srv, 0660); //TODO ver permissoes if (n < 0 && errno != EEXIST) //se houver erro, e nao for por causa de ja haver um com o mesmo nome, termina o programa { fprintf(stderr, "erro ao criar o fifo\n"); fprintf(stderr, "errno: %d\n", errno); exit(4); } //se por acaso já existir, nao cria o fifo e continua o programa normalmente //le informacao enviada pelo cliente, nesta ordem: //1. pid (em formato char*) do processo cliente //2. comando /CONNECT //3. nome de fifo INFO_SRV_TO_CLIXXX //4. 
nome de fifo MSG_SRV_TO_CLIXXX char* command; char* fifo_name_info_srv_to_cli; char* fifo_name_msg_srv_to_cli; char* client_pid_string; int client_pid; int fd_cmd_cli_to_srv, fd_info_srv_to_cli; fd_cmd_cli_to_srv = open(fifo_name_cmd_cli_to_srv, O_RDONLY); read_from_fifo(fd_cmd_cli_to_srv, &client_pid_string); client_pid = atoi(client_pid_string); read_from_fifo(fd_cmd_cli_to_srv, &command); //recebe commando /CONNECT read_from_fifo(fd_cmd_cli_to_srv, &fifo_name_info_srv_to_cli); //recebe nome de fifo INFO_SRV_TO_CLIXXX read_from_fifo(fd_cmd_cli_to_srv, &fifo_name_msg_srv_to_cli); //recebe nome de fifo MSG_TO_SRV_TO_CLIXXX //CIRA FIFO MSG_CLIXXX_TO_SRV char fifo_name_msg_cli_to_srv[FIFO_NAME_MAX_LEN]; strcpy(fifo_name_msg_cli_to_srv, FIFO_DIR); strcat(fifo_name_msg_cli_to_srv, "lrc_msg_cli"); strcat(fifo_name_msg_cli_to_srv, client_pid_string); strcat(fifo_name_msg_cli_to_srv, "_to_srv"); n = mkfifo(fifo_name_msg_cli_to_srv, 0660); if (n < 0) { fprintf(stderr, "error creating %s\n", fifo_name_msg_cli_to_srv); fprintf(stderr, "errno: %d\n", errno); exit(5); } //envia ao cliente a resposta ao commando /CONNECT fd_info_srv_to_cli = open(fifo_name_info_srv_to_cli, O_WRONLY); write_to_fifo(fd_info_srv_to_cli, fifo_name_msg_cli_to_srv); free(logfile); free(fifo_name_cmd_cli_to_srv); close(fd_cmd_cli_to_srv); unlink(fifo_name_cmd_cli_to_srv); unlink(fifo_name_msg_cli_to_srv); unlink(fifo_name_msg_srv_to_cli); unlink(fifo_name_info_srv_to_cli); printf("fim\n"); return 0; } client.c: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <errno.h> #define MAX_INPUT_LENGTH 100 #define PID_BUFFER_LEN 10 #define FIFO_NAME_CMD_CLI_TO_SRV "lrc_cmd_cli_to_srv" #define FIFO_NAME_INFO_SRV_TO_CLI "lrc_info_srv_to_cli" #define FIFO_NAME_MSG_SRV_TO_CLI "lrc_msg_srv_to_cli" #define COMMAND_MAX_LEN 100 #define FIFO_DIR "/tmp/" typedef enum { false, true } bool; char* nickname; char* name; char* email; void write_to_fifo(int fd, char* data) { int n_bytes = (strlen(data)+1) * sizeof(char); write(fd, &n_bytes, sizeof(int)); //primeiro envia o numero de bytes que a proxima instrucao write ira enviar write(fd, data, n_bytes); printf("writing %d bytes '%s'\n", n_bytes, data); } void read_from_fifo(int fd, char** var) { int n_bytes; read(fd, &n_bytes, sizeof(int)); *var = (char *) malloc (n_bytes); printf("read '%s'\n", *var); read(fd, *var, n_bytes); } int main(int argc, char* argv[]) { pid_t pid = getpid(); //CRIA FIFO INFO_SRV_TO_CLIXXX char pid_string[PID_BUFFER_LEN]; sprintf(pid_string, "%d", pid); char* fifo_name_info_srv_to_cli; fifo_name_info_srv_to_cli = (char *) malloc ( (strlen(FIFO_DIR) + strlen(FIFO_NAME_INFO_SRV_TO_CLI) + strlen(pid_string) + 1 ) * sizeof(char) ); strcpy(fifo_name_info_srv_to_cli, FIFO_DIR); strcat(fifo_name_info_srv_to_cli, FIFO_NAME_INFO_SRV_TO_CLI); strcat(fifo_name_info_srv_to_cli, pid_string); int n = mkfifo(fifo_name_info_srv_to_cli, 0660); if (n < 0) { fprintf(stderr, "error creating %s\n", fifo_name_info_srv_to_cli); fprintf(stderr, "errno: %d\n", errno); exit(6); } int fd_cmd_cli_to_srv, fd_info_srv_to_cli; fd_cmd_cli_to_srv = open("/tmp/lrc_cmd_cli_to_srv", O_WRONLY); char command[COMMAND_MAX_LEN]; printf("> "); scanf("%s", command); while (strcmp(command, "/CONNECT")) { printf("O primeiro comando deverá ser \"/CONNECT\"\n"); printf("> "); scanf("%s", command); } //CRIA FIFO MSG_SRV_TO_CLIXXX char* fifo_name_msg_srv_to_cli; fifo_name_msg_srv_to_cli = (char *) malloc ( 
(strlen(FIFO_DIR) + strlen(FIFO_NAME_MSG_SRV_TO_CLI) + strlen(pid_string) + 1) * sizeof(char) ); strcpy(fifo_name_msg_srv_to_cli, FIFO_DIR); strcat(fifo_name_msg_srv_to_cli, FIFO_NAME_MSG_SRV_TO_CLI); strcat(fifo_name_msg_srv_to_cli, pid_string); n = mkfifo(fifo_name_msg_srv_to_cli, 0660); if (n < 0) { fprintf(stderr, "error creating %s\n", fifo_name_info_srv_to_cli); fprintf(stderr, "errno: %d\n", errno); exit(7); } // ENVIA COMANDO /CONNECT write_to_fifo(fd_cmd_cli_to_srv, pid_string); //envia pid do processo cliente write_to_fifo(fd_cmd_cli_to_srv, command); //envia commando /CONNECT write_to_fifo(fd_cmd_cli_to_srv, fifo_name_info_srv_to_cli); //envia nome de fifo INFO_SRV_TO_CLIXXX write_to_fifo(fd_cmd_cli_to_srv, fifo_name_msg_srv_to_cli); //envia nome de fifo MSG_TO_SRV_TO_CLIXXX // recebe do servidor a resposta ao comanddo /CONNECT printf("msg1\n"); printf("vamos tentar abrir %s\n", fifo_name_info_srv_to_cli); fd_info_srv_to_cli = open(fifo_name_info_srv_to_cli, O_RDONLY); printf("%s aberto", fifo_name_info_srv_to_cli); if (fd_info_srv_to_cli < 0) { fprintf(stderr, "erro ao criar %s\n", fifo_name_info_srv_to_cli); fprintf(stderr, "errno: %d\n", errno); } printf("msg2\n"); char* fifo_name_msg_cli_to_srv; printf("msg3\n"); read_from_fifo(fd_info_srv_to_cli, &fifo_name_msg_cli_to_srv); printf("msg4\n"); free(nickname); free(name); free(email); free(fifo_name_info_srv_to_cli); free(fifo_name_msg_srv_to_cli); unlink(fifo_name_msg_srv_to_cli); unlink(fifo_name_info_srv_to_cli); printf("fim\n"); return 0; } makefile: CC = gcc CFLAGS = -Wall -lpthread -fno-stack-protector all: client server client: client.c $(CC) $(CFLAGS) client.c -o client server: server.c $(CC) $(CFLAGS) server.c -o server clean: rm -f client server *~
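
    Not a fix for the C code above, but one detail that often explains a "hanging" open() on a FIFO: opening a FIFO for reading blocks until some other process opens it for writing (and vice versa), and unlink() only removes the name from the filesystem. A tiny Python sketch (the path is made up) that demonstrates both behaviours:

        import os

        FIFO_PATH = "/tmp/demo_fifo"  # hypothetical path, for illustration only

        if not os.path.exists(FIFO_PATH):
            os.mkfifo(FIFO_PATH, 0o660)

        # This open() blocks until another process opens the FIFO for writing,
        # e.g. from a shell: echo hello > /tmp/demo_fifo
        with open(FIFO_PATH, "r") as fifo:
            print(fifo.readline().rstrip())

        os.unlink(FIFO_PATH)  # removes the FIFO's name once no process is holding it open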

    Read the article

  • flex and jsf access the same instance of bean

    - by David
    I have integrated a Flex app into a JSF/ICEfaces app (in a jspx page with the ice:outputmedia tag) and want to access, via remoting from Flex, the same bean instance that JSF injects. I can already connect with BlazeDS to a Java bean. This bean, like all the other beans, gets other beans via JSF injection (a localizer and an access manager, both session scoped), but when I access the bean remotely from Flex it does not hold the injected beans, and I cannot reach the JSF session (FacesContext.getCurrentInstance() is null). I think this is because Flex creates a new instance of the bean, which is not the same instance that JSF injects. I can connect from Flex to the database by creating a new entity manager in the Java bean, but that is not what I want, because it is yet another entity manager; I want to persist and fetch data through the access-manager bean. I know about Exadel Fiji and Flamingo, but I could not work with Fiji because my JSF app uses ICEfaces components, which do not work with the RichFaces components that Fiji needs. And Flamingo works only with JBoss Seam and Spring; is that right? I also read about the Spring/Flex integration, but the JSF application was not built with Spring and I do not want to introduce Spring into such a large JSF app. Yesterday I read about the FlexFactory interface: you implement it in your own factory and register that factory in the service-config.xml of BlazeDS (read this). I have implemented my own factory, but I only get application-scoped beans through the servlet context, which I retrieve with FlexContext.getServletContext().getAttribute("Bean"); and not session-scoped beans. I hope there is a way to connect Flex and JSF here. Thanks!

    Read the article

  • eclipse and tomcat

    - by Panayiotis Karabassis
    Hi! I am trying to integrate eclipse with tomcat. My system is Debian Lenny and I have installed tomcat from http://tomcat.apache.org/. My problem is that when launching Tomcat from within eclipse I get the following error: SEVERE: StandardServer.await: create[8005]: java.net.SocketException: Invalid argument at java.net.PlainSocketImpl.socketBind(Native Method) at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:365) at java.net.ServerSocket.bind(ServerSocket.java:319) at java.net.ServerSocket.<init>(ServerSocket.java:185) at org.apache.catalina.core.StandardServer.await(StandardServer.java:373) at org.apache.catalina.startup.Catalina.await(Catalina.java:662) at org.apache.catalina.startup.Catalina.start(Catalina.java:614) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Jun 16, 2010 9:54:34 PM org.apache.coyote.http11.Http11Protocol pause INFO: Pausing Coyote HTTP/1.1 on http-8080 Jun 16, 2010 9:54:34 PM org.apache.catalina.connector.Connector pause SEVERE: Protocol handler pause failed java.net.SocketException: Network is unreachable at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at java.net.Socket.connect(Socket.java:478) at java.net.Socket.<init>(Socket.java:375) at java.net.Socket.<init>(Socket.java:218) at org.apache.jk.common.ChannelSocket.unLockSocket(ChannelSocket.java:487) at org.apache.jk.common.ChannelSocket.pause(ChannelSocket.java:284) at org.apache.jk.server.JkMain.pause(JkMain.java:725) at org.apache.jk.server.JkCoyoteHandler.pause(JkCoyoteHandler.java:153) at org.apache.catalina.connector.Connector.pause(Connector.java:1029) at org.apache.catalina.core.StandardService.stop(StandardService.java:566) at org.apache.catalina.core.StandardServer.stop(StandardServer.java:744) at org.apache.catalina.startup.Catalina.stop(Catalina.java:648) at org.apache.catalina.startup.Catalina$CatalinaShutdownHook.run(Catalina.java:692) I suspect this is related to the java option java.net.preferIPv4Stack. Thanks in advance!

    Read the article

  • Qt, no such slot error

    - by Martin Beckett
    I'm having a strange problem with slots in Qt 4.6. I have a tree control from which I am trying to fire an event back to the main window to edit something. I have boiled it down to this minimal example: class MainWindow : public QMainWindow { Q_OBJECT public: MainWindow(int argc, char **argv ); private Q_SLOTS: void about() {QMessageBox::about(this, "about","about"); } // Just the default about box void doEditFace() {QMessageBox::about(this, "test","doEditFace"); } // pretty much identical .... } class TreeModel : public QAbstractItemModel { Q_OBJECT Q_SIGNALS: void editFace(); ... } In my MainWindow() I have connect(treeModel, SIGNAL(editFace()), this, SLOT(about())); // returns true connect(treeModel, SIGNAL(editFace()), this, SLOT(doEditFace())); // returns false When I run it, the second line gives a warning: Object::connect: No such slot MainWindow::doEditFace() in \src\mainwindow.cpp:588 As far as I can see, doEditFace() appears in the moc_ qmeta class perfectly correctly, and the edit event fires and pops up the about box. The order doesn't matter: if I connect the about box second it still works, but my slot doesn't! VS2008, Windows XP, Qt 4.6.2

    Read the article

< Previous Page | 122 123 124 125 126 127 128 129 130 131 132 133  | Next Page >