Search Results

Search found 11180 results on 448 pages for 'serial port'.


  • Port Win32 DLL hook to Linux

    - by peachykeen
    I have a program (NWShader) which hooks into a second program's OpenGL calls (NWN) to do post-processing effects and whatnot. NWShader was originally built for Windows, generally modern versions (Win32), and uses both DLL exports (to get Windows to load it and grab some OpenGL functions) and Detours (to hook into other functions). I'm using the trick where Windows looks in the current directory for DLLs before checking the system directory, so it loads mine. I have one DLL that redirects with this method:

      #pragma comment(linker, "/export:oldFunc=nwshader.newFunc")

    to send calls to a differently named function in my own DLL. I then do any processing and call the original function from the system DLL.

    I need to port NWShader to Linux (NWN exists in both flavors). As far as I can tell, what I need to build is a shared library (.so file). If it is preloaded before the NWN executable starts (I found a shell script to handle this), my functions will be called. The only problem is that I need to call the original function (I would use some dynamic-loading method for this, I think) and I need to be able to do Detours-like hooking of internal functions.

    At the moment I'm building on Ubuntu 9.10 x64 (with the 32-bit compiler flags). I haven't been able to find much on Google to help with this, mostly because I don't know what the *nix community calls the technique. I can code C++, but I'm more used to Windows. Being OpenGL, the only parts that need modifying for Linux are the hooking code and the calls. Is there a simple and easy way to do this, or will it involve recreating Detours and dynamically loading the original function addresses?
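
    On Linux the usual name for the first half of this is symbol interposition via LD_PRELOAD: the dynamic linker resolves symbols from the preloaded .so first, and the hook fetches the real implementation with dlsym(RTLD_NEXT, ...). A minimal sketch, assuming a glClear hook purely for illustration (not NWShader's actual code):

      /* hook.c - build: gcc -m32 -shared -fPIC hook.c -o nwshader.so -ldl
       * run:            LD_PRELOAD=./nwshader.so ./nwn                   */
      #define _GNU_SOURCE
      #include <dlfcn.h>
      #include <GL/gl.h>

      /* This definition shadows libGL's, because the preloaded .so is
       * searched first during symbol resolution. */
      void glClear(GLbitfield mask)
      {
          static void (*real_glClear)(GLbitfield) = 0;
          if (!real_glClear)   /* look up the "next" (real) glClear once */
              real_glClear = (void (*)(GLbitfield))dlsym(RTLD_NEXT, "glClear");

          /* ... pre-processing here ... */
          real_glClear(mask);  /* forward to the original implementation */
          /* ... post-processing here ... */
      }

    Note that interposition only covers calls resolved through the dynamic linker; hooking internal, non-exported functions still needs a Detours-style trampoline.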

  • Recommended approach to port to ASP.NET MVC

    - by tshao
    I think many of us have faced the same question: what are the best practices for porting an existing Web Forms app to MVC? The situation for me is that we'll support both Web Forms and MVC at the same time. That means we create new features in MVC while maintaining legacy pages in Web Forms, all in the same project.

    The point is: we want to keep the DRY (don't repeat yourself) principle and reduce duplicate code as much as possible. The ASPX pages are not a problem, as we only create new features in MVC, but there are still some shared components we want to re-use in both new and legacy pages: master pages and user controls.

    The question here is: is it possible to create a common master page / user control that can be used in both Web Forms and MVC? I know that ViewMasterPage inherits from MasterPage and ViewUserControl inherits from UserControl, so it may be OK to let both Web Forms and MVC ASPX pages refer to the MVC version. I did some testing and found it sometimes generates errors while rendering user controls. Any ideas or experience you can share with me? I'd really appreciate it.
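
    One hybrid arrangement that has been described for this situation (a sketch only, names illustrative, untested against this exact project) keeps a plain MasterPage as the shared root, with a thin ViewMasterPage nested under it for the MVC views, so both page types share one layout:

      <%-- Site.Master: plain root master, usable directly by Web Forms pages --%>
      <%@ Master Language="C#" Inherits="System.Web.UI.MasterPage" %>
      <html>
      <body>
          <asp:ContentPlaceHolder ID="Main" runat="server" />
      </body>
      </html>

      <%-- Mvc.Master: thin wrapper nested under the shared root, for MVC views --%>
      <%@ Master Language="C#" MasterPageFile="~/Site.Master"
          Inherits="System.Web.Mvc.ViewMasterPage" %>
      <asp:Content ContentPlaceHolderID="Main" runat="server">
          <asp:ContentPlaceHolder ID="MvcMain" runat="server" />
      </asp:Content>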

  • Send file FTP over SSL with custom port number

    - by JM4
    I have asked this question before, but in a different manner. I am taking form data, compiling it into a temporary CSV file, and trying to send it to a client via FTP over SSL (this is the only route I am interested in hearing solutions for; unless there is a workaround, I cannot make changes). I have tried the following:

    - ftp_connect - nothing happens, the page just times out
    - ftp_ssl_connect - nothing happens, the page just times out
    - curl library - same thing; given the URL it also gives an error

    I am given the following information:

    - FTPS server IP address
    - TCP port (1234)
    - Username
    - Password
    - Data directory to dump the file into
    - FTP mode: passive

    Very, very basic code (which I believe should initiate a connection at minimum):

      <?php
      $ftp_server = "00.000.00.000"; // masked for security
      $ftp_port = "1234";            // masked, but not 990
      $ftp_user_name = "username";
      $ftp_user_pass = "password";

      // set up basic ssl connection
      $conn_id = ftp_ssl_connect($ftp_server, $ftp_port, "20");

      // login with username and password
      $login_result = ftp_login($conn_id, $ftp_user_name, $ftp_user_pass);

      echo ftp_pwd($conn_id); // /
      echo "hello";

      // close the ssl connection
      ftp_close($conn_id);
      ?>

    When I run this over a SmartFTP client, everything works just fine. I just can't get it to work using PHP (which is a necessity). Has anybody had success doing this in the past? I would be very interested to hear your approach.
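
    One PHP-side avenue worth noting: ftp_ssl_connect() only speaks explicit FTPS (AUTH TLS), so if the server expects implicit TLS on its custom port, a plain connection will simply hang. A rough, untested cURL sketch using the question's placeholder details:

      <?php
      // Sketch (untested): upload over explicit FTPS with PHP's cURL
      // extension; switch to the "ftps://" scheme if the server expects
      // implicit TLS on its custom port. Paths/credentials are placeholders.
      $fp = fopen('/tmp/export.csv', 'r');

      $ch = curl_init('ftp://00.000.00.000:1234/data/export.csv');
      curl_setopt($ch, CURLOPT_USERPWD, 'username:password');
      curl_setopt($ch, CURLOPT_FTP_SSL, CURLFTPSSL_ALL);      // require TLS
      curl_setopt($ch, CURLOPT_FTPSSLAUTH, CURLFTPAUTH_TLS);  // use AUTH TLS
      curl_setopt($ch, CURLOPT_UPLOAD, true);                 // cURL defaults to passive mode
      curl_setopt($ch, CURLOPT_INFILE, $fp);
      curl_setopt($ch, CURLOPT_INFILESIZE, filesize('/tmp/export.csv'));

      if (curl_exec($ch) === false) {
          echo 'cURL error: ' . curl_error($ch);
      }
      curl_close($ch);
      fclose($fp);
      ?>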

  • Removing the port number from URL

    - by DrewSSP
    I'm new to anything related to servers and am trying to deploy a Django application. Today I bought a domain name for the app and am having trouble configuring it so that the base URL does not need the port number at the end. I have to type www.trackthecharts.com:8001 to see the website, when I only want to use www.trackthecharts.com. I think the problem is somewhere in my nginx, gunicorn, or supervisor configuration.

    gunicorn_config.py:

      command = '/opt/myenv/bin/gunicorn'
      pythonpath = '/opt/myenv/top-chart-app/'
      bind = '162.243.76.202:8001'
      workers = 3

    nginx config:

      server {
          server_name 162.243.76.202;
          access_log off;

          location /static/ {
              alias /opt/myenv/static/;
          }

          location / {
              proxy_pass http://127.0.0.1:8001;
              proxy_set_header X-Forwarded-Host $server_name;
              proxy_set_header X-Real-IP $remote_addr;
              add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
          }
      }

    supervisor config:

      [program:top_chart_gunicorn]
      command=/opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py djangoTopChartApp.wsgi
      autostart=true
      autorestart=true
      stderr_logfile=/var/log/supervisor_gunicorn.err.log
      stdout_logfile=/var/log/supervisor_gunicorn.out.log

    Thanks for taking a look.
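
    A sketch of the usual arrangement (assuming the domain's DNS points at this server): browsers use port 80 by default, so nginx should answer there for the domain and proxy to gunicorn, and gunicorn should bind where nginx's proxy_pass expects it:

      # gunicorn_config.py - bind on loopback, matching nginx's proxy_pass:
      #   bind = '127.0.0.1:8001'

      server {
          listen 80;                            # the default HTTP port
          server_name trackthecharts.com www.trackthecharts.com;

          location /static/ {
              alias /opt/myenv/static/;
          }

          location / {
              proxy_pass http://127.0.0.1:8001; # matches gunicorn's bind
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
          }
      }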

  • Yeoman 'grunt test' fails on clean project with 'port already in use'

    - by XMLilley
    With:

    - Mac OS 10.8.4
    - Node 0.10.12
    - npm 1.3.1
    - grunt-cli 0.1.9
    - yo 1.0.0-rc.1
    - bower 0.9.2
    - [email protected]

    I encounter the following error with a clean yo angular project, followed by grunt server and then grunt test:

      Running "connect:test" (connect) task
      Fatal error: Port 9000 is already in use by another process.

    I'm new to Yeoman and am stumped. I've deleted my original project and created a new one in a fresh folder just to make sure I wasn't overlooking any invisible configs. I restarted the machine to make sure I wasn't running any temporary server processes I had forgotten about. After all attempts, the basic server starts fine, attaches to Chrome, and the watcher updates the browser on any changes. (Notably, the dev server is running on 9000, which makes it odd for the test runner to also be trying to use 9000.) But I get that same error on every attempt to start the test runner. Is this something I can fix, or an issue I should report to the Yeoman team? Thanks.
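
    One thing worth checking (a sketch from memory of generator-angular Gruntfiles of this vintage; the exact layout may differ): the connect task's targets inherit a shared port option, so giving the test target its own free port avoids clashing with the running dev server:

      // Gruntfile.js (excerpt) - port numbers are hypothetical
      connect: {
        options: {
          port: 9000,          // inherited by every target, incl. connect:test
          hostname: 'localhost'
        },
        test: {
          options: {
            port: 9001         // any free port distinct from the dev server's
          }
        }
      }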

  • How can I get http_port 3129 intercept working with squid?

    - by wmoreno3
    My system:

      $ uname -a
      FreeBSD server.local.jmorenov.com.co 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec 4 09:23:10 UTC 2012 [email protected]:/usr/obj/usr/src/sys/GENERIC amd64
      $ pkg info | grep squid
      squid-3.2.7    HTTP Caching Proxy

    I have this configuration in squid.conf:

      http_port 3128 accel vhost allow-direct  # OK
      http_port 3129 intercept                 # does not work
      icp_port 0

    When I try http_port 3129 intercept, by switching the rdr lines in ipnat.rules, this appears in the access log:

      2013/01/09 00:46:03 kid1| IPF (IPFilter) NAT open failed: (13) Permission denied
      2013/01/09 00:46:03 kid1| BUG #3329: Orphan Comm::Connection: local=127.0.0.1:3129 remote=192.168.1.129:51595 FD 24 flags=33
      2013/01/09 00:46:03 kid1| NOTE: 1 Orphans since last started.

    /var/log/squid/cache.log:

      2013/02/08 09:02:33 kid1| Squid plugin modules loaded: 0
      2013/02/08 09:02:33 kid1| Accepting reverse-proxy HTTP Socket connections at local=127.0.0.1:3128 remote=[::] FD 33 flags=9
      2013/02/08 09:02:33 kid1| Accepting NAT intercepted HTTP Socket connections at local=127.0.0.1:3129 remote=[::] FD 34 flags=41

    My /etc/ipnat.rules:

      # em0  = external NIC
      # bge0 = internal NIC
      map em0 0/0 -> 0/32 proxy port ftp ftp/tcp
      map em0 0/0 -> 0/32 portmap tcp/udp auto
      map em0 0/0 -> 0/32

      # Redirect direct web traffic to the local web server.
      rdr em0 192.168.0.3/32 port 80 -> 127.0.0.1 port 80 tcp
      rdr bge0 192.168.1.3/32 port 80 -> 127.0.0.1 port 80 tcp

      # Redirect everything else to squid on port 3128, or 3129 for intercept.
      rdr em0 0.0.0.0/0 port 80 -> 127.0.0.1 port 3128 tcp
      rdr bge0 0.0.0.0/0 port 80 -> 127.0.0.1 port 3128 tcp
      #rdr em0 0.0.0.0/0 port 80 -> 127.0.0.1 port 3129 tcp
      #rdr bge0 0.0.0.0/0 port 80 -> 127.0.0.1 port 3129 tcp

    With 3128 everything is OK, but 3129 does not work when I switch the rules in ipnat.rules.
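
    The "(13) Permission denied" line suggests one specific check: with IPFilter interception, squid's worker (which runs unprivileged after startup) has to read the NAT device to recover each connection's original destination. A hedged sketch of the commonly suggested fix (the group name depends on how the port installed squid):

      # Does the squid effective user/group have read access to the NAT device?
      ls -l /dev/ipnat

      # Commonly suggested workaround: grant the proxy's group read access.
      chgrp squid /dev/ipnat
      chmod g+r /dev/ipnat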

  • 1and1 ssh - connection refused

    - by kitensei
    I'm having trouble connecting to my 1&1 account through SSH. When I try to connect with the command ssh userXXX@host -p22 -vv, I get the following output:

      OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to mySite.com [ip_here] port 22.
      debug1: connect to address ip_here port 22: Connection refused

    Moreover, once I try to connect through SSH and it fails, even HTTP access is dead; I cannot reach the website in a browser anymore. Please help! I'm running Ubuntu 11.10.

    EDIT: I don't know if it can help, but here's the .htaccess on the 1&1 server:

      Options +Indexes
      Satisfy any
      Order Deny,Allow
      Allow from 212.227.X.X
      Deny from all
      RemoveType .html .gif
      AuthType Basic
      AuthName "Access to /logs"
      AuthUserFile /kunden/homepages/43/d376072470/htpasswd
      Require user "user_here"

    and sftp.log:

      Mar 26 09:21:24 193.251.X USER_HERE Connection from 193.251.X port 51809
      Mar 26 09:21:30 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
      Mar 26 09:23:39 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
      Mar 26 09:23:41 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
      Mar 26 09:23:45 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
      Mar 26 09:23:57 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
      Mar 26 10:53:36 212.227.X tmp64459736-3228 Connection from 212.227.X port 23275
      Mar 26 10:53:36 212.227.X tmp64459736-3228 Accepted password for tmp64459736-3228 from 212.227.X port 23275 ssh2
      Mar 26 11:53:37 212.227.X tmp64459736-3228 Connection closed by 212.227.X
      Mar 26 18:58:17 212.227.X tmp64459736-5363 Connection from 212.227.X port 23353
      Mar 26 18:58:17 212.227.X tmp64459736-5363 Accepted password for tmp64459736-5363 from 212.227.X port 23353 ssh2
      Mar 26 19:53:36 212.227.X tmp64459736-8525 Connection from 212.227.X port 5166
      Mar 26 19:53:36 212.227.X tmp64459736-8525 Accepted password for tmp64459736-8525 from 212.227.X port 5166 ssh2
      Mar 26 19:58:17 212.227.X tmp64459736-5363 Connection closed by 212.227.X

  • Port forwarding in C#/software possible? Isn't it only managed by the router?

    - by Rudi
    Isn't port forwarding managed by the router? I've googled up some software applications that claim to set up port forwarding with great success, but it seems technically impossible. The packet must first go to the router, and the router must forward it to the correct computer based on port-forwarding rules. So how can a software application manage port forwarding, when any packet would have to reach the computer running that application in the first place, which would mean the forwarding already works?
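
    For context: such tools usually don't touch packets at all. They ask the router over UPnP IGD (or NAT-PMP) to create the mapping, using outbound LAN traffic, so no pre-existing forwarding is needed. A rough C# sketch using the Mono.Nat library (API written from memory; treat the names as assumptions):

      using System;
      using Mono.Nat;

      class PortMapper
      {
          static void Main()
          {
              // Raised when a UPnP-capable router answers the LAN discovery
              // broadcast - no inbound forwarding is needed for this to work.
              NatUtility.DeviceFound += (sender, e) =>
              {
                  // Ask the router to forward external TCP 8080 to this host.
                  e.Device.CreatePortMap(new Mapping(Protocol.Tcp, 8080, 8080));
                  Console.WriteLine("Mapped via {0}", e.Device.GetExternalIP());
              };
              NatUtility.StartDiscovery();
              Console.ReadLine(); // keep the process alive while discovering
          }
      }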

  • How do I install my Wacom Intuos 2 serial tablet?

    - by Gizmoatwork
    I've seen many topics on the subject, but they are too complicated for me, and I'm not confident compiling stuff. Is there some headache-free way to make it work under Ubuntu? Where do I start?

    Edit: It doesn't seem to work. The device looks like this:

      looking at device '/devices/pnp0/00:08/tty/ttyS0':
        KERNEL=="ttyS0"
        SUBSYSTEM=="tty"
        DRIVER==""
      looking at parent device '/devices/pnp0/00:08':
        KERNELS=="00:08"
        SUBSYSTEMS=="pnp"
        DRIVERS=="serial"
        ATTRS{id}=="PNP0501"
      looking at parent device '/devices/pnp0':
        KERNELS=="pnp0"
        SUBSYSTEMS==""
        DRIVERS==""

    and when I type the suggested rule

      ACTION=="add|change", SUBSYSTEMS=="pnp", ATTRS{id}=="PNP0501", ENV{ID_INPUT}="1", ENV{ID_INPUT_TABLET}="1"

    into the terminal, I get:

      ATTRS{id}==PNP0501, : command not found

    I am a bit confused. Am I right to type it in the terminal?
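
    That last line is bash complaining, which hints at the fix: the ACTION=="add|change" line is a udev rule and belongs in a rules file, not on the command line. A sketch (the file name and path follow the usual convention, but treat them as an assumption):

      # Save the rule as e.g. /etc/udev/rules.d/69-wacom-serial.rules:
      #   ACTION=="add|change", SUBSYSTEMS=="pnp", ATTRS{id}=="PNP0501", \
      #     ENV{ID_INPUT}="1", ENV{ID_INPUT_TABLET}="1"
      # then reload udev and re-trigger the serial port:
      sudo udevadm control --reload-rules
      sudo udevadm trigger --sysname-match=ttyS0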

  • How do I allow a non-default user to use serial device ttyUSB0?

    - by lucaghera
    I have an Ubuntu 11.10 system with two users: the first was created during installation; the second was created afterwards and belongs to the sudoers group. The problem is that when the second user tries to use the device ttyUSB0, the following error is returned:

      Could not open serial port /dev/ttyUSB0

    I was able to fix it with:

      sudo chown :second_user /dev/ttyUSB0

    However, when I disconnect the device and reconnect it, the problem comes back. Is there a way to allow different users to access the device? I suppose I have to add the user to a specific group. Currently the owner is root and the group is dialout, but I'm not sure about the group and I don't know how to add the user. Thanks!
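
    Since the device node is recreated on every replug, the durable approach is group membership rather than chown. A sketch of the standard fix, assuming the group really is dialout as shown by ls -l:

      # Add the second user to the group that owns /dev/ttyUSB0 ...
      sudo usermod -a -G dialout second_user
      # ... then have them log out and back in, and verify:
      groups second_user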

  • wopen calls when porting to Linux

    - by laura
    I have an application which was developed under Windows, but with gcc. The code is mostly OS-independent, with very few Windows-specific classes, because a Linux port was always regarded as necessary. The API, especially the parts called as a direct result of user interaction, uses wide-char arrays instead of char arrays (as a side note, I cannot change the API itself; at this point, std::wstring cannot be used). These are considered to be encoded in UTF-16. In some places the code opens files, mostly using the Windows-specific _wopen function call. The problem is that there is no wopen-like substitute on Linux, because Linux "only deals with bytes".

    The question is: how do I port this code? If I wanted to open a file with the name "something™.log", how would I go about doing so on Linux? Is a cast to char* sufficient; would the wide chars be picked up automatically based on the locale (probably not)? Do I need to convert manually? I'm a bit confused about this; perhaps someone could point me to some documentation on the matter.
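
    For reference, a cast won't work: Linux filenames are byte strings, conventionally UTF-8 under a UTF-8 locale, so the usual approach is an explicit UTF-16 to UTF-8 conversion before calling open(2). A rough sketch with iconv (it assumes the API's wide type is a 16-bit little-endian unit, since wchar_t is 32-bit on Linux; the helper name is hypothetical):

      #include <iconv.h>
      #include <fcntl.h>
      #include <sys/stat.h>
      #include <string>

      // Hypothetical helper: open a file whose name arrives as UTF-16.
      int open_utf16(const unsigned short* name16, size_t len16,
                     int flags, mode_t mode)
      {
          iconv_t cd = iconv_open("UTF-8", "UTF-16LE");
          if (cd == (iconv_t)-1)
              return -1;

          std::string out(len16 * 4 + 1, '\0');   // worst case: 4 bytes/unit
          char* in = (char*)name16;
          size_t inleft = len16 * sizeof(unsigned short);
          char* dst = &out[0];
          size_t outleft = out.size() - 1;

          size_t rc = iconv(cd, &in, &inleft, &dst, &outleft);
          iconv_close(cd);
          if (rc == (size_t)-1)
              return -1;

          out.resize(dst - &out[0]);              // trim to converted length
          return open(out.c_str(), flags, mode);  // plain byte-string open(2)
      }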

  • SSH tunneling with Synology

    - by dvkch
    I'm trying to tunnel SMB and AFP through SSH to access my NAS shares from my machine. I already do it successfully with my ReadyNAS, using the following command line (run as my user on my Mac):

      ssh -Nf -p 22 -c 3des-cbc USER@SERVER -L 8888/127.0.0.1/548 -L 9999/127.0.0.1/139

    but I cannot reproduce the same with the Synology NAS. Connecting with this command gives me the following error:

      channel 4: open failed: administratively prohibited: open failed

    I also tried with a Windows client (Bitvise's tunneler): it works with the ReadyNAS but not the Synology, where I get the following error message:

      server denied request for client-side server-2-client forwarding on 127.0.0.1:139

    I modified /etc/ssh/sshd_config:

      MaxSessions 10
      PasswordAuthentication yes
      PermitEmptyPasswords no
      AllowTcpForwarding yes
      GatewayPorts yes
      PermitTunnel yes

    Is there any way to make it work? I should add that I can successfully connect to the NAS via SSH, so I don't think this is a firewall issue between the Synology and my computer. Thanks for your answers.
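
    "administratively prohibited" is sshd itself refusing the forward by policy, which often means the edited sshd_config is not the one in effect yet. A hedged checklist (the exact restart mechanism varies across DSM versions):

      # sshd only re-reads its config on restart/SIGHUP, and a trailing
      # "Match" block can silently override AllowTcpForwarding:
      grep -in 'AllowTcpForwarding\|^Match' /etc/ssh/sshd_config
      # signal the running daemon to reload its configuration:
      kill -HUP $(cat /var/run/sshd.pid)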

  • Self-powered USB hub and power supply adapter ampere capacity

    - by galacticninja
    I am looking for a power supply adapter for my USB hub so it can support at least two bus-powered external hard drives. The hub's rating is 5 volts, 2 amperes. I would like to know if it would be OK to buy a power supply adapter rated at less than 2 A. I've been comparing adapters, and the ones that do supply 2 A are more expensive (more expensive than the USB hub itself) than those rated below 2 A. Will an adapter rated at less than 2 A (~1-1.5 A) work fine for two external hard drives? The drives are both bus-powered Western Digital My Passport Essentials 250 GB. The OS is Windows XP SP3.
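
    A rough budget, assuming each bus-powered drive can draw the full USB 2.0 per-port allowance:

      2 drives x 500 mA (USB 2.0 max per port)  = 1000 mA
      hub electronics (rough allowance)        ~=  100 mA
                                                 --------
      steady-state estimate                    ~=  1.1 A

      Spin-up surges on 2.5" drives can briefly exceed 500 mA each,
      which is presumably why the hub is rated for the full 2 A.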

  • USB to LPT adapter?

    - by Dave
    I'm bummed out: pretty much all of our computers here lack parallel ports. I have an EETools ChipMax programming tool that has one of the old-school Centronics connectors on the back. I figured that someone must make a USB-to-LPT adapter. Sure enough, I found one from IOGEAR, the GUC1284B, which is a USB-to-parallel printer cable. Note the emphasis on "printer": it must connect to a printer; it isn't a generic USB-to-parallel interface, unfortunately. Does anyone here know of an adapter that works for parallel devices that aren't printers? I'd hate to have to buy a USB version of the ChipMax when I don't need to use it very often.

  • Restrict VPN user to Remote Desktop only with Sonicwall

    - by Matt
    Basically, I want him to only be able to log onto the VPN in order to use Remote Desktop to reach HIS machine: not surf the internet or do anything like that, just use the programs on his machine that he doesn't have at home. We use a SonicWall NSA 220 with their regular VPN client. I can create a user for him, but when I create an access rule it applies to all VPN users. How can I make a rule like that apply to only ONE user?

  • Does the nginx “upstream” directive have a port setting?

    - by user55467
    (Moved from: http://stackoverflow.com/questions/3748517/does-nginx-upstream-has-a-port-setting)

    I use upstream and proxy for load balancing. The directive proxy_pass http://upstream_name uses the default port, which is 80. However, if the upstream server does not listen on this port, then the request fails. How do I specify an alternate port?

    My configuration:

      http {
          #...
          upstream myups {
              server 192.168.1.100:6666;
              server 192.168.1.101:9999;
          }
          #....
          server {
              listen 81;
              #.....
              location ~ /myapp {
                  proxy_pass http://myups:81/;
              }
          }
      }

    nginx -t:

      [warn]: upstream "myups" may not have port 81 in /opt/nginx/conf/nginx.conf:78.
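
    For what it's worth, the ports in the configuration above are already in the right place (on the upstream's server lines); the warning comes from also putting ":81" on proxy_pass. A sketch of the usual form:

      # Each backend's port is set on its own "server" line; proxy_pass
      # references the upstream group by name, with no port of its own.
      upstream myups {
          server 192.168.1.100:6666;
          server 192.168.1.101:9999;
      }

      server {
          listen 81;
          location ~ /myapp {
              proxy_pass http://myups;   # no ":81" (and no URI with a regex location)
          }
      }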

  • My datacard goes online but doesn't give internet access (Fedora 14).

    - by Harsh
    I am using an MTS datacard. I have usb_modeswitch installed and have configured the wvdial.conf file. When I do sudo wvdial cdma, the IPs and DNS addresses are allocated, but I still can't access the internet. The output of dmesg | grep -e 'tty' -e 'modem' is:

      [    0.000000] console [tty0] enabled
      [   11.098238] USB Serial support registered for GSM modem (1-port)
      [   11.098352] option 6-1:1.0: GSM modem (1-port) converter detected
      [   11.102170] usb 6-1: GSM modem (1-port) converter now attached to ttyUSB0
      [   11.102207] option 6-1:1.1: GSM modem (1-port) converter detected
      [   11.102334] usb 6-1: GSM modem (1-port) converter now attached to ttyUSB1
      [   11.102364] option 6-1:1.2: GSM modem (1-port) converter detected
      [   11.102488] usb 6-1: GSM modem (1-port) converter now attached to ttyUSB2
      [   11.102522] option 6-1:1.3: GSM modem (1-port) converter detected
      [   11.102643] usb 6-1: GSM modem (1-port) converter now attached to ttyUSB3
      [   11.102672] option 6-1:1.4: GSM modem (1-port) converter detected
      [   11.102793] usb 6-1: GSM modem (1-port) converter now attached to ttyUSB4
      [   11.103074] option: v0.7.2:USB Driver for GSM modems

    Can anyone tell me what I should do?
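
    Once wvdial reports addresses, the link itself is probably up; two hypothetical checks that often narrow this down are the default route and the resolver:

      # Is there a default route over the PPP link?
      ip route                              # look for "default ... dev ppp0"
      sudo route add default dev ppp0       # add one if it is missing

      # Did the negotiated DNS servers make it into the resolver config?
      cat /etc/resolv.conf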

  • HTTP, HTTPS and FTP are not working, but SMTP and IMAP are

    - by Unicron
    Hi all. Yesterday a strange thing happened on a friend's computer: after booting, the ports for HTTP, HTTPS, and FTP are closed, but e-mail is still working. In the Control Panel, the Windows firewall seems active even when he tries to deactivate it. I suspect Norton Internet Security 2010 is at fault. We have tried to uninstall it, but the uninstallation did not work, and when using the removal tool from Symantec it just goes to 23% and then crashes. The process ccSvcHst.exe is still running. How can I safely remove the rest of Norton Internet Security? Thanks in advance.

    [edit] Norton Internet Security 2010 has been successfully removed, but there is still no connectivity.

  • Does stunnel prevent non-SSL traffic to the specified port?

    - by user432024
    Say I have an arbitrary non-SSL TCP service on port 12345, and I want to put stunnel in front of it to secure the traffic. When stunnel is in front of it, does that mean the port is now TLS/SSL-only, or can you still connect to it unencrypted? Basically, I want to make sure this service can only be accessed through SSL/TLS via stunnel and in no other way.

    Clarification: I want to make sure only the stunnel port is open. This is answered in the comments: the unsecured port should be firewalled, and preferably bound to localhost.
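
    stunnel itself does not block anything; it only adds a TLS listener. What closes the plaintext path is binding the backend to loopback (or firewalling it). A minimal stunnel.conf sketch, with illustrative ports and paths:

      ; stunnel terminates TLS on the only externally reachable port
      ; and forwards decrypted traffic to the loopback-only service.
      cert = /etc/stunnel/stunnel.pem

      [myservice]
      accept  = 0.0.0.0:12346
      connect = 127.0.0.1:12345   ; backend must listen on 127.0.0.1 only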

  • (Free) SSH tunneling to S60?

    - by Jawa
    What solutions are there for SSH tunneling of certain ports on a Symbian S60 mobile phone? My goal is to rsync files to/from the phone, but the only S60 SSH forwarding software I know of is pretty old and, most importantly, not free (sshforwarding.com).

  • How can I specify a nameserver's port number in OS X? [duplicate]

    - by Cofyc
    This question already has an answer here: How is DNS lookup configured for OSX Mountain Lion?

    The resolver manual says:

      The address may optionally have a trailing dot followed by a port
      number. For example, 10.0.0.17.55 specifies that the nameserver at
      10.0.0.17 uses port 55.

    But it doesn't work, either in /etc/resolv.conf or in files under /etc/resolver/:

      208.67.222.222.5353

    Can I specify a non-default port number for a nameserver in OS X?

    Update: OS X doesn't use /etc/resolv.conf; it uses the files under /etc/resolver/. I have written a file named dev with the content '127.0.0.1' to route all DNS queries for *.dev domains to a local DNS server (127.0.0.1). But I cannot specify the DNS server's port there; it uses 53 anyway. Maybe there is no way to specify the port number under OS X?
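
    One avenue worth testing (taken from the option list in resolver(5); verify on the OS X version in question): the files under /etc/resolver/ accept a separate port line next to nameserver:

      # /etc/resolver/dev - hypothetical sketch
      nameserver 127.0.0.1
      port 5353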

  • iptables forward rule not working in openwrt

    - by Udit Gupta
    I am trying to apply some iptables forwarding rules on OpenWrt. Here is my scenario: my server has two interfaces, ath0 and br-lan. br-lan is connected to the internet and ath0 to a private network. The other machine in the network also has an ath0, which connects to the server's ath0, and they are able to ping each other. Now I want the other machine to reach the internet through the server's br-lan, so I thought of an iptables forwarding rule. Here is what I tried.

    Server:

      $ ping 1.1.1.6    # the client's ath0 IP - works fine
      $ iptables -A FORWARD -i ath0 -o br-lan -j ACCEPT
      $ /etc/init.d/firewall restart

    Client:

      $ ping 1.1.1.5          # the server's ath0 IP - works fine
      $ ping 132.245.244.60   # the server's br-lan IP - not working

    I am new to iptables and OpenWrt. What am I doing wrong here? Any other suggestions for my scenario would also help.
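
    For comparison, a setup of this shape usually needs more than the single FORWARD rule: forwarding enabled in the kernel, a return-path rule, NAT on the internet-facing side, and a default route on the client. A hedged sketch reusing the question's interface names and addresses:

      # on the server
      sysctl -w net.ipv4.ip_forward=1                # enable routing at all
      iptables -A FORWARD -i ath0 -o br-lan -j ACCEPT
      iptables -A FORWARD -i br-lan -o ath0 \
               -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -t nat -A POSTROUTING -o br-lan -j MASQUERADE   # hide 1.1.1.x

      # on the client: send non-local traffic via the server's ath0
      route add default gw 1.1.1.5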

  • Can I run a web site from my home network without jeopardizing other computers on my LAN?

    - by Alchemical
    I have a home LAN with five computers and a NAS, all connected to a Linksys router, which is connected to my Cox cable modem. I'm interested in having one of my computers run an IIS-based web site and having it be accessible from the internet with a static IP. However, I do not want to jeopardize the safety of the other computers on my home network! Is there any way to do this safely, or as safely as possible? I may also want to run an FTP server from this computer. Finally, I would optionally like to allow remote access to this computer from the internet, but it seems to me that that may substantially increase the security risk to the other computers.
