Search Results

Search found 6397 results on 256 pages for 'ssh agent'.

Page 191 of 256

  • Not able to get a response back in Java code during HTTP GET on S60

    - by Rishabh
    Hi, I am using NetBeans to develop an application for S60. I made one page for user authentication and I am using a .NET WCF service to authenticate the user. I am able to send data to the .NET service with an HTTP GET, but I am not able to get the response back in my Java code. I have tested the URL in Fiddler and it works fine, returning response code 200, but I never get a response code in my Java code. I am using the following code. Is something wrong here?

        httpConn = (HttpConnection)Connector.open(url);
        httpConn.setRequestMethod(HttpConnection.GET);
        httpConn.setRequestProperty("User-Agent", "Profile/MIDP-2.1 Confirguration/CLDC-1.1");
        int respCode = httpConn.getResponseCode();

    Thanks, Rishabh

  • Unable to connect guest using VMWare Player

    - by eLAN
    I'm running a RedHat 5.3 server as a guest on Windows XP under VMware Player. The network setting is "Host Only", but I have tried all the other settings as well. I'm able to ping the guest machine, but I'm unable to connect to it in any other way, including the web server, Tomcat, Telnet, and ssh. All of the services above work from within the guest (using localhost). The guest's firewall and SELinux are disabled. Any idea on what I should check next? Every idea will be appreciated... Thanks, Ilan

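    A first check worth running from inside the guest (a hedged sketch; assumes standard RedHat tooling):

        # confirm the services listen on all interfaces, not just 127.0.0.1
        # (a common cause of "ping works, nothing else does")
        netstat -tln | grep -E ':(22|23|80|8080)\b'
        # confirm no firewall rules survived the "disabled" setting
        iptables -L -n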

  • Lost sudo/su on Amazon EC2 instance

    - by barrycarter
    I have an Amazon EC2 instance. I can log in just fine, but neither "su" nor "sudo" work now (they worked fine previously): "su" requests a password, but I log in using ssh keys, and I don't think the root user even has a password. "sudo <anything>" does this:

        sudo: /etc/sudoers is owned by uid 222, should be 0
        sudo: no valid sudoers sources found, quitting

    I probably did "chown ec2-user /etc/sudoers" (or, more likely, "chown -R ec2-user /etc", because I was sick of rsync failing), so this is my fault. How do I recover? I stopped the instance and tried the "View/Change User Data" option on the AWS EC2 console, but this didn't help. EDIT: I realize I could kill this instance and create a new one, but I was hoping to avoid something that extreme.

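    One recovery path that avoids killing the instance is repairing the volume from a second instance (a hedged sketch; the instance/volume IDs and device names are hypothetical placeholders, and it assumes the classic EC2 API tools):

        # stop the broken instance and move its root volume to a helper instance
        ec2-stop-instances i-broken
        ec2-detach-volume vol-root
        ec2-attach-volume vol-root -i i-helper -d /dev/sdf

        # on the helper: mount the volume and put the ownership back
        sudo mount /dev/sdf /mnt/broken
        sudo chown root:root /mnt/broken/etc/sudoers
        sudo chmod 0440 /mnt/broken/etc/sudoers
        # if "chown -R ec2-user /etc" really ran, more of /etc needs the same
        # treatment -- compare against a healthy instance before going -R
        sudo umount /mnt/broken
        # then re-attach the volume as the original instance's root device and start it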

  • Have Ubuntu 9.10 desktop, just got Macbook Pro. Share over Samba, NFS, other?

    - by miamisoftware
    Hi everyone. As the title says, I have and love my Ubuntu 9.10 desktop (I use it for programming). I just got a MacBook Pro (Snow Leopard), and for things like Documents I'm trying to figure out the easiest way to share my Ubuntu desktop with the MacBook Pro. Should I use Samba or NFS, and is it easy to configure one of them (or something else) for in-network access only (192.168.1.x)? It took me about 2 days to find and set up MacFUSE and Macfusion for sshfs to the Fedora web server, and I'm hoping there's something much easier for this in-network access. But if it is suggested I go the ssh route, I can do that. Are there any security problems with either Samba or NFS? I don't know much about AFP (the Apple protocol), so I haven't brought it up. Thanks in advance.

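    For LAN-only sharing, NFS is probably the path of least resistance, since OS X mounts it natively; a hedged sketch (the export path and subnet are examples):

        # on the Ubuntu box: install the NFS server and export to the LAN only
        sudo apt-get install nfs-kernel-server
        echo '/home/me/shared 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
        sudo exportfs -ra

        # on the Mac: Finder > Go > Connect to Server ("nfs://192.168.1.x/home/me/shared"),
        # or from Terminal:
        # sudo mount -t nfs 192.168.1.x:/home/me/shared /private/mnt/shared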

  • How do I modify this download function in Python?

    - by TIMEX
    Right now, it's iffy: gzip, images... sometimes it doesn't work. How do I modify this download function so that it can work with anything, regardless of gzip or any header? How do I automatically detect whether the response is gzipped? I don't want to always pass True/False like I do right now.

        def download(source_url, g = False, correct_url = True):
            try:
                socket.setdefaulttimeout(10)
                agents = ['Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)','Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.1)','Microsoft Internet Explorer/4.0b1 (Windows 95)','Opera/8.00 (Windows NT 5.1; U; en)']
                ree = urllib2.Request(source_url)
                ree.add_header('User-Agent',random.choice(agents))
                ree.add_header('Accept-encoding', 'gzip')
                opener = urllib2.build_opener()
                h = opener.open(ree).read()
                if g:
                    compressedstream = StringIO(h)
                    gzipper = gzip.GzipFile(fileobj=compressedstream)
                    data = gzipper.read()
                    return data
                else:
                    return h
            except Exception, e:
                return ""

  • Upload a directory recursively to an FTP server

    - by Nicolas Raoul
    I am writing a Linux shell script to copy a local directory to a remote server (removing any existing files). Local server: the ftp and lftp commands are available, but no ncftp or any graphical tools. Remote server: only accessible via FTP; no rsync, SSH, or FXP. I am thinking about listing the local and remote files to generate an lftp script and then running it. Is there a better way? Note: uploading only modified files would be a plus, but is not required.

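    lftp can already do this in one shot with its mirror command, so a generated script may not be needed; a hedged sketch (host, credentials and paths are placeholders):

        # push the local directory to the server; --delete removes remote files
        # that no longer exist locally, --only-newer skips unmodified files
        lftp -u user,password ftp.example.com -e "
          mirror --reverse --delete --only-newer --verbose /local/dir /remote/dir;
          bye
        "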

  • Remote access to BIOS?

    - by j-g-faustus
    I have a home server running headless (i.e. without a monitor), using ssh for remote access. This works fine most of the time, but I still need a graphics card, and I still need to drag out a monitor whenever I have to access the BIOS, (re-)install the OS, and the like. I know that there are business servers that let you control everything remotely (over Ethernet), including power-up and BIOS access. Is this type of functionality available for "prosumer" class hardware? If so, where does it sit: should I look for motherboard support, a PCI-e card, or an external device? And does this type of functionality have a name, so I know what to google for?

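    The name to google for is out-of-band management: a BMC speaking IPMI (HP iLO and Dell DRAC are the vendor flavours; Intel AMT appears on some prosumer boards), usually as motherboard support plus a dedicated or shared NIC. A hedged sketch of what driving it looks like, assuming ipmitool and a BMC reachable at 192.168.1.50:

        # power control with no OS or monitor involved
        ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis power status
        ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis power on
        # serial-over-LAN console: watch POST and enter the BIOS remotely
        ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret sol activate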

  • Can I remove the ctrl-z key binding in my shell?

    - by Nagel
    The background for this question: I currently have to do a lot of my work in a terminal over ssh, and I use screen quite a bit. I found screen's ctrl-a key binding annoying, since I'm accustomed to using ctrl-a to go to the beginning of a line, so I changed it to ctrl-z. The only problem with this is that when I'm in Matlab and think I am in screen but am not, pressing ctrl-z will instantly kill my Matlab session, because ctrl-z is the key binding for suspending processes on *nix. So the question is: can I remove the key binding for ctrl-z in my shell so that it no longer suspends a process? My shell is Terminal.app on OS X.

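    The suspend character is a terminal driver setting rather than a shell key binding, so stty can disable it; a minimal sketch:

        # disable the suspend character for the current terminal session
        stty susp undef
        # add the line to ~/.bash_profile (or ~/.profile) to make it permanent;
        # "stty susp '^Z'" restores the default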

  • Routing Traffic on Ubuntu to give Raspberry PI Internet Access

    - by Scruffers
    I'm hoping someone can point me in the right direction for setting up my Linux (Ubuntu 12.04) box to route traffic from eth0 to wlan0. I'll try and explain the problem I am trying to solve: I currently have two separate networks:

        [RaspberryPi/eth0] 192.168.2.2 / 255.255.255.0
                ^
                |
                v
        [Ubuntu/eth0] 192.168.2.1 / 255.255.255.0

    And:

        [Ubuntu/wlan0] 192.168.1.100 / 255.255.255.0
                ^
                |
                v
        [ADSL router] 192.168.1.1 / 255.255.255.0

    So currently, if I want to access the RaspberryPI, I can SSH from the Ubuntu box to the PI. And if I want to use the Internet, I have full access from the Ubuntu box, but nothing from the RaspberryPI - the two networks are partitioned. What I would like to do is configure things so that the RaspberryPI has Internet access via the Ubuntu box and out to the Internet. I tried to create a bridge, but got the message "wlan0: operation not supported" (the wireless chipset is a Ralink RT3062). I'm sure giving the Raspberry PI Internet access should be easy to do in this configuration, but I am a bit lost - can someone point me in the right direction please?

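    Since that wireless chipset refuses to bridge, NAT on the Ubuntu box is the usual fallback; a hedged sketch:

        # enable forwarding and masquerade the Pi's subnet out through wlan0
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A POSTROUTING -o wlan0 -s 192.168.2.0/24 -j MASQUERADE
        sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # on the Pi: use 192.168.2.1 as the default gateway and set a DNS server,
        # e.g. route add default gw 192.168.2.1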

  • "ssh_exchange_identification: Connection closed by remote host lost connection" when running cron job

    - by grautur
    I have a Ruby script that connects to a remote machine via ssh and executes a command. The script runs fine when I just run it in my terminal. In my crontab I have:

        1 * * * * /bin/bash -l -c 'ruby myfile.rb'

    If I go ahead and run /bin/bash -l -c 'ruby myfile.rb' myself, everything executes fine. But when cron itself executes the job, I get an "ssh_exchange_identification: Connection closed by remote host" error. What's the cause of this? How do I fix it?

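    Cron runs with a stripped-down environment (no SSH_AUTH_SOCK, a different PATH), so a useful first step is capturing ssh's own diagnostics from inside cron; a hedged sketch:

        # temporary cron entry: run the ssh step verbosely and log everything
        ssh -vvv -o BatchMode=yes user@remotehost 'true' >/tmp/cron-ssh.log 2>&1
        # things worth looking for in the log: which key (if any) is offered,
        # and whether the remote side drops the connection immediately --
        # hosts.deny / denyhosts throttling rapid repeated connections can
        # produce exactly this error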

  • Cannot access site via IP / hostname

    - by DaveB
    I am renting a VPS with Debian installed, running JBoss AS6 for my web app. I recently had some problems with my DNS host: they messed up the A-records for my domain, which caused some new A-records to be added by mistake. The DNS problem is now sorted and the domain is working OK; however, I noticed that the web server no longer responds via direct IP or hostname in a web browser (although it pings OK and I can SSH in using the hostname). Is there any explanation for this? I am using rinetd to forward traffic from port 80 to port 8080, but that has been fine for a while. Any suggestions would be appreciated. Regards

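    Since the browser path goes client -> rinetd (port 80) -> JBoss (port 8080), a quick way to see which hop drops the request (a hedged sketch; the external IP is a placeholder):

        # is rinetd still listening on port 80?
        netstat -tlnp | grep ':80 '
        # does JBoss answer locally, bypassing rinetd?
        curl -v http://127.0.0.1:8080/
        # does the forwarding work when addressed by IP from outside?
        curl -v http://198.51.100.7/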

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe Irony

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server, also running Windows Server 2003, that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes.

    Observations:

    - When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen.
    - There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown Event the next morning, when I have to hard reset the server to get it to boot.
    - 90% of the time the server does not boot cleanly; it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs, all I can do is hard reset it and try again.
    - Even after a successful boot and chkdsk /r operation, if you reboot the machine, there is a 90% chance it won't boot again cleanly.

    The back story: this server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up, because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up, and switched all my users to the temporary. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour every day for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure-to-boot-cleanly behavior.

    This weekend I decided to monitor the file server through the entire backup job. I RDPd into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was still getting real-time updates about CPU and memory usage - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: the screen was refreshing and I was getting real-time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. In truth, I think the server had already locked up; the video card just hadn't figured it out yet.

    I went back into my bag of tricks: driving to the office and hard resetting the server over and over again when it hangs at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I had exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD, and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid that as soon as I try to back that data up again I will be back at square one.

    So let me sum things up. Here is what I've done so far to troubleshoot this server:

    - Deleted and recreated the RAID 5 sets. Initialized the drives. Reloaded the server with a fresh Server 2003 install.
    - Confirmed with Dell that I have installed the latest, Dell-approved BIOS and NIC drivers.
    - Uninstalled / reinstalled the Backup Exec Remote Agent.
    - Uninstalled the Trend Micro A/V client.
    - Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up.
    - Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem.

    Help confirm or deny the following assumptions:

    - There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup.
    - This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation.
    - This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.

    Any help is appreciated. The irony is almost too much to bear: backing up my data is what is jeopardizing it.

  • Looking for a router-like web interface for my Debian gateway.

    - by marcusw
    Hey, I need a web interface program for my Debian gateway with the features of a router's interface. Specifically, I must be able to easily:

    - Forward ports to various clients on the LAN, or to the router itself (it's also a server)
    - Manage a DHCP server, preferably including DHCP reservation for certain MACs
    - Get a list of the connected DHCP clients
    - (optionally) See which clients are the most active bandwidth-wise (something like iftop)

    Alternatively, it could be a graphical app which I could tunnel over ssh. No command-line programs please... I'm used to doing this stuff with a point-and-click interface. I'm not averse to command-line setup; I just need to be able to reconfigure things graphically. I have a working LAMP setup. I've tried webmin, but it didn't satisfy the "easy" part... too many clicks and too many menu options.

  • wildcard in httpd conf file?

    - by Joe
    Here is an example httpd config I'm currently using:

        <VirtualHost 123.123.123.123:80>
            ServerName mysite.com
            ServerAlias www.mysite.com
            DocumentRoot /home/folder
        </VirtualHost>

    I'm wondering: is it possible to have a wildcard for the ServerName & ServerAlias values? The reason for asking is that I have some software that is shared among multiple URLs, all controlled in a CMS, and it's kind of a pain to add new domains via ssh every time. And before someone points out a security hole, the software does check the current URL before serving any web pages :)

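    For what it's worth, ServerAlias accepts wildcards, so one vhost can cover a whole family of hostnames; a hedged sketch (the reload commands assume apachectl is available):

        # in the vhost:
        #   ServerName mysite.com
        #   ServerAlias *.mysite.com
        # (fully dynamic mass hosting is what mod_vhost_alias is for)
        apachectl configtest && apachectl graceful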

  • Successful su for user by root in /var/log/auth.log

    - by grs
    I have these sorts of entries in my /var/log/auth.log:

        Apr 3 12:32:23 machine_name su[1521]: Successful su for user1 by root
        Apr 3 12:32:23 machine_name su[1654]: Successful su for user2 by root
        Apr 3 12:32:24 machine_name su[1772]: Successful su for user3 by root

    The situation:

    - All users are real accounts in /etc/passwd;
    - None of the users has its own crontab;
    - All of those users logged in to the machine some time ago via SSH or NoMachine - the time varies from a few minutes to a few hours;
    - No cron jobs are scheduled to run at that time, and anacron is removed.

    I can see similar entries on other days and at other times. The common part is that the users are logged in when it appears. It does not appear during login, but some time afterwards. This machine has a similar setup to a few others, but it is the only one where I see these entries. What causes them? Thanks

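    One way to catch whatever is issuing those su calls is to audit executions of su itself; a hedged sketch, assuming auditd is installed:

        # record every execution of su, tagged for searching
        sudo auditctl -w /bin/su -p x -k su_watch
        # after the next batch of log entries appears:
        sudo ausearch -k su_watch --start recent
        # the records include the PID and parent process of the caller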

  • What is a good solution for an adaptive iptables daemon?

    - by Matt
    I am running a series of web servers and already have a pretty good set of firewall rules set up; however, I'm looking for something to monitor the traffic and add rules as needed. I have denyhosts monitoring for bad SSH logins, and that's great - but I'd love something I could apply to the whole machine that would help prevent brute force attacks against my web applications as well, and add rules to block IPs that display evidence of common attacks. I've seen APF, but it looks as though it hasn't been updated in several years. Is it still in use, and would it be good for this? Also, what other solutions are out there that would manipulate iptables to behave in some adaptive fashion? I'm running Ubuntu Linux, if that helps.

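    fail2ban is the actively maintained choice in this space: like denyhosts, but driven by arbitrary log regexes, so it covers web applications too. A hedged sketch of enabling one of its bundled Apache jails (paths and filter names vary by version):

        sudo apt-get install fail2ban
        # /etc/fail2ban/jail.local:
        #   [apache-auth]
        #   enabled  = true
        #   port     = http,https
        #   logpath  = /var/log/apache*/*error.log
        #   maxretry = 6
        sudo service fail2ban restart
        sudo fail2ban-client status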

  • A problem with socket programming in Perl

    - by isu
    I wrote this code:

        #!/usr/local/bin/perl
        use strict;
        use LWP::UserAgent;
        my $ua = new LWP::UserAgent(agent => 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.5) Gecko/20060719 Firefox/1.5.0.5');
        $ua->proxy([qw(http https)] => 'http://203.185.28.228:1080'); # that is just socks:port
        my $response = $ua->get("http://www.google.com");
        print $response->code,' ', $response->message,"\n";

    But when I execute it, I get this error:

        500 Can't connect to 203.185.28.228:1080 (connect: timeout)

    What am I going to do?

  • How to connect with MySQL server if it won't connect via the socket?

    - by cwd
    I have an account on a shared server. I have jailshell access and also phpMyAdmin. I want to run mysql commands via SSH, but I'm getting an error:

        $ mysql -u mySqlUser -p mySqlPw
        Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'

    I can connect with PHP and phpMyAdmin, so would it be possible to call mysql from the shell and have it connect via an IP and port instead of the socket? The file /var/lib/mysql/mysql.sock does not exist - maybe that is intentional - and the only thing in /etc/my.cnf is:

        [mysqld]
        skip-innodb

    More info: I don't have access to change system settings. I did a search in /var for mysql.sock but found nothing. However, phpMyAdmin might be connecting via a socket somehow. Really, it would just be great if I could connect via IP. I also tried these two syntaxes:

        $ mysql -u mySqlUser -p mySqlPw -h localhost
        $ mysql -u mySqlUser -p mySqlPw -h localhost -P 3306

    Both with the same result:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

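    Two things worth knowing about the mysql client here: "-h localhost" still uses the socket, so forcing TCP takes 127.0.0.1, and "-p mySqlPw" with a space does not do what it looks like. A hedged sketch:

        # force a TCP connection instead of the unix socket
        mysql -u mySqlUser -p -h 127.0.0.1 -P 3306
        # or, fully explicit:
        mysql -u mySqlUser -p --protocol=TCP -h 127.0.0.1
        # note: "-p mySqlPw" prompts for a password and treats mySqlPw as the
        # database name; to pass it inline it must be "-pmySqlPw" (no space)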

  • Cannot Login To phpMyAdmin

    - by Zach Dziura
    I'm running a simple LAMP server at home, from which I host a personal blog. The server is running Arch Linux with the latest-and-greatest versions of Apache, MySQL, and PHP. In order to easily maintain the databases, I installed phpMyAdmin. However, I cannot log in. If I SSH into the server and run mysql -u <user> -p <password>, no errors show up and I'm immediately placed at the MySQL prompt. No problem. However, when I try to log in with phpMyAdmin, using those exact same credentials, nothing happens. No errors, no nothing; I'm just redirected back to the login page. Did I do something wrong? Thanks in advance for any and all answers!

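    A silent bounce back to the login page is usually a session/cookie problem rather than bad credentials; a hedged sketch of the usual checks (the paths are typical for Arch but may differ on your install):

        # can PHP write its session files?
        php -r 'echo session_save_path(), "\n";'
        ls -ld /var/lib/php/sessions    # fix ownership/permissions if needed
        # cookie auth also needs a non-empty blowfish secret
        grep blowfish_secret /etc/webapps/phpmyadmin/config.inc.php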

  • Is Rsync like subversion, but for a server?

    - by johnlai2004
    I'm trying to learn how to use rsync. I want to create daily backups of my production server. Right now I run the command:

        rsync -azr /var/www/* [email protected]:/var/www

    Now let's say one day I want to roll back the /var/www/ directory on my production server to last month's version. How do I tell rsync to retrieve version N? Reading that rsync only copies differences between src and dest, I assumed rsync works like subversion, where you commit changes to a destination and keep track of every version, with the option to check out any version at any time. Is that the way rsync works? Is it like subversion but for an entire server? That would be great, because then it means I don't have to do full ssh copies for my nightly backups.

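    It isn't: rsync keeps no history, and each run simply overwrites the destination. The usual trick for "version N" is --link-dest, which builds dated snapshot directories where unchanged files are hard links and cost almost nothing; a hedged sketch (host and paths are placeholders):

        # each day gets its own directory on the backup host
        TODAY=$(date +%F)
        rsync -az --delete --link-dest=/backups/www/latest \
            /var/www/ user@example.com:/backups/www/$TODAY/
        ssh user@example.com "ln -sfn /backups/www/$TODAY /backups/www/latest"
        # rolling back to last month is then just copying from that date's directory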

  • How to force rsync to use destination directory as root

    - by thepurplepixel
    I have a simple script to one-way-sync files/folders within a directory:

        #!/bin/bash
        HOST='<hostname>'
        USER='<username>'
        DIR='/downloads/'
        SOURCE='/srv/torrents'
        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats --progress -i $SOURCE $HOST:$DIR
        find $SOURCE -type d -empty -prune -exec rmdir -p \{\} \;

    However, when this rsync operation runs, it creates a folder, torrents, in /downloads on the destination machine. How can I force rsync to put all folders & files from /srv/torrents (remote) into /downloads/ (local) instead of creating /downloads/torrents as a separate directory?

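    This is rsync's trailing-slash rule: a source of /srv/torrents means "copy the directory itself", while /srv/torrents/ means "copy its contents". A one-character fix to the rsync line above (sketch):

        # trailing slash on the source: send the contents, not the directory
        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats --progress -i "$SOURCE/" "$HOST:$DIR"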

  • Help me upgrade my pf.conf for OpenBSD 4.7

    - by polemon
    I'm planning on upgrading my OpenBSD to 4.7 (from 4.6) and, as you may or may not know, they changed the syntax for pf.conf. This is the relevant portion from the upgrade guide:

    pf(4) NAT syntax change: As described in more detail in this mailing list post, PF's separate nat/rdr/binat (translation) rules have been replaced with actions on regular match/filter rules. Simple rulesets may be converted like this:

        nat on $ext_if from 10/8 -> ($ext_if)
        rdr on $ext_if to ($ext_if) -> 1.2.3.4

    becomes

        match out on $ext_if from 10/8 nat-to ($ext_if)
        match in on $ext_if to ($ext_if) rdr-to 1.2.3.4

    and...

        binat on $ext_if from $web_serv_int to any -> $web_serv_ext

    becomes

        match on $ext_if from $web_serv_int to any binat-to $web_serv_ext

    nat-anchor and/or rdr-anchor lines, e.g. for relayd(8), ftp-proxy(8) and tftp-proxy(8), are no longer used and should be removed from pf.conf(5), leaving only the anchor lines. Translation rules relating to these and spamd(8) will need to be adjusted as appropriate. N.B.: Previously, translation rules had "stop at first match" behaviour, with binat being evaluated first, followed by nat/rdr depending on direction of the packet. Now the filter rules are subject to the usual "last match" behaviour, so care must be taken with rule ordering when converting.

    pf(4) route-to/reply-to syntax change: The route-to, reply-to, dup-to and fastroute options in pf.conf move to filteropts;

        pass in on $ext_if route-to (em1 192.168.1.1) from 10.1.1.1
        pass in on $ext_if reply-to (em1 192.168.1.1) to 10.1.1.1

    becomes

        pass in on $ext_if from 10.1.1.1 route-to (em1 192.168.1.1)
        pass in on $ext_if to 10.1.1.1 reply-to (em1 192.168.1.1)

    Now, this is my current pf.conf:

        # $OpenBSD: pf.conf,v 1.38 2009/02/23 01:18:36 deraadt Exp $
        #
        # See pf.conf(5) for syntax and examples; this sample ruleset uses
        # require-order to permit mixing of NAT/RDR and filter rules.
        # Remember to set net.inet.ip.forwarding=1 and/or net.inet6.ip6.forwarding=1
        # in /etc/sysctl.conf if packets are to be forwarded between interfaces.
        ext_if="pppoe0"
        int_if="nfe0"
        int_net="192.168.0.0/24"
        polemon="192.168.0.10"
        poletopw="192.168.0.12"
        segatop="192.168.0.20"
        table <leechers> persist
        set loginterface $ext_if
        set skip on lo
        match on $ext_if all scrub (no-df max-mss 1440)
        altq on $ext_if priq bandwidth 950Kb queue {q_pri, q_hi, q_std, q_low}
        queue q_pri priority 15
        queue q_hi priority 10
        queue q_std priority 7 priq(default)
        queue q_low priority 0
        nat-anchor "ftp-proxy/*"
        rdr-anchor "ftp-proxy/*"
        nat on $ext_if from !($ext_if) -> ($ext_if)
        rdr pass on $int_if proto tcp to port ftp -> 127.0.0.1 port 8021
        rdr pass on $ext_if proto tcp to port 2080 -> $segatop port 80
        rdr pass on $ext_if proto tcp to port 2022 -> $segatop port 22
        rdr pass on $ext_if proto tcp to port 4000 -> $polemon port 4000
        rdr pass on $ext_if proto tcp to port 6600 -> $polemon port 6600
        anchor "ftp-proxy/*"
        block
        pass on $int_if queue(q_hi, q_pri)
        pass out on $ext_if queue(q_std, q_pri)
        pass out on $ext_if proto icmp queue q_pri
        pass out on $ext_if proto {tcp, udp} to any port ssh queue(q_hi, q_pri)
        pass out on $ext_if proto {tcp, udp} to any port http queue(q_std, q_pri)
        #pass out on $ext_if proto {tcp, udp} all queue(q_low, q_hi)
        pass out on $ext_if proto {tcp, udp} from <leechers> queue(q_low, q_std)
        pass in on $ext_if proto tcp to ($ext_if) port ident queue(q_hi, q_pri)
        pass in on $ext_if proto tcp to ($ext_if) port ssh queue(q_hi, q_pri)
        pass in on $ext_if proto tcp to ($ext_if) port http queue(q_hi, q_pri)
        pass in on $ext_if inet proto icmp all icmp-type echoreq queue q_pri

    If someone has experience with porting a 4.6 pf.conf to 4.7, please help me make the correct changes. OK, this is how far I've got. I commented out nat-anchor and rdr-anchor, as described in the guide:

        #nat-anchor "ftp-proxy/*"
        #rdr-anchor "ftp-proxy/*"

    And this is how I've "converted" the rdr rules:

        #nat on $ext_if from !($ext_if) -> ($ext_if)
        match out on $ext_if from !($ext_if) nat-to ($ext_if)
        #rdr pass on $int_if proto tcp to port ftp -> 127.0.0.1 port 8021
        match in on $int_if proto tcp to port ftp rdr-to 127.0.0.1 port 8021
        #rdr pass on $ext_if proto tcp to port 2080 -> $segatop port 80
        match in on $ext_if proto tcp tp port 2080 rdr-to $segatop port 80
        #rdr pass on $ext_if proto tcp to port 2022 -> $segatop port 22
        match in on $ext_if proto tcp tp port 2022 rdr-to $segatop port 22
        rdr pass on $ext_if proto tcp to port 4000 -> $polemon port 4000
        match in on $ext_if proto tcp tp port 4000 rdr-to $polemon port 4000
        rdr pass on $ext_if proto tcp to port 6600 -> $polemon port 6600
        match in on $ext_if proto tcp tp port 6600 rdr-to $polemon port 6600

    Did I miss anything? Is the anchor for ftp-proxy OK as it is now? Do I need to change something in the other "pass in on..." lines?

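    For what it's worth, a hedged second pass over that converted block: the new match rules spell "to port" as "tp port" in four places, the 4000/6600 rdr lines were left uncommented, and under the new last-match semantics each "match ... rdr-to" still needs a pass rule to let the traffic in (or the rdr-to can go directly on a pass rule). A sketch of the cleaned rules:

        match out on $ext_if from !($ext_if) nat-to ($ext_if)
        match in on $int_if proto tcp to port ftp rdr-to 127.0.0.1 port 8021
        match in on $ext_if proto tcp to port 2080 rdr-to $segatop port 80
        match in on $ext_if proto tcp to port 2022 rdr-to $segatop port 22
        match in on $ext_if proto tcp to port 4000 rdr-to $polemon port 4000
        match in on $ext_if proto tcp to port 6600 rdr-to $polemon port 6600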

  • linux: selective sudo access for a particular command

    - by bguiz
    Hi, is it possible to grant a particular user sudo access for one particular command only? Thanks

    More info: we farm out lengthy optimisation runs to each other's boxes over ssh. These runs take hours, sometimes days. The shutdown command can only be run with sudo. Being conscious of my environmental footprint, I would like to give the initiator(s) of these runs sudo access to the shutdown command on my box, without sudo access for everything else, so that they may shut down my machine when they no longer need it. I am aware that I can schedule a shutdown before I leave my box, but I am looking for a better solution.

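    Yes: sudoers supports per-command grants; a hedged sketch (the username and path are examples, and the file should only ever be edited via visudo):

        # let alice run shutdown, and nothing else, without a password
        alice ALL=(root) NOPASSWD: /sbin/shutdown
        # she then invokes exactly:
        #   sudo /sbin/shutdown -h now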

  • Code Signing Identity does not match in my keychain, for Mac App Store development?

    - by larntin
    Hi,

    1. I already downloaded the "Apple Worldwide Developer Relations Certification Authority" certificate and added it to my keychain.
    2. My team leader had already created two certificates for Mac App Store development; I downloaded them and added them to my keychain.
    3. I used two methods to sign my app, but both failed. First, I added a code sign section in my .xcodeproj (3.2.5). Second, I used this script:

        productbuild --component ./bin/MAS_Release/MyApp.app /Applications --sign "3rd Party Mac Developer Application: My Company Co., Ltd." --product ./src/MyApp/MyApp-Info.plist MyApp.pkg

    But it failed with the message:

        Code Signing Identity '3rd Party Mac Developer Application: My Company Co., Ltd.' does not match any valid, non-expired, code-signing certificate in your keychain.

    I noticed that my certificates in the keychain don't have the small disclosure triangle. How can I make the small triangle appear? (When I import the certificates from my Agent, the triangle is absent.)

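    The missing disclosure triangle usually means the private key that generated the CSR is not in this keychain, so the certificate cannot form a signing identity; a hedged check from Terminal:

        # list identities actually usable for code signing; a certificate
        # without its private key will not show up here
        security find-identity -v -p codesigning
        # if it's missing: export the certificate together with its private key
        # as a .p12 from the machine that created the CSR, and import that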

  • setup advanced filtering and access restrictions on dd-wrt using iptables

    - by Nova deViator
    I have a Linksys WRT54GL router with DD-WRT installed, and I want to set up some advanced filtering that doesn't seem to be available through the "Access Restrictions" web GUI option. I guess I would be using iptables then. I have ssh access to the router and can run iptables, but I'm not very experienced with it. So here are my needs:

    - my policy would be to deny all first and then allow exceptions
    - allow all http (port 80) access to the WAN through wireless
    - allow all other traffic only to PCs with specific MAC addresses
    - allow internet access to a PC with a specific MAC address according to a schedule (let's say every day between 18:00-21:00)

    Is this possible to set up with iptables? Could somebody help me a bit with it, or should I go and RTFM?

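    All four requirements map onto stock netfilter matches, assuming the mac and time modules are compiled into the DD-WRT build; a hedged sketch (br0/vlan1 are DD-WRT's usual LAN bridge and WAN interface, and the MACs are placeholders):

        # default-deny for forwarded traffic, but keep replies flowing
        iptables -P FORWARD DROP
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # 1) anyone on wireless/LAN may reach web servers on the WAN
        iptables -A FORWARD -i br0 -o vlan1 -p tcp --dport 80 -j ACCEPT
        # 2) trusted machines may do anything
        iptables -A FORWARD -i br0 -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT
        # 3) one PC gets access only between 18:00 and 21:00
        iptables -A FORWARD -i br0 -m mac --mac-source 66:77:88:99:AA:BB \
            -m time --timestart 18:00 --timestop 21:00 -j ACCEPT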
