Search Results

Search found 1571 results on 63 pages for 'daniel ribeiro'.

Page 24/63 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • How to forward a connection from one interface to another under Linux

    - by Daniel
    Hi, I have a Linux box with two network interfaces, eth0 and eth1. Through eth1 I can reach an internal website, say on port 8080; from outside the box, that network is not reachable. My question is: is there a way to set things up so that, from outside the box, there appears to be a web server listening on port 8080, and connections to it are automatically forwarded through eth1 to the internal site? I tried enabling IP forwarding and adding a static route, but it doesn't work. Thanks.
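
    A minimal iptables sketch of one common way to do this (port forwarding via DNAT plus masquerading towards the internal network); the internal server address 10.0.0.10 and the interface roles are assumptions for illustration:

        # assumption: eth0 faces the outside, eth1 faces the internal network
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # rewrite connections arriving on eth0:8080 to the (hypothetical) internal server
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 \
            -j DNAT --to-destination 10.0.0.10:8080

        # let the forwarded traffic through and hide its source behind eth1
        iptables -A FORWARD -p tcp -d 10.0.0.10 --dport 8080 -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE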

    Read the article

  • Running Emacs on Multiple TTYs in screen

    - by Daniel Kessler
    When working with Emacs over SSH, is there any way to spawn a new frame of the same Emacs session on a different terminal? In my use case I have screen running, so I have multiple terminals and can recover which pseudo-terminal each is attached to with pts. Suppose I have two "windows" (in GNU screen parlance). The first one is attached to /dev/pts/12 and the second one is attached to /dev/pts/13. I launch Emacs in the first window. Is there any way for me to start a new frame of the same session in the second window? I've been playing with passing arguments to make-frame, but it seems that the usage that lets me specify a terminal requires that a terminal object already exists, and I can't see any way to create a new terminal object.
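
    One sketch that sidesteps make-frame entirely: run Emacs as a server in the first screen window and attach a terminal frame from the second window with emacsclient (assuming Emacs 23 or later, where emacsclient -t opens a frame on the calling tty):

        # in the first screen window (/dev/pts/12): start Emacs and its server
        emacs -nw --eval '(server-start)'

        # in the second screen window (/dev/pts/13): a new frame of the same session
        emacsclient -t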

    Read the article

  • When run as a scheduled task, cannot save an Excel workbook when using the Excel.Application COM object in PowerShell

    - by Daniel Richnak
    I'm having an issue where I've automated creating an Excel.Application COM object, adding some data into a workbook, and then saving the document as an .xlsx. This works fine if: I'm already in the PowerShell interactive host and either run each command in sequence or execute it as a .ps1; I run it from cmd.exe using the syntax powershell.exe -command "c:\path\to\powershellscript.ps1"; or I create a scheduled task in Windows 7 / Server 2008 R2, use the above powershell.exe -command syntax, and select the mode "Run only when the user is logged on". It fails when I modify the same scheduled task but set it to "Run whether the user is logged on or not". Here's a sample script that illustrates the problem I'm having:

        $Excel = New-Object -Com Excel.Application
        $Excelworkbook = $Excel.Workbooks.Add()
        $Excelworkbook.SaveAs("C:\temp\test.xlsx")
        $Excelworkbook.Close()

    I have a theory that the COM object fails somehow if my profile isn't loaded / if it isn't run in an interactive session. Any ideas on which options to choose when creating the scheduled task, or which options to use when creating the Excel object or calling SaveAs()? Can anybody reproduce this? I've seen this behavior on both a Server 2008 R2 machine and Windows 7; I haven't tried other platforms.
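
    One workaround that is commonly suggested for Office COM automation in a non-interactive scheduled task is to create the Desktop folder that Excel expects under the system profile. Treat this as an assumption/sketch rather than a documented fix:

        # workaround sketch: folders Excel looks for when running without a user session
        New-Item -ItemType Directory -Force -Path 'C:\Windows\System32\config\systemprofile\Desktop'
        # for 32-bit Office on 64-bit Windows
        New-Item -ItemType Directory -Force -Path 'C:\Windows\SysWOW64\config\systemprofile\Desktop'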

    Read the article

  • Lost SSH Key, is it possible to log in via some other method now?

    - by Daniel Hai
    So I happened to commit the cardinal sin of administration: I changed the OpenSSH settings and accidentally closed that terminal before I could test them, and of course now I'm completely locked out. I was using password-less private key authentication. Am I completely screwed out of my server? This is a VPS -- is there a way I could have someone at the datacenter log in locally and redo the SSH settings?
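
    Assuming the VPS provider offers some out-of-band access (a serial/VNC console or a rescue image), one recovery sketch is to log in locally there, temporarily re-enable password authentication, and then repair the key setup; the exact commands depend on the distribution:

        # from the provider's local/serial console, as root
        vi /etc/ssh/sshd_config        # set: PasswordAuthentication yes (and undo the broken change)
        service ssh restart            # or: /etc/init.d/sshd restart, depending on distro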

    Read the article

  • Creating a network link between 2 very close buildings

    - by Daniel Johnson
    I'm working with a charity that has two adjacent, medium-sized, modern detached houses (in the UK): the buildings stand next to each other, less than 5 metres apart. They have DSL connected to a single computer in one of the buildings. They want to add a network with wireless, and they want it to work across both buildings. Being a charity, they need to keep costs down. The network would be used for sharing Word documents, e-mail, browsing and Skyping. My initial thought was to connect the buildings with fibre. So:

    Option 1: Use fibre between the buildings: sufficient cable and two TP-LINK MC100CM Fast Ethernet media converters, cost ~£80.00. But there is the extra cost and hassle of running the cable down and up the external walls, lifting and relaying paving, and burying it underground. Never having fitted fibre, I'm also a little worried about going up the wall and then bending the cable at 90 degrees to go through the wall and into the building.

    Option 2: Use two TP-Link TL-WA7510N high-powered outdoor 5 GHz 15 dBi wireless antennas to connect the buildings. There is a clear line of sight at first-floor level. Cost ~£100, and much easier to fit than fibre! Is using the TL-WA7510Ns overkill? Is there something more suitable? I had hoped to use some Netgear kit, e.g. two DGN2200s, one in each house, and also use them to provide the wireless link between the buildings. However, in bridge mode wireless client association is not available, and repeater mode with client association only supports WEP security, which isn't strong enough. Is there something similar that would be up to the job?

    Option 3: Connect the buildings with UTP cable. My concerns here are the risk of electric shock due to a difference of potential between the buildings (or are they so close that this shouldn't be an issue?) and protection from lightning strikes. Is fitting lightning arrestors expensive? And what can be done to mitigate the risk of shock?

    This all falls outside my area of expertise, so I would really appreciate some advice.

    Read the article

  • Apache + Plesk vhost problem: .htaccess ignored!

    - by DaNieL
    Hi guys, I have a problem with a simple Apache configuration. When a user asks for https://mydomain.com I have to redirect to https://www.mydomain.com, because my HTTPS certificate is only valid for the domain with www. I created vhost.conf in my /var/www/vhosts/mydomain.com/conf/ directory, containing:

        <Directory /var/www/vhosts/mydomain.com/httpsdocs>
            AllowOverride All
        </Directory>

    And my .htaccess file in /var/www/vhosts/mydomain.com/httpsdocs/ is:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^mydomain\.com
        RewriteRule ^(.*)$ https://www.mydomain.com/$1 [R=301,L]

    But it seems like the .htaccess is completely ignored. Any idea?
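
    A sketch of one Plesk-specific detail that may apply here, offered as an assumption based on older Plesk 8/9 layouts: overrides for the httpsdocs tree are read from vhost_ssl.conf rather than vhost.conf, and Plesk has to regenerate its Apache includes afterwards. Tool paths and flags vary between Plesk versions:

        # assumption: Plesk 8/9-style layout
        cp /var/www/vhosts/mydomain.com/conf/vhost.conf \
           /var/www/vhosts/mydomain.com/conf/vhost_ssl.conf
        # ask Plesk to rebuild the vhost includes, then reload Apache
        /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=mydomain.com
        /etc/init.d/httpd restart     # or apache2, depending on the distro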

    Read the article

  • How do I get a 300Mbps connection over 802.11n?

    - by Daniel Schaffer
    I just bought a new wireless setup consisting of a Cisco E2000 router, an Edimax 7718un USB adapter for my laptop and an Edimax 7728in PCI adapter for my HTPC (which isn't in a location I can run Cat5 to). I have to stay in the 2.4 GHz band because I have an iPhone and a Wii that will need to connect. I'm aware that 11g devices will drop speeds for 11n devices, but they aren't connected yet. The fastest connection I've been able to get, with the router within 5 feet of either the laptop or the HTPC, is 144 Mbps. The router has settings for "20 MHz" and "Auto (20 MHz or 40 MHz)" channel width, which I've set to "Auto". I haven't been able to find anything similar for either of the Edimax adapters. This is the first time I've dealt with 11n, so I'm not even sure what else could be causing a problem. How do I get up to 300 Mbps, or at least a fair bit closer?

    Read the article

  • mongod fork vs nohup

    - by Daniel Kitachewsky
    I'm currently writing process management software. One package we use is MongoDB. Is there any difference between launching mongod with

        mongod --fork --logpath=/my/path/mongo.log

    and launching it with

        nohup mongod >> /my/path/mongo.log 2>&1 < /dev/null &

    instead? My first thought was that --fork could spawn more processes and/or threads, and it was suggested to me that --fork could be useful for changing the effective user (dropping privileges). But we run everything under the same user (both the process manager and mongod), so is there any other difference? Thank you.

    Read the article

  • iptables & IP-routed network solution for host net and VMs' subnet

    - by Daniel
    I've got a Proxmox VE 2.1 KVM node on Debian and a bunch of guest VMs. This is how my networking looks:

        # network interface settings
        auto lo
        iface lo inet loopback

        # device: eth0
        auto eth0
        iface eth0 inet static
            address 175.219.59.209
            gateway 175.219.59.193
            netmask 255.255.255.224
            post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

    And I've got two working subnet solutions. The first:

        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.0.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up ip route add 10.10.0.1/24 dev vmbr0

    This way I can reach the internet, resolve outside hosts, and update and download everything I need, but I can't reach one guest VM from any other VM inside my network. The second solution allows me to communicate between VMs:

        auto vmbr1
        iface vmbr1 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE

    I can even NAT internal addresses:

        iptables -t nat -I PREROUTING -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345

    My inexperienced mind is ready to give each VM two network adapters: one for the first solution and another for the second (with slightly different addresses), but I'm pretty sure that's a dumb way to solve the problem and everything can be resolved via iptables/ip route rules that I haven't managed to create. I've tried a dozen "wizard manuals" and "howtos" to mix both solutions, but without success. Looking for advice (and good reading links for networking beginners).
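
    A sketch of one way the two setups are commonly combined on a single bridge (assumptions: eth0 is the uplink and the guests all sit in 10.10.0.0/24 behind vmbr0); the key difference from the second solution above is masquerading out of the physical uplink rather than the bridge itself:

        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
            # NAT guest traffic out through the real uplink (eth0), not the bridge
            post-up   iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE

    Guests attached to the same vmbr0 then reach each other directly through the bridge, while their outbound traffic is NATed via eth0; the DNAT rule from the question keeps working if -i eth0 is added to it.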

    Read the article

  • Apache: Assign SSL server / client certs to directories

    - by Daniel Amaya
    I have multiple directories on my system, e.g., /var/www/dir1 /var/www/dir2 /var/www/dir3 And what I'd like to do is to generate a server/client SSL certificate for each directory, and then set up each directory such that the client cert must match the server cert in order to access said directory. Now, if someone has the client cert for /var/www/dir2 and they try to access /var/www/dir1, they will be unable to do so since those directories use different certs. Each of these directories is hosted on the same domain (i.e., domain.com/dir1, domain.com/dir2). Now, the problem I am having is that I am not exactly sure how to accomplish this in Apache. (Also, I don't really care for domain.com to require SSL, but I do want the directories to require it.)
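
    A minimal mod_ssl sketch of one way to express this, assuming all client certificates come from a single CA and the target directory is encoded in each certificate's OU field; the file names and OU values are placeholders:

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile    /etc/ssl/server.crt
            SSLCertificateKeyFile /etc/ssl/server.key
            SSLCACertificateFile  /etc/ssl/client-ca.pem

            <Directory /var/www/dir1>
                SSLVerifyClient require
                SSLVerifyDepth  2
                SSLRequire %{SSL_CLIENT_S_DN_OU} eq "dir1-clients"
            </Directory>

            <Directory /var/www/dir2>
                SSLVerifyClient require
                SSLVerifyDepth  2
                SSLRequire %{SSL_CLIENT_S_DN_OU} eq "dir2-clients"
            </Directory>
        </VirtualHost>

    Per-directory SSLVerifyClient triggers a renegotiation when the directory is first accessed, and the SSLRequire expression rejects clients whose certificate was issued for a different directory.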

    Read the article

  • How do you open a folder in OSX using the keyboard?

    - by Daniel T.
    On Windows, pressing Enter when you highlight a folder in Windows Explorer will open that folder. On OSX, pressing Enter when you highlight a folder edits the folder's name (like F2 in Windows). Is there a keyboard shortcut to do the same thing on OSX, so that you open the folder instead? It doesn't have to be Enter, but I'd like to know if there's another hotkey that does it. The reason I ask is that I like to navigate through deep folder structures using the arrow keys for navigation and Enter to drill down into them.

    Read the article

  • Looking for software / something to automate some simple audio processing

    - by Daniel Magliola
    I'm looking for a way to take a 1-hour podcast MP3 file and split it into several 2-minute MP3s. Along the way, I'd also like to do a few things like amplify the volume. The problem I'm solving is that I have a crappy MP3 player that won't let me seek forward or backward, nor will it remember where I left off when I turn it off; plus, I listen to these in a seriously high-noise situation. Thus, I need to be able to skip forward in large chunks (2-5 minutes) to the point where I left off. Is there any decent way to do this? Audacity doesn't seem to have command-line capabilities. I'm willing to write some code, for example to call something over the command line to get how long the MP3 file is, work out how many pieces I'll have, and then say "create an MP3 with 0:00 to 2:00", "create an MP3 with 2:00 to 4:00", etc. I'm also willing to pay for the right tools if necessary. I also don't care how slow this runs, as long as I can automate it :-) I'm doing this on Windows. Any pointers / ideas? Thanks!
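
    A command-line sketch of one possible pipeline, assuming the ffmpeg and mp3splt tools (both have Windows builds); the file names are placeholders:

        rem boost the volume by roughly 6 dB (re-encodes the audio)
        ffmpeg -i podcast.mp3 -filter:a "volume=6dB" podcast-loud.mp3

        rem split into fixed 2-minute pieces; -t takes minutes.seconds
        mp3splt -t 2.00 -o podcast-@n podcast-loud.mp3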

    Read the article

  • TFS 2010 Build Definition and "Access to the path is denied" error?

    - by Daniel DiVita
    I am new to TFS with regard to build definitions. I have a build folder set up where I have set the permissions so that EVERYONE has full control. Here is the exact error I am getting:

        E:\Builds\PIMSite\PIM.Site\PIM_Site.metaproj: Unable to copy file "C:\Builds\1\PIM System\PIM Site Build\Binaries\HtmlAgilityPack.xml" to "..\PIM.Site\Bin\HtmlAgilityPack.xml". Access to the path '..\PIM.Site\Bin\HtmlAgilityPack.xml' is denied.

    I have tried everything. I have removed everything from that folder and can delete it just fine, so it is not being used by another process. Any thoughts?

    Read the article

  • Simple IPv6 Address Question

    - by Daniel
    Hello, I have a /96 block of IPv6 addresses and I'm wondering how I could find the next address in the block (since IPv6 addresses contain both digits and letters). I know the first addresses could be written with digits only, but I've yet to work out how to step through that amount of addresses in some kind of order. E.g.: what technique could I use to make sure I'll actually be able to use all of the addresses?
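
    A quick sketch of the underlying idea, assuming Python 3 is available: an IPv6 address is just a 128-bit integer written in hexadecimal, so "the next address" is the current one plus one, regardless of whether the last group looks like letters or digits:

        # hypothetical address; the hex groups are just numbers 0-15 written as 0-9a-f
        python3 -c "import ipaddress; print(ipaddress.ip_address('2001:db8::ff') + 1)"
        # prints: 2001:db8::100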

    Read the article

  • How to clean this Dell Precision M6400

    - by Daniel Pratt
    I have (well, OK, my employer has and I use) a Dell Precision M6400 notebook. It's a decent piece of hardware, but I have at least one major gripe: it's a dust and...uh...crumb (I repent! I repent!) magnet! And I cannot seem to exorcise the dust and crumbs from it! There is a strip of metal above the keyboard that is punched full of tiny holes. Well, maybe it's better to describe them as 'pits'. If a sufficiently small particle finds its way into one of those pits, there is only about a 50% chance that I will manage to get it out. Consequently, there is now a chorus of tiny particles silently chiding me about eating cookies and crackers whilst I browse the intarwebs. Does anyone have any suggestions about how I could remove these particles from the machine...while still preserving its function?

    Read the article

  • Developing a high-performance and scalable Zend Framework website [on hold]

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be exactly like it; that's just to give you an idea), and we have some concerns regarding performance and scalability. We are planning to use Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + nginx (or maybe MongoDB) + Memcache (or eAccelerator -- I understand this will create problems with scalability on multiple servers) + jQuery, or maybe throw Backbone.js in there, plus a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are: what do you think about my approach, and what solutions would you recommend for developing a high-performance, scalable application that is expected to have a lot of traffic, using PHP (Zend Framework 2)? I would be interested in your approach. I should note that I'm a Zend developer and have been working with Zend for over 3 years; this is why I'm choosing it.

    Read the article

  • What are the strengths and weaknesses of existing configuration management systems?

    - by Daniel C. Sobral
    I was looking here for some comparisons between CFEngine, Puppet, Chef, Bcfg2, AutomateIt and whatever other configuration management systems might be out there, and was very surprised I could find very little on Server Fault. For instance, I only knew of the first three links above -- the other two I found on a related Google search. So, I'm not interested in which one people think is the best, or which they like. I'd like to know the following: the configuration management system's name; why it was created (as opposed to using an existing solution); its relative strengths; its relative weaknesses; its license; and a link to the project and examples.

    Read the article

  • SSD suddenly full

    - by Daniel
    Today the hard drive of our server was suddenly full. Disk usage had stayed around 50% in the weeks and months before (old data is regularly expunged from the server). I deleted 10 GB of files in /var/tmp, which strangely freed 51 GB. Here is what I did:

        root@***:~# df -h
        Dateisystem    Size  Used Avail Use% Eingehängt auf
        /dev/sda3      139G  137G     0 100% /
        tmpfs          3,9G     0  3,9G   0% /lib/init/rw
        udev           3,9G  116K  3,9G   1% /dev
        tmpfs          3,9G     0  3,9G   0% /dev/shm
        /dev/sda1      985M   25M  910M   3% /boot

        root@***:/var# du -hs *
        3,3M  backups
        438M  cache
        9,4G  lib
        4,0K  local
        12K   lock
        76M   log
        24K   mail
        4,0K  opt
        88K   run
        184K  spool
        10G   tmp
        12K   www

        root@***:/var/tmp# find -type f -print0 | xargs -0 rm

        root@***:/var/tmp# df -h
        Dateisystem    Size  Used Avail Use% Eingehängt auf
        /dev/sda3      139G   81G   51G  62% /
        tmpfs          3,9G     0  3,9G   0% /lib/init/rw
        udev           3,9G  116K  3,9G   1% /dev
        tmpfs          3,9G     0  3,9G   0% /dev/shm
        /dev/sda1      985M   25M  910M   3% /boot

    Any explanation as to why deleting 10 GB in /var/tmp gave me back 51 GB on the disk? Could this point to an SSD failure? Are there any tools for Debian to test SSD health? I have already checked syslog. The first entry relating to this incident is a mysql message:

        1:22:02 [ERROR] /usr/sbin/mysqld: Disk is full writing...

    So I have absolutely no idea what caused this.
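
    Two checks that are often useful in this situation, as a sketch (assuming a standard ext3/ext4 root filesystem and smartmontools installed): space freed by deleting a file only comes back once no process still holds it open, and ext filesystems reserve a percentage of blocks for root:

        # files that are deleted but still held open by a running process
        lsof +L1

        # reserved block count for the root filesystem (ext3/ext4)
        tune2fs -l /dev/sda3 | grep -i 'reserved block count'

        # SMART health data for the SSD (smartmontools package on Debian)
        smartctl -a /dev/sda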

    Read the article

  • Microsoft Exchange mail features and AD question

    - by Daniel Fukuda
    Hello, I wanted to ask whether there is a feature that allows Microsoft Exchange to download emails through POP3 from another mail provider such as Google Apps (Gmail for your own domain), store them, and then allow users to download the emails (POP3/IMAP) into Outlook/Live Mail. In other words, I want Microsoft Exchange to act as a middle mail provider. My other question is about Active Directory: does Windows Server 2008 Active Directory work with Windows XP Professional clients, and are there any new features in Windows Server 2008 AD?

    Read the article

  • Using a generic driver on clients for a printer shared by CUPS

    - by Daniel
    I know this worked really easily with the CUPS versions of a few years ago; unfortunately I'm failing to do the following with the latest CUPS. How do I share a printer without using printer-specific drivers on the clients? The printer is a Samsung ML-2010. The important fact is that it needs a quite specific library for printing, called splix, which is installed on the server and prints well. What makes trouble is using that printer over the network. I found out how to use Avahi under Linux to make use of DNS-SD to advertise and discover printers. But as far as I understand, the new CUPS exposes the internal driver interface on the network. This has two major issues: anybody can fiddle with the driver, and I don't trust any uncommon printer library to be "network secure"; and anyone who wants to use my printer, including guests, needs to install the specific drivers first. I remember the old days when I could enable "Share this printer" on the CUPS server and all clients would magically detect the printer and just send their job data to the server, which did the driver work. After everything I've read, I guess this is related to the changes Apple introduced in CUPS, including dropping the integrated network discovery protocol. If it helps -- server: Ubuntu 12.04 LTS with CUPS 1.5.3; client: Arch Linux with current CUPS 1.6.1. On another box with Ubuntu the printer was at least set up automatically, but that mechanism used the splix library on the client.
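
    A sketch of the raw-queue approach that restores the old split of work (the server keeps the splix driver, clients send unfiltered jobs); the server/queue names are placeholders, and on a CUPS 1.6 client the legacy browsing that used to auto-discover shared queues now lives in the separate cups-browsed daemon from cups-filters:

        # on the Ubuntu 12.04 server: share the existing queue and accept remote jobs
        cupsctl --share-printers
        lpadmin -p ML-2010 -o printer-is-shared=true

        # on the Arch client: either run cups-browsed (cups-filters) to discover it,
        # or point a raw queue at the server explicitly so no local driver is needed
        lpadmin -p ML-2010 -E -v ipp://print-server.local/printers/ML-2010 -m raw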

    Read the article

  • HAProxy crashes on all requests in 1.5-dev12

    - by Daniel Hough
    I'm having an issue where HAProxy crashes with no explanation when I switch from 1.4.12 to 1.5-dev12. The reason I'm switching is the SSL offloading. My config file doesn't have any errors; it's quite simple and it works well with 1.4. But for some reason, when I run it with 1.5-dev12 I see the logs noting that my two backends have been set up, and then when I hit one of the frontends I get an HTTP 400 in the browser, and suddenly HAProxy isn't running anymore when I check. I understand that a common workaround for the lack of SSL support in HAProxy is to use stud, and I may go with that since I need an SSL solution for my service, but before I delve into that world I thought I might see if anybody has experienced the same problem and knows how to fix it. The server is Ubuntu 10.04 and I followed the make instructions on the Exceliance blog here.

    EDIT: On the advice of Kyle Brandt, I did a bit more investigation. I attached gdb to the haproxy process, and when the crash occurred this is what I got:

        Program received signal SIGSEGV, Segmentation fault.
        0x0804e5c2 in dequeue_all_listeners (list=0x9e1a418) at src/protocols.c:184
        184    list_for_each_entry_safe(listener, l_back, list, wait_queue) {

    P.S. HAProxy is awesome, so thank you Exceliance for providing us with something so useful :)

    Read the article

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32-bit on a motherboard with 24 GB of RAM. Of those 24 GB, 20 GB are assigned to a RAM disk via ASRock XFastRAM, which has the drive letter X: assigned to it. On X: I'm storing the temporary files folder as well as pagefile.sys, which is 6 GB in size. X: usually has around 14 GB free, so the temporary files are negligible; it's mostly the browsers storing their caches there. Now my issue is that Firefox is crashing a lot on me -- no error message pops up, but I know it's because it runs out of memory. I could kind of live with that, but now that I've switched from Eclipse to Android Studio I know I'm in trouble, because the Java VMs can't allocate the memory they ask for, and Android Studio, together with the Java instances it launches, is quite a memory hog. So I tried to figure out what's wrong, and apparently Windows isn't swapping memory out to the paging file: while my applications are crashing (Firefox) or failing to start (the Java VMs), the paging file constantly sits at only about 15% usage (checked with Performance Monitor), which is roughly 1 GB. I know the correct solution would be to switch to 64-bit Windows, but I had to use the 32-bit version because of driver issues about two years ago, and I guess I'd have them again if I reformatted and installed the 64-bit version. Also, the machine is running quite stably and the only issue is memory, so I'd like to use it as it is (with the apps installed and configured). Is there a way to make Windows use the paging file more efficiently? None of my processes requires more than 1 GB; I'd just like it to swap out some seldom-used stuff, like GoogleCrashHandler.exe, in order to have "more physical memory available". Is that possible?

    Read the article
