Search Results

Search found 68249 results on 2730 pages for 'sudo work'.


  • any good networking book recommended? [closed]

    - by Jian Lin
    I have wanted to learn about networking and how it works. Are there books or websites that are good for this? Some things I want to learn about: how to write a program that works like traceroute / tracert; how subnet masks work; how http://192.168.1.105, http://127.0.0.1, http://localhost, http://room3pc, and smb://room3pc work; how a LAN works with or without DHCP; and how to connect to a Mac using a URL.
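
    A rough illustration of the idea behind traceroute, since that is one of the items on the list: probes are sent with an increasing TTL, and each router that drops an expired probe identifies itself. This is only a sketch, assuming Linux's iputils ping (where -t sets the TTL) and using example.com as a stand-in target.

        #!/bin/bash
        # Poor man's traceroute: raise the TTL one hop at a time and see who replies.
        target=${1:-example.com}
        for ttl in $(seq 1 30); do
            printf 'hop %2d: ' "$ttl"
            ping -c 1 -t "$ttl" -W 1 "$target" 2>/dev/null \
                | grep -oE 'From [^ ]+|bytes from [^:]+' || echo '*'
        done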

    Read the article

  • Emacs open files from a filename list

    - by crasic
    I have a largish TeX project that is split across several .tex files. Every time I want to work on it I open Emacs and manually C-x C-f all the files I want to work on. I was wondering if there is a way to open files (from the command line) from a file containing a list of filenames, something like filelist.txt: file1.tex file2.tex file3.tex, and then do cat files | emacs -nw, except that Emacs doesn't support being used that way, as it doesn't like stdin being reassigned. Any ideas?
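
    A minimal sketch of one workaround, assuming the filenames in filelist.txt contain no spaces: expand the list into command-line arguments instead of piping it, so Emacs keeps the terminal on stdin.

        # Pass the filenames as arguments; emacs -nw still owns the terminal.
        emacs -nw $(cat filelist.txt)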

    Read the article

  • Puppet - Possible to use software design patterns in modules?

    - by Mike Purcell
    As I work with Puppet, I find myself wanting to automate more complex setups, for example vhosts for X number of websites. As my Puppet manifests get more complex I find it difficult to apply the DRY (don't repeat yourself) principle. Below is a simplified snippet of what I am after, but it doesn't work because Puppet throws various errors depending on whether I use classes or defines. I'd like to get some feedback from seasoned puppetmasters on how they might approach this:

        # site.pp
        import 'nodes'

        # nodes.pp
        node nodes_dev  { $service_env = 'dev' }
        node nodes_prod { $service_env = 'prod' }
        import 'nodes/dev'
        import 'nodes/prod'

        # nodes/dev.pp
        node 'service1.ownij.lan' inherits nodes_dev {
            httpd::vhost::package::site { 'foo': }
            httpd::vhost::package::site { 'bar': }
        }

        # modules/vhost/package.pp
        class httpd::vhost::package {
            class manage($port) {
                # More complex stuff goes here like ensuring that conf paths and uris exist
                # as well as log files, which is why I want to do the work once and use many
                notify { $service_env: }
                notify { $port: }
            }
            define site {
                case $name {
                    'foo': { class { 'httpd::vhost::package::manage': port => 20000 } }
                    'bar': { class { 'httpd::vhost::package::manage': port => 20001 } }
                }
            }
        }

    That snippet gives me a Duplicate declaration: Class[Httpd::Vhost::Package::Manage] error, and if I switch the manage class to a define and attempt to access a global or pass in a variable common to both foo and bar, I get a Duplicate declaration: Notify[dev] error. Any suggestions on how I can implement the DRY principle and still get Puppet to work?

    -- UPDATE --

    I'm still having a problem trying to ensure that some of my vhosts, which may share a parent directory, are set up correctly. Something like this:

        node 'service1.ownij.lan' inherits nodes_dev {
            httpd::vhost::package::site { 'foo_sitea': }
            httpd::vhost::package::site { 'foo_siteb': }
            httpd::vhost::package::site { 'bar': }
        }

    What I need to happen is that sitea and siteb have the same parent "foo" folder. The problem I am having is when I call a define to ensure the "foo" folder exists. Below is the site define as I have it; hopefully it will make sense what I am trying to accomplish.

        class httpd::vhost::package {
            File { owner => root, group => root, mode => 0660 }
            define site() {
                $app_parts   = split($name, '[_]')
                $app_primary = $app_parts[0]
                if ($app_parts[1] == '') {
                    $tpl_path_partial_app = "${app_primary}"
                    $app_sub              = ''
                } else {
                    $tpl_path_partial_app = "${app_primary}/${app_parts[1]}"
                    $app_sub              = $app_parts[1]
                }
                include httpd::vhost::log::base
                httpd::vhost::log::app { $name:
                    app_primary => $app_primary,
                    app_sub     => $app_sub
                }
            }
        }

        class httpd::vhost::log {
            class base {
                $paths = [
                    '/tmp',
                    '/tmp/var',
                    '/tmp/var/log',
                    '/tmp/var/log/httpd',
                    "/tmp/var/log/httpd/${service_env}"
                ]
                file { $paths: ensure => directory }
            }
            define app($app_primary, $app_sub) {
                $paths = [
                    "/tmp/var/log/httpd/${service_env}/${app_primary}",
                    "/tmp/var/log/httpd/${service_env}/${app_primary}/${app_sub}"
                ]
                file { $paths: ensure => directory }
            }
        }

    The include httpd::vhost::log::base works fine because it is "included", which means it is only evaluated once even though site is called multiple times. The error I am getting is Duplicate declaration: File[/tmp/var/log/httpd/dev/foo]. I looked into using exec, but I'm not sure this is the correct route; surely others have had to deal with this before, and any insight is appreciated as I have been grappling with this for a few weeks. Thanks.

    Read the article

  • resume file copy linux

    - by Andrew Johnson
    How do I resume a copy of a large file in Linux? I have a huge file (several gigabytes) partially copied to a network drive; it took a long time and was mostly done before the copy operation stopped due to a network problem that is now fixed. How do I resume the file copy? I don't want an inefficient script, and ecp didn't work (it doesn't seem to work for large files).
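
    A minimal sketch of one common approach, assuming rsync is available and the partial file on the destination is intact (the paths are placeholders):

        # --partial keeps partially transferred files; --append-verify resumes
        # where the copy left off and re-checksums the data already written.
        rsync --partial --append-verify --progress /path/to/bigfile /mnt/networkdrive/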

    Read the article

  • Can not access IIS7 website externally - but can locally

    - by mactruck
    I have two websites: site A running on port 80 and site B configured to run on port 8080. I can access site A no problem, but site B I can only access from the local web server machine. Externally it is not accessible; I have tried the URL and the IP and neither works. As a test I tried a different configuration on port 8081, and that didn't work either. What IIS settings should I look at?
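
    Since site A on port 80 works, this often turns out to be the Windows Firewall rather than IIS itself; ports other than 80 are usually not open by default. A hedged check (an assumption about this setup, not a confirmed fix), run from an elevated command prompt on the server:

        rem Allow inbound TCP 8080 through Windows Firewall
        netsh advfirewall firewall add rule name="IIS site B (8080)" dir=in action=allow protocol=TCP localport=8080
        rem Also confirm the site's binding is not limited to a specific IP or hostname:
        %windir%\system32\inetsrv\appcmd list sites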

    Read the article

  • Is real-time or synchronous replication possible over WAN link?

    - by johnnyb10
    The company I work for is looking to implement truly real-time file replication with file locking over a WAN link that spans over 2000 miles. We currently have a 16-drive SAN setup in our east coast office. We also have an office out in Colorado that will have the same exact SAN setup. The idea is to have those two SANs contain the same exact data at all times, which will allow us to work with the same data pool, and which will also provide us with an offsite backup solution, should a failure occur on either end. We're running Server 2008. The objective is to enable users in the east coast office to work on files and have those changes be instantly updated on the Colorado SAN as well. We also need there to be file locking so that there will be no conflicts or overwritten changes if users attempt to work on the same file. Is this scenario even possible, at speeds that would make the files usable? And if so, what software would we need to pull this off? As I understand it, DFS-R does not provide file locking, so if we used that, we would need to go with a third-party product like Peerlock. But I don't even know if DFS-R is an option. Can it replicate quickly enough over a WAN link? Can any product? It seems that if we were to use synchronous replication, the programs would be unacceptably slow, as every write would have to wait for confirmation from the other end of the link. But if we used asynchronous replication, what kind of latency would we be looking at? There is a product from GlobalScape called WAFS that claims to provide "File coherence with real-time file locking, file release, and synchronization" and says that "As files are modified, changes are mirrored instantly using intelligent byte-level differencing to minimize the impact on network bandwidth". So this sounds like synchronous replication, but that doesn't even seem possible, given physical limitations such as the speed of light. If anyone has any experience with this kind of setup, or knows whether it's even possible, I'd appreciate your input and suggestions, including recommendations for software that we should check out.
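
    To put a rough number on the speed-of-light point: over roughly 2,000 miles (about 3,200 km), light in fiber travels at roughly 200,000 km/s, so each synchronous write would wait at least on the order of 30 ms for the round trip, before any routing or protocol overhead. A quick back-of-the-envelope check (the figures are approximations, not measurements of this link):

        awk 'BEGIN { km = 3200; kps = 200000; oneway = km / kps * 1000;
                     printf "one-way >= %.0f ms, round trip >= %.0f ms\n", oneway, 2 * oneway }'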

    Read the article

  • Port forwarding to asp.net development server

    - by ile
    I have configured my router so that I can access my localhost from the internet, but I can't manage to port forward to the ASP.NET development server. In the router's port forwarding I did the same thing as for localhost, only changing the port number to match the one assigned to my application, but this doesn't work. Any idea how to get it to work? Thanks in advance, Ile
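
    One likely explanation (an assumption about this setup): the ASP.NET development server only listens on 127.0.0.1, so traffic the router forwards to the machine's LAN address never reaches it. A hedged workaround is to relay an externally reachable port to the loopback-only one on the development machine; 52123 below is a placeholder for the dev server's actual port.

        rem Forward 0.0.0.0:8080 on this machine to the loopback-only dev server
        netsh interface portproxy add v4tov4 listenport=8080 listenaddress=0.0.0.0 connectport=52123 connectaddress=127.0.0.1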

    Read the article

  • How can Windows XP/7 users cleanly connect to Mac OS X Server 10.9.4 Mavericks with Active Directory integration?

    - by JakeGould
    I’m a Linux/Unix systems admin who also manages a Macintosh server infrastructure, and there is a lone Mac Mini in the mix running 10.9.4 that I would like Windows XP and Windows 7 users to connect to with little or no hassle. The problem? Windows users can't seem to even get to the point of a password prompt, let alone connect. Mind you, this server replaced a Mac OS X 10.6.8 server that had issues, but never had issues with Windows users connecting. The gist of this post is: the tons of different messages out there about Mac OS X 10.9.4 Samba support are mind-numbingly confusing. Can anyone share some solid specifics here? I've read pieces like this one here that suggest turning off file sharing and then adding a share with AFP/SMB enabled would work, but the suggestion seems to apply to 10.8, and from what I know a lot has changed in Samba support in 10.9, let alone the iterations up to 10.9.4. Then I found this great tutorial here that explains things step by step, which seems like it should work, but the example given applies to a local user created on the Mac, when I would like users in an Active Directory group (which the Mac is bound to) to access the Mac Mini shares. There are also tons of great tips here on MacWindows.com, but nothing seems solid for the issue I am facing. So from what I am reading these are my options:
    Local user versus Active Directory: set up a common local user on the Mac OS X 10.9.4 server to be used for Samba sharing, since Active Directory won't work. Is this really the case? Because loss of AD integration is a major pain.
    Do extended file attributes get retained from Windows users: if this were to work, how do extended attributes come into play? Loss of metadata and related info is not an option.
    How fragile is any of this to updates: how does any of this shake out with Mac OS X updates as well as Windows updates?
    Installing official, open source Samba: would upgrading the Samba install on the server to the official open source Samba, via a package like SMBUp or via the Homebrew method described here, help or make the issue worse?
    I fully understand there have historically been issues in mixed environments, but nowadays Windows users connecting to a Mac seem to have a truly hellish road ahead of them. Unless I am missing something?
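
    A few diagnostics that may help separate an SMB-service problem from an AD-binding problem (a hedged sketch; it assumes Server.app is installed on the Mac Mini and does not promise a fix):

        # On the 10.9.4 server: confirm the AD bind and dump the SMB service settings
        dsconfigad -show
        sudo serveradmin settings smb
        # From a connected Mac client: show which SMB dialect was negotiated per share
        smbutil statshares -a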

    Read the article

  • How do I integrate GPG with alpine?

    - by Thomas
    I have tried a couple of howtos, but they seem to either be too out of date or just plain not work (not work for me, at least). How do I sign and encrypt messages, as well as check the integrity of received messages (and decrypt them if needed), in pine/alpine? I tried piping (with "|"), but the terminal doesn't seem to cooperate so that I can provide my passphrase.
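
    One route that is commonly suggested (hedged; the exact paths and options are illustrative, not verified against this installation) is to let Alpine's filter hooks call gpg, so the passphrase prompt comes from gpg-agent/pinentry rather than a broken pipe. In Setup -> Config, something along these lines:

        # Display Filters (decrypt/verify incoming messages)
        _BEGINNING("-----BEGIN PGP MESSAGE-----")_ /usr/bin/gpg --decrypt
        # Sending Filters (offered when a message is sent)
        /usr/bin/gpg --armor --encrypt --sign --textmode --recipient _RECIPIENTS_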

    Read the article

  • Sharepoint 2010 My-sites Active Directory connection error

    - by Tuomas Hietanen
    There is a good tutorial on creating SharePoint 2010 My Sites here. In my environment I can't get the connection to Active Directory to work; it fails with "An operation error occurred." There is a note saying "For Active Directory connections to work, this account must have directory sync rights," but I don't know what that means... My question is: there is nothing in the Event Log, so where is the error log? :-)
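
    Two pointers that usually apply to this error, stated as assumptions about this farm rather than verified facts: "directory sync rights" refers to the Replicate Directory Changes permission on the domain for the synchronization account, and SharePoint's own trace (ULS) logs live outside the Event Log.

        # From the SharePoint 2010 Management Shell: pull recent ULS entries instead
        # of hunting for a file (the trace files default to the "14 hive" LOGS folder,
        # C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS):
        Get-SPLogEvent -StartTime (Get-Date).AddMinutes(-10) | Where-Object { $_.Level -eq "High" }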

    Read the article

  • removing the .local in local network name serving

    - by Paul Nathan
    I have several local machines; I use an OS X 10.6 machine to do most of the serving. Annoyingly, it postfixes its network name with .local. How would I set up a system so that I could access it by its bare hostname? Server: httpd (Apache2), default install. I am accessing it with a web browser (surprisingly). Also, when I ping my OS X machine as name, it doesn't work; ping name.local does work.
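
    The .local suffix is how Bonjour/mDNS names the machine; a bare name only resolves if ordinary DNS or a hosts file knows about it. A quick per-client workaround (a sketch; 192.168.1.20 is a placeholder for the server's actual, ideally static, address):

        echo '192.168.1.20  name' | sudo tee -a /etc/hosts
        # A cleaner fix is an A record for "name" on the LAN's DNS server or router,
        # or a search domain on the clients so bare names expand automatically.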

    Read the article

  • Windows XP Problem

    - by LoRdiE
    I'm facing an error on Windows XP startup: the keyboard doesn't work, so I can't choose "Start Windows normally." After waiting 30 seconds, Windows does not start. I tested with a Windows Live CD and the keyboard works fine. How can I fix this?

    Read the article

  • Windows 7 / Ubuntu Dualboot GRUB Problem.

    - by Tek
    I'd like to first say ahead of time that I'm running a RAID-0 setup. First of all, I'm glad Ubuntu 9.10 installed flawlessly and detected my RAID-0 setup just fine. The issue I'm having now is that I already had Windows 7 installed and had made a small 12GB partition for Linux/swap. I grabbed EasyBCD 2.0 to edit the Windows 7 bootloader and configured it to dual-boot GRUB2, because before that it didn't even show the option for Ubuntu. The bootloader points to a file made in the Windows directory by EasyBCD called "C:\NST\AutoNeoGrub0.mbr", which is what I'm guessing GRUB is booting from. After that I got the option for booting Ubuntu. The problem is that it's sending me to the GRUB prompt (probably because it's pointing to \NST\AutoNeoGrub0.mbr?); at first I didn't know what to do, but I researched and found I have to type GRUB commands to manually boot into Ubuntu. For example:

        grub> root (hd0,4)
        grub> kernel /boot/vmlinuz-2.6... root=/dev/disk/by-uuid/24624-2424...
        grub> initrd boot/initrd.img-2.6...
        grub> boot

    After all that, Ubuntu boots just fine, but how do I fix it permanently? Do I need to edit the bootloader manually (since EasyBCD "autoconfigures")? Some insight on this would rock! Also, it sucks to type the actual UUID since it's REALLY long. I tried getting the name of the drive via fdisk -l, but since it's RAID 0 I'm guessing I can't do that. How can I get a shorter name for the drive, like /dev/sda, /dev/sdb, etc.? I've also tried to update to the latest GRUB and I got this:

        Creating config file /etc/default/grub with new version
        Generating core.img
        error: cannot seek `/dev/sdc'
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        Auto-detection of a filesystem module failed.
        Please specify the module with the option `--modules' explicitly.
        dpkg: error processing grub-pc (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of grub2:
         grub2 depends on grub-pc; however:
          Package grub-pc is not configured yet.
        dpkg: error processing grub2 (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've also tried:

        b@dnb:~$ sudo update-grub
        error: cannot seek `/dev/sdc'
        error: cannot seek `/dev/sdc'
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-2.6.31-14-generic
        Found initrd image: /boot/initrd.img-2.6.31-14-generic
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        Found memtest86+ image: /boot/memtest86+.bin
        Found Windows 7 (loader) on /dev/mapper/nvidia_dbedfcca1
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca1'
        done

    To no avail. Any idea what I can do to fix this mess? :(

    Edit: this is my disk configuration.

        b@dnb:~$ sudo df -l
        Filesystem                    1K-blocks    Used Available Use% Mounted on
        /dev/mapper/nvidia_dbedfcca5   12302232 2744788   8932520  24% /
        udev                            1030288     268   1030020   1% /dev
        none                            1030288     964   1029324   1% /dev/shm
        none                            1030288      92   1030196   1% /var/run
        none                            1030288       0   1030288   0% /var/lock
        none                            1030288       0   1030288   0% /lib/init/rw
        /dev/sr0                         706532  706532         0 100% /media/cdrom0

    Note: /dev/mapper/nvidia_dbedfcca5 is my Linux boot partition.
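
    One direction worth trying (hedged; fakeRAID setups vary and this is not verified against this machine): the errors suggest GRUB is probing the raw RAID member /dev/sdc instead of the dmraid-mapped array, so reinstalling GRUB against the mapper device and then regenerating the config may address both the prompt-on-boot and the update-grub failures.

        sudo grub-install /dev/mapper/nvidia_dbedfcca   # the array device, not a raw member like /dev/sdc
        sudo update-grub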

    Read the article

  • Drupal and Passenger on the same virtual host

    - by aussiegeek
    I have Drupal installed in the root of the site, with two Rails sites hosted with Passenger under /connect and /administration. My problem is that while the roots of these Rails apps work correctly, any sub-page gets handled by the Drupal rewrite ruleset. I'm not sure how to make this work.
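
    A sketch of one common fix, assuming Drupal's stock .htaccess is doing the rewriting (the catch-all rule shown matches Drupal's usual one and is not verified against this site): exclude the Passenger-mounted paths before requests fall through to index.php.

        # In Drupal's .htaccess, just before the catch-all rewrite to index.php:
        RewriteCond %{REQUEST_URI} !^/connect
        RewriteCond %{REQUEST_URI} !^/administration
        RewriteRule ^ index.php [L]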

    Read the article

  • Making application behind reverse proxy aware of https

    - by akaIDIOT
    HTTPS in Tomcat being the hassle it is, I've been trying to get an Axis2 webapp to work behind a reverse proxy for ages now and can't seem to get it to work. The proxying itself works like a charm, but the app fails to generate 'links' (or ports, as it concerns SOAP) using https. It would seem I need some way to let Axis2 know it is being accessed through https, even though the actual transport to it is done over http (proxied from localhost). The nginx config that proxies https to localhost:8080:

        server {
            listen 443;
            server_name localhost;

            ssl on;
            ssl_certificate     /path/to/.pem;
            ssl_certificate_key /path/to/.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
            ssl_prefer_server_ciphers on;

            location / {
                # force some http-headers (avoid confusing tomcat)
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $http_host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;

                # pass requests to local tomcat server listening on default port 8080
                proxy_pass http://localhost:8080;
            }
        }

    The proxy itself works fine; the info pages of the webapp work. The problem lies in the ports generated in the .wsdl:

        <wsdl:service name="WebService">
            <wsdl:port name="WebServiceHttpSoap11Endpoint" binding="ns:WebServiceSoap11Binding">
                <soap:address location="http://10.10.3.96/axis2/services/WebService.WebServiceHttpSoap11Endpoint/"/>
            </wsdl:port>
            <wsdl:port name="WebServiceHttpSoap12Endpoint" binding="ns:WebServiceSoap12Binding">
                <soap12:address location="http://10.10.3.96/axis2/services/WebService.WebServiceHttpSoap12Endpoint/"/>
            </wsdl:port>
            <wsdl:port name="WebServiceHttpEndpoint" binding="ns:WebServiceHttpBinding">
                <http:address location="http://10.10.3.96/axis2/services/WebService.WebServiceHttpEndpoint/"/>
            </wsdl:port>
        </wsdl:service>

    The Host header does its job; it shows 10.10.3.96 instead of localhost, but as the snippet shows, it says http:// in front of it instead of https://. My client app can't deal with this... Adding proxyPort and proxyName to the tomcat6 server.xml in the default <Connector> doesn't help; I'm at a loss on how to get this to work properly.
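
    A hedged suggestion (not verified against this setup): proxyName and proxyPort alone only change the host and port in generated URLs; for them to switch to https, Tomcat's connector usually also needs scheme and secure set, for example in server.xml:

        <Connector port="8080" protocol="HTTP/1.1"
                   proxyName="10.10.3.96" proxyPort="443"
                   scheme="https" secure="true" />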

    Read the article

  • Is it safe to use Email service that provided from webhosting for business use?

    - by Kronass
    I work in a company that uses its web hosting as its email provider. They use it for normal sending and receiving and basic contact management, in customer support, sales, and marketing. I would prefer to use dedicated or professional email hosting instead for this type of work. So, for business use, is it safe to use the email hosting that is included with the hosting package, or should we go with a professional email provider?

    Read the article

  • One SSL certificate (one domain) for two servers ?

    - by marioosh.net
    I have two servers. On SERVER1 I have configured an SSL certificate (on Apache) for the domain https://somedomain.com. I need to connect to my working domain an app that exists on the remote server SERVER2, for example: https://remoteapps.com/remoteApp. I used mod_proxy to do it, but the SSL certificate doesn't work:

        ProxyPass        /remoteApp https://remoteapps.com/remoteApp
        ProxyPassReverse /remoteApp https://myapp.com/remoteApp

    How do I make the certificate work for https://somedomain.com/remoteApp too?
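
    A hedged sketch of what the SERVER1 virtual host usually needs when the backend is itself https (the directives are from mod_ssl/mod_proxy; the URLs are taken from the question, not verified): enable SSL proxying, and point ProxyPassReverse at the backend URL so its redirects are rewritten back to the local path. The browser only ever sees SERVER1's somedomain.com certificate; SERVER2's certificate matters only for the proxy's backend connection.

        SSLProxyEngine on
        ProxyPass        /remoteApp https://remoteapps.com/remoteApp
        ProxyPassReverse /remoteApp https://remoteapps.com/remoteApp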

    Read the article

  • Any ideea why every NSIS setup file(.exe) i download gives CRC error?

    - by daniels
    Any idea why every NSIS setup file (.exe) I download gives a CRC error when I try to install it? I do not have any firewall installed, and I tried both with and without a download manager. Files downloaded with BitTorrent also have CRC errors. But this happens only with setups that were made with NSIS and only on Windows 7; on XP they work fine, and all other files work except those made with NSIS.
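
    One quick, hedged diagnostic (setup.exe is a placeholder): compare the downloaded file's hash against the checksum the publisher lists, on both the Windows 7 and the XP machine. If the hashes match yet NSIS still reports a CRC error only on the Windows 7 box, the usual suspects are RAM or disk errors on that machine rather than the downloads themselves.

        rem certutil ships with Windows; SHA1 here is just an example algorithm
        certutil -hashfile setup.exe SHA1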

    Read the article

  • Launching an application from Bash

    - by JBoy
    I'm taking my first steps with Linux right now; I'm using a bash shell within Mac OS X. I see in all the tutorials that in order to launch an application from bash it's necessary to cd to its directory and simply type the name of the app. This is exactly what I'm doing, and it does not work (I have an 'Eclipse' folder on my desktop with the launcher icon in it):

        cd Desktop
        cd Eclipse
        Eclipse.app

    Why does this not work? I read everywhere that typing the name of the app is enough.
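
    On OS X an .app bundle is really a folder, so typing its name doesn't execute anything. A small sketch of two ways that do work from bash (the path and the binary name inside Contents/MacOS are assumptions about this particular Eclipse install):

        open ~/Desktop/Eclipse/Eclipse.app                      # hand it to LaunchServices, like double-clicking
        ~/Desktop/Eclipse/Eclipse.app/Contents/MacOS/eclipse    # or run the executable inside the bundle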

    Read the article

  • eSATA hard drive falls asleep, takes ages to wake up.

    - by Dave Van den Eynde
    I'm using an external hard drive, a Western Digital MyBook 1TB of some sort with eSATA, on an eSATA II ExpressCard adapter from Belkin on Windows Vista 32-bit. The issue I'm having is that after a while power management kicks in and puts the hard drive to sleep. When I resume my work and browse the drive, for example, Explorer hangs, and while I can still use my other apps, it takes a couple of minutes for it to realize it should wake up the drive and recommence work. What's going on?
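
    A hedged workaround, assuming it's Vista's power management (rather than the drive's own firmware timer) spinning the disk down: tell Windows never to turn off disks while on AC power. If the MyBook still sleeps on its own, that timer lives in the drive and needs WD's utility instead.

        rem 0 minutes = never turn off hard disks on AC power
        powercfg -change -disk-timeout-ac 0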

    Read the article

  • Connect to host computer from Virtual PC 2007

    - by Vegard Larsen
    I am having trouble using my guest (Windows XP SP3) to communicate over TCP/IP to the host computer (Windows 7) using Virtual PC 2007. I have WAMPServer running on my host, and want to be able to access the websites on there from my guest OS. What do I do to make this work? What is the IP address of the host computer when using Shared Networking? As far as I can tell "Internal Networking" won't work, because that only allows communication between the guests, not between a guest and the host.
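
    One approach that tends to be the least surprising (a hedged sketch; it avoids guessing at the NAT address Shared Networking uses): bind the VM's adapter to the host's physical NIC instead, so the guest gets its own LAN address and can reach the host at the host's normal LAN IP. The address below is a placeholder, and WampServer also has to be switched out of its localhost-only mode ("Put Online" / adjusting the Apache Allow rules).

        rem On the Windows 7 host: find its LAN address
        ipconfig
        rem From the XP guest: verify reachability, then browse to it
        ping 192.168.1.10
        start http://192.168.1.10/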

    Read the article
