Search Results

Search found 308 results on 13 pages for 'joseph turian'.

Page 3 of 13

  • how to fix "BusyBox v1.17.1 (Ubuntu 1:1.17.1-10ubuntu1) built-in shell (ash) Enter 'help' for a list of built-in commands?"

    - by Joseph
    So I was using Ubuntu when the whole system suddenly froze and I had to reboot. Ever since then, the system presents this little selection menu on startup:

        GNU GRUB version 1.99~rc1-13ubuntu3
        Ubuntu, with Linux 2.6.38-10-generic
        Ubuntu, with Linux 2.6.38-10-generic (recovery mode)
        Previous Linux versions
        Memory test (memtest86+)
        Memory test (memtest86+, serial console 115200)

    I have tried all of the available choices, but all I get is another command-line prompt that reads:

        BusyBox v1.17.1 (Ubuntu 1:1.17.1-10ubuntu1) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs):

    Honestly, I can't do anything with it. Does anyone have any idea what is going on, and how I can get Ubuntu to work again?
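
    Dropping to the initramfs shell after a hard freeze is commonly caused by a root filesystem that failed its automatic check. A hedged first step, assuming the root partition is /dev/sda1 (adjust to your layout) and that your initramfs includes fsck for your filesystem type:

        # At the (initramfs) prompt; /dev/sda1 is an assumption, list candidates first
        ls /dev/sd*
        fsck -y /dev/sda1
        reboot

    If fsck repairs the filesystem, the first GRUB entry should boot normally again; if not, booting from a live CD/USB and running fsck from there is the next step.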

    Read the article

  • Setting up Github post-receive webhook with private Jenkins and private repo

    - by Joseph S.
    I'm trying to set up a private GitHub project to send a post-receive request to a private Jenkins instance to trigger a project build on branch push. I'm using the latest Jenkins with the GitHub plugin. I believe I have set up everything correctly on the Jenkins side, because sending a request from a public server with curl like this:

        curl http://username:password@ipaddress:port/github-webhook/

    results in:

        Stacktrace: net.sf.json.JSONException: null object

    which is fine, because the JSON payload is missing. Sending the wrong username and password in the URI results in:

        Exception: Failed to login as username

    I interpret this as a correct Jenkins configuration. Both of these requests also produce entries in the Jenkins log. However, when I paste the exact same URI into the GitHub repository's Post-Receive URLs service hook and click Test Hook, absolutely nothing seems to happen on my server: nothing in the Jenkins log, and the GitHub Hook Log in the Jenkins project says "Polling has not run yet". I have run out of ideas and don't know how to proceed further.
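
    One way to confirm the Jenkins endpoint end to end is to replay the kind of request GitHub's old post-receive hooks send: a form-encoded POST with a payload parameter holding JSON. A minimal hedged sketch, with the repository URL as a placeholder:

        # Hypothetical payload; substitute your repository's actual URL
        curl -X POST http://username:password@ipaddress:port/github-webhook/ \
             --data-urlencode 'payload={"repository":{"url":"https://github.com/youruser/yourrepo"}}'

    If that triggers a build but Test Hook still does nothing, the likely culprit is reachability: GitHub's servers must be able to reach the Jenkins host, so a private instance behind a firewall or NAT will receive nothing from Test Hook even though your own curl from a public server works.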

    Read the article

  • Ubuntu/Nvidia lists DVI dual cable as single

    - by Joseph Mastey
    I have an NVIDIA Quadro FX 880M graphics card, from which I am trying to drive two monitors: my internal laptop monitor (15.6", 1920x1080; the NVIDIA driver says it's running via DisplayPort) and an external 27" monitor (Dell U2711, 2560x1440 native resolution, via DVI). I've hooked the dual-link DVI cable to the dual-link DVI port on my dock (Dell PR03X) and installed the proprietary NVIDIA driver, but I cannot seem to get the full 2560x1440 out of the 27" external monitor. Looking at the NVIDIA driver settings, the monitor's connection is reported as a single-link DVI cable rather than a dual-link one, which would explain the reduced resolution. Does anyone have any experience with an issue like this? What can I do to make full use of my new monitor?

    (Possibly) relevant information:
    - There is no DVI port on the laptop itself, but one is provided via the dock.
    - The laptop and dock both provide a DisplayPort jack, but I have been unable to get this working with the monitor on either.
    - I did have the nouveau driver installed when I installed the NVIDIA proprietary driver, but have since removed it (no change in the monitor situation when I removed it).
    - The 27" reports a max resolution of 1680x1050.

    Thanks, Joe
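
    A reported maximum of 1680x1050 is the classic symptom of a single-link DVI limit (single-link tops out around 1920x1200 at 60 Hz; 2560x1440 needs dual-link). If the physical path really is dual-link end to end, one hedged experiment is to relax the NVIDIA driver's mode validation in xorg.conf; these option names exist in the proprietary driver, but whether they help here is an assumption:

        # /etc/X11/xorg.conf "Device" section; a sketch, not a confirmed fix
        Section "Device"
            Identifier     "Quadro FX 880M"
            Driver         "nvidia"
            Option         "ModeValidation" "NoMaxPClkCheck, AllowNonEdidModes"
        EndSection

    Note that some docks pass through only single-link DVI even when the connector is physically a dual-link socket, so checking the PR03X's specifications for the supported pixel clock is worth doing before any driver-level workaround.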

    Read the article

  • Monitoring Bandwidth Usage (Per Internal IP) - Cisco ASA 5505

    - by Joseph Sturtevant
    I manage a small network with a Cisco ASA 5505 and a shared DSL connection. I would like to be able to monitor the bandwidth usage of the various users/devices on my network (by IP). Can I do that using the ASA? Has anyone got this working? What is the best way to do it?

    Some ideas I have seen online:
    - SNMP with a tool like Cacti. Does this give per-IP usage with an ASA, or just overall usage?
    - NetFlow with a tool like ntop. I couldn't get this to work; it seems that the NetFlow records sent by the ASA are not exactly standard. ntop receives them, but doesn't seem to know what to do with them.
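
    The ASA exports NetFlow Security Event Logging (NSEL, a NetFlow v9 variant) rather than classic NetFlow, which is why many older collectors choke on it; you need a collector with explicit NSEL support. A minimal hedged export config (collector IP, interface name, and port are placeholders; requires ASA 8.2 or later):

        ! Sketch; adjust interface name and collector address to your network
        flow-export destination inside 192.168.1.50 2055
        access-list flow_export_acl extended permit ip any any
        class-map flow_export_class
         match access-list flow_export_acl
        policy-map global_policy
         class flow_export_class
          flow-export event-type all destination 192.168.1.50

    SNMP on the ASA, by contrast, exposes only per-interface counters, so Cacti over SNMP will give overall usage but not a per-IP breakdown.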

    Read the article

  • Adding tables to a herd in bucardo

    - by Joseph the Dreamer
    Forgive my ignorance; I am a JS programmer given the task of doing DB replication using Bucardo. I understand the concept of how Bucardo works, but setting it up is a bit confusing. The setup is:

    - Lubuntu Linux
    - Two databases, test_master and test_slave, using PostgreSQL
    - Each DB has a table named test, containing 2 columns: id (PK) and test (int)
    - I use pgAdmin3

    I have already added them to Bucardo's list of databases and added all tables:

        Table: public.test  DB: test_slave   PK: id (int4)
        Table: public.test  DB: test_master  PK: id (int4)

    As you can see, because the DBs are identical, even the schema names are identical. So when I do:

        bucardo_ctl add herd sample_herd public.test

    it gets added to the herd, but this command gets confused about which database public.test comes from. So when I add a sync:

        $ bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy
        Failed to add sync: DBD::Pg::st execute failed: ERROR: Source and target databases cannot be the same: test_slave
        at line 118. at line 30.
        CONTEXT: PL/Perl function "validate_sync" at /usr/bin/bucardo_ctl line 3362.

    What does it mean that source and target cannot be the same? If it got confused as to which public.test to use as source, how do I differentiate?
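
    The error itself means the herd resolved public.test to test_slave, which makes the sync's source and target the same database. Since both databases expose an identical public.test, the table has to be pinned to its source database when it is added; a hedged sketch (the db= option exists in Bucardo 4.x era bucardo_ctl, but verify against your version's "bucardo_ctl help add table"):

        # Assumption: add the master copy of the table explicitly, then herd it
        bucardo_ctl add table public.test db=test_master herd=sample_herd
        bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy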

    Read the article

  • Certain banking pages not loading

    - by Joseph Lee
    For some unknown reason, I am suddenly unable to access my accounts at several banking and credit sites. I have been a registered user at each site for several years and know I am using the correct user ID and password. Yet, after entering the data, answering security questions, and clicking the submit button, I land on a page with an error message saying there is a technical problem preventing me from accessing my account. On one site, I end up at the sign-in page repeatedly. I am never told that my ID/password is incorrect. I believe this may be firewall-related. Windows Firewall was damaged after a recent malware attack, and I am now using a third-party firewall (Fort Knox). I am not seeing a pop-up indicating sites are blocked or asking me to choose yes or no. I am using Windows 7 Home Premium. I get the same result regardless of the browser; I switched to Maxthon last night and am getting the same result. This is not happening at other sites, and I am able to access some banking sites normally. This is frustrating because I need to make payments and have gone paperless. Any feedback will be appreciated. ---- Joe ----

    Read the article

  • Setting up multiple servers for one domain

    - by Joseph Torraca
    So I am starting up a new website and I was wondering how to set up 5 servers to host the site. I have already purchased 5 Apple Xserves; one will be used as a test server and the other 4 will be for the live site. I have read some websites that reference using one server with load-balancing software installed on it, and others that suggest a hardware, rack-mounted load balancer that the servers plug into, which then distributes the load. I have a few questions about each:

    1) With the software option, how do you set up the other servers as "slaves" with one "master" directing traffic?
    2) Which of the two options above is more reliable and better suited for a startup that doesn't have many users per month, yet (hopefully)?
    3) Is there a theoretical maximum number of servers that can be connected to a software load balancer? Obviously this will vary from software to software, but in terms of the balancing server being able to handle it?
    4) In your own opinion, what are you using for your sites? Have you had any problems setting up that system or operating it once it's running? Is there anything you would avoid if you had to start over?
    5) I also purchased an Apple RAID system. If you are familiar with it, is there any way to connect it to multiple Xserves so they all serve the same data? I'm a little confused on this.

    Thanks for all your help and for being patient with me. Note: Take it easy on me, I am learning this as I go along, so I may have used terms incorrectly or explained things in ways that don't make sense. Sorry. P.S. If you need the specs on the servers to determine which system makes the most sense, I can post them.
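
    For the software option, the usual starting point is a reverse proxy on one front-end box that spreads requests across the other web servers; a minimal hedged sketch with nginx (the IPs are placeholders for the four Xserves):

        # nginx.conf fragment; hypothetical addresses, round-robin by default
        upstream app_servers {
            server 10.0.1.11;
            server 10.0.1.12;
            server 10.0.1.13;
            server 10.0.1.14;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://app_servers;
            }
        }

    At startup traffic levels, a single software balancer like this is rarely the bottleneck; the front end becomes a single point of failure long before it runs out of capacity.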

    Read the article

  • apache2: Could not reliably determine the server's fully qualified domain name

    - by Joseph Silvashy
    I've never encountered this error before, and secondly, I'd like to know how you folks debug your Apache configurations:

        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

    In my virtual host configuration I do have these lines (with my actual info in there, of course):

        ServerName example.com
        ServerAlias www.example.com

    So I guess my question is, why wouldn't Apache be able to determine my fully qualified domain name?
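
    This warning comes from the global configuration, not the virtual hosts: Apache wants a server-wide ServerName before it ever evaluates vhosts, and falls back to a reverse DNS lookup of the machine's IP when none is set. A sketch of the standard Debian/Ubuntu remedy (the conf.d filename is a common convention, not mandated):

        # Set a global ServerName, then check and reload
        echo "ServerName example.com" | sudo tee /etc/apache2/conf.d/fqdn
        sudo apache2ctl configtest && sudo service apache2 reload

    apache2ctl configtest is also a quick answer to the debugging question: it parses the whole configuration and reports problems without touching the running server.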

    Read the article

  • Have a service start on startup with Ubuntu

    - by Joseph Silvashy
    I'm not clear on how to start a service when the server boots. I read in some of the other questions about adding a script to /etc/init.d, but it's just one line that I need to execute on the command line:

        sudo /etc/init.d/avahi-daemon restart

    I have a few issues with this. Firstly, I apparently need to use sudo, and it gives me the following:

        ngl-server-01:~% sudo /etc/init.d/avahi-daemon start
        Rather than invoking init scripts through /etc/init.d, use the service(8)
        utility, e.g. service avahi-daemon start

        Since the script you are attempting to invoke has been converted to an
        Upstart job, you may also use the start(8) utility, e.g. start avahi-daemon

    But when I try just avahi-daemon start I get:

        Too many arguments

    Why is this? And how would you start this service?
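
    The "Too many arguments" message is avahi-daemon itself complaining: running the bare binary passes "start" to the daemon, which does not accept that argument. "start" is meant for the service/start wrappers, not the binary. A hedged sketch of the Upstart-era commands:

        # Start the service now (note: service, not the bare binary)
        sudo service avahi-daemon start

        # Avahi ships its own init/Upstart job, so it is normally enabled at boot
        # already; a classic init.d script would be registered with:
        sudo update-rc.d avahi-daemon defaults

    In other words, no extra boot scripting should be needed: the package's own job starts the daemon at boot, and service(8) is the way to poke it by hand.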

    Read the article

  • Only show hidden files in certain directories

    - by Joseph Silvashy
    On a Unix system (or, more specifically, on OS X), is it possible to show hidden files in only some directories? For example, as a developer I want to see the hidden files in my Rails projects but not on my desktop as well. I guess I'm just tired of seeing all these .DS_Store and .trashes files swimming around; any remedies not directly related are welcome too!
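
    Finder's hidden-files switch is unfortunately global, so per-directory visibility isn't something it supports; the usual compromises are to browse project directories from a terminal (ls -A shows dotfiles only where you ask) and to stop OS X littering network volumes in the first place. Both defaults keys below are real, though the second affects only network drives:

        # Global toggle: shows hidden files everywhere in Finder, then restarts it
        defaults write com.apple.finder AppleShowAllFiles -bool true
        killall Finder

        # Stop .DS_Store creation on network volumes (does not affect local disks)
        defaults write com.apple.desktopservices DSDontWriteNetworkStores true

    For Rails work specifically, most editors list dotfiles in their own file drawers regardless of the Finder setting, which sidesteps the problem entirely.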

    Read the article

  • Getting "-bash: fork: Resource temporarily unavailable" in OSX

    - by Joseph Tura
    I seem to run into problems with the maximum number of processes every so often. Does anyone know the best practice for fixing this? I'm running OS X 10.6 on a MacBook Pro i7. ulimit -a returns these values:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 256
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 266
        virtual memory          (kbytes, -v) unlimited

    When the error occurred I checked, and there were 102 running tasks and 523 threads.
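
    The per-user process cap of 266 is the limit being hit here. It can be raised for the session with ulimit, or system-wide via sysctl; the kern.* knobs below exist on 10.6, but the specific values are just reasonable guesses:

        # Session-only: raise the soft limit toward the hard limit
        ulimit -u 512

        # System-wide (until reboot; persist via /etc/sysctl.conf)
        sudo sysctl -w kern.maxproc=1024
        sudo sysctl -w kern.maxprocperuid=768

    Also worth noting: the open-files limit of 256 is similarly low, and some fork failures on OS X are really file-descriptor exhaustion in disguise, so ulimit -n 1024 may be part of the fix.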

    Read the article

  • Insecure world writable dir

    - by Joseph Silvashy
    I can't figure out how to fix this; apparently Ruby doesn't like anything in my home directory:

        /Users/Connor/.rvm/rubies/ree-1.8.7-2010.01/bin/gem:4: warning: Insecure world writable dir /Users/Connor/.rvm/rubies/ree-1.8.7-2010.01/bin in PATH, mode 040766

    How can I fix this?
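
    Mode 040766 means the bin directory is writable by group and world, which Ruby rightly flags for anything on PATH; tightening it to 755 silences the warning. A sketch (sweeping the whole ~/.rvm tree is an assumption; the single directory from the warning is the minimum):

        # Remove group/world write from the offending directory
        chmod 755 /Users/Connor/.rvm/rubies/ree-1.8.7-2010.01/bin

        # Or sweep every world-writable directory under ~/.rvm
        find ~/.rvm -type d -perm -0002 -exec chmod o-w {} \;

    It is worth checking how the permissions got that loose in the first place; a umask of 000 or an overzealous recursive chmod are the usual suspects.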

    Read the article

  • php5-mysqlnd on debian wheezy/sid?

    - by Joseph
    I am trying to install php5-mysqlnd on a fresh install of Wheezy (/etc/debian_version refers to it as wheezy/sid) and I'm having a problem:

        root@debian:/var/www/lottery1# apt-get install php5-mysqlnd
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-mysqlnd is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Setting up php5-mysqlnd (5.4.0-3) ...
        ucfr: Attempt from package php5-mysqlnd to take /etc/php5/mods-available/mysql.ini away from package php5-mysql
        ucfr: Aborting.
        dpkg: error processing php5-mysqlnd (--configure):
         subprocess installed post-installation script returned error exit status 4
        Processing triggers for libapache2-mod-php5 ...
        configured to not write apport reports
        Reloading web server config: apache2.
        Errors were encountered while processing:
         php5-mysqlnd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It seems there is some sort of conflict with the php5-mysql package, but I still get this error even after removing (with --purge) the php5-mysql package. Any thoughts? I'm trying to run a web tool that makes heavy use of mysqli_result::fetch_all(). Thanks!
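
    The failure is in ucf's ownership registry rather than dpkg's, which is why purging php5-mysql doesn't help: ucf still records php5-mysql as the owner of mysql.ini. A hedged workaround is to drop that stale registration by hand before reconfiguring (ucf and ucfr both accept --purge, but back up /etc/php5 first):

        # Assumption: clear the stale ucf ownership, then let dpkg finish configuring
        ucf --purge /etc/php5/mods-available/mysql.ini
        ucfr --purge php5-mysql /etc/php5/mods-available/mysql.ini
        apt-get install --reinstall php5-mysqlnd

    If that still aborts, running dpkg --configure -a after the ucf cleanup is the other lever to try.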

    Read the article

  • bad printer isolation on print server or better way?

    - by Joseph
    I have noticed that when a printer or driver screws up on a Windows server, it usually locks up or kills the print spooler, and then nobody can print until it is fixed. Usually we have to move the troublesome printer to another server so that when it fails, it doesn't take the whole group down with it; that is assuming we ever figure out which printer is the problem. Is there a way to keep one bad apple from ruining the bunch? Even another form of print serving would work, as long as it isn't hard for users to find a printer and install drivers.

    Read the article

  • Good Wireless Range Extender

    - by Joseph Sturtevant
    Can anyone recommend a good wireless (802.11g) range extender? I would like to find something that will provide reliable wireless to areas of the building that aren't covered or get poor reception. I would also like a product that won't require big changes to my current wireless setup (multiple APs with a wireless controller are out). Latency and bandwidth aren't terribly important. Does anyone have experience with a product like this?

    Read the article

  • Odd Tools and Techniques for System Administrators

    - by Joseph Kern
    There have been a lot of questions centering on software tools for system administrators, but I would like to know about any odd physical tools or techniques that you've used: something you never expected to be useful, but that ended up saving the day. I'll go first: a camera phone. An application server had a major power issue that borked the RAID, and many of the disks were offline. Before I took the plunge and forced the disks back online, I took a picture of the RAID BIOS screen with my camera phone. With the exact layout of the RAID stored safely in my pocket, I was able to reset the RAID and reboot the server. What odd tools or techniques have you used?

    Read the article

  • EC2 out of space on root disk, moving it to ephemeral

    - by Joseph Misiti
    I am spawning a few test servers on EC2 that happen to be m1.larges, which I am using for load-balancing testing. Most of the servers I have used before have been EBS-backed, but these instances (Ubuntu 11.04) come with a lot of ephemeral space mounted at /mnt. What I've noticed is that I am running out of space on the root disk. I tried this tutorial, http://www.turnkeylinux.org/docs/using-instance-storage, moving my /home and /usr directories to /mnt and then remounting them. This works, except it does not survive a reboot. Am I missing something here, or is this tutorial not completely correct? How do I make space on my / drive in a way that survives reboots?
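
    Bind mounts set up by hand vanish at reboot unless they are recorded in /etc/fstab, which is the piece such tutorials often leave implicit. A hedged sketch for /home (paths assume the data has already been copied to /mnt; /usr is riskier to relocate since the system needs it early in boot):

        # One-time copy onto the ephemeral disk
        sudo rsync -a /home/ /mnt/home/

        # /etc/fstab entry so the bind mount is re-created on every boot
        /mnt/home  /home  none  bind  0  0

    One caveat with ephemeral storage: it survives reboots but not instance stop/start or hardware failure, so anything under /mnt should be disposable or backed up elsewhere.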

    Read the article

  • Recycle Bin for Windows Server 2003 File Shares

    - by Joseph Sturtevant
    One of the networks I administer uses Windows Server 2003 file shares to provide network storage for users. To protect against accidental deletion, I use Shadow Copies to create snapshots twice a day. This method is only effective, however, for files which were on the share during the last snapshot. When users accidentally delete files recently placed on the share, I have no recourse except to remote desktop into the server and attempt retrieval with an undelete utility (which is only effective if the file has not been overwritten). Is there a feature like the Windows Recycle Bin for Windows Server 2003 file shares? What is the best way to protect my users against accidental file deletion in this scenario?
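
    Server 2003 has no built-in recycle bin for network shares (third-party tools fill that niche), but the window of loss can be narrowed by snapshotting more often than twice a day; a hedged sketch using a scheduled task (the drive letter is a placeholder, and each volume holds at most 64 shadow copies, so very frequent snapshots age out quickly):

        REM Scheduled task command, e.g. hourly during business hours
        vssadmin create shadow /for=D:

    Users can then pull recent versions themselves from the "Previous Versions" tab, which keeps most restores off the administrator's desk.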

    Read the article

  • Keeping my zsh or bash profile synced up on all my machines.

    - by Joseph Silvashy
    I work on several different machines, all of which are *nix. I have a lot of specific things I like my shell to do, or my prompt to look like, or aliases, etc. I'm sure many of you deal with this as well. What do you think is the best way to keep all my machines' shells acting the same? First off, I'm aware that different machines will need different paths to bins and other differences, so my first inclination is to just include a file at the end of my profile; this is the one we'll keep in sync. What is the best way to keep the files synced up? I could put the file on a remote system and perhaps use git to push, then pull my changes every once in a while. However, isn't rsync better suited for this?
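
    The include-a-shared-file approach maps naturally onto a "dotfiles" git repository: git gives history and real merging where rsync would just clobber whichever side is older. A minimal hedged sketch (the repo location and file names are placeholders):

        # Once per machine: clone and link the shared piece
        git clone user@yourserver:dotfiles.git ~/.dotfiles
        ln -s ~/.dotfiles/shared.sh ~/.shared.sh

        # At the end of ~/.zshrc and ~/.bashrc on every machine:
        [ -f ~/.shared.sh ] && . ~/.shared.sh

        # Day to day: commit and push on one machine, pull on the others
        cd ~/.dotfiles && git pull

    Machine-specific bits (paths, prompts) stay in each local rc file above the include, so only the genuinely shared configuration travels.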

    Read the article

  • Postfix: Relay access denied

    - by Joseph Silvashy
    When I telnet to my server that's running Postfix and try to send an email:

        MAIL FROM:<[email protected]>
        250 2.1.0 Ok
        RCPT TO:<[email protected]>
        554 5.7.1 <[email protected]>: Relay access denied

    I couldn't really find the answer on the site or by looking at other users' questions/answers, and I'm not sure where to start. Ideas?

    Update: Looking at the docs (http://www.postfix.org/SMTPD_ACCESS_README.html, section "Getting selective with SMTP access restriction lists"), I don't seem to have any of those directives in /etc/postfix/main.cf, like

        smtpd_client_restrictions = permit_mynetworks, reject

    or any of the other ones, so I'm quite confused. Really I'm going to have a Rails app connect to the server and send the emails, so I'm not sure how to handle it. Here is what my config file looks like:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = rerecipe-utils
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = $myhostname, localhost.$mydomain, localhost, mail.rerecipe.com, rerecipe.com
        relayhost =
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all
        mynetworks = 127.0.0.0/8 204.232.207.0/24 10.177.64.0/19 [::1]/128 [fe80::%eth0]/64 [fe80::%eth1]/64

    Something to note is that relayhost is blank; this is the default configuration file that was created when I installed Postfix. When testing the connection with openssl I get this:

        ~% openssl s_client -connect mail.myhostname.com:25 -starttls smtp
        CONNECTED(00000003)
        depth=0 /CN=myhostname
        verify error:num=18:self signed certificate
        verify return:1
        depth=0 /CN=myhostname
        verify return:1
        ---
        Certificate chain
         0 s:/CN=myhostname
           i:/CN=myhostname
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIBqTCCARICCQDDxVr+420qvjANBgkqhkiG9w0BAQUFADAZMRcwFQYDVQQDEw5y
        ZXJlY2lwZS11dGlsczAeFw0xMDEwMTMwNjU1MTVaFw0yMDEwMTAwNjU1MTVaMBkx
        FzAVBgNVBAMTDnJlcmVjaXBlLXV0aWxzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
        iQKBgQDODh2w4A1k0qiPNPhkrPj8sfkxpKPTk28AuZhgOEBYBLeHacTKNH0jXxPv
        P3TyhINijvvdDPzyuPJoTTliR2EHR/nL4DLhr5FzhV+PB4PsIFUER7arx+1sMjz6
        5l/Ubu1ppMzW9U0IFNbaPm2AiiGBQRCQN8L0bLUjzVzwoSRMOQIDAQABMA0GCSqG
        SIb3DQEBBQUAA4GBALi2vvk9TGKJubXYJbU0PKmVmsfzFK35yLqr0keiDBhK2Leg
        274sWxEH3ds8mUaRftuFlXb7RYAGNlVyTuMTY3CEcnqIsH7F2McCUTpjMzu/o1mZ
        O/B21CelKetBd1u79Gkrv2vWyN7Csft6uTx5NIGG2+pGi3r0gX2r0Hbu2K94
        -----END CERTIFICATE-----
        subject=/CN=myhostname
        issuer=/CN=myhostname
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1203 bytes and written 360 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: 1AA4B8BFAAA85DA9ED4755194C50311670E57C35B8C51F9C2749936DA11918E4
            Session-ID-ctx:
            Master-Key: 9B432F1DE9F3580DCC6208C76F96631DC5A4BC517BDBADD5F514414DCF34AC526C30687B96C5C4742E9583555A118232
            Key-Arg   : None
            Start Time: 1292985376
            Timeout   : 300 (sec)
            Verify return code: 18 (self signed certificate)
        ---
        250 DSN

    Oddly enough, when I try to send an email from the machine itself, it does work:

        echo test | mail -s "test subject" [email protected]
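
    "Relay access denied" is Postfix refusing to forward mail for a domain it is not the final destination for, from a client that is neither in mynetworks nor authenticated: the recipient's domain is not in mydestination, and the telnet client's IP presumably falls outside mynetworks. For a Rails app submitting mail, the usual hedged fix is SASL-authenticated submission rather than widening mynetworks:

        # main.cf sketch: permit LAN hosts and authenticated clients, reject the rest
        smtpd_sasl_auth_enable = yes
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination

    Local delivery works because mail(1) on the box injects through the local transport, which never passes the relay checks at all. (Wiring up the SASL backend, e.g. Cyrus or Dovecot, is additional configuration not shown here.)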

    Read the article

  • When running a shell script, how can you protect it from overwriting or truncating files?

    - by Joseph Garvin
    If, while an application is running, one of the shared libraries it uses is written to or truncated, the application will crash. Moving the file or removing it wholesale with rm will not cause a crash, because the OS (Solaris in this case, but I assume this is true on Linux and other *nix as well) is smart enough not to delete the inode associated with the file while any process has it open.

    I have a shell script that performs installation of shared libraries. Sometimes it may be used to reinstall versions of shared libraries that were already installed, without an uninstall first. Because applications may be using the already-installed shared libraries, it's important that the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones, so that they're not overwritten or truncated. Unfortunately, an application recently crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made.

    Is there any way to wrap a shell script to protect it from overwriting or truncating files (and ideally failing loudly), while still allowing them to be moved or rm'd? Standard UNIX file permissions won't do the trick, because you can't distinguish moving/removing from overwriting/truncating. Aliases could work, but I'm not sure of the entirety of commands that would need to be aliased. I imagine something like truss/strace, except that before each action it checks against a filter whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script. Ideas I have so far (a sketch of the first two appears after this list):

    - Alias cp to GNU cp (not the default since I'm on Solaris) and use the --remove-destination option.
    - Alias install to GNU install and use the --backup option. It might be smart enough to move the existing file to the backup file name rather than making a copy, thus preserving the inode.
    - "set noclobber" in ~/.bashrc so that I/O redirection won't overwrite files.
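
    Along the lines of the first two ideas, the move-aside step can be made explicit in the install script itself, which sidesteps the overwrite/truncate question entirely: unlinking or renaming the old name leaves the old inode, and thus every running process's mapping, untouched. A hedged sketch (the deleted-folder path and function name are placeholders):

        #!/bin/sh
        # install_lib.sh: sketch; park the old library aside, never overwrite in place
        GRAVEYARD=/var/tmp/deleted   # assumed cron-emptied during a quiet window

        install_lib() {
            src=$1; dest=$2
            if [ -e "$dest" ]; then
                # moving the old file away unlinks the name; open inodes survive
                mv "$dest" "$GRAVEYARD/$(basename "$dest").$$" || exit 1
            fi
            # copy to a temp name, then rename into place (atomic on one filesystem)
            cp "$src" "$dest.tmp.$$" && mv "$dest.tmp.$$" "$dest" || exit 1
        }

        install_lib ./libfoo.so.1 /opt/app/lib/libfoo.so.1

    GNU cp's --remove-destination does the unlink-before-write half of this, but the temp-copy-plus-rename pattern also closes the window where a reader could open a half-written file.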

    Read the article
