Search Results

Search found 78189 results on 3128 pages for 'file management'.


  • Google Blogger Website CName and/or Text File Issues

    - by Francis Gibbons
    I have a Blogger blog and I would like it to show up on my company website. I have read a couple of articles out there on how to do it. A handful of them talk about using FTP, which is old and no longer available. Instead, I am trying to follow along with this one: http://www.infinite42.com/small-business/integrate-blogger-blog-website

    It seems pretty easy, but I am having a problem getting Google to verify the DNS CNAME or TXT record that I created on my Windows 2007 server. Do I need to create this record at the registrar level? Right now the domain is set up at the registrar to point the www record to my server, and on my server I tried both the TXT record and the CNAME record in DNS with no luck.

    Here are the Google instructions for creating a CNAME record in DNS: "Follow the steps below to create a DNS (Domain Name System) record that proves to Google that you own the domain. Add the CNAME record below to the DNS configuration for abc.com. CNAME Label / Host: CNAME Destination / Target: Click Verify below. When Google finds this DNS record, we'll make you a verified owner of the domain. (Note: DNS changes may take some time. If we don't find the record immediately, we'll check for it periodically.) To stay verified, don't remove the DNS record, even after verification succeeds." Here is the link for doing it with a CNAME: http://googlewebmastercentral.blogspot.com/2012/08/domain-verification-using-cname-records.html

    When I go to add my CNAME record in my server's DNS, the only two fields available are Alias Name and Fully Qualified Domain Name. How am I supposed to create this record? Can someone please tell me? Thanks, Frank
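
    For what it's worth, those two Windows DNS fields map directly onto Google's terms: "Alias Name" takes the CNAME Label / Host value, and "Fully Qualified Domain Name" takes the CNAME Destination / Target value (both values come from the verification page; the ones below are placeholders). The same record can be added from the command line with dnscmd, assuming this server is actually authoritative for the zone; if the registrar hosts the zone, the record has to be created at the registrar instead. A sketch:

        dnscmd /RecordAdd abc.com gv-xxxxxxxx CNAME gv-yyyyyyyy.dv.googlehosted.com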

    Read the article

  • Unable to boot Windows after installing Ubuntu 12.04 - error: invalid efi file path

    - by user113350
    I have a laptop (ASUS X310A). I installed Ubuntu 12.04 side by side with Windows 7, but I seem to have a problem booting Windows 7. I used Boot-Repair twice with no results. Boot-Repair info: http://paste.ubuntu.com/1417623/

    The error I get when starting Windows 7 from GRUB is:

        error: invalid efi file path

    In the Boot Manager/Menu I now have 3 options: 2x Ubuntu (maybe because I ran Boot-Repair twice) and 1x Windows Boot Manager (booting this opens the "ASUS Preload Wizard", which only gives me the option to reinstall Windows and lose all previous data).

    When I was making the partitions before installing Ubuntu, I made the new partition by shrinking sda4 and adding an ext4 partition mounted at "/" plus a swap area. I installed, and nothing worked. So I booted Ubuntu from the USB again, deleted the partitions I had made, and decided to shrink sda3 instead; this time the installer gave me the option to mount sda3 at "/windows" or "/dos". I chose neither, because I know it doesn't need to be mounted, and proceeded to create what are now sda7 (ext4) and sda8 (swap area). It still didn't work, so I booted from the USB and ran Boot-Repair the first time; after that I could boot Ubuntu but not Windows. Because I was not able to update Boot-Repair when running from the USB, I redid Boot-Repair from the Ubuntu install on the hard disk (fully updated), and it still didn't work.

    In GRUB this is what I see (when booting with Ubuntu first in the Boot Menu):

        Ubuntu, with Linux 3.2.0-29-generic
        Ubuntu, with Linux 3.2.0-29-generic (recovery mode)
        Windows UEFI loader
        Windows Boot UEFI bootx64.efi.bkp
        Windows 7 (loader) (on /dev/sda3)
        Windows Recovery Environment (loader) (on /dev/sda5)

    I tried all the entries starting with "Windows"; none of them work. Please help. Many thanks!
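
    In the meantime, the duplicate firmware boot entries can at least be inspected and pruned from Ubuntu with efibootmgr (the entry number below is a placeholder; read the real one off the -v listing first):

        sudo efibootmgr -v            # list all firmware boot entries
        sudo efibootmgr -b 0003 -B    # delete entry Boot0003 (placeholder number)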

    Read the article

  • JSCompress fails to compress my js file - why?

    - by Renso
    Issue: you use the online compression utility jscompress.com to compress your .js file, but it fails with an error. Why this may be happening and how to fix it.

    Possible cause: supposedly, not using opening and closing curly brackets in an IF statement can cause this. Well, it turns out that is not the case here. Look at the following example and see if you can figure out what the issue is :-)

        function SetupDeliveredVPRecontactNotes($item, id) {
            var theData;
            $.ajax({
                data: { deliveredVPId: id },
                url: $('#ajaxGetDeliveredVPRecontactNotesUrl').val(),
                type: "GET",
                async: false,
                dataType: "html",
                success: function(data, result) {
                    $item.empty();
                    var input = '<textarea class="recontactNote" rows="4" name="DeliveredVPRecontactNotes_' + id + '" id="DeliveredVPRecontactNotes_' + id + '" cols="115">' + data + '</textarea>';
                    $item.append(input);
                    theData = data;
                },
                error: function(XMLHttpRequest, textStatus, errorThrown) {
                    $item.empty();
                    alert("An error occurred: The operation to retrieve the DeliveredVP's Recontact Notes has failed");
                }
            }); //ajax
            return theData;
        }

    Solution: the name of the method/function is the same as part of the ALERT message once the spaces are removed: "DeliveredVP Recontact Notes" becomes "DeliveredVPRecontactNotes", which matches the name of the function. So I changed it to "DeliveredVP's Recontact Notes".

    Read the article

  • File permission issues after setting up an amazon ec2 instance

    - by Pardoner
    I've set up an Amazon EC2 instance and I'm having some file permission issues. I've created myself a new user and added myself to the following groups:

        adm:x:4:me,ubuntu
        www-data:x:33:me,www-data
        ssh:x:108:me
        admin:x:111:me
        ubuntu:x:1000:www-data,me
        me:x:1001:me

    But when I cd /var/www, I can't run simple commands without putting sudo first. So I ran chown -R www-data:www-data /var/www to ensure that I'm in the owning group, but I still have to type sudo for everything. If I sudo su www-data, it works fine. Since I'm in the www-data group, shouldn't I have the same privileges as www-data?

    One strange thing I'm noticing is that when I ls -l, it lists the owner but not the group names. Could this possibly be part of the issue? Is it possible for a directory to not be part of a group?

        drwxr-xr-x  4 www-data 4.0K Oct 24 16:39 .
        drwxr-xr-x 14 root     4.0K Oct 10 16:58 ..
        drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 admin.mywebsite.com
        drwxrwxr-x  2 www-data 4.0K Oct  4 00:29 mywebsite.com
        drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 staging.mywebsite.com
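
    One easy thing to rule out first (plain Ubuntu tooling, nothing exotic): changes to /etc/group only take effect for new login sessions, so a shell opened before the groups were added still runs with the old membership. Numeric listing also settles whether the group column is really absent or just dropped by that particular ls invocation:

        id                  # shows the groups of the *current* session
        exec su -l $USER    # or simply log out and back in to pick up www-data
        ls -ln /var/www     # numeric IDs always include the owner and group columns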

    Read the article

  • Read only file system error on ubuntu after partitioning

    - by Ranjith R
    I am not sure if I am the root cause of this problem, but this is what I did: I wanted the latest Ubuntu and the latest Linux Mint together on my ThinkPad laptop. Windows 7 was already there. I already had Mint. So I put in the USB with the Ubuntu image and started installing Ubuntu, choosing to install side by side. It was taking a long time to finish defragmenting and partitioning; I became a little impatient and pressed the skip button. After the skipping, I realized that the partitioning was complete and went ahead with installing Ubuntu.

    Now the Linux Mint OS reports the file system as read-only at least once every day, and I have to restart and tell the OS to fix errors on the hard disk. After I press the F key, the system fixes the issues, restarts, and all is well again. Is there some way to fix the issue permanently? I think reinstalling would solve it, but I can't do that, because I have a lot of data and I would have to reinstall and configure a lot of software that I use daily. I ran the SMART check in Disk Utility and the hard disk seems to be fine. I also checked both partitions for errors with Disk Utility and the report says they are fine. Is there something I can do before I reinstall?
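
    Before resorting to a reinstall, a forced offline check is worth one try, from a live USB session so the Mint partition is not mounted (/dev/sda7 is a placeholder; identify the real partition with sudo fdisk -l or lsblk):

        sudo umount /dev/sda7          # ignore the error if it was not mounted
        sudo fsck -f /dev/sda7         # full forced check, not just a journal replay
        sudo tune2fs -l /dev/sda7 | grep -i state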

    Read the article

  • No boot or grub file after using ls command

    - by Poke Moke
    I had Xubuntu installed, I believe version 12.04, and then tried to dual boot with BackBox. It worked from the flash drive, but upon installing it onto the hard drive, I could no longer boot BackBox. I decided I would just clear my OS, put only BackBox on the hard drive, and have a single boot. To do this, I erased my boot folder completely, which led to my current position: I am dropped into the grub rescue prompt. I can't do a system restore, I can't boot from anything I try (including Puppy Linux and a boot rescue disk), and I've looked up the commands and tried to fix this. I can run ls and find the correct hd, but then I'm stuck, as I don't have a boot or grub folder. The command is typed as:

        ls (hd1,msdos1)/

    and listed is:

        ./ ../ lost+found/ etc/ media/ bin/ dev/ home/ lib/ mnt/ opt/ proc/ root/ run/
        sbin/ selinux/ srv/ sys/ tmp/ usr/ var/ initrd.img vmlinuz cdrom/ apfolder
        initrd.img.old vmlinuz.old

    (not sure if it's initrd or init rd.img, as it wraps around the screen there). I've seen commands that reference the boot or grub folders when they exist, but as listed, they aren't there. Any ideas?
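
    Since the listing shows no boot directory at all, grub rescue has nothing left to load, so the usual rescue-prompt commands can't help. One way forward, sketched from a live USB session (where /dev/sdb1 is an assumption for what GRUB calls (hd1,msdos1); verify with sudo fdisk -l first), is to reinstall GRUB's files onto that disk:

        sudo mount /dev/sdb1 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sdb
        # after rebooting into the installed system, rebuild the menu:
        sudo update-grub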

    Read the article

  • how to create java zip archives with a max file size limit [closed]

    - by Marci Casvan
    I need to write an algorithm in Java (for an Android app) to read a folder containing more folders, each of those containing images and audio files, so the structure is this: mainDir/subfolders/myFile1.jpg. It must be in Java; something like a Perl script is not an option. It would preferably operate on the compressed archive, in order to squeeze as many files as possible into it before mailing the zip. Just a normal zip (no jar).

    My problem is that I need to limit the size of each archive to 16 MB and, at runtime, create as many archives as needed to contain all the files from my main mainDir folder. I tried several examples from the net and read the Java documentation, but I can't manage to understand it and put it all together the way I need it. Has someone done this before, or does anyone have a link or an example for me? I solved reading the files with a recursive method, but I can't write the logic for the zip creation. I'm open to suggestions or, better, a working example.

    EDIT: "FileNotFoundException (no such file or directory)" was my initial post at Stack Overflow. I got an answer to it, but I can't set the size of the ZipEntry, the logic doesn't work, and when extracting my files from the zip I get a "compression method not supported" error.
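
    A minimal sketch of one way to do it (an outline under stated assumptions, not a drop-in solution): since .jpg and .mp3 files are already compressed, their deflated size stays close to their raw size, so the current archive can simply be rolled over to a new one when the running total of raw sizes approaches the limit, with some headroom for entry headers. Note that a single file larger than the limit would still produce an oversized archive and needs separate handling.

        import java.io.IOException;
        import java.nio.file.*;
        import java.util.stream.Stream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;

        public class ZipSplitter {
            static final long LIMIT  = 16L * 1024 * 1024; // 16 MB per archive
            static final long MARGIN = 64L * 1024;        // headroom for zip entry headers

            public static void main(String[] args) throws IOException {
                Path mainDir = Paths.get(args[0]);        // e.g. mainDir/subfolders/myFile1.jpg
                Path outDir  = Paths.get(args[1]);

                ZipOutputStream zos = null;
                long used = 0;
                int index = 0;

                try (Stream<Path> walk = Files.walk(mainDir)) {
                    for (Path p : (Iterable<Path>) walk.filter(Files::isRegularFile)::iterator) {
                        long size = Files.size(p);
                        // roll over to a new archive before this file would overflow it
                        if (zos == null || used + size + MARGIN > LIMIT) {
                            if (zos != null) zos.close();
                            zos = new ZipOutputStream(Files.newOutputStream(
                                    outDir.resolve("archive-" + index++ + ".zip")));
                            used = 0;
                        }
                        String entryName = mainDir.relativize(p).toString().replace('\\', '/');
                        zos.putNextEntry(new ZipEntry(entryName));
                        Files.copy(p, zos);               // stream the file into the entry
                        zos.closeEntry();
                        used += size;                     // raw size as a stand-in for deflated size
                    }
                }
                if (zos != null) zos.close();
            }
        }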

    Read the article

  • Getting "Unable to find a medium containing a live file system" when installing 10.10

    - by Krastin Konstantinov
    I got this error while trying to install Ubuntu 10.10 from a bootable USB stick onto a Sony Vaio P series laptop. The disk boots into the language and installation type screen; after that it goes through the splash and pulls up this error:

        BusyBox v1.13.3 (Ubuntu 1:1.13.3-ubuntu11) built-in shell (ash)
        Enter 'help' for a list of built-in commands.
        (initramfs) Unable to find a medium containing a live file system

    After this error the installation fails to start. I have used the same USB stick on some other laptops and the installation started as usual. Any help will be appreciated. My installation is i386 and my machine is a Vaio P VGN-P610. I've tried every possible thing:

        - BIOS: [enable boot external]
        - Boot order: [external] [hard drive] [network boot]
        - 2 different USB drives
        - 2 different external CD drives
        - 6 different downloads of both the desktop and netbook remix; all downloads were checked with MD5SUM

    Ubuntu Desktop 9.10 installs properly in every version and from every source. Getting reaaaally frustrated.

    Read the article

  • Cannot Create Bootable USB Drive from .iso file

    - by tarabyte
    I've tried formatting the flash drive as FAT as well as Mac OS Extended (Journaled) through Disk Utility, but still cannot successfully create a bootable drive. I'm following the directions here exactly: http://www.ubuntu.com/download/help/create-a-usb-stick-on-mac-osx

    Environment: a MacBook Pro, trying to create a bootable flash drive for a MacBook Pro; an 8 GB flash drive. Tested the ubuntu-12.04.1 as well as the Ubuntu 12.10 64-bit .iso downloads. Nothing to repair in Disk Utility for this hard drive.

    Every time I finish step 8 of the tutorial I get "file system not recognized" with the options to "Initialize" (meaning reformat my drive), "Ignore" or "Eject". When I plug the flash drive back in and re-inspect it in Disk Utility, I see an error when I try to verify it, but the "Repair" button is disabled. I just want to boot to Ubuntu when my Mac first starts up. Oh the pain. http://lifehacker.com/5934942/how-to-dual-boot-linux-on-your-mac-and-take-back-your-powerhouse-apple-hardware "linux is free insomuch as your time is worthless" - old wise man
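
    For reference, the command-line steps behind that tutorial (disk numbers are placeholders; double-check with diskutil list, since dd to the wrong device is destructive). Note that the "file system not recognized" dialog at the end is expected (OS X cannot read the Ubuntu filesystem that ends up on the stick), and clicking Ignore is the correct choice:

        hdiutil convert -format UDRW -o ubuntu.img ubuntu-12.04.1-desktop-amd64.iso
        diskutil list                      # identify the USB stick, e.g. /dev/disk2
        diskutil unmountDisk /dev/disk2
        sudo dd if=ubuntu.img.dmg of=/dev/rdisk2 bs=1m   # hdiutil appends .dmg
        diskutil eject /dev/disk2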

    Read the article

  • File doesn't exist when trying to change permissions following the avasys image scan manual

    - by Howard Graham
    I was finally able to connect to avasys.jp and downloaded and installed iscan_2.28.1-3.ltdl7_amd64.deb and iscan-data_1.13.0-1_all.deb. The programs appeared to install correctly. I then ran sane-find-scanner and got back:

        found USB scanner (vendor=0x04b8, product=0x012d) at libusb:001:003

    I then ran lsusb and got back:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 003: ID 04b8:012d Seiko Epson Corp. Perfection V10/V100 (GT-S600/F650)
        Bus 001 Device 004: ID 03f0:4817 Hewlett-Packard
        Bus 002 Device 002: ID 093a:2510 Pixart Imaging, Inc. Optical Mouse

    The Avasys Image Scan manual instructed me to run chmod 0666 /proc/bus/usb/001/003, which returned:

        chmod: cannot access `/proc/bus/usb/001/003': No such file or directory

    In 12.04, no such directory exists; 12.04 appears to deal with USB in another way. What must I do to get USB port 001/003 recognized by xsane and sane as the port where the scanner can be located? What must I do to continue installing the scanner?
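
    Since /proc/bus/usb is gone in 12.04, the modern equivalent of that chmod is a udev rule keyed on the scanner's vendor/product IDs from the lsusb output above (the file name below is an arbitrary choice):

        # /etc/udev/rules.d/79-epson-scanner.rules
        SUBSYSTEM=="usb", ATTRS{idVendor}=="04b8", ATTRS{idProduct}=="012d", MODE="0666"

        sudo udevadm control --reload-rules
        # then unplug and replug the scanner so the rule is applied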

    Read the article

  • configuration issue with respect to .htaccess file on ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/tshirtshop
            <Directory /var/www/tshirtshop>
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    and the following in the .htaccess file at /var/www/tshirtshop/.htaccess:

        <IfModule mod_rewrite.c>
            # Enable mod_rewrite
            RewriteEngine On
            # Specify the folder in which the application resides.
            # Use / if the application is in the root.
            RewriteBase /tshirtshop
            #RewriteBase /
            # Rewrite to correct domain to avoid canonicalization problems
            # RewriteCond %{HTTP_HOST} !^www\.example\.com
            # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
            # Rewrite URLs ending in /index.php or /index.html to /
            RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP
            RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L]
            # Rewrite category pages
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L]
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L]
            # Rewrite department pages
            RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L]
            RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L]
            # Rewrite subpages of the home page
            RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L]
            # Rewrite product details pages
            RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L]
        </IfModule>

    The site works on localhost, but behaves as if no .htaccess rule were specified: if I view a page as http://localhost/tshirtshop/nature-d2 I get a 404 error, but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it.

    sudo apache2ctl -M shows:

        Loaded Modules:
         core_module (static)
         log_config_module (static)
         logio_module (static)
         mpm_prefork_module (static)
         http_module (static)
         so_module (static)
         alias_module (shared)
         auth_basic_module (shared)
         authn_file_module (shared)
         authz_default_module (shared)
         authz_groupfile_module (shared)
         authz_host_module (shared)
         authz_user_module (shared)
         autoindex_module (shared)
         cgi_module (shared)
         deflate_module (shared)
         dir_module (shared)
         env_module (shared)
         mime_module (shared)
         negotiation_module (shared)
         php5_module (shared)
         reqtimeout_module (shared)
         rewrite_module (shared)
         setenvif_module (shared)
         status_module (shared)
        Syntax OK

    Can anyone point out a mistake in the above configuration, or anything else I should check?
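
    A quick way to tell whether the .htaccess file is being consulted at all (a throwaway test, not a fix): append a line of garbage to it and reload the page. A 500 error proves Apache reads the file, while an unchanged 404 points at AllowOverride being disabled for the directory actually serving the request. Since the page is reached as http://localhost/tshirtshop/, it may be served by the default vhost rather than the one above, so its <Directory /var/www> block is worth checking too (the file name below is the Ubuntu default and may differ):

        echo 'garbage' | sudo tee -a /var/www/tshirtshop/.htaccess
        # reload the page, then remove the test line again
        grep -n -A4 '<Directory /var/www' /etc/apache2/sites-enabled/000-default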

    Read the article

  • Understanding IDAT chunk of PNG file format

    - by DRapp
    From the sample image below, I have a border in yellow for display purposes only. The actual .png file is a simple black/white image, 3 pixels by 3 pixels. I was originally thinking to try a 2x2, but that would not help in interpreting a low/high versus high/low drawing stream. At least this way, I would have two black and one white from the top, or one white and two black from the bottom. So I read the chunks of data, get to the IDAT chunk, decode that (zlib), and come up with the following six bytes:

        00 20 00 40 00 80

    So, my question: how does the above get broken down into the 3x3 black and white sample? Also, it is saved in palette format and properly recognizes the bit depth of 1 and a color palette of 2: palette[0] is RGBA all zeros; palette[1] has RGBA of 255, 255, 255, 0.

    I'll eventually get into the other bit-depth formats later; I just wanted to start with what I expect to be the easiest.

    Part II: any guidance on handling the other depth formats would help, especially anything special to be considered regarding the alpha channel (which I am already looking for in the palette) that might trip me up.
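
    As a worked decoding of those bytes (this follows from the PNG specification's scanline layout): at bit depth 1, each 3-pixel row needs one data byte, and every scanline is preceded by one filter-type byte, so a 3x3 image decompresses to exactly the six bytes shown. With filter type 0 (None), pixel bits are packed most-significant-bit first, and the trailing five bits of each data byte are padding:

        00 20  ->  filter None, bits 001xxxxx  ->  row 1 palette indices: 0 0 1
        00 40  ->  filter None, bits 010xxxxx  ->  row 2 palette indices: 0 1 0
        00 80  ->  filter None, bits 100xxxxx  ->  row 3 palette indices: 1 0 0

    Each index then selects palette[0] or palette[1] as described above, giving a diagonal running from the top-right pixel to the bottom-left one.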

    Read the article

  • Who keeps removing that file?

    - by mgerdts
    Over the years, I've had many times when some file gets removed and there's no obvious culprit. With dtrace, it is somewhat easy to figure out:

        #! /usr/sbin/dtrace -wqs

        syscall::unlinkat:entry
        /cleanpath(copyinstr(arg1)) == "/dev/null"/
        {
                stop();
                printf("%s[%d] stopped before removing /dev/null\n", execname, pid);
                system("ptree %d; pstack %d", pid, pid);
        }

    That script will stop the process trying to remove /dev/null before it does it. You can allow it to continue by restarting (unstopping?) the command with prun(1) or killing it with kill -9. If you want the command to continue automatically after getting the ptree and pstack output, you can add "; prun %d" and another pid argument to the system() call.

    Read the article

  • Spotlight on Oracle Social Relationship Management. Social Enable Your Enterprise with Oracle SRM.

    - by Pat Ma
    Facebook is now the most popular site on the Internet, and people are tweeting more than they send email. Because there are so many people on social media, companies and brands want to be there too. They want to be able to listen to social chatter, engage with customers on social, create great-looking Facebook pages, and roll out social-collaborative work environments within their organization. This is where Oracle Social Relationship Management (SRM) comes in. Oracle SRM is a product that allows companies to manage their presence with prospects and customers on social channels. Let's talk about two popular use cases for Oracle SRM.

    Easy Publishing - Companies now have an average of 178 social media accounts, with every product, geography or employee group creating its own social media channel. For example, if you work at an international hotel chain where every single hotel creates its own Facebook page for its location, that chain can have well over 1,000 social media accounts. Managing these channels is a mess: logging in and out of every account, making sure that all accounts are on brand, and preventing rogue posts from destroying the brand. This is where Oracle SRM comes in. With Oracle Social Relationship Management, you can log into one window and post messages to all 1,000+ social channels at once. You can set up approval flows and have each account generate its own content, but that content must be approved before publishing. The benefits are easy social media publishing, brand consistency across all channels, and protection of your brand from inappropriate posts.

    Monitoring and Listening - People are writing and talking about your company right now on social media. 75% of social media users have written a negative post about a brand after a poor customer service experience. Think about all the negative posts you see in your Facebook news feed about delayed flights or being on hold for 45 minutes. There is so much social chatter going on around your brand that it's almost impossible to keep up or comprehend what's going on. That's where Oracle SRM comes in. With Social Relationship Management, a company can monitor and listen to what people are saying about them on social channels. They can drill down into individual posts or get a high-level view of trends and mentions. The benefits are comprehending what's being said about your brand and its competitors, understanding customers and their intent, and responding to negative posts before they become a PR crisis.

    Oracle SRM is part of Oracle Cloud. The benefits of cloud deployment for customers are faster deployments, less maintenance, and lower cost of ownership versus on-premise deployments. Oracle SRM also fits into Oracle's vision to social enable your enterprise: with Oracle SRM, social media is not just a marketing channel but also a mechanism for sales, customer support, recruiting, and employee collaboration. For more information about how Oracle SRM can social enable your enterprise, please visit oracle.com/social. For more information about Oracle Cloud, please visit cloud.oracle.com.

    Read the article

  • Are VMWare ESXi 5 patches cumulative?

    - by ewwhite
    It seems basic, but there's confusion about the patching strategy needed to manually update standalone VMware ESXi hosts. The VMware vSphere blog attempts to explain this, but it's still not clear. From the blog:

        Say Patch01 includes updates for the following VIBs: "esxi-base", "driver10" and "driver44". And then later Patch02 comes out with updates to "esxi-base", "driver20" and "driver44". Patch02 is cumulative in that the "esxi-base" and "driver44" VIBs will include the updates in Patch01. However, it's important to note that Patch02 does not include the "driver10" VIB, as that module was not updated.

    Many of my ESXi installations are standalone and do not make use of Update Manager. It is possible to update an individual host using the patches made available through the VMware patch download portal, and that part of the process is quite simple. The bigger issue is determining what to actually download and install.

    In my case, I have a good number of HP-specific ESXi builds that incorporate sensors and management for HP ProLiant hardware. Let's say that those servers start at ESXi build #474610 from 9/2011. Looking at the patch portal screenshot below, there is a patch for ESXi update01, build #623860. There are also patches for builds #653509 and #702118. Coming from the old version of ESXi, what is the proper approach to bring the system fully up to date? Which patches are cumulative and which need to be applied sequentially? Perhaps the download size is the confusing factor, but is installing the newest build the right approach, or do I need to step back and patch incrementally?
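
    For the mechanical part, a sketch of applying one downloaded bundle to a standalone host from the ESXi shell (the bundle name and datastore path are placeholders; the host should be in maintenance mode and the zip uploaded to a datastore first):

        vim-cmd hostsvc/maintenance_mode_enter
        esxcli software vib update -d /vmfs/volumes/datastore1/ESXi500-201209001.zip
        reboot
        # after the host comes back up:
        vim-cmd hostsvc/maintenance_mode_exit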

    Read the article

  • How to use Salt Stack with minions all behind NAT (not publicly accessible, default salt ports not open)?

    - by MountainX
    Can Salt Stack minions communicate with the salt master from behind NAT/firewalls, using standard ports that would be open by default in all consumer NAT routers (and without the minions having a public DNS record or static IP)?

    I'm working my way through my first salt tutorial, and this is where I'm stuck. I am able to configure iptables on the Ubuntu salt-master, but I have no control over the routers/NAT that the minions will sit behind. So far I tried these settings:

        # /etc/salt/master
        publish_port: 465
        ret_port: 443

        # /etc/salt/minion
        master_port: 465

    That did not work.

    Background: I have a custom-developed application presently running on about 40 Kubuntu laptops (with more planned). Every few months I have to update the application (often this just amounts to replacing a .jar file, which requires root permissions). I also have to run Ubuntu updates and a few other minor things. I've been doing it manually, one laptop at a time, using TeamViewer to log into each client. I would like to dramatically improve this process. The two options I'm aware of are:

        - Use reverse SSH tunnels and bash scripts. I tested this and it works, but I don't get any of the reporting, etc., that I would get with Salt Stack.
        - Use Salt Stack (or a similar management tool). But I need a really simple tool; I can't invest any time in a big learning curve.

    I looked at Puppet and a bunch of related tools. The only one I found that looked simple enough for me (so far) was Salt Stack. But I'm stuck now because my minion can't reach the salt-master, as stated above. I appreciate suggestions.
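
    One point about salt's transport that may unblock this: the minion always initiates the TCP connections to the master, and the master never connects back, so minions behind consumer NAT routers need no port forwarding, public DNS or static IPs at all; outbound connections on arbitrary ports are allowed by default. Only the master's two ports must be reachable from the outside, so a sketch of the standard setup (salt.example.com is a placeholder for the master's public name or IP) is:

        # /etc/salt/master - these are the defaults, stated explicitly
        publish_port: 4505
        ret_port: 4506

        # on the Ubuntu master, open those ports
        sudo ufw allow 4505:4506/tcp

        # /etc/salt/minion
        master: salt.example.com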

    Read the article

  • Adding an user to samba

    - by JustMaximumPower
    I'm trying to set up some samba shares in my home network on an Ubuntu 12.04 machine. Everything works fine for my user account (max), but I cannot add any new user: every new user I add cannot use the shares. It's likely that the error is very basic to the concept of samba, but please don't just tell me to read the docs; I've been trying that for about 2 weeks now.

    I set up the server with my user max, who can mount transfer and the share max. Then I added the user simon with

        sudo adduser --no-create-home --disabled-login --shell /bin/false simon

    because the user should not be able to ssh into the machine. I ran sudo smbpasswd -a simon, set a (samba) password for simon, and added a share for simon. I also added simon to transferusers to give him access to the share transfer. But simon can't connect to transfer or simons.

    Output of testparm:

        Load smb config files from /etc/samba/smb.conf
        rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
        Processing section "[printers]"
        Processing section "[print$]"
        Processing section "[max]"
        Processing section "[simons]"
        Processing section "[transfer]"
        Loaded services file OK.
        Server role: ROLE_STANDALONE
        Press enter to see a dump of your service definitions

        [global]
            server string = %h server (Samba, Ubuntu)
            map to guest = Bad User
            obey pam restrictions = Yes
            pam password change = Yes
            passwd program = /usr/bin/passwd %u
            passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
            unix password sync = Yes
            syslog = 0
            log file = /var/log/samba/log.%m
            max log size = 1000
            dns proxy = No
            usershare allow guests = Yes
            panic action = /usr/share/samba/panic-action %d
            idmap config * : backend = tdb

        [printers]
            comment = All Printers
            path = /var/spool/samba
            create mask = 0700
            printable = Yes
            print ok = Yes
            browseable = No

        [print$]
            comment = Printer Drivers
            path = /var/lib/samba/printers

        [max]
            comment = Privater share von Max
            path = /media/Main/max
            read only = No
            create mask = 0700

        [simons]
            comment = Privater share von Simon
            path = /media/Main/simon
            read only = No
            create mask = 0700

        [transfer]
            comment = Transferlaufwerk
            path = /media/Main/transfer
            read only = No
            create mask = 0755

    The files in /media/Main:

        drwxrwxr-x 17 max   max             4096 Oct  4 19:13 max/
        drwx------  5 simon max             4096 Aug  4 15:18 simon/
        drwxrwxr-x  7 max   transferusers 258048 Oct  1 22:55 transfer/
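
    Two checks that often catch exactly this symptom (standard samba tooling, run on the server itself): confirm the account actually exists in samba's own database and is not disabled, and test a share locally to separate samba problems from network ones:

        sudo pdbedit -L                           # accounts samba knows about
        sudo smbpasswd -e simon                   # make sure the account is enabled
        smbclient //localhost/transfer -U simon   # test the share from the server itself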

    Read the article

  • How can the little guys effectively learn and use puppet?

    - by drumfire
    Six months ago, in our not-for-profit project, we decided to start migrating our system management to a Puppet-controlled environment because we are expecting our number of servers to grow substantially between now and a year from now. Since the decision was made, our IT guys have become a bit too annoyed a bit too often. Their biggest objections are:

        - "We're not programmers, we're sysadmins."
        - Modules are available online, but many differ from one another; wheels are being reinvented too often. How do you decide which one fits the bill?
        - Code in our repo is not transparent enough; to find out how something works, they have to recurse through manifests and modules they might have even written themselves a while ago.
        - One new daemon requires writing a new module, and conventions have to be similar to those of other modules: a difficult process.
        - "Let's just run it and see how it works."
        - Tons of hardly known 'extensions' in community modules: 'trocla', 'augeas', 'hiera'... How can our sysadmins keep track?

    I can see why a large organisation would dispatch their sysadmins to Puppet courses to become Puppet masters. But how would smaller players get to learn Puppet to a professional level if they do not go to courses and basically learn it via their browser and editor?

    Read the article

  • How to add an SSH user to my Ubuntu 12 server to upload PHP files

    - by user229209
    I have an Ubuntu 12 VPS and wanted to create a user account to upload and download my PHP code. So, while logged in as root, I created a user "chris" and then created a directory /var/www/chris. I want "chris" to be able to upload and run files in the /var/www/chris directory. Permissions for the chris directory look like this:

        drwxrwxr-x 2 root chris 4096 Aug 20 03:35 chris

    As root I created a sample file called abc.php and put it in the chris directory. It worked fine when I tested it in a browser. I then logged in as chris and uploaded a file called 1234.php. That did not work: I just got a blank PHP page. The code was identical in both files, so it is not the code. The permissions now look like this:

        -rw-r--r-- 1 root chris 59 Aug 20 03:34 1234.php
        -rw-r--r-- 1 root root  49 Aug 20 03:21 abc.php

    How do I allow the "chris" user to upload files and get them to work?
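
    Two things worth checking (the paths below are the ones from the question): whether the uploaded file differs from the working one in some invisible way (a UTF-8 BOM or Windows line endings ahead of <?php can render as a blank page), and whether chris should simply own his directory so uploads stop ending up owned by root:

        diff /var/www/chris/abc.php /var/www/chris/1234.php
        file /var/www/chris/*.php     # reports a BOM or CRLF line terminators
        sudo chown -R chris:www-data /var/www/chris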

    Read the article

  • Lightweight Linux distro that includes developer tools? (or, the most BSD-like Linux)

    - by RevAaron
    I cut my teeth on Minix and Slackware 1.1, but I've been in the OS X wilderness for the last few years. I'm trying to standardize on a Linux distribution for personal and work-related use on less powerful laptops and under virtualization. So far, NetBSD and OpenBSD are the best fit for my purposes, but after plenty of frustration I've come to the conclusion that I need to stick with Linux to get the hardware and software support that comes with it. What I like about NetBSD/OpenBSD and would like to keep:

        - X, but no default KDE, GNOME or XFCE!
        - A sensible /etc and dotfile setup: startx calls xinit, xinit looks for ~/.xinitrc; nothing more complicated than that is needed.
        - Command-line tools and file-based configuration: I shouldn't need a GUI to connect to a WAP.
        - A decent selection of binary packages; building from source is OK, but nothing source-only like Gentoo. pkg_add (BSD) and apt-get have both treated me well in the past.
        - Modest RAM and HDD requirements: boot + X + awesome + two xterms takes up 80 MB on OpenBSD and 240 MB on Debian 5 and Crunchbang.

    In my experience, most "lightweight" distros and live CDs focus on a nice desktop environment crammed onto a CD or USB stick; once you add build-essential you end up with something just about as bloated as a full Ubuntu or Debian install. Crunchbang is a great example. Thanks in advance for all suggestions!

    Read the article

  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
    In my organization, I work with a group of NOC staff, budding junior engineers and a handful of senior engineers; all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see that there's a split in the skill sets that tends to grow over time...

        - There are engineers who know one or several particular technologies well and are constantly immersed... e.g. MySQL, firewalls, SAN storage, load balancers...
        - There are others who are generalists and can navigate multiple technologies.

    All learn enough Linux (commands, processes) to do what they need and use on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation and configuration management methodologies. For instance, we have two engineers who do the bulk of Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at BASH shell scripting.

    Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly-teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not/cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?

    Read the article

  • Ubuntu rm not deleting files

    - by ILMV
    My colleague and I have been struggling with deleting a directory and its contents. We are working on a new version of our website's source code on Ubuntu 8.04 (dir: /var/www/websites); what we want to do is delete the websites directory and recreate it from a .tar backup we created a couple of weeks ago. The purpose of this is to run our deployment procedure in a local environment before we do so on our live/public environment. We use this command:

        rm -r websites

    This deletes the directory and the files within it. The problem occurs when we un-tar our backup file and view the website: we are getting files that don't exist in the .tar backup; in fact, these files were only created a few days ago and should have been deleted. We delete the directory once more in the manner stated above, then create a new websites directory using the mkdir command. Strangely, at this stage the 'deleted files' do not come back, but if we unpack our .tar file, the 'deleted files' appear again. Is there a way to ensure these files are deleted, or at least the pointers that associate them with said directory?

        - Our .tar backup does not include these files
        - We do not want to use the shred command
        - We do not want to use third-party applications
        - The solution should work via terminal (SSH)

    Many thanks!

    EDIT: Er... we fixed it. It turns out the files were reappearing because of a link we have to another directory (outside /var/www/websites); we were restoring the link but not deleting the files on the other end. D'oh! Many thanks for your help guys... Friday afternoon syndrome :-)
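
    For anyone hitting the same thing: listing the symbolic links inside the tree before deleting would have flagged the cause, since rm -r removes the link itself and never touches the files on the other end of it:

        find /var/www/websites -type l -exec ls -l {} +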

    Read the article

  • managing a high traffic media sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a pretty minimal amount of media to share, but the basic idea is that artists will set up a profile page with music they have made available, and consumers will visit the page and listen to the music. We are starting with just a handful of artists, but I think this project will generate more and more artist pages. Eventually I'd like to set it up so consumers can create personalized playlists.

    How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am pretty illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn-rate estimates. I don't have a ton of capital to start with; I am putting together a business plan and seeking investments. I have a game plan to grow fast enough to be successful and slow enough to manage the financial growth requirements.

    Any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance. Jordan Westerman, D.B.A. Badfish Productions, LLC

    Read the article

  • Wireless USB keyboard and mouse can wake system, but then receiver is inactive

    - by BlueMonkMN
    I have a Microsoft-brand USB device that acts as a receiver for a wireless Microsoft keyboard and a wireless mouse. When it's operating normally, there are LEDs on the device indicating Caps Lock, Num Lock and Function Lock, of which the latter two are usually lit. It is plugged into a Dell Inspiron 531 with Windows 7 32-bit running on an AMD Athlon 64 X2 Dual Core processor 5000+.

    When the computer goes to sleep (the power indicator on the main box is flashing), I can wake it by moving the mouse. So far all is good. However, something changed in, I think, the past couple of weeks (I suspect due to a Microsoft driver update problem). Before the change, after waking the computer everything would operate normally as far as I could tell, but now, after waking the computer, the receiver has no lights on and the keyboard and mouse are completely unresponsive (which is odd, considering the mouse woke up the computer). There is a button on the receiver that's supposed to reset the wireless connection and flash the lights while doing so, but it has no effect in this state. It's as if the receiver doesn't have power (but how would the system know I moved the mouse unless the power was on until it woke up?).

    I have checked the BIOS/CMOS settings and did not see anything related to USB in the power management section. I have checked Windows 7 Device Manager and ensured that all the USB Root Hub devices have the setting unchecked for allowing USB power to be turned off. Like I said, this was working before, and the only thing I can think of that's changed is applying Windows Updates.
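
    On the Windows side, two built-in commands can at least confirm which devices are armed to wake the machine and what actually woke it last (run from an elevated Command Prompt); it may also be worth checking the Power Management tab on the receiver's own Device Manager entry, not just the USB Root Hubs:

        powercfg /devicequery wake_armed
        powercfg /lastwake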

    Read the article

  • Ubuntu: Getting rid of a mimetype entry

    - by Epaga
    I have a pesky mimetype entry that I can't seem to get rid of. Here is the current situation:

        xdg-mime query filetype myfile.mfe
        application/pesky

    Using assogiate, I have found the information about this mime type entry (but can't delete it there). I have the following 'pesky.xml' XML file, which was used to create the mime type (as far as I can tell, since it exactly matches the entry in assogiate):

        <?xml version='1.0'?>
        <mime-info xmlns='http://www.freedesktop.org/standard'>
          <mime-type type="application/pesky">
            <comment>my pesky type</comment>
            <glob pattern="*.mfe"/>
            <magic priority="100">
              <match type="string" offset="0" value="application/pesky"/>
            </magic>
          </mime-type>
        </mime-info>

    However, the following has no effect; the file association remains:

        sudo xdg-mime uninstall --mode system --novendor pesky.xml

    Any ideas?
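
    In case it helps: xdg-mime install copies the XML into a mime packages directory, and uninstall only works if it can find that copy, so locating the installed file, removing it by hand and regenerating the database usually clears a stuck entry (the paths below are the usual system and per-user locations):

        grep -rl 'application/pesky' /usr/share/mime/packages ~/.local/share/mime/packages
        sudo rm /usr/share/mime/packages/pesky.xml    # use whichever file grep found
        sudo update-mime-database /usr/share/mime
        update-mime-database ~/.local/share/mime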

    Read the article
