Search Results

Search found 892 results on 36 pages for 'mirror'.

Page 25 of 36

  • Post build events using ROBOCOPY instead of XCOPY

    - by Vizioz Limited
    I don't know about you, but for a long time I have used XCOPY statements in my Visual Studio post-build events to copy my Umbraco files from the project folders to the local version of the website associated with the project. For the last few months we have been building a website framework for a client, who has subsequently sold the site to 5 clients, each with a different skin and some variations in their functional requirements. So we now have a single-source solution that builds and copies the site files into 5 separate local websites, which enables us to easily test them all. What we found was that this process was starting to slow up our builds, reaching 30-45 seconds on a high-spec quad-core machine (and slower on others). Today I asked Colin to create separate Solution Configurations within Visual Studio so that while we were developing we could target a single site, and when we wanted to test all sites we could target "ALL" and the post-build script would then copy the files to all sites. This worked well, and with a couple of other optimisations our build was down to about 10 seconds for a single site. Then Colin came across ROBOCOPY and suggested that maybe this would be a suitable alternative to XCOPY. Well, I had not heard of it (shock horror, some of you shout; some, I am sure, are like me and wondering what it is!). ROBOCOPY is new in Windows Vista and Windows 7 (you can also download it for XP and Windows 2003) and it has a lot of additional features. The three switches that were most interesting to us were:
    /MIR = Mirror a folder tree
    /XD = Exclude directories
    /NP = No progress (i.e. it does not give you a chart of its results, which just fills up your Output window!)
    So we set about implementing ROBOCOPY. We decided to use the /MIR switch on all folders that we knew were always stored in our project folders: images, css, masterpages and xslt. For other files we just used the straight ROBOCOPY functionality. We also excluded all the .svn directories using the /XD switch, and finally we added the /NP switch as mentioned above. The beauty of all of this is the /MIR functionality: only files that have changed are copied across, which greatly speeds up the process, especially on the images folders, which previously copied across on every build. Now, if we add a new image to the project it is copied across automatically and then never again, unless we change it of course! The build time now for all sites is approximately 4 seconds, and for a single site 2 seconds. I would highly recommend taking the time to make the same optimisations to your build processes if you have not done so already.
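    For reference, here is a minimal sketch of what such a post-build event can look like. The folder names and Visual Studio macros are placeholders for your own project layout, not the exact commands from our solution:

        rem Mirror the asset folders, skip Subversion metadata, suppress the progress chart
        robocopy "$(ProjectDir)css" "$(SolutionDir)..\Site1\css" /MIR /XD .svn /NP
        robocopy "$(ProjectDir)images" "$(SolutionDir)..\Site1\images" /MIR /XD .svn /NP
        rem ROBOCOPY returns 1 when it copies files, which Visual Studio treats as a failed
        rem build step, so a common trick is to swallow every exit code below 8 (8 and above are real errors)
        if %ERRORLEVEL% LSS 8 exit 0

    One design note: because /MIR deletes destination files that no longer exist in the source, it is worth pointing it only at folders the build fully owns.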

    Read the article

  • Disaster Recovery Plan – Rebuild System Disk (Dell Server 2900 with PERC RAID controller)

    - by Jim Lahman
    Goal: Since the system disk is a RAID 1 mirrored set, we can rebuild the shadow set by replacing one of the good members with a blank disk.
    Steps:
    1. Shut down and power down the server.
    2. Remove the disk from bay 9, which is part of the system shadow set, and put it on the shelf.
    3. Label the new disk, then insert the blank/old disk into the empty bay.
    4. Power up the server. During the boot process the message "Some configured disks have been removed from your system..." appears.
    5. Press 'C' to load the configuration utility, then press 'Y' to confirm loading the foreign configuration.
    6. In this example the system shadow set is Disk Group 2 (before proceeding, confirm this is the disk group in your case). Expanding the physical disks shows a disk in bay 8 and a missing disk in bay 9, which is correct. Now the newly inserted disk has to be included in this group; at this point the RAID controller reports bay 9 as empty.
    7. There may be times when the new disk is seen as a foreign disk. In that case: press CTRL-N (Next Page) to go to Foreign Mgt. All the disk groups will be displayed; typically the disk group containing the foreign disk will be grey. To remove the foreign disk, highlight Controller, press F2, select Foreign, then select Clear (do NOT import the configuration!). With the foreign configuration cleared, the disk can be brought into the system shadow set disk group as a hot spare.
    8. To include the newly inserted disk in the system shadow set disk group, bring it in as a dedicated hot spare: highlight Disk Group 2 (VD Management), hit F2, select 'Manage Ded. HS', select the disk in bay 9 (hit the space bar to select), tab to 'OK' and hit the return key. The hot spare is pulled into the RAID 1 mirror set.
    9. The rebuild commences automatically.
    10. Restart now, or restart after the rebuild is completed.

    Read the article

  • RAID5 over LVM on Ubuntu Server 12.04.3

    - by April Ethereal
    I'm trying to create a RAID5 software array using LVM. I use VirtualBox as I'm only learning how LVM works. So I've created 4 virtual SCSI drives and then did the following:
    pvcreate /dev/sd[b-e]
    vgcreate raid5_vg /dev/sd[b-e]
    lvcreate --type raid5 -i 3 -L 1G -n raid_lv raid5_vg
    However, I get an error after the last command:
    WARNING: Unrecognised segment type raid5
    Using default stripesize 64.00 KiB
    Rounding size (256 extents) up to stripe boundary size (258 extents)
    Cannot update volume group raid5_vg with unknown segments in it!
    So it looks like raid5 is not a valid segment type. "lvm segtypes" also doesn't contain a 'raid5' entry:
    root@ubuntu-lvm:~# lvm segtypes
    striped
    zero
    error
    free
    snapshot
    mirror
    So my question is: how could I create a RAID5 logical volume using LVM only? It seems to be possible - I saw a few references (not for Ubuntu, unfortunately) for RedHat and Gentoo systems. I don't want to use mdadm for now, unless it turns out to be mandatory. Some info about my system (Ubuntu Server 12.04.3, i686) is below:
    root@ubuntu-lvm:~# uname -a
    Linux ubuntu-lvm 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 15:31:16 UTC 2013 i686 i686 i386 GNU/Linux
    root@ubuntu-lvm:~# dpkg -l | grep lvm
    ii lvm2 2.02.66-4ubuntu7.3 The Linux Logical Volume Manager
    Thanks.
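    For comparison only - and as an assumption about the setup rather than a tested recipe - this is roughly what the mdadm route the asker is trying to avoid would look like with the same four disks, with LVM layered on top of the md device:

        sudo apt-get install mdadm
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
        sudo pvcreate /dev/md0
        sudo vgcreate raid5_vg /dev/md0
        sudo lvcreate -L 1G -n raid_lv raid5_vg

    The lvcreate --type raid5 syntax in the question needs a newer lvm2 than the 2.02.66 shown in the dpkg output above, so on this system the md layer is the pragmatic fallback.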

    Read the article

  • General directions on developing a server side control system for JS/Canvas Action RPG

    - by Billy Ninja
    Well, yesterday I asked about anti-cheat JS and confirmed what I kind of already knew: it's just not possible. Now I want to measure roughly how hard it is to implement server-side checking that is agnostic to client input and does not mess with the game experience too much. I don't want to waste too many resources on this, since it's initially going to be a single-player game, to which I may want to introduce some kind of ranking or trading system later on. I'd rather deliver more cool game features instead, and I don't want to have to guarantee super fast server response to keep the game lag free. I'd rather go with looser, discrete control of key variables and instances - for example, storing the user's actions in a FIFO buffer on the client and pushing those actions to the server gradually. I'd love to see an elegant, generic solution that I could plug into my client game logic root (not having to scatter treatments everywhere in my client JS), with a few classes on the Node.js server that could handle it, without having to mirror/describe all of my game entities a second time on the server.

    Read the article

  • codesniffer command not being recognized after several installs and upgrades

    - by numerical25
    I've tried to install CodeSniffer using PEAR, but my Mac is not recognizing the phpcs command. Here is my pear config:
    Configuration (channel pear.php.net):
    =====================================
    Auto-discover new Channels                    auto_discover     1
    Default Channel                               default_channel   pear.php.net
    HTTP Proxy Server Address                     http_proxy        <not set>
    PEAR server [DEPRECATED]                      master_server     pear.php.net
    Default Channel Mirror                        preferred_mirror  pear.php.net
    Remote Configuration File                     remote_config     <not set>
    PEAR executables directory                    bin_dir           /usr/local/pear/bin
    PEAR documentation directory                  doc_dir           /usr/local/pear/docs
    PHP extension directory                       ext_dir           /opt/local/lib/php/extensions/no-debug-non-zts-20090626
    PEAR directory                                php_dir           /usr/local/pear/share/pear
    PEAR Installer cache directory                cache_dir         /private/tmp/pear/cache
    PEAR configuration file directory             cfg_dir           /usr/local/pear/cfg
    PEAR data directory                           data_dir          /usr/local/pear/data
    PEAR Installer download directory             download_dir      /tmp/pear/install
    PHP CLI/CGI binary                            php_bin           /opt/local/bin/php
    php.ini location                              php_ini           /opt/local/etc/php5/php.ini-development
    --program-prefix passed to PHP's ./configure  php_prefix        <not set>
    --program-suffix passed to PHP's ./configure  php_suffix        <not set>
    PEAR Installer temp directory                 temp_dir          /tmp/pear/install
    PEAR test directory                           test_dir          /usr/local/pear/tests
    PEAR www files directory                      www_dir           /usr/local/pear/www
    Cache TimeToLive                              cache_ttl         3600
    Preferred Package State                       preferred_state   stable
    Unix file mask                                umask             22
    Debug Log Level                               verbose           1
    PEAR password (for maintainers)               password          <not set>
    Signature Handling Program                    sig_bin           /usr/local/bin/gpg
    Signature Key Directory                       sig_keydir        /opt/local/etc/pearkeys
    Signature Key Id                              sig_keyid         <not set>
    Package Signature Type                        sig_type          gpg
    PEAR username (for maintainers)               username          <not set>
    User Configuration File                       Filename          /Users/anthonygordon/.pearrc
    System Configuration File                     Filename          /opt/local/etc/pear.conf
    I checked php_bin and the PHP executable is there. When I run phpcs I get "command not found". I've tried to upgrade PEAR and to uninstall and reinstall CodeSniffer - everything. When I run pear list I get:
    Package          Version State
    Archive_Tar      1.3.10  stable
    Console_Getopt   1.3.1   stable
    PEAR             1.9.4   stable
    PHP_CodeSniffer  1.4.0   stable
    Structures_Graph 1.0.4   stable
    XML_Util         1.2.1   stable
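    The config above shows PEAR installing scripts into /usr/local/pear/bin, so one likely culprit (an assumption, not something confirmed in the question) is simply that this directory is not on the shell's PATH. A minimal check and fix might look like:

        ls /usr/local/pear/bin/phpcs                  # confirm the script actually landed in PEAR's bin_dir
        echo 'export PATH="$PATH:/usr/local/pear/bin"' >> ~/.bash_profile
        source ~/.bash_profile
        phpcs --version                               # should now resolve

    If the file is not there at all, reinstalling with pear install PHP_CodeSniffer and watching where it reports the script was written would be the next step.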

    Read the article

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company, a2hosting, has a specific configuration (symfony, mysql, git) that I don't want to spend time duplicating when I can just ssh in and develop remotely or through NetBeans' remote editing features. My question is how I can use git to separate my site into three areas: live, staging and dev. Here's my initial thought:
    public_html (live site and git repo)
    testing: a mirror of the site used for visual tests (full git repo)
    dev/ticket#: git branches of public_html used for features and bug fixes (full git repo)
    Version control with git - initial setup:
    cd public_html
    git init
    git add *
    git commit -m 'initial commit of the site'
    cd ..
    git clone public_html testing
    mkdir dev
    Development:
    cd dev
    git clone ../testing ticket#
    All work is done in ./dev/ticket#, then visit www.domain.com/dev/ticket# to visually test. Make granular commits as necessary until dev is done, then:
    git push origin master:ticket#
    If the above fails, merge the latest testing state into the current dev work (git merge origin/master), then try the push again and mark ticket# as ready for integration.
    Integration and deployment process:
    cd ../../testing
    git merge --no-ff ticket# -m "integration test for ticket#" (check for conflicts)
    Run the Hudson tests and visit www.domain.com/testing for a visual test. If all tests pass: if this ticket marks the end of a big dev sprint, make a snapshot with git tag and git push --tags origin; otherwise just git push origin. Then cd ../public_html and git checkout -f (the live site should now have the latest dev from ticket#). If the tests fail: revert the merge (git checkout master~1; git commit -m "reverting ticket#") and update ticket# with the failure details.
    Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD.
    Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again.
    My questions: Does this workflow make sense? If not, any recommendations? Is my approach for reverting correct, or is there a better way to say 'revert to before commit x'?
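    On the reverting question, one point worth illustrating: git checkout master~1 moves the working tree but leaves the branch history in an awkward state, whereas reverting the merge commit itself records the undo as a new commit. A minimal sketch (the commit hash is of course a placeholder):

        cd testing
        git log --oneline -5            # find the merge commit created by the ticket# integration
        git revert -m 1 <merge-sha>     # -m 1 keeps the mainline parent, undoing the merged branch
        git push origin

    This is only one option among several, but it avoids rewriting history that may already have been pushed.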

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - thats probably because in writing it I bacame very time-concious after the ASM/ACFS on Solaris extravaganza I ran last year which was almost too long for mortal man to finish in the 1 hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate new features in Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology, and also Compression which is the compliment of Deduplication. The exercises are just introductions - you are referred to the ZFS Adminstration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox running Solaris 11 with 6 virtual 3 Gb disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0 Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use. 
Try creating a simple pool and observe the error message: root@solaris:~# zpool create mypool c3t2d0 c3t3d0 'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool So destroy that pool and create a mirrored pool instead: root@solaris:~# zpool destroy mypool root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0 root@solaris:~# zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 errors: No known data errors Back to topExercise Z.2: ZFS File Systems Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created: root@solaris:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 86.5K 2.94G 31K /mypool Create your filesystems and mountpoints as follows: root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1 The -o option sets the mount point and automatically creates the necessary directory. root@solaris:~# zfs list mypool/mydata1 NAME USED AVAIL REFER MOUNTPOINT mypool/mydata1 31K 2.94G 31K /data1 Back to top Exercise Z.3: ZFS Compression Task:Try out different forms of compression available in ZFS Lab:Create 2nd filesystem with compression, fill both file systems with the same data, observe results You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax: compression=on | off | lzjb | gzip | gzip-N | zle Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately: root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2 root@solaris:~# zfs set compression=gzip-9 mypool/mydata2 root@solaris:~# zfs get compression mypool/mydata1 NAME PROPERTY VALUE SOURCE mypool/mydata1 compression off default root@solaris:~# zfs get compression mypool/mydata2 NAME PROPERTY VALUE SOURCE mypool/mydata2 compression gzip-9 local Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below: root@solaris:~# cd /usr/lib root@solaris:/usr/lib# find . -print | cpio -pdv /data1 root@solaris:/usr/lib# find . -print | cpio -pdv /data2 The copy into the compressing file system takes longer - as it has to perform the compression but the results show the effect: root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.35G 1.59G 31K /mypool mypool/mydata1 1.01G 1.59G 1.01G /data1 mypool/mydata2 341M 1.59G 341M /data2 Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations which are not covered in this lab but are covered extensively in the ZFS Administrators Guide. Back to top Exercise Z.4: ZFS Deduplication The deduplication property is used to remove redundant data from a ZFS file system. 
With the property enabled duplicate data blocks are removed synchronously. The result is that only unique data is stored and common componenents are shared. Task:See how to implement deduplication and its effects Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib. root@solaris:/usr/lib# zfs destroy mypool/mydata2 root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1 root@solaris:/usr/lib# rm -rf /data1/* root@solaris:/usr/lib# mkdir /data1/2nd-copy root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02M 2.94G 31K /mypool mypool/mydata1 43K 2.94G 43K /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02G 1.99G 31K /mypool mypool/mydata1 1.01G 1.99G 1.01G /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy 2142768 blocks root@solaris:/usr/lib#zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.99G 1.96G 31K /mypool mypool/mydata1 1.98G 1.96G 1.98G /data1 You could go on creating copies for quite a while...but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool and there is a zpool-wide property dedupratio: root@solaris:/usr/lib# zpool get dedupratio mypool NAME PROPERTY VALUE SOURCE mypool dedupratio 4.30x - Deduplication can also be checked using "zpool list": root@solaris:/usr/lib# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE - rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE - Before moving on to the next topic, destroy that dataset and free up some space: root@solaris:~# zfs destroy mypool/mydata1 Back to top Exercise Z.5: ZFS Encryption Task: Encrypt sensitive data. Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS Encryption. In particular it does not cover various aspects of key management. Please see the ZFS Adminastrion Manual and the zfs_encrypt(1M) manual page for more detail on this functionality. Back to top root@solaris:~# zfs create -o encryption=on mypool/data2 Enter passphrase for 'mypool/data2': ******** Enter again: ******** root@solaris:~# Creation of a descendent dataset shows that encryption is inherited from the parent: root@solaris:~# zfs create mypool/data2/data3 root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2 NAME PROPERTY VALUE SOURCE mypool/data2 encryption on local mypool/data2 keysource passphrase,prompt local mypool/data2 keystatus available - mypool/data2 checksum sha256-mac local mypool/data2/data3 encryption on inherited from mypool/data2 mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2 mypool/data2/data3 keystatus available - mypool/data2/data3 checksum sha256-mac inherited from mypool/data2 You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2". Exercise Z.6: Shadow Migration Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system. 
Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while mainaining access to it. Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate root@solaris:~# zpool create oldstuff c3t4d0 root@solaris:~# zfs create oldstuff/forgotten Then populate it with some files: root@solaris:~# cd /var/adm root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten You need the shadow-migration package installed: root@solaris:~# pkg install shadow-migration Packages to install: 1 Create boot environment: No Create backup boot environment: No Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 14/14 0.2/0.2 PHASE ACTIONS Install Phase 39/39 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 You then enable the shadowd service: root@solaris:~# svcadm enable shadowd root@solaris:~# svcs shadowd STATE STIME FMRI online 7:16:09 svc:/system/filesystem/shadowd:default Set the filesystem to be migrated to read-only root@solaris:~# zfs set readonly=on oldstuff/forgotten Create a new zfs file system with the shadow property set to the file system to be migrated: root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered Use the shadowstat(1M) command to see the progress of the migration: root@solaris:~# shadowstat EST BYTES BYTES ELAPSED DATASET XFRD LEFT ERRORS TIME mypool/remembered 92.5M - - 00:00:59 mypool/remembered 99.1M 302M - 00:01:09 mypool/remembered 109M 260M - 00:01:19 mypool/remembered 133M 304M - 00:01:29 mypool/remembered 149M 339M - 00:01:39 mypool/remembered 156M 86.4M - 00:01:49 mypool/remembered 156M 8E 29 (completed) Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration manual. That concludes this lab session.
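    As a footnote to Exercise Z.3 in the lab above, which mentions quotas and reservations without covering them: a minimal sketch of how they would look on these datasets (the values are arbitrary examples, not part of the lab):

        root@solaris:~# zfs set quota=1G mypool/mydata1          # cap how much this dataset may consume from the pool
        root@solaris:~# zfs set reservation=500M mypool/mydata2  # guarantee this dataset at least 500M of the pool
        root@solaris:~# zfs get quota,reservation mypool/mydata1 mypool/mydata2

    See the ZFS Administration Guide for the full semantics, including refquota and refreservation.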

    Read the article

  • Compare Your Internet Cost and Speed to Global Averages [Infographic]

    - by ETC
    Internet pricing and speed vary wildly across the world. The US, for instance, currently ranks 15th in speed but enjoys reasonably priced internet access. How reasonably priced? If you're a US citizen you likely have an average internet access speed of 4.8 mbps and you pay a little over $3 per mbps. If you're in Sweden, however, you likely have an 18 mbps connection and you pay a scant 63 cents per mbps. The real envy of the internet speed Olympics by far is Japan, with a mighty 61 mbps at a mere 27 cents per mbps. Hit up the link below for the full infographic (or use this local mirror if you need to dodge a firewall), then sound off in the comments with how you compare on the international scale. Internet Speeds and Costs Around the World [via Daily Infographic]

    Read the article

  • Cannot FTP without simultaneous SSH connection?

    - by Lucas
    I'm trying to set up an old box as a backup server (running 10.04.4 LTS). I intend to use 3rd party software on my PC to periodically connect to my server via FTP(S) and to mirror certain files. For some reason, all FTP connection attempts fail UNLESS I'm simultaneously connected via SSH. For example, if I use putty to test the connection to port 21, the system hangs and times out. I get: 220 Connected to LeServer USER lucas 331 Please specify the password. PASS [password] <cursor> However, when I'm simultaneously logged in (in another session) everything works: 220 Connected to LeServer USER lucas 331 Please specify the password. PASS [password] 230 Login successful. Basically, this means that my software will never be able to connect on its own, as intended. I know that the correct port is open because it works (sometimes) and nmap gives me: Starting Nmap 5.00 ( http://nmap.org ) at 2012-03-20 16:15 CDT Interesting ports on xx.xxx.xx.x: Not shown: 995 closed ports PORT STATE SERVICE 21/tcp open ftp 22/tcp open ssh 53/tcp open domain 139/tcp open netbios-ssn 445/tcp open microsoft-ds Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds My only hypothesis is that this has something to do with iptables. Maybe it's allowing only established connections? I don't think that's how I set it up, but maybe? Here's my iptables rules for INPUT: lucas@rearden:~$ sudo iptables -L INPUT Chain INPUT (policy DROP) target prot opt source destination fail2ban-ssh tcp -- anywhere anywhere multiport dports ssh ufw-before-logging-input all -- anywhere anywhere ufw-before-input all -- anywhere anywhere ufw-after-input all -- anywhere anywhere ufw-after-logging-input all -- anywhere anywhere ufw-reject-input all -- anywhere anywhere ufw-track-input all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:ftp I'm using vsftpd. Any thoughts/resources on how I could fix this? L
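    Not a diagnosis of this exact symptom, but since the question raises the "established connections only" theory, the usual way to let a stateful firewall follow FTP's extra data connections is the FTP connection-tracking helper plus an explicit passive port range. A sketch, assuming vsftpd and ufw (the port range is arbitrary):

        sudo modprobe nf_conntrack_ftp                 # FTP helper so RELATED data connections are allowed
        echo nf_conntrack_ftp | sudo tee -a /etc/modules
        # in /etc/vsftpd.conf, pin passive mode to a known range:
        #   pasv_enable=YES
        #   pasv_min_port=40000
        #   pasv_max_port=40100
        sudo ufw allow 21/tcp
        sudo ufw allow 40000:40100/tcp
        sudo service vsftpd restart

    If the control connection itself still hangs at PASS only when no SSH session is open, that points elsewhere (e.g. hosts.allow, DenyHosts or fail2ban), so treat this purely as a checklist item.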

    Read the article

  • Network really slow with TL-WN951N wireless card

    - by Sam
    I literally just installed ubuntu and it seems to be working great except the network is deadly slow. I'm running a TL-WN951N wireless card which can download at about 600-700 KB/s in windows but in Ubuntu the max speed it seems to get is around 5KB/s. I guess I should note that my WAP is only wireless-G but like I said, I can get much better speeds in Windows. I'm testing the speeds by downloading files from here: http://mirror.internode.on.net/pub/test/ Anyone have any idea? I saw some people recommend downloading drivers and compiling them myself but I'm really really new to all of this so would appreciate someone babying me through it so I don't brick my computer! Here are the results of lspci -v: 04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02) Subsystem: Giga-byte Technology GA-EP45-DS5 Motherboard Flags: bus master, fast devsel, latency 0, IRQ 44 I/O ports at c000 [size=256] Memory at e9110000 (64-bit, prefetchable) [size=4K] Memory at e9100000 (64-bit, prefetchable) [size=64K] [virtual] Expansion ROM at e9120000 [disabled] [size=64K] Capabilities: <access denied> Kernel driver in use: r8169 Kernel modules: r8169 05:02.0 Network controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01) Subsystem: Atheros Communications Inc. Device 3071 Flags: bus master, 66MHz, medium devsel, latency 168, IRQ 18 Memory at e9200000 (32-bit, non-prefetchable) [size=64K] Capabilities: <access denied> Kernel driver in use: ath9k Kernel modules: ath9k
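    One widely reported workaround for poor ath9k throughput (an assumption that it applies here, not a confirmed fix for this particular card) is to disable the driver's hardware crypto and see whether speeds recover:

        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
        sudo modprobe -r ath9k && sudo modprobe ath9k   # or simply reboot

    If that changes nothing, the next things usually checked are wireless power saving (iwconfig wlan0 power off) and whether 802.11n negotiation against the G-only access point is causing retries.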

    Read the article

  • Hybrid USB Install Method - netboot and iso

    - by Samus Arin
    I was following the steps here ("Preparing Files for USB Memory Stick Booting") https://help.ubuntu.com/10.04/installation-guide/i386/boot-usb-files.html to create an installation USB drive for 12.1. The very first paragraph of the article states "The second is to also copy a CD image onto the USB stick and use that as a source for packages, possibly in combination with a mirror." However, the only instruction mentioned regarding an ISO image is to simply copy one somewhere onto the drive (after it has been made bootable and syslinux, vmlinuz and initrd.gz have been installed/copied): "you should now copy an Ubuntu ISO image onto the stick." I thought it strange that there were no configuration steps for "pointing" the kernel at the ISO (like a line in syslinux.cfg or a boot: option or something), but I went ahead with the install anyway. I don't think the ISO was used at all; it appeared that all the OS files were downloaded during the install process. So I was wondering if anyone knows how to use this local ISO image with this particular installation technique (I know the image can be installed with dd, but that's a different technique), because I need to reinstall (I installed Unity, but it's way too much for my little Atom-based netbook). Thank you.
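    For what it's worth, the hd-media kernel/initrd pair from that guide contains an iso-scan step that searches the stick for the first ISO it finds, so no explicit pointer is normally required; if you want to pin it anyway, a syslinux.cfg stanza along these lines is one option (the ISO filename is a placeholder, and this assumes the hd-media initrd rather than the netboot one):

        default install
        label install
          kernel vmlinuz
          append initrd=initrd.gz iso-scan/filename=/ubuntu-server.iso

    If the installer still downloads everything, that usually means the netboot initrd (which has no iso-scan support) was copied onto the stick instead of the hd-media one.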

    Read the article

  • Upgrade 10.04LTS to 10.10 problem

    - by Gopal
    Checking for a new ubuntu release
    Done Upgrade tool signature
    Done Upgrade tools
    Done downloading
    extracting 'maverick.tar.gz'
    authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg'
    tar: Removing leading `/' from member names
    Reading cache
    Checking package manager
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Building data structures... Done
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Building data structures... Done
    Updating repository information
    WARNING: Failed to read mirror file
    A fatal error occurred
    Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade has aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade.
    Traceback (most recent call last):
      File "/tmp/tmpe_xVWd/maverick", line 7, in <module>
        sys.exit(main())
      File "/tmp/tmpe_xVWd/DistUpgradeMain.py", line 158, in main
        if app.run():
      File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 1616, in run
        return self.fullUpgrade()
      File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 1534, in fullUpgrade
        if not self.updateSourcesList():
      File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 664, in updateSourcesList
        if not self.rewriteSourcesList(mirror_check=True):
      File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 486, in rewriteSourcesList
        distro.get_sources(self.sources)
      File "/tmp/tmpe_xVWd/distro.py", line 103, in get_sources
        source.template.official == True and
    AttributeError: 'Template' object has no attribute 'official'
    This is what I got when I tried to upgrade the desktop edition with sudo do-release-upgrade. One more piece of info: I have KDE installed.
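    The AttributeError comes out of the code that rewrites sources.list during the mirror check, so one commonly suggested thing to try (a guess based on the traceback, not a verified fix) is to temporarily disable any third-party or PPA entries before re-running the upgrade:

        sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup
        # comment out every non-Ubuntu line in /etc/apt/sources.list and in /etc/apt/sources.list.d/*.list,
        # then refresh and retry:
        sudo apt-get update
        sudo do-release-upgrade

    Attaching /var/log/dist-upgrade/main.log and apt.log to a bug report, as the tool asks, is still worthwhile if this does not help.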

    Read the article

  • Unity calendar lens not showing events

    - by David_G
    I'm trying to get proper/useful calendar integration into Ubuntu 12.04. I have a Google Calendar (and account) and I want to be able to use it without opening the browser. I want to get the Unity calendar lens working, so that it shows upcoming events and gives me a quick way to add new ones. However, after installing it, it does not find any events, nor does it allow me to add a new event. Note that I've installed Lightning 1.4, Evolution mirror 0.2.3, Evolution, and the unity-calendar lens. I've also installed Calendar-indicator. I suspect that somehow the lens is not getting the calendar information from Thunderbird via Evolution. A bit of searching around led me to try this command: /usr/lib/calendar-lens/calendar-lens-daemon.py. With this result:
    /usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
      import gobject._gobject
    Traceback (most recent call last):
      File "/usr/lib/calendar-lens/calendar-lens-daemon.py", line 324, in <module>
        daemon = Daemon()
      File "/usr/lib/calendar-lens/calendar-lens-daemon.py", line 80, in __init__
        for calendar in evolution.ecal.list_calendars():
    AttributeError: 'NoneType' object has no attribute 'list_calendars'
    Any ideas?

    Read the article

  • Cloud computing - database loading question

    - by workwise
    Following is the situation. I want to know whether what I want is possible in cloud computing, and whether it is the best way for me:
    1) My main site has a database with tables with millions of rows, and entries are added almost every second.
    2) I will set up a MySQL mirror, so there will be a backup database always in sync with the main one.
    3) There are a few tens of thousands of images - and growing - so the total size of the images is a few tens of gigabytes. I will keep the image data in sync on the backup server as well.
    4) There can be short periods where traffic goes to 100X the average.
    5) I will be using memcache heavily - most database data and even frequently used disk files/images will be in RAM.
    I want the main site to run on a dedicated server. The backup server is, say, an Amazon EC2 instance. Note that since it is a live backup, I need to run a small instance continuously. When I anticipate high traffic, I want to be able to run a large instance in the cloud and transfer the traffic there. The main point is that I do not want to spend time "loading" the database onto the large instance, as that can typically take minutes or even hours (from experience). So is it possible to just scale the memory/CPU on demand, without having to load the database or sync up the filesystem? I want to set up my backup scripts etc. just ONCE. Thanks, JP
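    For what it's worth, with an EBS-backed instance the data volumes survive a stop/start, so "scaling up without reloading" usually amounts to stopping the instance, changing its type and starting it again. A rough sketch with the current AWS CLI (the instance ID and type are placeholders, and this assumes EBS-backed storage rather than instance-store):

        aws ec2 stop-instances --instance-ids i-0123456789abcdef0
        aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"m5.xlarge\"}"
        aws ec2 start-instances --instance-ids i-0123456789abcdef0

    The database files stay on the attached volumes, so only the warm caches (memcache, the InnoDB buffer pool) have to be rebuilt after the restart.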

    Read the article

  • New Content: Customer Engagement & Oracle OpenWorld Preview

    - by user462779
    Two new bits of content available on Profit Online: In A Cross-Channel Approach to Consumer Engagement, Cassandra Moren, senior director of consumer goods industry marketing at Oracle, shares her thoughts on how consumer goods manufacturers are reaping benefits from developing a direct relationship with customers: "Consumer goods manufacturers are starting to adapt in ways that mirror retailers. They are making investments in innovative technologies and processes to build the infrastructure to support the market demand. With advances in aspects like social networking, digital marketing and mobility fundamentally changing the way consumers behave, the door has opened to building a more direct relationship with their customers." We've also published a Special Report on Oracle OpenWorld that gives a great overview of recommendations for must-see sessions and insider advice from experienced attendees. For example, this top from John Matelski, newly elected president of the Independent Oracle Users Group: “Based on developments of the last 12 months, I think big data is definitely going to be hot. The challenges and opportunities of data governance will be another biggie. And there will obviously be a big emphasis on Oracle Exadata and the other Oracle Engineered Systems, with more than 100 sessions.” More updates to come as we continue to add content to Profit Online on a regular basis. Thanks for reading!

    Read the article

  • Reboot failure after upgrade from 8.04 LTS to 10.04 LTS

    - by Alan Fietz
    I bought our computer from Freegeeks with Ubuntu 8.04 installed. I upgraded from Ubuntu 8.04 to 10.04 on Thursday November 10. I have an ASUS P4P800SE with a dual Intel P4 @ 3GHz. Installation messages were:
    - Error loading Nautilus config info
    - Replaced customized /etc/login.defs
    - Replaced customized /etc/dhcp3/dhclient.conf
    - 189 packages removed
    - WARNING: Failed to read mirror file
    When I rebooted, the usual ASUS screen appeared, then "Loading GRUB", then "starting Up..." twice, then a blank screen (the monitor went dormant). I rebooted, started GRUB and selected version 10.04.3 LTS, kernel 2.6.32-35-generic. I got the same results. I rebooted, started GRUB and selected kernel 2.6.24-29-generic. Here's what was displayed:
    udevd[875]: error getting socket: Invalid argument
    libudev: udev_monitor_new_from_netlink: error getting socket: Invalid argument
    Segmentation fault
    Gave up waiting for root device. Common problems:
    - Boot args (cat /proc/cmdline)
      - Check rootdelay=
      - Check root=
    - Missing modules (cat /proc/modules; ...)
    ALERT! /dev/disk/by_vvid/c59c6361 etc... does not exist. Dropping to a shell.
    Then BusyBox v1.13.3 started with the following prompt:
    (initramfs) _
    But my typing did not appear on the screen. It appears the hard drive cannot be found. Any suggestion on how to remedy this? Thank you.
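    When an upgraded system gives up waiting for the root device, a common repair path (a generic sketch, not specific to this machine - the partition name is an assumption) is to chroot in from a live CD/USB and rebuild the initramfs and GRUB configuration:

        sudo mount /dev/sda1 /mnt                # the installed root partition; adjust to your layout
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt
        update-initramfs -u -k all               # regenerate the initrd so the right disk drivers are included
        update-grub
        exit
        sudo reboot

    If the BIOS SATA/IDE mode was changed around the time of the upgrade, checking that setting first is also worthwhile.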

    Read the article

  • Unavailable packages repository

    - by bitmask
    I'm running Ubuntu 11.10 (Oneiric) on this machine, and suddenly apt is unable to update properly. If I ask it to update its package information by running apt-get update (or alternatively telling the update manager to "check"), it succeeds for about 120 packages (more precisely, I get about 120 Ign/Hit notes) and then says it cannot find the universe Sources and restricted amd64 indexes:
    Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Translation-en
    Hit http://de.archive.ubuntu.com oneiric-backports/restricted Translation-en
    Hit http://de.archive.ubuntu.com oneiric-backports/universe Translation-en
    Err http://de.archive.ubuntu.com oneiric/universe Sources 404 Not Found [IP: 141.30.13.20 80]
    Err http://de.archive.ubuntu.com oneiric/restricted amd64 Packages 404 Not Found [IP: 141.30.13.20 80]
    W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/universe/source/Sources 404 Not Found [IP: 141.30.13.20 80]
    W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/restricted/binary-amd64/Packages 404 Not Found [IP: 141.30.13.20 80]
    E: Some index files failed to download. They have been ignored, or old ones used instead.
    I manually checked the de server and cannot find anything wrong with the files it's complaining about; it also looks pretty much like, say, the us mirror. But oddly enough, the IP it lists seems to point to a Debian package server, which obviously does not contain Ubuntu packages. So, is this a local problem that I can fix somehow (and if so, how?), or is there actually some server down right now?
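    One low-risk way to rule out a bad mirror (just a diagnostic sketch, not a statement about what is actually wrong with de.archive.ubuntu.com) is to point sources.list at the main archive temporarily and retry:

        sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup
        sudo sed -i 's|http://de.archive.ubuntu.com|http://archive.ubuntu.com|g' /etc/apt/sources.list
        sudo apt-get update

    If that works, the problem is the mirror (or a DNS/proxy answer sending you to the wrong host, which the Debian-looking IP hints at); if it fails the same way, the problem is local.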

    Read the article

  • apt-get update very slow, stuck at "Waiting for headers"

    - by Liam
    I have looked at similar questions:
    Stuck at 0% [waiting for headers] (apt)
    apt-get update stuck on "Waiting for Headers"
    However, neither one of them answers my problem. I am running 12.04 AMD64 and have recently started getting an issue where, when I update my repos at home through a terminal using sudo apt-get update, it takes forever (literally, after 2 hours it was at 28%); when I run it from a different location, however, it takes less than 5 minutes to complete. I have attempted changing which mirror I use, but that does not solve the issue. I have also cut down what is in my sources list, but this also makes no difference. There are no faults on my ADSL line, as I have already contacted my ISP to check this. It also makes no difference whether I use a WiFi or network cable connection. What could be my issue? A speed test (www.speedtest.net) comes out at about 0.9 Mbps down and 0.42 Mbps up (which is a shade under the advertised line speed). I reside in South Africa and use the UCT LEG server, but I have also tried the other mirrors available in SA... none of them make a difference.
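    Two low-effort things often suggested for "Waiting for headers" hangs that only occur on one connection (both are guesses worth ruling out, not confirmed causes): disable apt's HTTP pipelining, and check the ADSL router's MTU. The apt part looks like this:

        echo 'Acquire::http::Pipeline-Depth "0";' | sudo tee /etc/apt/apt.conf.d/99no-pipelining
        sudo apt-get clean
        sudo apt-get update

    Some home routers and transparent proxies mishandle pipelined HTTP requests, which matches the "fine elsewhere, broken at home" pattern.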

    Read the article

  • What's the proper way to calculate probability for a card game?

    - by Milan Babuškov
    I'm creating AI for a card game, and I run into a problem calculating the probability of passing/failing the hand when the AI needs to start the hand. Cards are A, K, Q, J, 10, 9, 8, 7 (with A being the strongest) and the AI needs to play so as to not take the hand. Assuming there are 4 cards of the suit left in the game and one is in the AI's hand, I need to calculate the probability that one of the other players will take the hand. Here's an example:
    AI player has: J
    Other 2 players have: A, K, 7
    If a single opponent has AK7 then the AI would lose. However, if one of the players has A or K without 7, the AI would survive. Now, looking at possible distributions, I have:
    P1    P2    AI
    ---   ---   ---
    AK7         loses
    AK    7     survives
    A7    K     survives
    K7    A     survives
    A     7K    survives
    K     7A    survives
    7     KA    survives
          AK7   loses
    Looking at this, it seems that there is a 75% chance of survival. However, I skipped the permutations that mirror the ones from above. It should be the same, but somehow when I write them all down, it seems that the chance is only 50%:
    P1    P2    AI
    ---   ---   ---
    AK7         loses
    A7K         loses
    K7A         loses
    KA7         loses
    7AK         loses
    7KA         loses
    AK    7     survives
    A7    K     survives
    K7    A     survives
    KA    7     survives
    7A    K     survives
    7K    A     survives
    A     K7    survives
    A     7K    survives
    K     7A    survives
    K     A7    survives
    7     AK    survives
    7     KA    survives
          AK7   loses
          A7K   loses
          K7A   loses
          KA7   loses
          7AK   loses
          7KA   loses
    12 loses, 12 survivals = 50% chance. Obviously, it should be the same (shouldn't it?) and I'm missing something in one of the ways to calculate. Which one is correct?
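    As a sanity check on the two enumerations (an editorial note, assuming each of the three outstanding cards is independently equally likely to sit in either opponent's hand, and ignoring constraints on hand sizes):

        P(a single opponent holds all of A, K, 7) = 2 * (1/2)^3 = 1/4
        P(AI survives) = 1 - 1/4 = 3/4

    Under that model the eight distributions in the first table are the equally likely outcomes; the 24-row enumeration weights the splits by orderings within a hand, which is why it lands on a different number.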

    Read the article

  • Is this a dual monitor reset bug?

    - by Tentresh
    My two displays are: Intel GMA x4500 Laptop (1280x800 native resolution of the built-in display) External display (1920x1080) A few minutes after I login to my dual monitor setup, it gets reset to mirror screens. If I restore the settings via the displays application, everything is fine. On each reset, the following messages are written into /var/log/Xorg.0.log: [ 60.852] (II) PM Event received: Capability Changed [ 60.852] I830PMEvent: Capability change [ 132.920] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 132.920] (II) intel(0): Printing DDC gathered Modelines: [ 132.920] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 134.228] (II) intel(0): Allocated new frame buffer 1280x800 stride 5120, tiled Whereas right on startup or manual resolution reset, /var/log/Xorg.0.log reports the expected frame buffer allocation: [ 1562.382] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 1562.382] (II) intel(0): Printing DDC gathered Modelines: [ 1562.382] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 1576.740] (II) intel(0): Allocated new frame buffer 3200x1080 stride 12800, tiled Is Ubuntu 12.04 not compatible with my video card? Can this be solved within Ubuntu? I like its interface, but manually fiddling with resolution on every login is not bearable.

    Read the article

  • Multiple monitors showing same screen but different resolutions

    - by Luis Alvarado
    Is it possible to have 2 or more monitors showing the same screen, for example the same desktop, but with different resolutions? Like the clone option in Nvidia or the mirror option in Ubuntu's Display settings, but instead of showing the same output at the same resolution, both monitors show the same output using the resolution that is native to each of them. In my case, if I have a netbook that has a max resolution of 1360x768 and a TV that has 1280x1024, they would both show the same desktop but each with its own compatible resolution. This would help with trying to find a resolution that works on both monitors, and in cases like a mini netbook and a huge TV it would solve issues like having a max of 800x600 on one monitor and a min of 1024x768 on the other. In the case I tested I was using an HDMI cable, but this question also covers VGA and any other connection. I have 3 test scenarios for this:
    Scenario 1 - HP DV6000 laptop (Intel integrated video) with 1360x768, connected to a Samsung LED 42" TV that supports 1280x900.
    Scenario 2 - EEE laptop with 1024x600 (Intel integrated video), connected to a Sony LCD TV that supports 1280x900.
    Scenario 3 - Intel desktop with an Nvidia 440 GT: HDMI connected to a Soneview 32" TV that supports 1920x1080, and VGA connected to an Epson video beam that supports 1280x1024 max.
    In these 3 scenarios I need to be able to show the same desktop and the same views, but at a different resolution on each output device.
    UPDATE: Tested with Xubuntu, and the way it handles multiple monitors is precisely what I am asking for: the ability to handle the resolution of different monitors showing the same thing.
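    With a recent xrandr (1.4 or newer) one way to approximate this is to keep each output at its native mode and scale the second output so it shows the same desktop area; a sketch, where the output names LVDS1/HDMI1 are assumptions to be replaced with whatever xrandr -q reports:

        xrandr --output LVDS1 --mode 1360x768
        xrandr --output HDMI1 --mode 1280x1024 --same-as LVDS1 --scale-from 1360x768

    The TV then displays the laptop's 1360x768 desktop scaled into its native 1280x1024 mode, so both panels run at their native timings even though they mirror the same content.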

    Read the article

  • How to install custom c library?

    - by arijit
    I just wanted to add a C library to Ubuntu which was created by Harvard University for the CS50 course. They provided instructions for how to install the library, which are listed below.
    Debian, Ubuntu
    First become root, as with: sudo su -
    Then install the CS50 Library as follows:
    apt-get install gcc
    wget http://mirror.cs50.net/library/c/cs50-library-c-3.1.zip
    unzip cs50-library-c-3.1.zip
    rm -f cs50-library-c-3.1.zip
    cd cs50-library-c-3.1
    gcc -c -ggdb -std=c99 cs50.c -o cs50.o
    ar rcs libcs50.a cs50.o
    chmod 0644 cs50.h libcs50.a
    mkdir -p /usr/local/include
    chmod 0755 /usr/local/include
    mv -f cs50.h /usr/local/include
    mkdir -p /usr/local/lib
    chmod 0755 /usr/local/lib
    mv -f libcs50.a /usr/local/lib
    cd ..
    rm -rf cs50-library-c-3.1
    I did exactly as directed. But the compiler reported "undefined reference" to a function - the function was GetString. So I searched for a solution and found one. It said to use the -l switch. Now when I compile I use something like: gcc -o hello hello.c -lcs50 (I don't remember the exact command.) However, I cannot use the make command, which is easier to use. I understand that there is some problem with linking the library. What is a good solution to this problem?
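    Make's built-in rule for turning hello.c into hello already has a hook for extra libraries, so one way to keep using plain make (a sketch of the idea, not the course's official setup) is to pass the library via LDLIBS:

        make hello LDLIBS=-lcs50
        # or, to make it stick for the shell session:
        export LDLIBS=-lcs50
        make hello

    The implicit rule links with the contents of LDLIBS placed after the source file on the compiler command line, which is exactly where the linker needs -lcs50 to appear.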

    Read the article

  • Generated 3d tree meshes

    - by Jari Komppa
    I did not find a question on these lines yet, correct me if I'm wrong. Trees (and fauna in general) are common in games. Due to their nature, they are a good candidate for procedural generation. There's SpeedTree, of course, if you can afford it; as far as I can tell, it doesn't provide the possibility of generating your tree meshes at runtime. Then there's SnappyTree, an online webgl based tree generator based on the proctree.js which is some ~500 lines of javascript. One could use either of above (or some other tree generator I haven't stumbled upon) to create a few dozen tree meshes beforehand - or model them from scratch in a 3d modeller - and then randomly mirror/scale them for a few more variants.. But I'd rather have a free, linkable tree mesh generator. Possible solutions: Port proctree.js to c++ and deal with the open source license (doesn't seem to be gpl, so could be doable; the author may also be willing to co-operate to make the license even more free). Roll my own based on L-systems. Don't bother, just use offline generated trees. Use some other method I haven't found yet.

    Read the article

  • Creating a backup - Rsync - Connection refused (111)

    - by pablofiumara
    I am trying to create a backup of my website for free. I just want to have a backup of my website, including not only all files and the configuration but also the databases - I mean, a full backup. If it can be done automatically, it would be better. I feel there are better ways than using the cPanel to achieve that (actually, I believe some web hosts do not have any cPanel at all). I read the following on how to do it: "Automatically mirror the entire contents and configuration of your main server to a secondary backup server on a completely separate network in a different data centre. Use RSync, FXP, cPanel voodoo, or whatever method you wish to automate syncing." That is why I installed the rsync daemon, which is an alternative to SSH for remote backups. I configured it but the test went wrong. The terminal is showing me this:
    pablofiumara@pablofiumara-Lenovo-G470:~$ sudo rsync [email protected]::share
    [sudo] password for pablofiumara:
    rsync: failed to connect to pablofiumara.com (50.87.147.75): Connection refused (111)
    rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]
    pablofiumara@pablofiumara-Lenovo-G470:~$ sudo rsync [email protected]::share
    failed to connect to 50.87.147.7 (50.87.147.7): Connection refused (111)
    rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]
    What should I do? Is there a better or easier way to achieve what I wish (I mentioned this in the first paragraph)?
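    The "Connection refused" simply means nothing is listening on the rsync daemon port (873) on the host, which is typical of shared hosting. Since the host already offers SSH, a sketch of the SSH-transport alternative (the username, paths and database names are placeholders):

        # files: rsync over SSH, no daemon needed on the server
        rsync -avz -e ssh user@pablofiumara.com:public_html/ /home/pablofiumara/backup/public_html/
        # database: dump it remotely and pull the dump
        ssh user@pablofiumara.com 'mysqldump -u dbuser -p"PASSWORD" dbname | gzip' > /home/pablofiumara/backup/dbname.sql.gz

    Both lines can go into a cron job for the automatic part; only SSH access is required, which most shared hosts that offer git already provide.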

    Read the article

  • 9 Gigapixel Photo Captures 84 Million Stars

    - by Jason Fitzpatrick
    The European Southern Observatory has released an absolutely enormous picture of the center of the Milky Way captured by their VISTA telescope - the image is 9 gigapixels and captures over 84 million stars. From the press release: The large mirror, wide field of view and very sensitive infrared detectors of ESO's 4.1-metre Visible and Infrared Survey Telescope for Astronomy (VISTA) make it by far the best tool for this job. The team of astronomers is using data from the VISTA Variables in the Via Lactea programme (VVV), one of six public surveys carried out with VISTA. The data have been used to create a monumental 108 200 by 81 500 pixel colour image containing nearly nine billion pixels. This is one of the biggest astronomical images ever produced. The team has now used these data to compile the largest catalogue of the central concentration of stars in the Milky Way ever created. Want to check out all 9 billion glorious pixels in their uncompressed state? Be prepared to wait a bit: the uncompressed image is available for download but it weighs in at a massive 24.6GB. 84 Million Stars and Counting [via Wired]

    Read the article
