Search Results

Search found 22668 results on 907 pages for 'command prompt'.


  • Start Chrome in incognito mode in Ubuntu netbook remix

    - by Rajkumar S
    I am using Ubuntu 10.04 (Lucid) Netbook Remix on my Asus Aspire One. My default browser is Firefox, and I use Chrome for Google search in incognito mode. I want to start Chrome in incognito mode by default. I know that I need to use the following command to start Chrome in incognito mode by default: google-chrome --incognito %s What I do not know is how/where to edit the command line in the launcher. Any tips on how to do this are much appreciated.
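
    A minimal sketch of the usual approach: copy the system launcher into your home directory and change its Exec line (the path and the original Exec value below are assumptions; check what your install actually ships):

        cp /usr/share/applications/google-chrome.desktop ~/.local/share/applications/
        # edit the copy so its Exec line passes --incognito, e.g.
        #   Exec=/usr/bin/google-chrome --incognito %U
        sed -i 's|^Exec=.*|Exec=/usr/bin/google-chrome --incognito %U|' \
            ~/.local/share/applications/google-chrome.desktop

    The per-user copy overrides the system one, so the netbook launcher should pick up the new command after the session is reloaded.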

    Read the article

  • Need help with exploring a USB drive

    - by Bob Getsla
    When I plug in a USB drive, I see it on the left-hand edge of the Unity desktop (11.10, 64-bit), but when I try to explore it, VLC starts and tries to play whatever it can find on the USB drive. This behavior began when I updated from 11.04 to 11.10. I literally cannot look into the contents of any of the USB drives I have, because I cannot stop VLC, nor can I do anything when I click Open other than watch VLC start up. This is very frustrating because it makes my USB sticks essentially useless. HELP! I'm sure that there is something a wizard could do about this, but I am not a wizard, and I am at my wits' end. Trying to get to the System Settings menu works, and I can see the setup for "Removable" devices, and they are all set to "Ask", but that is clearly not what is happening. So it looks like I must reach for the command line, but where do I go to find the settings for what the desktop does when I plug in a USB drive and wish to explore the file structure on it and possibly copy a file to or from the USB drive? Right now, VLC media player is always getting in my way. :-(
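
    For reference, the desktop's automount/autorun behavior lives in GSettings on 11.10; a small sketch for inspecting and disabling it from the command line (the schema and key names below are from memory of GNOME 3.2, so treat them as assumptions and check with list-keys first):

        gsettings list-recursively org.gnome.desktop.media-handling
        # stop any application from auto-starting when media is inserted
        gsettings set org.gnome.desktop.media-handling autorun-never true

    With autorun disabled, clicking the drive in the launcher should simply open it in the file manager.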

    Read the article

  • Unable to mount external hard drive - Damaged file system and MFT

    - by Khalifa Abbas Lame
    I get the following error when I try to mount my external hard drive. UNABLE TO MOUNT Error mounting /dev/sdc1 at /media/khalibloo/Khalibloo2: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdc1" "/media/khalibloo/Khalibloo2"' exited with non-zero exit status 13: ntfs_attr_pread_i: ntfs_pread failed: Input/output error Failed to read of MFT, mft=6 count=1 br=-1: Input/output error Failed to open inode FILE_Bitmap: Input/output error Failed to mount '/dev/sdc1': Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details. It doesn't mount on Windows either: "I/O Device error". It's an NTFS hard drive with a single partition. Of course, I tried chkdsk /f. It reported several file segments as unreadable, but didn't say whether it fixed them or not (apparently not). I also tried with the /b flag. ntfsfix reported the volume as corrupt. TestDisk was able to fix a small error with the partition table by adding the "80" flag for the active (only) partition. TestDisk also confirmed that the boot sector was fine and that it matched the backup. However, when attempting to repair the MFT, it couldn't read the MFT. It also couldn't list the files on the hard drive. It says the file system may be damaged. Active@ also shows that the MFT is missing or corrupt. So how do I fix the file system? Or the MFT?
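
    Given the repeated Input/output errors, the drive itself may be failing, so one cautious sketch is to image the partition with GNU ddrescue first and run the repair tools against the copy rather than the original (the package and output file names below are examples):

        sudo apt-get install gddrescue
        # first pass: grab everything readable, skipping bad areas
        sudo ddrescue -d -n /dev/sdc1 khalibloo2.img khalibloo2.map
        # second pass: retry the bad areas a few times
        sudo ddrescue -d -r3 /dev/sdc1 khalibloo2.img khalibloo2.map
        # then point ntfsfix, TestDisk, or a Windows chkdsk (via a loop device or VM) at the image

    Working from an image means a repair attempt that makes things worse can simply be re-copied and tried again.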

    Read the article

  • Restoring GRUB2 on Software RAID 0 after Windows 7 wiped it using LiveCD

    - by unknownthreat
    I have installed Ubuntu 10.10 on my system. However, I needed to install Windows 7 back, and I expected that it would alter GRUB, and it did. Right now, my partition on my Software RAID 0 looks like this: nvidia_acajefec1 is Ubuntu 10.10 and nvidia_acajefec3 is Windows 7. I've been following some guides and I always get stuck at GRUB not being able to detect the usual RAID content. I've tried running: sudo grub > root (hd0,0) GRUB complains it couldn't find my hard disk. So I tried: find (hd0,0) And it complains that it couldn't find anything. So I tried: find /boot/grub/stage1 It said "file not found". Here's the text from the console: ubuntu@ubuntu:~$ grub Probing devices to guess BIOS drives. This may take a long time. [ Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename. ] grub> root (hd0,0) root (hd0,0) Error 21: Selected disk does not exist grub> find /boot/grub/stage1 find /boot/grub/stage1 Error 15: File not found Fortunately, one person suggested that what I've been trying to do is for GRUB Legacy, not GRUB2. So I went to the suggested website (http://grub.enbug.org/Grub2LiveCdInstallGuide), tried to look around, and tried: ubuntu@ubuntu:~$ sudo fdisk -l Unable to seek on /dev/sda This is just step 2 of the instructions at http://grub.enbug.org/Grub2LiveCdInstallGuide and I cannot proceed because it cannot seek /dev/sda. However, ubuntu@ubuntu:~$ sudo dmraid -r /dev/sdb: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0 /dev/sda: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0 So what now? Do you have an idea for how to make fdisk see my RAID array on the live CD (Ubuntu 10.10)? Honestly, I am lost, very lost, in trying to restore GRUB2 on this software RAID 0 system right now.
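
    A rough sketch of the usual live-CD route for a fakeRAID set like this one: activate the dmraid mapping, chroot into the Ubuntu partition, and reinstall GRUB2 against the mapped array (the /dev/mapper names are taken from the dmraid output above, but the partition numbering is an assumption):

        sudo dmraid -ay                          # activate the nvidia fakeRAID mapping
        ls /dev/mapper/                          # should list nvidia_acajefec and its partitions
        sudo mount /dev/mapper/nvidia_acajefec1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt grub-install /dev/mapper/nvidia_acajefec
        sudo chroot /mnt update-grub

    With a striped fakeRAID set, the partition table GRUB needs lives on the assembled /dev/mapper device rather than on the individual member disks, which is why pointing fdisk at /dev/sda gets you nowhere.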

    Read the article

  • Is there any way to kill a zombie process without reboot?

    - by Pedram
    Is there any way to kill a zombie process without reboot?Here is how it happens: I wanted to download a 12GB torrent.After adding the .torrent file, transmission turned into a zombie process.I tried ktorrent too.Same behavior.Finally I could download the file using µTorrent but after closing the program, it turns into a zombie as well. I tried using kill skill and pkill with different options and -9 signal but no success. In some answers in web I found out killing the parent can kill the zombie, but killing wine didn't help either. Is there another way? Edit: ps -o pid,ppid,stat,comm PID PPID STAT COMMAND 7121 2692 Ss bash 7317 7121 R+ ps pstree output: init---GoogleTalkPlugi---4*[{GoogleTalkPlug}] +-NetworkManager---dhclient ¦ +-{NetworkManager} +-acpid +-apache2---5*[apache2] +-atd +-avahi-daemon---avahi-daemon +-bonobo-activati---{bonobo-activat} +-clock-applet +-console-kit-dae---63*[{console-kit-da}] +-cron +-cupsd +-2*[dbus-daemon] +-2*[dbus-launch] +-desktopcouch-se---desktopcouch-se +-explorer.exe +-firefox---run-mozilla.sh---firefox-bin---plugin-containe---8*[{plugin-contain}] ¦ +-14*[{firefox-bin}] +-gconfd-2 +-gdm-binary---gdm-simple-slav---Xorg ¦ ¦ +-gdm-session-wor---gnome-session---bluetooth-apple ¦ ¦ ¦ ¦ +-fusion-icon---compiz---sh---gtk-window-deco ¦ ¦ ¦ ¦ +-gdu-notificatio ¦ ¦ ¦ ¦ +-gnome-panel ¦ ¦ ¦ ¦ +-gnome-power-man ¦ ¦ ¦ ¦ +-gpg-agent ¦ ¦ ¦ ¦ +-nautilus---bash ¦ ¦ ¦ ¦ ¦ +-{nautilus} ¦ ¦ ¦ ¦ +-nm-applet ¦ ¦ ¦ ¦ +-polkit-gnome-au ¦ ¦ ¦ ¦ +-2*[python] ¦ ¦ ¦ ¦ +-qstardict---{qstardict} ¦ ¦ ¦ ¦ +-ssh-agent ¦ ¦ ¦ ¦ +-tracker-applet ¦ ¦ ¦ ¦ +-trackerd ¦ ¦ ¦ ¦ +-wakoopa---wakoopa ¦ ¦ ¦ ¦ ¦ +-3*[{wakoopa}] ¦ ¦ ¦ ¦ +-{gnome-session} ¦ ¦ ¦ +-{gdm-session-wo} ¦ ¦ +-{gdm-simple-sla} ¦ +-{gdm-binary} +-6*[getty] +-gnome-keyring-d---2*[{gnome-keyring-}] +-gnome-screensav +-gnome-settings- +-gnome-system-mo---{gnome-system-m} +-gnome-terminal---bash---ssh ¦ +-bash---pstree ¦ +-gnome-pty-helpe ¦ +-{gnome-terminal} +-gvfs-afc-volume---{gvfs-afc-volum} +-gvfs-fuse-daemo---3*[{gvfs-fuse-daem}] +-gvfs-gdu-volume +-gvfsd +-gvfsd-burn +-gvfsd-http +-gvfsd-metadata +-gvfsd-trash +-hald---hald-runner---hald-addon-acpi ¦ ¦ +-hald-addon-cpuf ¦ ¦ +-hald-addon-inpu ¦ ¦ +-hald-addon-stor ¦ +-{hald} +-hotot---xdg-open ¦ +-3*[{hotot}] +-indicator-apple +-indicator-me-se +-indicator-sessi +-irqbalance +-kded4 +-kdeinit4---kio_http_cache_ ¦ +-klauncher +-kglobalaccel +-knotify4 +-modem-manager +-multiload-apple +-mysqld---10*[{mysqld}] +-named---10*[{named}] +-nmbd +-notification-ar +-notify-osd +-pidgin---{pidgin} +-polkitd +-pulseaudio---gconf-helper ¦ +-2*[{pulseaudio}] +-rsyslogd---2*[{rsyslogd}] +-rtkit-daemon---2*[{rtkit-daemon}] +-services.exe---plugplay.exe---2*[{plugplay.exe}] ¦ +-winedevice.exe---3*[{winedevice.exe}] ¦ +-3*[{services.exe}] +-smbd---smbd +-snmpd +-sshd +-timidity +-trashapplet +-udevd---2*[udevd] +-udisks-daemon---udisks-daemon ¦ +-{udisks-daemon} +-upowerd +-upstart-udev-br +-utorrent.exe---8*[winemenubuilder] ¦ +-{utorrent.exe} +-vnstatd +-winbindd---2*[winbindd] +-2*[winemenubuilder] +-wineserver +-wnck-applet +-wpa_supplicant +-xinetd System monitor and top screenshots which show the zombie process is using resources:
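
    For what it's worth, a zombie is only a leftover process-table entry waiting for its parent to reap it, so signals sent to the zombie itself (including -9) have no effect; the only lever is the parent process. A small sketch for finding and nudging the parent (the PID is whatever ps reports, shown here as a placeholder):

        # list zombies together with their parent PIDs
        ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'
        # politely ask the parent to reap its child
        kill -s SIGCHLD <parent_pid>
        # last resort: terminate the parent so init adopts and reaps the zombie
        kill <parent_pid>

    In the wine/µTorrent case the parent may itself be a wine process (services.exe/wineserver), in which case shutting down the whole wine session with wineserver -k is worth a try.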

    Read the article

  • Running Mathematica 5 remotely

    - by oxinabox.ucc.asn.au
    OK, I have Mathematica 5 - a powerful CAS. I have a cheap netbook which not only is too slow to run Mathematica on, I doubt it has the hard drive space. I do however have remote access to a number of very powerful computers (most of which run various Linuxes, but one of which is Windows Server 2008), mostly over SSH, but other protocols can be arranged for some, I'm sure. (I might even be able to remote desktop the Windows Server 2008 machine.) So I'd like to install Mathematica onto one of these machines and then run it remotely, either from the command line via PuTTY or via some other method. I glanced through the Mathematica documentation and read something about using some MathLink program, which links the front end installed on my computer to a remote kernel. Anyone have any experience with this? I'm not sure if this belongs here or on SuperUser.
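
    Two common patterns, sketched under the assumption that the Linux launcher names from Mathematica 5 (math for the text-mode kernel, mathematica for the front end) are what's installed on the remote box:

        # text-only kernel session on the remote machine, usable from PuTTY or any ssh client
        ssh user@bighost math
        # or run the full front end remotely and display it locally over X11 forwarding
        ssh -X user@bighost mathematica

    The MathLink route mentioned in the docs is the lighter-weight option: the local front end stays on the netbook and only the kernel runs remotely, set up in the front end's kernel configuration dialog with an ssh launch command.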

    Read the article

  • Windows Emacs Keybinding

    - by Josh
    I know this is not a Windows site, so my apologies. I use Ubuntu all day, every day, and have finally convinced my buddy to try it. He is on Windows 7, so we installed this: http://www.ourcomments.org/cgi-bin/emacsw32-dl-latest.pl . It seems to be working great, but when he hits C-p ( prev. line ) it is trying to print the page for some reason. So, 2 questions. Is there a way to make it stop that, and is there a way to just run it from the command line, or without all of the fancy mouse stuff? Essentially as --no-windows? Thanks!

    Read the article

  • apt-get doesn't download files from NFS location

    - by Pravesh
    I switched to Unix 3 months ago and am trying to understand the install process, and apt-get in particular. I am able to successfully download and install packages when I configure my repository with an http location in the /etc/apt/sources.list file, e.g. deb http://web.myspqce.com/u/eng/rose/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free With this, apt-get install will both download the package (into /var/cache/apt/archive) and install it. When I change the source location to file instead of http (an NFS mount point), the package gets installed but NOT downloaded into /var/cache/apt/archive: deb file:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free Please let me know if there is any configuration or setting I have to make to let apt-get both download and install the package when I use the (NFS) file:/ location instead of http:/ in sources.list. To achieve this, I can use apt-get --download-only and then apt-get install for download and install in two separate calls, but I want to know why the package is not downloaded by apt-get install, but only installed, when file:/ is used in sources.list.

    Read the article

  • Multiple client connecting to master MySQL over SSL

    - by Bastien974
    I successfully configured MySQL replication over SSL between 2 servers across the internet. Now I want a second server, in the same location as the replication slave, to open a connection to the master db over SSL. I used the same commands found here http://dev.mysql.com/doc/refman/5.1/en/secure-create-certs.html to generate a new set of client-cert.pem and client-key.pem with the same master db ca-cert/key.pem, and I also used a different Common Name. When I try to initiate a connection between this new server and the master db, it fails: mysql -hmasterdb -utestssl -p --ssl-ca=/var/lib/mysql/newcerts/ca-cert.pem --ssl-cert=/var/lib/mysql/newcerts/client-cert.pem --ssl-key=/var/lib/mysql/newcerts/client-key.pem ERROR 2026 (HY000): SSL connection error It works without SSL.
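
    A first check worth doing on the new client box is whether the generated certificate actually validates against the CA the master uses, and whether it is within its validity window; a small sketch with openssl (paths are the ones from the command above):

        openssl verify -CAfile /var/lib/mysql/newcerts/ca-cert.pem /var/lib/mysql/newcerts/client-cert.pem
        openssl x509 -noout -subject -issuer -dates -in /var/lib/mysql/newcerts/client-cert.pem

    If verify fails, the client cert was most likely signed against a different copy of the CA key than the one the master is configured with; regenerating the client pair from the master's own ca-key.pem is the usual fix for ERROR 2026 in that case.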

    Read the article

  • Unable to connect to mysql through JDBC connector through Tomcat or externally

    - by Stefan Kendall
    I've installed a stock mysql 5.5 installation, and while I can connect to the mysql service via the mysql command, and the service seems to be running, I cannot connect to it through spring+tomcat or from an external jdbc connector. I'm using the following URL: jdbc:mysql://myserver.com:myport/mydb with proper username/password, but I receive the following message: server.com: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. the driver has not received any packets from the server. and tomcat throws: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) Which seems to be the same issue as if I try to connect externally.
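
    A hedged first step: a stock MySQL install on many Linux distributions binds only to 127.0.0.1 (or has networking disabled), which produces exactly this "Communications link failure" for TCP clients such as Connector/J. A quick check, with the config path and port shown as typical Debian/Ubuntu defaults rather than known values for this server:

        sudo netstat -ltnp | grep mysqld            # listening on 0.0.0.0:3306, or only 127.0.0.1?
        grep -E 'bind-address|skip-networking' /etc/mysql/my.cnf
        # after setting bind-address to the server's address (or 0.0.0.0) and restarting mysql:
        mysql -h myserver.com -P 3306 -u someuser -p   # connectivity test from the client machine

    If the TCP test from the client box fails the same way, a firewall between the two hosts is the other usual suspect.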

    Read the article

  • Can I create an SSH user which can access only certain directory?

    - by RiMMER
    I have a Virtual Private Server which I can connect to over SSH with my root account, being able to execute any Linux command and access the entire disk, obviously. I would like to create another user account which would also be able to access this server using SSH, but only a certain directory, for example /var/www/example.com/ For example, imagine this user has a HUGE error.log file (500 MB) located in /var/www/example.com/logs/error.log When accessing this file using FTP, this user needs to download 500 MB to view the last lines of the log, but I'd like him to be able to execute something like this: tail error.log Therefore I need him to be able to access the server using SSH, but I don't want to grant him access to all server areas. How can I do this?
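
    A sketch of the usual OpenSSH-only approach, with the username invented for illustration; note that ChrootDirectory requires the chroot path (and every directory above it) to be owned by root and not group-writable, and that any binaries the user should run (a shell, tail, and their libraries) must exist inside the chroot, which is the fiddly part:

        # /etc/ssh/sshd_config (OpenSSH 4.9 or later), then restart sshd
        Match User limiteduser
            ChrootDirectory /var/www/example.com
            AllowTcpForwarding no
            X11Forwarding no
            # for file transfer only (no shell, so no tail), this is much simpler:
            # ForceCommand internal-sftp

    Because populating a chroot with a working shell is awkward, a common alternative is to leave the account unchrooted with a locked-down shell and grant just tail on that log via a restricted sudoers entry.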

    Read the article

  • wbadmin incremental system state backup

    - by user74513
    I am doing system state backups on a Windows Server 2008 R2 Enterprise (Service Pack 1) machine and expected the backups after the first one to be incremental. However, with each backup a new directory with VHD files is created, and the VHD files are almost the same size as with the first backup, so the backups do not seem to be incremental. I used the following command to do the backup: wbadmin start systemstatebackup -backupTarget:f: I played around with the settings under "Configure Performance Settings" in the Windows Server Backup plugin in Server Manager, but according to the description at the top of the dialog these settings are not applied to system state backups. Are there any settings available for wbadmin system state backup to make the backups incremental?

    Read the article

  • I cannot install anything in Ubuntu that has a dependency problem

    - by phpGeek
    I wanted to install TeamViewer on a 64-bit Linux system. What I did was download the teamviewer.deb file and install it as below: sudo dpkg -i install teamviewer.deb Then I wanted to correct the dependency problem, so I issued the following command: sudo apt-get install libc6:i386 libgcc1:i386 libasound2:i386 libfreetype6:i386 zlib1g:i386 libsm6:i386 libxdamage1:i386 libxext6:i386 libxfixes3:i386 libxrender1:i386 libxtst6:i386 I got the following error: E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. E: Unable to correct dependencies I then tried: sudo apt-get install -f Again I got the following error: E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. E: Unable to correct dependencies Even when I tried to install gdebi, I got the above error again. I emptied the archives folder: sudo apt-get clean sudo apt-get update sudo apt-get upgrade Again I have a problem installing my deb package. Is there anything I can do now to solve this problem? I've read the article below as well: Install Teamviewer using a 64-bits system but I get a dependency error EDIT: I found libperl5.14:amd64 to be a broken package. I used: sudo apt-get remove libperl5.14:amd64 I got the following message: E: Unable to locate package Broken
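
    One detail worth noting in the transcript above: dpkg -i takes the .deb path directly, so the extra word in "sudo dpkg -i install teamviewer.deb" is a stray argument. The usual two-step sequence looks like this (the package filename is just an example):

        sudo dpkg -i teamviewer_linux_x64.deb    # will complain about the missing i386 dependencies
        sudo apt-get -f install                  # pulls in whatever dpkg reported as missing
        # if apt still refuses, these often reveal the held or broken package:
        sudo apt-get check
        dpkg --audit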

    Read the article

  • App installed in ~/usr launches from terminal but not Applications menu (or why does setting ld_library_path in .profile not work as it should)

    - by levesque
    I have built and installed an application under a directory of my choosing, let's say under /home/jim/usr, so files have been put in three or four folders, all under this $HOME/usr folder (e.g., bin, include, lib, share, etc.). I can launch this application from the command line just fine, as I added the proper paths to my environment variables PATH and LD_LIBRARY_PATH in ~/.bashrc. I added the same paths to the ~/.profile file, which, if I'm not mistaken, is supposed to be parsed by Ubuntu. Doesn't work. Nothing. Where can I go from there? EDIT: I logged out/in and restarted my computer. Neither changed a thing. The problem seems to come from the fact that no matter what I do, the LD_LIBRARY_PATH environment variable is not properly passed to Ubuntu. Using log files, I found that the application I'm trying to run in this example doesn't find one of its dependencies located in ~/usr/lib. One solution would be to add the /home/jim/usr/lib folder to a file located in /etc/ld.so.conf.d/, but I don't have admin rights on this machine. Making a wrapper script like this one works:
    #!/bin/bash
    export LD_LIBRARY_PATH=$HLOC/usr/lib
    application &> $HOME/application_messages.log
    but that would force me to wrap all my home-compiled applications with this script. Any ideas? Why does Ubuntu/Gnome remove the LD_LIBRARY_PATH environment variable from my set variables? Is it because trying to do this is bad practice? UPDATE (and solution): As found by Christopher, there is a bug report about this on Launchpad. LD_LIBRARY_PATH is unset after parsing of the ~/.profile file. See the bug report. Seems the only solution for now is to make a wrapper script.
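
    Since the GUI session never sees LD_LIBRARY_PATH, another workaround that avoids wrapping every binary is to set the variable only in the launcher's Exec line via env; a minimal sketch (the .desktop file name and application name are placeholders, the paths are the ones from the question):

        # ~/.local/share/applications/myapp.desktop
        [Desktop Entry]
        Type=Application
        Name=MyApp
        Exec=env LD_LIBRARY_PATH=/home/jim/usr/lib /home/jim/usr/bin/myapp
        Terminal=false

    This keeps the variable scoped to that one menu entry, so nothing else in the session is affected.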

    Read the article

  • X11 forwarding - have window showing on two computers.

    - by Tim
    I currently have 2 desks: one with a Mac, the other with 2 Linux computers on it. One of those Linux computers is constantly on, so I would run the chat app on that (Pidgin, most likely). I can run Pidgin on the Mac by sshing into the Linux computer with "ssh -X", i.e. X11 forwarding. However, I want to have the chat windows viewable on both computers simultaneously (or at the very least, movable between them by entering a command or whatever). Is this possible and if so, how? (Also: I know that it'd be possible via VNC or whatever, but yuck - X11 on the Mac is bad enough.) Thanks, Tim
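
    Plain ssh -X ties each window to the display it was started on, so one tool worth a look is xpra, which runs the program against a persistent virtual display that can be detached from one machine and re-attached from another (syntax below is from memory of older xpra releases and may need adjusting; in those versions only one client could attach at a time, so this gives switchable rather than truly simultaneous viewing):

        # on the always-on Linux box
        xpra start :100 --start-child=pidgin
        # from the Mac or the other Linux box
        xpra attach ssh:user@linuxbox:100

    For genuinely simultaneous viewing, a shared-session approach (VNC, or a sharing-capable xpra) is still the more realistic route.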

    Read the article

  • wget not converting links

    - by acrosman
    I am trying to mirror a fairly large site (20,000+ pages) prior to a major overhaul. Basically, I need a backup before cutting over to the new one in case we forgot something we need (we'll have about 1,000 pages at launch). The site is run on a CMS that I cannot easily extract usable data from, so I'm trying to make the copy with wget. My problem is that wget does not appear to be actually converting links, despite the presence of --convert-links or -k in the command. I've tried a couple of different combinations of flags, but I haven't been able to get the output I need. Most recent failed attempt was: nohup wget --mirror -k -l10 -PafscSnapshot --html-extension -R *calendar* -o wget.log http://www.example.org & I've also included --backup-converted, and --convert-links instead of -k (not that it should have mattered). I've done it with and without -P and -l, again not that they should matter. Results in files that still have links like: http://www.example.org/ht/d/sp/i/17770
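
    Two things commonly cause exactly this symptom: the unquoted -R *calendar* is expanded by the shell before wget sees it, and link conversion only happens after the entire retrieval finishes, so a run that is interrupted (or whose nohup job is killed) never reaches the conversion pass. Also, wget only rewrites links to documents it actually downloaded; everything else stays absolute. A hedged re-run of essentially the same command:

        nohup wget --mirror --convert-links --backup-converted --html-extension \
            -l 10 -R '*calendar*' -P afscSnapshot -o wget.log http://www.example.org &

    Checking the tail of wget.log for the link-conversion messages at the end is a quick way to confirm that stage actually ran.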

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which like the portal call the REST Management APIs) to make it even easier to script and automate your administration tasks. We are offering both a Powershell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure. The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally, no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time. You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web-site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience. Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast, deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments. Previously we referred to this capability as "hosted services" – with this week's release we are now referring to this capability as "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and setup a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role. The big benefit with the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might have been otherwise unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign-up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps,

    Scott

    Read the article

  • rsync option --log-file does not seem to be working

    - by user1017735
    I am using the --log-file option in rsync to see the logs, but when I try to run it, it says: --log-file unrecognized option Here is my command: #/usr/bin/rsync -av -u --log-file="/sreeni/log.txt" --rsync-path=/usr/local/bin/rsync /sreeni nnmhpt20.ind.hp.com:/sreeni Could someone help me with the right syntax? I tried these options also: #/usr/bin/rsync -av -u --log-file /sreeni/log.txt --rsync-path=/usr/local/bin/rsync /sreeni nnmhpt20.ind.hp.com:/sreeni and #/usr/bin/rsync -av -u --log-file="/sreeni/log.txt" --rsync-path=/usr/local/bin/rsync /sreeni nnmhpt20.ind.hp.com:/sreeni
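
    If memory serves, --log-file was only added to rsync around version 2.6.9, so an older /usr/bin/rsync on the local side will reject the option no matter how it is quoted (the option is parsed by the local rsync, not the one named in --rsync-path). A quick check plus a fallback that just captures the verbose output:

        /usr/bin/rsync --version | head -1
        /usr/local/bin/rsync --version | head -1
        # fallback if the local rsync predates --log-file:
        /usr/bin/rsync -av -u --rsync-path=/usr/local/bin/rsync /sreeni \
            nnmhpt20.ind.hp.com:/sreeni > /sreeni/log.txt 2>&1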

    Read the article

  • Why Wouldn't Root Be Able to Change a Zone's IP Address in Oracle Solaris 11?

    - by rickramsey
    You might assume that if you have root access to an Oracle Solaris zone, you'd be able to change the zone's IP address. If so, you'd proceed along these lines ...
    First, you'd log in:
    root@global_zone:~# zlogin user-zone
    Then you'd remove the IP interface:
    root@user-zone:~# ipadm delete-ip vnic0
    Next, you'd create a new IP interface:
    root@user-zone:~# ipadm create-ip vnic0
    Then you'd assign the IP interface a new IP address (10.0.0.10):
    root@user-zone:~# ipadm create-addr -a local=10.0.0.10/24 vnic0/v4
    ipadm: cannot create address: Permission denied
    Why would that happen? Here are some potential reasons:
    - You're in the wrong zone
    - Nobody bothered to tell you that you were fired last week.
    - The sysadmin for the global zone (probably your ex-girlfriend) enabled link protection mode on the zone with this sweet little command:
    root@global_zone:~# dladm set-linkprop -p \
    protection=mac-nospoof,restricted,ip-nospoof vnic0
    How'd your ex-girlfriend learn to do that? By reading this article: Securing a Cloud-Based Data Center with Oracle Solaris 11, by Orgad Kimchi, Ron Larson, and Richard Friedman. When you build a private cloud, you need to protect sensitive data not only while it's in storage, but also during transmission between servers and clients, and when it's being used by an application. When a project is completed, the cloud must securely delete sensitive data and make sure the original data is kept secure. These are just some of the many security precautions a sysadmin needs to take to secure data in a cloud infrastructure. Orgad, Ron, and Richard explain the rest and show you how to employ the security features in Oracle Solaris 11 to protect your cloud infrastructure. Part 2 of a three-part article on cloud deployments that use the Oracle Solaris Remote Lab as a case study.
    About the Photograph
    That's the fence separating a small group of tourist cabins from a pasture in the small town of Tropic, Utah.
    Follow Rick on: Personal Blog | Personal Twitter | Oracle Forums
    Follow OTN Garage on: Web | Facebook | Twitter | YouTube
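
    For completeness, the corresponding fix lives in the global zone: either whitelist the new address or drop the protection property on the zone's vnic. A hedged sketch (property names as I remember them from Solaris 11 dladm, so verify with dladm show-linkprop):

        # permit the zone to use 10.0.0.10 while keeping ip-nospoof protection in place
        root@global_zone:~# dladm set-linkprop -p allowed-ips=10.0.0.10 vnic0
        # or remove link protection from the vnic entirely
        root@global_zone:~# dladm reset-linkprop -p protection vnic0

    After either change, the ipadm create-addr command shown above should succeed inside the zone.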

    Read the article

  • HTC Android fails to mount - mount from computer?

    - by Ben Franchuk
    I have an HTC Incredible S (S-Off, rooted, ViperVIVO 1.3.0 ICS) that has seemingly ceased to possess the ability to mount its SD storage to my computer. For whatever reason, whenever I plug in my device to transfer files from computer to phone and vice versa, the computer cannot actually access the phone. I get prompted with a window on my phone when I first plug it in, asking me which mode I want to put the device into (charge mode, tether mode, etc.), and even if I select the "Disk Drive" function, the phone still cannot successfully mount to my computer. The phone itself unmounts the SD and says that the computer is connected, but again, it doesn't work. Is there any way to force mount the device from my computer - either via command or otherwise? This should help in that if I unmount the SD from the phone I should be able to mount it to my computer, from my computer, correct?
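
    Since the phone is rooted, one way to move files while USB mass storage is misbehaving is adb from the Android SDK (USB debugging has to be enabled on the phone; the paths below are examples):

        adb devices                          # the phone should show up as a listed device
        adb pull /sdcard/ ./sdcard-backup/   # copy from the phone's SD storage to the computer
        adb push movie.mp4 /sdcard/Movies/   # copy from the computer to the phone

    This sidesteps the mount problem entirely, since adb talks to the phone over the debug bridge rather than exposing the SD card as a disk.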

    Read the article

  • How to configure chrome to open magnet url's with deluge?

    - by michael_n
    After upgrading to Ubuntu 11.04 (natty) from 10.10, I can no longer open magnet (torrent) links in Chromium, and set deluge to automatically open and accept the url. (Edit: currently ".torrent" files are not a problem, but magnet url's, e.g. of the form "magnet:?xt=urn:...", are now the only problem. Not sure if something updated...?) Rather, now only transmission will automatically open torrents, magnet links, etc. There doesn't seem to be a way to set deluge to be the default torrent client. (And, there also doesn't seem to be a "default application" setting for bittorrent client to replace transmission w/ deluge.) Notes: I found some old threads on this issue, and only a one or two newer ones. The newer threads seem to suggest xdg-open is to blame. But not many people seem to be running into this problem, so... maybe it's just me? Not using firefox, so manually setting apps for mime-types or extensions doesn't work (that's not an option in chrome/chromium, afaik -- you have to rely on the OS) I uninstalled transmission, and then basically nothing happened when clicking on torrent/magnet links. running from the shell also opens transmission (not deluge): xdg-open "magnet:?xt=urn:bt..&tr=http://tracker.....com/announce" My current url handlers are: $ gconftool -a /desktop/gnome/url-handlers/magnet command = deluge "%s" needs_terminal = false enabled = true The only work-around I have (which does work) is to rename /usr/bin/transmission-gtk{,.bak} and create my own /usr/bin/transmission-gtk : $ cat /usr/bin/transmission-gtk #!/bin/bash deluge "$@" Anyone else run into this, know of a bug, workaround, or...?
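
    One hedged explanation: on 11.04 URL schemes are resolved through x-scheme-handler MIME types (via xdg-open/GIO) rather than the old gconf url-handlers tree, which would be why the gconf key above is ignored. A small sketch, assuming Deluge's desktop file is named deluge.desktop:

        xdg-mime default deluge.desktop x-scheme-handler/magnet
        xdg-mime query default x-scheme-handler/magnet   # should now report deluge.desktop

    This records Deluge as the default handler in your local MIME defaults, so the transmission-gtk wrapper trick becomes unnecessary.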

    Read the article

  • Upgrade to Ubuntu 13.04 from 12.04 with an ISO image

    - by Digvijay Yadav
    I have Ubuntu 12.04 installed on my system and I want to upgrade it to Ubuntu 13.04. I want to do this upgrade using an ISO image of Ubuntu 13.04. I tried this Solution but it didn't work for me; after running these commands I didn't get any alerts about updating. Also, I don't understand the gksu part of the solution. Here are the steps I tried: sudo mount -t iso9660 -o loop PATH/TO/ISO /cdrom then sudo /cdrom/cdromupgrade Read more: http://linuxpoison.blogspot.tw/2011/06/how-to-upgrade-ubuntu-using-alternate.html I also wanted to know if I can do this using a networked computer. By this I mean the ISO file is on some other computer. Thank you.
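
    On the networked-computer part: the ISO does not have to be local, since a remote directory can be mounted first and the image loop-mounted from there. A small sketch using sshfs (the hostname and paths are placeholders):

        sudo apt-get install sshfs
        mkdir -p ~/remote-isos
        sshfs otherpc:/path/to/isos ~/remote-isos
        sudo mount -t iso9660 -o loop ~/remote-isos/ubuntu-13.04-desktop-amd64.iso /cdrom

    Bear in mind that the cdromupgrade script in that guide comes from the alternate-style images, so whether this route works at all depends on the 13.04 image actually shipping that script.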

    Read the article

  • How to generate user-specific PDF with encrypted hidden watermark?

    - by Dave Jarvis
    Background
    Using LaTeX to write a book. When a user purchases the book, the PDF will be generated automatically.
    Problem
    The PDF should have a watermark that includes the person's name and contact information.
    Question
    What software meets the following criteria:
    - Applies encrypted, undetectable watermarks to a PDF
    - Open Source
    - Platform independent (Linux, Windows)
    - Fast (marks a 200 page PDF in under 1 second)
    - Batch processing (exclusively command-line driven)
    - Collusion-attack resistant
    - Non-fragile (e.g., PDF → EPS → PDF still contains the watermark)
    - Well documented (shows example usages)
    Ideas & Resources
    Some thoughts and findings:
    - Natural language processing (NLP) watermarks.
    - Apply steganography on a randomly selected image. http://openstego.sourceforge.net/cmdline.html
    The problem with NLP is that grammatical errors can be introduced. The problem with steganography is that the images are sourced from an image cache, and so recreating that cache with watermarked images will impart a delay when generating the PDF (I could just delete one image from the cache, but that's not an elegant solution). Thank you!

    Read the article

  • How to write in a <array><dict> structure with defaults write?

    - by Hedge
    I've got a .plist file with a structure like this:
    <plist version="1.0">
    <array>
        <dict>
            <key>BundleIsVersionChecked</key>
            <false/>
            <key>BundleIsRelocatable</key>
            <false/>
            <key>BundleHasStrictIdentifier</key>
            <false/>
            <key>RootRelativeBundlePath</key>
            <string>value</string>
        </dict>
    </array>
    </plist>
    I want to add or edit the RootRelativeBundlePath key with the defaults write command. Another possibility would be writing the whole plist file, but it has to have the exact same structure. How can I do this?
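
    defaults write is awkward when the plist root is an array of dictionaries, because it wants to replace whole top-level values; PlistBuddy addresses entries by index, which matches this layout. A sketch, with the file name and new value invented for illustration (index 0 is the first, and here only, dict in the array):

        /usr/libexec/PlistBuddy -c "Set :0:RootRelativeBundlePath NewValue" component.plist
        # if the key does not exist yet:
        /usr/libexec/PlistBuddy -c "Add :0:RootRelativeBundlePath string NewValue" component.plist
        /usr/libexec/PlistBuddy -c "Print :0" component.plist    # verify the result

    PlistBuddy edits the file in place and preserves the surrounding array/dict structure, which is exactly the constraint described above.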

    Read the article

  • What could be causing frequent display freezes?

    - by austen
    I just installed Ubuntu 14.04 two days ago (coming from Win8) and in the two days that I've been using it, my display has frozen four or five times. The mouse won't move but the keyboard does respond so I can use the Ctrl+Alt+Bkspc command to fix it. It seems like it might just be the display freezing because one of the times I was watching a Youtube video and the audio continued playing. I have an Nvidia graphics card with the most recent Nvidia drivers for it enabled. I see that a lot of questions about Ubuntu freezing get marked as a duplicate and pretty much always linked back to a thread about what to do when it freezes. Clearly, I've got that bit figured out already and I did read that thread for further advice. What I'm looking for though is how to fix this permanently. output from lspci -nnk | grep -iA2 VGA: 00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09) Subsystem: Lenovo Device [17aa:2200] Kernel driver in use: i915 Update: JohnnyEnglish pointed out that Ubuntu is using the integrated graphics, not my Nvidia card. It turns out my laptop uses Nvidia Optimus and I cannot enable only the graphics card through the BIOS. I found out about Nvidia Prime and got it set up using this article. The settings panel which allows you to select the graphics says that 'performance mode' is enabled but when I check which graphics controller is enabled through the terminal, it still says it's using the integrated graphics. I'm not sure if this could be causing the freezes but I guess it's a starting point. Any ideas on how to resolve this?

    Read the article
