Search Results



  • SharePoint 2007 Hosting :: How to Move a Document from One Library to Another

    - by mbridge
    Moving a document using a SharePoint Designer workflow involves copying the document to the target SharePoint document library, and then deleting it from the document library it is currently in. You can use the Copy List Item action to copy the document and the Delete Item action to delete it. To create a SharePoint Designer workflow that moves a document from one document library to another:

    1. In SharePoint Designer 2007, open the SharePoint site on which the document library that contains the documents to move is located.
    2. On the Define your new workflow screen of the Workflow Designer, enter a name for the workflow, select the document library you want to attach the workflow to (the library containing the documents to move), select Allow this workflow to be manually started from an item, and click Next.
    3. On the Step 1 screen of the Workflow Designer, click Actions, and then click More Actions on the drop-down menu.
    4. In the Workflow Actions dialog box, select List Actions from the category drop-down list box, select Copy List Item from the actions list, and click Add. The following text is added to the Workflow Designer: Copy item in this list to this list
    5. On the Step 1 screen, click the first this list (representing the document library to copy the document from) in the text of the Copy List Item action.
    6. In the Choose List Item dialog box, leave Current Item selected, and click OK.
    7. On the Step 1 screen, click the second this list (representing the document library to copy the document to) in the text of the Copy List Item action, and select the destination document library from the drop-down list box that appears.
    8. On the Step 1 screen, click Actions, and then click More Actions on the drop-down menu.
    9. In the Workflow Actions dialog box, select List Actions from the category drop-down list box, select Delete Item from the actions list, and click Add. The following text is added to the Workflow Designer: then Delete item in this list
    10. On the Step 1 screen, click this list in the text of the Delete Item action.
    11. In the Choose List Item dialog box, leave Current Item selected and click OK. The final text for the workflow should now look like: Copy item in DocLib1 to DocLib2 then Delete item in DocLib1, where DocLib1 is the SharePoint document library containing the document to move and DocLib2 is the document library to move the document to.
    12. On the Step 1 screen of the Workflow Designer, click Finish.

    How to test the workflow:

    1. Go to the SharePoint document library to which you attached the workflow, click a document, and select Workflows from the drop-down menu.
    2. On the Workflows page, click the name of your SharePoint Designer workflow.
    3. On the workflow initiation page, click Start.


  • Some thoughts on email hosting for one’s own domain

    - by jamiet
    I have used the same email providers for my own domains for a few years now, however I am considering moving over to a new provider. In this post I'll share my current thoughts, and hopefully I'll get some feedback that might help me decide what to do next.

    What I use today

    I have three email addresses that I use primarily (I have changed the domains in this blog post as I don't want to give them away to spammers):

    - [email protected] – My personal account that I give out to family and friends and which I use to register on websites
    - [email protected] – An account that I use to catch email from the numerous mailing lists that I am on
    - [email protected] – I am a self-employed consultant, so this is an account that I hand out to my clients, my accountant, and other work-related organisations

    Those two domains (jtpersonaldomain.com & jtworkdomain.com) are both managed at http://domains.live.com, which is a fantastic service provided by Microsoft that for some perplexing reason they never bother telling anyone about. It offers multiple accounts (I have seven at jtpersonaldomain.com, though as already stated I only use two of them) which are accessed via Outlook.com (formerly Hotmail.com), along with usage reporting plus a few other odds and sods that I never use. Best of all though, it's totally free. In addition, given that I have both domains hosted using http://domains.live.com, I can link my various accounts together and switch between them at Outlook.com without having to log in and out. N.B. You'll notice there are two other accounts in addition to the three I already mentioned. One is my mum's account, which helps me provide IT support/spam-filtering services to her, and the other is the donation account for AdventureWorks on Azure. I find that linking feature to be very handy indeed. Finally, http://domains.live.com is the epitome of "it just works". I set up jtworkdomain.com at http://domains.live.com over three years ago and I am pretty certain I haven't been back there even once to administer it.

    Proposed changes

    OK, so if I like http://domains.live.com so much, why am I considering changing? Well, I earn my corn in the Microsoft ecosystem, and if I'm reading the tea leaves correctly it's looking increasingly likely that the services I'm going to have to be familiar with in the future will all be running on top of and alongside Windows Azure Active Directory and Office 365. It's clear to me that Microsoft is pushing its customers toward cloud services and, like it or lump it, data integration developers like me may have to come along for the ride. I don't think the day is too far off when we can log into Windows Azure SQL Database (aka SQL Azure), Team Foundation Service, Dynamics etc. using the same credentials that are currently used for Office 365, and over time I would expect those things to get integrated together a lot better – that integration will be based upon a Windows Azure Active Directory identity. This should not come as a surprise; in my opinion Microsoft's whole enterprise play over the past 15 or 20 years can be neatly summarised as "get people onto Windows Server and Active Directory, then upsell from there" – in the not-too-distant future the only difference is that they're trying to do it in the cloud. I want to get familiar with these services, and hence I am considering moving jtworkdomain.com onto Office 365.
    I'll lose the convenience of easily being able to switch to that account at Outlook.com, and moreover I'll have to start paying for it (I think it'll be about fifty quid a year – not a massive amount, but quite a bit more than free), but increasingly this is beginning to look like a move I have to make. So that's where my head is at right now. Anyone have any relevant thoughts or experiences to share? Please let me know in the comments below. @Jamiet


  • UEFI Dual-Boot - Ubuntu 12.04.3 + Windows 8.1 (One GPT HDD)

    - by swafbrother
    Hello, I'm having trouble setting up a dual-boot (Ubuntu 12.04 LTS and Windows 8.1) on my ASUS K55VM laptop's hard drive (500 GB). I was mostly following tutorials, but at some point something went wrong. Up to now, I have followed these steps:

    1. I formatted my HDD to GPT.
    2. I clean-installed Windows 8.1. I let Windows choose the partitions to use, and it created: a Recovery partition (sda1), an EFI System Partition (sda2), a Microsoft Reserved Partition (sda3), and a Windows data partition or C drive (sda4).
    3. I reduced the Windows data partition via Windows' Disk Management.
    4. I made a bootable USB stick with Ubuntu 12.04 LTS from the ISO, using Universal USB Installer.
    5. I created these partitions for Ubuntu: a boot partition mounted at /boot (sda5), a root partition mounted at / (sda6), and a swap partition (sda7). As the device for boot loader installation I chose /dev/sda.
    6. When I rebooted, it went straight into Ubuntu. So I installed Boot-Repair and clicked Recommended Repair. It automatically did its job without asking for anything.
    7. I rebooted and GRUB showed up, with a lot of options. At this point I had a decent dual-boot setup; the Ubuntu entry and both Windows entries worked fine: Ubuntu, Windows Boot UEFI Loader, and Windows UEFI bkpbootmgfw.efi.
    8. I executed this command: sudo grub-install --force /dev/sda5
    9. Then I tried to make Windows 8.1's boot manager the main boot manager, so that I could choose which OS to boot into from a menu. I downloaded EasyBCD on Windows. It showed 2 Ubuntu entries and 1 Windows entry. I went into the BCD Deployment tab and clicked Write MBR.
    10. At this point, I went into the BIOS and made Windows Boot Manager the first boot option.
    11. When I rebooted, I got a black screen with the message "efidisk read error", and then (I guess) it switched to the next boot option, which is Ubuntu, resulting in GRUB showing up.
    12. From GRUB, the Ubuntu entry works and so do both Windows entries. If I choose Ubuntu, it boots into Ubuntu normally. But if I choose Windows, it goes into Windows' boot manager, where a menu shows up: Ubuntu, Ubuntu, Windows 8.1. If I choose Windows there, it boots into Windows without any problem. If I choose Ubuntu, it boots back into GRUB (back to step 11).

    Here's my boot info summary: http://paste.ubuntu.com/6698171/ Windows Boot Manager is clearly not working as expected; I can't boot directly into it, and I can't boot into it from the BIOS either (efidisk read error again). If I want to boot into Windows I need to boot into GRUB first, which is the opposite of what I wanted. I need help at this point. What is the best thing I can do? Is there a more reliable and/or simpler way of accomplishing a satisfying dual-boot for this situation? Can someone provide a way of going back to the setup I had right after running Boot-Repair (step 7), where I had a more efficient dual-boot? If only I could undo what I did with EasyBCD and skip Windows' boot menu... Can someone provide a way to fix this mess? Thanks in advance, and sorry for the length of this; I wanted to be exhaustive.
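    For reference, a minimal sketch of inspecting and reordering the firmware's boot entries from Ubuntu with efibootmgr, which is often a gentler way to pick the default boot manager than rewriting boot records with EasyBCD. The entry numbers below are hypothetical; your own efibootmgr output will show the real ones.

        # List the current UEFI boot entries and boot order (run from Ubuntu).
        sudo efibootmgr -v

        # Hypothetical output lines:
        #   Boot0000* ubuntu
        #   Boot0001* Windows Boot Manager

        # Put Windows Boot Manager first, with ubuntu (GRUB) as the fallback.
        sudo efibootmgr -o 0001,0000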


  • How to move complete SharePoint Server 2007 from one box to another

    - by DipeshBhanani
    It was time for my first onsite client assignment on SharePoint. The client had a one-server production environment. They wanted to upgrade the topology to a completely new SharePoint farm of three servers. So, the task was to move the whole MOSS 2007 stuff to the new server environment without impacting data. The last three scary words ("... without impacting data...") were actually putting pressure on my head. Moreover, the SSP had to be moved because additional information had been added for users apart from the AD import.

    I thought I had to do only backup and restore. It appeared pretty easy at first thought. Just because of these damn scary words, I thought to check the internet for guidance related to this scenario. I couldn't find anything except general guidance on moving servers on the Microsoft TechNet site. I promised myself to start blogging with this post if I was successful in this task. Well, it took me a long time to write this, but I finally made it. I hope it will be useful to everyone looking at SharePoint server movement.

    Before beginning restoration, make sure there is no difference in the versions of SharePoint on the source and destination servers. Also check whether the state of the SharePoint installation at the time of backup and restore is the same or not (e.g. SharePoint-related service packs and patches, if any).

    The main tasks of the server movement are as follows:

    1. Back up all the databases.
    2. Install and configure SharePoint in the new environment.
    3. Deploy all solutions (WSP files) globally to the destination server, to install the features attached to the solutions.
    4. Install all the custom features.
    5. Deploy/copy custom pages/files which were added to the "12 hive" folder later.
    6. Restore the SSP.
    7. Restore My Site.
    8. Restore the other web applications.

    Tasks 3 to 5 make sure that we have configured the environment well enough for the web applications to be restored successfully. The main and complex task was restoring the SSP. I started restoring the SSP through Central Admin. After a while, the restoration status was updated to "unsuccessful". "Damn it, what went wrong?" I thought, looking at the error detail down the page. I can't remember the error message, but I corrected it and restored again.

    Actually, once you fail to restore the SSP, your restoration will fail again and again unless you clean up all the related stuff properly. I wanted to find the actual reason, so I cleaned, restored, cleaned, restored... I tried almost 5-6 times and finally succeeded. I realized how pleasant it is to see the word "Successful" on the screen. Without wasting more of your reading time, here are the detailed steps for restoring the SSP:

    1. Delete the SSP with the following STSADM command: stsadm -o deletessp -title <SSP name> -deletedatabases -force (e.g. stsadm -o deletessp -title SharedServices1 -deletedatabases -force).
    2. Check for and delete the web application associated with the SSP, if it exists.
    3. Check for and remove the Alternate Access Mapping associated with the SSP, if it exists.
    4. Check for and delete the IIS site, as well as the application pool, associated with the SSP, if they exist.
    5. Stop the following services: Office SharePoint Server Search, Windows SharePoint Services Search, and Windows SharePoint Services Help Search.
    6. Delete all the databases associated with or related to the SSP from SQL Server.
    7. Reset IIS.
    8. Start the three services from step 5 again.
    9. Restore the new SSP.

    After the SSP restoration, everything else completed very smoothly without any more issues. I made a few modifications to the sites for the change of server name and finally, the new environment was ready.
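    A minimal batch sketch of the cleanup sequence above (steps 1, 5, 7 and 8), assuming the SSP is called SharedServices1 and stsadm.exe is on the PATH; the SSP title is an example, so substitute your own, and drop the databases (step 6) in SQL Server separately:

        @echo off
        REM Step 1: delete the failed SSP and its databases.
        stsadm -o deletessp -title SharedServices1 -deletedatabases -force

        REM Step 5: stop the search services.
        net stop "Office SharePoint Server Search"
        net stop "Windows SharePoint Services Search"
        net stop "Windows SharePoint Services Help Search"

        REM Step 7: reset IIS.
        iisreset

        REM Step 8: start the search services again.
        net start "Office SharePoint Server Search"
        net start "Windows SharePoint Services Search"
        net start "Windows SharePoint Services Help Search"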


  • Columbus Regional Airport Authority Cuts Unbudgeted Carryover Costs for Capital Projects by 88% in One Year

    - by Melissa Centurio Lopes
    The Columbus Regional Airport Authority (CRAA) is a public entity that works to connect Central Ohio with the world. It oversees operations at three airports (Port Columbus International Airport, Rickenbacker International Airport, and Bolton Field Airport) and manages the Rickenbacker Inland Port and Foreign Trade Zone #138. It was created in 2002 through the merger of the Columbus Airport Authority and the Rickenbacker Port Authority. CRAA manages approximately 100 projects annually, including initiatives as diverse as road and runway construction and maintenance, terminal improvements, construction of a new air traffic control tower, technology infrastructure development, customer service projects, and energy conservation programs.

    CRAA deployed Oracle's Primavera P6 Enterprise Project Portfolio Management to create a unified methodology for scheduling and capital cash flow management. Today, the organization manages schedules and costs for all of its capital projects by using Primavera to provide enterprise-wide visibility. As a result, CRAA cut unbudgeted carryover costs from US$24.4 million in 2010 to US$3.5 million in 2011, an 88% improvement.

    "Oracle's Primavera P6 and Primavera Contract Management are transforming project management at CRAA. We have enabled resource-loaded scheduling and expanded visibility into cash flow, which allowed us to reduce unbudgeted carryover by 88% in a single year." – Alex Beaver, Manager, Project Controls Office, Columbus Regional Airport Authority

    Challenges

    - Standardize project planning and management for the approximately 100 projects, from airport terminal upgrades to road and runway creation and rehabilitation, that the airport authority undertakes annually
    - Improve control over project scheduling and budgets to reduce unplanned carryover costs from one fiscal year to the next
    - Ensure on-time, on-budget completion of critical infrastructure projects that support the organization's mission to connect Central Ohio with the world through its three airports and inland port

    Solutions

    - Used Primavera P6 Enterprise Project Portfolio Management to develop a unified methodology for scheduling and managing capital projects for the airport authority, including the organization's largest capital project ever: a five-year runway construction project
    - Gained a single, consolidated view into the organization's capital projects and the ability to drill down into resource-loaded schedules and cash flow, enabling CRAA to act earlier to avert the impact of emerging issues, including budget overages and project delays
    - Cut unbudgeted carryover costs from US$24.4 million in 2010 to US$3.5 million in 2011, an 88% improvement

    "Oracle's Primavera solutions are the industry standard for project management. They provide robust and proven functionality that gives us the power to effectively schedule and manage budgets for a wide range of projects, from terminal maintenance, to runway work, to golf course redesign," said Alex Beaver, manager, project controls office, Columbus Regional Airport Authority.


  • Social Targeting: This One's Just for You

    - by Mike Stiles
    Think of social targeting in terms of the archery competition we just saw in the Olympics. If someone loaded up 5 arrows and shot them straight up into the air all at once, hoping some would land near the target, the world would have united in laughter. But sadly for hysterical YouTube video viewing, that's not what happened. The archers sought to maximize every arrow by zeroing in on the spot that would bring them the most points. Marketers have always sought to do the same. But they can only work with the tools that are available. A firm grasp of the desired target does little good if the ad products aren't there to deliver that target.

    On the social side, both Facebook and Twitter have taken steps to enhance targeting for marketers. And why not? As the demand to monetize only goes up, they're quite motivated to leverage and deliver their incredible user bases in ways that make economic sense for advertisers.

    You could already target keywords on Twitter with promoted accounts, and get promoted tweets into search. They would surface for your followers and for some users Twitter thought were like them. Now you can go beyond keywords and target Twitter users based on 350 interests in 25 categories. How does a user wind up in one of these categories? Twitter looks at that user's tweets, it looks at whom they follow, and it runs the data through some sort of Twitter secret sauce. The result is, you have a much clearer shot at Twitter users who are most likely to welcome and be responsive to your tweets. And beyond the 350 interests, you can also create custom segments that find users who resemble followers of whatever Twitter handle you give it.

    That means you can now use boring tweets to sell like a madman, right? Not quite. This ad product is still quality-based, meaning if you're not putting out tweets that lead to interest and thus engagement, that tweet will earn a low quality score and wind up costing you more under Twitter's auction system to maintain. That means, as the old knight in "Indiana Jones and the Last Crusade" cautions, "choose wisely" when targeting based on these interests and categories, to make sure your interests truly do line up with theirs.

    On the Facebook side, they're rolling out ad targeting that uses email addresses, phone numbers, game and app developers' user IDs, and eventually addresses for you bigger brands. Why? Because you marketers asked for it. Here you were with this amazing customer list but no way to reach those same customers should they be on Facebook. Now you can find and communicate with customers you gathered outside of social, and use Facebook to do it. Fair to say such users are a sensible target and will be responsive to your message, since they've already bought something from you. And no, you're not giving your customer info to Facebook. They'll use something called "hashing" to make sure you don't see Facebook user data (beyond email, phone number, address, or user ID), and Facebook can't see your customer data.

    The end result: social becomes far more workable and more valuable to marketers when it delivers on the promise that made it so exciting in the first place. That promise is the ability to move past casting wide nets to the masses and toward concentrating marketing dollars efficiently on the targets most likely to yield results.
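    As an aside, a minimal Python sketch of the kind of "hashing" described above: both sides normalize an identifier (for example an email address) and exchange only its digest, so neither party sees the other's raw customer data. This illustrates the general technique, not Facebook's actual implementation; the addresses are made up.

        import hashlib

        def hash_identifier(email: str) -> str:
            """Normalize an email address and return its SHA-256 digest."""
            normalized = email.strip().lower()
            return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

        # The advertiser hashes its customer list...
        advertiser = {hash_identifier(e) for e in ["Jane.Doe@example.com ", "bob@example.com"]}

        # ...and the platform hashes its own user records the same way,
        # matching on digests rather than on raw addresses.
        platform = {hash_identifier(e) for e in ["jane.doe@example.com", "carol@example.com"]}

        matched = advertiser & platform
        print(f"{len(matched)} matched audience member(s)")  # -> 1 matched audience member(s)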


  • Single Key Multiple Values Data Structure for one-to-many mapping

    - by nijhawan.saurabh
    Dictionaries are good; they are great for storing key/value pairs. But what if you want to store multiple values for a single key? Dictionaries do not allow duplicate keys. I came across a nice way to represent such a data structure using one of the extension methods (ToLookup) in the System.Linq namespace, which converts an IEnumerable<T> to an ILookup<TKey, TElement>.

    This overload of the method expects two parameters (the other overload expects three):

    1. IEnumerable<TSource> source – the list containing the actual data.
    2. Func<TSource, TKey> keySelector – the delegate which computes the keys.

    The method returns an ILookup<TKey, TElement>. This data structure stores keys, and multiple values along each key.

    Let's see a small example:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        internal class Program
        {
            private static void Main(string[] args)
            {
                // Create a list of strings.
                var list = new List<string> { "IceCream1", "Chocolate Moose", "IceCream2" };

                // Generate a lookup data structure, keyed on string length.
                // "IceCream1" and "IceCream2" (length 9) share a single key.
                ILookup<int, string> lookupDs = list.ToLookup(item => item.Length);

                // Enumerate the groupings.
                foreach (var group in lookupDs)
                {
                    foreach (string element in group)
                    {
                        Console.WriteLine(element);
                    }
                }
            }
        }


  • Acer Aspire One 725 - missing graphics card driver for Radeon HD 7290?

    - by Melon
    Recently I bought an Acer Aspire One 725 netbook and installed Ubuntu 12.10 on it. I bought it because it can play HD movies and has Full HD on the external VGA port. However, movies from YouTube have a really slow framerate, and if you open three tabs in Opera (for example Gmail, YouTube and Ask Ubuntu) it gets really laggy. My suspicion is that the graphics card driver is missing. When I check System->Details->Graphics, the driver is unknown. After running lspci | grep VGA I get this output:

        00:01.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Device 980a

    From what I see, I have an AMD C70 processor with an integrated AMD Radeon HD 7290. Has anyone had the same problem? Do you know which drivers need to be installed for the graphics to work properly? On the official Acer page there are only drivers for Win7 and Win8...

    Update: OK, another attempt. I have a fresh Ubuntu 12.10 install with all updates done. I downloaded the Catalyst 12.11 beta drivers and decided to create a package. After installing the package, I get this error in /var/log/Xorg.0.log:

        [    13.394] (**) fglrx(0): NoAccel = NO
        [    13.394] (**) fglrx(0): AMD 2D Acceleration Architecture enabled
        [    13.394] (--) fglrx(0): Chipset: "AMD Radeon HD 7290 Graphics" (Chipset = 0x980a)
        [    13.394] (--) fglrx(0): (PciSubVendor = 0x1025, PciSubDevice = 0x0740)
        [    13.394] (==) fglrx(0): board vendor info: third party graphics adapter - NOT original AMD
        [    13.394] (--) fglrx(0): Linear framebuffer (phys) at 0xe0000000
        [    13.394] (--) fglrx(0): MMIO registers at 0xf0200000
        [    13.394] (--) fglrx(0): I/O port at 0x00003000
        [    13.394] (==) fglrx(0): ROM-BIOS at 0x000c0000
        [    13.484] (II) fglrx(0): ATIF platform detected
        [    13.564] (II) fglrx(0): AC Adapter is used
        [    13.565] (EE) fglrx(0): V_BIOS address 0xd00 out of range
        [    13.565] (EE) fglrx(0): Failed to obtain VBIOS from Kernel!
        [    13.565] (EE) fglrx(0): VBIOS read from Kernel, Invalid signature!
        [    13.565] (EE) fglrx(0): GetBIOSParameter failed
        [    13.565] (EE) fglrx(0): PreInitAdapter failed
        [    13.565] (EE) fglrx(0): PreInit failed
        [    13.565] (II) fglrx(0): === [xdl_xs113_atiddxPreInit] === end


  • How do I get graphics drivers / Bluetooth / card reader working on an Acer Aspire V3-571G?

    - by Adam
    A couple of days ago I bought an Acer Aspire V3-571G laptop without an operating system installed on it; the only thing on it was Linux Linpus. I created a bootable CD with Ubuntu 12.04 64-bit (I read that my processor is 64-bit and that this might be a good configuration for my gear; I'm not especially fluent with all the computer stuff, still trying to learn) and replaced Linpus with Ubuntu. Everything seemed to work fine, but a few exceptions came my way.

    My Bluetooth doesn't work. It seems to be switched on, but when I check my system settings the button is actually off, and I can't drag it permanently to the 'on' position. I tried a couple of commands I found on the net; none of them helped, and there was nothing whatsoever in my BIOS settings about enabling Bluetooth.

    My card reader has some serious problems copying more than one file at a time. I tried to put some music on my phone through a microSD card adapter (because my Bluetooth doesn't work) and it got stuck every single time I copied an album onto it.

    I'm not sure all my drivers were properly installed, so I checked in the terminal what it could tell me about my graphics. I typed:

        sudo lshw -c display

    and what I got was:

        *-display UNCLAIMED
             description: VGA compatible controller
             product: NVIDIA Corporation
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vga_controller cap_list
             configuration: latency=0
             resources: memory:b2000000-b2ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:2000(size=128)
        *-display
             description: VGA compatible controller
             product: Ivy Bridge Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 09
             width: 64 bits
             clock: 33MHz
             capabilities: msi pm vga_controller bus_master cap_list rom
             configuration: driver=i915 latency=0
             resources: irq:44 memory:b3000000-b33fffff memory:c0000000-cfffffff ioport:3000(size=64)

    As I said, I'm no expert and not a native English speaker, but it doesn't seem right. I've got an NVIDIA GeForce GT 640M.


  • Test an SSH Connection from the Windows Command Line

    - by IguanaMinstrel
    I am looking for a way to test whether an SSH server is available from a Windows host. I found this one-liner, but it requires a Unix/Linux host:

        ssh -q -o "BatchMode=yes" user@host "echo 2>&1" && echo "UP" || echo "DOWN"

    Telnet'ing to port 22 works, but that's not really scriptable. I have also played around with Plink, but I haven't found a way to get the functionality of the one-liner above. Does anyone know Plink well enough to make this work? Are there any other Windows-based tools that would work? Please note that the SSH servers in question are behind a corporate firewall and are NOT internet-accessible.

    Arrrg. Figured it out:

        C:\>plink -batch -v user@host
        Looking up host "host"
        Connecting to 10.10.10.10 port 22
        We claim version: SSH-2.0-PuTTY_Release_0.62
        Server version: SSH-2.0-OpenSSH_4.7p1-hpn12v17_q1.217
        Using SSH protocol version 2
        Server supports delayed compression; will try this later
        Doing Diffie-Hellman group exchange
        Doing Diffie-Hellman key exchange with hash SHA-256
        Host key fingerprint is:
        ssh-rsa 1024 aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa
        Initialised AES-256 SDCTR client->server encryption
        Initialised HMAC-SHA1 client->server MAC algorithm
        Initialised AES-256 SDCTR server->client encryption
        Initialised HMAC-SHA1 server->client MAC algorithm
        Using username "user".
        Using SSPI from SECUR32.DLL
        Attempting GSSAPI authentication
        GSSAPI authentication initialised
        GSSAPI authentication initialised
        GSSAPI authentication loop finished OK
        Attempting keyboard-interactive authentication
        Disconnected: Unable to authenticate
        C:\>
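    If Plink isn't an option, a minimal Python sketch of the same check (assuming Python is installed on the Windows host): an SSH server sends its identification string as soon as a TCP connection is opened on port 22, so reading the banner is enough to tell "UP" from "DOWN" without authenticating. The host name here is a placeholder.

        import socket
        import sys

        def ssh_is_up(host: str, port: int = 22, timeout: float = 5.0) -> bool:
            """Return True if something speaking SSH answers on host:port."""
            try:
                with socket.create_connection((host, port), timeout=timeout) as sock:
                    banner = sock.recv(256)  # e.g. b"SSH-2.0-OpenSSH_4.7p1..."
                    return banner.startswith(b"SSH-")
            except OSError:
                return False

        if __name__ == "__main__":
            host = sys.argv[1] if len(sys.argv) > 1 else "host.example.com"
            print("UP" if ssh_is_up(host) else "DOWN")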



  • What's the best cloud backup solution for a small-scale server environment?

    - by nbv4
    I have a server that runs a postgres database containing about 200MB of data. Currently I have a cron job set up on my home computer which:

    1. SSHes into my server,
    2. runs a remote script which makes a backup of the database,
    3. scp's that dump over to my local hard drive for storage (each dump is 20MiB), and
    4. does this every six hours (one month of backups is roughly 2GiB).

    The problem with this setup is that if my local machine goes down for whatever reason, no backups will be made. Also, I can't have the cron job run from the server, because I can't have the server scp to my local machine (firewalls and all that crap). My local machine is running Ubuntu 10.04, and my server is Ubuntu 9.10 server edition. I looked into Ubuntu One, but currently it's GUI-only. I also looked into Dropbox, but it's a pain in the ass to get set up in Linux without GUI support. Amazon S3 looks good, but it's not free (yet dirt cheap). Is there any other alternative that I should look into? I'd prefer something where I can just have my script dump the database into a directory and have the backup service 'watch' that folder and sync accordingly. I could maybe also have my local machine sync to the cloud backup so I have even more redundancy, plus easy access to my backups for use in testing.

    Edit: My server is a VPS, so whatever solution I end up using has to be 100% software-based.
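    For the S3 route, a minimal sketch of a server-side cron script in Python, assuming the boto3 package and AWS credentials are configured on the server; the bucket name, database name, and paths are placeholders:

        import datetime
        import subprocess

        import boto3  # assumes `pip install boto3` and configured AWS credentials

        BUCKET = "my-db-backups"  # placeholder bucket name

        def backup_and_upload() -> None:
            stamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M")
            dump_path = f"/var/backups/mydb-{stamp}.sql.gz"

            # Dump the database and gzip it in one pipeline.
            with open(dump_path, "wb") as out:
                dump = subprocess.Popen(["pg_dump", "mydb"], stdout=subprocess.PIPE)
                subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
                dump.wait()

            # Push the dump to S3; old dumps can be expired with a bucket lifecycle rule.
            key = "postgres/" + dump_path.split("/")[-1]
            boto3.client("s3").upload_file(dump_path, BUCKET, key)

        if __name__ == "__main__":
            backup_and_upload()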


  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I have tried to ask this question a few days ago, but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for the administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory which is owned by another user, like so:

        # path to my_application (owned by web1)
        /var/www/clients/client1/web1/web/my_application/

        # sym-link to my_application from the my_site.com web root (owned by web5)
        /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

        # sym-link to my_application from my_site.net (owned by web4)
        /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this, I have encountered a few problems concerning permissions when performing filesystem operations with PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works. After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for this one domain. That way, there would be only one user who has all the necessary permissions. However, this would mean we'd have to buy a multi-domain SSL certificate, and we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), where we also had only one web directory and multiple domains. So if this was possible, I wonder: is putting all the domains together into one vhost with one main domain and several alias domains the right approach in this case? Or have I misunderstood something?
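    For reference, a minimal sketch of the alias-domain approach in plain Apache configuration (outside ISPConfig; the domain names are the examples from the question): one vhost, one owning user, and the extra domains declared with ServerAlias.

        <VirtualHost *:80>
            ServerName my_site.com
            # The other domains become aliases of the same vhost,
            # so every request runs as the same FastCGI user.
            ServerAlias my_site.net my_site.org
            DocumentRoot /var/www/clients/client1/web1/web/my_application/
        </VirtualHost>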


  • Two Python distributions, sudo picking the wrong one

    - by DHK
    I'm back to Linux after an over-10-year abstinence (fool me thinks), and a little rusty in the sysadmin department. I'm faced with an issue with my Python distribution. I'm using Python 2.7, but based on the Anaconda flavour. I followed the standard guidance, but recently I discovered an issue that I'm not sure how to fix. Under sudo, the standard Python that comes with Ubuntu is used; under my user account, python points to the Anaconda version:

        dhk@localhost:~/home/$ which python
        /opt/anaconda/bin/python
        dhk@localhost:~/home/$ sudo which python
        /usr/bin/python

    This is an issue, as using sudo pip [anything] usually acts on the wrong directory, yet I cannot use pip without sudo.
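    A hedged sketch of one likely cause and fix, as an assumption about this setup: on Ubuntu, sudo resets PATH to the secure_path value in /etc/sudoers, which does not include /opt/anaconda/bin (the path from the question). Inspecting and editing that value with visudo would make sudo find the Anaconda binaries first:

        # Inspect the PATH that sudo actually uses.
        sudo env | grep ^PATH

        # Edit /etc/sudoers safely, then prepend the Anaconda directory, e.g.:
        sudo visudo
        #   Defaults secure_path="/opt/anaconda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"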


  • Managing multiple reverse proxies for one virtual host in apache2

    - by Chris Betti
    I have many reverse proxies defined for my js-host VirtualHost, like so:

        # /etc/apache2/sites-available/js-host
        <VirtualHost *:80>
            ServerName js-host.example.com
            [...]
            ProxyPreserveHost On
            ProxyPass        /serviceA http://192.168.100.50/
            ProxyPassReverse /serviceA http://192.168.100.50/
            ProxyPass        /serviceB http://192.168.100.51/
            ProxyPassReverse /serviceB http://192.168.100.51/
            [...]
            ProxyPass        /serviceZ http://192.168.100.75/
            ProxyPassReverse /serviceZ http://192.168.100.75/
        </VirtualHost>

    The js-host site is acting as shared config for all of the reverse proxies. This works, but managing the proxies involves edits to the shared config and an apache2 restart. Is there a way to manage individual proxies with a2ensite and a2dissite (or a better alternative)? My main objective is to isolate each proxy config as a separate file and manage it via commands.

    First attempt

    I tried making separate files with their own VirtualHost entries for each service:

        # /etc/apache2/sites-available/js-host-serviceA
        <VirtualHost *:80>
            ServerName js-host.example.com
            [...]
            ProxyPass        /serviceA http://192.168.100.50/
            ProxyPassReverse /serviceA http://192.168.100.50/
        </VirtualHost>

        # /etc/apache2/sites-available/js-host-serviceB
        <VirtualHost *:80>
            ServerName js-host.example.com
            [...]
            ProxyPass        /serviceB http://192.168.100.51/
            ProxyPassReverse /serviceB http://192.168.100.51/
        </VirtualHost>

    The problem with this is that apache2 loads the first VirtualHost for a particular ServerName and ignores the rest. They aren't "merged" somehow, as I'd hoped.
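    A minimal sketch of one way to get per-proxy files without per-service vhosts, assuming Apache's Include directive and a directory layout invented for this example: keep the single vhost and have it include one fragment per service, so adding or removing a proxy is just adding or deleting a file and reloading.

        # /etc/apache2/sites-available/js-host
        <VirtualHost *:80>
            ServerName js-host.example.com
            ProxyPreserveHost On
            # Pull in one file per proxied service (directory name is an example).
            Include /etc/apache2/js-host.d/*.conf
        </VirtualHost>

        # /etc/apache2/js-host.d/serviceA.conf
        ProxyPass        /serviceA http://192.168.100.50/
        ProxyPassReverse /serviceA http://192.168.100.50/

    Enabling or disabling a service then amounts to moving its fragment in or out of js-host.d and reloading Apache, which could itself be wrapped in small scripts in the spirit of a2ensite/a2dissite.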


  • One National Team One Event – SharePoint Saturday Kansas City

    - by MOSSLover
    I wasn't expecting to run an event from 1,000 miles away, but some stuff happened, you know, like it does, and I opted in. It was really weird, because people asked: why are you living in NJ and running Kansas City? I did move, but it was like my baby, and Karthik didn't have the ability to do it this year. I found it really challenging, because I could not physically be in Kansas City. At first I was freaking out, and Lee Brandt, Brian Laird, and Chris Geier offered to help. Somehow I couldn't come the day of the event; time-wise it just didn't work out. I could do all the leg work prior to the event, but weekends just were not good. I was going to be in DC until March or April on the weekdays, so leaving that weekend was too tough. As it worked out, Lee was my eyes and ears for the venue. Brian was the sponsor and prize box coordinator if anyone needed to send items. Lee also helped Brian move all the boxes the day of the event. I did everything we could do electronically, such as getting the sponsors to coordinate with Michael Lotter on invoicing, getting the speakers, posting the submissions, budgeting the money, setting up a speaker dinner by phone, plus all that other stuff you do behind the scenes. Chris was there to help Lee and Brian the day of the event and to help us out with the speaker dinner. Karthik finally got back from India, and he was there the night before, getting the folders together and the signs, and stuffing it all. Jason Gallicchio (my cohort for SPS NYC) also helped me out, as he did the schedule and helped with posting the speakers' abstracts, and so did Chris Geier by posting the bios. The lot of them enlisted a few other monkeys to help out. It was the weirdest thing I've ever seen, but it worked. Around 100+ attendees ended up showing, and I hear it was a great event. Jason, Michael, Chris, Karthik, Brian, and Lee are not all from the same area, but they helped me bring this event together. It was a national SharePoint Saturday team that brought together a specific local event for Kansas City. It's like a metaphor for the entire SharePoint community: we help our own kind out, and they didn't let me fail. I know Lee and Brian aren't technically SharePoint people, but they are honorary SharePoint community members. Thanks, everyone, for the support and help in bringing this event together.


  • Microsoft Silverlight MVP one more time

    - by pluginbaby
    Another wonderful first email of the year… announcing that I've just been re-awarded Most Valuable Professional (MVP) by Microsoft for Silverlight. This is my 5th year in a row as an MVP, and I am still very honoured and excited! In 2010 I had the pleasure of being involved in many community events around Silverlight, speaking at Microsoft conferences and user groups (doing the launch of the Vancouver Silverlight User Group was fun!), as well as taking part in worldwide conferences like MIX in Las Vegas and the MVP Summit in Redmond. I also took on new kinds of activities in 2010: I wrote questions for the first Microsoft Silverlight certification exam (70-506), and I was a technical reviewer of 3 Silverlight books. And I finally started to share more on Twitter @LaurentDuveau. In 2010 the content of this blog was mostly about Silverlight; I expect it to be the same in 2011, plus a touch of Windows Phone as well. I already know that 2011 will be a hell of a good year… I'll be at the next MVP Summit in Seattle, I'm also a speaker at DevTeach, which comes back to Montreal (at last!), and I have some nice Silverlight training plans for France and Tunisia. More than that, my business RunAtServer is healthy (proud of my team!) and I have insane news and a very big surprise coming on that front... stay tuned! Happy New Year!


  • How to Get All the Windows 8 Editions on One Install Disk

    - by Taylor Gibb
    There are a lot of different versions of Windows, but you probably didn't know that, short of the Enterprise edition, the disc or image that you own contains all versions for that architecture. Read on to see how we can use them to make a universal Windows 8 install disc.

    Things you will need:

    - An x86 version of Windows 8
    - An x64 version of Windows 8
    - An x86 version of Windows 8 Enterprise
    - An x64 version of Windows 8 Enterprise
    - A Windows 8 PC

    Note: While we will use all the images above, you don't really need the Enterprise edition. You could always leave out parts of the tutorial if you know what you are doing; if you are not comfortable with that and still want to follow through, you could always grab the Enterprise evaluation images that are available to the public for free on MSDN.

    Getting started

    To get started you will need to download the Windows 8 ADK from Microsoft. Once downloaded, go ahead and install it; you will only need the Deployment Tools, so be sure to uncheck the rest of the options. Lastly, you will also need to create the following folder structure on the root of your C:\ drive to make things a bit easier:

        C:\Windows8Root
        C:\Windows8Root\x86
        C:\Windows8Root\x64
        C:\Windows8Root\Enterprisex86
        C:\Windows8Root\Enterprisex64
        C:\Windows8Root\Temp
        C:\Windows8Root\Final

    OK, let's get started.

    Making the image

    The first thing we need to do is create a base image, so mount the x86 version of Windows 8 and copy its files to C:\Windows8Root\Final. Now move the install.wim file from C:\Windows8Root\Final\sources to C:\Windows8Root\x86. Next, go ahead and copy the install.wim file from the other 3 images (Windows 8 x64, Windows 8 Enterprise x86 and Windows 8 Enterprise x64) to the respective folders in Windows8Root; the install.wim file can be found at D:\sources\install.wim.

    Note: The above assumes that the images are always mounted at drive D. Remember that each install.wim is different, so don't copy them to the wrong directories or the rest of the tutorial won't work.

    Next, switch to the Metro Start Screen and open the Deployment and Imaging Tools Environment.

    Note: If you are not a local administrator on your PC, you will need to right-click on it and choose to run it as an administrator.
    Now run the following commands:

        Dism /Export-Image /SourceImageFile:c:\Windows8Root\x86\install.wim /SourceIndex:2 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8" /compress:maximum
        Dism /Export-Image /SourceImageFile:c:\Windows8Root\x86\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro" /compress:maximum
        Dism /Export-Image /SourceImageFile:c:\Windows8Root\x86\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro with Media Center" /compress:maximum
        Dism /Export-Image /SourceImageFile:c:\Windows8Root\Enterprisex86\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Enterprise" /compress:maximum
        Dism /Export-Image /SourceImageFile:c:\Windows8Root\x64\install.wim /SourceIndex:2 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8" /compress:maximum
        Dism /Export-Image /SourceImageFile:c:\Windows8Root\x64\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro" /compress:maximum
        Dism /Export-Image /SourceImageFile:c:\Windows8Root\x64\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro with Media Center" /compress:maximum
        Dism /Export-Image /SourceImageFile:c:\Windows8Root\Enterprisex64\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Enterprise" /compress:maximum

    Next, navigate to C:\Windows8Root\Final\sources\ and create a new text file called EI.cfg, then edit it so that setup prompts for the edition to install (the original screenshot of the file's contents is missing here; a hedged example is sketched at the end of this section).

    The last thing we need to do is work some magic to get Windows Media Center added to the WMC editions of Windows 8. For that I have written a little script to make it easier for everybody; you can grab it here. Once you have downloaded it, extract it. In order to use it, right-click in the bottom-left-hand corner of the screen and open an elevated command prompt. Then go ahead and paste the following into the command prompt window:

        powershell.exe -ExecutionPolicy Unrestricted -File C:\Users\Taylor\Documents\HTGWindows8Converter.ps1

    Note: You will need to replace the path to the script. Another thing to note is that if the path you replace it with has spaces, you will need to enclose the path in quotes.

    The script should kick off straight away and has some progress bars you can watch while it does its thing. Halfway through, another window will pop open and start creating your final ISO image. When it's complete, close the command prompt; you should have an ISO image on the root of your C drive called HTGWindows8.iso. That's all there is to it.
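    The EI.cfg screenshot didn't survive here, so as a hedged reconstruction based on the standard EI.cfg format (an assumption, not the article's exact file): an EI.cfg with no EditionID section makes setup show an edition menu on a retail, non-volume-licensed disc.

        [Channel]
        Retail

        [VL]
        0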


  • SEO: Moving articles from one domain to another

    - by Melanie
    Currently I have articles up on a website (site A) that is not mine (but I can remove the articles). The articles aren't faring well, not only due to the recent Google changes, but because they really could do better if I made some tweaks myself instead of relying on the domain owner's SEO skills. So I would like to set up my own website and have just my articles on it (site B). In the past when I've moved content, I've set up redirects, but this time I can't do that. What would be the best way to move the articles without having to worry about them being counted as duplicate content or any other lame stuff? Should I:

    A: Save the articles on my computer, remove them from site A, and wait for Google to remove them from the index (several months), or
    B: Remove the articles from site A and immediately place them on site B?


  • JMX Based Monitoring - Part One

    - by Anthony Shorten
    In all versions of the Oracle Utilities Application Framework there is the ability to use Java Management Extensions (JMX) to both manage and monitor the various components of the product. This means that sites can use a JSR 160-compliant JMX browser or JMX console to view or manage the components of the product with little or no configuration required. In each version we have progressively added JMX capabilities to give IT groups more detailed information.

    In Oracle Utilities Application Framework V2.1 and above, it is possible to use the MBeans provided by the web application server to monitor the online component of the product as well as manage the configuration. Also, with a few additional Java options it is possible to get a good level of detail about the Java Virtual Machine, including memory and thread usage.

    In Oracle Utilities Application Framework V2.2 and above, we added support for Java 5 statistics (Java enables them by default) and database pool statistics, and also added the ability to manage and monitor the batch component of the architecture.

    Now, in Oracle Utilities Application Framework V4 and above, we have added support for Java 6 MXBeans, online management of the cache using JMX, additional JVM information, and performance monitoring using JMX. JMX allows the product to be managed from a common console such as Oracle Enterprise Manager, Tivoli, or HP OpenView (and many more). Over the next week or so I will be compiling a set of blog entries discussing what is available (in summary form) using JMX, and how to get access to the JMX statistics for your version of the product.
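    As a generic illustration (these are standard JVM flags, not product-specific configuration): the "additional Java options" for exposing JVM details over JMX typically look like this on the server's Java command line. The port number is an example, and a real deployment should enable authentication and SSL rather than disabling them as shown.

        # Hypothetical example: expose a JMX port for a monitoring console.
        java -Dcom.sun.management.jmxremote \
             -Dcom.sun.management.jmxremote.port=9999 \
             -Dcom.sun.management.jmxremote.authenticate=false \
             -Dcom.sun.management.jmxremote.ssl=false \
             ... <application main class>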


  • Upgrading only several packages, or packages from one source

    - by Cédric Girard
    We use the deb/apt system to deploy PHP software (around 200 packages, plus libraries with dependencies). We have a build server, scripts, and a private repository. It's OK and runs fine, but we want to update our packages very often, and update Ubuntu packages only when our sysadmins have time to handle them. How can we do this? The only solution I see for now is to iterate over the package list and do an apt-get install $packagename for each. Not very easy or even resilient. Any other ideas?
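    One possibility, as a hedged sketch (the origin host name is a placeholder): apt pinning can hold stock Ubuntu packages at a priority below 100, so they are never upgraded automatically, while packages from the private repository keep a normal priority and upgrade freely with a plain apt-get upgrade.

        # /etc/apt/preferences -- hold back stock Ubuntu packages, let the
        # private repository upgrade freely (host name is an example).

        Package: *
        Pin: origin "repo.example.com"
        Pin-Priority: 900

        Package: *
        Pin: release o=Ubuntu
        Pin-Priority: 50

    With this in place, `apt-get update && apt-get upgrade` only pulls new versions from the private repository; when the sysadmins are ready for an Ubuntu round, they can temporarily raise the Ubuntu priority (or remove the pin) and upgrade deliberately.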


  • OneAPI Pilot

    - by Manish Agrawal
    Presentations made at Mobile World Congress (MWC 2010) on the Canadian OneAPI pilot, by Graham Trickey (GSMA) and Shane Logan (Telus). Thanks, Alan, for sharing them.

