Search Results

Search found 14475 results on 579 pages for 'veritas storage manager'.


  • High Sqlservr.exe Memory Usage

    - by user18576
    I have a problem with sqlservr.exe (SQL Server 2008): it is using a lot of memory. In Windows Task Manager, sqlservr.exe shows roughly 8 GB of memory usage, and I don't know how to fix it. I collected the following metrics on the server using Perfmon:

        SQLServer:Buffer Manager      Buffer cache hit ratio        13
        SQLServer:Buffer Manager      Page lookups/sec              46026128096
        SQLServer:Buffer Manager      Free pages                    129295
        SQLServer:Buffer Manager      Total pages                   997309
        SQLServer:Buffer Manager      Target pages                  1053560
        SQLServer:Buffer Manager      Database pages                484117
        SQLServer:Buffer Manager      Reserved pages                0
        SQLServer:Buffer Manager      Stolen pages                  383897
        SQLServer:Buffer Manager      Lazy writes/sec               384369
        SQLServer:Buffer Manager      Readahead pages/sec           69315446
        SQLServer:Buffer Manager      Page reads/sec                71280353
        SQLServer:Buffer Manager      Page writes/sec               12408371
        SQLServer:Buffer Manager      Checkpoint pages/sec          7053801
        SQLServer:Buffer Manager      Page life expectancy          735262
        SQLServer:General Statistics  Active Temp Tables            161
        SQLServer:General Statistics  Temp Tables Creation Rate     3131845
        SQLServer:General Statistics  Logins/sec                    2336011
        SQLServer:General Statistics  Logouts/sec                   2335984
        SQLServer:General Statistics  User Connections              27
        SQLServer:General Statistics  Transactions                  0
        SQLServer:Access Methods      Full Scans/sec                34422821
        SQLServer:Access Methods      Range Scans/sec               2027247756
        SQLServer:Access Methods      Workfiles Created/sec         49771600
        SQLServer:Access Methods      Worktables Created/sec        28205828
        SQLServer:Access Methods      Index Searches/sec            4890715219
        SQLServer:Access Methods      FreeSpace Scans/sec           21178928
        SQLServer:Access Methods      FreeSpace Page Fetches/sec    21226653
        SQLServer:Access Methods      Pages Allocated/sec           41483279
        SQLServer:Access Methods      Extents Allocated/sec         4743504
        SQLServer:Access Methods      Extent Deallocations/sec      4806606
        SQLServer:Access Methods      Page Deallocations/sec        41419137
        SQLServer:Access Methods      Page Splits/sec               23834799
        SQLServer:Memory Manager      SQL Cache Memory (KB)         29160
        SQLServer:Memory Manager      Target Server Memory (KB)     8428480
        SQLServer:Memory Manager      Total Server Memory (KB)      7978472

    Could somebody help me, please? I really want to know the cause of the above.
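
    For orientation, the two Memory Manager counters at the end of that list already account for the ~8 GB that Task Manager reports: they are given in KB, and converting them (a minimal Python sketch using only the numbers quoted above, not a diagnosis) shows the instance has simply grown to roughly its target server memory.

        # Quick sanity check, plain arithmetic on the counters quoted above --
        # no SQL Server access needed.
        target_server_memory_kb = 8428480   # SQLServer:Memory Manager  Target Server Memory (KB)
        total_server_memory_kb = 7978472    # SQLServer:Memory Manager  Total Server Memory (KB)

        def kb_to_gb(kb: int) -> float:
            """Convert kilobytes to gigabytes (binary: KB / 1024 / 1024)."""
            return kb / 1024 / 1024

        print(f"Target server memory: {kb_to_gb(target_server_memory_kb):.2f} GB")  # ~8.04 GB
        print(f"Total server memory:  {kb_to_gb(total_server_memory_kb):.2f} GB")   # ~7.61 GB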

    Read the article

  • Lion server profile manager, device enrollment doesn't work

    - by user964406
    I am in the process of setting up Lion Server's Profile Manager to manage iPads on our local school network. I don't need to manage them while they are outside the network, and I have successfully had it working on my personal network. The school network is behind a proxy which we have no control over. I can get the iPads to view the mydevices page and install a trust cert, and I have managed to get an iPad to successfully install the remote management profile. After this, Profile Manager bugs out: it lists the active task 'new device (sending)' but is unable to complete it. If I click on the device in Profile Manager and try any of the actions, they all fail to complete. I am using the auto-generated certificates, and this works if I bring the server and iPad outside of the school network. Shortly after device enrollment, the system log on the Lion server reports the following (I have replaced the actual IP address with INTERNALIP):

        Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
        Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
        Jun 4 08:40:53 mini applepushserviced[778]: Got connection error Error Domain=NSPOSIXErrorDomain Code=1 "The operation couldn't be completed. Operation not permitted" UserInfo=0x7fa483b1a340 {NSErrorFailingURLStringKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS, NSErrorFailingURLKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS}
        Jun 4 08:40:53 mini applepushserviced[778]: Failed to get client cert on attempt 2, will retry in 15 seconds

    Does anyone have any ideas on how to get past this stage? Thanks in advance.

    Read the article

  • Cannot log into Oracle Enterprise Manager 11g: ORA-28001

    - by Álvaro G. Vicario
    I can no longer log into Oracle Enterprise Manager 11g. I get this error message:

        ORA-28001: the password has expired (DBD ERROR: OCISessionBegin)

    I could log into the server using SQL*Plus. It warned me that the password was going to expire in 7 days (which is not the same as being already expired). Following advice from several documents, I ran these commands from SQL*Plus:

        ALTER USER sys IDENTIFIED BY new_password;
        ALTER USER system IDENTIFIED BY new_password;

    SQL*Plus no longer warns about passwords, but I still cannot use Enterprise Manager. Then I followed this to remove password expiration:

        ALTER PROFILE default LIMIT password_life_time UNLIMITED;

    And I've also restarted the Oracle services. In case it was using cached credentials, I've tried to connect from several browsers on several computers. No luck: I still get ORA-28001 in Enterprise Manager. What am I missing?

    Update: some more info.

        SQL> select username, ACCOUNT_STATUS, EXPIRY_DATE from dba_users;

        USERNAME                       ACCOUNT_STATUS                   EXPIRY_D
        ------------------------------ -------------------------------- --------
        MGMT_VIEW                      OPEN
        SYS                            OPEN
        SYSTEM                         OPEN

    Read the article

  • "The directory name is invalid" when trying to install drivers in Windows 7 via Device Manager

    - by Luke
    First off, this computer is not mine, it's a customer's system. Having said that... The hard drive was moved to a new motherboard, CPU, RAM combo, and booted up fine. Customer puts in driver CD, drivers won't load. He brings it into me. Under Device Manager for Windows 7 x64, I see lots of PCI to PCI bridge, one SMBus Controller, and about 20 Unknown Devices. Greeeeeat... So I start with the SMBus driver directly from the Asus website for the motherboard (P8H77-M Pro). If I install from the setup program, it tells me to reboot, then it starts the install. It gets half way through the setup, then fails (An unknown error occurred. Setup will exit). When I try to point to the folder from Device Manager, it starts copying files for the driver, even presents me with the proper name of the device, but says that an error has occurred there as well: The directory name is invalid. Doing some Googling, I saw that many people had this issue with Vista. K, Vista and 7 are similar, maybe the solutions are the same... But they aren't. I tried:

    - Copying the entire driver folder and setup utility to the Program Files folder and running it / selecting it in DM
    - Downloading another set of drivers in case this one is corrupt
    - Disabling UAC
    - Deleting and recreating the %WINDIR%\TEMP folder
    - Removing all references to previous hardware that I could find, even in Device Manager's hidden mode

    So far, nothing has worked. A wipe and reload will be out of the question.

    Read the article

  • OSX Server 10.5 - Cannot log into Workgroup Manager - diradmin password is correct

    - by Mister IT Guru
    I've got a setup where I am trying to rescue a broken AD. We can no longer authenticate in Workgroup Manager; the password is rejected every time, even though it is correct. I can connect using Workgroup Manager on another server and I get the user list as expected, but when I click the padlock to make changes, it rejects the password again. The problem is, I know the password is correct - I just used it to connect to the server in the first place. I can log into the server using the local admin, and services such as AFP, VPN and SMB continue to serve users. I have about 300 or so users on this server, and I would very much like to avoid a rebuild. As there is much configuration that has been done without my knowledge (it's a client machine), I'd like to attempt to fix it, then create another server and migrate OD off this broken machine, then decommission it "gently". Ultimately this would mean no disruption of services. What I'd like is some tips on how to fix the problem with authenticating to make changes in Workgroup Manager, and on maintaining Open Directory in general. Thanks

    Read the article

  • HP Power Manager SMTP setup doesn't have space for username & password

    - by Martha
    Is there some way to configure HP Power Manager to not assume that there's an email server running locally? We recently acquired an HP T1500 G3 UPS, which we're trying to control using HP Power Manager 4.2. The main reason we wanted to get this particular UPS is because it says it's capable of sending notifications (of the "Yo, the power's out, you may want to look into it" type) via email, as opposed to SNMP. Turns out, that's not entirely true. The server is running Windows Server 2003. It is not running an email server of any sort - we do that via two different providers. Outlook email is provided by Verizon, and our SMTP email service is provided by a small local company. When we use CDO to send auto-generated notification emails, we have to provide the SMTP server name, port, username, and password. The HP Power Manager interface only allows us to enter the server name and the username. Thus, not surprisingly, the emails never go anywhere. Help?
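
    For context, what the question describes is an authenticated SMTP submission - the same four pieces of information (server, port, username, password) that the CDO-generated emails use. A minimal Python sketch of that kind of send (the host, port and credentials below are placeholders, not the asker's actual provider) shows why the missing password field matters:

        import smtplib
        from email.message import EmailMessage

        # Placeholder values -- substitute the real SMTP provider's details.
        SMTP_HOST = "smtp.example.com"
        SMTP_PORT = 587
        SMTP_USER = "alerts@example.com"
        SMTP_PASS = "app-password"

        msg = EmailMessage()
        msg["From"] = SMTP_USER
        msg["To"] = "admin@example.com"
        msg["Subject"] = "UPS alert: running on battery"
        msg.set_content("The UPS reports a power failure.")

        with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
            server.starttls()                   # most providers require TLS on the submission port
            server.login(SMTP_USER, SMTP_PASS)  # the step HP Power Manager has no field for
            server.send_message(msg)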

    Read the article

  • How to disable Utility Manager (Windows Key + U)

    - by Skizz
    How do I disable the Windows + U hotkey in Windows XP? Alternatively, how do I stop Utility Manager from being active? The two are related. Utility Manager is currently providing a potential security hole and I need to remove it[1]. The system I'm developing uses a custom GINA to log in and start a custom shell. This removes most Windows key hotkeys, but Win + U still pops up the manager app.

    Update: things I've tried that don't work:

    - NoWinKeys registry setting - this only affects Explorer hotkeys
    - Renaming utilman.exe - the program reappears at the next login
    - Third-party software - not really an option; these machines are audited by the clients, and additional third-party software would be unlikely to be accepted

    Also, the procedure needs to be reasonably straightforward - this has to be done by field service engineers on existing machines (machines currently in Russia, Holland, France, Spain, Ireland and the USA).

    [1] The hole is via the Internet options in the help viewer that the utility app links to.

    Read the article

  • Perfmon % Processor Time vs. task manager's CPU usage

    - by nat
    I'm new to using Perfmon and performance monitoring in general (so go easy on me please ;). I know that Perfmon doesn't have anything exactly like Task Manager's CPU usage display, but I'm trying to figure out how to monitor users' CPU usage via Perfmon in a similar way, and trying to understand the measurements (or how to convert the numbers to get a similar understanding). For example, if in Task Manager a particular user is consistently using more than 5% CPU, I would want to contact the user about it.

    I learn best by example, so here is exactly what I'm trying to do, with a specific example. This is for a 32-bit, dual quad-core Windows 2003 web server (8 CPUs). There are many web sites on the server, each running within its own application pool / worker process ID. Through other research here I learned of a registry change that I made so that the PID shows up with the w3wp process, so I can easily identify the site later by cross-referencing it. I set up a counter with the following settings:

        Process -> % Processor Time -> all instances

    Here is an example. Say I'm interested in the "black line" user in the graph below, as his process is spiking quite high compared to all the other users (I wasn't allowed to post the image as I'm a new user on this site; I've uploaded it to):

        http://i35.tinypic.com/106yn8k.jpg

    So, using this as an example, I see that they have an AVERAGE % PROCESSOR TIME of 23.264, and have spiked as high as 103.124. What exactly does this 23.264 number mean to me? Is it similar to an average of Task Manager's CPU reading for this user? Or, since this server has 8 CPUs, should I divide this number by 8? (23.264/8 = 2.9% AVERAGE CPU LOAD?) Thanks in advance.
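
    For what it's worth, the arithmetic in the question is the usual way to normalise this counter: Process\% Processor Time is expressed relative to a single core (so it can exceed 100% on a multi-core box), and a Task-Manager-style percentage is the raw value divided by the number of logical processors. A small Python sketch of that normalisation, using only the numbers quoted above:

        # Normalise Perfmon's Process\% Processor Time to a Task Manager style
        # "% of total CPU" figure by dividing by the number of logical processors.

        num_logical_cpus = 8           # the server described above has 8 CPUs
        avg_processor_time = 23.264    # average % Processor Time for the w3wp.exe instance
        peak_processor_time = 103.124  # peak value from the same graph

        def normalise(processor_time: float, cpus: int) -> float:
            """Convert per-core % Processor Time to % of the whole machine."""
            return processor_time / cpus

        print(f"Average CPU load: {normalise(avg_processor_time, num_logical_cpus):.1f}%")  # ~2.9%
        print(f"Peak CPU load:    {normalise(peak_processor_time, num_logical_cpus):.1f}%") # ~12.9%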

    Read the article

  • Pysdm has disabled my ability to write to my storage partition

    - by Atlas
    I have a dual boot setup with Windows 7 and Mint 13 Cinnamon. As well as their respective partitions I also have a large one (NTFS) for storing all my music, videos, documents etc. I downloaded pysdm as I was told it would enable me to configure Linux to auto-mount my storage partition. It has indeed been helpful in auto-mounting my storage. However, since installing it I can no longer write to the partition which makes 500GB of my hard drive utterly useless! I've tried to unselect the "Mount file system in read only mode" option, but the program keeps re-checking it after I close that window (and even when I click apply). Why is it doing this and how can I get it to recognise that I need to read AND write on that partition?

    Read the article

  • Device Manager - does USB listing look right?

    - by Carl
    I obtained the drivers from the manufacturer for my HT-Link NEC USB 2.0 2-port CardBus card. When I plugged in the card before I got the drivers, 3 new entries showed up in Device Manager - two "NEC PCI to USB Open Host Controller" and one "Standard Enhanced PCI to USB Host Controller". With the card plugged in, I uninstalled those two drivers and then removed the card. I copied the new drivers to c:\windows\system32\drivers and the .inf file to c:\windows\inf. I also copied the drivers & inf to a new directory called c:\windows\drivers\ousb2, and reinserted the card. Windows automatically installed the same drivers as before.

    I selected 'update driver' on the "NEC PCI to USB..." entry and didn't see any other options. I then selected 'have disk' and pointed to c:\windows\drivers\ousb2 and got a message "The specified location does not contain information about your hardware." I then selected 'update driver' on the "Standard Enhanced PCI to USB..." entry and manually selected "USB 2.0 Enhanced Host Controller" (OWC 4/15/2003 2.1.3.1). Windows then automatically found a USB root hub, and I manually selected "USB 2.0 Root Hub Device" (OWC 4/15/2003 2.1.3.1). Now there are two sections in Device Manager titled "Universal Serial Bus controllers." I plugged in my external USB hard disk adapter, and "USB Mass Storage Device" was added to the first set. Here's how it looks (with drivers from the properties):

        [Universal Serial Bus controllers]
        Intel(R) 82801DB/DBM USB 2.0 Enhanced Host Controller - 24CD (6/1/2002 5.1.2600.0)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C2 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C4 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C7 (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        USB Mass Storage Device
        USB Root Hub (7/1/2001 5.1.2600.5512)
        (5 more USB Root Hubs - same driver)

        [Universal Serial Bus controllers]
        USB 2.0 Enhanced Host Controller (OWC 4/15/2003 2.1.3.1)
        USB 2.0 Root Hub Device (OWC 4/15/2003 2.1.3.1)

    When I unplug the card, the two "NEC PCI to USB..." entries in the first set disappear, and the whole second set disappears. (I unplugged the hard disk adapter first...) The hard disk adapter still doesn't work in that CardBus card with the new drivers. I don't think the above looks right - a second set of USB controllers listed in Device Manager, the NEC entries still in the first set, and the USB mass storage device still in the first set. Any help appreciated. (Windows XP Pro SP3 with all current updates.)

    Read the article

  • Mountable online storage (no syncing)

    - by Sam
    I have a Linux VPS that I would like to turn into a media server. Like most cheap VPS's, it has a fairly small storage capacity. What I would like to do is attach the box to an online backup system such as SpiderOak or DropBox, where the files would reside and be directly accessible to either a webserver or media server software. Since the VPS hdd is small, I do not want the files to be synced to it. I would like a storage system that is online only. Ideally mountable like a network drive. Are there any services that suit my needs, or workarounds for services such as SpiderOak that do not require syncing?

    Read the article

  • Bacula Director and Storage in LAN

    - by B14D3
    I have two networks, LAN and DMZ. Machines in the DMZ are accessible from the internet (only over HTTP). In the LAN I have servers that see all LAN and all DMZ machines, but machines in the DMZ don't see any LAN servers. Machines in the LAN have access only to the LAN and the DMZ, with no direct access to the internet and no access from the internet.

        DMZ <------ LAN
        DMZ ----X---> LAN

    I'm planning to configure Bacula as the main backup system. My plan is to install the Bacula Director and Storage daemon on the same server in the LAN, for safety reasons. So my question is: will this configuration work? Is it possible for a Bacula Director and Storage daemon installed on a server in the LAN to back up servers that are in my DMZ? Or should Bacula be in the DMZ in this network configuration? (If so, will I be able to back up servers in the LAN with it?)
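
    For what it's worth, the way to reason about this is in terms of which side opens each connection: in a standard Bacula setup the Director connects to the client's File Daemon (LAN -> DMZ, which this topology allows), but the client's File Daemon then opens a connection to the Storage Daemon to transfer the backup data (DMZ -> LAN, which this topology blocks). A small reachability sketch in Python - the host name is a placeholder and 9103 is Bacula's default Storage Daemon port - run from a DMZ machine would show whether that second connection can succeed:

        import socket

        # Placeholder name for the LAN server that will host the Director and Storage daemon.
        STORAGE_HOST = "backup.lan.example"
        STORAGE_PORT = 9103   # Bacula's default Storage Daemon port

        def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
            """Return True if a TCP connection to host:port succeeds within the timeout."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # Run from a DMZ machine: if this prints False, a DMZ client's File Daemon
        # cannot deliver backup data to a Storage Daemon placed in the LAN.
        print(reachable(STORAGE_HOST, STORAGE_PORT))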

    Read the article

  • Have both domain and non-domain users use NetApp CIFS storage

    - by zladuric
    My specific use case is that I have a NetApp CIFS storage that's in the domain, say, intranet. But I also have one hyper-v host that's not in the domain. I can't allow it into the domain, but I need to create guests whose VHD is on the storage. How can I achieve this? When I simply connect to a CIFS share and create a VHD there, it works, but when I try to add that VHD to the virtual machine, I get a "Failed to set folder permissions" error - probably due to folder ownership being in the domain admin. Is there a way around this? So both domain and non-domain users use the same directory?

    Read the article

  • Intel Rapid Storage Technology service always crashes

    - by Massimo
    I'm running Windows 7 x64 on a system based on an Asus Z87-Deluxe motherboard; the storage is configured for RAID mode; there is a single SSD drive for the O.S. and two 4-TB disks in a RAID 1 setup for the data. I've installed the latest version of Intel's Rapid Storage Technology drivers, 12.8.0.1016. The program complains about its service not being running, and the service is actually stopped; if I try to start it, it crashes. I've already tried reinstalling the package, but nothing changed. All the disks work correctly, but the RST program is unusable. How can I fix this?

    Read the article

  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended.

    I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT stuff to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things.

    At the moment, all the company's data is stored on an 8TB external FireWire drive attached to a Mac Mini running OS X Server 10.6, which provides file sharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when, and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights.

    I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind-the-scenes solution, but for people's day-to-day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise-level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large-scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has Fibre Channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has Thunderbolt, which severely limits our choices; the list goes on and on.

    I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD and netatalk for serving files, with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks.

    I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.

    Read the article

  • Anyone know any good backend user online file manager?

    - by skyhigh
    Hi, I'm looking for a backend system where your clients can log in and upload files to your server, download files from the server, and you can delete users, create users, etc. I do not know the proper name for this kind of software. Maybe it's called an online file manager? Any recommendations? My server supports PHP, Apache and MySQL. Thanks

    Read the article

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for protection of the business. Yes, I said business, not data, not databases, but business. Because of a lack of good, tested backups, companies have gone completely out of business or suffered traumatic financial loss. That's just a simple fact (outlined with a few examples here). So you want to get backups right. That's a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you'll be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let's talk about them.

    Guidance

    If you're a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But, if you're not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type, you'll see that only those databases that are in Full Recovery will be listed. This makes it very easy to be sure you have a log backup set up for all the databases you should, and none of the databases where you won't be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you'll notice little green question marks. Clicking on these will open a window with additional information about the topic in question, which should help to guide you through some of the tougher decisions you may have to make while setting up your backup jobs.

    Backup Copies

    As a part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. It will copy your backup, after completing the backup so it doesn't cause any additional blocking or resource use within the backup process, to the network location you define. Creating a copy acts as a mechanism of protection for your backups. You can then back up that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get it done within the native Microsoft backup engine.

    Offsite Storage

    Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site protection of your backups. It's a service we provide and expose through the Backup wizard. Your backup will complete first, just like with the network backup copy, then an asynchronous process will copy that backup to cloud storage. Again, this is built right into the wizard and even the command line calls to SQL Backup, so it's part of a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you'll need to do a little setup, but it's built right into the product to get this done. You'll be directed to the web site for our hosted storage, where you can set up an account.

    Compression

    If you have SQL Server 2008 Enterprise, or you're on SQL Server 2008 R2 or greater and you have a Standard or Enterprise license, then you have backup compression. It's built right in and works well. But if you need even more compression, then you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product. This means you can get a little compression faster, or you can just sacrifice some CPU time and get even more compression. You decide. For just a simple example, I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53 MB. Our file was 33 MB. That's a file that is smaller by 38%, not a small number when we start talking gigabytes. We even provide guidance here to help you determine which level of compression would be right for you and your system. So for this test, if you wanted maximum compression with minimum CPU use, you'd probably want to go with Level 2, which gets you almost as much compression as Level 3 but will use fewer resources. And that compression is still better than the native one by 10%.

    Restore Testing

    Backups are vital. But a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you'll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You'll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But this doesn't do a complete test of the backup. The only complete test is a restore. So what you really need is a process that tests your backups. This is something you'll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule, all done through our wizard which gives you as much guidance as you get when running backups, you get the option of creating a reminder to create a job to test your restores. You can enable this or disable it as you choose when creating your scheduled backups. Once you're ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide if you're going to restore to a specified copy or to the original database. If you're doing your tests on a new server (probably the best choice), you can just overwrite the original database if it's there. If not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks, so we have DBCC built right into the process. You can even decide how you want DBCC run, and which error messages to include, limit or add to the checks being run. With this you could offload some DBCC checks from your production system, so that you only run the physical checks on your production box but run the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well. Finally, assuming the tests pass, you can delete the database, leave it in place, or delete it regardless of the tests passing. All this is automated and scheduled through the SQL Agent job on your servers. Running your databases through this process will ensure that you don't just have backups, but that you have tested backups.

    Single Point of Management

    If you have more than one server to maintain, getting backups set up can be a tedious process. But with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases and all your servers' backups from a single location. You'll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface, without having to connect to different servers.

    Log Shipping Wizard

    If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that will walk you through every step of the process, including setting up alerts so you'll know should your log shipping fail.

    Summary

    You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server native. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.
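
    As a quick check of the compression figures quoted above (a 53 MB native backup versus a 33 MB SQL Backup Pro file for AdventureWorks2012), a couple of lines of Python reproduce the "smaller by 38%" claim:

        # Reproduce the compression comparison quoted in the Compression section:
        # native backup = 53 MB, SQL Backup Pro backup = 33 MB.
        native_mb = 53
        sql_backup_pro_mb = 33

        saving = (native_mb - sql_backup_pro_mb) / native_mb
        print(f"SQL Backup Pro file is {saving:.0%} smaller than the native backup")  # ~38%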

    Read the article

  • Can't add NetworkManager applet to gnome panel

    - by Nate
    I have researched this problem extensively and I can't seem to find an answer. In Ubuntu 10.04 LTS, I want to connect to my VPN through the NetworkManager applet. I installed all the NetworkManager packages, including the GNOME client. I understand I need to add the "Notification Area" to the panel, which I have done. I checked that NetworkManager is running:

        nate@nate-desktop:~$ service network-manager status
        network-manager start/running, process 763

    In /etc/NetworkManager/nm-system-settings.conf, I have added managed=true (I don't know if this matters, but I saw it suggested on one forum):

        nate@nate-desktop:~$ more /etc/NetworkManager/nm-system-settings.conf
        # This file is installed into /etc/NetworkManager, and is loaded by
        # NetworkManager by default. To override, specify: '--config file'
        # during NM startup. This can be done by appending to DAEMON_OPTS in
        # the file:
        #
        # /etc/default/NetworkManager
        #
        [main]
        plugins=ifupdown,keyfile

        [ifupdown]
        #managed=false
        managed=true

    I restarted NetworkManager and tried rebooting, too. At this point, it looks like NetworkManager is running but the applet is not appearing in the Notification Area of the panel. I don't know what else to try. Any ideas?

    Read the article

  • Display Call To Action bar on page load [migrated]

    - by dasickle
    I am using the following code to load the bar on click, but I can't figure out how to load it on page load automatically.

        <script>
        var autohide;
        $('body').prepend('<div id="bn-bar"><b>DON\'T MISS OUT!</b> Only 9 seats remain for the Google Tag Manager training on May 22! <a href="#">Book Your Seat Today!</a><div id="hider"> </div></div>');
        $(document).ready(function(){
            // Clicking the "hider" element slides the bar out of view.
            $("#hider").click(function(){
                $("#bn-bar").animate({ top: "-50" }, "fast", "linear", function(){});
            });
            // Hovering over the bar cancels the auto-hide timer.
            $("#bn-bar").mouseover(function(){ clearTimeout(autohide); });
            // Slide the bar into view 2.5 seconds after the page is ready...
            setTimeout(function(){ $("#bn-bar").animate({ top: "0" }, "slow", "linear", function(){}); }, 2500);
            // ...and hide it again automatically after 10 seconds.
            autohide = setTimeout(function(){ $("#bn-bar").animate({ top: "-30" }, "fast", "linear", function(){}); }, 10000);
        });
        </script>

    Basically I am trying to load the message when a user enters my website, and I will be inserting it via Google Tag Manager. Below is the page where I found the code: Creative Tag Manager – Ads, Promotions, and Visitor Messaging - Lunametrics

    Read the article

  • Distribution upgrade (12.04 -> 14.04 LTS) halted while unpacking/installing packages

    - by Bob Sully
    As the title states... it just stopped unpacking/installing. "Preparing to unpack .../lirc_0.9.0-0ubuntu5_amd64.deb ..." then stopped in its tracks. Everything else is still running. The update manager process is still alive; if I hit ctrl-c, it gives me the warning message about leaving the system in a broken state. Also, if I run top, there is a process called "trusty" which is still running. I have NOT killed either process. lsb_release -a gives:

        LSB Version:    core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:core-4.0-amd64:core-4.0-noarch
        Distributor ID: Ubuntu
        Description:    Ubuntu 14.04.1 LTS
        Release:        14.04
        Codename:       trusty

    I assume that if I try to restart update-manager, I won't be offered the option to upgrade again. Anyone have a way I can get the update-manager/dist-upgrade process to simply finish the upgrade? Thanks!

    Read the article

  • The new Auto Scaling Service in Windows Azure

    - by shiju
    One of the key features of the cloud is on-demand scalability, which lets cloud application developers scale up or scale down the number of compute resources hosted in the cloud. Auto Scaling provides the capability to dynamically scale your compute resources up and down based on user-defined policies, Key Performance Indicators (KPIs), health status checks, and schedules, without any manual intervention. Auto Scaling is an important feature to consider when designing and architecting cloud based solutions: it can unleash the real power of the cloud for apps by providing truly on-demand scalability, and it can also guard the organizational budget for cloud based application deployment. In the past, you had to leverage the Microsoft Enterprise Library Autoscaling Application Block (WASABi) or a service like MetricsHub to implement automatic scaling for your cloud apps hosted on Windows Azure. WASABi required you to host the auto scaling block in a Windows Azure Worker Role to effectively apply auto scaling behaviour to your Windows Azure apps. The newly announced Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure compute services such as Cloud Services, Web Sites and Virtual Machines. Unlike WASABi hosted in a Worker Role, you don't need to host any monitoring service to use the new Auto Scaling service, and the Auto Scaling service is available to individual Windows Azure compute services as part of scaling.

    Configure Auto Scaling for a Windows Azure Cloud Service

    Currently the Auto Scaling service supports Cloud Services, Web Sites and Virtual Machines. In this demo, I will be using a Cloud Services app with a Web Role and a Worker Role. To enable Auto Scaling, select your Windows Azure app in the Windows Azure management portal and choose the "SCALE" tab. The Scale tab shows all the information regarding Auto Scaling. The image below shows that we have currently disabled the AutoScale service. To enable Auto Scaling, you need to choose either CPU or QUEUE. The QUEUE option is not available for Web Sites.

    The image below demonstrates how to configure Auto Scaling for a Web Role app based on CPU utilization. We have configured the web role app to run with 1 to 5 virtual machine instances based on CPU utilization, with a target range of 50 to 80%. If the aggregate utilization rises above 80%, it will scale up the number of instances, and it will scale them down when utilization falls below 50%.

    The image below demonstrates how to configure Auto Scaling for a Worker Role app based on the messages added to a Windows Azure storage queue. We configured the worker role app to run with 1 to 3 virtual machine instances based on the messages added to the Windows Azure storage queue. Here we have specified a target of 2000 messages per machine.

    The image below shows the summary of Auto Scaling for the Cloud Service after configuring the Auto Scaling service.

    Summary

    Auto Scaling is an extremely important behaviour of cloud applications, providing on-demand scalability without any manual intervention. Windows Azure provides great support for enabling Auto Scaling for apps deployed on the Windows Azure cloud platform. The new Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure compute services such as Cloud Services, Web Sites and Virtual Machines. In the new Auto Scaling service, you don't have to host any monitoring service as you had to with the WASABi block. The Auto Scaling service is an excellent alternative to manually hosting the WASABi block in a Worker Role app.
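
    To make the two policies above concrete, here is a minimal Python sketch of the decision logic they describe (scale a web role on aggregate CPU between 50% and 80% with 1 to 5 instances, and a worker role on queue depth at roughly 2000 messages per instance with 1 to 3 instances). This is only an illustration of the policy semantics described in the post, not the Windows Azure Auto Scaling API.

        import math

        def scale_on_cpu(current_instances: int, avg_cpu_percent: float,
                         min_instances: int = 1, max_instances: int = 5,
                         low: float = 50.0, high: float = 80.0) -> int:
            """CPU-based rule as described for the web role: add an instance above
            the high threshold, remove one below the low threshold."""
            if avg_cpu_percent > high:
                return min(current_instances + 1, max_instances)
            if avg_cpu_percent < low:
                return max(current_instances - 1, min_instances)
            return current_instances

        def scale_on_queue(queue_length: int, messages_per_instance: int = 2000,
                           min_instances: int = 1, max_instances: int = 3) -> int:
            """Queue-based rule as described for the worker role: size the role to
            the number of queued messages divided by the per-instance target."""
            wanted = math.ceil(queue_length / messages_per_instance) if queue_length else min_instances
            return max(min_instances, min(wanted, max_instances))

        print(scale_on_cpu(current_instances=2, avg_cpu_percent=85.0))  # -> 3 (scale up)
        print(scale_on_queue(queue_length=4500))                        # -> 3 (capped at the maximum)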

    Read the article

  • ubuntu 13.04 recognizes usb mobile broadband modem as ethernet connection

    - by Bence Mihalka
    When I plug in my USB mobile broadband modem (ZTE MF-667), the network manager shows an ethernet connection instead of a mobile broadband connection, called "Ethernet Network (ZTE WCDMA Technologies MSM)", which of course doesn't work. Here is my lsusb output and the relevant parts of the dmesg output.

    lsusb:

        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 0cf3:3005 Atheros Communications, Inc. AR3011 Bluetooth
        Bus 001 Device 004: ID 04f2:b1b9 Chicony Electronics Co., Ltd Asus Integrated Webcam
        Bus 001 Device 005: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader
        Bus 002 Device 004: ID 19d2:1405 ZTE WCDMA Technologies MSM

    dmesg:

        [ 195.328467] usb 2-1.1: new high-speed USB device number 3 using ehci-pci
        [ 195.423545] usb 2-1.1: New USB device found, idVendor=19d2, idProduct=1225
        [ 195.423555] usb 2-1.1: New USB device strings: Mfr=3, Product=2, SerialNumber=4
        [ 195.423561] usb 2-1.1: Product: ZTE WCDMA Technologies MSM
        [ 195.423567] usb 2-1.1: Manufacturer: ZTE,Incorporated
        [ 195.423572] usb 2-1.1: SerialNumber: P680A1ZTED000000
        [ 195.426319] scsi7 : usb-storage 2-1.1:1.0
        [ 196.425354] scsi 7:0:0:0: CD-ROM CWID USB SCSI CD-ROM 2.31 PQ: 0 ANSI: 2
        [ 197.447919] usb 2-1.1: USB disconnect, device number 3
        [ 197.457582] sr0: scsi3-mmc drive: 243x/186x xa/form2 cdda pop-up
        [ 197.457594] cdrom: Uniform CD-ROM driver Revision: 3.20
        [ 197.459058] sr 7:0:0:0: Attached scsi CD-ROM sr0
        [ 197.459483] sr 7:0:0:0: Attached scsi generic sg2 type 5
        [ 197.759186] usb 2-1.1: new high-speed USB device number 4 using ehci-pci
        [ 197.854543] usb 2-1.1: New USB device found, idVendor=19d2, idProduct=1405
        [ 197.854556] usb 2-1.1: New USB device strings: Mfr=4, Product=3, SerialNumber=5
        [ 197.854564] usb 2-1.1: Product: ZTE WCDMA Technologies MSM
        [ 197.854572] usb 2-1.1: Manufacturer: ZTE,Incorporated
        [ 197.854579] usb 2-1.1: SerialNumber: P680A1ZTED010000
        [ 197.957739] scsi8 : usb-storage 2-1.1:1.2
        [ 198.076554] cdc_ether 2-1.1:1.0 eth1: register 'cdc_ether' at usb-0000:00:1d.0-1.1, CDC Ethernet Device, 00:a0:c6:00:00:00
        [ 198.076583] usbcore: registered new interface driver cdc_ether
        [ 198.955985] scsi 8:0:0:0: CD-ROM CWID USB SCSI CD-ROM 2.31 PQ: 0 ANSI: 2
        [ 198.956797] scsi 8:0:0:1: Direct-Access ZTE MMC Storage 2.31 PQ: 0 ANSI: 2

    I created the appropriate mobile broadband connection manually, but I cannot enable it in the network manager, since the device is not recognized as mobile broadband. Any tips on how to make it work?

    Read the article

  • Android SDK Manager and AVD Manager don't have the correct information and fail to update on Ubuntu

    - by Johan Carlsson
    I'm trying to install the Android SDK on Ubuntu but fail when I try to use the SDK Manager and AVD Manager to install Android platforms. I've downloaded android-sdk_r04-linux_86.tgz. Then I start the SDK Manager and AVD Manager (UI) according to the README file:

        ./tools/android

    And I get the following.

    Installed Packages:

    - Install SDK Tools, revision 4

    Available Packages:

    - https://dl-ssl.google.com/android/repoisotry/repository.xml
    - This repository requires a more recent version of the Tools. Please update.
    - Android SDK Tools, revision 4
    - Archive for Linux (comment: funny, since revision 4 seems to be what is already installed)

    Now, updating the Android SDK Tools, revision 4 (or everything) results in 99% progress and then the application hangs. Here's the console feedback:

        johanc@johan-desktop:~/android/android-sdk-linux_86$ tools/android
        Starting Android SDK and AVD Manager
        No command line parameters provided, launching UI.
        See 'android --help' for operations from the command line.
        Error: null

    In the app I choose to update the following package:

        Package Description: Android SDK Tools, revision 4
        Archive Description: Archive for Linux
        Size: 15 MiB
        SHA1: 99380c9330c1c3728c836206947350cc00fa28c2
        Site: https://dl-ssl.google.com/android/repository/repository.xml

    The console output reads (and the app hangs at 99%):

        Exception in thread "Installing Archives" java.lang.AssertionError
            at com.android.sdkuilib.internal.tasks.ProgressTask.incProgress(ProgressTask.java:97)
            at com.android.sdkuilib.internal.repository.UpdaterData$2.run(UpdaterData.java:358)
            at com.android.sdkuilib.internal.tasks.ProgressTask$1.run(ProgressTask.java:135)

    Read the article

  • SSRS 2008 Report Manager Error

    - by Nick
    I have just installed SQL Server 2008, including Reporting Services, on Windows Server 2003. I'm having a problem, though, accessing Report Manager. When the Reporting Services service is first started I can access it fine, but after maybe an hour, when I try to access it, I get an error saying:

        Unable to connect to the remote server.

    The Reporting Services service is still running at this point. I can connect to it through Reporting Services Configuration Manager, and clicking on the Web Service URL gives a directory listing (I assume that is correct behaviour). If I stop and start the service through Reporting Services Configuration Manager then I can access Report Manager once again (although in maybe an hour I will get the same error once again). I've installed the latest SP1 service pack, and I'm using the same domain account to run all the SQL services. The report server is set to use the default ReportServer virtual directory, IP address All Assigned, TCP port 80 and no SSL certificate. The report manager is set to use the default Reports virtual directory, IP address All Assigned, TCP port 80 and no SSL certificates. In the log file I get this error:

        Unable to connect to remote server
        HTTP status code 500
        An attempt was made to access a socket in a way forbidden by its access permissions.

    Does anyone have any idea why this is happening? I've searched the net but haven't been able to find a solution.

    Read the article
