Search Results

Search found 16628 results on 666 pages for 'setup kit'.


  • What would cause SQL 2008 Log Reader Agent to fail with "This process could not execute 'sp_replcmds'"?

    - by Rick
    I've seen this error message in other posts, but they didn't seem to help resolve our issue. We are trying this with two SQL Server 2008 servers. I backed up my database from the source server and then restored it on our destination server, and we set up basic transactional replication. The Snapshot Agent is working fine, but the Log Reader Agent fails with the error above. Is it most likely a login issue for this job, or a QueryTimeout?

    Read the article

  • How to create shared home directories across multiple computers?

    - by Joe D
    I know there are ways to share a folder across computers to make it easy to move files, but I was wondering how one would set up a single login that lets you access the same files regardless of which machine you log in on. What I would like is something similar to a college campus lab, where students log in on any machine and see their own files. I know there are servers involved there. I need to create this on a smaller scale: we have a few computers available (one of which could act as the server and host the files, if needed) that everyone shares. Note that the specific software installed might differ from computer to computer, but the login and OS are the same, since some machines have additional capabilities (software licenses, hardware components, etc.) that our group members need to use on rotating schedules. I have not done this before, so I would appreciate detailed instructions if possible, or a reference to a guide that describes this. Thanks in advance.
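
    The usual way to get roaming home directories on a small group of Linux machines is NFS for the files plus matching user accounts (or a directory service such as NIS/LDAP) for the logins. A minimal sketch, assuming Debian/Ubuntu machines and a hypothetical file server at 192.168.1.10 - treat the addresses and paths as placeholders:

        # On the machine acting as the server: export /home over NFS
        sudo apt-get install nfs-kernel-server
        echo '/home 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
        sudo exportfs -ra

        # On each client: mount the shared /home at boot
        sudo apt-get install nfs-common
        echo '192.168.1.10:/home /home nfs defaults 0 0' | sudo tee -a /etc/fstab
        sudo mount -a

    For the same login to work everywhere, the accounts (and their numeric UIDs/GIDs) must match on every machine, which is what NIS or LDAP normally takes care of once the group grows beyond a handful of users.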

    Read the article

  • Using runit and monit to run / monitor services

    - by murtaza52
    I am configuring some services to run on an Ubuntu server. I was going through the links below, where they use runit to run the services and monit to monitor them - http://rubyworks.rubyforge.org/manual/monit.html and http://rubyworks.rubyforge.org/manual/runit.html. 1) The services are all started through monit. 2) Monit in turn starts them using runit. What is the advantage of the above setup, where the services are run with runit via monit? Why put runit in the middle instead of starting them directly with monit?
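
    The usual division of labour is that runit supervises the process (starts it, keeps it in the foreground, restarts it immediately if it dies), while monit only does periodic health checks and delegates start/stop back to runit - so the service stays supervised even if monit itself goes down, and monit never has to deal with PID files or daemonizing. A rough sketch, assuming a hypothetical service called myapp and Debian-style paths:

        /etc/sv/myapp/run (executable; create with mkdir -p /etc/sv/myapp and chmod +x):
            #!/bin/sh
            exec 2>&1
            exec /usr/local/bin/myapp    # must stay in the foreground so runsv can supervise it

        sudo ln -s /etc/sv/myapp /etc/service/myapp    # runsvdir picks it up and starts supervising

        /etc/monit/conf.d/myapp (health checks only; start/stop go through runit's sv):
            check process myapp with pidfile /etc/sv/myapp/supervise/pid
              start program = "/usr/bin/sv start myapp"
              stop program  = "/usr/bin/sv stop myapp"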

    Read the article

  • print job doesn't go to print queue

    - by flatsguide
    I have two printers hooked up to my Intel iMac and I'm having a printing problem: whenever I try to print (a simple text doc), the job never reaches the print queue on one of the printers. I have an HP C7280 and an HP C3100. One works properly, but the other never lets the job get as far as the queue. I have switched USB cables (with the printer that I know works), and both printers are recognized by the computer in the printer preferences pane. I've tried resetting the printing system in the Printer Setup Utility, reloaded the drivers from HP, etc. If anyone has a suggestion or could point me to a little help, I'd be very grateful. Best regards, B
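
    Since OS X printing runs on CUPS, a few Terminal commands can sometimes show where a job is getting stuck; a hedged sketch (the queue name below is hypothetical - use whatever lpstat reports for the C7280):

        lpstat -p -d            # list printer queues and the default destination
        lpstat -o               # list any jobs currently sitting in a queue
        cancel -a               # clear out all stuck jobs
        cupsenable HP_C7280     # resume the queue if it has been paused
        cupsaccept HP_C7280     # tell CUPS to accept new jobs for it again

    If the failing queue shows up as disabled or not accepting requests in the lpstat output, that would explain why jobs vanish before ever reaching the queue window.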

    Read the article

  • Desktop becomes unusable after upgrade to 12.04

    - by Tom Nail
    I have multiple Ubuntu systems connected to a KVM, one of which I recently upgraded from Ubuntu 10 to 12.04. After the upgrade, this system's desktop is fine until it is allowed to go idle (i.e., I've switched to another system on the KVM and it locks its desktop). When I come back to it, the screen is garbled and paging across at a rate seemingly determined by the mouse. Although no pointer is visible, I can get the screen to stop paging (and just stay garbled) by moving the mouse left and right; the paging slows down and comes to a stop if I align things carefully enough. This condition persists even when I switch to a CLI-based login (e.g., Ctrl+Alt+F1) and continues until I reboot the machine. Unfortunately, I'm not very familiar with the Unity desktop, so I don't know where to look to troubleshoot. Restarting lightdm doesn't change anything, so I'm wondering if this might be more hardware-based (although this machine hasn't given me any trouble previously in the same setup). The .xsession-errors file lists some issues with compiz, nautilus and GConf, but I'm not sure those are actually germane to the issue. Thanks for any help, -=Tom

    Read the article

  • Create intentional border with xrandr

    - by benizi
    Is there a way to tell xrandr "this space intentionally left blank"? I have a laptop that drives its internal display at 1920x1080, but the external monitor I'm using, due to its different aspect ratio, doesn't have that mode. It runs at 1920x1200. So, the basic setup:

        xrandr \
            --output LVDS-1 --mode 1920x1080 \
            --output DP-1 --mode 1920x1200 --same-as LVDS-1

    [not to scale:]

        +-----------------------------------+
        ¦                 ¦                 ¦
        ¦    (laptop)     ¦    (external)   ¦
        ¦    (LVDS-1)     ¦     (DP-1)      ¦
        ¦                 ¦                 ¦
        ¦                 ¦                 ¦
        +-----------------¦                 ¦
            (blank...)    ¦                 ¦
                          +-----------------+

    How can I specify that the 1920x120-sized region below LVDS-1 should be displayed as a black bar that can't be accessed by mouse on DP-1? I tried just coping with --panning 1920x1200+0+0/1920x1080+0+0/0/0/0/120, but I found the screen movement to be very annoying. Update: I found a workaround. (Update 2: changed it to an answer, per suggestion -- the workaround doesn't answer the underlying question of leaving space blank.)

    Read the article

  • ESXi5 - management services crash - VMs still running

    - by Frederik Nielsen
    I have a setup with two ESXi5 servers. We are (were) running with an iSCSI box serving disks for the VMs - however, we are in the process of migrating away from it, because the storage OS disk is bad. Now, one of the ESXi hosts has been running for ~20hrs, and it seems like the management services have just crashed on that host. The VMs are still running - so it's not really serious. However, I want to fix it. Should I be worried? Will the VMs keep running? The hosts do respond to pings. I am running vCenter to administer the hosts. Thanks in advance.

    Read the article

  • Recovering an old website

    - by noah
    I have a client with an old website that somebody set up for him long ago. The guy who set it up is unreachable, so how do we go about trying to take it over? A WHOIS lookup got us some contact information, but I don't have great hopes for it (it hasn't been updated in quite some time). The nameservers are ns1.theplanet.com and ns2.theplanet.com, and we will try calling them, but I don't expect we'll be able to get much from them. What are our options? Is there a way I can discover the registrar so we can try contacting them as well? EDIT: It would be sufficient if we could get control of the domain name, or put in some sort of redirect to the new site. Either the hosting was prepaid for quite some time or someone else is still paying for it, so we don't care about that.
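
    For the registrar question specifically, the WHOIS record normally names the registrar directly, and DNS shows who is actually serving the zone today; a quick sketch with a hypothetical domain standing in for the real one:

        whois example.com | grep -iE 'registrar|updated|expir'   # registrar name and key dates
        dig +short NS example.com                                # nameservers currently in use
        dig +short example.com                                   # where the site resolves right now

    With the registrar's name in hand, the usual route is their ownership-recovery/dispute process, which generally requires the client to show a claim to the domain (business name, old invoices, access to the listed admin email address, etc.).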

    Read the article

  • Running a VM off of an external HD via USB

    - by Nelson LaQuet
    Is it viable to run a VM (VMware running Windows 7) off of an external HD via USB - i.e., referencing the vmx/vhd directly from the mounted drive? I know it's possible, but I guess I'm asking whether USB provides enough bandwidth for normal usage. If so, are there any particular brands that may be better or worse? I know that eSATA would be a more viable setup, but my laptop doesn't have an eSATA port. Currently I use the VM to segregate all of my work development servers and software from my main machine, so I will be running all development servers and tools on the VM directly.

    Read the article

  • Hard Drive upgrade advice for a Dell PowerEdge 1950

    - by user8185
    My setup is a Dell PowerEdge 1950 with 2x 140GB hard drives in a RAID 1 configuration. The OS is Windows 2003 Web Edition, and the disk is partitioned in two: a 12GB C: partition, with the remainder as the D: drive. Both are very close to full capacity. Ideally I want to replace those drives with 2x 1TB drives while retaining all data. First of all, is this possible without rebuilding the server? If so, will I need any 3rd-party software (e.g., Symantec Ghost or Partition Master) to do this? Any general advice on how to go about doing this?

    Read the article

  • disable browser localization

    - by broiyan
    How do I get the websites that I visit to stop localizing their language, presumably according to my IP location? This is a website-specific issue: for example, economist.com and superuser.com don't do it, but Google Checkout and craigslist.org do. Is there a way to set up Ubuntu and Firefox so that English will always be used for all web pages displayed? Edit: Of course many webpages have a link to an English version, but sometimes they don't. For example, I believe such links usually appear on the root resource, but sometimes I see non-English languages on child resources where such links do not appear. Example: most Blogger.com blogs appear in English, but when I go to the blogger's profile ("view my complete profile"), it appears in another language that matches my geographic location.
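
    For the Firefox half of this, the language a site is asked for is controlled by the intl.accept_languages preference (Edit > Preferences > Content > Languages, or about:config); a sketch that pins it to English from the shell - the profile path pattern is an assumption, adjust it to your actual profile directory:

        PROFILE=$(ls -d ~/.mozilla/firefox/*.default 2>/dev/null | head -1)
        echo 'user_pref("intl.accept_languages", "en-us,en");' >> "$PROFILE/user.js"
        # restart Firefox afterwards so the preference is picked up

    Sites that honour the Accept-Language header should then serve English; sites that localize purely by IP (Google and craigslist often do) may ignore it, and for those a country-neutral URL (e.g. google.com/ncr) or an explicit language link is the only real fix.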

    Read the article

  • How to calculate required switch speed based on network usage?

    - by tobefound
    I have a 48-port HP ProCurve 2610 switch (J9088A) that can handle 13.0 million PPS (packets per second) and features wire-speed switching capacity of 17.6Gbps. First off, what does that REALLY mean? Where do I start when trying to figure out whether my office (with 70 employees) will be well served by this switch? How do I calculate throughput based on an average user load of X MB per day? 90% of the folks will only be sending email, accessing random websites, etc.; the other 10% will be doing heavier tasks like moving image files (10 MB) across network shares, constant external FTP streams through the switch to a server, etc. Is this switch good enough?
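
    As a rough way to frame it: the switching fabric only has to carry what the ports are actually pushing at the same instant, so a back-of-the-envelope peak estimate is usually enough. A sketch - every traffic figure below is an assumption for illustration, not a measurement:

        #!/bin/sh
        # worst-case concurrent demand, assumed figures
        LIGHT_USERS=63   # email/web users (90% of 70), assume ~1 Mbit/s peak each
        LIGHT_MBPS=1
        HEAVY_USERS=7    # file copies / FTP (10% of 70), assume each saturates a 100 Mbit/s port
        HEAVY_MBPS=100
        PEAK=$(( LIGHT_USERS * LIGHT_MBPS + HEAVY_USERS * HEAVY_MBPS ))
        echo "peak concurrent demand: ${PEAK} Mbit/s"   # prints 763 Mbit/s

    Even that pessimistic peak is well under the 17.6Gbps (17,600 Mbit/s) switching capacity, and the 13 million PPS figure is essentially the same line-rate capacity expressed in minimum-size (64-byte) packets. In practice the bottleneck for an office this size is a single 100 Mbit/s edge port or the uplink to the server, not the switch fabric.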

    Read the article

  • Multiple .bkf files created in Backup Exec 12.5 or 2010 related to heavy I/O?

    - by syuusuke
    Hey everyone, I was wondering if anyone who has used Backup Exec 12.5 or 2010 has ever experienced multiple .bkf files being created for a single job. To describe what I mean: the .bkf files are created at random sizes under 2GB, even though I've set the option to split the file at 10GB. Some jobs create 20 .bkf files for one job, with chunks ranging from 50MB to 800MB. Is this a sign of heavy I/O issues? Bandwidth limitations? I'm not sure; I'm here to seek advice and suggestions. I've set up another backup server with the exact same settings, and it creates a new .bkf file only when the 10GB limit has been reached. I am backing up different machines there, but I know the settings are an exact match to the problematic server - or at least I think that's where the problem is.

    Read the article

  • Check the disk for problems on Debian Lenny

    - by Equ
    Hi guys! I just bought VPS hosting with Debian Lenny (I'm new to all this). I've managed to install and set up everything I need pretty well. My test website works as fast as expected most of the time, but sometimes it is really slow (response time of about 5-10 seconds). I checked everything, and it seems there may be some disk issues. How can I check the disk for problems/performance? What else could possibly cause such behaviour? Thank you!
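
    A few standard tools cover both disk health and disk performance; a hedged sketch (the device name /dev/sda is an assumption, and on a VPS the virtualized disk may not expose SMART data or meaningful benchmarks at all, since you share the host's I/O):

        sudo apt-get install smartmontools sysstat hdparm
        sudo smartctl -H -a /dev/sda       # SMART health status and error counters
        dmesg | grep -iE 'error|ata|i/o'   # kernel-level disk errors
        sudo hdparm -tT /dev/sda           # crude sequential read benchmark
        iostat -x 5                        # live utilisation and wait times; run it while the site feels slow

    If iostat shows high utilisation or await during the slowdowns, the disk (or a noisy neighbour on the same host) is the likely culprit; if not, look at MySQL queries, DNS, or memory/swap instead.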

    Read the article

  • Adding multiple websites with different SSL certificates in IIS 7

    - by Timka
    I'm having trouble using SSL for two different websites on my IIS 7 server. Please see my setup below:

        website1: my.corporate.portal.com
        SSL certificate for website1: *.corporate.portal.com
        https/443 bound to my.corporate.portal.com

        website2: client.portal.com
        SSL certificate issued for: client.portal.com

    When I try to bind https in IIS 7 with the client's certificate, I don't have the option to enter a host name (it's grayed out), and as soon as I select the 'client.portal.com' cert, I get the following error in IIS: "At least one other site is using the same HTTPS binding and the binding is configured with a different certificate. Are you sure that you want to reuse this HTTPS binding and reassign the other site or sites to use the new certificate?" If I click 'yes', the my.corporate.portal.com website stops using the proper SSL cert. Could you suggest something?

    Read the article

  • Processor upgrade on a laptop vs. RAM upgrade. Also, does RAM always matter?

    - by Evan
    I have a Dell Inspiron 14R (N4110) with an Intel Core i3 and 4GB of RAM. It runs very smoothly, but gaming on this laptop is very limited. This is mostly because of the integrated graphics, but I have seen a computer with a Core i5, and very similar specs otherwise, run games that the N4110 cannot. That other computer has integrated graphics and 6GB of RAM. I am wondering whether upgrading the RAM or upgrading the processor makes the bigger difference in performance. Which setup would perform better: an i5 with 4GB of RAM, or an i3 with 8GB of RAM (both with integrated graphics)? Also, is there a certain point at which you have more RAM than the computer could ever possibly use? For instance, is there really any difference in performance between 8GB and 16GB of RAM?

    Read the article

  • How to boot Windows from a recovery partition?

    - by Zack
    The Acer service center created recovery discs for my Acer laptop, and they also created a partition which contains the data from the recovery discs. I can only see that partition in Disk Management, but how do I boot from it? Some months ago I had Linux installed, so when the laptop booted up I could see that partition - but not now. How do I boot from it? I can't see that drive when I press F12. For reference: F2 = enter BIOS setup, F8 = boot in safe mode, F12 = choose the boot drive.

    Read the article

  • sudoers security

    - by jetboy
    I've set up a script to do Subversion updates across two servers - the localhost and a remote server - called by a post-commit hook run by the www-data user. /srv/svn/mysite/hooks/post-commit contains:

        sudo -u cli /usr/local/bin/svn_deploy

    /usr/local/bin/svn_deploy is owned by the cli user, and contains:

        #!/bin/sh
        svn update /srv/www/mysite
        ssh cli@remotehost 'svn update /srv/www/mysite'

    To get this to work I've had to add the following to the sudoers file:

        www-data ALL = (cli) NOPASSWD: /usr/local/bin/svn_deploy
        cli ALL = NOEXEC:NOPASSWD: /usr/local/bin/svn_deploy

    Entries for both www-data and cli were necessary to avoid the error:

        post commit hook failed: no tty present and no askpass program specified

    I'm wary of giving any kind of elevated rights to www-data. Is there anything else I should be doing to reduce or eliminate any security risk?

    Read the article

  • Make laptop boot to external monitor

    - by Ozzy
    Hi all. Here's my setup: a Dell 1737 laptop and 2x Dell U2311H monitors. The laptop lid is always closed and the laptop is wall-mounted behind the monitors. Every time I boot the laptop, I have to open the lid a little until it gets to the Windows 7 logon screen. Once there, I close the lid, both monitors get detected, and the laptop screen switches off. As the laptop is wall-mounted, however, it's really tedious to keep opening and closing the lid. Is there any way I can make it default to the external monitors permanently? Any suggestions are welcome, even hardware mods - I'm willing to rip apart the laptop and install a switch or something if need be.

    Read the article

  • How-To: Run CMSDK against a RAC cluster

    - by frank.closheim
    Using CMSDK in a production environment often requires a robust, reliable and failover-enabled repository. When using Oracle Real Application Clusters (RAC) with your CMSDK repository, you need a specific configuration in place to support such a setup. This post explains the configuration steps required when running CMSDK 9.0.4.6 with Oracle WebLogic Server (WLS). In the previous CMSDK 9.0.4.2 version, a RAC-enabled connect string looked like this:

        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))
          (LOAD_BALANCE = NO)
          (FAILOVER = ON)
          (CONNECT_DATA =
            (SERVICE_NAME = rac)
            (failover_mode = (type=select)(method=basic))
          )
        )

    CMSDK 9.0.4.6 makes use of data sources to connect to the underlying database. These data sources are configured inside your application server, such as Oracle WebLogic Server. In Oracle WebLogic Server 10.3.4, a single data source implementation has been introduced to support a RAC cluster. It responds to Fast Application Notification (FAN) events to provide Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and RAC instance graceful shutdown. XA affinity is supported at the global transaction Id level. The new feature is called WebLogic Active GridLink for RAC, which is implemented as the GridLink data source within WebLogic Server. This GridLink data source also works with Oracle Single Client Access Name (SCAN). SCAN is a feature used in RAC environments that provides a single name for clients to access any Oracle Database running in a cluster. You can think of SCAN as a cluster alias for databases in the cluster. The benefit is that the client's connect information does not need to change if you add or remove nodes or databases in the cluster. The CMSDK 9.0.4.6 documentation describes how to create a regular JDBC data source named jdbc/OracleDS. Please refer to the following document, which describes in detail how to create a GridLink data source in WLS.

    Read the article

  • VPS running out of memory, 200MB free

    - by demon
    At the beginning of this year I got a VPS for my website because I was running into the resource limits of shared hosting. Here is what I know about it:

        2GB memory, with 1GB swap
        Debian x64 server edition installed
        Software running on the webserver: mysql, apache, postfix, pop3, imap, amavisd, clamd, cron, fail2ban, munin-node, pure-ftpd, spamd, nginx

    Now for the setup: nginx listens on port 80 and handles the static files; the PHP side is done by apache2 running mod_php in combination with APC (no variable caching!). I am using a pretty 'busy' Drupal and phpBB stack on the server. For Drupal I am using Boost and Authcache to handle the server load, on a Pressflow stack; phpBB is just phpBB3 with some mods installed, and has at most 30 users online at a time. The problem is that it starts using swap a few days after a reboot, and the site becomes slower. I've added pictures of the Monit and Munin graphs, so maybe somebody can help me out.
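
    Before tuning anything, it's worth seeing which processes actually own the memory once swap starts being used; a quick sketch (on a stack like this the usual suspects are Apache's MaxClients with mod_php, clamd/amavisd, and MySQL's buffers):

        free -m                          # overall RAM vs swap
        ps aux --sort=-rss | head -15    # the biggest resident processes
        apache2ctl -M                    # confirm which MPM and modules are loaded alongside mod_php

    With mod_php every Apache child carries a full PHP interpreter, so if MaxClients is set higher than roughly (free RAM / size of one child), the box will inevitably crawl into swap under load; capping MaxClients and trimming clamd/amavisd are usually the first wins on a 2GB VPS.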

    Read the article

  • Google is re-indexing pages after redirecting URLs from HTTP to HTTPS incorrectly

    - by SLIM
    I upgraded my site so that all pages have gone from HTTP to HTTPS. I didn't consider that Google treats HTTPS pages differently than HTTP. I recreated my sitemap so that all links now reflect the new HTTPS URLs and let it be for a few days. (Whoops!) Google is now re-indexing all the HTTPS pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new HTTPS pages. The problem is that Google sees all of these as brand-new pages, when many of them have a long HTTP history. Of course most of you will recognize the problem: I didn't set up a 301 from the old HTTP URLs to the new HTTPS URLs. Is it too late to do this? Should I switch my sitemap back to HTTP URLs and then 301 redirect to the new HTTPS URLs? Or should I leave the sitemap as is and set up 301 redirects anyway? I'm not even sure if Google is trying to reach the HTTP site anymore. Currently the site is doing 303 redirects (from HTTP to HTTPS), although I haven't figured out why yet.
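
    It generally isn't too late: the usual advice is to leave the sitemap pointing at the HTTPS URLs and add a site-wide 301 (not 303) from every HTTP URL to its HTTPS twin, so the old URLs' history is consolidated onto the new ones as Google recrawls. A hedged sketch assuming Apache with mod_rewrite enabled - on nginx the equivalent is a one-line "return 301 https://$host$request_uri;" in the port-80 server block:

        # lines to add to the site's .htaccess (or the port-80 VirtualHost)
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    Once this is in place, the mysterious 303s should disappear as well, since the explicit 301 is issued before whatever is currently generating them.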

    Read the article

  • correct format for datetime appended to filename

    - by jhayes
    I'm trying to set up a batch file to execute a set of stored procs and dump the output to a timestamped text file. I'm having problems finding the correct format for the timestamp. Here is what I'm using:

        osql.exe -S <server> -E -Q "EXEC <stored procedure>" -o "c:\filename_%date:~-0,10%_%time:~-0,10%.txt"

    The error I get is:

        Cannot open output file - x:\filename_Thu 06/25/_16:26:43.1.txt
        No such file or directory

    I can't find the documentation, and I've played around with it but can't find the correct format.

    Read the article

  • OpenGL render to texture causing edge artifacts

    - by mysticalOso
    This is my first post here, so any help would be massively appreciated :) I'm using C++ with SDL and OpenGL 3.3. When rendering directly to the screen the result looks correct, but when I render to a texture I get artifacts along the edges (see the attached screenshots). Anti-aliasing is turned off in both cases. I'm guessing this has something to do with depth buffer accuracy; I've tried a lot of different methods to improve the result, but no success :( I'm currently using the following code to set up my FBO:

        GLuint frameBufferID;
        glGenFramebuffers(1, &frameBufferID);
        glBindFramebuffer(GL_FRAMEBUFFER, frameBufferID);

        glGenTextures(1, &coloursTextureID);
        glBindTexture(GL_TEXTURE_2D, coloursTextureID);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        // Depth buffer setup
        GLuint depthrenderbuffer;
        glGenRenderbuffers(1, &depthrenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, SCREEN_WIDTH, SCREEN_HEIGHT);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer);

        glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, coloursTextureID, 0);
        GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
        glDrawBuffers(1, DrawBuffers);

        // if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) return false;

    Thank you so much for any help :)

    Read the article

  • R12 Diagnostic Script for Purchasing Encumbrance Issues

    - by Oracle_EBS
    Do you have a Release 12 Purchasing document with an accounting encumbrance error? Get all the relevant data in one step using the new diagnostic in Doc ID 1483743.1 - 'R12: Diagnostic Script to help troubleshoot Purchasing Encumbrance Issues'. Avoid the back-and-forth with Support for data collection. Query the document ID in My Oracle Support and add it to your Favorites using the star icon for quick access. The note explains when to use the script and how to use it. The script produces a user-friendly HTML output that contains information relevant to encumbrance issues, along with some data-validation checks to identify common data corruption issues in your document. For example, this one diagnostic provides information on the following:

        - Cross Product Setup
        - Document Data Dump
        - Funds availability
        - Subledger accounting information
        - GL and AP Invoice Data
        - Debug and Trace

    This output is ideal for self-service, as the Data Validation section lists known issues (related to the document) with links to key documentation. Alternatively, the report can be uploaded to Support when logging a Service Request. To see more about the diagnostic, attend our September 11, 2012 webcast 'Overview of Procurement Patching and New Tools for Issue Resolution'. Visit Doc ID 1479718.1 to sign up. Note: this topic is not yet listed, as it has only just been added.

    Read the article
