Search Results

Search found 8634 results on 346 pages for 'base'.

Page 167 of 346

  • SQL Server Performance & Latching

    - by Colin
    I have a SQL Server 2000 instance which runs several concurrent select statements on a group of 4 or 5 tables. Often the server's performance during these queries becomes extremely diminished. The queries can take up to 10x as long as other runs of the same query, and it gets to the point where simple operations like getting the table list in Object Explorer or running sp_who can take several minutes. I've done my best to identify the cause of these issues, and the only performance metric which I've found to be off base is Average Latch Wait Time. I've read that over 1 second of wait time is bad, and mine ranges anywhere from 20 to 75 seconds under heavy use. So my question is, what could be the issue? Shouldn't SQL Server be able to handle multiple selects on a single table without losing so much performance? Can anyone suggest somewhere to go from here to investigate this problem? Thanks for the help.
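    A hedged starting point for digging further, using only commands that exist on SQL Server 2000 (which predates the DMVs): clear the cumulative wait statistics, reproduce the slow period, and then see which wait types dominate.

        -- Reset the cumulative wait statistics, then reproduce the slow workload
        DBCC SQLPERF(WAITSTATS, CLEAR)

        -- Afterwards, list wait types (latch, lock, I/O) and their accumulated wait times
        DBCC SQLPERF(WAITSTATS)

        -- While the selects are running, look for blocking and long-running SPIDs
        EXEC sp_who2
        EXEC sp_lock

    Roughly speaking, if PAGEIOLATCH waits dominate the server is waiting on disk reads, while PAGELATCH waits point at contention on in-memory pages; either way that shifts suspicion toward I/O and memory pressure rather than the SELECT statements themselves.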

    Read the article

  • Open-source training class/room/instructor resource management software?

    - by Kyle Eli
    We're looking to replace an internal system used for managing training classes with something a bit more robust. Needs to be open-source or have a license level that grants access to source, and needs to be ASP.net (C# preferred, but could live with VB.net). Ultimately, we'll need to be able to assign facilities and instructors, manage attendees, send notifications, and build calendar views. We'll also be integrating with our website to allow on-line sign-up and other things for attendees to manage on their own. We do expect to implement quite a bit of it in-house, but we'd like as broad of a base to start from as we can get. Still, just a really good web-based meeting-room reservation system might make a good enough starting point. In list form:
    - Meeting/training resource management software
    - ASP.net (C# or VB.net)
    - Source available
    - We're expecting to have to modify the software to meet all of our requirements

    Read the article

  • Unable to configure DD-WRT SNMP monitoring with Zabbix

    - by Jien Wai
    I installed Zabbix on Ubuntu but am not sure what setting I missed. My plan is to use SNMP to monitor a DD-WRT router, which runs an SNMP service. I enabled the SNMP service on the DD-WRT router page, and I also created a host in Zabbix with the included DD-WRT template. After doing this I am still unable to get any connection or information in Zabbix, which means the router isn't communicating with Zabbix. This picture shows my DD-WRT SNMP configuration: http://img13.imageshack.us/img13/2228/rhj2.png And this is the Zabbix configuration where I created the host to monitor my DD-WRT router: http://imageshack.us/a/img853/7311/hlpr.png
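    One hedged way to narrow down which side is at fault is to query the router directly from the Zabbix server with the net-snmp tools (the IP address and community string below are placeholders; DD-WRT's read-only community is typically "public"):

        # Walk the router's system subtree from the Zabbix box
        snmpwalk -v 2c -c public 192.168.1.1 system

        # Or fetch just the system description
        snmpget -v 2c -c public 192.168.1.1 sysDescr.0

    If this times out, the router's SNMP service or its firewall is the problem; if it answers, re-check the SNMP interface settings (IP, port 161, community) on the Zabbix host entry.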

    Read the article

  • How can I configure Firefox to assume I have less memory?

    - by WoLpH
    Firefox has a few different settings that automatically get tuned based on the system RAM. This is all great if you're running nothing besides Firefox, but when you're running half a dozen apps at the same time and they all assume that they can take a decent chunk of memory, it just kills the box. Example settings:
    - http://kb.mozillazine.org/Browser.sessionhistory.max_total_viewers
    - http://kb.mozillazine.org/Browser.cache.memory.capacity
    How can I make Firefox automatically configure all these settings with the assumption that I only have 512MB of memory instead of 4GB (or whatever number, but you get the idea)? I am running Ubuntu 12.04 with Firefox 14. Current workarounds:
    - Running a Windows XP virtual machine with 512MB of RAM. It actually runs smoothly and takes less memory (including Windows) to run than having Firefox (or Chrome, for that matter) run standalone.
    - Installing the 32-bit version of Firefox (apt-get install firefox:i386); its base memory usage is only about 50% of what it is with the 64-bit build.
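    A minimal user.js sketch (placed in the Firefox profile directory) that pins the two preferences linked above to fixed values instead of letting them scale with system RAM; the numbers are only illustrative:

        // Cap the in-memory cache at roughly 16 MB (value is in KB; the default -1 sizes it from RAM)
        user_pref("browser.cache.memory.capacity", 16384);
        // Keep at most 2 pages in the back/forward memory cache (the default -1 also scales with RAM)
        user_pref("browser.sessionhistory.max_total_viewers", 2);

    The same values can be set by hand in about:config; user.js just reapplies them on every start.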

    Read the article

  • Error attempting to log into Redmine through IIS 7.5 Reverse Proxy

    - by dneaster3
    I am trying to set up Redmine as a subdirectory of our department's intranet site, and also to rebrand it as "Workflow" using IIS's URL Rewrite extension. I have it "working" in that it will serve the page with all the correct rewrites in both the URL and the HTML code. However, when I try to submit a form (including logging in to Redmine), IIS gives me one of the following errors:
    Your browser sent a request that this server could not understand.
    or
    The specified CGI application encountered an error and the server terminated the process.
    Here's the setup:
    - Redmine installed on a local Windows XP machine using the Bitnami all-in-one installer, which includes Apache 2, Ruby on Rails, MySQL, Redmine and Thin
    - Redmine runs locally at http://localhost/redmine
    - Redmine runs over the intranet at http://146.18.236.xxx/redmine
    - Windows Server + IIS 7.5 serving up an ASP.NET intranet web application, mydept.mycompany.com
    - IIS extensions URL Rewrite and ARR installed
    - Reverse proxy settings for IIS (shown below) to serve Redmine at mydept.mycompany.com/workflow
        <rewrite>
          <rules>
            <rule name="Route requests for workflow to redmine server" stopProcessing="true">
              <match url="^workflow/?(.*)" />
              <conditions>
                <add input="{CACHE_URL}" pattern="^(https?)://" />
              </conditions>
              <action type="Rewrite" url="{C:1}://146.18.236.xxx/redmine/{R:1}" logRewrittenUrl="true" />
              <serverVariables>
                <set name="HTTP_ACCEPT_ENCODING" value="" />
                <set name="ORIGINAL_HOST" value="{HTTP_HOST}" />
              </serverVariables>
            </rule>
          </rules>
          <outboundRules rewriteBeforeCache="true">
            <clear />
            <preConditions>
              <preCondition name="isHTML" logicalGrouping="MatchAny">
                <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
                <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/plain" />
                <add input="{RESPONSE_CONTENT_TYPE}" pattern="^application/.*xml" />
              </preCondition>
              <preCondition name="isRedirection">
                <add input="{RESPONSE_STATUS}" pattern="3\d\d" />
              </preCondition>
            </preConditions>
            <rule name="Rewrite outbound relative URLs in tags" preCondition="isHTML">
              <match filterByTags="A, Area, Base, Form, Frame, Head, IFrame, Img, Input, Link, Script" pattern="^/redmine/(.*)" />
              <action type="Rewrite" value="/workflow/{R:1}" />
            </rule>
            <rule name="Rewrite outbound absolute URLs in tags" preCondition="isHTML">
              <match filterByTags="A, Area, Base, Form, Frame, Head, IFrame, Img, Input, Link, Script" pattern="^(https?)://146.18.236.xxx/redmine/(.*)" />
              <action type="Rewrite" value="{R:1}://mydept.mycompany.com/workflow/{R:2}" />
            </rule>
            <rule name="Rewrite tags with hypenated properties missed by IIS bug" preCondition="isHTML">
              <!-- http://forums.iis.net/t/1200916.aspx -->
              <match filterByTags="None" customTags="" pattern="(\baction=&quot;|\bsrc=&quot;|\bhref=&quot;)/redmine/(.*?)(&quot;)" />
              <conditions logicalGrouping="MatchAll" trackAllCaptures="true" />
              <action type="Rewrite" value="{R:1}/workflow/{R:2}{R:3}" />
            </rule>
            <rule name="Rewrite Location Header" preCondition="isRedirection">
              <match serverVariable="RESPONSE_LOCATION" pattern="^http://[^/]+/(.*)" />
              <conditions>
                <add input="{ORIGINAL_URL}" pattern=".+" />
                <add input="{URL}" pattern="^/(workflow|redmine)/.*" />
              </conditions>
              <action type="Rewrite" value="http://{ORIGINAL_URL}/{C:1}/{R:1}" />
            </rule>
          </outboundRules>
        </rewrite>
        <urlCompression dynamicCompressionBeforeCache="false" />
    Any help that you can provide would be appreciated. I get the impression that I'm close and that it is just one little setting here or there, but I can't seem to make it work.

    Read the article

  • Open source CMS for a university department

    - by Greg Kuperberg
    I realize that this type of question gets asked over and over again. Nonetheless, I want to ask a more specific version. I'm in a university math department. Long ago our sysadmins (or just one at the time) switched to a web content management system. At the time, Zope looked like an informed choice. We have used Zope for years, but at least in my opinion, it has always been a controversial decision. At the time I didn't understand why it was so important to have a web CMS. Now I see that it certainly is important, but I don't know that it should be Zope. The good (even necessary) features of Zope for us are: it's free and Linux-based; it is a true CMS and not something else (e.g. a wiki or blog); and it lets you write HTML and scripts. What I really don't like about Zope is that the outcome of using it is all-or-nothing in a lot of ways. At least in convenient use, it ends up dividing the enterprise into superusers who can do everything, and lusers who can't do anything (except write their own home pages in plain HTML). It has a huge user manual, which end users won't have time to read. Somehow with the access permissions, the simple thing to do is to let a few admins access all of the source and data and that's it. Since this is a math department, the user base varies from real novices to people who understand computers reasonably well. But as it stands, any change that involves Zope has to go through the sysadmins. When the sysadmins are in a hurry, sometimes they will also just add plain HTML pages to the web site instead of using the Zope framework. It doesn't help matters that Zope is fairly disk-intensive and fairly hype-intensive. Not to dwell on Zope too much, but I am wondering what is the right web CMS for a mixed user base of terminal novices, quick studies, and experienced users. Some users might want intermediate permissions, e.g. read permission but not write permission, or permission to change some subset of the pages or see some subset of the database tables. Also it should be Linux-based and open source and a little bit scalable, and of course widely used and well-supported is a good idea. I might guess that the answer is Drupal just because that was the general answer before, but I don't know if it is the right type of CMS for this purpose. (But note that Python is a relatively popular language in a math department, among other reasons because Sage is based on Python.) I can see that I didn't completely define the question and that people are guessing what type of site it is. It is the UC Davis Math Department. The main structure of the site is not suitable for a wiki and it is also not the same thing as a course environment like Moodle. Rather, the site is mostly structured as a generic medium-small enterprise. Some components of the site could be a wiki, Moodle, LaTeX plugin, Request Tracker, etc. However, the main issue is not these components. The main issue is that it would be better to decentralize management of the site. Right now, everything that is in the Zope CMS has to go through the sysadmins. Every other user in the department either has to put in a request to them, or write their own web pages with no help from Zope. There are two main reasons for this: (1) Other people in the department don't have time to read the Zope manual. (2) It's a hassle to set up intermediate permissions in Zope. However, there are other people in the department who know how to write computer programs and use markup languages.
I wouldn't want a solution that assumes that users either can't be trusted with much more than drag-and-drop, or that they are IT professionals who sleep with documentation manuals. I'm wondering if Plone/Zope still has this quality, since certainly Zope by itself does. But I also wonder sometimes if common-sense flexibility is unfashionable these days, and that things in general have to be either mindlessly easy or incredibly powerful.

    Read the article

  • Is it easy to update an Ubuntu beta to the Ubuntu final release?

    - by Peter Smit
    At this moment I am preparing a virtual server to host a web application which needs php5.3. The virtual server base image is always Hardy (8.04 LTS). There is no php5.3 until the upcoming release in a few days: Lucid (10.04 LTS). I see two options:
    - waiting until the final version is released and then starting to prepare this server
    - upgrading to the beta now (do-release-upgrade --devel-release) and, when the final release has come, upgrading to that
    For time constraints I would prefer the second option. I just can't find out whether it is easy to upgrade from a beta to the 'clean' final release. Is this possible in an easy way? Will it have any drawbacks for security, or will there be any traces left of it ever having been a beta release? Note: the server will not go into production before the LTS is really installed.
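    For what it's worth, a sketch of the second option; since the beta and the final release install packages from the same lucid archive, the beta machine is brought up to the final state by ordinary package updates:

        sudo do-release-upgrade --devel-release   # move 8.04 to the 10.04 beta now
        # ...once the final 10.04 release is out:
        sudo apt-get update
        sudo apt-get dist-upgrade                 # beta packages are superseded by the released versions

    In general there is no separate beta channel to leave behind, so beyond package versions (which the dist-upgrade replaces) no traces of the beta should remain.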

    Read the article

  • Single sign-on for SharePoint to MySite?

    - by Chris W
    I've got a fairly simple SharePoint 2010 farm set up: 2 WFE servers with Network Load Balancing hosting the main portal site. As per Microsoft's best practice recommendations I've set up My Sites in a separate web application. As some of the user base are not using domain-joined PCs, they have to log in once for the portal (http://portal) and then again when they access My Sites, since they're crossing into a separate web application on a separate host (http://mysite). Portal & MySite are both hosted on the same physical WFE servers. Is there an easy way to set something up to stop this from happening and just have them log in once? I understand that there are plans for us to deploy ISA in the not too distant future - could we use ISA to manage authentication to the two sites so that the users only need to log in once?

    Read the article

  • Differencing disk opinions

    - by troth
    I've read about the performance issues with differencing disks, but I still think there is a solid place for them, and that's the OS boot partition. If I'm going to have 20 VMs on a CSV-based volume, I don't want to waste the 20+ GB per guest just for the OS boot. If I get a good base disk with all of the most-used applications installed and have the pagefile located somewhere else, I don't think the deltas would be that great, so it should not create a performance issue. Also, with SAN-based CSV volumes, does it make any sense to have the pagefile go to a separate CSV volume? Any opinions on this? Thanks

    Read the article

  • What is a 'best practice' backup plan for a website?

    - by HollerTrain
    I have a website which is very large and has a large user base. I am trying to think of a 'best practice' way to create a backup or mirror website, so if something happens to domain.com, I can quickly point the site to backup.domain.com via a redirect. This would give me time to troubleshoot domain.com while everyone is viewing backup.domain.com and not knowing the difference. Is my method the ideal method, or have you enacted better methods to create a backup site? I don't want to have the site go down and then get yelled at every minute while I'm trying to fix it. Ideally I would just 'flip the switch' and it would redirect the user to a backup. Any insight would be greatly appreciated.

    Read the article

  • Sysprep on Windows 7 for a non-Windows admin?

    - by askvictor
    I need to use sysprep to deploy some Windows 7 machines. I can't find any resources that explain this very easily. In particular, I've loaded all of the programs I want for the image onto a computer. If I want to use sysprep to prepare this machine for cloning onto 200 other machines, what steps would I need to take? Or do I need to start again, creating the image using 'audit' mode? The machines I'm using come pre-configured from the manufacturer with a host of custom drivers that wouldn't be fun to re-install - I'd rather use their base image to build upon. Cheers, Victor
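    As a hedged sketch of the usual flow on a reference machine that already has everything installed (the answer-file path is only an example): generalize the installation with Sysprep, let it shut down, and then capture the disk with your imaging tool of choice.

        REM Generalize the install so it can be deployed to other machines, then shut down ready for capture
        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\unattend.xml

    Audit mode is mainly useful when you want to keep customizing the image without creating a user account first; a machine configured the way yours is can usually be generalized as-is, though it's worth checking that the manufacturer's drivers are still present after the /generalize pass, since that step strips hardware-specific state.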

    Read the article

  • CVSNT to CVSNT Migrations

    - by BParker
    Has anybody re-homed a CVSNT installation? I need to take a CVSNT setup on one ageing server and relocate it to a new shiny server, but I don't have a great deal of experience with this software. Does anyone know the basic process for moving an entire repository to a new server? We have 3 base repositories, totaling about 6GB. Size-wise it's not too big, but I need to make sure all the history is migrated correctly. I've googled around a bit, but not managed to find any info on this kind of move; everyone seems to prefer to write how-tos on moving to Subversion et al.... The CVSNT server is running on Windows, if that makes a great deal of difference....

    Read the article

  • Generic socket 940 CPU fan for HP XW9300 interchangeable with HP part number 411454-001

    - by ConcernedOfTunbridgeWells
    Hi, I'm trying to find a generic Socket 940 CPU fan that will fit the air duct in an XW9300. Note that two different CPU fan mounting schemes have been used on this machine. One has two grommets in the motherboard that the CPU fan screws into, and the other has a plastic collar and base plate. The form factor I'm looking for is the former (see the picture below). Two HP part numbers for fans in this form factor are 411454-001 (shown below) and 416055-001. Note that the dimensions are important as the machine has an air duct that fits over the fan. Can anyone tell me the manufacturer and model of an equivalent generic part? Better yet, do you know somewhere (preferably in the UK) from which such an item can be obtained?

    Read the article

  • How to resolve "HTTP/1.1 403 Forbidden" errors from iCal/CalDAV server after upgrade to Snow Leopard Server?

    - by morgant
    We recently upgraded our Open Directory Master & Replica to Mac OS X 10.6.4 Snow Leopard Server. We had a mismatched server FQDN & LDAP Search Base/Kerberos Realm, so we exported all users & groups, created the new Open Directory Master with a matching FQDN & Search Base/Realm, reimported users & groups, and re-bound all servers & workstations to the new OD Master. At the same time as all of this, we upgraded our iCal/CalDAV server to Mac OS X 10.6.4 Snow Leopard Server. Ever since doing so, we've seen the following issues with our iCal/CalDAV server and iCal clients on both Mac OS X 10.5 Leopard & Mac OS X 10.6: If a user attempts to move or delete an event (single or repeating) that was created prior to the upgrade to 10.6 Server, they get the following error: Access to "blah" in "blah" in account "blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation. New users added to the directory get the following error when attempting to add their account in iCal's preferences: The user "blah" has no configured principals. Confirm with your network administrator that your account has at least one CalDAV principal configured. Interestingly, we've since discovered that users seem to be able to delete individual events from an old repeating event without error, but that's a massive amount of work to get rid of a repeating event. I will note that we have not yet added an SRV record in DNS as instructed on page 19 of iCal_Server_Admin_v10.6.pdf. Further Investigation: In this particular case, a user is attempting to decline repeating events created prior to the upgrade to Snow Leopard Server. Granting the user full write access with sudo calendarserver_manage_principals --add-write-proxy users:user1 users:user2 (which did work) doesn't allow deletion of the events. Still get the usual error: Access to "blah blah" in "blah blah" in account "blah blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation.
The error that shows up in /var/log/caldavd/error.log on the iCal Server when attempting to delete one of the events is: 2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.extensions#info] PUT /calendars/__uids__/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/calendar/XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.ics HTTP/1.1 2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.scheduling.implicit#error] Cannot change ORGANIZER: UID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX And the error in /var/log/system.log on the client is: Mar 17 15:14:30 192-168-21-169-dhcp iCal[33509]: CalDAV CalDAVWriteEntityQueueableOperation failed: status 'HTTP/1.1 403 Forbidden' request:\n\nBEGIN:VCALENDAR^M\nVERSION:2.0^M\nPRODID:-//Apple Inc.//iCal 3.0//EN^M\nCALSCALE:GREGORIAN^M\nBEGIN:VTIMEZONE^M\nTZID:US/Eastern^M\nBEGIN:DAYLIGHT^M\nTZOFFSETFROM:-0500^M\nTZOFFSETTO:-0400^M\nDTSTART:20070311T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU^M\nTZNAME:EDT^M\nEND:DAYLIGHT^M\nBEGIN:STANDARD^M\nTZOFFSETFROM:-0400^M\nTZOFFSETTO:-0500^M\nDTSTART:20071104T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU^M\nTZNAME:EST^M\nEND:STANDARD^M\nEND:VTIMEZONE^M\nBEGIN:VEVENT^M\nSEQUENCE:5^M\nDTSTART;TZID=US/Eastern:20090117T094500^M\nDTSTAMP:20081227T143043Z^M\nSUMMARY:blah blah^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT:urn:uuid^M\n :XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:mailto:user@d^M\n omain.tld^M\nEXDATE;TZID=US/Eastern:20110319T094500^M\nDTEND;TZID=US/Eastern:20090117T183000^M\nRRULE:FREQ=WEEKLY;INTERVAL=1^M\nTRANSP:OPAQUE^M\nUID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nORGANIZER;CN="First Last":mailto:[email protected]^M\nX-WR-ITIPSTATUSML:UNCLEAN^M\nCREATED:20110317T191348Z^M\nEND:VEVENT^M\nEND:VCALENDAR^M\n\n\n... response:\nHTTP/1.1 403 Forbidden^M\nDate: Thu, 17 Mar 2011 19:14:30 GMT^M\nDav: 1, access-control, calendar-access, calendar-schedule, calendar-auto-schedule, calendar-availability, inbox-availability, calendar-proxy, calendarserver-private-events, calendarserver-private-comments, calendarserver-principal-property-search^M\nContent-Type: text/xml^M\nContent-Length: 134^M\nServer: Twisted/8.2.0 TwistedWeb/8.2.0 TwistedCalDAV/2.5 (iCal Server v12.56.21)^M\n^M\n<?xml version='1.0' encoding='UTF-8'?><error xmlns='DAV:'>^M\n <valid-attendee-change xmlns='urn:ietf:params:xml:ns:caldav'/>^M\n</error> One thing I have noticed, and I'm not sure if this has any real effect is that in many of these pre-Snow Leopard Server migration events, the ORGANIZER is specified like the following: ORGANIZER;CN=First Last:mailto:[email protected] But newer ones are more like one of the two following: ORGANIZER;CN=First Last;[email protected];SCHEDULE-STATUS=1 ORGANIZER;CN=First Last;[email protected]:urn:uuid:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX iCal_Server_Admin_v10.6.pdf notes that the ".db.sqlite" files are completely disposable, they're merely a performance cache and are re-built on the fly, so are safe to delete. I did delete the one for the organizer's calendars and it took longer to process the attempted event delete while it rebuilt the database, but still errored out in the end. FWIW the error is thrown by this code: https://trac.calendarserver.org/browser/CalendarServer/trunk/twistedcaldav/scheduling/implicit.py Any further suggestions? I see lots of questions about this in my Google searches, but not solutions and this is a widespread problem on our iCal Server. 
Now, we mostly try to get users to ignore them (hence the amount of time this question has been open), but every now and then I dig in deeper trying to find the culprit and/or solution.

    Read the article

  • Automating and deploying new linux servers

    - by luckytaxi
    I'm in the process of developing a method to automate new virtual machines into my environment. 90% of our machines are virtual, but the process is similar for both physical and VMware-based images. What I do now is use Cobbler to install the base OS. The kickstart script has post hooks to modify the yum repo and install Puppet and Func. Once the servers are running, I manually add them into Nagios and sign the certificate via the puppetmaster. I've since migrated most of the resources to use MySQL as the backend. I wanted to see what others are doing, and my goal for 2011 is to have Puppet inventory the hardware into MySQL, and then write a Python script to have Nagios grab the info and automatically add it for monitoring purposes. It's kind of tedious to have to add each new server into Nagios, Puppet's dashboard, Munin, etc...
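    A rough sketch of the kind of Python glue described above; the hosts table, its columns, and the connection details are assumptions about the Puppet inventory database (not a known schema), and it assumes the mysql-connector-python driver is available:

        import mysql.connector

        # Placeholder connection details and table layout for the inventory database
        conn = mysql.connector.connect(host="localhost", user="nagios",
                                       password="secret", database="inventory")
        cur = conn.cursor()
        cur.execute("SELECT hostname, ip_address FROM hosts")

        # Emit one minimal Nagios host definition per inventoried machine
        with open("/etc/nagios/conf.d/auto_hosts.cfg", "w") as cfg:
            for hostname, ip in cur:
                cfg.write("define host {\n"
                          "    use        linux-server\n"
                          "    host_name  %s\n"
                          "    address    %s\n"
                          "}\n\n" % (hostname, ip))

        conn.close()

    Run it from cron or a Puppet exec and follow it with a Nagios reload; "linux-server" is the host template shipped with the stock Nagios config, so substitute whatever template is actually in use.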

    Read the article

  • Website deployment - managing uploaded content?

    - by Legion
    I'm a programmer by trade, "server administrator" by company necessity. We're looking at dumping the old painful "update site by FTP upload" style of deployment. Having the webserver check out the latest code base from version control into a folder and having a "current" symlink point to the latest checkout (allowing for easily stepping back to an older version by changing the symlink) seems to be the way we want to go. But I have a question: what's a good practice for dealing with user-uploaded content? This stuff isn't in version control. I have a couple of ideas for dealing with this, but what is the smart, accepted practice?
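    One common pattern (Capistrano-style) is to keep uploads in a shared directory outside the versioned releases and symlink it into each checkout, so flipping the current symlink never touches user content. A hedged shell sketch with placeholder paths and repository URL:

        # One-time setup: user-uploaded content lives outside the release checkouts
        mkdir -p /var/www/app/shared/uploads

        # Each deploy: check out into a timestamped release directory
        ts=$(date +%Y%m%d%H%M%S)
        svn checkout -q http://svn.example.com/app/trunk "/var/www/app/releases/$ts"

        # Link the shared uploads into the new release, then switch 'current' in one step
        ln -s /var/www/app/shared/uploads "/var/www/app/releases/$ts/uploads"
        ln -sfn "/var/www/app/releases/$ts" /var/www/app/current

    Rolling back is just pointing current at an older release, and the shared directory is also a single obvious thing to back up.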

    Read the article

  • How do I delete a differential backup?

    - by BlueMonkMN
    I often like to create backups when testing the software I work on, and will sometimes create a differential backup if I want to be able to get back to multiple previous states. However, sometimes I realize that I forgot one thing I wanted to include in a differential backup, or I no longer need a previous differential backup. Sometimes I simply want to create a new scenario from the original base image and start working with a new series of differential backups. So I'd like to be able to delete some older differential backups so I don't get confused about which ones I'm using. But I can't find any way to delete just the differential backups, selectively or all at once.

    Read the article

  • Debugging problems in Visual Studio 2005 - No source code available for the current location

    - by espais
    Hi all, I've searched up and down Google for others with a similar problem, and while I can find the error I don't think that other people have the same base problem that I do. Basically, I had to create a project for a unit-testing environment in order to run this test suite. First, I add my original C file and compile, and then a test file (C++) is generated. I then exclude my original source from the project, include this test script (which includes the original source at the top), and then run. I can debug the test file fine, but when it jumps to the original C file I get the dreaded 'no source code available for the current location' error. Both files are located in the same location, and I compiled the original file without any issue. Anybody have any thoughts about this? It's driving me crazy!

    Read the article

  • PCI scan findings and problems with weak ciphers on ports 993, 443, 995, 465

    - by user64991
    From the PCI scan results:
    Synopsis: The remote service encrypts traffic using a protocol with known weaknesses.
    Description: The remote service accepts connections encrypted using SSL 2.0, which reportedly suffers from several cryptographic flaws and has been deprecated for several years. An attacker may be able to exploit these issues to conduct man-in-the-middle attacks or decrypt communications between the affected service and clients.
    See also: http://www.schneier.com/paper-ssl.pdf
    Solution: Consult the application's documentation to disable SSL 2.0 and use SSL 3.0 or TLS 1.0 instead.
    Risk Factor: Medium / CVSS Base Score: 2 (AV:R/AC:L/Au:NR/C:P/A:N/I:N/B:N)
    I have tried to change SSLProtocol all -SSLv2 to SSLProtocol -ALL +SSLv3 +TLSv1, and SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW to SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:!MEDIUM:!LOW:!SSLv2:!EXPORT, but using SSLdigger it shows the same result. Is this the right way to do something like this?
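    One way to check each flagged port directly is to attempt an SSLv2-only handshake with openssl (hostnames below are placeholders, and the -ssl2 flag only exists in OpenSSL builds that still include SSLv2 support); a failed handshake means SSLv2 is off on that port:

        openssl s_client -connect www.example.com:443  -ssl2
        openssl s_client -connect mail.example.com:993 -ssl2   # IMAPS
        openssl s_client -connect mail.example.com:995 -ssl2   # POP3S
        openssl s_client -connect mail.example.com:465 -ssl2   # SMTPS

    Note that ports 993, 995 and 465 are normally served by the mail software (Dovecot, Courier, Postfix or similar) rather than Apache, so the SSLProtocol/SSLCipherSuite changes above only affect port 443; the equivalent SSLv2/weak-cipher settings have to be changed in the mail daemon's own configuration as well.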

    Read the article

  • Toolkit & Habits for Linux Network & System Administration [closed]

    - by slashmais
    I am tasked with the administration of a small office network as well as several workstations running mostly Debian and Ubuntu. There are two servers: one database & print server, and one backup & file server. Being relatively new to this side of things, knowing enough to help myself to some degree on Linux, I would like to know what software tools and tasks/habits I can use/acquire to learn this field and be effective while doing so. I don't need to know what is best, just what a newbie sysadmin can use as a starter pack to learn from and use as a base to grow into proper system administration. [edit] What I need is those few basic tools to start with, and the kind of things I need to do regularly, e.g.: which logs to check, when & what to monitor, the kind of 'right' place to start and to which I can add as I need.

    Read the article

  • How to install apt-get on a busybox embedded system?

    - by Daniel YC Lin
    My embedded system uses an SH4 CPU. The Debian distribution for it can be obtained from http://www.si-linux.co.jp/pub/debian-sh/lenny-sh4/ I got apt*.deb and extracted its data.tar.gz. After setting up /etc/apt/sources.list, I could run 'apt-get update'. But it reports missing dependencies when I try to run 'apt-get install ntpdate'. Is there any method to let apt-get ignore some base packages? Those packages are provided by my original embedded system (e.g. busybox).
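    A hedged workaround sketch: have apt-get download the packages without trying to satisfy the base dependencies, then tell dpkg to ignore the missing ones (reasonable here because busybox already provides that functionality, but it does bypass dependency checking for those packages):

        # Download only; the .deb files land in /var/cache/apt/archives
        apt-get -d install ntpdate

        # Install while ignoring the unmet base dependencies
        dpkg -i --force-depends /var/cache/apt/archives/ntpdate_*.deb

    A cleaner long-term option is the equivs package, which builds empty dummy .deb files declaring Provides: for whatever the busybox system already supplies, so apt's resolver is satisfied without force flags.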

    Read the article

  • Timesync on HyperV with CentOS 6.2

    - by WaldenL
    I've got a CentOS VM (release 6.2) running under Hyper-V. I have the integration services installed (part of the base distribution now), and CentOS shows the current clocksource is hyperv_clocksource, yet the time in the VM is about 10 minutes fast after a week of uptime. My understanding of the new IC and pluggable clocksource is that this shouldn't happen any more. Is there any additional configuration necessary to get the pluggable clocksource to "work"? I know there are plenty of links about setting kernel options to PIT and various stuff like that, but those all seem to pre-date the integrated clocksource support, and as I understand it they shouldn't be needed any longer, nor should ntpd or adjtimex.

    Read the article

  • Cloud computing with Eucalyptus?

    - by neolix
    Hi, greetings! I want to set up a small cloud using our old 2-core server systems. We are new to cloud systems and have googled for information. We are looking to host VMs on top of these servers; if anyone has done this, please share a document or how-to. We have 50-plus servers which we are not using: 2 cores each, 4GB RAM, 1TB HDD. CentOS is my base OS, and we are looking to host Windows guests. Right now we can use these servers only for paravirtualization. Please ignore my English. Thanks

    Read the article

  • Problems while applying an svn patch to a mercurial repository

    - by user26453
    The patch file is made with TortoiseSVN's Create Patch... command. I am attempting to import the patch into the Mercurial repository using hg import patchfile. The problem I'm running into is that there seem to be problems with how hg looks for files referenced in the patch file: unable to find 'gui/gui/RemoteFramework.cpp' for patching 2 out of 2 hunks FAILED -- saving rejects to file gui/gui/RemoteFramework.cpp.rej It seems to be an issue of where the patch was made in terms of directories versus where it should be applied. I have tried playing with the --base option for hg import, but haven't gotten anywhere just yet. Anyone have any tips?
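    The doubled gui/gui/ in the error suggests the strip count doesn't match the directory TortoiseSVN created the patch from. hg import takes a -p/--strip option (analogous to patch -p), so a hedged thing to try is varying it, with --no-commit so a bad attempt is easy to back out:

        hg import --no-commit -p 0 patchfile   # take the paths exactly as written in the patch
        hg import --no-commit -p 2 patchfile   # strip one extra leading component

    Failing that, applying the patch with GNU patch from the directory it was created in, then committing the result with hg, sidesteps the path question entirely.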

    Read the article

  • SSRS Client Print Module must be downloaded on every use

    - by Jmcgee73
    I am currently having an issue with SQL Server Reporting Services. Every time a user clicks the print button for a report, the user must install the ActiveX client print module. The issue is that our clients are not admins on their computers, so they cannot install the module. I have gotten around this roughly by adding the SQL server's address to trusted sites, setting "Download signed ActiveX controls" to enable, and then giving the users permission to write to "C:\Windows\Downloaded Program Files". However, this fix is not easy to distribute to our user base. I am using SQL Server 2008 SP3 CU3 running on Server 2008 R2. I believe the browser thinks the version is newer than what is on the server. I have tried downloading the print module CAB file from the SQL server and installing it manually. That did not work either. Thanks!

    Read the article
