Search Results

Search found 13004 results on 521 pages for 'pretty printing'.

Page 363/521

  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle, as the dependencies for the experimental nvidia-glx package are still somewhat broken. I got it to work without forcing the installation of any packages and without modifying the packages. However, after the upgrade compiz performance has been abysmal.

    I am using the desktop wall plugin, and switching viewports is really slow - it takes a few seconds for each switch. In addition, every effect that compiz does, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed changes relative to the number of windows on that virtual screen - empty screens switch at almost normal speed, single browser windows work almost decently, but just 4 rxvt terminals slow the switches down to a crawl.

    My compiz configuration should be pretty basic. Xorg is likewise configured without anything special - the only "custom" configuration is forcing the driver name to be "nvidia". I've fiddled around with nvidia-settings and compizconfig trying different VSync settings, but none of those helped.

    My graphics card is: NVIDIA GPU NVS 3100M (GT218) at PCI:1:0:0 (GPU-0). This is a laptop GPU from the GeForce GTX 200 series. Graphics card performance should naturally be no problem.

    Read the article

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

      D0: 500Gb (System, Boot)
      D1: 1Tb
      D2: 500Gb
      D3: 250Gb

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows Software RAID to mirror the OS. Partitions will be created as follows:

      D0 P1: 100Mb: System-Reserved Boot
      D0 P2: 50Gb: Virtual Machine VMDKs for OS
      D0 P3: 350Gb: Data
      D1 P1: 100Mb: System-Reserved Boot
      D1 P2: 50Gb: Virtual Machine VMDKs for OS
      D1 P3: 800Gb: Data
      D2 P1: 450Gb: Data
      D3 P1: 200Gb: Data

    This will result in:

      Mirrored boot partition
      Mirrored operating system
      Mirrored virtual machine OS disks
      Four partitions for data

    In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

      SYSTEM 100Mb - the small boot partition created by the Windows 7 installer (RAID-1)
      HOST 50Gb - the Windows 7 partition (RAID-1)
      GUESTS 50Gb - virtual machine guest VMDKs (RAID-1)
      VG1 900Gb - volume group consisting of a RAID-5 and two RAID-1s
      VG2 300Gb - volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for data that is not critical and is easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

    Read the article

  • Troubleshooting iptables and configuring it to drop the priority of long-term connections

    - by intuited
    I'm somewhat familiar with the general concepts of iptables, and would like to learn it in more detail. I'm hoping that my learning experience can also be useful.

    The situation: I'm running dd-wrt on my router. Despite its purported QoS skills, I'm still seeing connection latency shoot up hugely whenever there's an ongoing HTTP connection, e.g. some large download. Under such conditions, it can take 10 seconds or more to load a basic webpage; sometimes the connections are dropped entirely. I've tried adjusting the parameters, dropping the allotted bandwidth for upload and download to well under my limit, but nothing seems to work. dd-wrt is configured to use HTB as the QoS algorithm; HFSC, although presented as an option, seems to cause the router to crash, and is rumoured to not actually work on any Linux system.

    I'd like to be able to troubleshoot this issue and hopefully improve the settings that dd-wrt is using, but I'm finding the learning curve a bit overwhelming. For starters, I am not sure what HTB actually specifies: is it a set of iptables commands, or do some of those commands specify how HTB is to be used? I would like it to prioritize based on protocol the way it is already supposed to, and in addition I'd like it to drop the priority of connections which have a high total byte count, say over 400KB (see the sketch below).

    Also, tips on utilities that can be run under dd-wrt to get more info on what's going on in there are appreciated. I've tried to get iftop to work but there were issues running curses. I'm leaning towards replacing dd-wrt with OpenWrt; comments on this strategy are also welcome. I suspect that I would be well advised to get a second router as a standin before trying that. It may be worth noting that my total bandwidth is pretty limited (256Kbit/s).
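    A rough sketch of the "demote heavy connections" part, assuming the dd-wrt build ships the connbytes iptables match and that HTB classes 1:10 (normal) and 1:20 (bulk) already exist on the WAN interface - the interface name, mark value and class IDs below are illustrative assumptions, not dd-wrt's actual QoS layout:

      # Mark connections that have moved more than ~400KB in either direction (409600 bytes).
      iptables -t mangle -A POSTROUTING -o vlan1 -m connbytes \
          --connbytes 409600: --connbytes-dir both --connbytes-mode bytes \
          -j MARK --set-mark 20

      # Classify packets carrying that mark into the low-priority HTB class.
      tc filter add dev vlan1 parent 1: protocol ip prio 5 handle 20 fw flowid 1:20

    The idea is simply that the first 400KB of any connection rides in the normal class, and everything after that drops into the bulk class, so a long download stops competing with interactive traffic.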

    Read the article

  • Configure Domino to use SMTP routing and hMailServer

    - by Sébastien Lachance
    I have been trying for a couple of days to set up a Domino 8.5 server. Basically, I want everything to run inside a local network. Right now I can send email to other users in the Domino directory without any mail address. I am pretty new to all this stuff, so maybe the answer will be really obvious. What I need is to be able to send a mail from somewhere else to a Domino user and have it redirected to his account. On the Domino server, I also have hMailServer installed on port 25, so I configured Domino to use port 26.

    I followed these steps to get where I am now:

      - I set the fully qualified Internet host name to "preview.notes".
      - I changed the SMTP Listener task to Enabled to turn on the listener, so that the server can receive messages routed via SMTP routing.
      - I set up SMTP routing within the local Internet domain (http://www.h2l.com/help/help85%5Fadmin.nsf/f4b82fbb75e942a6852566ac0037f284/7f9738a49efc4f58852574d500097b01?OpenDocument).
      - I modified the person document to use the [email protected] address.
      - I'm using hMailServer (which has the local "preview.local" domain name) to send mail to [email protected].

    When sending mail I get an error telling me that the DNS is not set up correctly. Would using the Domino SMTP server instead of hMailServer solve the problem? I can telnet the Domino SMTP server.
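    One way to narrow down whether the DNS complaint comes from hMailServer's lookup or from Domino itself is to run a manual SMTP session against the port Domino listens on and see whether it accepts the recipient. A hedged sketch - the host names are the ones from the question, the local parts of the addresses are placeholders, and the lines after telnet are typed into the session by hand:

      telnet preview.notes 26
      HELO preview.local
      MAIL FROM:<test@preview.local>
      RCPT TO:<someuser@preview.notes>
      DATA
      Subject: routing test

      test body
      .
      QUIT

    If Domino accepts the RCPT TO and delivers the message, the SMTP routing side is fine and the problem is most likely how hMailServer resolves and relays to the Domino domain.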

    Read the article

  • (solved) `ssh foo "<command/>"` not loading remote aliases?

    - by TomRoche
    Summary: why does this fail

      $ ssh foo 'R --version | head -n 1'
      bash: R: command not found

    but this succeeds?

      $ ssh foo 'grep -nHe 'bashrc' ~/.bash_profile'
      /home/me/.bash_profile:3:# source the users .bashrc if it exists
      /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
      /home/me/.bash_profile:5: source "${HOME}/.bashrc"
      $ ssh foo 'grep -nHe "\WR\W" ~/.bashrc'
      /home/me/.bashrc:118:alias R='/share/linux86_64/bin/R'
      $ ssh foo '/share/linux86_64/bin/R --version | head -n 1'
      R version 2.14.1 (2011-12-22)

    Details: I am a (rootless) user on two clusters. One uses environment modules, so any given server on that cluster can provide (via module add) pretty much the same resources. The other cluster, on which I must also unfortunately work, has servers managed individually, so I got in the habit of doing, e.g.,

      EXEC_NAME='whatever'
      for S in 'foo' 'bar' 'baz' ; do
        ssh ${S} "${EXEC_NAME} --version"
      done

    This works fine for packages installed normally/consistently, but often (for reasons unknown to me) packages are not: e.g. (compare the alias below to the alias above),

      $ ssh bar 'R --version | head -n 1'
      bash: R: command not found
      $ ssh bar 'grep -nHe 'bashrc' ~/.bash_profile'
      /home/me/.bash_profile:3:# source the users .bashrc if it exists
      /home/me/.bash_profile:4:if [ -f "${HOME}/.bashrc" ] ; then
      /home/me/.bash_profile:5: source "${HOME}/.bashrc"
      $ ssh bar 'grep -nHe "\WR\W" ~/.bashrc'
      /home/me/.bashrc:118:alias R='/share/linux/bin/R'
      $ ssh bar '/share/linux86_64/bin/R --version | head -n 1'
      R version 2.14.1 (2011-12-22)

    Using aliases copes well with these install differences when I interactively shell into the server, but fails when I try to script ssh commands (as above); i.e., interactively

      $ ssh foo
      foo> R --version

    calls my alias for R on remote host=foo, but scripting

      $ ssh foo 'R --version'

    doesn't. What do I need to do to make ssh foo "<command/>" load my aliases on the remote host?
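    For reference, the usual explanation is that `ssh host 'cmd'` runs a non-interactive shell, so aliases are not expanded (and ~/.bashrc may bail out early for non-interactive shells). A hedged workaround sketch, assuming bash on the remote side:

      # Option 1: force an interactive shell so ~/.bashrc is read and aliases expand
      ssh foo "bash -ic 'R --version | head -n 1'"

      # Option 2: enable alias expansion explicitly, using real newlines so the alias
      # defined by ~/.bashrc is already known when the line that uses it is parsed
      ssh foo 'shopt -s expand_aliases
      source ~/.bashrc
      R --version | head -n 1'

    The newlines in option 2 matter: bash expands aliases when a line is read, not when it is executed, so putting everything on one semicolon-separated line would still fail.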

    Read the article

  • HP LaserJet 2550 has a carousel motor error

    - by Arlen Beiler
    I have a LaserJet 2550, and it's worked pretty good for a long time (except for some slowness a while back, spooling I think), but just recently it suddenly quit working. We moved this summer, but left it at our other place, and just recently when my Dad went over there to try to print something out, it didn't work. When you turn it on, you hear the fan give a false start (basically a quick pulse), and the carousel goes through its usual thing. Then it starts up in earnest like it's getting ready to print something. All of a sudden it just stops. Everything stops, and the three lower lights are steady. When I push the Go button, the Go light (bottom of the 3) turns off, but the other two stay on. I looked it up on the HP website and it says it is a carousel motor problem. I called HP, but they said it is out of warranty. I've opened the cover and held the switch with a screw driver so I could watch it, and it goes through its thing like I described (doesn't seem to make a difference whether the imaging drum is in or not), then when it stops it kind of seems to jump back a little bit (the carousel). I hope this all makes sense (I know you like details), and hopefully you also know what to do to fix it. Thanks.

    Read the article

  • BGP Multihomed/Multi-location best practice

    - by Tom O'Connor
    We're in the process of designing a new iteration of our network where we improve resiliency by adding a second datacentre, with an identical configuration of servers to our primary location. To achieve network connectivity, we're looking into a couple of possible methods. See earlier questions http://serverfault.com/questions/86736/best-way-to-improve-resilience and http://serverfault.com/questions/101582/dns-round-robin-failover-and-load-balancing. I'm pretty convinced that BGP is the right way to go about this, and this question is not about RRDNS.

    1) If we have 2 locations, do we announce the same IP address block from both locations?

    2) If we did this, but had a management SSH interface on x.x.x.50 in datacentre A and on x.x.x.150 in datacentre B, what is the best practice mechanism for reaching each one? Because if I were nearest to A, then all my traffic would go to x.50, but if I attempted to connect to x.150, I'd not be able to connect, because this address wouldn't be valid at A, only at B.

    Is the best solution to announce 2 different netblocks, one at each location, bringing back the need for RRDNS, or to announce a single block and run some form of VPN between the two sites for management traffic?

    Read the article

  • Lighttpd - byte range request doesn't work. can't stream mp4

    - by w-01
    I'm attempting to use the latest Flowplayer (if it could work it would be pretty awesome, btw): http://flowplayer.org. One of the cool things about it is that it uses the new HTML5 video element and supports random seeking/playback. In order to do this, you need a byte-range-request-capable server on the backend. Luckily I'm using Lighttpd 1.5.0 on the backend. Unfortunately the current behavior is that when I do a random seek, the video simply restarts itself from the beginning.

    The docs say: "For HTML5 video you don't have to do any client side configuration. If your server supports byte range requests then seeking should work on the fly. Most servers including Apache, Nginx and Lighttpd support this."

    On my page, using Chrome's web developer tools, I can see that when the video is requested, the server response headers indicate it is able to accept byte ranges: Accept-Ranges: bytes. When I do a random seek in the player, I can see that byte ranges are requested appropriately in the request header: Range: bytes=5668-10785. I can also verify the moov atom is at the front of the video file.

    My question here is whether there is something else on the lighttpd side I'm missing in order to enable byte-range requests. The reason I ask is that the current behavior suggests lighttpd simply doesn't understand the byte range request and is just re-serving the video from the beginning.

    Update: it's clearer to put this here. As per RJS' suggestion I ran a curl command. In the response it looks like lighttpd is working as expected: Content-Range: bytes 1602355-18844965/18844966, Content-Length: 17242611.
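    For anyone checking the same thing, the curl test looks roughly like this (URL is a placeholder); a 206 Partial Content status plus a matching Content-Range header means the server honours range requests, and the restart-from-zero behaviour is coming from somewhere else (player or a caching layer in between):

      # Ask for bytes 1602355 to the end of the file and show only the response headers
      curl -s -D - -o /dev/null -H "Range: bytes=1602355-" http://example.com/video.mp4
      # Expect: "HTTP/1.1 206 Partial Content" and a Content-Range: bytes 1602355-.../... header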

    Read the article

  • Software distribution from web server to client using PHP/FTP

    - by Jenolan
    I develop and maintain a number of add-ons and utilities for various widgets (mainly aMember), which generally means I need to install PHP-based code onto other people's systems. Whilst I have a VPS and have access to rsync and all sorts of yummy tools, most of the people I deal with have basic FTP access and that's all, folks. Uploading from my local system is also a problem, as I am satellite-based (two-way), so it is fairly slow and expensive, and in any case the files are already on my server. So there is no rsync, fxp or ssh, and I can't really install anything as it is obviously not my system; they would be justifiably miffed if I started installing file managers or other things onto their sites.

    What I have been trying to find is a utility that I can run on my server from the web, preferably PHP based, that will be like a file manager but a bit different. Two panels: on the left-hand side the local server, pretty much like a standard file manager application; on the right-hand side the ability to log in via FTP to the client's system. Then I can fiddle as required. The closest thing I have found is net2ftp, but it doesn't have the GUI interface. At the moment I simply ssh into my server, power up ncftp and run that way, but something easier to use would be mucho niceness. Thanks in advance! Larry
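    Not the two-panel web file manager asked for, but as a stopgap the upload step itself can be scripted straight from the VPS shell with curl's FTP support, so nothing has to cross the satellite link; the host, credentials and paths below are placeholders:

      # Push a packaged add-on from the VPS to a client's site over plain FTP
      curl -T amember-addon.zip ftp://ftp.clientsite.example/public_html/modules/ \
           --user clientuser:clientpass

    The trailing slash on the remote directory keeps the original filename; repeating the -T flag (or using a glob) uploads several files in one run.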

    Read the article

  • Can an image based backup potentially corrupt data?

    - by ServerAdminGuy45
    I'm considering doing image-based backups (Acronis) on production Windows systems during non-peak hours. I'm just wondering if they can potentially lead to application data corruption. Let's say that I have a database that is getting hit pretty hard. Could the beginning blocks of the database be committed to the image, data then be inserted into the db (which changes the beginning blocks of the DB on the server but not in the image), and then the later blocks be committed to the image, leading to an inconsistent state?

    Here's an example of what I'm trying to illustrate. Imagine a simple data structure which has a number at the front representing the number of "a"s in the file. The number and data are delimited by a "-". For example:

      4-ajjjjjjjajuuuuuuuaoffffa

    If an "a" is changed, the data structure resets the number at the beginning of the file, such as:

      3-ajjjjjjjajuuuuuuuboffffa

    I assume Acronis writes block by block, being a straight-up image, so here is what I'm envisioning happening with my database:

      t0: 4-ajjjjjjjajuuuuuuuaoffffa  (pointer at the start; nothing committed yet)
      t1: 4-ajjjjjjjajuuuuuuuaoffffa  (pointer has advanced; all data before it is committed to the image)
      t2: 4-ajjjjjjjajuuuuuuuboffffa  (pointer has advanced; all data before it is committed to the image - note one of the "a"s has changed to a "b", so there are only 3 "a"s now)
      t3: 4-ajjjjjjjajuuuuuuuboffffa  (pointer at the end; all data before it is committed to the image)

    The final image now reads "4-ajjjjjjjajuuuuuuuboffffa", while the true data is "3-ajjjjjjjajuuuuuuuboffffa", leading to a corrupt "database". Basically, changes further along the chain of blocks could be reflected in the image while important header and synchronization data has already been committed. The out-of-date header information doesn't accurately reflect the structure of the blocks to come.

    Read the article

  • Running the LibreOffice MSI installer in English

    - by Scott Severance
    I'm trying to install LibreOffice on a machine running a Korean version of Windows XP. I don't know Korean, and I haven't used Windows with any frequency in many years, so I'm pretty lost. When I run the installer, it shows up in Korean; but I want to customize the installation, so I need the installer to be in English. Googling took me to this page, where I found an example command to run the installer in Gaelic, which I modified for my system as follows:

      msiexec /i LibO_3.6.1_Win_x86_install_multi.msi TRANSFORMS=:1084

    This works, except that I know less about Gaelic than I do about Korean. The help page provided a link to a page where I could look up the language ID codes. From that page, I determined that the correct code was 1033 for US English and 2057 for UK English. When I substituted the code, I got an error message. Here's the message as translated by Google, followed by the original:

      Transform can not be applied. Verify that the specified transform paths are valid.
      ?? ??? ??? ? ????. ??? ?? ??? ???? ?????.

    I can't very well search on a machine translation, so I don't know where to go from here. What is the problem? How can I make the installer operate in English? Alternatively, how can I change XP to display its interface in English, while keeping full functionality for typing in Korean?

    Read the article

  • Linux: Force fsck of a read-only mounted filesystem?

    - by Timothy Miller
    I'm developing for a headless embedded appliance, running CentOS 6.2. The user can connect a keyboard, but not a monitor, and a serial console would require opening the case, something we don't want the user to have to do. This all pretty much obviates the possibility of using a recovery USB drive to boot from, unless all it does is blindly reimage the harddrive. I would like to provide some recovery facilities, and I have written a tool that comes up on /dev/tty1 in place of getty to provide these functions. One such function is fsck. I have found out how to remount the root and other file systems read-only. Now that they are read-only, it should be safe to fsck them and then reboot. Unfortunately, fsck complains to me that the filesystems are mounted and refuses to do anything. How can I force fsck to run on a read-only mounted partition? Based on my research, this is going to have to be something obscure. "-f" just means to force repair of a clean (but unmounted) partition. I need to repair a clean or unclean mounted partition. From what I read, this is something "only experts" should do, but no one has bothered to explain how the experts do it. I'm hoping someone can reveal this to me. BTW, I've noticed that e2fsck 1.42.4 on Gentoo will let you fsck a mounted partition, even mounted read-write, but it seems only to do so if fsck is run from a terminal, so it can ask the user if they're sure they want to do something so dangerous. I'm not sure if the CentOS version does the same thing, but it appears that fsck CAN repair a mounted partition, but it flatly refuses to when not run from a terminal. One last-resort option is for me to compile my own hacked fsck. But I'm afraid I'll mess it up in some unexpected way. Thanks! Note: Originally posted here.
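    A hedged sketch of the two approaches I'd try from the recovery tty, assuming ext3/ext4 and a placeholder device name: run e2fsck directly after the read-only remount, and if it still refuses because it isn't attached to a terminal, fall back to scheduling a forced check at the next boot.

      # 1) Remount read-only, then check in place (e2fsck may still balk when not run from a tty)
      mount -o remount,ro /
      e2fsck -f -y /dev/sda1

      # 2) Fallback: force a full fsck on the next boot instead
      touch /forcefsck            # honoured by CentOS 6's SysV init scripts
      tune2fs -c 1 /dev/sda1      # or: make the mount count trigger a check at every boot
      reboot

    The second route sidesteps the "is it mounted?" question entirely, since the boot-time check runs before the filesystem is mounted read-write.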

    Read the article

  • nginx + @font-face + Firefox / IE9

    - by Philip Seyfi
    Just transferred my site from shared hosting to a Linode VPS, and I'm also completely new to nginx, so please don't be harsh if I missed something evident ^^

    I've got my WordPress site running pretty well on nginx & MaxCDN, but my @font-face fonts (served from cdn.domain.com) stopped working in IE9 and FF (@font-face failed cross-origin request. Resource access is restricted.). I've googled for hours and tried adding all of the following to my config files:

      location ~* ^.+\.(eot|otf|ttf|woff)$ {
        add_header Access-Control-Allow-Origin *;
      }

      location ^/fonts/ {
        add_header Access-Control-Allow-Origin *;
      }

      location / {
        if ($request_filename ~* ^.*?/([^/]*?)$) {
          set $filename $1;
        }
        if ($filename ~* ^.*?\.(eot)|(otf)|(ttf)|(woff)$) {
          add_header 'Access-Control-Allow-Origin' '*';
        }
      }

    With all of the following combinations:

      add_header Access-Control-Allow-Origin *;
      add_header 'Access-Control-Allow-Origin' *;
      add_header Access-Control-Allow-Origin '*';
      add_header 'Access-Control-Allow-Origin' '*';

    Of course, I've restarted nginx after every change. The headers just don't get sent at all, no matter what I do. I have the default Ubuntu apt-get build of nginx, which should include the headers module by default... How do I check what modules are installed, or what else could be causing this error?
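    Two quick checks that narrow this down (hostnames below are placeholders): nginx -V lists the compile-time configuration, so you can confirm nothing relevant was disabled in the Ubuntu build, and a header-only curl against the origin versus the CDN hostname shows whether the header is missing at nginx itself or just not being passed through by MaxCDN's cached copy of the font.

      # Show the configure arguments / compiled-in modules of the running nginx build
      nginx -V 2>&1

      # Does the CORS header come back for a font? First from the origin, then from the CDN
      curl -s -I http://www.example.com/wp-content/themes/mytheme/fonts/font.woff | grep -i access-control
      curl -s -I http://cdn.example.com/wp-content/themes/mytheme/fonts/font.woff | grep -i access-control

    If the origin returns the header but the CDN doesn't, the fix is on the CDN side (purge the cached fonts or enable custom-header pass-through), not in nginx.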

    Read the article

  • Windows 7: Image thumbnails fail to appear

    - by Fopedush
    All right, I've got a pretty strange one here. Since I installed Windows 7 on this machine some time ago, image thumbnails have never worked properly. For the vast majority of images, they completely fail to show up, showing the icon of the default image viewing application instead. Please note that the “Always show icons, never thumbnails” option in folder options is not checked. I've taken a screenshot demonstrating the problem, here: Sometimes, a few image thumbnails will show up correctly, maybe about one in ten, with the rest failing as well. Another screenshot, with a handful of thumbnails visible, can be seen here: Windows explorer does not appear to make any effort to populate the missing thumbnails, and there is no appreciable CPU usage. I can leave a window with missing thumbnails open all night and they will still never appear. Newly created images never generate thumbnails, only images that have been on the system since day one will occasionally show them. This leads me to believe that explorer is set to show thumbnails, but whatever process is supposed to be in charge of actually generating them has failed somehow. In previous versions of windows, explorer.exe itself was responsible for thumbnail generation – has this changed? Any suggestions at all – even if you aren't sure that they will work – are welcome. I'd hate to have to wipe and reinstall on this machine for such a minor annoyance.

    Read the article

  • New keyboard for linux: Adesso Tru-Form or MS Natural Keyboard 4000?

    - by Andrea
    Hi folks! I'm going to buy a new ergonomic keyboard for my laptop. In the following, keep in mind I live in Italy. I considered the following models:

    Adesso PCK-308UB - Adesso Tru-Form™ Pro - Contoured Ergonomic Keyboard with TouchPad-PS2

      Pros:
      - has a built-in touchpad in the same position as my laptop's
      - somewhat cheaper than the alternative below

      Cons:
      - the surface doesn't seem to be bowl-shaped; keys seem to lie on a straight, slightly inclined surface. It seems an idea used extensively in other ergonomic keyboards.
      - according to a few comments on the net, new Adesso keyboards seem to lack robustness and are likely to lose small parts after a few weeks or months. Other users, instead, seem to never have had any problem in years and swear by their quality and comfort. Those who had problems, however, lamented a lack of responsiveness from the manufacturer.
      - I'm not sure whether the keyboard (at least the standard keys) and the touchpad will both be recognized correctly under Linux distros (I mostly use FC, btw)
      - last time I checked, Adesso didn't have local resellers in my country

    Microsoft Natural Ergonomic Keyboard 4000

      Pros:
      - recognized as one of the most comfortable keyboards
      - reliable customer service operating in my country
      - AFAIK there are several documented ways to get the extra buttons to work with Linux

      Cons:
      - it doesn't have a built-in touchpad, and it has a numeric keypad wasting space on the way to the mouse

    But there could be other keyboards I haven't considered yet, so here follows my ideal keyboard wishlist, ordered by priority:

      1. Linux compatible
      2. basic ergonomic design, which entails a split, tilted keyboard and pads
      3. advanced ergonomic design, like true-ergonomic's or Kinesis', where special keys (like Enter, Caps Lock...) are placed symmetrically in the middle to be used by the thumbs
      4. a built-in touchpad/trackball placed under the keyboard. I just love this on my notebook; I think it's pretty effective, since it allows my hand to rest naturally every time I use it. Any opinion on this?
      5. high-quality switches, like Cherry's (unsure about this one)
      6. additional programmable keys placed near the usual ones, to simplify typing shortcuts

    TIA, Andrea

    Read the article

  • How can I remotely tell what brand/model internal SCSI card is installed in a machine?

    - by edmicman
    I am doing some consulting work for a previous employer, upgrading and migrating old servers to new hardware. There is an existing file server (HP ProLiant DL380) that has a tape backup drive connected; it uses a SCSI interface and I'm pretty sure it's an internal SCSI card. They are upgrading to new server hardware (HP ProLiant DL160 G6). The old server is 2U, the new one 1U, and we want to move the tape drive to the new server too. I'm trying to figure out whether the SCSI card in the old server can be installed in the new one or whether we'll need to source a new card; mostly I don't know for sure the height of the card and whether it's low-profile enough to fit in the new server. There is not much of a technical resource onsite and the old server is in use anyway, so I would like to avoid making a trip in myself or having someone onsite pop open the case and tell me what card is there. It's running Windows Server 2003 - is there a way to tell from, say, Device Manager what make and model the SCSI card might be? Or any other system diagnostic program or something that would give me hardware info like that? Thanks for any info!
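    Since the box is Server 2003, WMI can be queried from a local cmd session (or remotely) without installing anything; a sketch, with the remote host and account as placeholders, that usually returns the controller's marketing name - enough to look up the card's bracket height on the vendor's site:

      :: Run locally on the old server, or add /node:SERVER_NAME /user:DOMAIN\admin to query it remotely
      wmic path Win32_SCSIController get Name,Manufacturer,DriverName

      :: The PCI vendor/device IDs are often more reliable than the friendly name
      wmic path Win32_SCSIController get PNPDeviceID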

    Read the article

  • OSX: Mimic Ubuntu IP Masquerading via iptables with ipfw

    - by Dogbert
    Good day, I am attempting to replicate a setup I have between a router and an Ubuntu PC, and have the same setup working on my MacBook (10.6, Snow Leopard). First, I have a router that has a USB port. When I plug it into my Ubuntu PC, it creates an RNDIS connection, allowing me to connect to the router over the USB cable via an IP connection. When I plug it into my computer via USB, it gets assigned an IP address of 172.16.84.1, and a new adapter appears when I type ifconfig. I can then SSH into the device via ssh [email protected]. When I log in to the device, I flush the routes, then create the default route: admin@localhost> route -f admin@localhost> route add default 172.16.84.2 Now, in my Ubuntu machine, I use iptables to enable IP masquerading: root@Valhalla> sudo iptables -t nat -A POSTROUTING -s 172.16.84.2 -j MASQUERADE Once this is all done, the router has internet access over the USB connection to my PC. I am trying to replicate this exact setup on my MacBook now (Snow Leopard), but iptables does not exist for OSX, not even a Macports version exists. I have scoured through other questions on StackOverflow that cover the usage of the ipfw command, which apparently works as a drop-in replacement for iptables. However, the syntax is significantly different, and I'm pretty much lost. Does anyone with some experience with ipfw have some suggestions on how I could accomplish this and create a NAT connection via IP masquerading like I could with my Ubuntu PC? Thank you for your assistance.

    Read the article

  • Squid Authentication & streaming

    - by Steve Butler
    I've got Squid set up using Kerberos authentication. I'm also using squidGuard as a URL redirector to block out the usual nastiness of the web. There are some sites, though, that we allow certain users to reach, and others not. This all works well, assuming I'm not using any streaming. From what I can determine from the Squid logs and the Wireshark traces I've done, when the initial request to stream is sent, everything is good: the authenticated username is sent with the request to squidGuard. The problem is that on subsequent traffic the username is not sent to squidGuard, causing it to be blocked based on the default policy. I've tried using Squid's built-in allow/deny stuff, but it's relatively clunky, and so far squidGuard has been pretty easy and fast.

    Here come the questions:

      1. How do I get Squid to pass the username on all requests? (Something tells me this isn't the best way.)
      2. How do I get squidGuard to see that traffic is authenticated to a specific user even when a username isn't passed?
      3. Is there any other way of accomplishing this?

    A few details that may be of importance: I'm using a list of users stored in a text file for squidGuard to compare against. I'm using full Kerberos auth with Squid. CentOS 6.0, Squid 3.1.4, squidGuard 1.3.

    Read the article

  • ISA 2006 SP1 - SSL Client Certificate Authentication in Workgroup Environment

    - by JoshODBrown
    We have an IIS6 website that was previously published using an ISA 2006 SP1 standard server publishing rule. In IIS we had required a client certificate be provided before the website could be accessed... this all worked fine and dandy. Now we wish to use a web publishing rule on ISA 2006 SP1 for this same website. However, it seems the client certificate doesn't get processed now, so of course the user can't access the website. I've read a few articles stating the CA for the certificate needs to be installed in the trusted root certificate authorities store on the ISA Server (i have done this), as well as installing the client certificate on the ISA Server (done as well). I have also verified that the ISA Server is able to access the CRL for our CA no problem... In the listener properties for the web publishing rule, under Authentication, and Client Authentication Method, there is an option for SSL Client Certificate Authentication... i select this, but it appears the only Authentication Validation Method selectable is Windows (Active Directory).... there is no Active Directory in this environment. When i configure the rule with the defaults, I then try to hit my website and it prompts for my certificate, i choose it and hit ok... then I'm given the following error Error Code: 500 Internal Server Error. The server denied the specified Uniform Resource Locator (URL). Contact the server administrator. (12202) I check the event logs on the ISA Server and in Security Logs, i see Event ID 536, Failure Aud. The reason: The NetLogon component is not active. I think this is pretty obvious since there is no active directory available. Is there a way to make this web publishing rule work using client certificates in this workgroup environment? Any suggestions or links to helpful documents would be greatly appreciated!

    Read the article

  • SQL Server Analysis Services, DNS, AD, Kerberos, Connection Issues

    - by ScaleOvenStove
    Running into a very weird issue converting servers to Windows 2008/SQL 2008. I have a server, SERVER_A, brand new, set up with Win2k8/SQL2k8 - it works. I have a server, SERVER_B, running Windows 2003/SQL 2005. I want to migrate from SERVER_B to SERVER_A. I have all dbs, cubes, etc. set up on SERVER_A and it is mimicking functionality. Since users are using Excel to connect to SSAS, their connection string has SERVER_B in it. What I want to do is change DNS on the network to point SERVER_B (by name) at the IP of SERVER_A.

    I have successfully done this with another server, SERVER_C, but I need to do it with SERVER_B. What I have found is that with SERVER_C, after changing DNS, I had to remove SERVER_C from AD and then it worked. I could connect to SERVER_C (DB), SERVER_C (SSAS default instance) and SERVER_C (SSAS named instance), and it all was actually connecting to SERVER_A.

    I tried to do the same with SERVER_B, and no luck. Changed DNS, removed it from AD, and it wouldn't connect. Found out that there were some SPNs set up in AD, so removed those and tried again. I then could connect to SERVER_B (DB) and SERVER_B (SSAS named instance), but not SERVER_B (SSAS default instance). I could connect to SERVER_B (SSAS default instance WITH the port number), but I need to be able to connect without the port number. I am at a loss as to why I can't connect to the default instance without a port number. Not sure if it is SPNs in AD, another AD issue, or something else. Pretty sure it isn't something on the server (because SERVER_C works!). Any insight or suggestions would be greatly helpful!!
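    If it is SPN-related, the usual check is to compare what is registered for the two names. Analysis Services registers under the MSOLAPSvc.3 service class, and the default instance is the one resolved purely by host name, which matches the symptom (the named instance and the port-qualified connection skip that lookup). A sketch with placeholder account and domain names:

      :: List every SPN currently registered against each computer object / service account
      setspn -L SERVER_A
      setspn -L SERVER_B
      setspn -L DOMAIN\ssas_service_account

      :: If the default-instance SPN is missing for the name clients actually use, add it
      setspn -A MSOLAPSvc.3/SERVER_B DOMAIN\ssas_service_account
      setspn -A MSOLAPSvc.3/SERVER_B.corp.example.com DOMAIN\ssas_service_account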

    Read the article

  • Desktop.ini Issues/Confusion

    - by EpicDavi
    BACKSTORY: I was out of town for a while and I forgot to turn my computer off. When I came back I saw that a desktop.ini file was on my desktop (using Windows 7). I thought that was odd because I knew it was a system file and it usually didn't show up, due to the fact that I had disabled the option to show system files. Also, it wasn't translucent like the other system files. I went to my Control Panel and saw that "Hide protected operating system files" was indeed enabled. This puzzled me, so I disabled the setting, and another one showed up on my desktop, hidden like system files usually are. So now I have two desktop.ini files on my desktop: one hidden and one not hidden. I am doing an antivirus check to see if anything is going on and I will give an update soon. I am pretty sure these files are harmless and could be deleted, but I would rather get another person's opinion on the subject. Thanks!

    UPDATE: I did an anti-virus scan and it seems I have no problems. It is odd because the file seems to maintain system file properties, such as not being able to be edited, among other things. Also, I have tried restarting my computer and it is still not hidden. So the question remains: what should I do with the file, and what caused it?
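    For what it's worth, desktop.ini only stores folder view settings, and seeing "two" on the desktop is usually just the merged view of your own Desktop folder plus the Public one. A hedged guess at the visible copy's problem is that it has simply lost its hidden/system attributes, which can be checked and restored from a command prompt (paths assume the default profile locations):

      :: Show the attributes of the desktop.ini in each desktop folder
      attrib "%USERPROFILE%\Desktop\desktop.ini"
      attrib "%PUBLIC%\Desktop\desktop.ini"

      :: Re-apply the system + hidden attributes so Explorer hides the stray one again
      attrib +s +h "%USERPROFILE%\Desktop\desktop.ini"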

    Read the article

  • NATing IPv4 while routing IPv6

    - by Hugo
    I've got the following setup:

      client(s) <---> (eth0) router (eth1) <---> wan

    I have a static IPv4 address and a /48 IPv6 address block. I need to connect all the clients to (wan). Each client will have its own public IPv6 address. Meanwhile, I need to NAT those same clients over to (wan). Everything IPv4-related, including the NAT, is working fine. The IPv6 communication between the clients and (eth0) works fine, as does the IPv6 communication between (wan) and (eth1).

    To provide IPv6 to all my clients, I've thought of two choices:

      1. Have the router act as a gateway, with a different IP on each interface. This sounds like I need to tell my ISP to route the entire block through that single IP, so it's not really an option.
      2. Transparently pass IPv6 packets between eth0 and eth1, so all clients can communicate with the upstream gateway (I would actually have a switch here if it weren't for the need to remain IPv4 compatible).

    So, since I've opted for the second choice, I'm in doubt: how can I pass all IPv6 traffic from eth0 to eth1 transparently? What I need is a level 3 bridge, but Linux's bridge-utils create a level 2 bridge (which would bridge IPv4 as well, and I can't have that). This is a DD-WRT device, but it's pretty much an embedded Linux, so most suggestions that would work on Linux are welcome. Thanks.
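    A commonly suggested sketch for exactly this "bridge IPv6 but keep routing IPv4" split: create the bridge as usual, then use ebtables' broute table to pull IPv4 and ARP out of the bridging path so they continue to be routed/NATed on eth0 and eth1. Whether the DD-WRT build ships ebtables is an assumption worth checking first.

      # bridge the LAN-side and WAN-side interfaces
      brctl addbr br0
      brctl addif br0 eth0
      brctl addif br0 eth1
      ifconfig br0 up

      # In the broute table, DROP means "do not bridge this frame, hand it to the normal routing stack".
      # So IPv4 and ARP keep being routed/NATed on eth0/eth1, while everything else (IPv6) is bridged.
      ebtables -t broute -A BROUTING -p IPv4 -j DROP
      ebtables -t broute -A BROUTING -p ARP  -j DROP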

    Read the article

  • Rails 3 passenger nginx application spawner server error on Synology NAS

    - by peresleguine
    Question updated, please read UPD2. I'm trying to deploy an app through the Passenger nginx module on a DS710+ (ruby 1.9.2p0 installed). There is a syntax error relative to the has_and_belongs_to_many_association.rb file. Please look at the screenshot (deleted, question updated). I'm pretty sure the problem isn't in the library file. The app runs fine via WEBrick. Could you please advise what to look for?

    UPD1

      ruby -v
      ruby 1.9.2p0 (2010-08-18 revision 29036) [i686-linux]

      gem list -d passenger
      *** LOCAL GEMS ***
      passenger (3.0.6)
          Author: Phusion - http://www.phusion.nl/
          Rubyforge: http://rubyforge.org/projects/passenger
          Homepage: http://www.modrails.com/
          Installed at: /usr/lib/ruby/gems/1.9.1
          Easy and robust Ruby web application deployment

    UPD2

    I've decided to reinstall everything. It solved the previous problem but caused another one. The error is: "The application spawner server exited unexpectedly: Unexpected end-of-file detected." Here is the screenshot. New output:

      ruby -v
      ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-linux]

      gem list -d passenger
      *** LOCAL GEMS ***
      passenger (3.0.7)
          Author: Phusion - http://www.phusion.nl/
          Rubyforge: http://rubyforge.org/projects/passenger
          Homepage: http://www.modrails.com/
          Installed at: /usr/lib/ruby/gems/1.9.1

    Nginx error.log:

      [ pid=5653 thr=32771 file=ext/common/Watchdog.cpp:128 time=2011-04-20 14:08:34.505 ]: waitpid() on Phusion Passenger helper agent return -1 with errno = ECHILD, falling back to kill polling
      [ pid=5654 thr=49156 file=ext/common/Watchdog.cpp:128 time=2011-04-20 14:08:34.506 ]: waitpid() on Phusion Passenger logging agent return -1 with errno = ECHILD, falling back to kill polling
      2011/04/20 14:12:33 [notice] 7614#0: signal process started

    Read the article

  • Graphics and USB devices freezing soon after OS loads

    - by Andrew
    I run an Ubuntu/Windows dual boot. Last night I started the upgrade to Ubuntu 12.04, and my computer has not worked since, in either Windows or Ubuntu. Here's what I got when I rebooted after the upgrade, and continue to get every time I boot:

      - It gets to the GRUB screen OK.
      - Choose Ubuntu - black screen or crazy purple lines. At first I assumed something went wrong with the upgrade (often happens).
      - Choose Windows - works fine, I log in, but soon after that the graphics freeze (sometimes with purple artifacts). The keyboard and mouse (both USB) also lose power at the same instant, and none of the USB ports have power to them. This happens sooner or later every time I boot. Update: the HDD also appears to lose power at the same point.

    I have tried a live CD, but my computer refuses to boot any CD, even after disabling all other boot options in the BIOS. I have disconnected everything except keyboard, mouse, graphics card with one monitor, one RAM stick and the HDD; no change. I also took the little battery out to reset the CMOS.

    I am pretty sure that no matter how wrong the Ubuntu upgrade went, it wouldn't cause the above symptoms in Windows. So the only explanation I can think of is that a hardware failure occurred at the same time. Some possible causes I can think of are:

      - A couple of days before this, I added a third screen (which worked fine).
      - About a week before, my house lost power in a storm (no ill effects over the past few days, though).

    What can I do, other than buy a new motherboard/CPU and hope it works? Unfortunately I don't have another box to swap parts into for testing at the moment.

    Read the article

  • If spambots are filling out forms on my website, should I mark the mail as spam?

    - by rob
    Apparently spambots have discovered the contact forms on my website. I'm considering adding a CAPTCHA, but in the meantime I've been getting dozens of completed website forms in my Gmail account each day. I know I can configure a filter to always allow e-mails from my website to be delivered (i.e., "never send to spam"), but I'm wondering whether there are also larger implications. At first, I was marking the spam forms as spam because I figured it would block the sender's e-mail address from getting through again. But since then, I've given it some more thought. I know many spam filters use Bayesian filters and other techniques to identify spam based on its content. If I mark all these e-mails as spam, will legitimate e-mails from my website be marked as spam? Even if I stop marking the messages as spam, I can see another potential problem: my form is pretty standard (in fact, it's probably very similar to other online forms). If I mark my spambot-submitted website forms as spam, will Gmail begin to mark other people's similar-looking messages as spam?

    Read the article
