Search Results

Search found 15129 results on 606 pages for 'orientation changes'.


  • How to change the spell checking language in Chromium?

    - by mote
    When I write I need spell checking in Danish or English (one at a time, not both at once), but changing from one language to the other does not work well for me. Example: I am writing an email in English and everything is underlined as misspelled because the spelling language is set to Danish. I then right-click the text field, choose "Spell-checker Options" and set the language to English, but it does not change the language. Only after I try maybe 4-5 times does it take effect. Selecting the text, clicking it, clicking the background, clicking the text again - I have tried it all, but I cannot figure out how it works. Sometimes it changes on the first try, sometimes I have to do it 6-7 times. Searching brings up a known bug stating that Chromium does not re-check the text after changing the language, but that is not what drives me crazy; for starters, being able to change the language on the first try would be nice. It is not a fault in my installation - I have the same problem on 3 computers. Does anyone know something that I don't?

    Read the article

  • Under FreeBSD, can a VLAN interface have a smaller MTU than the primary interface?

    - by larsks
    I have a system with two physical interfaces, combined into a LACP aggregation group. That LACP channel carries two VLANs, one untagged (the "native vlan") and one using VLAN tagging. This gives us:

        lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
                options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4>
                ether 00:25:90:1d:fe:8e
                inet 10.243.24.23 netmask 0xffffff00 broadcast 10.243.24.255
                media: Ethernet autoselect
                status: active
                laggproto lacp
                laggport: em1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
                laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        vlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
                options=3<RXCSUM,TXCSUM>
                ether 00:25:90:1d:fe:8e
                inet 10.243.16.23 netmask 0xffffff80 broadcast 10.243.16.127
                media: Ethernet autoselect
                status: active
                vlan: 610 parent interface: lagg0

    Is it possible to set a 9K MTU on lagg0 while preserving the 1500-byte MTU on vlan0? Normally I would simply try this out, but this is actually on a vendor-supported platform and I am loath to make changes "behind the back" of their administration interface. This system is roughly FreeBSD 7.3.
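
    For reference, a minimal sketch of how one would test this on a disposable FreeBSD box; the interface names are taken from the question, but the 9000-byte value is illustrative and whether the driver keeps the smaller MTU on the tagged child is exactly the open question, so this is a test procedure rather than a confirmed answer:

        # Raise the MTU on the lagg parent, then pin the VLAN interface back to 1500
        ifconfig lagg0 mtu 9000
        ifconfig vlan0 mtu 1500
        ifconfig vlan0    # verify that vlan0 was not bumped along with the parent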

    Read the article

  • Issues with VSFTPD / FTP on Linux Ubuntu server - Steps for Troubleshooting?

    - by jnolte
    I am dealing with an issue I am unclear on how to resolve and have been pulling my hair out for some time. I have been trying to configure an FTP user using the following steps (we use this same documentation on all servers):

    Install the FTP server:
        apt-get install vsftpd
    Set local_enable and write_enable to YES and anonymous_enable to NO in /etc/vsftpd.conf, then restart (service vsftpd restart) for the changes to take effect.
    Add a WordPress user for FTP access in WP Admin.
    Create a fake shell for the user: add "/usr/sbin/nologin" to the bottom of the /etc/shells file.
    Add an FTP user account:
        useradd username -d /var/www/ -s /usr/sbin/nologin
        passwd username
    Add these lines to the bottom of /etc/vsftpd.conf:
        userlist_file=/etc/vsftpd.userlist
        userlist_enable=YES
        userlist_deny=NO
    Add the username to the list at the top of /etc/vsftpd.userlist.
    Restart vsftpd: service vsftpd restart
    Make sure the firewall is open for FTP: ufw allow ftp
    Allow the user to modify the /var/www directory: "chown -R /var/www

    I have also gone through everything listed on this post with no luck. I am getting "connection refused". Sorry for the poor text formatting above; I think you get the idea. This is something we do over and over, and for some reason it is not cooperating here. The setup is Ubuntu 12.04 LTS and vsftpd 2.3.5. Thank you in advance.
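
    A short, generic checklist for the "connection refused" symptom; the commands assume a stock Ubuntu 12.04 box and nothing specific to this setup:

        # Is vsftpd actually running and listening on port 21?
        service vsftpd status
        netstat -tlnp | grep :21

        # Is the firewall really letting FTP through?
        ufw status verbose

        # Any startup errors from vsftpd?
        grep vsftpd /var/log/syslog | tail -20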

    Read the article

  • IPC between multiple processes on multiple servers

    - by z8000
    Let's say you have 2 servers, each with 8 CPU cores. Each server runs 8 network services that each host an arbitrary number of long-lived TCP/IP client connections. Clients send messages to the services. The services do something based on the messages, and potentially notify N1 of the clients of state changes. Sure, it sounds like a botnet, but it isn't; consider how IRC works with c2s and s2s connections and s2s message relaying. The servers are in the same data center and can communicate over a private VLAN @1GigE. Messages are < 1KB in size. How would you coordinate which services on which host should receive and relay messages to connected clients for state-change messages? There is an almost infinite number of ways to solve this problem efficiently:

    AMQP (RabbitMQ, ZeroMQ, etc.)
    Spread Toolkit
    N^2 connections between all services (bad)
    Heck, even run IRC!
    ...

    I'm looking for a solution that:

    perhaps exploits the fact that there's only a small closed cluster
    is easy to admin
    scales well
    is "dumb" (no weird edge cases)

    What are your experiences? What do you recommend? Thanks!

    Read the article

  • w2k3 AD DC Demotion fails with "no other AD DC for that domain can be contacted"

    - by Kstro21
    I have a small office with a single W2K3 SP2 DC (bad idea, but it is real). I want to make a clean install of that PC, so I got another one, installed W2K3 SP2, added it to the domain, ran dcpromo and set it to be a GC. Up to that point everything was OK. Then I tried to dcpromo (demote) the primary DC, but it fails with:

        The box indicating that this domain controller is the last controller for the domain mydomain.com is unchecked. However, no other Active Directory domain controllers for that domain can be contacted. Do you wish to proceed anyway? If you click Yes, any Active Directory changes that have been made on this domain controller will be lost.

    So I started to move all the roles to the new server as described here. When all the roles had moved successfully, I tried the demotion again, but got the same result. I tried moving DNS to the new server, but it made no difference. I shut down the old server and then tried to log into a workstation, but it failed saying the domain was not available; I also couldn't add a new workstation to the domain, so I had to power the old server on again. So, given that I successfully moved all the roles and DNS to the new server:

    Why does dcpromo give that message on the old server?
    Why is the domain unavailable when I shut down the old server?
    If I click Yes when dcpromo gives the warning on the old server, will I lose all users, computers, OUs, etc.?
    Am I missing some steps to make this work?

    Hope you can help me. Thanks.
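
    Before retrying the demotion, a few standard checks are usually run to confirm that the roles really moved and that replication and DNS are healthy. These commands assume the W2K3 Support Tools are installed; NEWDC is a placeholder for the new server's name, not something from the question:

        netdom query fsmo                 REM all five FSMO roles should list the new DC
        dcdiag /s:NEWDC /v                REM overall DC health on the new server
        repadmin /replsummary             REM replication status between the two DCs
        nslookup -type=SRV _ldap._tcp.dc._msdcs.mydomain.com   REM SRV records should include the new DC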

    Read the article

  • Power issues Foxconn Barebones kit

    - by alpha1
    I have a Foxconn R20D2 bought about a year and a half ago. It ran fine for a while and then, around last summer, it started having power issues. I chalked it up to fluctuations in the electric supply from the overwhelmed grid when people turn on their AC units, but the problem has stayed all year: shutting off randomly, shutting off when I turn on a vacuum, and similar problems. Now that it's summer again, the box basically sits there all day cycling itself, and it has now gotten to the point where it tries to boot, fails after 3 seconds, shuts off and tries again. I know it's power related; it runs openSUSE Linux and there are never any shutdown logs or anything of that sort. As the weather got hotter I noticed it happening more and more, most often in the morning, I presume as people wake up and turn on the AC. The power supply is a Channel Well Technology Co. Ltd model DSL-150, 150W max output. It's an Intel Atom dual core with 2 SATA drives, no CD/floppy etc., recently upgraded from 2 to 4GB of RAM. It runs at 104 degrees Fahrenheit almost all the time. Is there any way I can test the power supply, or anything else I can try to fix it? I'm a software guy, not hardware, so I'm at a complete loss here. Thanks for any assistance you can provide! EDIT: The switch on the back that says 230 or 115 is set to 230. If I'm in the USA, could that be causing the problems?

    Read the article

  • EFS Remote Encryption

    - by Apoulet
    We have been trying to set up EFS across our domain. Unfortunately, reading/writing files over a network share does not work; we get an "Access Denied" error. Another worrying fact is that I managed to get it working for one machine, but no other would work. The machines are all Windows 2008 R2, running as VMs under an ESXi host. Following http://technet.microsoft.com/en-us/library/bb457116.aspx#EHAA we set up the machines involved to be trusted for delegation. The users are not restricted and can be trusted for delegation. The users have logged in on both sides and can read/write encrypted files without issues locally. I enabled Kerberos logging in the registry, and these are the relevant logs I get on the machine that holds the encrypted files, for every certificate the user possesses (only the Key Name changes):

        Event ID 5058: Audit Success, "Other System Events"
        Key file operation.
        Subject:
            Security ID: {MyDOMAIN}\{MyID}
            Account Name: {MyID}
            Account Domain: {MyDOMAIN}
            Logon ID: 0xbXXXXXXX
        Cryptographic Parameters:
            Provider Name: Microsoft Software Key Storage Provider
            Algorithm Name: Not Available.
            Key Name: {CE885431-9B4F-47C2-8415-2D766B999999}
            Key Type: User key.
        Key File Operation Information:
            File Path: C:\Users\{MyID}\AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-4585646465656-260371901-2912106767-1207\66099999999991e891f187e791277da03d_dfe9ecd8-31c4-4b0f-9b57-6fd3cab90760
            Operation: Read persisted key from file.
            Return Code: 0x0

        Event ID 5061: Audit Failure, "System Integrity"
        Cryptographic operation.
        Subject:
            Security ID: {MyDOMAIN}\{MyID}
            Account Name: {MyID}
            Account Domain: {MyDOMAIN}
            Logon ID: 0xbXXXXXXX
        Cryptographic Parameters:
            Provider Name: Microsoft Software Key Storage Provider
            Algorithm Name: RSA
            Key Name: {CE885431-9B4F-47C2-8415-2D766B999999}
            Key Type: User key.
        Cryptographic Operation:
            Operation: Open Key.
            Return Code: 0x8009000b

    Could this be related to this error from the CryptAcquireContext function: NTE_BAD_KEY_STATE 0x8009000BL, "The user password has changed since the private keys were encrypted"? The problem is that the users I am using at the moment cannot change their passwords.

    Read the article

  • Time Machine vs Source Control?

    - by Blub
    I finally got convinced to start using some kind of version control for my code instead of zipping up a copy of the project at the end of each day. I downloaded TortoiseSVN and used it to create a repository locally on my HDD. I've been using it for 2 days now, but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today I can't really say that's an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out and check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X, but it looks like exactly what I'm looking for. Does such a program exist for Windows? Preferably free, and with some kind of tagging system so I can tag a timestamp when the project is working, etc. Maybe I should add that I mostly work alone on a single computer. Update: Some of you asked why I want backups. Since I work alone, it's mostly to allow me to quickly hack up a solution without worrying that something will screw up.
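
    Since a local SVN repository already exists here, one low-friction workaround sometimes used is a scheduled auto-commit; this is only a sketch, and the working-copy path and schedule are illustrative, not from the question:

        REM autosave.bat - run from Task Scheduler every few minutes
        cd /d C:\projects\myproject
        svn add --force . >NUL
        svn commit -m "autosave %DATE% %TIME%"

    Known-good states can then be tagged afterwards with svn copy, which gives roughly the "tag a timestamp when the project is working" behaviour asked for.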

    Read the article

  • Updated my WAMP Server and MySQL is eating up 580 MB of memory

    - by Jon
    I updated my dev box's WampServer, and along with updating PHP and Apache, MySQL was updated to 5.6.12. After doing that, I copied the data folder from my old (5.1.36) install to the new one, and now MySQL takes up 580 MB of memory, which is way too much, since I'm the only person using it (locally) and there are only 20 or so databases on it, none of which have MEMORY tables. How can I get this down to a decent amount? My my.ini:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        # port = .....
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    Database info:

        Storage Engine    Data Size    Index Size    Total Size
        InnoDB            48.00 KB     0.00 B        48.00 KB
        MEMORY            0.00 B       0.00 B        0.00 B
        MyISAM            163.64 MB    122.49 MB     286.13 MB
        Total             163.69 MB    122.49 MB     286.18 MB
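
    For context, MySQL 5.6's defaults are noticeably heavier than 5.1's. The settings below are a commonly tried starting point for a small local dev instance; the values are illustrative assumptions, not taken from the question:

        [mysqld]
        # The performance_schema alone can account for a few hundred MB in 5.6
        performance_schema = OFF
        # Shrink the various caches to dev-box proportions
        table_definition_cache = 400
        table_open_cache = 256
        innodb_buffer_pool_size = 64M
        max_connections = 30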

    Read the article

  • Metacity/Compiz not starting upon login on Ubuntu 10.10

    - by Ryan Lanciaux
    TLDR: As of this afternoon, I do not have a window manager when I log in to Ubuntu 10.10. I would like to have a window manager on login without needing to add it to the startup applications. I just started using Linux again as my home OS (I used it for a long time years ago but have been on Windows up until this past weekend), so this may be kind of n00b-ish :) Anyway, up until today, everything on my machine was running okay. I did not have Compiz running as the default WM because I'm running the NVIDIA drivers and Xinerama (and as I understand it, Xinerama and Compiz don't work well together). I made no changes to my xorg.conf etc., but today when I logged in I had to manually start Metacity from the command line to get any window manager. I'm really not sure what is causing this or what I can do to get it working again. My xorg.conf is available here: https://gist.github.com/845618. My default window manager is set to /usr/bin/metacity in Configuration Editor under /desktop/gnome/applications/window_manager. p.s. Any tips on how to run 3 monitors where I can move windows between screens without Xinerama would be appreciated, but that's probably for another thread :)
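
    On GNOME 2.x, gnome-session may also pick its window manager via the required_components key rather than the one mentioned above; the commands below are a hedged sketch for inspecting and setting it from a terminal, not a confirmed fix:

        # What the session thinks its window manager component is
        gconftool-2 --get /desktop/gnome/session/required_components/windowmanager

        # Setting it explicitly to metacity (the value is illustrative)
        gconftool-2 --type string --set /desktop/gnome/session/required_components/windowmanager metacity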

    Read the article

  • Disabling LDAP Signing on Windows PDC in Local Policy

    - by Golmaal
    I just tripped over my own feet, it seems. Playing around on a Windows 2008 R2 server (set up as a domain controller), I was intrigued by a certain warning event (event ID 2886) which says: "To enhance the security of directory servers, you can configure both Active Directory Domain Services (AD DS) and Active Directory Lightweight Directory Services (AD LDS) to require signed Lightweight Directory Access Protocol (LDAP) binds." So I thoughtlessly did some Googling and set the relevant policies which enforce LDAP signing. I don't remember exactly, but I may have done that using Local Policy. Now I have set up a pfSense box which must authenticate AD users via LDAP. While the firewall can communicate over a secure channel, it is difficult to manage the same for other packages such as Squid and SquidGuard. So now I have to disable, i.e. undo, those policy changes. The problem is that they are greyed out! The policies in question are LDAP server signing and LDAP client signing. I don't remember what I did, but when I access these policies from the Local Policy editor on the server, they are set to "Require Signing" and are greyed out. The same policies can still be set via the Default Domain Controller option in the Group Policy editor. So how can I reset these greyed-out policies? Thanks
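
    When a Local Policy setting shows as greyed out, it is usually because a domain-level GPO is enforcing it. For reference, the registry values that back these two policies can be inspected directly; this is a read-only check using the standard value names for these settings:

        reg query "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v LDAPServerIntegrity
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\ldap" /v LDAPClientIntegrity
        REM on the server value, 2 means signing is required

    The greyed-out state itself normally clears once the GPO that enforces the setting (rather than the local policy) is put back to Not Defined and gpupdate /force is run.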

    Read the article

  • Virtualbox port forwarding with iptables

    - by jverdeyen
    I'm using a virtual machine (VirtualBox) as a mail server. The host is Ubuntu 12.04 and the guest is Ubuntu 10.04. At first I forwarded port 25 to 2550 on the host and added a port-forward rule in VirtualBox from 2550 to 25 on the guest. This works for all ports needed for the mail server. The guest has a host-only connection and a NAT (with the port forwarding). My mail server was receiving and sending mail properly, but all connections come from the VirtualBox internal IP, so every host connection is allowed, and that's not what I want. So I'm trying to skip the VirtualBox forwarding part and just forward port 25 to the host-only IP of the guest system. I used these rules:

        iptables -F
        iptables -P INPUT ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -t nat -P PREROUTING ACCEPT
        iptables -t nat -P POSTROUTING ACCEPT
        iptables -A INPUT --protocol tcp --dport 25 -j ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -s 192.168.99.0/24 -i vboxnet0 -j ACCEPT
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -p tcp -i eth0 -d xxx.host.ip.xxx --dport 25 -j DNAT --to 192.168.99.105:25
        iptables -A FORWARD -s 192.168.99.0/24 -i vboxnet0 -p tcp --dport 25 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.99.0 -o eth0 -j MASQUERADE
        iptables -L -n

    But after these changes I still can't connect with a simple telnet (which was possible with my first solution). The guest machine doesn't have any firewall. I only have one network interface on the host (eth0) and a host-only interface (vboxnet0). Any suggestions? Or should I go back to my old solution (which I don't really like)? Edit: bridged mode isn't an option, I only have one IP available for the moment. Thanks!
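
    A neutral way to see where the forwarded packets stop, using only the interface names from the question, is to watch rule counters and capture on both sides while making a test connection from another machine:

        # Packet/byte counters show how far the forwarded traffic gets
        iptables -t nat -L PREROUTING -v -n
        iptables -L FORWARD -v -n

        # Trace the test connection on the way in and on the way to the guest
        tcpdump -ni eth0 port 25
        tcpdump -ni vboxnet0 port 25

    If the DNAT counter increments but replies never come back over vboxnet0, the usual suspect is the guest answering via its NAT interface instead of the host-only one, which would explain the telnet timeout.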

    Read the article

  • samba access from win98

    - by SimonSalman
    Hello, the admin installed a new file server in our institute: openSUSE 11.1 with Samba 3.2.7-11.3.2-2154-SUSE-CODE11. They copied the smb.conf from the old machine (running Samba 3.0.0) to the new one. Everything works as before, except that one Windows 98 machine can see but not access the file server. It prompts for user authentication, but will not accept any user/password combination. There is a lot of discussion about this problem on the net, but none of it provides a clear answer. EDIT:

    1. I changed the Win98 registry to enable plain-text passwords, and alternatively changed the server's smb.conf and /etc/smbpasswd to accept encrypted passwords.
    2. I also provided a profile on the Win98 machine with a user/password combination matching one of the Samba user/password combinations.
    3. I changed smb.conf so that the Samba server is the Local Master Browser.

    None of these changes are necessary when using the older Samba server, so I conclude that a configuration problem on the server side is likely. If you need any further information, I will post it here. Best regards, Simon
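
    For what it's worth, Samba 3.2 tightened several authentication defaults that 3.0 still allowed, so the [global] options below are the ones usually checked first for Win9x clients. Treat them as an assumption to verify against the effective configuration, not a known difference in this particular smb.conf:

        # smb.conf [global] additions (assumed, not confirmed from the server's file)
        encrypt passwords = yes
        lanman auth = yes
        client lanman auth = yes

        # reload and verify the effective settings
        smbcontrol all reload-config
        testparm -sv | grep -i lanman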

    Read the article

  • How do I restore to a delta file (disk) on VMware ESXi

    - by Oscar
    Using VMware ESXi (the freebie version) I have a virtual machine (a Win 2k3 R2 server). When I first provisioned it, I took a snapshot of it. I recently tried to clone the primary drive using my standard hardware-based method of growing a Windows disk (using Knoppix, clone the drive to a new drive, make it bootable, then extend the partition via diskpart from within Windows). This process failed. I then tried setting the cloned drive (via the VMware GUI) to replace the original drive, boot, and be done. This didn't work out so well: the machine never booted. I checked the boot order, the disk location and all the basics I usually do. As a failsafe, I then tried changing all the settings back so the machine would boot from the original drive and I could figure out (as I eventually did) a better way of growing the disk. However, when I powered on the machine with the original drive, it reverted back to that initial snapshot I created; it lost all the changes since. I looked in the file system and found a few files; I think the key file here is one named "delta", and I'm assuming that's the disk I want, but I can't find a way to have the virtual machine actually use that drive/file. It isn't available to add when I go to add an existing drive. Do I need to somehow commit that delta to the original drive and then boot from it again? Can you point me in the right direction? I've since discovered the proper way of growing drives using "vmkfstools", but I need to get back to the original state of the machine to try this out. Any help would be greatly appreciated.
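
    For background, a ...-delta.vmdk file is a snapshot redo log; it is never attached directly, but through its ...-000001.vmdk descriptor. Below is a hedged sketch of what is usually checked from the ESXi console; the datastore path and file names are placeholders, not the asker's:

        # List the disk chain for the VM
        ls -lh /vmfs/volumes/datastore1/myvm/*.vmdk

        # The .vmx should reference the snapshot descriptor, e.g. scsi0:0.fileName = "myvm-000001.vmdk"
        grep fileName /vmfs/volumes/datastore1/myvm/myvm.vmx

        # One way to consolidate the chain: clone the topmost descriptor into a new standalone disk
        vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm-000001.vmdk /vmfs/volumes/datastore1/myvm/myvm-merged.vmdk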

    Read the article

  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things and clean up some other things which are automated but not in the prettiest way. :-) I've been looking around at various tools for automating software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up to date with prepackaged software. What I don't get is: how does one use a tool (like Puppet) to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use case today, which is something more like: when a human being pushes The Button,

    pull branch A from the version-control repository B
    run command C to compile it
    copy the binaries D to servers E1 through E10
    on each server, run command F to make all changes take effect

    Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine), so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it? What it's sounding like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) external to Puppet, manually update the Puppet config, which would trigger Puppet to update the servers... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
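
    To make the "internal software" case concrete, here is a minimal Puppet sketch of the usual pattern: build and publish the package outside Puppet, then let Puppet pull it in and run the post-install command. Every name, command and path below is an illustrative assumption, not something from the question:

        class myapp {
          package { 'myapp':
            ensure => latest,                       # picks up whatever the internal repo currently publishes
          }

          exec { 'activate-myapp':
            command     => '/usr/local/bin/myapp-reload',   # hypothetical "command F"
            refreshonly => true,                            # only runs when the package changes
            subscribe   => Package['myapp'],
          }
        }

    The build-and-publish half (steps 1-3) typically stays in a CI job or script that pushes the package to an internal apt/yum repository; Puppet then handles steps 4-5 declaratively on each server.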

    Read the article

  • Recovering a mdadm+lvm+ext4 partition with read error

    - by bitwelder
    One of the disks in my NAS has failed. The NAS is running Linux, and it uses mdadm + LVM for its filesystems. I do have a backup for most of the contents, but not for the very last changes, and if possible I'd like to recover those from this failing disk. The disk (a 'green drive' WD10EARS, 1TB in size) throws this kind of error:

        Oct 3 12:00:41 kernel: [ 3625.620000] ata5.00: read unc at 9453282
        Oct 3 12:00:41 kernel: [ 3625.620000] lba 9453282 start 9453280 end 1953511007
        Oct 3 12:00:41 kernel: [ 3625.620000] sde5 auto_remap 0
        Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6
        Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: edma_err_cause=00000084 pp_flags=00000003, dev error, EDMA self-disable
        Oct 3 12:00:41 kernel: [ 3625.640000] ata5.00: failed command: READ FPDMA QUEUED
        Oct 3 12:00:41 kernel: [ 3625.650000] ata5.00: cmd 60/40:00:e0:3e:90/00:00:00:00:00/40 tag 0 ncq 32768 in
        Oct 3 12:00:41 kernel: [ 3625.650000] res 41/40:00:e2:3e:90/12:00:00:00:00/40 Emask 0x409 (media error) <F>
        Oct 3 12:00:41 kernel: [ 3625.660000] ata5.00: status: { DRDY ERR }

    However, while testing with 'dd', I noticed that if I skip the first 4kB, the read seems to be OK, i.e. a command like

        dd if=/dev/sde5 of=/dev/null bs=4k count=1000 skip=1

    doesn't return any read error. Supposing that there is no other read failure in the rest of the disk, would I be able to recover this 900 GB partition (as I mentioned before, it's a 'linux raid autodetect' partition that contains an LVM2 volume that contains an ext4 filesystem) if I copy-clone the partition somewhere else, except for the first 4kB?
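
    The usual tool for this kind of copy is GNU ddrescue, which skips unreadable areas and records them in a map file. A sketch, with /dev/sdf5 standing in for a same-size destination partition (an assumption, not a device from the question):

        # Fast pass first (-n: skip the slow scraping phase), then a retry pass for the bad area
        ddrescue -f -n /dev/sde5 /dev/sdf5 /root/sde5.map
        ddrescue -f -r3 /dev/sde5 /dev/sdf5 /root/sde5.map

        # Plain-dd alternative that pads unreadable blocks with zeros instead of aborting
        dd if=/dev/sde5 of=/dev/sdf5 bs=4k conv=noerror,sync

    Depending on the metadata versions in use, the first 4 kB may hold the LVM physical-volume label, so checking what actually lives there (e.g. with pvck) before relying on the rest of the clone is prudent.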

    Read the article

  • Using VMware Guest OS to enable Host OS to ssh to remote network

    - by Reuben L.
    Basically I have an issue because my host OS is 64-bit Linux Mint (Ubuntu derived) and it doesn't seem to be compatible with the Juniper Network Connect client that is used by the network at my workplace. Thus, I am unable to ssh from a terminal to the network. I can't make changes to the workplace network either, so that leaves me looking for solutions on my end. The main reason for me to access the network from home is to check on my running processes or to issue more commands to a few workstations. PuTTY is the desperate choice I usually make, but it means I have to reboot into Windows and I also have limited control. I've tried several other methods and they have all failed. Recently, I set up a VM with Windows 7 as the guest OS. Now half my problems are fixed, as I don't have to physically reboot the system; I just have to engage Juniper Network Connect on the VM. However, I would still like to use my Linux terminal to ssh to the network. It sounds plausible that I could somehow manipulate ports to connect to the remote network from the host OS, tunneled through the guest OS, but I really have no clue how to do so... Can anyone help?
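
    One common arrangement, sketched below, is to run an SSH server inside the Windows guest (for example Cygwin's sshd, which is an assumption; nothing in the question says one is installed) and use it as a jump host once Network Connect is up. The guest IP and all host names are placeholders:

        # From the Linux Mint host: hop through the guest, which holds the VPN tunnel
        ssh -o ProxyCommand="ssh user@192.168.56.101 nc %h %p" me@workstation.example.com

        # Or forward a single port through the guest and ssh to localhost
        ssh -L 2222:workstation.example.com:22 user@192.168.56.101
        ssh -p 2222 me@localhost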

    Read the article

  • GIT : I keep having to merge my new branch

    - by mnml
    Hi, I have created a new branch and I'm working on it with other devs, but for some reason when I want to push my new commits I always have to run git merge origin/mynewbranch first. Otherwise I get errors like these:

        ! [rejected]        mynewbranch -> mynewbranch (non-fast-forward)
        error: failed to push some refs to '[email protected]/repo.git'
        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again. See the 'Note about
        fast-forwards' section of 'git push --help' for details.

        You asked me to pull without telling me which branch you want
        to merge with, and 'branch.mynewbranch.merge' in
        your configuration file does not tell me, either. Please
        specify which branch you want to use on the command line and
        try again (e.g. 'git pull <repository> <refspec>').
        See git-pull(1) for details.

        If you often merge with the same branch, you may want
        to use something like the following in your configuration file:

            [branch "mynewbranch"]
            remote = <nickname>
            merge = <remote-ref>

            [remote "<nickname>"]
            url = <url>
            fetch = <refspec>

        See git-config(1) for details.

    Why is it not automatic? Thanks
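
    The second message is git saying the branch has no upstream configured. A minimal sketch of the usual fix, assuming the remote is called origin (the question does not say):

        # Tell git which remote branch 'mynewbranch' tracks
        git config branch.mynewbranch.remote origin
        git config branch.mynewbranch.merge refs/heads/mynewbranch

        # Equivalent one-off: push once with -u to record the upstream
        git push -u origin mynewbranch

    After that, a plain git pull before pushing takes the place of the manual git merge origin/mynewbranch.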

    Read the article

  • Photoshop CS4. How do I make sure my color stays the same in my different .psd files? (could be RGB

    - by Kris
    I asked 2 Photoshop experts I know, but they haven't got a clue because all my settings are exactly the same in both files (except the RGB type!! I'm not sure. Please read on). I use RGB color, 72 DPI, 8 bits/channel. No adjustments (filters, like greyscale, etc.) are selected or used. The layers are both normal, and opacity and fill are 100% (yes, in both files). I took two screenshots, and you can see the difference: http://www.flickr.com/photos/30465871@N05/4623864297/ http://www.flickr.com/photos/30465871@N05/4624469754/ Both colors are ff795d, but that doesn't matter; any color I use gives me the same problem: they look different. Now, I know the CMYK settings (see screenshots) are different, but even when those settings are the same the color changes. Why is this happening and how do I solve this problem? My guess is that I'm working with a different type of RGB. It's sRGB IEC61966-2.1 in the file I created (File Info, raw data), but I can't find that in the file that started as a screenshot. If that's the problem, how do I change or convert the RGB once the file is already open? Thank you.

    Read the article

  • XP/Intel wireless only showing 'hpsetup' ad-hoc network that isn't there

    - by ewall
    I'm trying to help my friend with her work XP laptop, which recently stopped seeing any wireless SSIDs except the SSID 'hpsetup' (presumably from a wireless-enabled HP printer). Relevant information: The laptop is a Lenovo T500 (Centrino 2 chipset) with XP SP3. The network adapter is an Intel WiFi Link 5300 AGN (built-in). Only the latest version (13.5) of the Intel drivers is installed, not the Intel config software, so XP is using the Wireless Zero Configuration manager. The wireless router is a NetGear WGR614 v7 with 802.11b/g. The SSID is broadcasting, and all the other laptops in the house can see and connect to it. On the laptop, I have tried repairing the network connection, disabling power management, turning off the 802.11a and n radios, and more... but it didn't help. Some of the wireless settings are managed by Group Policy from her office (I get the "At least one of your changes was not applied successfully to your wireless configuration" message). It is enforced to connect to "Access point (infrastructure) networks only". The real kicker is that my laptop does not see an SSID named 'hpsetup' here, and it can see several broadcast SSIDs including the one we want, while my friend's laptop doesn't see any SSID except 'hpsetup'. Any suggestions?

    Read the article

  • Why cache static files with Varnish, why not pass

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / WordPress and Amazon S3. Now, I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this:

        /* If the request is for pictures, javascript, css, etc */
        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            /* Remove the cookie and make the request static */
            unset req.http.cookie;
            return (lookup);
        }

    I do not understand why this is done. Most of the examples also run nginx as the webserver. Now the question is, why would you use the Varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files, so that php-fpm / MySQL don't get hit that much. Am I correct, or am I missing something here? UPDATE: I want to add some info to the question based on the answer given. If you have a dynamic website where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, it can be cached for long periods of time. That said, more important to me is static content. I found a link with some tests and benchmarks of different cache apps and webserver apps: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ nginx is actually faster at serving your static content, so it makes more sense to just let those requests pass; nginx works great with static files. Apart from that, most of the time static content is not even on the webserver itself. Most of the time this content is stored on a CDN somewhere, maybe AWS S3 or something like that. I think the Varnish cache is the last place where you want to have your static content stored.
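
    For completeness, the opposite policy being described would look roughly like this in the same vcl_recv style; this is a sketch only, and whether it is the right trade-off depends on where the static files actually live:

        /* Hand static files straight to the backend (nginx or S3) instead of caching them in Varnish */
        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            return (pass);
        }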

    Read the article

  • Default IPv6 route on debian squeeze does not come up after boot

    - by Georg Bretschneider
    I have a problem with my default IPv6 route not coming up after boot on a Debian Squeeze system. This is my config (/etc/network/interfaces):

        # Loopback device:
        auto lo
        iface lo inet loopback
        iface lo inet6 loopback

        # device: br0
        auto br0
        iface br0 inet static
            bridge_ports eth0
            bridge_fd 0
            address 88.198.62.xx
            broadcast 88.198.62.63
            netmask 255.255.255.224
            gateway 88.198.62.33
            up route add -net 88.198.62.32 netmask 255.255.255.224 gw 88.198.62.33 br0

        iface br0 inet6 static
            address 2a01:4f8:131:10x::2
            netmask 64
            gateway 2a01:4f8:131:100::1
            up route -A inet6 add 2a01:4f8:131:100::1/59 dev br0

    My inet comes up alright, but I have to run the route command manually after boot to make IPv6 work; otherwise I can't even reach my gateway. This is the output of ip -6 route show after boot:

        2a01:4f8:131:10x::/64 dev br0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
        unreachable fe80::/64 dev lo  proto kernel  metric 256  error -101 mtu 16436 advmss 16376 hoplimit 4294967295
        fe80::/64 dev br0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295

    I already tried:

        up ip -6 route add 2a01:4f8:131:100::1 dev br0
        up ip -6 route add default via 2a01:4f8:131:100::1 dev br0

    in /etc/network/interfaces, but with the same results. If I execute those commands manually in my shell, everything starts working nicely. And yes, I tried post-up instead of up, too. The only other change I made was to activate IP forwarding for IPv6, because I want to run some LXC containers on this system.
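
    A non-invasive way to see what ifupdown actually does with those up/post-up lines at bring-up time (nothing here is specific to this host beyond the interface name; run it from a console, not over a connection that rides on br0):

        # Re-run the interface configuration verbosely and watch for errors from the route/ip commands
        ifdown br0 && ifup -v br0
        ip -6 route show

        # Check the boot log for errors from the networking init script
        grep -i "ifup\|networking" /var/log/syslog | tail -20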

    Read the article

  • Configure IIS to pass-through CGI output without any conditioning

    - by Daniel Watrous
    I'm building a web service on Windows 2008 R2 with IIS 7.5 and Python 2.5. Right now I have the handler mappings and everything else set up just fine, except that IIS is modifying what it gets back from the CGI script before sending it along to the client. Here's an example. I wrote the following CGI script:

        # hello.py
        print "Status: 400 Bad Request"
        print "Content-Type: text/html"
        print
        print "Error Message"

    According to the HTTP spec this should be fine, and a status of 400 should allow for a description of the error in the body of the response. When the server response actually comes back to me I get the following:

        Status: 400 Bad Request
        Date: Fri, 11 Feb 2011 17:58:30 GMT
        X-Powered-By: ASP.NET
        Connection: close
        Content-Length: 11
        Server: Microsoft-IIS/7.5
        Content-Type: text/html

        Bad Request

    I've seen on this forum and others how I can change or eliminate the X-Powered-By header, but I would like IIS to leave the response alone altogether. I'm not sure why it takes my response, deletes "Error Message" from the body, replaces it with "Bad Request", and then adds all that other junk in. Is there some way to tell IIS to just send the response along without making any changes at all?
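
    The body replacement on error statuses is normally the custom-errors module at work, and the usual knob is httpErrors/existingResponse. Below is a sketch of the relevant web.config fragment; placing it in the site or application root is an assumption:

        <!-- web.config for the site or application -->
        <configuration>
          <system.webServer>
            <!-- Do not substitute IIS error pages when the CGI response already has a body -->
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>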

    Read the article

  • Windows Server - share files without access for administrator

    - by Pawel
    We have a Windows Server 2008 R2 based server that is administered by our IT department. We would like to achieve two things simultaneously:

    A folder on the server, containing several thousand files (new files added frequently), that is accessible to some Active Directory users (e.g. the board of directors) but is not accessible to IT department employees.
    IT department employees still maintain the rights to administer the server, including installing new software and services.

    We have already checked some solutions:

    Using NTFS access rights. Unfortunately IT (members of the "Administrators" group) can set themselves as new owners of the files and change the permissions so that they gain access to the files.
    Enabling EFS. Unfortunately, even if you do not allow IT to access the files, they can still disable EFS completely because they have administrative rights. Moreover, as far as I know you have to manually add permissions for every user but the owner for each new file - very inconvenient.
    Creating a new role for the IT department that has all the privileges apart from taking ownership of files. Unfortunately, if you're not a member of the Administrators group, you cannot install new software, no matter what privileges you add to the role.
    TrueCrypt - nice free encryption software, but with poor sharing capabilities. You can either mount an encrypted container on the server (and then IT has access to its contents) or mount it locally, but then only one user can mount it for writing.
    AxCrypt - free encryption software that enables file-by-file encryption on the server. There are some disadvantages, though: you have to manually encrypt each new file added, the files have their extensions changed, and you can only set one password for all files (so all users have to know this one password).

    Any other ideas? Our budget is limited, so enterprise-class software from Symantec or PGP would probably not be an option.

    Read the article

  • Changing PATH Environment Variable for all Users. (Ubuntu)

    - by Wally Glutton
    I recently compiled Ruby Enterprise Edition (REE) on an Ubuntu 8.04 server. I would like to update my PATH to ensure this new version of Ruby (found in /opt/ruby_ee/bin) supersedes the older version in /usr/local/bin. (I still want the old version around, though.) I would like these PATH changes to affect all users and crontabs.

    Attempted Solution #1: The REE documentation recommends placing the REE bin folder at the beginning of the global PATH in /etc/environment. I altered the PATH in this file to read:

        PATH="/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"

    This did not affect my PATH at all.

    Attempted Solution #2: Next I followed these instructions and updated the PATH setting in /etc/login.defs and /etc/crontab. (I did not change /etc/sudoers.) This didn't affect my PATH either, even after logging out and rebooting the server.

    Other information: I seem to be having the same problem described here. I'm testing using the commands "echo $PATH" and "ruby -v". My shell is bash. My .bashrc doesn't override my PATH. Yes, I have heard of the Ruby Version Manager project. ;)
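
    One further approach sometimes used on Ubuntu, sketched below with an illustrative file name, is a /etc/profile.d snippet for interactive logins plus an explicit PATH line for cron (cron does not read the profile files, so it needs its own entry):

        # /etc/profile.d/ruby-ee.sh  (picked up by login shells for all users)
        export PATH=/opt/ruby_ee/bin:$PATH

        # At the top of a crontab (crontab -e), cron uses this PATH for its jobs
        PATH=/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin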

    Read the article
