Search Results

Search found 5770 results on 231 pages for 'sense hofstede'.

Page 143/231

  • PHP + MySQL site performance

    - by Diego
    I have to manage a site which wasn't developed by me. It is in PHP using a MySQL database, which is located on the web server. The site sometimes (when visitors increase too much) stops responding, or responds too slowly. I have developed some sites in PHP but never took care of the management, so I really don't know where to start. The server (the hardware) seems to be fine; when the web stops responding the CPU is at about 55% usage and there is a lot of free memory. I'm not asking someone to solve this issue. I would only really like a few tips about where I can find logs and how I should read and interpret them. That way I would be able to tell whether it's the network traffic, the database (and which queries), or something else. Thanks! Update: Forgot to say: it is a Windows Server 2003. Note: I've recorded about a day with Jet Profiler. I don't really understand all the information it provides, but there is one query which it marks as really slow. It makes sense because it is a SELECT with a WHERE clause which has three LIKE conditions. Initially I didn't include this in my question because when I run the query from MySQL Query Browser it doesn't take long at all. It is under 0.01 seconds.
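
    A reasonable first step here is MySQL's slow query log; a minimal sketch, assuming MySQL 5.1+ and an illustrative Windows path (both assumptions, not details from the question):

        [mysqld]
        # my.ini sketch -- log statements slower than one second; the path is a placeholder
        slow_query_log      = 1
        slow_query_log_file = C:/mysql/logs/slow.log
        long_query_time     = 1
        # on MySQL 5.0 the equivalent option is: log-slow-queries = <path>

    Once a culprit shows up, EXPLAIN SELECT ... will reveal whether the three LIKE conditions can use an index at all; a leading wildcard (LIKE '%term%') forces a full table scan, which only hurts under concurrent load - matching the "fast in Query Browser, slow in production" symptom.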

    Read the article

  • Joining Samba to Active Directory with local user authentication

    - by Ansel Pol
    I apologise that this is somewhat incoherent, but hopefully someone will be able to make enough sense of it to understand what I'm trying to achieve and provide pointers. I have a machine with two network interfaces connected to two different networks (one of which it provides several other services for, such as DNS), running two separate instances of Samba, one bound to each interface. One of the instances is just a workgroup-style setup using share-level authentication, which is all working fine. The problem is that I'm looking to join the other instance to an MS Active Directory domain (provided by MS Windows Small Business Server 2003) to enable a subset of the domain users to access the shares from Windows machines on the other network. The users who need access from the domain environment have accounts (whose names are all-lowercase versions of their domain usernames) on the machine running Samba, but I'm not sure how to map the UIDs, and everything I've read concerns authenticating accounts on that machine against either AD or another LDAP server. To clarify: I only want the credentials for AD users accessing the non-workgroup Samba instance to be authenticated against AD, not the accounts on the machine running Samba. I hope this is sufficiently clear. EDIT: In addition to being able to access the Samba shares from AD, I also need to be able to access a share on the domain from the machine running Samba, but I would still like everything non-Samba-related to authenticate locally.
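
    For reference, the usual Samba 3 shape for this is an AD member join; a minimal sketch with placeholder realm/workgroup values, followed by net ads join -U Administrator. Without winbind, smbd verifies the password against AD and then falls back to the matching local Unix account for file access, which fits the existing all-lowercase accounts (a username map may be needed if the case differences don't line up):

        # smb.conf sketch for the AD-facing instance (values are placeholders)
        [global]
            security = ads
            realm = EXAMPLE.LOCAL
            workgroup = EXAMPLE
            encrypt passwords = yes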

    Read the article

  • HP LaserJet 2550 has a carousel motor error

    - by Arlen Beiler
    I have a LaserJet 2550, and it's worked pretty well for a long time (except for some slowness a while back, spooling I think), but just recently it suddenly quit working. We moved this summer but left it at our other place, and just recently when my Dad went over there to try to print something out, it didn't work. When you turn it on, you hear the fan give a false start (basically a quick pulse), and the carousel goes through its usual routine. Then it starts up in earnest like it's getting ready to print something. All of a sudden it just stops. Everything stops, and the three lower lights are steady. When I push the Go button, the Go light (bottom of the 3) turns off, but the other two stay on. I looked it up on the HP website and it says it is a carousel motor problem. I called HP, but they said it is out of warranty. I've opened the cover and held the switch with a screwdriver so I could watch it, and it goes through its routine like I described (it doesn't seem to make a difference whether the imaging drum is in or not); then, when it stops, the carousel kind of seems to jump back a little bit. I hope this all makes sense (I know you like details), and hopefully you also know what to do to fix it. Thanks.

    Read the article

  • suggestions for firewall/router project using *BSD or Linux

    - by Adeodatus
    Hi All, I have a project in mind and I'd love to hear some ideas on open source solutions with COTS hardware. I have a few 24- and/or 48-port managed layer-2 switches with customers potentially on each port (though it's usually about 20-30). Right now the switch has a bridged network and backhauls the traffic to our core, to a centralized DHCP server. I need to move them to a NAT solution and, while doing this, I'd like to protect the customers on each port from the customer traffic on the other ports. I also need to be able to port-forward from the public side of the firewall/NAT box to specific hardware on the inside of the NAT machine (easy enough, I know). My first thought is to build an appliance-like box (the fewer moving parts the better) that can do filtering and NAT, with an RFC 1918 address range being handed out via a DHCP server on the appliance. A caching DNS server on the appliance would be a plus, since we backhaul everything to the core. I'd like to run FreeBSD but I'm open. Now, to limit the broadcast traffic that's visible, I was thinking of putting each port on the switch in a different VLAN and having the switch trunk to the private NIC on the FreeBSD appliance. I'd probably need to do some magic on the FreeBSD NIC to get this working, but it should. We have the parts to build these systems. So, does this make sense? Are there any other solutions out there that we don't have to spend money on, but can use our parts to create? Are there any good distros that could do this already (m0n0wall)? I may or may not admin this solution, so a secure web configuration and management tool would be a plus in the other admins' minds. Thoughts?
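
    On the FreeBSD side, pf would cover the NAT and port-forwarding parts; a rough sketch with interface names and addresses as placeholders (per-customer isolation still comes from the one-VLAN-per-port design at layer 2):

        # /etc/pf.conf sketch (placeholders throughout)
        ext_if = "em0"                  # public side
        int_if = "em1"                  # VLAN trunk from the switch
        customers = "192.168.0.0/16"

        nat on $ext_if from $customers to any -> ($ext_if)
        # forward public :8080 to a host behind the NAT
        rdr on $ext_if proto tcp to ($ext_if) port 8080 -> 192.168.1.10 port 80

        block all
        pass in on $int_if from $customers to any keep state
        pass in on $ext_if proto tcp to 192.168.1.10 port 80 keep state
        pass out on $ext_if keep state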

    Read the article

  • Long Gigabit Ethernet Run

    - by Timothy R. Butler
    I am trying to get a Gig-E network between two buildings that are approximately 260 ft. apart. While some TRENDnet switches failed to connect to each other over Cat 6 at that distance, two Netgear 5-port Gig-E switches do so just fine. However, it still fails after I put APC PNET1GB Ethernet surge protectors at each end, before the line connects to the respective switches. So I find myself wondering if I simply need to find a better surge protector that doesn't degrade the signal as much (if so, what kind would you recommend?) or if I should give up on copper and use fiber between the buildings. If I opt to go the latter route, I could really use some pointers. It looks like LC connectors are the most common, but I keep running into some others as well. A media converter on each end seems like the simplest solution, but perhaps a Gig-E switch with an SFP port would make more sense? Given a very limited budget, sticking with my existing copper seems best, but if it is bound to be a headache, a 100-meter fiber cable is something I think I can swing cost-wise.

    Read the article

  • Accessing SSH_AUTH_SOCK from another non-root user

    - by Danny F
    The Scenario: I am running ssh-agent on my local PC, and all my servers/clients are set up to forward SSH agent auth. I can hop between all my machines using the ssh-agent on my local PC. That works. I need to be able to SSH to a machine as myself (user1), change to another user named user2 (sudo -i -u user2), and then ssh to another box using the ssh-agent I have running on my local PC. Let's say I want to do something like ssh user3@machine2 (assuming that user3 has my public SSH key in their authorized_keys file). I have sudo configured to keep the SSH_AUTH_SOCK environment variable. All users involved (user[1-3]) are non-privileged users (not root). The Problem: When I change to another user, even though the SSH_AUTH_SOCK variable is set correctly (let's say it's set to /tmp/ssh-HbKVFL7799/agent.13799), user2 does not have access to the socket that was created by user1 - which of course makes sense, otherwise user2 could hijack user1's private key and hop around as that user. This scenario works just fine if, instead of getting a shell via sudo for user2, I get a shell via sudo for root, because naturally root has access to all the files on the machine. The question: Preferably using sudo, how can I change from user1 to user2, but still have access to user1's SSH_AUTH_SOCK?
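
    One widely used approach (a sketch, not the only answer) is to grant user2 access to the agent socket with POSIX ACLs before switching users; it assumes the filesystem holding /tmp supports ACLs:

        # run as user1 before switching; SSH_AUTH_SOCK is from the question's scenario
        setfacl -m u:user2:x "$(dirname "$SSH_AUTH_SOCK")"   # let user2 traverse the agent directory
        setfacl -m u:user2:rw "$SSH_AUTH_SOCK"               # let user2 read/write the socket itself
        sudo -i -u user2 env SSH_AUTH_SOCK="$SSH_AUTH_SOCK" ssh user3@machine2

    The trailing env is belt-and-braces: with SSH_AUTH_SOCK already in sudo's env_keep list, plain sudo -i -u user2 should carry the variable across on its own.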

    Read the article

  • How to whitelist external access to an internal webserver via Cisco ACLs?

    - by Josh
    This is our company's internet gateway router. This is what I want to accomplish on our Cisco 2691 router: All employees need to be able to have unrestricted access to the internet (I've blocked facebook with an ACL, but other than that, full access) There is an internal webserver that should be accessible from any internal IP address, but only a select few external IP addresses. Basically, I want to whitelist access from outside the network. I don't have a hardware firewall appliance. Until now, the webserver has not needed to be accessible externally... or in any case, the occasional VPN has sufficed when needed. As such, the following config has been sufficient: access-list 106 deny ip 66.220.144.0 0.0.7.255 any access-list 106 deny ip ... (so on for the Facebook blocking) access-list 106 permit ip any any ! interface FastEthernet0/0 ip address x.x.x.x 255.255.255.248 ip access-group 106 in ip nat outside fa0/0 is the interface with the public IP However, when I add... ip nat inside source static tcp 192.168.0.52 80 x.x.x.x 80 extendable ...in order to forward web traffic to the webserver, that just opens it up entirely. That much makes sense to me. This is where I get stumped though. If I add a line to the ACL to explicitly permit (whitelist) an IP range... something like this: access-list 106 permit tcp x.x.x.x 0.0.255.255 192.168.0.52 0.0.0.0 eq 80 ... how do I then block other external access to the webserver while still maintaining unrestricted internet access for internal employees? I tried removing the access-list 106 permit ip any any. That ended up being a very short-lived config :) Would something like access-list 106 permit ip 192.168.0.0 0.0.0.255 any on an "outside-inbound" work?
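
    A sketch of one possible ordering, assuming the usual IOS behavior that an inbound ACL on the outside interface is evaluated before the static NAT translation (so it matches the public address, not 192.168.0.52):

        ! whitelisted external range may reach the webserver's public address on 80
        access-list 106 permit tcp x.x.x.x 0.0.255.255 host x.x.x.x eq 80
        ! everyone else is denied port 80 to that address
        access-list 106 deny   tcp any host x.x.x.x eq 80
        ! Facebook denies as before, then allow everything else
        access-list 106 permit ip any any

    Internal users should be unaffected: return traffic for their outbound browsing comes back to ephemeral ports rather than port 80, so it falls through to the final permit.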

    Read the article

  • Is it the address bus size or the data bus size that determines "8-bit, 16-bit, 32-bit, 64-bit" systems?

    - by learner
    My simple understanding is as follows. Memory (RAM) is composed of bits; groups of 8 form bytes, each of which can be addressed, hence byte-addressable memory. The address bus carries the location of a byte of memory. If an address bus is 32 bits wide, that means it can hold up to 2^32 distinct numbers, and hence can refer to up to 2^32 bytes of memory = 4 GB, and any memory greater than that is useless. The data bus is used to send the value to be written to/read from memory. If I have a data bus 32 bits wide, a maximum of 4 bytes can be written to/read from memory at a time. I find no relation between this size and the maximum possible memory size. But I read here that: Even though most systems are byte-addressable, it makes sense for the processor to move as much data around as possible. This is done by the data bus, and the size of the data bus is where the names 8-bit system, 16-bit system, 32-bit system, 64-bit system, etc. come from. When the data bus is 8 bits wide, it can transfer 8 bits in a single memory operation. When the data bus is 32 bits wide (as is most common at the time of writing), at most 32 bits can be moved in a single memory operation. This says that the size of the data bus is what gives an OS the name: 8-bit, 16-bit and so on. What is wrong with my understanding?
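
    As a quick sanity check of how the two widths differ on a live Linux system (illustration only; the question itself is about naming conventions):

        getconf LONG_BIT                        # word size userland was built for: 32 or 64
        grep -m1 'address sizes' /proc/cpuinfo  # e.g. "address sizes : 36 bits physical, 48 bits virtual"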

    Read the article

  • Where do I connect the HDD LED wires on my RAID adapter?

    - by Giffyguy
    I'm using a Promise FastTrak TX8660 with RAID 5. The manual (and Google) just doesn't seem to explain how exactly to connect a standard two-pin HDD LED wire to the eight available pins on the card. The manual just says to connect the LED according to the following diagram: The card itself resembles the diagram: But it doesn't make any sense to me. All I have is a two-pin connector for the HDD LED on the front of my computer case. I don't need anything fancy like the fault LED or separate indicators for each drive. I just want to be able to see when my RAID 5 array is working, that's all. I don't know what the "R" and "G" stand for, but my HDD LED wires are red and white. I tried connecting the red wire to the "R" pin and the white wire to the "G" pin, but that just makes the LED on the front of my case light up indefinitely, even when the computer is idle. Which pins am I supposed to connect the HDD LED header to for basic activity indication?

    Read the article

  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no RSYNC, etc... (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared host environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board. So after talking with a Rackspace tech, he said they cannot guarantee that symlinks would always work. Obviously this is because if you have a symlink that uses an absolute path like this: //mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir If your files get moved to a different disk (or whatever they do), then the absolute path would be different and the link would now be broken. That makes sense. So next, I asked the Rackspace tech if relative-path symlinks were reliable. So if I have the following link: files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir You can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those would always work either. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they will break from absolute path changes? It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say that they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
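
    For what it's worth, a relative link stores only the literal path text and resolves it against the link's own directory at access time, so moving the whole tree together keeps it valid; a small demonstration with illustrative paths:

        mkdir -p sites/www.myothersite.com/anotherdir sites/www.mysite.com
        ln -s ../www.myothersite.com/anotherdir sites/www.mysite.com/mylink
        mv sites /tmp/relocated                         # simulate the host moving the tree
        readlink /tmp/relocated/www.mysite.com/mylink   # still ../www.myothersite.com/anotherdir
        ls /tmp/relocated/www.mysite.com/mylink/        # still resolves

    It would only break if the two sites' directories were moved independently of each other - which may be exactly what the tech couldn't rule out.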

    Read the article

  • Can I disable Chrome's auto-translate function for visitors to a given page on my site?

    - by Drew Noakes
    I know how to disable this feature for pages that I visit, but what I'm looking for is a way to tell other users' Chrome browsers not to offer translation of a particular page on my site. Is there some kind of meta tag I can use? Alternatively, can I indicate that a particular element on the page should not be translated? Reason: The controls which slide down from the top of the page cause my page to resize, which changes the content, which makes the controls slide up, which resizes the page, which changes the content, which causes the controls to slide back in. Rinse and repeat. The page dances. The page itself is a map, and the content it wants to translate is all proper names that shouldn't be translated anyway. If alternate names exist in other languages, I provide them myself. Generally I'm against taking away features from the browser that users might like, but in this case it really makes sense. So please don't answer saying that I shouldn't be doing this. I've weighed up the options.
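
    There is a documented page-level opt-out, plus a per-element class, both honored by Google's translation prompt (worth verifying against current Chrome behavior):

        <!-- page-level: suppress the translation bar for this document -->
        <meta name="google" content="notranslate">

        <!-- element-level: leave one block untouched while the rest may translate -->
        <div class="notranslate">Proper names on the map</div>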

    Read the article

  • How small (spec-wise) can a virtual machine be and still boot up and run some sort of OS?

    - by IllvilJa
    One of the advantages of virtual machines is that you can be very flexible with their sizes. If the host system permits it, you can have a very large virtual machine with a lot of virtual RAM and disk. You can also go the other way around and give the virtual machine a very modest amount of RAM and disk, then choose and configure the OS appropriately. The question is: how small a virtual machine have people managed to set up (and get to both boot up and run)? A virtual machine doing something useful is preferable, even if I know "useful" in this context is awfully subjective, but laboratory cases with a configuration stripped beyond common sense could be interesting as well, just to see what people manage to boot and run. It's quite an open-ended and academic question, but think of it: an extremely small VM (which still does something useful) takes very little memory and disk and can be quite quickly saved to and restored from disk. If it's also gentle on CPU resources, one might consider having a huge number of such VMs up and running on a host. (Imagine a VM running just an old Commodore 64 or Commodore Amiga in it. OK, way wrong CPU architecture for modern virtualization software running on an x86-based PC, but still an interesting thought. You could have quite a few such small VMs running on a modern PC.)

    Read the article

  • Configure Windows firewall to prevent an application from listening on a specific port

    - by U-D13
    The issue: there are many applications struggling to listen on port 80 (Skype, TeamViewer et al.), and for many of them that isn't even essential (in the sense that you can have an httpd running and blocking the HTTP port, and the other application won't even squeak about being unable to open the port). What makes things worse, some of the apps give you no way to configure which ports they use (that's what you get for using proprietary software) - you can either add the application to the Windows Firewall exceptions (and accept the undesired port-opening behaviour) or not (and risk losing most - if not all - of its functionality). Technically, it is not impossible for the firewall to deny an application a specific incoming port even if the application is in the exception list. And if this functionality is built into the Windows Firewall somewhere, there should be a way to activate it. So, what I want to know is: does such an option exist, and if it does, how do I activate it?
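
    On Vista and later, netsh can express exactly this - a block rule scoped to one program and one port, and block rules take precedence over allow rules (the program path is a placeholder; XP's firewall does not offer this granularity):

        :: stop one application from listening on TCP 80 while leaving its other traffic alone
        netsh advfirewall firewall add rule name="Keep Skype off port 80" ^
            dir=in action=block protocol=TCP localport=80 ^
            program="C:\Program Files\Skype\Phone\Skype.exe"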

    Read the article

  • compile kernel 2.6.34 for Ubuntu Lucid for xen dom0 / pvops

    - by andreash
    Hi there, I'd like to compile a recent Linux kernel (2.6.34) for my Ubuntu 10.04 Lucid Lynx AMD64 box, mainly because I'd like to use it as a dom0 kernel with the recent Xen 4. There's plenty of documentation on the web about how to compile a kernel 'Debian style'. But I think it would be nice to start with an 'official' Ubuntu config, to be sure not to miss any important things and avoid having to recompile over and over again. So what I'd like to do is compile 2.6.34, but starting with the 'official' /boot/config-2.6.32-XX from Ubuntu Lucid. The question is: how do I best do that? If I just take the config from 2.6.32, the new features from 2.6.33/34 won't be in the config. So what I'd like to do is somehow merge the 2.6.34 config with the original 2.6.32 one from Ubuntu. How can I best do that? Does it even make sense? Are there easier ways to achieve what I want? Thanks for your insight! A. PS: I just found a linux-image-2.6.32-bpo.4-xen-amd64 package on backports.org, but no information about it. Would it work as a dom0 kernel on Lucid?
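
    This merge is exactly what the kernel's own oldconfig target does; a sketch, with the exact config filename as a placeholder:

        cd linux-2.6.34
        cp /boot/config-2.6.32-24-generic .config   # seed with Ubuntu's shipped config
        make oldconfig       # keeps existing answers, asks only about options new since 2.6.32
        make menuconfig      # optional: switch on the Xen dom0/pvops options here
        make -j4 deb-pkg     # build installable .deb packages, Debian style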

    Read the article

  • Is something infecting my Google searches?

    - by hippietrail
    I started doing some experimentation toward making a browser userscript for Google searches and, when opening the JavaScript console, noticed something that strikes me as very fishy: The page at https://www.google.com.au/search?oq=XYZ&sourceid=chrome&ie=UTF-8&q=XYZ displayed insecure content from http://50.116.62.47/js/chromeServerV45.js. The page at about:blank displayed insecure content from http://96.126.107.154/amz/google.php?callback=a&q=XYZ&country=US. (XYZ is a placeholder for whatever the search terms really were.) Is it likely that I've picked up something like a drive-by browser infection? I've tried all kinds of searches for these URLs and other keywords, but I've had no luck finding anything conclusive about whether they're malicious or what they are: 50.116.62.47, chromeServerV45.js, 96.126.107.154, amz/google.php. The only extensions I have installed are either widely used or written by myself. But something else is strange and I'm not sure if it's just a coincidence. I updated my Windows Chrome browser today to version 23.0.1271.64 m, and now my Extensions tab as well as my Settings tab are blank, so I can't try disabling my extensions. Here's some discussion I've been able to find so far but not really understand or make sense of, for 96.126.107.154: "anomalous-javascript-pt2"

    Read the article

  • Poor Write Performance in VM inside Proxmox PVE 2.0

    - by sorsenne
    I am running PVE 2.0 on decent hardware (2 SATA HDDs as RAID 1, 12 GB RAM, i7 CPU), but the I/O performance is very poor inside the VM (Ubuntu 11.10 Server). The very same VM was copied to another server running plain Ubuntu Server with KVM and had better I/O performance. This is how the HDD is shown in the guest: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) ata1.00: ATA-8: ST3000DM001-9YN166, CC49, max UDMA/133 ata1.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA ata1.00: configured for UDMA/133 scsi 0:0:0:0: Direct-Access ATA ST3000DM001-9YN1 CC49 PQ: 0 ANSI: 5 sd 0:0:0:0: [sda] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB) sd 0:0:0:0: [sda] 4096-byte physical blocks sd 0:0:0:0: [sda] Write Protect is off sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA I tested with dd: $ dd bs=1M count=128 if=/dev/zero of=test conv=fdatasync 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 19.2222 s, 7.0 MB/s On the host, this same test averages 156 MB/s. PS: I am using VirtIO and see no errors in dmesg.
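
    One knob worth checking (a guess, not a diagnosis): the cache mode on the virtual disk, since with cache=none every fdatasync goes straight to the RAID 1 spindles. A sketch on the Proxmox host, with the VM ID and volume name as placeholders:

        # switch the guest's virtio disk to writeback caching, then re-run the dd benchmark
        qm set 100 --virtio0 local:100/vm-100-disk-1.raw,cache=writeback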

    Read the article

  • Why doesn't my symbolic link work?

    - by orokusaki
    I'm trying to better understand symbolic links... and not having very much luck. This is my actual shell output with username/host changed: username@host:~$ mkdir actual username@host:~$ mkdir proper username@host:~$ touch actual/file-1.txt username@host:~$ echo "file 1" > actual/file-1.txt username@host:~$ touch actual/file-2.txt username@host:~$ echo "file 2" > actual/file-2.txt username@host:~$ ln -s actual/file-1.txt actual/file-2.txt proper username@host:~$ # Now, try to use the files through their links username@host:~$ cat proper/file-1.txt cat: proper/file-1.txt: No such file or directory username@host:~$ cat proper/file-2.txt cat: proper/file-2.txt: No such file or directory username@host:~$ # Check that actual files do in fact exist username@host:~$ cat actual/file-1.txt file 1 username@host:~$ cat actual/file-2.txt file 2 username@host:~$ # Remove the links and go home :( username@host:~$ rm proper/file-1.txt username@host:~$ rm proper/file-2.txt I thought that a symbolic link was supposed to operate transparently, in the sense that you could operate on the file that it points to as if you were accessing the file directly (except of course in the case of rm, where the link itself is simply removed).
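
    What goes wrong: ln stores the target string verbatim, and a relative target is resolved against the directory containing the link - so proper/file-1.txt points at proper/actual/file-1.txt, which doesn't exist. A sketch of two fixes:

        # targets written relative to where the links will live
        ln -s ../actual/file-1.txt ../actual/file-2.txt proper/
        # or spell the targets out absolutely
        ln -s "$PWD/actual/file-1.txt" "$PWD/actual/file-2.txt" proper/
        cat proper/file-1.txt    # file 1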

    Read the article

  • Setting Up Apache as a Forward Proxy with Caching

    - by Karl
    I am trying to set up Apache as a forward proxy with caching, but it does not seem to be working correctly. Getting Apache working as a forward proxy was no problem, but no matter what I do it is not caching anything, to disk or memory. I already checked to make sure nothing is conflicting in the mods_enabled directory with mod_cache (ended up commenting it all out) and also I tried moving all of the caching related fields to the configuration file for mod_cache. In addition I set up logging for caching requests, but nothing is being written to those logs. Below is my Apache config, any help would be greatly appreciated!! <VIRTUALHOST *:8080> ProxyRequests On ProxyVia On #ErrorLog "/var/log/apache2/proxy-error.log" #CustomLog "/var/log/apache2/proxy-access.log" common CustomLog "/var/log/apache2/cached-requests.log" common env=cache-hit CustomLog "/var/log/apache2/uncached-requests.log" common env=cache-miss CustomLog "/var/log/apache2/revalidated-requests.log" common env=cache-revalidate CustomLog "/var/log/apache2/invalidated-requests.log" common env=cache-invalidate LogFormat "%{cache-status}e ..." # This path must be the same as the one in /etc/default/apache2 CacheRoot /var/cache/apache2/mod_disk_cache # This will also cache local documents. It usually makes more sense to # put this into the configuration for just one virtual host. CacheEnable disk / #CacheHeader on CacheDirLevels 3 CacheDirLength 5 ##<IfModule mod_mem_cache.c> # CacheEnable mem / # MCacheSize 4096 # MCacheMaxObjectCount 100 # MCacheMinObjectSize 1 # MCacheMaxObjectSize 2048 #</IfModule> <Proxy *> Order deny,allow Deny from all Allow from x.x.x.x #IP above hidden for this post <filesMatch "\.(xml|txt|html|js|css)$"> ExpiresDefault A7200 Header append Cache-Control "proxy-revalidate" </filesMatch> </Proxy> </VIRTUALHOST> Thank you once again!
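
    One frequent cause (a guess, assuming Debian/Ubuntu with Apache 2.2): CacheEnable is a silent no-op unless both the mod_cache framework and the mod_disk_cache store are actually loaded, which is easy to lose after commenting things out under mods-enabled:

        a2enmod cache disk_cache        # the framework plus the disk store
        apache2ctl -M | grep -i cache   # expect cache_module and disk_cache_module in the list
        /etc/init.d/apache2 restart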

    Read the article

  • MsMpEng.exe (Windows Defender?) uses a lot of CPU at startup and runs two instances on a single core

    - by dlamblin
    I'm using Windows XP Professional SP2 on a single-core AMD64 processor, and I've got two instances of MsMpEng.exe starting up when I start up and log in. They use 64 MB and 32 MB of RAM and 140 MB and 80 MB of virtual memory, and fluctuate around 80% CPU usage for about 5 minutes at startup. They are (I read) associated with Windows Defender, but I'm concerned about: There are two of them; everything I read generally reports only one. They might be scanning each other, and I want that to stop. They might be getting scanned by avgrsx.exe (AVG Free 8) (uses about 16 MB of virtual RAM). They might also be scanning moe.exe (associated with MS Live Mesh, which I'm considering getting rid of). Lastly, I have Microsoft Security Essentials; I don't know the process name associated there. My main concern (apart from the double instances) is that these are all trying to scan each other at once, except maybe moe.exe. This might seem legitimate but is likely a useless drain on resources. Have I made a mistake in having all of these installed, or is there a way to tell them not to do whatever they're doing that's taking 5+ minutes at startup? [I also have Google Desktop, but I'm keeping that.] Comment if none of this makes sense to you.

    Read the article

  • How To Map Fn+Key (Multiple Keys) To Other Key in Windows?

    - by Mohamed Meligy
    I've got a ThinkPad W530 laptop, which replaces the Application key (also known as the Menu key or right-click key, usually between Right Alt and Right Ctrl) with the Print Screen key. So I need to map Prt Scr to the Application key. That's easy; any key-mapping software like SharpKeys can do that. There are even a few threads on Super User about dozens of those. But then, the part that makes this question different from the others (which is why it's not a duplicate, I think) is that I also don't want to completely lose the Prt Scr key. I'm thinking about replacing it with either: Fn + Prt Scr, or Fn + F2. These seem to have no special meaning, so I'm not overriding anything, just adding functionality to either of them (one of them, not both) to be the new Prt Scr key. I couldn't find any key-mapping software that can detect or let me select more than one key to map, even when the other key is the Fn key (although they all can map the Fn key itself, without a combination). I know it may make sense why this restriction exists, but it'd be really useful if I could override it. Do you know any program that can do that? Thanks a lot.
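
    An AutoHotkey sketch of the nearest approximation; on most ThinkPads the Fn key is handled in firmware and never reaches Windows, which is likely why no mapping software can see Fn combinations, so plain F2 stands in here (sacrificing ordinary F2 is an assumption you may not want):

        ; PrtScr becomes the application/menu key
        PrintScreen::AppsKey
        ; F2 takes over screenshots
        F2::PrintScreen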

    Read the article

  • script calling script as other user

    - by viktor tron
    Using CentOS, I want to run a script as user 'training' as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root: #!/bin/bash exec >> /var/log/training_service.log 2>&1 setuidgid training training_command This last line is not good enough, since training_command needs the training user's environment to be set. su - training -c 'training_command' gives 'standard in must be a tty', as su makes sure a tty is present to potentially accept a password. I know I could make this disappear by modifying /etc/sudoers, a la "Bash & 'su' script giving an error 'standard in must be a tty'", but I am reluctant and unsure of the consequences. runuser - training -c 'training_command' gives runuser: cannot set groups: Connection refused. I found no explanation or resolution for this message. I am stuck. Is this something so hard to achieve? I appreciate all insight and guidance to best practice.
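
    A sketch that stays within daemontools, since setuidgid drops privileges but deliberately does not build a login environment; the variable values are assumptions to adjust for whatever training_command actually needs:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        # clear root's environment, set what the training user needs, then drop privileges
        exec env - HOME=/home/training USER=training LOGNAME=training \
            PATH=/usr/local/bin:/usr/bin:/bin \
            setuidgid training training_command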

    Read the article

  • Which database to use and system/db administration by layman [closed]

    - by blah
    So my friend and I got a brilliant ;) idea for a business. Since it is not predictable whether it will work out or not, we decided to keep costs as low as possible to start with - in particular, not to hire anyone. If it works out as expected, it will generate enough profit to hire professionals in a few months. But for the first few months we'll be doing everything by ourselves. He's a business/finance major, and I'm a software developer, so obviously I have to take care of IT :) It will be a webapp, written in Python/Django. My questions regarding this project: 1) What database should I choose? I'm experienced with Oracle, and have been working with SQL Server for a while, but both of them are too expensive (at least for now). Mine is developer experience; I've never done any DBA stuff. I'm looking for something free (as in beer). It looks like MySQL and PostgreSQL are the most popular in this sector. I would appreciate any comments on which DB to choose. I'm open to any suggestions (it doesn't have to be MySQL or PostgreSQL). Here's what I know about the data: it will be almost all dates and numbers, a little bit of text, searched mainly by dates, almost never updated - mostly inserted and browsed - with 30k to 300k new records/month. 2) Servers. My idea is to rent two dedicated servers. During normal operation one would be a web server (Debian/Apache), the other would be a DB server (Debian/?). My recovery plan is to install everything on both, and in case of trouble with one of the machines just run everything on the other one. Does it even make sense? Any other tips appreciated. Thanks.

    Read the article

  • Kerberos & localhost

    - by Alex Leach
    I've got a Kerberos v5 server set up on a Linux machine, and it's working very well when connecting to other hosts (using Samba, LDAP or SSH) for which there are principals in my Kerberos database. Can I use Kerberos to authenticate against localhost, though? And if I can, are there reasons why I shouldn't? I haven't made a Kerberos principal for localhost. I don't think I should; instead, I think the principal should resolve to the machine's full hostname. Is that possible? I'd ideally like a way to configure this on just one server (whether Kerberos, DNS, or SSH), but if each machine needs some custom configuration, that'd work too. e.g. $ ssh -v localhost ... debug1: Unspecified GSS failure. Minor code may provide more information Server host/localhost@EXAMPLE.COM not found in Kerberos database ... EDIT: So I had a bad /etc/hosts file. If I remember correctly, the original version I got with Ubuntu had two 127.0.x.x addresses, something like: 127.0.0.1 localhost and 127.0.1.1 hostname. For no good reason, I'd changed mine a long time ago to: 127.0.0.1 localhost and 127.0.0.1 hostname.example.com hostname. This seemed to work fine with everything until I tried out SSH with Kerberos (a recent endeavour). Somehow this configuration led to sshd resolving the machine's Kerberos principal to "host/localhost@\n", which I suppose makes sense if it uses /etc/hosts for forward and reverse DNS lookups in preference to external DNS. So I commented out the latter line, and sshd magically started authenticating with gssapi-with-mic. Awesome. (Then I investigated localhost and asked the question.)

    Read the article

  • Is basing storage requirements on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003, and is going to be migrated onto new hardware running SBS 2008. Currently I'm seeing, on average, 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of the activity is a 6-disk 7200 RPM RAID 6, and it struggles to keep up during high-traffic times (idle time often only 10-20%; response times peaking at 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at a 70/30 read-to-write ratio). I'm considering a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters for my calculations above, I get an average of 583 random IOPS/sec. Granted, SBS 2008 is not the same beast as 2003, but I'd like to assume it'll be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disk, and there's no ISA Server). Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something? I've read so much about recommended disk configurations for products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O headroom, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity if I just had a single, well-performing array. So, rather than make you do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (Given similar technology - not going from DAS to SAN or anything.)
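
    For reference, the usual back-of-the-envelope formula behind such numbers (the per-disk figures below are illustrative assumptions, not the question's exact inputs):

        # front-end IOPS ~= (disks * per-disk IOPS) / (read_frac + write_penalty * write_frac)
        echo "(6 * 80)   / (0.7 + 6 * 0.3)" | bc -l   # RAID 6,  penalty 6  -> ~192
        echo "(10 * 130) / (0.7 + 2 * 0.3)" | bc -l   # RAID 10, penalty 2  -> ~1000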

    Read the article

  • Duplicity on a ReadyNAS

    - by Jason Swett
    Has anyone here run Duplicity on a ReadyNAS? I'm trying but here's what I get: duplicity full --encrypt-key="ABC123" /home/jason/ scp://[email protected]//gob Invalid SSH password Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1) I've also found this post that says the "Invalid SSH password" message doesn't actually mean invalid SSH password. This would make sense because I'm not using an SSH password; I'm using a public key. I can ssh, ftp, sftp and rsync into my ReadyNAS just fine. (Actually, to be more accurate, I can get past authentication with ssh, ftp and sftp but I can't actually do anything past that. Regardless, that's enough to tell me that "Invalid SSH password" is bogus. Rsync works with no problems.) The post I found says the command will work as soon as the directory at the end of your scp command exists, but I don't know how to check for that. I know the share gob exists on my ReadyNAS and I know it's writable because I'm writing to it with rsync. Also, here is the verbose output: Using archive dir: /home/jason/.cache/duplicity/3bdd353b29468311ffa8485160da6873 Using backup name: 3bdd353b29468311ffa8485160da6873 Import of duplicity.backends.rsyncbackend Succeeded Import of duplicity.backends.sshbackend Succeeded Import of duplicity.backends.localbackend Succeeded Import of duplicity.backends.botobackend Succeeded Import of duplicity.backends.cloudfilesbackend Succeeded Import of duplicity.backends.giobackend Succeeded Import of duplicity.backends.hsibackend Succeeded Import of duplicity.backends.imapbackend Succeeded Import of duplicity.backends.ftpbackend Succeeded Import of duplicity.backends.webdavbackend Succeeded Import of duplicity.backends.tahoebackend Succeeded Main action: full ================================================================================ duplicity 0.6.10 (September 19, 2010) Args: /usr/bin/duplicity full --encrypt-key=ABC123 -v9 /home/jason/ scp://[email protected]//gob Linux gob 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:34:50 UTC 2010 i686 /usr/bin/python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39) [GCC 4.4.5] ================================================================================ Using temporary directory /tmp/duplicity-cridGi-tempdir Registering (mkstemp) temporary file /tmp/duplicity-cridGi-tempdir/mkstemp-ztuF5P-1 Temp has 86334349312 available, backup will use approx 34078720. Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' (attempt #1) State = sftp, Before = '[email protected]'s' State = sftp, Before = '' Invalid SSH password Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1) Any ideas as to what's going wrong?
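
    Following the linked post's theory that the error really means the remote target directory is missing, a quick test is to create it over sftp first (user and host are placeholders, since the question's own command line is obfuscated):

        echo "mkdir /gob" | sftp user@readynas      # pre-create the backup directory
        duplicity full --encrypt-key="ABC123" /home/jason/ scp://user@readynas//gob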

    Read the article
