Search Results

Search found 4177 results on 168 pages for 'lost in the sauce'.


  • Custom keyboard map is causing issues with stuck keys

    - by Grumbel
    I have a Microsoft Ergonomic 4000 keyboard and I am running a custom keymap (Dvorak with some additions for umlauts): http://pingus.seul.org/~grumbel/tmp/md5/b054e11505c88e1bfc6ebd5da46bdb78-xmodmap_pke http://pingus.seul.org/~grumbel/tmp/md5/f5e42a5b8ba4a034c5945f719b3d2608-xmodmap_pm This worked fine for years and still does, except that I am now having issues with a stuck Mode_switch key. When I hit Control_R and Mode_switch at the same time (which happens a lot by accident), the Mode_switch key gets into a 'stuck' state, and all letters I type afterwards come out in their umlaut form, as if Mode_switch were pressed. I can unstick Mode_switch by hitting Control_R and Mode_switch at the same time again, but that leaves Gnome in a broken state where it no longer reacts to my Gnome keyboard shortcuts. The key presses themselves are still registered by the window manager, as one can see changes in the applications (the cursor in Gnome Terminal turns into an unfilled rectangle, as if the application had lost focus), but they don't trigger the bound action. Does anybody have a clue what could be causing this, or an idea of how I could debug it? xev doesn't seem to help here, as it reports normal KeyPress/KeyRelease events even when the key is stuck. Also, the Gnome key bindings don't get reported at all when it's in the 'broken' state; I assume they are captured by the window manager before they even reach xev. I am using Ubuntu 10.04 with Gnome and Metacity, and I have disabled all OpenGL-related effects, so Compiz shouldn't interfere. Some general info on which applications are involved in Gnome's key binding handling would be helpful as well; I assume it's Metacity, but restarting Metacity doesn't fix the issue.
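
    One way to poke at this (a sketch, not from the original post; the local file paths and layout name are assumptions) is to dump the modifier map while the key is stuck, reset to a stock layout to clear the state, and then re-apply the custom map:

        # Sketch: inspect and clear a stuck Mode_switch, then restore the custom map.
        xmodmap -pm                   # print the current modifier assignments
        setxkbmap -layout us          # reset to a stock layout, clearing the stuck state
        xmodmap ~/xmodmap_pke         # re-apply the custom keymap (local copies of the
        xmodmap ~/xmodmap_pm          # two files linked above; paths are assumptions)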

    Read the article

  • Two servers, two domains, one ip. mod_proxy beginner

    - by Gutsav
    I run two virtual web servers (both running Apache2 on Debian). I have just one external IP but two domains, and I want one domain going to each of the servers. I understand that I need a reverse proxy, and I enabled both the mod_proxy and the mod_proxy_http modules on the "primary server". Do I need to enable anything on the "secondary server"? I also understand that I need to put something in a virtual host file, but what? On the primary server, I have a virtual host file for one of the domains, and some for subdomains. I want domain1.tld to go to the primary server (port 80 is forwarded to it, so that works) and domain2.tld to go to the other server (internal IP 192.168.0.x). No ports need to be forwarded to it, right? So, what do I add, and in which virtual host file? Or a new one? Other questions suggest adding ProxyPass and ProxyPassReverse, but I'm lost anyway, and I just don't understand the Apache documentation. Thanks in advance
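
    For reference, a minimal sketch of what the new vhost file on the primary server could look like (the domain names and 192.168.0.x address are the placeholders from the question; nothing extra needs enabling on the secondary server):

        # Sketch: contents of /etc/apache2/sites-available/domain2.tld on the
        # primary server. Assumes mod_proxy and mod_proxy_http are already enabled.
        <VirtualHost *:80>
            ServerName domain2.tld
            ProxyPreserveHost On
            ProxyPass / http://192.168.0.x/
            ProxyPassReverse / http://192.168.0.x/
        </VirtualHost>
        # then: sudo a2ensite domain2.tld && sudo /etc/init.d/apache2 reload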

    Read the article

  • Looking for ballpark pricing on an affordable Cisco VOIP solution for our office

    - by guytech
    We have about 8 incoming PSTN lines that are currently on an old and antiquated Nortel Meridian ICS system. This system has been giving us some grief. We're looking for a new VOIP solution. I've been looking at a Cisco solution and it does seem pricey, but I'm sure it's effective. Unfortunately, we probably can't afford a Cisco Unified Communications 520, which seems to be the ideal solution. We have about 15 people who need an extension and voicemail. We really don't have any need for a fancy system, just an auto attendant of some sort when people call us. It looks like we'll have to get an older router and an add-on card to get best-value pricing for what we're looking for. However, I don't know a lot about Cisco voice products, so I'm a bit lost as to what to get. The only thing I am sure of is the pricing on VOIP phones, which we expect to be about ~$100-200. However, I'm not sure which pieces of VOIP infrastructure to get. Any advice? I am familiar with Asterisk, but right now I'm looking at pricing for a Cisco solution.

    Read the article

  • Ways to recover data from external hard drive

    - by Howard Benson
    I use an external hard disk for backing up my Mac with Time Machine (OS X 10.5.8). I did something wrong and found important folders in the Trash. These folders come from the external HD; they are backup folders (Backups.backupdb) and others. I tried to restore them by dragging and dropping. Some of them came back to the external HD after a while. For the others it takes hours at "preparing to copy" and then it says there's no space to copy on the external HD. It's strange: the files are now in the Trash (180 GB), so the external HD should have a lot of free space, but it doesn't. The space isn't freed even though the files are in the Trash. Any advice? I'm also unable to use Time Machine now (and I have "lost" old backups) for the same reason: the external HD says it has no free space. Thanks
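
    A possible explanation (a sketch, not from the original post): files trashed from an external volume stay on that volume, in a hidden .Trashes folder, so the space is not released until the Trash is emptied. Assuming the disk mounts as /Volumes/ExtHD (a placeholder name), you can check from Terminal:

        # Sketch: see how much space the external disk's own trash folder is holding.
        du -sh /Volumes/ExtHD/.Trashes
        df -h  /Volumes/ExtHD
        # Emptying the Trash (only after anything still needed has been restored)
        # is what actually frees that space on the external disk.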

    Read the article

  • "Network Error - 53" while trying to mount NFS share in Windows Server 2008 client

    - by Mike B
    CentOS | Windows 2008 I've got a CentOS 5.5 server running nfsd. On the Windows side, I'm running Windows Server 2008 R2 Enterprise. I have the "File Services" server role enabled and both Client for NFS and Server for NFS are on. I'm able to successfully connect/mount the CentOS NFS share from other Linux systems but am experiencing errors connecting to it from Windows. When I try to connect, I get the following: C:\Users\fooadmin>mount -o anon 10.10.10.10:/share/ z: Network Error - 53 Type 'NET HELPMSG 53' for more information. (IP and share name have been changed to protect the innocent :-) ) Additional information: I've verified low-level network connectivity between the Windows client and the NFS server with telnet (to NFS on TCP/2049) so I know the port is open. I've further confirmed that the necessary inbound and outbound firewall rules are in place and enabled. I came across a Microsoft tech note that suggested changing the "Provider Order" so "NFS Network" is above other items like Microsoft Windows Network. I changed this and restarted the NFS client - no luck. I've confirmed that the share folder on the NFS server is readable/writable by all (777). I've tried other variations of the mount command like: mount 10.10.10.10:/share/ z: and mount 10.10.10.10:/share z: and mount -o anon mtype=hard \\10.10.10.10:/share * No luck. As per the command output, I tried typing NET HELPMSG 53 but that doesn't tell me much, just "The network path was not found". I'm lost on how to proceed with troubleshooting. Any ideas?
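
    A quick sanity check from the Linux side (a sketch; the IP and export path are the placeholders from the question) is to confirm the export and its supporting RPC services are actually visible:

        # Sketch: run against the CentOS server to verify the export is reachable.
        showmount -e 10.10.10.10      # should list /share and the allowed clients
        rpcinfo -p 10.10.10.10        # portmapper, mountd and nfs must all be registered
        # And on the server itself:
        exportfs -v                   # confirms what /etc/exports is actually exporting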

    Read the article

  • Can start-stop-daemon use environment variables?

    - by scottburton11
    I need to daemonize a Windows app running in Wine, and create a pid file in /var/run. Since it requires an X11 session to run, I need to make sure the $DISPLAY variable is set in the running user's environment. Assuming I already have an X11 session running, with a given display, here's what the start-stop-daemon line looks like in my /etc/init.d script: start-stop-daemon --start --pidfile /var/run/wine-app.pid -m -c myuser -g mygroup -k 002 --exec /home/myuser/.wine/drive_c/Program\ Files/wine-app.exe Unfortunately, my version of start-stop-daemon on Ubuntu 8.04 doesn't have the -e option to set environment variables. I gather that you could simply set $DISPLAY before the command, like so: VAR1="Value" start-stop-daemon ... But it doesn't work. Since I'm using the -c {user} option to run as a specific user, I'm guessing there's an environment switch and VAR1 is lost. I've tried exporting DISPLAY from the running user's .profile and/or .bashrc, but that doesn't work either. Is there another way to do this? Is this even possible? Am I overlooking something? Many thanks
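
    One common workaround (a sketch, untested against this exact init script; the :0 display, the wrapper path, and the explicit wine invocation are assumptions) is to have start-stop-daemon launch a tiny wrapper that exports DISPLAY itself, so the variable survives the user switch:

        # Sketch: /usr/local/bin/wine-app-wrapper, a small wrapper that carries
        # DISPLAY across the -c/--chuid switch before starting the real binary.
        #!/bin/sh
        export DISPLAY=:0
        exec wine "/home/myuser/.wine/drive_c/Program Files/wine-app.exe"

        # The init script then points --exec at the wrapper instead of the .exe:
        start-stop-daemon --start --pidfile /var/run/wine-app.pid -m \
            -c myuser -g mygroup -k 002 --exec /usr/local/bin/wine-app-wrapper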

    Read the article

  • Ubuntu 10.04->10.10 in failed state - how to recover?

    - by Harvey
    I was running Ubuntu 10.04 and attempted to upgrade to 10.10. I have a really slow connection (DSL, 128 kbit/s) and copying the upgrade files took about 26 hours, so I of course let it run unattended. When I came back, I noticed the following three dialogs: (1) "Could not install the upgrades The upgrade has aborted. Your system could be in an unusable state. A recovery will run now (dpkg --configure -a)." (2) "gpk-update-icon Distribution upgrades available maverick 10.10 (stable) [more information] [Do not show this again] [Cancel] [Ok]" (3) "gpk-update-icon Security updates available The following important updates are available for your computer: libwebkit-1.0-2-dbg - Web content engine library for Gtk+ - Debugging symbols libcupsimage2 - Common UNIX Printing System(tm) - Raster image library ..." What is the best response to all of this? I went through something similar in an attempted network upgrade from 8.04 to 10.04 and had to reload the unbootable machine fresh from distribution media (all data was lost). I'd like to avoid that here. I have not yet responded to the dialogs, and want to make sure the system is still bootable and not lose my data this time.
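
    Letting the recovery run and then finishing the job from a terminal before rebooting is usually the safest path; a rough sketch (not from the dialogs themselves) of the usual sequence:

        # Sketch: finish the interrupted upgrade before rebooting.
        sudo dpkg --configure -a       # complete any half-configured packages
        sudo apt-get -f install        # fix broken or missing dependencies
        sudo apt-get dist-upgrade      # pull in whatever parts of 10.10 are still missing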

    Read the article

  • How to log kernel panics without KVM

    - by Spacedust
    My server is crashing and I can't find out why. It all started after my datacenter upgraded the RAM from 16 GB to 32 GB. I also found messages like these in dmesg - they started to show up just before the first kernel panic: EXT4-fs error (device md2): ext4_ext_find_extent: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0) EXT4-fs error (device md2): ext4_ext_remove_space: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0) EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 20974: 8589 blocks in bitmap, 54896 in gd JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash. EXT4-fs error (device md2): ext4_ext_split: inode #97911179: (comm pdflush) eh_entries 28769 != eh_max 26988! EXT4-fs (md2): delayed block allocation failed for inode 97911179 at logical offset 1039 with max blocks 1 with error -5 This should not happen!! Data will be lost EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 21731: 5 blocks in bitmap, 60762 in gd JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash. My system is CentOS 5.8 64-bit with the latest kernel, 2.6.18-308.20.1.el5. How can I find out the reason for a kernel panic without having access to a KVM? I have told my datacenter admins to check the memory in the server.
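
    One way to capture the panic without console access (a sketch; the addresses, interface name and MAC are placeholders) is netconsole, which streams kernel messages, including the panic trace, to a remote syslog host over UDP:

        # Sketch: send kernel/panic output to a remote log host via netconsole.
        # 192.168.1.100 = this server, 192.168.1.5 = the machine collecting the logs.
        modprobe netconsole netconsole=6666@192.168.1.100/eth0,514@192.168.1.5/00:11:22:33:44:55
        # Optionally make oopses panic and have the box reboot itself afterwards:
        echo 1  > /proc/sys/kernel/panic_on_oops
        echo 30 > /proc/sys/kernel/panic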

    Read the article

  • Windows 7 turns itself off every day at about 9am

    - by Radek
    I was given this computer at work, and after a week or so this strange thing started to happen in the middle of doing something :-( It turns itself off at about 9:10am every morning, just once a day, and it works fine after it has had its little nap. It happens whether the computer was on all night or I turned it off before leaving the office. I tried swapping the memory, as I was told there was an issue with it, but moving or removing any memory did not make any difference. I am not aware of having installed any program that could cause this. I installed AVG and set it up to do a daily scan at about 8am; if a restart is needed, it requires user confirmation. 'The Software Protection service has stopped' is logged a few minutes before it turns itself off, but that message also appears at other times without the computer turning off. I turned off Windows automatic updates as the first thing when I got the computer. Configuration: Windows 7 Ultimate, Intel Core2 Quad Q9550 at 2.8 GHz, 8 GB RAM, ST31000528AS Barracuda 7200.12 SATA 3Gb/s 1TB. Update: the messages below are logged when I turn the computer on again. Message 1: The previous system shutdown at 9:08:54 AM on 6/29/2010 was unexpected. Message 2: Level: Critical, Source: Kernel-Power, Event ID 41, Task Category (63): The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly. Log after manual restart. Update 2 (task scheduler)

    Read the article

  • How do I restore to a delta file (disk) on VMware ESXi

    - by Oscar
    Using VMware ESXi (the free version), I have a virtual machine (a Win 2k3 R2 server). When I first provisioned it I took a snapshot of it. I recently tried to clone the primary drive using my standard hardware-based method to grow a Windows disk (using Knoppix, clone the drive to a new drive, make it bootable, then extend the partition via diskpart from within Windows). This process failed; I tried setting the cloned drive (via the VMware GUI) to replace the original drive, boot, and be done. This didn't work out so well: the machine never booted. I checked the boot order, the disk location and all the basics I usually do. As a failsafe, I then tried changing all the settings back so the machine would boot to the original drive and I could figure out (as I eventually did) a better way of growing the disk. However, when I powered on the machine with the original drive, it reverted back to that initial snapshot I created; it lost all the changes made since. I looked in the file system and found a few files; I think the key file here is one named "delta" and I'm assuming that's the disk I want, but I can't find a way to have the virtual machine actually use that drive/file. It isn't available to add when I go to add an existing drive. Do I need to somehow commit that delta to the original drive and then boot from it again? Can you point me in the right direction? I've since discovered the proper way of growing drives using "vmkfstools", but I need to get back to the original state of the machine to try this out. Any help would be greatly appreciated.
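
    A rough sketch of consolidating the snapshot chain from the ESXi console (the file names are placeholders; the thing to clone is the highest-numbered vmname-00000N.vmdk descriptor, not the -delta file itself):

        # Sketch: clone the snapshot chain into a standalone disk, then attach that.
        cd /vmfs/volumes/datastore1/myvm
        vmkfstools -i myvm-000001.vmdk myvm-recovered.vmdk
        # Afterwards, edit the VM's settings (or the .vmx) to boot from myvm-recovered.vmdk.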

    Read the article

  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I am connecting to the remote Linux server using SSH (specifically, I am using a Windows program called SSH Secure Shell Client version 3.3.2). Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I am not sure, but I think that it fails because perhaps my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the internet connection goes out for a second or two during that long time. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer. In other words, sometimes I get lucky and the download finishes before the internet connection drops for a second. Is there any way that I can download the file in an intelligent way -- whereby the operating system or software "knows" where it left off and can resume from the last point if a break in the internet connection occurs? Perhaps it is possible to download the file in sections? Although I do not know if I can conveniently split my file into multiple files -- I think this would be very difficult, since the file is binary and is not human-readable. As it is now, if the entire ~35 GB file download doesn't finish before the break in the connection, then I have to start the download over and overwrite the ~5-20 GB chunk that was downloaded locally so far. Do you have any advice? Thanks.
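
    One approach is a transfer tool that can resume instead of restarting; a sketch using rsync over ssh (this assumes something like Cygwin or cwRsync on the XP side, and the host name and path are placeholders):

        # Sketch: resumable copy over ssh. --partial keeps the partly downloaded
        # file, so re-running the same command after a dropped connection
        # continues roughly where it left off instead of starting over.
        rsync --partial --progress -e ssh user@remote-host:/path/to/bigfile ./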

    Read the article

  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force windows to create profiles for members of one active directory group in a different folder from members in another active directory group? The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive such that every time you restart the machine the disk is identical to it was at the time you froze it. This is a bit different than restoring to an image, in that it never really wrote changes to disk in a permanent way in the first place. This has a few advantages over images: faster recover times, and it's easy to thaw the machine for a few minutes to perform maintenance such as windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots. One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen. And even then, any changes you make to your profile while working on a frozen machine are lost. As students have frequent legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved from time to time in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as public computers. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers. The solution I would like to use is to configure Windows via group policy (or even manually, if that's the way I'll have to do it) to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?

    Read the article

  • Inkscape: Copying an object, retaining transparency

    - by dpk
    I'm looking for a way to copy objects from one window to another without losing the surrounding transparency. I have two Inkscape windows. The setup is pretty simple. In the first window I draw a filled circle and a filled rectangle in it, with the circle set on top of the rectangle to show that the area around the circle is transparent (that is, you can see the rectangle "under" the circle, see screenshot 1, left). In the second window I just drew a filled rectangle (screenshot 1, right). When I copy the circle from window 1 to window 2 the transparency around the circle is lost (screenshot 2). I've verified that the backgrounds of the documents are 0% alpha/white. This is a rather contrived example but is readily reproducible. The real graphics I am working with have a bunch of objects all in a single group, but I have the same results. I feel like I'm missing something. The circle no longer behaves like a circle at its destination. Instead, it acts kind of like a bitmap. I'm definitely not using the bitmap copy feature.

    Read the article

  • Courier-imap login problem after upgrading / enabling verbose logging

    - by halka
    I updated my mail server last night, from Debian etch to lenny. So far I've encountered a problem with my Postfix installation, mainly that I managed to break IMAP access somehow. When trying to connect to the IMAP server with Thunderbird, all I get in mail.log is: Feb 12 11:57:16 mail imapd-ssl: Connection, ip=[::ffff:10.100.200.65] Feb 12 11:57:16 mail imapd-ssl: LOGIN: ip=[::ffff:10.100.200.65], command=AUTHENTICATE Feb 12 11:57:16 mail authdaemond: received auth request, service=imap, authtype=login Feb 12 11:57:16 mail authdaemond: authmysql: trying this module Feb 12 11:57:16 mail authdaemond: SQL query: SELECT username, password, "", '105', '105', '/var/virtual', maildir, "", name, "" FROM mailbox WHERE username = '[email protected]' AND (active=1) Feb 12 11:57:16 mail authdaemond: password matches successfully Feb 12 11:57:16 mail authdaemond: authmysql: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null> Feb 12 11:57:16 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null> ...and then Thunderbird proceeds to complain that it can't log in / lost the connection. Thunderbird is definitely not configured to connect through SSL/TLS. POP3 (also provided by Courier) is working fine. I've mainly been looking for a way to make the courier-imap logging more verbose, as can be seen for example here. Edit: Sorry about the mess - I've found that I'd been funneling the log through grep imap, which naturally didn't display entries for authdaemond. The verbose logging configuration entry is found in /etc/courier/imapd under DEBUG_LOGIN=1 (set to 1 to enable verbose logging, set to 2 to also dump plaintext passwords to the logfile. Careful.)
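
    Given that finding, a quick sketch for actually flipping the switch and watching the unfiltered log (file locations are the Debian defaults):

        # Sketch: enable verbose IMAP login logging and restart the daemons.
        sed -i 's/^DEBUG_LOGIN=.*/DEBUG_LOGIN=1/' /etc/courier/imapd
        /etc/init.d/courier-imap restart
        /etc/init.d/courier-imap-ssl restart
        tail -f /var/log/mail.log      # watch everything, not just lines matching "imap"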

    Read the article

  • Access denied to external USB disk; update access rights fails in Windows 8

    - by gerard
    I used to work with two laptops (Windows Vista and Windows 7), with my work files on an external USB disk. My oldest laptop broke down, so I bought a new one; I had no option other than to take Windows 8. I suspect something changed with access rights, as my external disk started suffering "access denied" problems on Windows 8. I was prompted (by Windows 8) to fix the access rights, which I tried to do by going to Properties - Security. This process was very slow and ended up saying "disk is not ready". Additionally, my external USB disk somehow was not recognized anymore. Back on Windows 7, I was warned that my disk needed to be verified, which I did. In this process, some files were lost (most of them I could recover from the found00x folder, but I have a backup anyway). Also, I don't know why, but under Windows 7 all the folders showed with a lock. Then back again to Windows 8: same problem, access denied to my disk, and no way to change the access rights as it gets stuck at "disk is not ready". Now I am pretty sure there is some kind of bug or inconsistency between Windows 8 and Windows 7. I repeated steps 2 and 3 a few times. At some point, I also got an access denied in Windows 7. I could restore access rights on the disk to "System" (Properties - Security - Edit, full control for the group "System"). But then I still get the same access rights problem on Windows 8, and get stuck in the process of restoring full control to the "System" and "Admin" groups. I upgraded Windows 8 with the available Windows 8 updates; it does not help.

    Read the article

  • Outlook locking network connection/session?

    - by HaydnWVN
    Scenario: We have an 'automatic orders' machine sat in the corner running XP with Outlook 2003. Its job is to check for new emails on a specific account; when it encounters one, it checks the e-mail body for specific wording to determine which customer it is from (using a macro), then it checks the attachment for specific order codes before parsing the attachment to create a .csv file (which is then e-mailed on to one of the sales team) and importing the .csv into our bespoke ERP/sales order system to create an order. Problem: Periodically the machine will show symptoms of a lost network connection (unable to connect to any network source) - sometimes after several days, sometimes after over a week. The volume of emails/orders processed does not seem to be linked. Additional info: The machine's .pst is stored on a mapped network location. The .csv created is stored on a mapped network location. This is a workgroup, not a domain. All network drives are Samba shares from an Ubuntu file server. Our bespoke system runs from a database (MySQL) on an Ubuntu server. Our troubleshooting so far: I have switched machines (the previous one was Win2000) with the same symptoms. Restarting the machine FIXES the problem. Closing Outlook and then end-tasking an Outlook.exe background process FIXES the problem. If you close Outlook without end-tasking the background process, Outlook will not reopen (saying it cannot find the .pst file, and it will not open any network location). Does Outlook have some kind of 'max session' limit tied to network activity that is not being closed after a mail request? Could AutoArchive be causing this? Is there a tool to check/display what each Outlook.exe process is doing? I have not found many ways to troubleshoot this yet, as it is so infrequent...

    Read the article

  • IPv6 seems to be enabled - how do I configure it without interfering with IPv4?

    - by Mister IT Guru
    I noticed that some of my CentOS boxes have IPv6 enabled and seem to have addresses. I have no problem with this, but I would like to get a handle on it, and even connect to them using IPv6. This would really help if, for any reason, DHCP has a hiccup. But I'm a bit lost as to where the configuration on my CentOS box is. (I am also researching this on Google, but I like Server Fault! :) ) I am hoping that I would be able to log into these via the VPN, because every now and then that DHCP device has a bad morning and needs to be restarted. (I'm also looking into this issue, but someone else handles that - management separation gone mad!) It's a remote client, so it would be a lot easier for me to connect to these systems, which seem to self-configure, and use one as a pivot via SSH tunnels to get to other remote devices and continue to manage them while our main route is fixed. I guess my questions are: how can I configure IPv6 without interfering with IPv4, and, on CentOS, can I influence this autoconfiguration I seem to be seeing?
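
    A sketch of a static IPv6 setup on CentOS 5 that leaves the IPv4/DHCP configuration alone (the 2001:db8:: prefix is a documentation-range placeholder and eth0 is an assumption):

        # Sketch: enable IPv6 globally and give eth0 a fixed address, without
        # touching any of the existing IPv4 settings.
        echo 'NETWORKING_IPV6=yes'      >> /etc/sysconfig/network
        echo 'IPV6INIT=yes'             >> /etc/sysconfig/network-scripts/ifcfg-eth0
        echo 'IPV6ADDR=2001:db8::10/64' >> /etc/sysconfig/network-scripts/ifcfg-eth0
        echo 'IPV6_AUTOCONF=no'         >> /etc/sysconfig/network-scripts/ifcfg-eth0
        service network restart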

    Read the article

  • Alfa AWUSO36H 1W dysfunctional driver

    - by BrainStorm
    I recently purchased an Alfa AWUSO36H 1W wireless USB adapter for my notebook, in order to improve signal strength and quality. I'm currently using Linux Mint 11, and it uses the RTL8187 driver for this adapter. I'm also using a 4 dBi antenna, though I have others. The problem is that this adapter does exactly the opposite of what it should: my internal Broadcom BCM4313 adapter actually works way better than the Alfa. Browsing is slow, some network applications don't even work, and pings against google.com on the internal adapter run smoothly, while on the Alfa I get 25% packet loss or more! I'm less than 50 feet from my AP; the internal adapter gets 44/70 link quality, and the Alfa gets around 60/70 (iwconfig output). Also, the system always sets the Alfa's power to 20 dBm (100 mW), so I have to do sudo iw set reg B0 to make it 30 dBm (1000 mW), but apparently with no significant change. I've installed the wireless-compat drivers - no change either. And worst of all, in Windows 7 it works way more smoothly for browsing, though I couldn't test it properly there. I hope it's a driver problem; even if it's a pain for a starter to find/compile Linux drivers, I prefer that to a hardware problem where I would need to buy another adapter, since I have no money left (except for the cantenna pieces).
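
    For what it's worth, a small sketch of the usual form of those commands (note the argument order, iw reg set, and the country code "BO" rather than "B0"; wlan1 is a placeholder interface name):

        # Sketch: set a permissive regulatory domain and raise the transmit power.
        sudo iw reg set BO
        sudo iwconfig wlan1 txpower 30
        iwconfig wlan1                 # confirm Tx-Power and link quality afterwards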

    Read the article

  • Internet Explorer / Windows 7 does not want to show HTML file from local network drive

    - by Jaanus
    Setup: I have Windows 7 running inside VirtualBox on Mac OS X host. I have a shared drive with some HTML files, that I am mounting as a local drive W: in Windows, from the VirtualBox server \VBOXSVR. I want to look at them with a browser in Windows. Chrome in Windows 7 opens and shows those HTML files just fine (file:///W:/welcome.html). But Internet Explorer does not, and shows this error instead of the files: Internet Explorer cannot display the web page What you can try: [button Diagnose Connection Problems] More information This problem can be caused by a variety of issues, including: Internet connectivity has been lost. The website is temporarily unavailable. The Domain Name Server (DNS) is not reachable. The Domain Name Server (DNS) does not have a listing for the website's domain. If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section. For the internet zone in the status bar, it shows: Internet | Protected Mode: On IE settings are a mystery to me, and I could possibly get it to work by tweaking IE settings, but I don't know which ones. How do I make IE show the same files that Chrome is happy to show? (Chrome showing them means that the files themselves are fine, there is something about the setup that just makes IE be a diva.)

    Read the article

  • A few basic questions on webhosting (nameservers & DNS records)

    - by claws
    I bought a domain name on name.com and I want to use the free webhosting on 110mb.com. By default, name.com integrates Google Apps services. The name server entries are ns1.name.com ns2.name.com ns3.name.com ns4.name.com. When I registered on 110mb.com it gave me two addresses, ns1.110mb.com and ns2.110mb.com. This is where I'm lost. The concept is that "the domain name should point to an address of the server where the website is hosted", right? Then why are there these 4 entries by default? How exactly is it working? Should I remove these 4 and then add the 110mb.com servers, or just append the 110mb.com server addresses to the name.com ones? I would like to keep using Google Apps. If I change these name server addresses, would that remove Google Apps? I especially want to use Google's email service. And I really don't understand what CNAME, MX, and the rest are. I want to learn about this stuff and how exactly it works, but when I search for webhosting tutorials I'm unable to find any fruitful results.
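
    Broadly, you either keep name.com's nameservers (which preserves the Google Apps mail records) and add an A record pointing the web host at 110mb, or you switch the NS entries to ns1/ns2.110mb.com and recreate the mail records there. A small sketch for inspecting what the records currently say (yourdomain.tld is a placeholder):

        # Sketch: see what the world currently resolves for the domain.
        dig +short NS yourdomain.tld        # which nameservers answer for the domain
        dig +short A  www.yourdomain.tld    # where the website name points
        dig +short MX yourdomain.tld        # where mail is delivered (Google Apps lives here)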

    Read the article

  • Servers / RAM for a social network - how many?

    - by Marty
    I am launching my social network soon and am looking into hosting. The question I am lost on is: do I need separate servers for the web, the database, and image handling (since there is photo sharing), or does one server handle it all? Also, is more RAM better? If I get 50 GB of RAM, is that better than having 8 GB? EDIT: It is PHP (CodeIgniter) and MySQL for now (I may switch to a NoSQL DB later if demand calls for it). I will also be using memcache. Concept-wise it is similar to Yelp: geographic-based, with lots of user content and image sharing, plus live feeds and privacy levels. The user plan is an open question; without testing the demand for this I can't give a number. But the concept is unique (no one out there has the set of features I am releasing), so it could grow. Ideally I want to plan for handling about 1-2 million views/month from launch. If it goes above that, then I will upgrade.

    Read the article

  • Deploying a Django application in a virtual Ubuntu Server

    - by mfsaint
    I have a VirtualBox machine running Ubuntu Server 10.04 LTS. My intention is for this machine to work like a VPS; this way I can learn and prepare for when I get a VPS service. Apache + mod_wsgi for deploying the Django app seems the right choice to me. I have the domain (marianofalcon.com.ar) but nothing else, no DNS. The problem is that I'm pretty lost with all the deployment stuff. I know how to configure mod_wsgi (with the django.wsgi file) and Apache (creating a VirtualHost). Something is missing and I don't know what it is; I think that I lack networking skills and that's the big problem. Trying to host the app in VirtualBox adds some difficulty because I don't know what IP to use. This is what I've got - file placed at /etc/apache2/sites-available: NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] ServerName www.my-domain.com ServerAlias my-domain.com Alias /media /path/to/my/project/media DocumentRoot /path/to/my/project WSGIScriptAlias / /path/to/your/project/apache/django.wsgi ErrorLog /var/log/apache2/error.log LogLevel warn CustomLog /var/log/apache2/access.log combined </VirtualHost> django.wsgi file: import os, sys wsgi_dir = os.path.abspath(os.path.dirname(__file__)) project_dir = os.path.dirname(wsgi_dir) sys.path.append(project_dir) project_settings = os.path.join(project_dir,'settings') os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler()
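
    What tends to be missing at this point is less the Apache config than the glue around it; a sketch, assuming the vhost file is saved as mysite, the VM uses bridged (or host-only) networking, and 192.168.1.50 stands in for its address:

        # Sketch: enable the module and the site, then reload Apache.
        sudo a2enmod wsgi
        sudo a2ensite mysite
        sudo /etc/init.d/apache2 reload
        # With no DNS yet, point the name at the VM on each client machine for testing:
        echo "192.168.1.50  www.marianofalcon.com.ar marianofalcon.com.ar" | sudo tee -a /etc/hosts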

    Read the article

  • Autounmounting USB keys with FAT filesystem on Linux (RHEL5)

    - by niXar
    For security reasons, I have two workstations in front of me, and I can only transfer data between them via a USB key. As you can imagine, this gets tiresome quickly, but the most annoying part is having to unmount the keys before removing them. Not unmounting them results in missing files most of the time, even if I remove them a while after having last written to them. Now, since they're only used for transferring smallish files, and each file is basically written once and read once, I don't need the fancy caching infrastructure that makes clean unmounting a necessary step. And since the data is always a copy of something I have at hand, I don't care if the filesystem croaks from time to time. But the system doesn't need to force that on me anyway; it could simply make sure everything is committed within a second, and work synchronously. Then when I remove the key, nothing is lost. Is there a way to do this? I would appreciate any other tips on handling this situation. Edit: it appears the situation has changed between RHEL5 and Fedora up to F11 on one hand, and F12 on the other. The latter uses DeviceKit-disks, and I haven't quite figured out how to do this there. The method provided below via gconf does not work anymore.
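
    For VFAT there is a mount option aimed at more or less exactly this; a sketch (the device and mount point are placeholders):

        # Sketch: the "flush" option pushes writes out as files are closed, so an
        # unannounced pull is far less likely to lose data.
        mount -t vfat -o flush /dev/sdb1 /mnt/usbkey
        # Or as an /etc/fstab line so it mounts the same way every time:
        # /dev/sdb1   /mnt/usbkey   vfat   flush,noauto,user   0  0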

    Read the article

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Hi, Recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts. I suspect that it is in fact only 32-bit SQL, and I'd like to verify this. Here are some details: The OS is definitely 64-bit. xp_msver shows Platform as NT INTEL X86. SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)... However, sqlservr.exe is not shown with '*32' in Task Manager; does anyone know why that is, if it is in fact 32-bit as claimed? Despite this, it does seem to be running out of the x86 Program Files folder. If I do the same checks on a confirmed 64-bit installation, they give back the expected 64-bit readings, which seems to confirm that the server in question is only running 32-bit. Now, that being the case, the question arises of how much memory this '32-bit' install can use. Task Manager reports about 3.5 GB memory usage for sqlservr.exe (the server has 16 GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64-bit) if SQL is simply using a 32-bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64-bit in order to fully utilise the hardware platform; however, it is currently heavily in production, so this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
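
    If reinstalling as 64-bit has to wait, enabling AWE on the 32-bit instance is scriptable; a sketch via sqlcmd (the 12288 MB cap is an assumption to adjust, the service account needs the "Lock pages in memory" right, and the instance needs a restart afterwards):

        # Sketch: let the 32-bit instance address memory beyond its usual limit.
        sqlcmd -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -Q "EXEC sp_configure 'awe enabled', 1; RECONFIGURE;"
        sqlcmd -Q "EXEC sp_configure 'max server memory', 12288; RECONFIGURE;"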

    Read the article

  • I deployed Flash Player via a Software Installation policy. How to upgrade?

    - by eleven81
    I have a Windows Server 2008 machine as my DC. Earlier this year I created a Software Installation GPO to deploy the Adobe Flash Player plugin MSI. I assigned the policy to the computers; about half run Windows XP x86 and the other half Windows 7 x64. That all works like clockwork. When I created the Software Installation policy, I disabled the Flash Player plugin's automatic update feature by editing the MSI in Orca. I did this because I wanted all of my machines to run the exact same version of the plugin. Now, some time has passed and a newer version of the Flash Player plugin has been released. It is time for me to push out the updated version of the plugin. I already have the new MSI, but I am lost on what to do next. I see the Upgrades tab in the Software Installation GPO, but everything there reads like it would be used for add-ons to a larger master program and not for updates that are released over time. I have read that it is best to create a new Software Installation policy with the new MSI, revoke the old GPO, and assign the new GPO. I feel as though, over time, I will wind up with more revoked policies than active ones. I have also read that some people have had success by replacing the old MSI with the new MSI and simply telling the GPO to redeploy. This seems like a backdoor method that will only get me into trouble. In short, what is the correct, best-practice, or preferred way to roll out the new version via Group Policy?

    Read the article
