Search Results

Search found 4971 results on 199 pages for 'mu mind'.

Page 134/199 | < Previous Page | 130 131 132 133 134 135 136 137 138 139 140 141  | Next Page >

  • FreeBSD 8 and Samba 3.3: Samba seems to crash/not start?

    - by scraft3613
    I am both new to FreeBSD and Samba, and with that in mind ... I installed Samba 3.3 from Ports. I have this in my rc.conf:

        #samba
        nmbd_enable="YES"
        smbd_enable="YES"
        winbindd_enable="YES"

    I can see an active PID:

        prod1# cat /var/run/smbd.pid
        24426

    But it seems like smbd isn't running:

        prod1# ps -auwxx | egrep '[sn]mbd'
        root 24513 0.0 0.3 21108 4672 ?? Ss Sat02PM 0:00.71 /usr/local/sbin/nmbd -D -s /usr/local/etc/smb.conf

    If I restart Samba with /usr/local/etc/rc.d/samba restart, it runs:

        prod1# ps -auwxx | egrep '[sn]mbd'
        root 30188 0.0 0.3 21080 4700 ?? Ss 2:49PM 0:00.00 /usr/local/sbin/nmbd -D -s /usr/local/etc/smb.conf
        root 30196 0.0 0.6 35520 10952 ?? Ss 2:49PM 0:00.00 /usr/local/sbin/smbd -D -s /usr/local/etc/smb.conf
        root 30198 0.0 0.6 35520 10880 ?? S 2:49PM 0:00.00 /usr/local/sbin/smbd -D -s /usr/local/etc/smb.conf

    ... until I run a client command, at which point smbd is gone again:

        prod1# smbclient -L prod1
        Connection to prod1 failed (Error NT_STATUS_CONNECTION_REFUSED)
        prod1# ps -auwxx | egrep '[sn]mbd'
        root 30188 0.0 0.3 21080 4700 ?? Ss 2:49PM 0:00.00 /usr/local/sbin/nmbd -D -s /usr/local/etc/smb.conf

    What should I be checking to find out what's going on?
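    A hedged first diagnostic step (the log path and flags below are assumptions; check the "log file" setting in smb.conf for the real location):

        testparm -s /usr/local/etc/smb.conf                       # confirm the config parses cleanly
        tail -n 50 /var/log/samba/log.smbd                        # why smbd exited, if it logged anything
        /usr/local/sbin/smbd -i -d3 -s /usr/local/etc/smb.conf    # run in the foreground with extra debug output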

    Read the article

  • Why is dwm.exe using so much memory?

    - by Leonard Challis
    I've scoured the web, but I'm sick of reading "scan your computer for viruses" and "upgrade your RAM" in answers to questions similar to this one. I understand that dwm.exe (simply put) caches bitmaps for things like Aero Peek, but as far as I have read it shouldn't be using vast amounts of memory. My colleague and I both have 4GB of RAM, Core 2 Duo, blah, blah; essentially our machines are pretty capable. His dwm.exe is running at around 30MB; mine is currently running at about half a gig, though it fluctuates quite a lot. This is while running exactly the same applications (currently Zend Studio, Firefox (with Firemin for low memory usage), and Outlook). Every so often I get a notification asking whether I want to switch to Aero Basic because dwm is using too much memory, and sometimes it will just switch itself to Basic and tell me why. I know it's possible to stop the switching, but I want to know why it is using so much memory in the first place; otherwise I'm just papering over the cracks. One thing to add: this seems to have started after a robbery on Monday, in which two of my monitors were stolen and I had to temporarily use a couple of alternative monitors. I am now using brand-new monitors, but the problem is the same. All drivers are installed and seemingly working fine. Any ideas why the usage is so high? We are using Windows 7 Professional 64-bit.

    Read the article

  • Rate limiting bandwidth per IP

    - by Yohan
    First, I am not that good with computers; I even have trouble with a Windows PC. Right now I own a restaurant which happens to offer free internet. My ISP set up my connection using an Ubuntu 11.10 box. The IP address is 192.168.1.16 with netmask 255.255.0.0; DNS is 192.168.1.1 and the gateway is 192.168.1.1. My problem is that my customers complain all day about the slow network. When I receive that kind of complaint, the first thing that comes to mind is to scout the area, find out who the culprit is, and ask him not to waste our bandwidth. By now it is getting tiresome scouting people around, and I need to configure my Linux box to limit bandwidth. I don't care if their provider can't be faster; I just want to limit each person to 70kbit. Most annoying are the people who use FlashGet and torrents; they usually consume the most bandwidth. My question: how can I limit that? Please guide me in an easy way. I've spent a few days reading the tc documentation but don't understand a thing. I am using Ubuntu 11.10. Basically, I want every customer to get 70kbps each, no matter what.
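    A minimal per-IP shaping sketch with tc/HTB, assuming eth1 is the interface facing the customers and that they get addresses in 192.168.1.0/24 (both are assumptions; adjust to the real interface and subnet). It shapes traffic going out to each customer address to roughly 70 kbit/s:

        # one HTB class and one filter per customer address (hypothetical interface/subnet)
        tc qdisc add dev eth1 root handle 1: htb
        for i in $(seq 2 254); do
            minor=$(printf '%x' "$i")
            tc class add dev eth1 parent 1: classid 1:$minor htb rate 70kbit ceil 70kbit
            tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
                match ip dst 192.168.1.$i flowid 1:$minor
        done

    Traffic in the other direction would need the same kind of rules on the WAN-facing interface (or an ingress policer).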

    Read the article

  • How to get data out of a Maxtor Shared Storage II that fails to boot?

    - by Jonik
    I've got a Maxtor Shared Storage II (RAID1 mode) which has apparently developed a hardware failure: it fails to boot properly and is unreachable via the network. When powered on, it keeps making clunking/chirping disk noises and then sort of resets itself (with a flash of orange in the usually-green LEDs); it then repeats this as if stuck in a loop. In fact, even the power button does nothing now – the only way I can affect the device at all is to plug in or pull out the power cord! (To be clear, I've come to regard this piece of garbage (which cost about 460 €) as my worst tech purchase ever. Even before this failure I had encountered many annoyances with the drive: 1) the software to manage it is rather crappy; 2) it is way noisier than this type of device should be; 3) when your Mac comes out of sleep, Maxtor's "EasyManage" cannot re-mount the drive automatically.) Anyway, the question at hand is: how do I get my data out of it? As a very concrete first step, is there a way to open this thing without breaking the plastic casing into pieces? It is far from obvious to me how to get past this stage; it opens a little from one end but not from the other. If I somehow got the disks out, I could try mounting them on one of the Macs or Linux boxes I have available (although I don't know yet whether I'd need adapters for that). (NB: for the purposes of this question, never mind any warranty or replacement issues – that's secondary to recovering the data.)

    Read the article

  • Nginx rewrite for link shortener + Wordpress pretty URLs

    - by detusueno
    Okay, so I installed Nginx/PHP/MySQL/WordPress via an online walkthrough, and it had me enter these rewrites to enable WordPress pretty URLs:

        if (-f $request_filename) {
            break;
        }
        if (-d $request_filename) {
            break;
        }
        rewrite ^(.+)$ /index.php?q=$1 last;
        error_page 404 = //index.php?q=$uri;

    This is then included in the vhost for my domain. What I'm trying to do now is add some redirection/link-shortener rewrites that will play nicely with the setup I have in mind. I'd like to redirect "x.com/y" to "x.com/script.php?id=y" for all external links that I post. The WordPress setup right now has almost all internal links beginning with "news" (x.com/news/post-blah, x.com/news/category/1, etc.), BUT I also have a few root links that point to internal content (x.com/news, x.com/start). I'm guessing that's going to cause some conflicts. What's the best approach here? I've never worked with Nginx (or any rewrite rules), but maybe I can distinguish between "x.com/news" and "x.com/news/" so they play nicely together? I had a friend set up a working version of this in Apache, and it'd be nice if I could get it up on Nginx again.
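    A hedged sketch of one way to split the two cases, assuming the shortener tokens are single path segments and the handful of root WordPress slugs (news, start, ...) are listed explicitly; all names here are assumptions, and fitting this around the existing if-based rules is left open:

        # known root slugs go to WordPress, bare single-segment tokens go to the shortener
        location ~ ^/(news|start)(/|$) {
            rewrite ^(.+)$ /index.php?q=$1 last;
        }
        location ~ ^/([A-Za-z0-9]+)$ {
            rewrite ^/([A-Za-z0-9]+)$ /script.php?id=$1 last;
        }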

    Read the article

  • I can't get my PC to start up the normal way.

    - by ssice
    I couldn't write a more accurate title: I am simply unable to start the computer by pressing the power-on button. I checked the power supply and it seems to give good voltage values on every pin. And this is not a BIOS malfunction from bad overclocking or anything else that may come to mind, and I will tell you why. Any EPS (or ATX-based) power supply can be powered on by the motherboard by jumping the 13th pin of the 24-pin ATX connector to COM/GND. I did exactly that, after pushing the power-on button (which produced no visible response), and, pwhaa! The machine turned on. I was able to read (and even write, if I wanted) BIOS values and then start any installed OS. The machine starts, so it's not any kind of misconfiguration; it seems hardware related. I am able to power the machine on this way only if I have already pushed the power-on button. Pushing the button alone, without jumping the 13th pin to ground for a second, does not power the machine; and jumping the pin without pushing the power-on button tells the motherboard nothing, so the computer does not start up either. It's as if the logic that connects the power button to the 13th-pin-to-GND derivation cannot be activated. What can the issue be? How can I solve it? My configuration is as follows:

        CPU: AMD Phenom 9850 X4 Black Edition
        MB: ASUS Formula II AM2
        RAM: 2x2GB Corsair Dominator 5-5-5-15 2T @ 1066MHz DDR2 (also tested with only 1 module)
        GPU: 2x XFX nVIDIA GeForce 9600 GT XxX Alpha Dog Edition @ Core: 540MHz [SLI]
        Power supply: Xilence 700W (ATX 12V 2.3 / EPS 12V 2.92 compatible)

    PS: I know the machine is about 2 years old. I hardly use it now, but my parents do.

    Read the article

  • Server configuration for our website [duplicate]

    - by Varun Varunesh
    This question already has an answer here: Can you help me with my capacity planning? (2 answers)

    We are a start-up, and six months ago we launched the beta version of our website. Now we are building the website and web services for the final product. The site will be based on PHP, Python and a MySQL database, on a WAMP server. Right now, in the beta version, we are using an Azure VM for hosting, with 786MB of RAM and a shared CPU. We average about 200 users per day on the website. Now we are trying to grow from 200 to 1,500 daily users, and I think our server should be able to handle at least 100 concurrent users. We have also developed web services for our mobile apps, which will add further load on the server. So here are the questions that bring me here: I am pretty confused about whether to go with shared hosting or VM-based hosting. If a VM, then what configuration will best fit the requirements described above? Our current VM is a Windows-based server and it is very simple to manage, so, other than the cost factor, why should I go for a Linux-based server? What other factors should I keep in mind while choosing a server for our requirements?

    Read the article

  • Good process/software for organizing photos past/present

    - by Matthew
    So I have tons of photos, taken all the time. I have a lot from years past that I never went through (meaning deleting duplicates, etc.). I've got a new PC with Windows 7, and I'm wondering what a good process is for organizing those photos. They're in folders that have no real meaning (people used to put them in a folder wherever: the desktop or somewhere else, not just the My Pictures folder). I'm going to keep all pictures in the "My Pictures" folder from now on. I've used Picasa from Google, and it works great. Is this the recommended free software for this? What process should I use to move the old pictures over into new "organized" folders? Lately in Picasa, when I import off my camera card, I just select an option that names the folder after the date the photos were taken. Is this advisable? Just give me ideas on how to stay organized with photos. Should I tag them as well? Should I rename the files? Keep in mind I have over 16,000 photos to go through, so it can't be anything too thorough.

    Read the article

  • Technology mash: is this possible?

    - by Jon Story
    I'm in the process of setting up my own DNS + hosting on a couple of VPSes and my home machines, mostly for academic/learning purposes, but also for convenient access to my files, hosting my personal websites, private git repositories, etc. I've got a main web server with DNS, and a slave DNS server. I've also got a couple of machines at home doing file hosting, video streaming and all that fun stuff. I'm intending to use my VPSes to provide myself with a dynamic DNS system so that I can point mydomain.com at my DNS servers, with home.mydomain.com going into my home network via a Raspberry Pi. HOWEVER... I don't have access to the network infrastructure at home (rented accommodation with managed internet), so I can't forward ports on the router to my own machines. As such, I'm wondering whether it's possible to route all the traffic via an SSH/HTTP tunnel through one of the VPSes. My plan is to have the Raspberry Pi provide a VPN into my home network: the Raspberry Pi uses SSH to connect to the VPS, and the VPS forwards any traffic for home.mydomain.com through the tunnel to the Raspberry Pi. Is this even possible, and how do I go about it? I don't mind getting my hands dirty with coding and low-level tools; I'm just not sure where to start or what the best way to go about it is.
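    A hedged sketch of the usual approach (a reverse SSH tunnel), assuming the Pi can reach the VPS and the home web server listens on port 80; hostnames, users and ports here are placeholders:

        # run on the Raspberry Pi: expose the home web server on the VPS's port 8080
        ssh -N -R 8080:localhost:80 tunneluser@vps.mydomain.com
        # on the VPS, proxy home.mydomain.com to localhost:8080 (e.g. with nginx or Apache),
        # or enable GatewayPorts in the VPS's sshd_config to publish the port directly

    Something like autossh on the Pi can keep the tunnel alive across connection drops.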

    Read the article

  • iptables blocking access to most hosts, but some accesses are being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following:

        IPS="list of IPs"
        iptables --new-chain webtest
        # Accept all established connections
        iptables -A INPUT --protocol tcp --dport 80 --jump webtest
        iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
        iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT
        for ip in $IPS; do
            iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
        done
        iptables -A webtest --jump DROP

    However, looking at my Apache logs I notice various entries in access_log, e.g.:

        221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
        207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"

    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through); I suppose they must be getting through via the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match-state rule is there. So, two questions: is there a better way to set up iptables to restrict access to specified hosts? And how exactly are these three examples slipping through?
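    A hedged way to see which rule is actually matching (these commands only read counters; they change nothing):

        iptables -L INPUT -n -v --line-numbers      # per-rule packet counters
        iptables -L webtest -n -v --line-numbers
        ip6tables -L -n -v                          # if Apache also listens on IPv6, traffic bypasses iptables entirely

    If the counters on the webtest DROP rule stay at zero while requests keep arriving, the traffic is reaching Apache by some other path (IPv6, another interface, or rules that were flushed at reboot).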

    Read the article

  • Things to check for an internet-facing email server.

    - by Shtééf
    I'm faced with the task of setting up a public-internet-facing email server that will relay mail for all of the other servers in our network. While the software itself is set up in a few keystrokes, what little experience I have with managing an email server has taught me that there are tons of awkward filtering techniques employed by other email systems, systems my own server will inevitably interact with at some point. Hence, my questions: What things should be kept in mind and double-checked when setting up an email server? What resources are available for checking whether my email server is set up correctly? I'm specifically NOT looking for instructions for any given mail server, such as Exchange or Postfix. But it's okay to say: "you should have X and Y in your set-up, because when talking to server software Z, it typically tries to weed out open relays by checking for these." Some things I've discovered myself:
    - Make sure forward and reverse DNS are set up. Mail servers tend to do a reverse lookup of the peer IP address when receiving; matching the reverse lookup against a follow-up forward lookup is probably used to weed out open relays running through malware on home networks.
    - Make sure the user in the From address exists. The From address is easily spoofed; a receiving mail server may contact the mail server for the From domain and check whether the From user actually exists.
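    A hedged sketch of the DNS checks described above, using placeholder names and addresses:

        dig +short -x 203.0.113.25          # PTR for the server's public IP (placeholder address)
        dig +short mail.example.com A       # forward record should point back at that IP
        dig +short example.com MX
        dig +short example.com TXT          # SPF record, if one is published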

    Read the article

  • Removing extended partition without deleting logical in it

    - by HisDudeness
    I'm running a Linux-based laptop, and in order to multi-boot several distros on it, I created an extended partition containing a bunch of logical ones with GParted. Now, after quite a long time with this setup, I've changed my mind because of the resulting lack of space on my data partition. I now want to keep one distro alone, as is normal, and perhaps keep some other operating systems on external drives to plug in and use when I want. Obviously, the partition I want to keep (and enlarge a little, too) is just a logical partition inside the extended one I want to remove. As far as the partition count goes I'm fine: I currently have the big distro-dedicated extended partition, the swap partition and the data partition, so there's room for another primary before I delete the extended one. But I don't know how to delete the extended partition without touching the logical inside it; I don't want to reinstall the system and lose all my changes and settings, and I don't want to keep an extended partition around for a single logical. How can I do this? Do I have to create a new primary, copy the logical's contents into it and then delete everything else? Will the system boot and keep exactly all the features it has now? Or is there a way to convert an extended partition into a primary once it contains just one logical? Or can I move a logical directly out of an extended, turning it into a primary? Or, again, am I screwed?

    Read the article

  • How to prevent Gnome-shell's Alt+Tab from grouping windows from similar apps?

    - by wleoncio
    I love pretty much everything about how GNOME Shell handles app switching through Alt+Tab. My one gripe with it, though, is how it forces the user to use Alt+` to switch between windows of the same app. This is very annoying for me, because now I have to keep in mind whether the last window I was using belonged to the same app as the current window or not. Definitely a nuisance for power users who think in terms of "windows I'm working with" instead of "applications I'm working on". I've tried the AlternateTab extension (https://extensions.gnome.org/extension/15/alternatetab/), but it looks way too ugly to me. Not to mention that, in the end, all I want is to remap Alt+(key above Tab) to Alt+Tab in this application. I guess one option would be to just tweak gnome-shell itself. My guess is that I should tinker with the altTab.js file in /usr/share/gnome-shell/js/ui/, but the file is too long and overwhelming for someone like me who doesn't know JavaScript. Does anyone know how I can make GNOME Shell stop grouping windows by application?
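    A hedged sketch using gsettings, which on many GNOME Shell versions makes Alt+Tab cycle individual windows without an extension (key names and availability may differ between releases):

        # make Alt+Tab switch individual windows; move application grouping to Super+Tab
        gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Alt>Tab']"
        gsettings set org.gnome.desktop.wm.keybindings switch-applications "['<Super>Tab']"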

    Read the article

  • IPTABLE & IP-routed netwok solution for HOST net and VM's subnet

    - by Daniel
    I've got a Proxmox VE 2.1 KVM node on Debian and a bunch of guest VMs. This is what my networking looks like:

        # network interface settings
        auto lo
        iface lo inet loopback

        # device: eth0
        auto eth0
        iface eth0 inet static
            address 175.219.59.209
            gateway 175.219.59.193
            netmask 255.255.255.224
            post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

    And I've got two working subnet solutions. The first:

        auto vmbr0
        iface vmbr0 inet static
            address 10.10.0.1
            netmask 255.255.0.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up ip route add 10.10.0.1/24 dev vmbr0

    This way I can reach the internet, resolve outside hosts, and update and download everything I need, but I can't reach one guest VM from any other VM inside my network. The second solution allows me to communicate between VMs:

        auto vmbr1
        iface vmbr1 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE

    I can even NAT internal addresses:

        -t nat -I PREROUTING -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345

    My inexperienced mind is ready to give each VM two network adapters, one for the first solution and another for the second (with slightly different addresses), but I'm pretty sure that's a dumb way to solve the problem, and that everything can be handled with iptables/ip route rules that I haven't managed to create. I've tried a dozen "wizard manuals" and how-tos to mix both solutions, but without success. Looking for advice (and good reading links for networking beginners).
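    A hedged sketch of a single-bridge variant, assuming the main difference from the second setup is that the MASQUERADE rule should hang off the physical uplink rather than the internal bridge; interface names follow the question:

        auto vmbr1
        iface vmbr1 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0
            post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
            # NAT the VM subnet out through the uplink, not the bridge itself
            post-up   iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE
            post-down iptables -t nat -D POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE

    With every VM attached to vmbr1, guests can reach each other over the bridge, and the forwarding plus NAT gives them internet access through eth0.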

    Read the article

  • Installing Linux on a Windows 8.1 laptop

    - by nicoX
    I would like to clean-install a Linux distribution such as Ubuntu on my laptop, which currently runs Windows 8.1. I have two options in mind: a clean install, or dual boot. My technical question is this: the laptop has an 8GB SSD, which it uses to boot Windows, and a 500GB drive for storage. I wonder what that 8GB SSD actually stores; it can't hold the whole Windows installation, as that would be much more than 8GB. Also, if I did a clean install of Ubuntu, could I use the 8GB SSD to make Ubuntu boot up quicker, and how would I install it that way? Option two: if I want to dual boot, how do I proceed so that the SSD boots both systems? I also want to ask about the difference between Legacy and UEFI. Windows runs with UEFI, so when I'm installing Linux, should I use Legacy mode? And if I dual boot, which option do I choose?
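    A hedged, minimal check from an Ubuntu live session to see which firmware mode it booted in and how the two drives are laid out (on many laptops a small SSD like this is a cache/hibernation drive rather than a full Windows installation, but that is an assumption that depends on the model):

        # run inside the Ubuntu live environment
        [ -d /sys/firmware/efi ] && echo "booted in UEFI mode" || echo "booted in legacy/BIOS mode"
        lsblk -o NAME,SIZE,MODEL,MOUNTPOINT    # shows what is on the 8GB SSD vs the 500GB disk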

    Read the article

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to back up files from my Linux box to my Windows Server 2008 machine as a push, so that when I delete them from the Linux box they remain on the Windows Server. I've found lots of similar sources, but most results covered Windows to Linux. I managed to find slightly more similar cases like "Using rsync and cygwin to Sync Files from a Linux Server to a Windows Notebook PC" and "rsync from Windows PC to remote Linux server", with the most similar being a backup from Linux to Windows Server, but done as a pull from the Windows Server. Initially I used Unison, because I thought the 2-way capability would come in handy and I would just have to set some configuration to make it 1-way. Unfortunately, I couldn't find the right configuration, and only managed to synchronize using the command:

        unison "profile" -ui text -auto -silent

    When I deleted the files on my Linux box, the files on the server got deleted too, which of course isn't what I want. When I looked for Unison options, I only discovered the -force option, which didn't help, since what I wanted was an incremental update to the server. I found out I could achieve this with rsync and the -a (archive) option, which would keep adding files even if I deleted them from my Linux box. I installed Cygwin on my Windows Server and configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box:

        rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files

    (a: archive, v: verbose, r: recurse into subdirectories, z: compress, n: dry run) but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
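    A hedged sketch of things to verify before blaming the command, plus the push itself (user, host and paths are placeholders; note that the -n flag above is a dry run, so nothing is transferred while it is present):

        # from the Linux box: confirm the Cygwin sshd actually answers
        ssh backupuser@winserver 'echo ok'
        # then push without --delete so files removed locally stay on the server; drop -n once it looks right
        rsync -avz /folder/to/be/backed/up/ backupuser@winserver:/cygdrive/c/backups/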

    Read the article

  • mkvmerge: How to merge two videos, one without audio?

    - by ProGNOMmers
    I have two videos, one without audio (the second). Trying to merge them, I get this error:

        $ mkvmerge concat1.webm +concat2.webm -o output.webm
        mkvmerge v5.8.0 ('No Sleep / Pillow') built on Oct 19 2012 13:07:37
        Automatically enabling WebM compliance mode due to output file name extension.
        'concat1.webm': Using the demultiplexer for the format 'Matroska'.
        'concat2.webm': Using the demultiplexer for the format 'Matroska'.
        'concat1.webm' track 0: Using the output module for the format 'VP8'.
        'concat2.webm' track 0: Using the output module for the format 'VP8'.
        'concat2.webm' track 1: Using the output module for the format 'Vorbis'.
        No append mapping was given for the file no. 1 ('concat2.webm'). A default mapping of 1:0:0:0,1:1:0:1 will be used instead. Please keep that in mind if mkvmerge aborts with an error message regarding invalid '--append-to' options.
        Error: The file no. 0 ('concat1.webm') does not contain a track with the ID 1, or that track is not to be copied. Therefore no track can be appended to it. The argument for '--append-to' was invalid.

    Is there a way to tell mkvmerge to make the audio track longer? Thank you!
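    A hedged workaround sketch: give the audio-less file a silent Vorbis track first (here with ffmpeg and its anullsrc generator), then append as before. File names follow the question, but which file actually lacks audio should be checked against mkvmerge's track listing:

        # add silent stereo audio to the video-only file without re-encoding the video
        ffmpeg -i concat1.webm -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
               -shortest -c:v copy -c:a libvorbis concat1-silent.webm
        mkvmerge concat1-silent.webm +concat2.webm -o output.webm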

    Read the article

  • USB transfer speed for Windows 7 is incredibly slow to my external drive

    - by Wolfram
    I'm running Windows 7 Pro and am trying to back up 116 GB of data to my external 1 TB hard drive. My laptop has only USB 2.0 ports; my hard drive is USB 3.0 capable, as is the cable I'm using. I understand that the transfer speed should still be in line with USB 2.0 speeds. However, right now I'm getting 135 KB/s and it's been gradually dropping. For an earlier transfer, I got between 4 MB/s and 8 MB/s. So I'm really just wondering what's going on with my transfer rate and what I can do to improve it. I'm currently about 35 GB into the 116 GB transfer. Another strange thing is that the window showing the transfer status decided to max out at 835 MB, and therefore shows items remaining as 0. However, it is still performing the rest of the transfer, and I can see it still cycling through files. Now that I think about it, it seems plausible that the speed shown in the window is calculated simply as total data transferred / time elapsed. Since the data "counter" displayed in the window maxed out at 835 MB, the speed shown will keep decreasing as time passes, because the 'total data transferred' value isn't being incremented. So with that in mind, I suppose I don't actually know the rate at which data is currently being transferred. Nonetheless, my best speed earlier was only around 8 MB/s. Shouldn't USB 2.0 deliver closer to 35 MB/s? Also, if someone can tell me why the transfer status window is displaying incorrect information and how to fix it, that would also be appreciated.

    Read the article

  • Remote Desktop fails after VPN connection

    - by Samet Sorgut
    The local computer (comp 1) is connected to a remote computer (comp 2) with Remote Desktop. On the remote computer (comp 2), I try to establish a VPN connection to a different remote computer (comp 3). As soon as I try to establish the VPN connection from comp 2 to comp 3, Remote Desktop freezes on comp 1, and it is no longer possible to connect to comp 2 via Remote Desktop. What can be done to connect to comp 2 after it establishes the VPN connection? The only thing that comes to mind is to install a second NIC and configure Remote Desktop to accept connections on that NIC while the VPN works over the other one... What do you suggest?

    EDIT: I want to use the internet connection of the VPN, so all traffic should go over the VPN, but RDP should keep working.

        My IP: 100.0.0.1
        The IP I connect to via RDP: 200.0.0.20 (mask: 255.255.255.192, gateway: 200.0.0.193)
        Where the 200.0.0.1 connects to the VPN, the IP of the VPN is: 65.254.61.250

    Will routing like this help (the command is issued on 200.0.0.20, the RDP machine)?

        route ADD 65.254.61.250 MASK 255.255.255.192 200.0.0.193

    It couldn't be added; it gives the error "The route addition failed: The parameter is incorrect." I tried this before connecting to the VPN.
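    A hedged sketch of the usual pattern: pin host routes for the VPN endpoint and for the RDP client outside the tunnel. Host routes use a /32 mask (255.255.255.255), which is likely why the command above is rejected with its /26 mask. Addresses follow the question:

        rem run on 200.0.0.20 before bringing the VPN up
        route ADD 65.254.61.250 MASK 255.255.255.255 200.0.0.193
        route ADD 100.0.0.1 MASK 255.255.255.255 200.0.0.193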

    Read the article

  • Array on servers which receive several hundred GB of data a day

    - by Matthew
    This is hopefully a simple question. Right now we are deploying servers which will serve as data warehouses. I know that with RAID 5 the best practice is 6 disks per RAID 5 array. However, our plan is to use RAID 10 (both for performance and safety). We have a total of 14 disks (16 actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: several RAID 1+0 arrays, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has any considerations I haven't thought of. Please note: this system was designed to use RAID 1+0, so losing half of the raw storage capacity is not an issue; sorry I hadn't mentioned that initially. The concern is whether we want one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0 arrays that we then stripe across using LVM. I know the best practice for higher RAID levels is to never use more than 6 disks in an array.
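    A hedged sketch of the two layouts with Linux mdadm and LVM, purely to make the comparison concrete (device names and the disk split are placeholders; a hardware controller would express the same thing in its own tooling):

        # option 1: one 14-disk RAID 10
        mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[b-o]

        # option 2: two smaller RAID 10 arrays striped together with LVM
        # (12 of the 14 disks here; adjust the split, keep spares, etc.)
        mdadm --create /dev/md1 --level=10 --raid-devices=6 /dev/sd[b-g]
        mdadm --create /dev/md2 --level=10 --raid-devices=6 /dev/sd[h-m]
        pvcreate /dev/md1 /dev/md2
        vgcreate vg_data /dev/md1 /dev/md2
        lvcreate -l 100%FREE -i 2 -n lv_data vg_data    # -i 2 stripes the LV across both arrays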

    Read the article

  • Regarding partitions for dual-booting Ubuntu with pre-existing Windows 7

    - by Shasteriskt
    I have zero actual experience with configuring disk partitions, and the stuff I have read for the past few hours has been confusing me a bit, so please bear with me. First of all, I'd like to explain what I'm trying to achieve:

        Windows 7 with:
          C:\ Windows 7 (pre-existing installation)
          D:\ Data (already exists and has files on it)
        Ubuntu 11 (does not exist yet, but I already have a LiveCD in hand):
          root directory for Ubuntu on its own partition
          \home on its own partition, I plan
          \swap on its own partition with around 8GB

    Here is the current situation: I have a single 500 GB hard disk with Windows 7 x64 installed, and the current partition scheme is as follows:

        System Reserved: 100 MB (Primary, Active)
        C: 100 GB - where Windows 7 is installed (Primary)
        D: 365 GB - where my files are located, LOTS of free space (Primary)

    Now, I would like to shrink my D: drive and create around 40 GB of unallocated disk space for the Ubuntu installation, but here is what's confusing me: I'm thinking I would create an extended partition and subdivide it into 3 logical partitions for the Ubuntu setup I had in mind. (If you think my setup is a bad idea, please let me know why; I also hope you can suggest a better one.) I am aware that I can have at most 4 primary partitions, or 3 primary partitions plus 1 extended partition. Now, does the System Reserved portion count as one primary partition? I'm really new to these things and it is totally unclear to me. In shrinking my D: drive using Windows 7's Disk Management tool, I would get unallocated free space, but I don't know how to make an extended partition from it; it seems I can only create a primary partition from it, not an extended one. How do I go about it? (I'd also like to note, if it is of any importance, that I am trying to avoid the option to install Ubuntu alongside Windows, and would much rather use the custom install where I can specify which drives I wish to use and so on. Somehow I feel it's safer that way.)

    Read the article

  • Openfire: Granular alerts

    - by R.S.
    Our organization has had an Openfire server up and running for about a year now. So far we have used it for messaging within the IT department and for alerts to all users. We hit a snag this week when one system went down and several notifications were sent out to inform users of progress. Some of the users were radiologists who do not use the particular system in question, and they found the messages more of an annoyance than informative. Since then I have been tasked with finding a more granular system for alerts. I am confident that Openfire can handle this, and I have just about settled on a way of getting it to work. My idea is to create a half dozen or so generic users, for example: Staff, Doctor, Assistant and Supervisor. Using Spark as our messenger has worked great so far, so I would like to stick with that if possible. With that in mind, under Spark's advanced login features the resource name can be changed to something unique, so several people can log in under the same account; however, when a message is sent to one of these accounts, delivery is inconsistent. Currently I have 4 people logged in under the Assistant user and it seems only 1 of them receives the messages. Is this scenario even possible? I am avoiding working with groups in Openfire because that function is atrocious. I could possibly integrate the system with our Active Directory, but I don't think that will get us to a workable solution any quicker or more efficiently.

    Read the article

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind, correct me if I'm wrong, a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage-collection optimization. Also, there would be more time before TRIM became necessary. I see that Wikipedia remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so it seems throughput also increases significantly. Also, many SSDs contain internal caches of some sort, and presumably those caches are larger for correspondingly large SSDs. But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high-quality chips, controllers, etc.? Are there any resources (webpages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size? I realize this does not really sound like a programming question, but it is relevant for what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.

    Read the article

  • Amazon EC2: Instances, IPs and a wordpress blog (LAMP)

    - by JustinXXVII
    I had a link to my blog posted on Reddit yesterday and MySQL crashed on my EC2 Micro instance. I know I didn't have that many visitors, because I used a marketing link that tracks hits. The link got 167 hits over the course of the last 18 hours, and MySQL crashed twice. Anyway, 167 visits is not a lot, so I've done some short-term optimizations like restricting the number of Apache threads to limit the MySQL calls. I also set up WP Super Cache to serve static content. Soon I'm going to offload all of my images to S3 or CloudFront. So this leads me to my question: if this doesn't seem to help, and if I have another traffic "spike", how do AMIs work when you have a MySQL database? I think I understand that if you have more than one instance and assign the same Elastic IP to both of them, the incoming traffic gets distributed between them. But what happens when the MySQL database gets updated on one of the instances? I just need to wrap my mind around what happens when I create an AMI and then launch a new instance to help with traffic. Thanks for your suggestions.

    Read the article

  • How do you set max execution time of PHP's CLI component?

    - by cwd
    How do you set the max execution time of PHP's CLI component? I have a CLI script that has gone into an infinite loop and I'm not sure how to kill it without restarting. I launched it with Quicksilver, so I can't press Ctrl+C at the command line. I tried running ps -A (show all processes), but php is not showing up in that list, so perhaps it has timed out on its own; still, how do you manually set the time limit? I tried to find information about where I should set the max_execution_time setting. I'm used to setting this for the version of PHP that runs with Apache, but I have no idea where to set it for the version of PHP that lives in /usr/bin. I did see the following quote, which does seem to be accurate (see screenshot below), but having an unlimited execution time doesn't seem like a good idea:

        Keep in mind that for CLI SAPI max_execution_time is hardcoded to 0. So it seems to be changed by ini_set or set_time_limit but it isn't, actually. The only references I've found to this strange decision are deep in the bugtracker (http://bugs.php.net/37306) and in php.ini (comments for the 'max_execution_time' directive).
        (via http://php.net/manual/en/function.set-time-limit.php)

    ini_set('max_execution_time') has no effect. I also tried the same thing and got the same result with set_time_limit(7).
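    A hedged sketch of two ways to bound a runaway CLI script from outside PHP, assuming a typical macOS/Linux shell (Quicksilver suggests macOS; paths and the PID are placeholders):

        # find and kill the running script; it shows up under its full command line
        ps aux | grep '[p]hp'
        kill <pid>                                  # placeholder PID from the ps output

        # for future runs: enforce a wall-clock limit from the outside
        timeout 300 php /path/to/script.php         # GNU coreutils timeout; may need installing on macOS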

    Read the article

< Previous Page | 130 131 132 133 134 135 136 137 138 139 140 141  | Next Page >