Search Results

Search found 7166 results on 287 pages for 'soa rest osb'.

Page 217/287

  • Possible for linux bridge to intercept traffic?

    - by A G
     I have a Linux machine set up as a bridge between a client and a server:

         brctl addbr br0
         brctl addif br0 eth1
         brctl addif br0 eth2
         ifconfig eth1 0.0.0.0
         ifconfig eth2 0.0.0.0
         ip link set br0 up

     I also have an application listening on port 8080 of this machine. Is it possible to have traffic destined for port 80 passed to my application? I have done some research and it looks like it could be done using ebtables and iptables. Here is the rest of my setup:

         # set ebtables to pass this traffic up to IP for processing;
         # DROP on the broute table should do this
         ebtables -t broute -A BROUTING -p ipv4 --ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP

         # set iptables to forward this traffic to my app listening on port 8080
         iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 8080 --tproxy-mark 1/1
         iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1/1

         # once the flows are marked, have them delivered locally via the loopback interface
         ip rule add fwmark 1/1 table 1
         ip route add local 0.0.0.0/0 dev lo table 1

         # enable IP packet forwarding
         echo 1 > /proc/sys/net/ipv4/ip_forward

     However, nothing is coming into my application. Am I missing anything? My understanding is that the DROP target on the broute BROUTING chain will push the frame up to be processed by iptables. Secondly, are there any other alternatives I should investigate? Edit: iptables sees the packet in the nat PREROUTING chain, but it looks like it is dropped after that; the INPUT chain (in either mangle or filter) never sees it.
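
     Two gaps commonly bite in exactly this setup. First, TPROXY only delivers to a socket opened with the IP_TRANSPARENT option (which requires CAP_NET_ADMIN); if the listener on 8080 is an ordinary socket, the packets are silently dropped even though the mangle rules match. Second, the kernel's tproxy documentation recommends a socket-match "DIVERT" chain so packets belonging to an already-established local session skip the TPROXY rule. A hedged sketch, adapted from Documentation/networking/tproxy.txt:

         # Short-circuit packets that already belong to a local socket:
         iptables -t mangle -N DIVERT
         iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
         iptables -t mangle -A DIVERT -j MARK --set-mark 1
         iptables -t mangle -A DIVERT -j ACCEPT

         # TPROXY then only has to catch new port-80 connections:
         iptables -t mangle -A PREROUTING -p tcp --dport 80 \
             -j TPROXY --on-port 8080 --tproxy-mark 1/1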

    Read the article

  • How do you update without cutting off users?

    - by Griffin
     I searched around and I was surprised that I couldn't find an answer to this question. My assumption is that you have multiple servers. Normally they will each be doing their specific task (for the rest of this I will assume a simple website). Now let's say servers A and B need updates. Do you update server A while server B keeps serving the webpage, and then once server A is okay you update server B? This seems like it would work at small scale but horrible at large scale, because you'd need twice the capacity you normally have. When dealing with a large number of servers, do you update small sections at a time? The problem I see with that is compatibility: if server A can no longer work alongside servers B, C, D, E, or F, that's not too bad at first, but as you keep updating you slowly lose that small safety margin. What is the proper way to deal with updates like this?
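
     The usual answer is a rolling update behind a load balancer: you only need N+1 capacity rather than 2N, because a single node is drained and updated at a time. A hypothetical sketch (the lb-disable/lb-enable helpers stand in for whatever your balancer's admin interface actually provides):

         #!/bin/sh
         # Roll an update through the pool one node at a time.
         for host in web01 web02 web03 web04; do
             lb-disable "$host"     # hypothetical: stop sending new sessions here
             sleep 300              # let existing sessions drain
             ssh "$host" '/usr/local/bin/deploy-update.sh'   # hypothetical deploy script
             lb-enable "$host"      # node rejoins before the next one leaves
         done

     The compatibility worry is real, which is why rolling updates are paired with backward-compatible changes: version N+1 must be able to serve alongside version N for the duration of the roll.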

    Read the article

  • Outlook 2010 Crashing Unpredictably

    - by cbkadel
     Very often when I open Outlook 2010 and start doing things in it, it will hang and become non-responsive. I have tried letting it finish, but it never comes back (up to 20 minutes of waiting). I generally have to restart Outlook and try again. Usually after about an hour of this, Outlook somehow snaps out of it and works for the rest of the day. It's generally in the morning (though I doubt that's the key variable). Generally the emails that cause problems are HTML/formatted, but not always. What I've done so far to troubleshoot:

     Installed the latest Outlook hotfix (I think Dec 14, 2010)
     Started Outlook in safe mode

     Neither of those steps seems to make a difference. Usually, after about 10-15 restarts of Outlook on any given day, it starts working thereafter. My next step is to uninstall/reinstall Office 2010, but I'm hoping someone has seen this and knows what to do about it. My configuration is like this:

     Microsoft Online Services (using Microsoft's Sign In App), connecting to Exchange
     Two other Exchange accounts in this profile (new feature in 2010), connected through Outlook Anywhere
     Live Meeting conferencing add-in
     I've disabled the People tab/add-in
     I've disabled the "Send to Bluetooth" add-in

     Not sure what else to do?

    Read the article

  • Cannot ssh into server

    - by revolver
     I am trying to SSH into a Linux machine running Ubuntu, but the interactive shell gets stuck somewhere and I can't key in anything. I am on Mac OS X Lion. This only happens when I am trying to access via an external IP; SSH over the local LAN works perfectly.

         macbook:~ user$ ssh -v -v user@serverip
         // I skipped the rest of the log, but I can paste it here again if needed.
         Authenticated to serverip
         debug1: channel 0: new [client-session]
         debug2: channel 0: send open
         debug1: Requesting no-more-sessions@openssh.com
         debug1: Entering interactive session.
         debug2: callback start
         debug2: client_session2_setup: id 0
         debug2: channel 0: request pty-req confirm 1
         debug1: Sending environment.
         debug1: Sending env LC_CTYPE = UTF-8
         debug2: channel 0: request env confirm 0
         debug2: channel 0: request shell confirm 1
         debug2: fd 3 setting TCP_NODELAY
         debug2: callback done
         debug2: channel 0: open confirm rwindow 0 rmax 32768

     My terminal just hangs after this, and I can't key in anything. I checked /var/log/auth on the server and saw that a session is being created and I am already logged in, but I don't see any response on my client machine. I googled around and a lot of the solutions had to do with the Broadcom wireless driver, but I am not even using one, so I am pretty clueless here. To give you more information, the Linux machine is also running a web server, and I have no problem accessing the web server. Thanks. Any help is appreciated.
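
     A session that authenticates and then freezes exactly at "Entering interactive session", but only over the external path, is the classic signature of a path-MTU problem: the small handshake packets get through, the first full-sized packet does not. A hedged way to test and work around it:

         # From the Mac, probe the largest unfragmented payload (macOS ping flags:
         # -D sets the don't-fragment bit, -s sets the payload size):
         ping -D -s 1400 serverip

         # If large probes die somewhere on the path, clamp the TCP MSS on the
         # Linux gateway in front of the server:
         iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
             -j TCPMSS --clamp-mss-to-pmtu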

    Read the article

  • Why is REMOTE_ADDR only sometimes available as an Apache environment variable?

    - by Xiong Chiamiov
     To avoid having to parse X-Forwarded-For in Varnish, I'm trying to just set a header on the SSL terminator (currently Apache) that stores the direct client IP. On our development machine, this works:

         RequestHeader set X-Foo %{REMOTE_ADDR}e

     However, in staging it doesn't. Specifically, the header is empty, as illustrated by varnishlog:

         13 TxHeader b X-Foo: (null)

     (On the development machine, this shows the IP address as expected.) Similarly, logging REMOTE_ADDR shows that it only appears to be populated on the dev machine:

         # Config
         LogFormat "%{X-Forwarded-For}i %{REMOTE_ADDR}e" combined
         CustomLog "/var/log/httpd/access_log" combined

         # Log file, staging
         <my ip> -
         # Log file, development
         <my ip> <my ip>

     Since the dev machine is, well, a dev machine, it is different in a number of ways; however, I can't track down which difference is causing this. The versions of Apache are the same (2.2.22), and I don't see anything relevant in any of the standard config files or /etc/sysconfig/httpd. The rest of the system is reasonably similar, since they're built off the same CentOS 5 base image. I can't even tell from the Apache documentation whether REMOTE_ADDR is expected to exist as an environment variable, but it clearly works on one machine, whether by fluke or design, and the inconsistency is driving me mad.
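
     The underlying gotcha is that REMOTE_ADDR is a CGI variable, not a true process environment variable; whether %{REMOTE_ADDR}e resolves in RequestHeader depends on whether some other module happened to export it, which can differ between otherwise similar machines. A common workaround is to copy it into a real environment variable first, assuming mod_rewrite and mod_headers are both loaded:

         RewriteEngine On
         RewriteRule .* - [E=CLIENT_IP:%{REMOTE_ADDR}]
         RequestHeader set X-Foo %{CLIENT_IP}e

     Inside mod_rewrite, %{REMOTE_ADDR} is a documented server variable, so this sidesteps the question of who sets the environment version.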

    Read the article

  • Centralized backup method recommendation for SMEs with various OSes

    - by Akinator
     Hi, I was wondering what in your opinion is the "best" method for having "everything" backed up in the following situation. We are an SME with 10 computers in total; three of them are Macs, and the rest are Windows (1 Vista, 4 Win7 and 2 XP). I'm very open to what the method should be, but you should also consider the following:

     Very limited resources
     Quite "small" bandwidth: 4 Mbps download and 0.4 Mbps upload for everyone (yep, that's it), though this might get a little better

     One of the main things to back up would be the mail. Considerations:

     All Windows computers use Outlook, mainly 2003
     One Mac uses Outlook too (for Mac, of course; not 2011 yet)

     We also have to back up the files:

     Not a huge amount
     Very few very big files
     Very organized (by machine)

     What I would like is to hear your opinions as to which would be the best method (or combination of methods, preferably one, of course). We are not sure what we need and I'm open to suggestions, though an online (cloud-based) application would be great; remember that the bandwidth is unbearable. Last thing to consider is that we would like to do weekly backups (unless the method is very easy, of course). Thanks in advance! I tried to be as specific as possible, but if anything is needed I'll gladly update; please ask for any clarification needed! Please avoid answers like "upgrade all to Windows 7 and throw away your Macs" :) ours may not be an ideal situation, but it is what it is, and right now it would be impossible for us to change it for a lot of reasons.
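
     With a 0.4 Mbps uplink, pushing full backups to the cloud is effectively off the table: every 10 GB is roughly two days of saturated upload. A cheap on-site pattern that fits mixed Windows/Mac fleets is one designated backup box (or NAS) pulling rsync copies on a schedule; Outlook PSTs back up as ordinary files as long as Outlook is closed when the job runs. A hypothetical sketch (hostnames and paths are placeholders):

         #!/bin/sh
         # Pull-backup sketch, run weekly from the backup machine.
         DEST=/backup
         for host in win-pc1 win-pc2 mac-pc1; do
             # each client exposes its data via rsyncd or rsync over ssh
             rsync -a --delete "rsync://$host/backup/" "$DEST/$host/"
         done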

    Read the article

  • Routing WIFI and LAN for specific traffic

    - by jakebird451
     I have two network devices aboard my MacBook Pro:

     WIFI (en1): used for general traffic. Connects to an IP of 192.168.19.* via DHCP.
     LAN (en0): used for specific traffic. Connects as static IP 192.168.2.10. Does not go through a router, only a switch for a direct connection.

     I have 4 IP addresses I need to access on the LAN:

     192.168.2.1
     192.168.2.21
     192.168.2.20
     192.168.2.30

     The rest of the traffic needs to go to WIFI. I have tried setting up a routing table for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I tried:

         sudo route add -host 192.168.2.30 -interface en0

     This command killed my ability to use ping; it told me that ping could not allocate memory (is that even possible?). It also killed my WiFi access. Logging out and back in fixed the issue. I do not need to make this solution permanent, so I am fine with temporary routing.
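
     Host routes are the right tool here, and adding them should not require touching the default route at all; the "could not allocate memory" message from ping usually points at a malformed entry left in the routing table rather than an inherent conflict. A hedged sketch of the clean version:

         # Pin only the four LAN hosts to the wired interface; everything else
         # keeps following the Wi-Fi default route (macOS route syntax; -n
         # skips DNS lookups).
         for ip in 192.168.2.1 192.168.2.20 192.168.2.21 192.168.2.30; do
             sudo route -n add -host "$ip" -interface en0
         done

         # To back one out:
         # sudo route -n delete -host 192.168.2.30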

    Read the article

  • Take a regular Windows 7 clone with clonezilla (device-to-image)

    - by Mario De Schaepmeester
     I am inexperienced with cloning software and I've decided to use Clonezilla, as it seemed the best freeware option. I chose device-image mode and left most options at their defaults. I chose expert mode anyway to see what I could configure, and decided to try the lzop algorithm instead of the default one for compression; the rest was left at default. When Clonezilla asked me which partitions to clone (I chose parts-to-image), I chose the C:\ drive, but Windows 7 also creates a 100 MB partition on setup for system files (the actual boot partition?), so I copied that into the image as well. The reason I didn't choose disk-to-image is that I also have a data partition that needs to stay intact. Now I'm simply not sure this is the way to go, should I ever need to restore my disk image. Will Clonezilla know what to do with both partitions, and will Windows 7 work perfectly after restoring? Edit: apparently a similar question has been asked before. The link to the first article in the answer is not relevant to me, since it covers a device-to-device clone. It appears the Windows installation disk can repair the 100 MB partition. As for Clonezilla, it copies "hidden data after the MBR" by default too. I don't know; I feel I'll be all right, whether by restoring the partition with Clonezilla or repairing it with the Windows 7 disk.
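
     For reference, if a restore does leave the 100 MB System Reserved partition unbootable, the repair from the Windows 7 install disc is short (Repair your computer, then Command Prompt):

         rem Rewrite the MBR boot code, the partition boot sector, and the BCD store:
         bootrec /fixmbr
         bootrec /fixboot
         bootrec /rebuildbcd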

    Read the article

  • Batch-Remove Specific Text from Photo Descriptions

    - by David
    I've recently upgraded to iPhoto '11 (couldn't resist the pricing on the new app store) and as I'm adding more meta-data to my library and generally organizing things (places, faces, etc... I hadn't upgraded since '08) I've noticed something odd in my photos. Every photo in my library has a description (though many are short), but it would appear that somehow the description of one of the photos has been appended to many. I don't know if maybe I accidentally screwed up a batch change at some point in the past, or if the library upgrade somehow messed up, or what else may have happened. But what I need to do is fix these somehow. Now, manually editing is something of a daunting task. Within a library of 21,248 photos, 18,858 of them have this additional text. The one thing I do have going for me is that it's a specific string. If there's a way to "remove this string from everywhere in the library without removing the rest of any given description" then that would be perfect. Is there anything I can do like this? Maybe even manually editing a library file in a text editor? (Would that break anything else in iPhoto if its library was edited outside of the application, even while it's not running?) Does anybody have any ideas?
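
     One caution on the text-editor idea: iPhoto '11 keeps descriptions in its internal library database, and AlbumData.xml is a one-way export for other applications, so editing it by hand won't stick (and editing library internals outside iPhoto risks corruption). A scripted route is AppleScript, assuming iPhoto '11 still exposes a settable "comment" property on photos as older versions did; this is only a sketch, to be tested on a copy of the library first:

         # Hypothetical sketch: strip one exact string from every description.
         osascript <<'EOF'
         set badText to "the unwanted sentence"
         tell application "iPhoto"
             repeat with p in (photos of photo library album)
                 set d to comment of p
                 if d contains badText then
                     set AppleScript's text item delimiters to badText
                     set parts to text items of d
                     set AppleScript's text item delimiters to ""
                     set comment of p to (parts as text)
                 end if
             end repeat
         end tell
         EOF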

    Read the article

  • Terminal Server CPU usage at 100%

    - by Light1c3
     I'm running a terminal server with around 50-60 users, and every so often the server will go from 40% CPU usage to 100%. I took a closer look, and it seems that every time this happens, a different user or two are caught in a loop and end up using upwards of 30% each, where the rest of the users only use a maximum of 5%. The company behind the software we use claims it's due to the server's inadequate hardware (it's a VM running on a dual quad-core setup), which to me sounds like BS! I'm fairly new to this level of IT, so if I misspoke I apologize. I have no way to prove it, but I believe adding more raw hardware power won't do me any good, as this seems like a bug in their software, and it will suck up as much (or as little) CPU as it's given. The VM in question has 4 vCPU cores and 12 GB RAM available, and is running Windows Server 2008, 64-bit. Thanks in advance for your help! Note: I have the same question posted on SO, but was pointed in this direction, so just in case, here is a link to the post: http://stackoverflow.com/questions/17276602/termserver-cpu-at-100
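
     A quick way to put names to the claim is to catch a spike and record which process, and which user session, is burning the CPU; if it is always the vendor's binary, that is hard evidence against the "inadequate hardware" line. A hedged sketch (the PID is a placeholder):

         rem Snapshot per-process CPU at the moment of a spike:
         wmic path Win32_PerfFormattedData_PerfProc_Process get Name,IDProcess,PercentProcessorTime

         rem Map a hot PID to its owning user and session:
         tasklist /V /FI "PID eq 1234"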

    Read the article

  • Setting up a home network

    - by Mat Richardson
     I'm trying to get my head around how to do this with the equipment I have. Here's how it is: I have an old laptop with Server 2003 installed on it. It has no wireless NIC, but I have a spare wireless router that I can wire to it. We have a Virgin wireless router which provides the home with internet access. What I would like to do is make resources on the Server 2003 machine available to the rest of the devices in the home, specifically a couple of laptops running Windows 7. Ideally I'd like to be able to use the laptop as a file server, but maybe also offer application virtualization. In the first instance, however, I'd like to create a file server with the laptop and allow connections to it from the other machines. Virtualization is just a "nice to have"... Given the hardware I have, is this possible? Can you point me in the right direction for how to do what I want, if this is the case?
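
     The file-server half needs nothing beyond what's already in the box: once the Server 2003 laptop is on the same LAN as the Virgin router (wired directly, or via the spare router configured as an access point), plain Windows file sharing covers it. A minimal sketch (names are placeholders):

         rem On the Server 2003 laptop: publish a folder
         net share Files=C:\Shared

         rem On each Windows 7 laptop: map it, using an account local to the server
         net use Z: \\server2003\Files /user:server2003\mat /persistent:yes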

    Read the article

  • Unwanted forced authentication after server restart (Win 2k3)

    - by Felthragar
     We're running a Windows Server 2003 R2 Standard 64-bit server. On this server we run a file server and allow remote login to our network through VPN. We do not currently use a domain setup; all user accounts are local accounts on the server, and each employee is given a unique account to log in with. The password is a randomly generated 16-character string, which makes it hard to remember, so we have basically had the password stored on the client machine (standard "Remember Me" functionality). This has worked well. However, last night our server automatically restarted after an automatic update. After that, some of our employees, myself included, had to re-authenticate with the server, submitting our credentials again. Then again, some others did not have to re-authenticate. Do you have any idea why this is? Is there a setting to prevent it? I've checked the logs, but I couldn't find anything of interest; then again, I'm not really sure what I'm looking for. Thanks in advance; I'll try to answer any additional questions you may have. Edit: when I say "login" or "authenticate" I mean through the standard Windows SMB protocol. Edit 2: OK, new day. Tonight the server restarted again, and the same two clients that had to re-authenticate yesterday had to re-authenticate today as well. The rest did not.
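
     One repeatable difference between clients that survive a server reboot and clients that don't is where the saved credential lives: a credential stored persistently in Credential Manager survives anything, while one cached only for the current session dies with it. On the affected Vista/Windows 7 clients (assuming that is what they run), it may be enough to store the credential explicitly; cmdkey prompts for the password when /pass is given no value:

         rem Store a persistent credential for the server on the client:
         cmdkey /add:servername /user:servername\alice /pass

         rem Verify what is stored:
         cmdkey /list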

    Read the article

  • Error getting PAM / Linux integrated with Active Directory

    - by topper
     I'm trying to add a Linux server to a network which is controlled by AD. The aim is that users of the server will be able to authenticate against the AD domain. I have Kerberos working, but NSS/PAM are more problematic. I'm trying to debug with a simple command such as the following; please see the error. Can anyone assist me in debugging this?

         root@antonyg04:~# ldapsearch -H ldap://raadc04.corp.MUNGED.com/ -x -D "cn=MUNGED,ou=Users,dc=corp,dc=MUNGED,dc=com" -W uid=MUNGED
         Enter LDAP Password:
         ldap_bind: Invalid credentials (49)
                 additional info: 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece

     I have had to munge some details, but I can tell you that cn=MUNGED is my username for logging into the AD domain, and the password that I typed was the password for that domain. I don't know why it says "Invalid credentials", and the rest of the error is so cryptic I have no idea. Is my approach somehow flawed? Is my DN obviously wrong? How can I confirm the correct DN? There was a tool online, but I can't find it. NB: I have no access to the AD server for administration or configuration.
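
     The error is less cryptic than it looks: in an AD bind failure, "data 525" means the bind identity was not found (a wrong password would be "data 52e"), so the DN itself is the problem. Two things to check: AD's default location for accounts is the cn=Users container, not an ou=Users organizational unit, and AD also accepts a userPrincipalName as the bind identity, which avoids guessing the DN entirely. Note too that AD stores logins in sAMAccountName rather than uid. A hedged version of the same search:

         # Bind with the UPN (assuming the login is MUNGED@corp.MUNGED.com) and
         # search by sAMAccountName from the domain root:
         ldapsearch -H ldap://raadc04.corp.MUNGED.com/ -x \
             -D "MUNGED@corp.MUNGED.com" -W \
             -b "dc=corp,dc=MUNGED,dc=com" "(sAMAccountName=MUNGED)"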

    Read the article

  • FTP Server upload and filesystem questions

    - by Alex
     I'm a photographer who mainly does event photography. A while ago I bought myself a Nikon WT-4 wireless transmitter, a small device which connects via USB to my Nikon D700 DSLR and then establishes a connection to an existing WLAN. It can then upload any pictures I take via FTP to an FTP server somewhere in the network. On my laptop I have a piece of software which checks a given folder on the disk regularly; it is smart enough to look at the modified-file timestamp, and if this timestamp is less than 10 seconds old, it will skip the file in this iteration of the import scan.

     The problem I've discovered seems to be inherent to the FTP protocol, as I have the same problem with Windows 7's built-in IIS server as I do with FileZilla FTP server. When the transmitter starts to upload a file, the FTP server will create a small 300-500 KB file with the correct filename on the disk, but then do nothing with the file until it has completely received it via FTP. So it seems to create this small dummy file, buffer the remainder of the upload, and then dump the rest into the dummy file, making it the correct size. The problem is, these uploads take about 15-30 seconds depending on reception, but since the folder-watch tool already tries to import any file older than 10 seconds, it always tries to import the small dummy files, which obviously fails as they're not complete yet. Is there any way to disable this behaviour? Ideally I would like my file only to show up once it's been completely uploaded. Or perhaps someone knows another FTP server application (it has to run on Win7) which does not show this behaviour?
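
     The robust cue is not the file's age but whether the FTP daemon still has it open for writing; a watcher that tests for that cannot be fooled by slow uploads. On Windows the test is attempting to open the file with exclusive access and skipping it if that fails; the same idea in POSIX-flavored shell, purely as a sketch of the logic:

         #!/bin/sh
         # Move only files no process still holds open (lsof exits non-zero
         # when nothing has the file open). Paths are placeholders.
         for f in /ftp/incoming/*; do
             if ! lsof -- "$f" >/dev/null 2>&1; then
                 mv -- "$f" /ftp/ready/
             fi
         done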

    Read the article

  • Developing high-performance and scalable zend framework website

    - by Daniel
     We are going to develop an ads website like http://www.gumtree.com/ (it will not be exactly like it; that's just to give you an idea), and we have some questions regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator, though I understand this creates problems with scalability across multiple servers) + jQuery, with a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are:

     What do you think about my approach?
     What solutions would you recommend in terms of the server stack (MySQL, NGINX, MongoDB or pgsql) for a scalable application expected to receive a lot of traffic, using PHP? I would be interested in your approach.

     Note: I'm a Zend Framework developer and don't have much experience with the server side (to determine what would be the best solution for my scalable application).

    Read the article

  • What should I encrypt in Debian during install?

    - by ianfuture
     I have seen various guides and recommendations on the web about how best to do this, but nothing that clearly explains the best way and why. As I understand it, part of Debian must remain unencrypted on its own partition during install to allow it to boot; most guides I have seen call this /boot and set the boot flag. Next, I believe the best approach is to create another partition out of all the rest of the disk space, encrypt it, then on top of that create an LVM, and within the LVM create my various partitions, name them, and select size and filesystem type. Can I include swap in the encrypted LVM part? Is this approach sound? If so, what are the partitions I should use (this is going to be a minimal server install, with a view to installing what I need for a dev server as and when)? Finally, how does the installer know what to put in each partition I define? I appreciate there is more than one question here, but any help and suggestions would be appreciated. If further clarification is needed, please mention it in the comments.

     Edit (16/3/2010): After Richard Holloway's reply, I thought it relevant to add this info. The reason I want to do this is to explore maximising security in any server install and setup, due to an interest in computer security and forensics. I am also trying to perform the task as if it were an enterprise situation. On a technical matter, once set up and configured with minimal packages and SSH, this server will not be physically easy to access, so I will only be entering via SSH. (Yes, I know: why encrypt something no one will ever be able to get their hands on? Because I can and I want to is the simple answer, but see above too.)
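
     The approach is sound, and it is essentially what the Debian installer's "encrypted LVM" guided option builds: an unencrypted /boot, one LUKS container over the rest, LVM inside it, and yes, swap belongs inside the container (unencrypted swap would leak keys and data to disk). The installer knows what goes where because each logical volume is assigned a mount point in the partitioner, which it then records in /etc/fstab. Done by hand, the layout looks roughly like this (device names and sizes are examples):

         cryptsetup luksFormat /dev/sda2          # everything except /boot
         cryptsetup luksOpen /dev/sda2 crypt0
         pvcreate /dev/mapper/crypt0
         vgcreate vg0 /dev/mapper/crypt0
         lvcreate -L 2G  -n swap vg0
         lvcreate -L 10G -n root vg0
         lvcreate -l 100%FREE -n home vg0
         mkswap /dev/vg0/swap
         mkfs.ext4 /dev/vg0/root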

    Read the article

  • ESX 3.5 refuses to update

    - by Speeddymon
     I have a set of ESX 3.5 servers in two different datacenters: one DR, one production. They are on the same VLAN, so I can access any of them on the private network from my vCenter server. Last month, as a learning experience (I hadn't dealt with ESX much before), I updated the DR server. Other than finding out that a couple of bundles had to be installed manually in order to get the rest to install from vCenter, it went off without a hitch. Now I'm trying to do the same for our production servers and it is not working. I've googled around for the error I get during the scan and investigated loads of different solutions (editing the integrity file, checking DNS, etc.); I did already install the two bundles that had to be installed manually, but the scan from vCenter is just not working. Side note: I just scanned the DR server again and that scan works fine, so this shouldn't be a problem with vCenter that has cropped up recently; it has to be something else. The error I get is:

         Patch metadata for (servername) missing. Please download updates metadata first.
         Failed to scan (servername) for updates.

     I'm all out of ideas on how to make this work, so any help would be hugely appreciated.

    Read the article

  • extra managed+unmanaged switches @ home/office -- best (mis)usage scenario? what would you do?

    - by locuse
     up front -- definitely NOT a mission-critical kind of question. after a 'spring cleaning' of my local office, i've ended up with two 'spare' GigE switches at my home/office -- one managed, capable of VLANs, QoS, etc, and the other unmanaged. i've got more ports than i need. in fact EACH switch has more total ports than i need. but, since i can't have these just sitting around not doing SOMETHING ... ;-) i'm interested in ideas for best combined use of these switches. my local topology is simple:

         [ net ] -- [ adsl2 modem ] -- [ linux firewall/router/DNS ]
                                                    |
                              [ some arrangement of the 2 GigE switches ]
                                                    |
                                    ( ... stuff on the lan ... )
              [WAP1] [voip ATA] [printer]
              [desktop1] [desktop2] [desktop3] [desktop4]
              [mail server] [Xen server (mostly dev, + file server + media server)]

     the MailServer is a production mail server. the XenServer serves some low volume to the 'net; the MediaServer guest serves ONLY to the LAN. is there, e.g., any performance value in segmenting off any of the LAN using the managed switch (VLAN? QoS tagging? something?), feeding the rest into the connected unmanaged switch? or should i simply use one of the switches & be done with it, and use the other for a coffee-cup stand?

    Read the article

  • How expensive is a hostname in htaccess? Other solutions possible?

    - by Nanne
     For easy allowing or disallowing of dynamic IP addresses, you can add them as hostnames in a .htaccess file. As I have read in ".htaccess allow from hostname?", Apache does a reverse lookup on the connecting IP address and checks whether the response matches the allowed name (actually a double lookup: first a reverse lookup, then a forward lookup on the result of the reverse). This is the reason we are currently not using dynamic-IP hostnames in the .htaccess: it "sounds" quite heavy, two extra DNS lookups for every request. Is this indeed heavy, and would a reasonably busy server that would rather have less load than more get away with it? (E.g., how does this load compare to the rest? If a request is 1000 times more expensive than the lookups, it might be negligible; on the other hand, it could be the final straw.) Are there other solutions? I could write a script that looks up the hostname and puts the IP in the .htaccess files, of course, but that feels a bit like a hack.
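
     The cost is less about CPU than latency and fragility: each request blocks on two DNS round-trips, and a slow or flaky resolver turns directly into slow page loads, which is why hostname-based Allow rules are generally avoided on busy sites. The "hack" is actually the standard fix, and it is small; a sketch run from cron every few minutes (hostname and paths are placeholders):

         #!/bin/sh
         # Resolve the dynamic hostname once, rewrite the .htaccess, and let
         # Apache do zero DNS work per request.
         ip=$(dig +short home.example.dyndns.org | tail -n 1)
         [ -n "$ip" ] || exit 1          # keep the old rule if resolution fails
         printf 'Order deny,allow\nDeny from all\nAllow from %s\n' "$ip" \
             > /var/www/site/.htaccess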

    Read the article

  • 4GB Memory Upgrade for Acer Aspire 5102WLMi

    - by Richard Slater
     I have bought a 4 GB memory upgrade (2x 2 GB PC2-5300 SODIMM) for my Acer Aspire 5102WLMi (Aspire 5100 series) laptop. I installed the two memory modules correctly; however, with 4 GB installed the laptop refuses to POST. I have tried the following:

     Tried each 2 GB SODIMM without the other (worked fine)
     Tried the original 512 MB SODIMMs (worked fine)
     Tried one original 512 MB SODIMM with one new 2 GB SODIMM (worked fine)
     Tried swapping the two 2 GB SODIMMs between slots (didn't boot)
     Left the computer for 10 minutes with both 2 GB SODIMMs installed (didn't boot)
     Checked that the latest BIOS is installed (no change)

     The Crucial website said that the laptop supports 4 GB of RAM, as do several other sites found through Google, so up until now I was fairly confident this would work. A couple of questions that would be good to have answered:

     Question: Has anyone got an Acer Aspire 5100 series running with 4 GB RAM?
     Answer: Yes, I have now got one working with 3.75 GB usable; the rest is used by the graphics card.

     Question: Any tips on getting this to work; is there a CMOS reset switch?
     Answer: Yes, there is. If both SODIMMs are removed, two very small interlocking PCB tracks are revealed. If these are shorted together with a screwdriver, the BIOS will be reset.

     Thanks.

    Read the article

  • Domain joining debate for Outlook 2010 with Exchange 2007 on Windows SBS 2008, for a user on a laptop that will travel a fair amount of the time

    - by user71195
     I'm basically debating whether or not to join the domain on a laptop, and was wondering if anyone has had a similar experience. If the computer were staying in the office, it's a no-brainer: join the domain. In this case I have a user who will come into the office a few days a week and work remotely the rest of the time. There is a working VPN using an OpenVPN client/server, but it's not site-to-site. My knee-jerk reaction is to not join the domain, so that the user can have one profile that they always use. In this configuration, should Outlook work properly with the user's domain account, and should the shared calendar still work (at least once inside the VPN)? My concern with joining the domain would be the inability to log in to it when elsewhere. Is there maybe a way around this with caching or something? Would creating a second local login make sense for a user like this in any way? If so, why not just skip the domain join to begin with? Any thoughts on or experiences with this would be appreciated.

     Laptop OS: Windows 7 (not purchased yet; Pro if a domain is needed)
     Server: SBS 2008, Exchange 2007
     Outlook version: 2010

     Thanks for any help, Mike
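
     The caching workaround is built in: Windows keeps cached domain credentials precisely so a domain-joined laptop can log in with the same profile while away from the network (the default is the last 10 interactive logons). That removes the main argument against joining. A quick check on any Windows 7 machine that the policy hasn't been zeroed out:

         rem 10 is the default; 0 would disable offline domain logons entirely
         reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v CachedLogonsCount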

    Read the article

  • Windows xp blinking under score after bios

    - by heyjoe
     This is for an older PC I have to repair for a friend. The PC has an HDD of about 60-something GB and runs Windows XP. On roughly 60-70% of boots it hangs after the BIOS screen, showing only a blinking underscore; the rest of the time it boots fine, or the computer shuts down on the XP loading screen. Sometimes if you leave it alone while the underscore is blinking, it will boot after a while, like a few minutes; sometimes it won't boot at all even if you give it more time, like an hour. When it boots successfully, the PC seems to work fine. I think it's a bad hard disk, and I'm about to suggest buying a new one and swapping it in, but I don't have enough experience, and I would hate to make him buy a new HDD without it solving the problem. Does anyone have any tips? I know there are other topics about blinking underscores or cursors while XP is booting, but the issue of the PC shutting itself down, or sometimes booting fine, really freaks me out. I can't format everything and reinstall until about 10 days from now, because he has a business program on this PC that I have to migrate when the next computer arrives, and he needs to use it until then. So please advise, thanks.
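
     The bad-disk theory is easy to test without touching the Windows install: boot any Linux live CD and read the drive's own SMART counters; non-zero reallocated or pending sector counts on a drive that old would settle it. A hedged sketch (device name assumed):

         # From a live CD; install smartmontools first if the live image lacks it
         smartctl -a /dev/sda | grep -i -E 'reallocated|pending|uncorrect'

     Running chkdsk /r from the XP Recovery Console is a softer first step, though it can take hours on a failing drive.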

    Read the article

  • Best format for hard drive for Windows and Mac?

    - by Neil
     I have a 500 GB USB external hard drive. I need four partitions on it, for the following purposes:

     160 GB for a bootable backup of my Mac.
     160 GB for a bootable backup of my Windows.
     11 GB for a bootable Snow Leopard install disk.
     The rest for file storage.

     Now I need a partition table which will be recognised on both Windows and Mac, without needing extra software on Windows, which will let me keep bootable copies of both OSes but let me access the file storage from both. Currently I have a GUID Partition Table, with Mac OS Extended (Journaled) partitions for the two backups, Mac OS Extended for the install disk, and NTFS for the file storage. While this gets recognised perfectly on my Mac, thanks to an NTFS for Mac driver from Paragon, when connected to Windows the drive is detected by the machine (listed in Safely Remove USB) but not recognised in Windows Explorer, unless I install MacDrive, which is not feasible on public Windows machines I might want to access my storage area on. Can someone recommend the best combination of formats and software/drivers to get this done seamlessly?
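
     One detail worth checking before shuffling formats: pre-Vista Windows cannot read GPT data disks at all, so on an XP machine (common as a public PC at the time) the drive shows up in Safely Remove but never in Explorer, regardless of filesystem; Vista and later read both NTFS and FAT32 partitions on GPT with no extra software. Since the bootable Mac backup effectively requires GPT, the pragmatic combination is to keep GPT, make the shared area FAT32, and accept that XP boxes are out. A sketch of rebuilding the layout from the Mac (destructive; confirm the disk identifier with diskutil list first):

         # 4 partitions: two HFS+ backup targets, the installer, FAT32 storage.
         # The Windows-backup partition can be reformatted as NTFS from Windows
         # afterwards, since diskutil cannot create NTFS.
         diskutil partitionDisk disk2 4 GPT \
             JHFS+ MacBackup 160g \
             JHFS+ WinBackup 160g \
             JHFS+ Installer 11g \
             MS-DOS SHARED R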

    Read the article

  • Ubuntu 12.04 can't boot after installing with software RAID 1

    - by Bill
     I've been trying to install Ubuntu with software RAID on my server, and there is obviously something I don't understand about the process. This is the guide that I followed: https://help.ubuntu.com/11.04/serverguide/advanced-installation.html

     I have two identical 1 TB disks in my server. I went through the initial install process and manually set up my partitions. On each disk I set up:

     (1) 100 MB partition for EFI boot (I didn't originally have this, but added it based on a forum post I found after my original install failed to boot; I ended up with EFIboot since that was what the guided partitioning decided to do)
     (1) 970 GB partition for /
     (1) 30 GB partition for swap

     I then created new RAID 1 devices combining the two partitions, one from each disk, such that each partition is mirrored, and configured their usage as stated above. After saving the configuration I said yes to booting in a degraded state. The rest of the setup went normally, with no errors of any kind. I saw GRUB being installed, again with no errors. However, after rebooting the server I get the dreaded "Insert boot media" and nothing happens. I loaded up the recovery disk and the mdadm configuration looks correct:

     md0 is my EFI boot partition
     md1 is my / partition, using ext4
     md2 is my swap partition

     Running file -s /dev/md0 doesn't indicate that GRUB is there, so I attempted to reinstall GRUB using the recovery disk. I selected the md0 disk and it appeared to install just fine. Running file -s /dev/md1 shows the error "needs journal recovery"; I'm not sure if that's related or how to fix it. Rebooting gives me the same problem: no boot media found. I've searched around the internet but can't figure out what to do next, or more importantly how to troubleshoot what exactly is going wrong. Thanks!
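
     Two things are worth knowing about this layout. First, the firmware itself has to read the EFI system partition, and it has no idea what an md device is; that only works if the mirror uses 1.0 (end-of-device) metadata so each half still looks like a plain FAT partition, which is likely why file -s /dev/md0 comes back empty. Second, "needs journal recovery" just means the filesystem wasn't cleanly unmounted, and an fsck clears it. A hedged repair sequence from the rescue environment (device names are examples):

         mdadm --assemble --scan
         fsck.ext4 /dev/md1                  # replay the journal
         mount /dev/md1 /mnt
         mount /dev/md0 /mnt/boot/efi
         for d in /dev /proc /sys; do mount --bind "$d" "/mnt$d"; done
         chroot /mnt grub-install /dev/sda
         chroot /mnt grub-install /dev/sdb   # so either disk can boot degraded
         chroot /mnt update-grub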

    Read the article

  • Graphics card ATI Asus 9250 128 MB AGP problem. Monitor switching off.

    - by Dominick1978
     I have an old system with these specs:

     Motherboard: VIA P4X266A (with AGP 4x)
     CPU: P4 1.7 GHz
     Memory: 1152 MB DDR SDRAM
     Graphics card: Voodoo 3 2000 16 MB
     PSU: 300 W

     It also has 1 DVD-ROM, 1 DVD-RW and 2 hard drives (all 4 connected via Molex), plus 1 SoundBlaster sound card and 1 Ethernet card (both connected via PCI). OS: XP Pro. Recently I bought an Asus 9250 128 MB AGP card to replace the Voodoo. When I switch on the PC, the initial screens (up to and including the XP logo) are sometimes distorted with blurred colours. Once XP is loaded there is some flickering, but the rest is OK. XP can't identify the card, seeing it as just a VGA adapter. I have downloaded the latest XP drivers from the ATI website and installed them. After the restart everything is fine (no distorted image or blurring) until after the XP logo; then the monitor turns off while the PC is still running. I have tried many drivers but the problem persists (of course I removed the Voodoo drivers first from the display adapter properties). Only once did I manage to enter XP (after changing the BIOS setting for the graphics card from 256 MB to 128 MB), but the drivers in the display adapter tab had an exclamation mark (ATI 9250!), with an "ATI 9250 secondary" below it without one, and the ATI Catalyst program said it couldn't find the card. That was the only time I got beyond the XP logo; now the monitor just switches off. So, what do you think?

     1) Is the card broken?
     2) Is it the card's drivers?
     3) Is it a PSU fault?
     4) Anything else?

     Thanks for your help, and excuse my English!

    Read the article
