Search Results

Search found 9662 results on 387 pages for 'sales and operations plan'.

Page 294/387

  • SQL Server Subscriber Migration

    - by SuperCoolMoss
    We currently have one-way transactional replication from a SQL Server 2005 OLTP publisher/distributor to two subscribers (one SQL Server 2005 and the other SQL Server 2008 R2). Replication security is via the SQL Agents' domain service account (the same account is used on all boxes). The SQL 2008 R2 subscriber is used for BI purposes and hosts a database that has a subset of the Production publisher database tables, with different security and indexes. We need to migrate this BI subscriber to a newer box with more performant hardware. The plan is as follows:
    1. Stop replicating to the BI box (continue replicating to the other subscriber).
    2. Back up all databases on the BI box (including system databases).
    3. Restore all databases (including master, in single-user mode) to the new BI box (this has SQL Server 2008 R2 already installed).
    4. Take the old BI box off the network and shut it down.
    5. Rename and re-IP the new BI box to be the same as the old box.
    6. Switch replication back on.
    Are there any flaws in this approach?
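
    For step 1, rather than dropping the subscription, one low-impact option is to simply disable the BI box's distribution agent job on the distributor for the cutover window; a minimal command-line sketch (the server and job names below are placeholders, not taken from the question):

        sqlcmd -S DISTRIBUTOR -E -Q "EXEC msdb.dbo.sp_update_job @job_name = N'PUBSRV-ProdDB-ProdPub-BISRV-1', @enabled = 0;"

    Re-running it with @enabled = 1 once the new box answers on the old name and IP resumes delivery, provided the outage stays within the distribution retention period.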

    Read the article

  • Dell PowerEdge T710, add a new hard disk, how to?

    - by user1340802
    I need to add a new hard disk to a PowerEdge T710 running VMware ESXi 4. The disk is a 'normal' 1TB desktop hard disk (that is, it did not come from Dell, and I have no caddy for it, so I cannot plug it into any of the front bays). I would like to add this disk, as easily as possible, for a virtual machine that needs space. I have found that there is an available SATA cable with its power connector, so may I just plug the disk into these and use the empty 5.25" slot available under the CD drive (with a 5.25"-to-3.5" bay adapter)? (Even if, this way, it seems that I bypass the RAID controller that owns the front bays.) That way, I think, could be easier than adding the disk to the already defined RAID (by the way, I am also not sure how to do that, and I would not risk messing up things that already work). What other operations would I have to do? (Sorry, I am a real beginner at VMware ESXi and PowerEdge management :/ I have seen that there is some management from the BIOS (Ctrl+R at start-up) so that the disk will be seen or initialized. I am really not sure of the steps needed...) Thank you, best.

    Read the article

  • Mesh-networked servers via VPN

    - by microspino
    I got a design idea and I would like to have some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do maintenance for servers. They also told me that all of them have little servers and maintenance guys available in their offices. Although everything seems suitable for a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others in a sort of mesh network, without a single point of failure, through VPNs. If one of the servers went down, the customers could still connect to their databases from one of the other mesh-networked servers instead of from the local one that is down. During normal operations all the servers sync the db with the others through the VPNs. I can accept a half-day window of non-synched data; in other words, since I don't need real-time synchronization, the servers don't have to stay in sync at all times. I can migrate my data over to non-SQL technologies like CouchDB or Redis or whatever you suggest. As you can see I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy and management to my customers' offices as much as I can. Is that a crazy idea? Do you know if something similar exists? Which technology would you suggest?
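
    If CouchDB were chosen, its built-in replication would cover the sync side of this design. A minimal sketch, assuming a database called realestate and two office servers reachable as officeA and officeB (all three names are placeholders):

        # run on officeA: continuously push local changes to officeB
        curl -X POST http://localhost:5984/_replicate \
             -H 'Content-Type: application/json' \
             -d '{"source":"realestate","target":"http://officeB:5984/realestate","continuous":true}'

    Each node would post one such request per peer (and the peer does the same in the opposite direction), giving the full mesh; CouchDB's revision tracking is what absorbs the half-day eventual-consistency window described above.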

    Read the article

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions about whether this is feasible:
    1. Set up a wildcard cert for *.domain.com.
    2. Whenever I need an image, generate a number based on a hash mod 5 of the filename, and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash will make any filename always use the same subdomain, and therefore the browser should be able to cache the images.
    3. Configure a dynamic virtualhost record to point all img#. subdomains to /var/www/img.
    I am looking for feedback about this plan. My concerns are: Will I get warnings when my page has https:// links to multiple subdomains? Is the dynamic virtualhost record I'm talking about even possible? Considering the amount of processing this would require, is it likely to even produce any kind of overall benefit? I'm probably averaging a half-dozen images per page, with only half being changed on each page refresh. Thanks in advance for your feedback.
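
    The hash-to-subdomain step in point 2 is cheap; a minimal command-line sketch of the idea (the filename and domain are placeholders):

        # stable shard number 1-5 derived from the first 32 bits of the filename's MD5
        filename="header-logo.png"
        n=$(( 0x$(printf '%s' "$filename" | md5sum | cut -c1-8) % 5 + 1 ))
        echo "https://img${n}.domain.com/${filename}"

    The same image name always maps to the same img#. host, so browser caching is preserved; and for point 3, a single vhost with a wildcard ServerAlias (e.g. img*.domain.com in Apache) and DocumentRoot /var/www/img avoids any per-subdomain configuration.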

    Read the article

  • Missing drive space in Server 2003

    - by Tim Brigham
    I have two drives used for SQL backups which for the last week have been acting strangely - the free space indicated by Windows is far off from what WinDirStat, etc. indicate. There should only be about 60 GB of drive space used and there is about 160. This would match the utilization if the last two backup files were still residing on disk. SQL Server is 2000, the OS is Server 2003 x64, running on a VMware 5.0 cluster. OSSEC and McAfee show this system as clean. My current plan is to temporarily attach one of these drives to another VM for analysis. Is there anything more I should be looking at? There were a lot of pages on the net when I was looking for documentation on this issue, but I haven't found this case described. EDIT: Unfortunately even a full reboot did not clear this behavior. I also used Process Explorer to look for open file handles. No dice.
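
    One cheap check before moving the drive: space held by Volume Shadow Copies does not show up in WinDirStat but does count against free space. A minimal sketch, assuming the backup volume is E: (the drive letter is a placeholder):

        vssadmin list shadowstorage
        vssadmin list shadows /for=E:

    If shadow storage accounts for the missing ~100 GB, it can be capped or cleared with vssadmin's shadowstorage commands rather than by re-homing the disk.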

    Read the article

  • How to migrate WinXP from failing old HD to new one

    - by Péter Török
    Following this issue, we have all our important data backed up now. I also bought and installed a new replacement hard disk (WD 160 GB PATA) as a secondary (slave) drive. I created two primary NTFS partitions on it: a 40 GB system partition and a 110 GB data partition. In theory I could start reinstalling WinXP from scratch on the new system partition, then copy over all user data from the old drive to the new data partition. Once this is done, I could even throw away the old drive, or keep it just to see what happens. (Note: I don't want to clone the whole drive, as it contains a dual-boot setup with an old Linux installation which I don't need anymore; and anyway, a fresh reinstall would do WinXP good to get rid of many years' clutter.) However, I am lazy :-) The old HD is still functioning, and the problem has not manifested again since. So I feel there is no need to hurry with a complete OS reinstall. What I don't know, though, is whether I will be able to install WinXP on the new system partition at a later stage without affecting the contents of the data partition on the same drive. If this is possible, I can just move all our data over to the new data partition to have it safe, then continue running WinXP from the old drive as long as it works. Does anyone see any problems/risks with this plan?

    Read the article

  • Extending a home wireless network using two routers running tomato

    - by jalperin
    I have two Asus RT-N16 routers each flashed with Tomato (actually Tomato USB). UPSTAIRS: Router 'A' (located upstairs) is connected to the internet via the WAN port and connected via a LAN port to a 10/100/1000 switch (Switch A). Several desktops are also attached to Switch A. Router A uses IP 192.168.1.1. DOWNSTAIRS: I've just acquired Router 'B' and set it to IP 192.168.1.2. I have a cable running from Switch A downstairs to another switch (Switch B). Tivo, a blu-ray player and a Mac are connected to Switch B. My plan was to connect Router B to Switch B so that I have improved wireless access downstairs. (The wireless signal from Router A gets weak downstairs in a number of locations.) How should I configure Router B so that all devices in the house can see and talk to one another? I know that I need to change DHCP on Router B so that it doesn't cover the same range as DHCP on Router A. Should I be using WDS on the two routers, or is that unnecessary since I already have a wired connection between the two routers? Any other thoughts or suggestions? Thanks! --Jeff

    Read the article

  • Multiple Remote Desktop Connection in Windows Server 2003?

    - by Joel Bradley
    My company is transitioning all user PCs to Windows 7 64-bit in anticipation of the 2014 cutoff for Windows XP support. So far everything has been going great except for one specific piece of software that will not run in Windows 7. The current plan is to give everyone a cheap secondary PC to run this software, but I feel that's a little much for software that's not even used all the time, although it is essential. I've suggested we install virtual machines but the company does not want to pay for the XP licences. I have access to a copy of Windows Server 2003 that is no longer being used, and I was wondering if it was possible to create a remote desktop server. I know it can be done on a one-to-one basis, but this is a 15-person helpdesk. I'd like to be able to support multiple remote desktop sessions, each with their own logins and desktops. Is this possible? Are there any other alternatives to my issue? FYI, I've been told that XP Mode is only free for consumers. There are costs when used in a corporate environment.

    Read the article

  • virtual machines, dual booting and data disks on SSD

    - by stevemarvell
    This is in planning, so if I've got the strategy wrong, please let me know. There are multiple questions here, but I think they all degenerate to the same answers. The hardware is a laptop with a single SSD. I'm trying not to lose the performance of the SSD. I plan a natively dual-booting Windows (plus Cygwin) and Linux machine which is my BYOD and represents the development environment. I keep the codebase on a shared partition (though sometimes this is an external Thunderbolt SSD) which can be natively "mounted" by whichever OS is in operation. I boot into one or the other environment depending on the task at hand. Sometimes I have to develop with Windows tools, but generally Linux is my preferred development environment. It would be ideal if I could VM the other OS and run either inside either. I'm going to assume, because I've not found a sensible VM-based solution, that I have to get Samba involved to share the code partition between VMs. Is this going to blow my SSD performance in the VM? The client also supplies me with a VM for the target environment, usually Linux. This is not often suited to development and is used for testing only. I normally keep two copies of this, one as a sandbox and one which I deploy to using the client's preferred method. I keep these VM snapshots on the shared partition. The latter is interacted with over the network and so has no disk-sharing requirements. However, it would be useful for the sandbox to be able to "mount" the codebase from the natively running OS. Is this Samba or NFS again, depending on the native OS? Am I missing a trick which allows this all to work smoothly with all four environments running at once, without losing the SSD performance?
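
    When the native OS is Linux, the sandbox-mounts-the-codebase part can be a plain NFS export over the VM's host-only network; a minimal sketch (the paths, subnet and addresses are assumptions, not from the question):

        # on the Linux host: /etc/exports
        /mnt/code  192.168.122.0/24(rw,sync,no_subtree_check)
        # apply it
        sudo exportfs -ra
        # inside the sandbox VM
        sudo mount -t nfs 192.168.122.1:/mnt/code /mnt/code

    Because the traffic never leaves the host, throughput is bounded by the virtual NIC and the SSD rather than a physical network; when the native OS is Windows, a Samba/CIFS share of the same partition plays the equivalent role.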

    Read the article

  • Configure a MacBook Pro to use an external monitor at boot (Debian Linux)

    - by Eric
    In the spirit of reuse, I've installed Debian (version 6.0.5 "squeeze") on my wife’s old MacBook Pro (circa 2009 or so), to repurpose it for various tasks. The catch is that the display is flaky. It will last a random amount of time, between 2 minutes and 2 hours, before freezing and graying out. This is a known issue with that generation of MBP. Fortunately it’s no problem for me, as I plan to use it with an external monitor anyway. Which brings us to the problem: how do I configure this thing to output to the external display by default, and hopefully disable the built-in LCD? The ideal solution would be to modify a setting in the EFI (BIOS), but I’m not holding out much hope for that. The next best thing would be a kernel option I can pass to the NVIDIA driver. What won’t work is a solution that doesn’t give me a display until X starts. I need to have console access, especially given that the built-in LCD is dying and any day now might give out completely. So far I haven’t been able to find anything online. lspci says I’ve got an NVIDIA GeForce 9400M. Help is much appreciated! Eric PS: if this question is better suited to the Unix & Linux area, please advise and I will move it.
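
    If the nouveau KMS driver is in use (rather than the proprietary NVIDIA blob), the kernel's video= parameter can force connectors on or off before X ever starts; a minimal sketch, with connector names that are assumptions -- the real ones are listed under /sys/class/drm/:

        # /etc/default/grub
        GRUB_CMDLINE_LINUX="video=LVDS-1:d video=DVI-I-1:e"
        # then regenerate the boot menu
        sudo update-grub

    Here :d disables the built-in panel and :e forces the external output on, which also covers the text console since KMS drives it.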

    Read the article

  • Regarding partitions for dual-booting Ubuntu with pre-existing Windows 7

    - by Shasteriskt
    I have zero actual experience with configuring disk partitions and the stuff I have read for the past few hours has been confusing me a bit, so please bear with me. First of all, I'd like to explain what I'm aiming to achieve:
    Windows 7 with:
      C:\ Windows 7 (pre-existing installation)
      D:\ Data (already exists and has files on it)
    Ubuntu 11 (does not exist yet, but I already have a LiveCD in hand) with:
      \root directory for Ubuntu
      \home on its own partition
      \swap on its own partition, planned at around 8 GB
    Here is the current situation: I have a single 500 GB hard disk with Windows 7 x64 installed, and the current partition scheme is as follows:
      System Reserved: 100 MB (Primary, Active)
      C: 100 GB - where Windows 7 is installed (Primary)
      D: 365 GB - where my files are located, LOTS of free space (Primary)
    Now, I would like to shrink my D: drive and create around 40 GB of unallocated disk space for the Ubuntu installation, but here is what's confusing me a bit: I'm thinking I would create an extended partition and subdivide it into 3 logical partitions for the Ubuntu setup I had in mind. (If you think my setup is a bad idea, please let me know why. I also hope you can suggest a better one...) I am aware that I can only have up to 4 primary partitions, or 3 primary partitions plus 1 extended partition at most. Now, does the System Reserved partition count as one primary partition? I'm really new to these things and it is totally unclear to me. In shrinking my D: drive using Windows 7's Disk Management tool, I would get unallocated free space which I don't know how to make an extended partition from. It seems like I can only create a primary partition from it, not an extended one. How do I go about it? (I'd also like to note, if it is of any importance, that I am trying to avoid the option to install Ubuntu alongside Windows, and would much rather use the custom install where I can specify which drives I wish to use and so on. Somehow I feel it's safer that way.)
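
    On the mechanical part: the Windows Disk Management tool indeed only offers primary partitions, but the Ubuntu installer's manual ("Something else") partitioner, or GParted/parted on the LiveCD, can turn the unallocated space into an extended partition holding the three logical ones. A rough sketch with parted -- the start/end offsets below are placeholders, not real numbers from this disk, and the installer formats the partitions when mount points are assigned:

        sudo parted /dev/sda unit GB print                            # find where the free space starts
        sudo parted /dev/sda mkpart extended 460GB 500GB              # wrap all free space in an extended partition
        sudo parted /dev/sda mkpart logical ext4 460GB 475GB          # for /
        sudo parted /dev/sda mkpart logical ext4 475GB 492GB          # for /home
        sudo parted /dev/sda mkpart logical linux-swap 492GB 500GB    # for swap, ~8 GB

    The System Reserved partition does count as one of the four primaries, so the layout above keeps you at three primaries plus one extended, which is allowed.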

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account. A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people). We need the same capability with SFTP. My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically-inclined and would prefer to stick with passwords. SSH is not an issue, only SFTP is available. How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH. Currently our SSH configuration has this appended to it in order to jail the users in their own directories:

        # all customers have group 'customer'
        Match group customer
            # jail in home directories
            ChrootDirectory /home/%u
            AllowTcpForwarding no
            X11Forwarding no
            # force SFTP
            ForceCommand internal-sftp
            # for non-customer accounts we use keys instead
            PasswordAuthentication yes

    Our servers are running Ubuntu 12.04 LTS.
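
    One way to get there without PAM tricks is to give each extra login the same numeric UID as customer, so every "user" literally is the same owner on disk. A minimal sketch, not a tested recipe (the login names come from the question; everything else is an assumption):

        # create an additional SFTP-only login sharing customer's UID and group
        sudo useradd -o -u "$(id -u customer)" -g customer \
             -d /home/customer -M -s /bin/false customer_developer1
        sudo passwd customer_developer1

    Because ForceCommand internal-sftp never spawns the login shell, /bin/false is harmless here. Note that ChrootDirectory /home/%u would expand to /home/customer_developer1 for the new login; since its home directory is set to /home/customer, switching the Match block to ChrootDirectory %h (or an equivalent adjustment) keeps the jail pointing at the shared directory.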

    Read the article

  • Licensing SQL Server 2012 Reporting Services w/ SharePoint 2010

    - by Evan M.
    Here's my situation: I have 1 VM that is running SharePoint 2010 SP1. I have a different physical server running SQL Server 2008 R2 that hosts all the configuration and content databases for SharePoint. Now, we want to start providing BI capabilities to our users with SharePoint and SQL Server. With its new features, 2012 is the obvious way to go. To support this, I'm looking to build a new VM that will have SQL Server 2012 installed with Analysis Services and SSIS; this will be the platform that gets our data from our Oracle databases, puts it into a warehouse hosted by the SQL 2012 instance, and builds it into cubes. What's getting me about the platform is licensing for Reporting Services and PowerPivot. My plan was to install SSRS and PowerPivot on the current SharePoint server. But my understanding of the licensing is that instead of only the new SQL Server being licensed, I'd have to license both the new server and the SharePoint server. Conversely, I could install SharePoint onto the SQL Server box and only have to get a second SharePoint license, but then I'd have the added complexity of deploying a separate application server, and it combines my data and application servers. Is my licensing understanding correct, or can I have SSRS and PowerPivot installed separately without incurring additional licensing costs?

    Read the article

  • How to have LiveJournal delegate my OpenID to something else?

    - by T-Boy
    I understand that if I have full control over my domain, I can set it up so that I can delegate the task of authenticating to another OpenID service provider. The problem is, what I'd like to do is get the LiveJournal server to pass the authentication to someone else, instead of having LJ do it. Preferably what I'd like to do is have LiveJournal, when asked by a web site, say, "No, I don't do it anymore -- go to this address". The plan was that this address would then be in a domain I fully control, which would then pass it on to whichever service provider I choose. I don't even know if I've got my understanding of OpenID right, if all these shenanigans are necessary, if my question makes sense, or if it's even possible with a service provider like LiveJournal. ETA: Doing a little more reading, and examining the source of my LiveJournal user page, I note this particular line in the file's <head> area: <link rel="openid.server" href="http://www.livejournal.com/openid/server.bml" /> I suspect that changing this will allow me to forward OpenID requests to whomever I wish; so far so good. Now comes the hard part -- figuring out how to change all of that using LiveJournal's customization options, if that is at all possible (here's hoping I don't need to pay to get that functionality).
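
    For the "address in a domain I fully control" half, OpenID 1.1 delegation is just two <link> tags served from that page; a minimal sketch written as a shell heredoc (the web root, provider URLs and username are placeholders for whichever provider is chosen):

        cat > /var/www/mydomain/index.html <<'EOF'
        <html><head>
          <link rel="openid.server"   href="https://provider.example.com/openid/server" />
          <link rel="openid.delegate" href="https://provider.example.com/my-username" />
        </head><body>OpenID delegation page</body></html>
        EOF

    Sites asked to authenticate that URL are then sent to the chosen provider; whether LiveJournal itself allows rewriting its own openid.server link through the style/customization system is the separate open question above.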

    Read the article

  • Automatic OS reset on a dedicated internet-browsing Windows 7 PC

    - by camelCase
    I have just purchased a new Acer Revo nettop PC for dedicated internet browsing. It will be the only PC on a home network. My original plan was to install one virtual PC for family browsing, another for remote web-based server administration, and to ban browser use from the host Windows 7 OS. The idea was that I could recover to a fresh VHD image once a week to eliminate any build-up of malware inside the browser VMs. However, I am now looking for alternative solutions, since the Intel Atom CPU does not have the hardware VT support that Windows Virtual PC requires. Would it be possible to engineer some type of routine overnight host OS wipe and recovery? I guess cyber cafes do something like this? The only user data that would need to be retained across a recovery would be browser bookmarks, but these could be exported to a remote service. Edit 1: I am thinking the OS reset could be done via some disk image recovery process. Edit 2: Just had a brainwave. Routine browsing could be done via the new Google Chrome OS. I have just seen a video of Google Chrome OS booting off a USB pen drive in seconds.

    Read the article

  • Tables in the SQL Server "master" database, will they cause problems?

    - by pepoluan
    Folks, please be kind to me... I'm just an 'accidental' DBA because our DBA resigned, so I'm a total newbie at DBA work... You see, I have this application, "ESET Remote Administration Server" (ERAS), that stores its logs and analysis in (originally) a local Access database. The decision was made to migrate its database to a SQL Server 2008 R2 machine. ESET (the maker of the software) helpfully provided tools to perform such a migration; unfortunately, being the DBA neophyte that I am, I didn't realize that I had to first create my own database (on the SQL Server side) and assign that database as the 'default' database for ERAS' ODBC connection. Now, the migration tool has successfully created a whole bunch of tables inside the "master" database. My questions: Should I leave things as they are, or should I re-migrate the ERAS database to a different database? If you suggest I perform a re-migration, my plan is to (1) create a new instance, (2) create a new database within the new instance, (3) create a new ODBC System DSN on the ERAS server pointing to the new DB in step 2, and (4) use ESET's migration tool to migrate from the current DSN to the new DSN. Do you think I missed a step there? Thanks beforehand for any guidance.
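
    For sizing up the cleanup, a quick way to see exactly which non-system tables the tool dropped into master (the server/instance name is a placeholder):

        sqlcmd -S SERVERNAME\INSTANCE -E -Q "SELECT name FROM master.sys.tables WHERE is_ms_shipped = 0 ORDER BY name;"

    Anything listed there was created after installation -- in this case, the ERAS tables that would need to be moved or re-migrated.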

    Read the article

  • Where did my backup files go? Can they be recovered?

    - by Ken
    I just purchased a Western Digital Essential SE 1TB external hard drive from Best Buy at their recommendation. I then exchanged it for a Toshiba Canvio (I think that was the name). I have a Toshiba Qosmio X505-Q898. The Canvio locked up my computer, rewrote some kind of OS file, and erased all the restore points as well as the system image backup (according to Best Buy) just by plugging it in for the first time. I never even got to the install part or anything -- plugged it in and it fried my computer. They spent about an hour and a half on my computer, got it back to a somewhat working condition, and gave me access to my files. So now they say I have to back everything up, use my recovery disk and rewrite my OS. Enter the Essential. I brought it home last night, plugged it in and installed everything. It works perfectly, no problems. I backed up everything onto it. I unplugged and plugged it in twice to make sure that everything was on it. The Essential told me it had both the HDD and SSD backed up. So I reinstalled my OS. I plugged the Essential in and everything loads right up. I went to retrieve my files and the Western Digital has nothing on it. It shows all my music, pics, etc. as still being on my computer and needing to be backed up, but there are no files on my computer now. Where is this information coming from, and where did my files go? It's about 810 GB worth of files I've amassed over several years. Is there any way to recover data from this? I plan to contact Western Digital and Best Buy; I just thought I would check here too. Any advice will be appreciated, as a lot of these files are invaluable to me.

    Read the article

  • RHEL 5.3 Kickstart - How to specify the location of individual packages in the Workstation folder?

    - by Ed
    I keep getting "package does not exist" errors during the install. I made a kickstart ISO to create an unattended install of a RHEL 5.3 build machine for C++ software releases. It pulls the kickstart config file from our internal web server. This is handy; it makes it easy to test and modify without having to make a new ISO, and I plan to check it in to version control if I can get it working. Anyway, the RPM packages are located in two folders on the disk: Client and Workstation. The packages install fine for the ones that are physically located under the Client folder. It cannot find those under the Workstation folder, such as doxygen and subversion, and complains that the packages do not exist. Is there a way to specify the individual package location?

        # -----------------------------------------------------------------------------
        # P A C K A G E S
        # -----------------------------------------------------------------------------
        %packages
        @gnome-desktop
        @core
        @base
        @base-x
        @printing
        @development-tools
        emacs
        kexec-tools
        fipscheck
        xorg-x11-server-Xnest
        xorg-x11-server-Xvfb
        # Packages located in the Workstation folder *** install cannot find any of these ??
        bison
        doxygen
        gcc-c++
        subversion
        zlib-devel
        freetype-devel
        libxml2-devel

    Thanks in advance, -Ed
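
    One avenue worth trying (an assumption on my part, not a verified fix for RHEL 5.3's installer): anaconda only resolves packages from repositories it knows about, so if the Workstation tree is reachable from the same web server, a repo line in the kickstart file may make those packages visible, e.g.:

        # hypothetical URL -- point it at wherever the Workstation tree (with its repodata/) is exported
        repo --name=Workstation --baseurl=http://kickstart.example.com/rhel53/Workstation/

    This only works if that directory carries its own repodata; otherwise the Workstation packages would need to be merged into the repository anaconda is already installing from.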

    Read the article

  • Choosing hosting for a custom ecommerce site: shared or dedicated, and what to look for?

    - by spirytus
    Hi, I have (almost) finished developing a website for my client and now need to decide on hosting. Most of the users of the site will be located in Australia, as are my client and I. Now, I want to consider everything before deciding on a host, and a few questions come to mind:
    1. I cannot afford the website being down, and all hosts say something like "99% uptime guaranteed". Should that alone be enough, or shall I ask hosts for some stats, maybe?
    2. Does it make any difference if the servers and the whole hosting company are located in Australia or outside it? I've been hosting a few sites with JustHost.com on shared hosting (cheapest plan, servers in the US I believe) and never seen any delays, but could that be an issue? I would prefer an Australian company so I can actually go to them and give them a piece of my mind if something goes wrong, but US servers seem cheaper.
    3. Would shared hosting do? It's a custom-built ecommerce PHP application, and I know there are security issues with sessions etc. on shared hosting. I will take precautions, of course, but could shared hosting be an issue?
    4. Would dedicated be a worthwhile option, considering that my knowledge of servers is very limited?
    I need to run PHP/MySQL, preferably with unlimited bandwidth, as with my experience I cannot tell what amount of traffic would be sufficient. Please let me know if I haven't provided enough information to answer my questions; I will gladly explain further. Thanks in advance for any answers :)

    Read the article

  • Same native and tagged VLAN possible on Red Hat?

    - by Chris Phillips
    Hi guys and gals, I'm looking at implementing a system using a number of tagged VLANs and a native VLAN, connected to a server over an active/passive bonded interface. The untagged VLAN is for physical machine access; the tagged VLANs are connected to bridges and then to QEMU VMs inside the machine. Hopefully this plan is fine, but I'm trying to implement a crippled version of it in a dev environment, due to a lack of underlying network config in this location, where I just have the same single VLAN delivered to the machine both on a tag AND plain. I'm not clear whether this is going to work (and whether I should just be confident that it will work using different VLANs), as I'm seeing odd things: a VM is ARPing out over the tagged VLAN to the core switch, but the ARP reply is coming back on the untagged interface. Now, an ARP reply is unicast, right? So it's a deliberate thing to send the ARP response on the untagged interface, and not a case of a broadcast response not being passed on the tagged side... i.e. there's some underlying logic pushing it that way. Something about the MACs somehow? This is on a CentOS 5.5 machine, with VLANs from vconfig. (I've seen reference to the Linux macvlan project work, but that's not available here by default.) So:
    1. Should having the SAME VLAN tagged and untagged work?
    2. Will different tagged VLANs plus the untagged interface work nicely and easily?
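
    For reference, a minimal sketch of the intended production shape on CentOS 5.5 (the interface names and VLAN ID are assumptions): one tagged VLAN on top of the bond, bridged for the VMs, with untagged traffic staying on the bond itself:

        # tag VLAN 100 on the bond and bridge it for QEMU guests
        vconfig add bond0 100
        brctl addbr br100
        brctl addif br100 bond0.100
        ifconfig bond0.100 up
        ifconfig br100 up
        # untagged/native traffic continues to use bond0 directly

    With distinct VLAN IDs per bridge this layout is routine; the oddity described above comes from the same MAC being reachable over two logical paths on one link, which is exactly the kind of ambiguity that separate VLAN IDs avoid.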

    Read the article

  • Site name questions [closed]

    - by avenas8808
    I'm creating a website on German cars. No issues with it, since it's basically a PHP site, and let's say for the sake of argument that there are no server issues. However, I plan to call it "autohaus" (well, for the sake of this scenario anyway; it's not its actual name). The site is not for commercial gain; it's a fansite. Legally, can I use a name for a website if it's a commonly used one, as "autohaus" appears to be? There are other sites called "Autohaus", but could someone use a similar name for non-commercial, informational/educational use? Basically, what's the trademark law for such a situation? Currently my site is only on localhost for now, but what if I did want to release it to the web? Obviously the legal issues have to be taken into account before the site goes live, but there are no Data Protection Act concerns etc., since it's not selling anything and no one's buying anything, and the pictures in it are being used educationally (but that's for another question entirely). [NOTE: This question is on trademarks/names etc., not domains]

    Read the article

  • Recovering ZFS pool with errors on import.

    - by Sqeaky
    I have a machine that had some trouble with some bad RAM. After I diagnosed it and removed the offending stick of RAM, the ZFS pool in the machine was trying to access drives using incorrect device names. I simply exported the pool and re-imported it to correct this. However I am now getting this error, and the pool Storage no longer automatically mounts:

        sqeaky@sqeaky-media-server:/$ sudo zpool status
        no pools available

    A regular import says it's corrupt:

        sqeaky@sqeaky-media-server:/$ sudo zpool import
          pool: Storage
            id: 13247750448079582452
         state: UNAVAIL
        status: The pool is formatted using an older on-disk version.
        action: The pool cannot be imported due to damaged devices or data.
        config:
                Storage                 UNAVAIL  insufficient replicas
                  raidz1                UNAVAIL  corrupted data
                    805066522130738790  ONLINE
                    sdd3                ONLINE
                    sda3                ONLINE
                    sdc                 ONLINE

    A specific import says the vdev configuration is invalid:

        sqeaky@sqeaky-media-server:/$ sudo zpool import Storage
        cannot import 'Storage': invalid vdev configuration

    I should have 4 devices in my ZFS pool: /dev/sda3, /dev/sdd3, /dev/sdc and /dev/sdb. I have no clue what 805066522130738790 is, but I plan on investigating further. I am also trying to figure out how to use zdb to get more information about what the pool thinks is going on. For reference, this was set up this way because, at the time this machine/pool was built, it needed certain Linux features and booting from ZFS wasn't yet supported on Linux. The partitions sda1 and sdd1 are in a RAID 1 for the operating system, and sdd2 and sda2 are in a RAID 1 for the swap. Any clue on how to recover this ZFS pool?
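
    Since the original symptom was the pool looking for drives under stale device names, two low-risk things to try before anything destructive are importing by stable device IDs and reading the ZFS label of the device that now shows up only as a GUID (here presumably /dev/sdb, per the expected member list):

        # search for pool members by persistent names instead of sdX
        sudo zpool import -d /dev/disk/by-id
        sudo zpool import -d /dev/disk/by-id Storage
        # dump the ZFS label of the suspect device to see what the pool last recorded for it
        sudo zdb -l /dev/sdb

    This is only a sketch of where to look, not a guaranteed recovery path; if the label on /dev/sdb is intact, the by-id import often resolves exactly this kind of renamed-device confusion.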

    Read the article

  • data transfer rate anomaly (internet)

    - by Nrew
    I tested my internet speed at speedtest.net and got the result: 0.42 Mbps download and 0.21 Mbps upload. My classmate got the same download speed of 0.42 Mbps but has a 0.87 Mbps upload rate. Does the upload rate affect the transfer rate? Because even though we got the same download speed, his transfer rate is about 100 kbps downloading a movie from a torrent, and mine is only about 47 kbps - on the same torrent. And even on direct downloads it's always 47 kbps. Is it possible to tweak something in order to get higher transfer rates? Other details: we're also both using the same ISP. The same slow ISP. And it seems that he's getting the most out of his connection even though his plan is lower than mine. I just don't know why I'm a loser at this. And when I try to complain to the ISP, they say that I'm getting the minimum speed and it's okay. That really sucks. And I'm not using any router, and neither is he. The computer is directly connected to the internet using the modem provided by the ISP.

    Read the article

  • Scenario - NTFS Symbolic Link or Junction?

    - by Unsigned
    Differences:

                          Absolute   Relative   File   Directory   UNC
        Symbolic link        yes        yes      yes      yes      yes
        Junction             yes        no       no       yes      no

    Scenario: let's assume we're creating a reparse point to create the redirect C:\SomeDir => D:\SomeDir. Since this scenario only requires local, absolute paths, either a junction or a symlink would work. In this situation, is there any advantage to using one or the other? Assume Windows 7 for the OS, disregarding backward compatibility (prior to Vista, symlinks are not supported). Update: I have found another difference.
    Symbolic link - the link's permissions only affect delete/rename operations on the link itself; read/write access (to the target) is governed by the target's permissions.
    Junction - the junction's permissions affect enumeration; revoking permissions on the junction will deny file listing through that junction, even if the target folder has more permissive ACLs.
    The permissions make it interesting, as symlinks can allow legacy applications to access configuration files in UAC-restricted areas (such as %ProgramFiles%) without changing existing access permissions, by storing the files in a non-restricted location and creating symlinks in the restricted directory.
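
    For completeness, both reparse points for the scenario above are one-liners from an elevated command prompt (a sketch using the paths from the question):

        :: symbolic link - creating it requires SeCreateSymbolicLinkPrivilege (administrators hold it by default)
        mklink /D C:\SomeDir D:\SomeDir
        :: junction - no special privilege required, only write access to the parent directory
        mklink /J C:\SomeDir D:\SomeDir

    In both cases C:\SomeDir must not already exist; mklink creates it as the link.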

    Read the article

  • SSD as primary or secondary drive on a small Linux server?

    - by Alex Martelli
    I'm pensioning off my 10-year-old home server and replacing it with an Ubuntu 10.04 box. The two storage devices are a Western Digital Caviar Green 2.0TB HD and an Intel X25-M 34nm Gen 2 80GB SATA II 2.5-inch SSD (the box has 8GB RAM and an i5 750, if it matters). I don't care much about boot times (since I don't plan to reboot all that often;-); the main frequent, performance-demanding task will be (re)building large open source C or C++ software packages from sources (as an open source contributor, I do that often). So, I thought I'd keep the SSD as the secondary drive and the HD as the primary one, using the SSD mostly for the files that can otherwise demand a lot of seeking (esp. in a parallel make). However, the friendly vendor (perhaps more experienced in Windows systems than in Linux ones) thinks the "normal" way to configure the machine would be with the SSD as the primary drive. I'm pretty rusty on configuring and tuning systems, so I thought I'd better double-check on SuperUser... thanks in advance for advice about this choice!
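
    If the SSD does end up as the secondary drive, the seek-heavy pieces (source trees, build directories) can simply live on it via a dedicated mount point; a minimal sketch (the device name and paths are assumptions):

        # /etc/fstab entry for the SSD
        /dev/sdb1   /mnt/ssd   ext4   noatime   0   2
        # keep the build trees there and link them into place
        mkdir -p /mnt/ssd/src
        ln -s /mnt/ssd/src ~/src

    Either layout works; what matters for the parallel-make workload is which filesystem the compiler is actually hitting, not which disk the root filesystem boots from.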

    Read the article
