Search Results

Search found 13862 results on 555 pages for 'questions'.

Page 411/555

  • Windows 7 TCP/IP network share guide - looking for help resolving a failure to mount a LaCie network drive that works on XP, Linux, and Mac

    - by Rob
    Can anyone recommend a really good, readable Windows 7 TCP/IP network share guide, book, or reference? I want this because I cannot mount my LaCie 2big Ethernet network drive in Windows 7 (32-bit Home), but I can mount it in Windows XP Home 32-bit, Ubuntu Linux 10.04, and Apple Mac OS X. The drive is mounted via the accompanying LaCie Ethernet Agent in XP (which I believe uses the "Bonjour" protocol); on Mac and Linux it works without any further software. Another Super User user has the same problem, but no answer: Trouble accessing network drives in Windows 7. I hope my take on the question shows a better willingness to investigate and do some digging - and therefore invites some suggestions to help with this. The drive is detected by Windows 7 (i.e. a "network drive found" speech bubble appears), but on trying to open an Explorer window, the window remains blank with the Windows busy pointer. I'd prefer not to reinstall Windows 7 to see if that cures the problem; I'd rather understand what is or isn't happening, perhaps even compare differences with Windows XP. Suggestions, please, for such guides or even for the original problem itself. Update: Rewrote the question more comprehensively here: http://superuser.com/questions/304209/looking-for-definitive-answer-to-accessing-a-network-share-via-windows-7-home-and
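
    A hedged diagnostic sketch, not from the original question: a common culprit with older NAS firmware on Windows 7 is that the drive only accepts older LM/NTLM authentication, and Windows 7 Home lacks secpol.msc, so the policy has to be relaxed via the registry. The share name, drive letter, and IP below are hypothetical:

        :: Check whether the drive answers SMB at all (hypothetical name/address):
        net view \\LACIE-2BIG
        net use L: \\192.168.1.50\share

        :: If authentication fails, allow LM/NTLM responses (back up the registry first):
        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v LmCompatibilityLevel /t REG_DWORD /d 1 /f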

  • Handling emails on a web server - Making sure the FQDN is set correctly based on the website sending the email

    - by webnoob
    I have a Windows 2008 Web Edition server hosting multiple websites using IIS 7.5. At the moment, all the emails are sent via the IIS6 SMTP service. The FQDN of the SMTP service is currently set to the computer name, which isn't correct: it doesn't resolve to a valid DNS entry and is not RFC compliant. Some questions: (1) Is there any way I can change the FQDN of the SMTP service based on the site sending the email? (2) Would it be OK to just set up mailserver.mydomain.com and use that as the FQDN for all the sites across multiple domains? (3) Should I be using some other mail server software to handle this better? The reason I am asking is that lots of emails are hitting spam folders because the settings are incorrect. I have access to the code running the websites, so if something needs to be done there, that shouldn't be a problem. The sites are written in ASP.NET 2.0. EDIT: I have just found an option to create an SMTP virtual server. Would this be the way forward - create a virtual server for each site? Thanks.
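
    One hedged, sketch-level approach to question (1) via the IIS6 metabase (assumptions: the default SMTP virtual server is instance 1 and the property name is FullyQualifiedDomainName - verify both on your box; a per-site FQDN would mean one SMTP virtual server per site, each set the same way):

        cd C:\Inetpub\AdminScripts
        cscript //nologo adsutil.vbs set SmtpSvc/1/FullyQualifiedDomainName "mailserver.mydomain.com"
        net stop smtpsvc && net start smtpsvc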

  • Multiple Routers, Failover, DHCP and multiple gateways. NOT WAN-failover

    - by u_b
    I've had a look around Google and this forum but could not find an answer to my question, so perhaps one of you can help me a little. My intended setup is: Router R1: WAN connection to the ISP; a backup server attached; provides a wireless SSID; other connected devices like printer, laptop, etc., both wired and wireless. Router R2: no WAN connection to the ISP, but connected to R1; connects the MP3 streamer and music server; also serves as a wireless access point with the same SSID; apart from the connections described, only wireless connections. I would like to be able to control music even if R1 is off, i.e. with no internet connection. On the other hand, I would like to be able to access the internet if R2 is off, i.e. with no music access. Last but not least, I would like to stream internet radio, i.e. R1 and R2 are on, and music is streamed from the internet to R1 to R2 to the streamer. I would like to realize all this using DHCP (also using static assignments) so I do not have to configure things statically on Android, laptop, etc. So my question is: can I make DHCP provide a list of two default gateways (R1, R2), so that clients fall back to the other gateway if the currently assigned one is turned off? Thanks in advance, u_b
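
    A sketch of the DHCP question only, assuming a dnsmasq-based DHCP server (addresses hypothetical): DHCP option 3 (router) may carry several gateways, but note that most clients simply take the first entry and fall back only if their own dead-gateway detection kicks in, so the failover behavior varies by operating system:

        # /etc/dnsmasq.conf
        dhcp-range=192.168.1.100,192.168.1.200,12h
        # DHCP option 3 (router) with two gateways, preferred one first:
        dhcp-option=option:router,192.168.1.1,192.168.1.2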

  • How can I take browser screenshots at a higher resolution than my browser supports?

    - by user53575
    I need to take a screenshot of a website as it would appear on a very high resolution monitor... say 16000x12800 pixels. My laptop's screen has a native resolution of 1280x800. Basically, I need to simulate having a monitor resolution much higher than my monitor and video card actually support. I want the screenshot of the site to look pretty much as it does when you hit Ctrl+Minus (zoom out) in Firefox repeatedly, but without any loss of pixels due to scaling. How can I do this? Is there some way to use virtual machine software to simulate a super-high-res display? If not, is there some way to open a browser window bigger than the screen and then capture its contents as a PNG somehow? Anything else that might work? There was an answer here: http://superuser.com/questions/120266/how-can-i-take-browser-screenshots-at-a-higher-resolution-than-my-browser-support But it doesn't work: Firefox stays at the resolution of the physical screen; the window blinks and shrinks back to normal size. Please help!!
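
    A hedged sketch for Linux (or a Linux VM): run the browser inside a virtual X framebuffer larger than any physical screen, then capture it with ImageMagick's import. Display number, wait time, and URL are arbitrary, and a 16000x12800 framebuffer needs several hundred MB of RAM:

        Xvfb :99 -screen 0 16000x12800x24 &
        DISPLAY=:99 firefox http://example.com &
        sleep 15    # give the page time to render
        DISPLAY=:99 import -window root screenshot.png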

  • Looking for advice on using dd to back up a dual-boot laptop

    - by AvatarOfChronos
    My questions boil down to this: if I do "dd if=/dev/sda of=usbdrive", can anybody confirm that this will get everything - MBR, partition information, and all four partitions - and create a drive that I can swap with the failing internal drive without losing anything? If this is done while the computer is running, will it still copy everything? At this point I'm afraid to shut down the computer for fear of it never starting again. Secondly, how tolerant is dd of failing drives? Has anybody used it to recover a half-dead drive before who can share any potential pitfalls? Did it get the data OK, or is this going to be a hope-for-the-best kind of situation? And lastly, if the USB drive is larger than the failing internal drive, I'll still be able to expand the partitions later, so I'm not losing space? This last part seems silly to ask, but with my current streak of bad luck I'll end up overwriting some magic bit and forever turning a 640 GB HDD into a 500 GB HDD. Also, if anybody has a better solution for creating a complete clone that gets everything, I'm all for hearing about it. PostScript: I had been making periodic backups; however, whatever miasma killed the laptop also got the NAS :( Post-PostScript: both devices were on a UPS system.
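
    A minimal sketch of the copy itself (device names hypothetical - verify them with fdisk -l first, since swapping if= and of= destroys the source). A whole-disk dd of /dev/sda does capture the MBR, partition table, and all four partitions, but an image taken from a running system can be inconsistent, so boot from a live CD if at all possible:

        # keep going past read errors, padding unreadable sectors with zeros
        dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync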

  • How to make Firefox file associations consistent with Ubuntu file associations?

    - by wbharding
    This seems to be a pretty commonly Googled question, but one for which there are no answers: http://www.linuxquestions.org/questions/linux-software-2/firefox-download-mime-types-378902 http://www.birkit.com/content/kubuntu-linux/internet/firefox/fix-file-associations-in-firefox.html being two links among the many. The gist of what I want to accomplish is to have Firefox understand the file associations of what I download without me having to manually map all of them myself. Gnome knows the file extensions, so I would have expected Firefox to just use the already-known file mappings there to open the right stuff (as I presume Chrome does). But it doesn't. At least not for me, using Firefox 4, and not by default. When I click on a downloaded file right now, Firefox always has to ask me what application should be used to open it. A handful of Google results tell me that I can reassociate my file extensions by deleting ~/.mozilla/firefox/[profile name]/mimeTypes.rdf, but while deleting that file does in fact result in a new mimeTypes.rdf being generated, the new file is just as barren as the old one. Based on the number of unanswered questions around the Googlesphere, I know this is a very common problem for Ubuntu users, but it seems to be one for which nobody has chimed in with a good solution. Maybe Super User can finally be the panacea for us all?
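
    A hedged thing to check rather than a confirmed fix: Firefox falls back to the desktop environment's own associations, so verify that GNOME actually maps the type to an application (the MIME type and .desktop file below are examples):

        # what does the desktop think opens this type?
        xdg-mime query default application/pdf
        # set it explicitly if the answer is empty:
        xdg-mime default evince.desktop application/pdf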

  • Apache using 100% CPU, once again

    - by CBenni
    Recently, apache2 started using 100% of CPU power; top shows it pinned there (screenshot omitted). From other, similar threads, I took the tip to use mod_status. Aside from HUGE amounts of NULL requests, it gives:

        CPU Usage: u2.16 s1.32 cu0 cs0 - .0835% CPU load
        1.2 requests/sec - 17.6 kB/second - 14.6 kB/request
        8 requests currently being processed, 42 idle workers

    The access and error logs do not show anything surprising or intriguing at all. Note the tiny .08% CPU load. Another tip was to use strace:

        root@server:~# strace -p 1956
        Process 1956 attached - interrupt to quit
        restart_syscall(<... resuming interrupted call ...>

    It remains like this for at least half an hour without producing any additional output. Restarting Apache fixes the problem for less than a second. The server runs a few custom Python scripts as well as a Django-powered website on apache2 (up to date), but even turning the scripts off (or not having them active in the first place) did not change anything. After I stopped Apache and powered my server off, powered it on a few minutes later, and restarted all my services, the CPU usage remained low for several hours, just to pop up again seemingly at random. The DigitalOcean CPU graphs for my server (image omitted) show the CPU usage staying extremely high for almost half a day until I restarted the bot - only to remain stable for several hours and then spike again. I am completely at a loss and don't know what I could do to find out which piece of my code is giving me these problems, or whether Apache itself is the cause. I would therefore greatly appreciate any hints on these questions: What else can I try? Which things might I not have checked? Is this definitely in my own code? How do you find which part of Python code hangs an app in an infinite loop or similar?
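
    A hedged diagnostic sketch (the binary may be httpd instead of apache2 depending on the distro): find the specific child that is burning CPU rather than tracing an arbitrary PID, then attach strace to that one, following forks:

        # list Apache workers, busiest first
        ps -C apache2 -o pid,%cpu,stat,wchan --sort=-%cpu | head
        # attach to the top offender (PID hypothetical)
        strace -f -p 12345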

  • Azure Virtual Machines - what fault tolerance do they provide?

    - by Borek
    We are thinking about moving our virtual machines (Hyper-V VHDs) to Windows Azure, but I haven't found much about what kind of fault tolerance that infrastructure provides. When I run a VHD in Azure, I've got two questions: 1. Is my VHD and all the data in it safe? I think that uploaded VHDs use the "Storage" infrastructure, so they should be automatically replicated to multiple disks and geographically distributed, but should I still make a full-image backup just to be safe? (Note that of course I will be backing up the actual data inside the VMs that I care about; I just want to know if there is a chance greater than 0.0000001% that one day I will receive an email from Microsoft telling me that my VM is gone and that I should create or restore it from scratch.) 2. Do I need to worry about other things regarding the availability of my VMs? I mean, when I have an on-premises server, I need to worry about the hardware itself, about the host operating system, what would happen if my router failed, if my Hyper-V's C: drive failed, etc. Am I right in thinking that with Azure, their infrastructure takes care of all of this? Thanks.

  • What differences are there between "home" switches and "professional" switches?

    - by pjreddie
    Our radio station uses a PtP wireless system to stream our radio and TV signals from our studio up a hill to our transmitter. We have been having problems with warbly sound and dropouts that originate somewhere in this system. An engineer who occasionally visits the station thinks it could be the switches we use on each side of the PtP wireless link to connect the PtP devices to the encoders and decoders, and wants us to get two of these switches: http://www.amazon.com/Netgear-JGS516-ProSafe-16-Port-Ethernet/dp/B0002CWPOK/ref=dp_return_1 The encoder/decoder setup only streams 8 Mbps total, so it seems like the switches we have should not be stressed, unless they are adding enough latency to degrade the performance of the encoder/decoder. At each end of the connection we only have 4 connections; is there any reason we couldn't get a cheaper, "home" quality switch like this: http://www.amazon.com/D-Link-DGS-1005G-5-Port-Gigabit-Desktop/dp/tech-data/B003X7TRWE/ref=de_a_smtd Is there a significant difference in latency that we would notice between these two switches? How much does the quality of the switch actually matter in this scenario? Any help is appreciated; feel free to ask questions if anything needs clarification. Thanks
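
    A hedged way to test whether the current switches are actually the bottleneck (iperf 2 syntax; the address is hypothetical): push a UDP stream at the encoder's 8 Mbps rate across the same path and read the jitter and packet-loss figures iperf reports:

        # on the receiving side (e.g. the transmitter end)
        iperf -s -u
        # on the studio side: 8 Mbps of UDP for 60 seconds
        iperf -c 192.168.0.20 -u -b 8M -t 60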

  • Import data in Excel that doesn't have a row delimiter, but where the number of columns is known

    - by Alex B
    So I have this text file that looks something like this:

        Header1 Header2 Header3 Header4
        A1      B1      C1      D1
        A2      B2      C2      D2

    and so on. When imported, I'd want the data to format itself into 4 columns. I tried Get External Data from Text, and it successfully imports the data, but it doesn't wrap it around: it just keeps making columns for every space. I'd want it to go to the next line after 4 (in this case) elements have been added. What's the simplest way to achieve this? EDIT: My answer follows, since I'm not yet allowed to answer my own questions. The Excel function I needed is called INDIRECT(). Not sure how it actually works, though, so hopefully someone can help out with that, but the function call that worked for me is =INDIRECT(ADDRESS((ROW(A1)-1)*4+COLUMN(A1),1)) which I found over here: http://www.ozgrid.com/forum/showthread.php?t=101584&p=456031#post456031 Note: this required me to add the text to Excel where I'd get this row full of columns, and then flip it so that I'd have a column full of rows.
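
    A short worked sketch of what that formula does (assuming the flattened data sits in column A of the same sheet): ADDRESS((ROW(A1)-1)*4+COLUMN(A1),1) computes a row number into column A, so filling the formula across a 4-column block reads the single column in row-major order:

        cell A1 -> (1-1)*4+1 = row 1 -> $A$1
        cell B1 -> (1-1)*4+2 = row 2 -> $A$2
        cell C1 -> row 3,  cell D1 -> row 4
        cell A2 -> (2-1)*4+1 = row 5 -> $A$5, and so on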

  • How to handle files that don't need version control in Mercurial

    - by richardh
    I am new to Mercurial, and for the most part do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. Re LaTeX, all I really care about is the .tex file. Re R, I don't need version control on the .csv or .sqlite files because they are static. When I do 'hg add' for a repo with a .csv and/or .sqlite file, I get a warning like: rev2.sqlite: up to 3070 MB of RAM may be required to manage this file (use 'hg revert rev2.sqlite' to cancel pending addition) So I revert and subsequently use adds like hg add -X *.sqlite. I guess I really have two questions: (1) Should I ignore these warnings? Because these large files are static, can I just add them to the repo, knowing that the diffs will always be empty, and not worry about wasted resources? (2) If I should keep excluding these files from the repo, is there a way to make this exclusion stick? I.e., add something to my .hgrc file that always appends options like -I *.tex -I *.R to my 'hg add' commands? Thanks!
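
    A hedged pointer rather than a confirmed answer: Mercurial's standard mechanism for this is a .hgignore file in the repository root, which keeps untracked files from ever being picked up by a bare hg add:

        # .hgignore
        syntax: glob
        *.sqlite
        *.csv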

  • Can you see something wrong in my .htaccess?

    - by AlexV
    OK, after much searching, trial, and error, I've managed to create an .htaccess that does what I want (see explanations and questions after the code block):

        <IfModule mod_rewrite.c>
        RewriteEngine On
        #1 If the requested file is not url-mapper.php (to avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!url-mapper\.php)$
        #2 If the requested URI does not end with an extension OR if the URI ends with .php*
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        #3 If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/seo-urls\/(excluded1|excluded2)(/.*)?$
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/url-mapper.php?uri=%{REQUEST_URI} [L,QSA]
        </IfModule>

    This is what the .htaccess should do: #1 checks that the requested file is not url-mapper.php (to avoid infinite redirect loops); this file will always be at the root of the domain. #2 the .htaccess must only catch URLs that don't end with an extension (www.foo.com -- catch | www.foo.com/catch-me -- catch | www.foo.com/dont-catch.me -- don't catch) and URLs ending with .php* files (.php, .php4, .php5, .php123...). #3 some directories (and their children) can be excluded from the .htaccess (in this case /seo-urls/excluded1 and /seo-urls/excluded2). Finally, the .htaccess feeds the mapper a hidden GET parameter named uri containing the requested URI. Even though I tested it and everything works, I want to know if what I'm doing is correct (and if it's the "best" way to do it). I've learned a lot with this "project", but I still consider myself a beginner at .htaccess and regular expressions, so I want to triple-check here before putting it into production...

  • Can a non-redundant RAID5 cause any serious problems (compared to RAID0)?

    - by leemes
    I used to have a three-disc RAID5 (mdadm) in my computer for personal media storage (music, videos, photos, programs, games, ...). It had three discs of 750 GB each, resulting in an array capacity of 1.5 TB. One day (one year ago), I needed one of those discs to install another operating system. I thought I didn't need the redundancy anymore, since I back up the most important stuff (personal photos, e.g.) to an external disc anyway. So I decided to remove one of the three discs without converting the RAID to RAID0 or to two separate discs, because I had no temporary storage (since one cannot simply convert a RAID5 to RAID0, AFAIK). So now, for about one year, I have had a non-redundant RAID5 running with 2 of 3 discs. Sometimes, one of the discs has a defective contact at the power cable or something similar, causing the drive to stop working temporarily (I don't know exactly what it is). Since it still works after rebooting the computer, and in most cases after calling some mdadm commands, it wasn't that problematic. Note that the data is not very critical, since I still have a backup of the most important stuff. But in the last few weeks, one of the drives has been failing very frequently (every few hours), so managing this is getting really annoying. My questions are: 1. Is there any disadvantage (apart from the annoying management) of a non-redundant RAID5 (with one drive less than usual) compared to a RAID0? If I understand it correctly, both have no redundancy and the same capacity, and on a temporary drive failure I can restart the array in both cases, assuming that the drive itself still works after the failure. 2. Can the drive contents be altered by a drive failure, making the array inconsistent? If so, can I tell mdadm to check the array for failures (without a file-system-level checking tool)? 3. Since the drive most probably only has a defective contact causing it to fail for just a second, can I tell mdadm to restart the array automatically, so that I would not even notice the failure if no application tried to access the file system during it?
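
    A hedged monitoring sketch (array and device names hypothetical). One honest caveat for question 2: a RAID5 running with n-1 discs has no redundant copy left, so there is no parity against which mdadm could verify the data:

        cat /proc/mdstat                    # quick overview: degraded/active state
        mdadm --detail /dev/md0             # per-device state, failed/removed members
        mdadm /dev/md0 --re-add /dev/sdc1   # re-attach the flaky member after a dropout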

  • Cloning a failing disk (Win 7)

    - by daveh551
    I have a Windows 7 machine with several partitions on a 1.5 TB drive. Windows has been complaining about disk errors and imminent failure, so I have purchased a new 2 TB drive. The failing disk has not completely failed; in fact, I was able to boot Windows from it (after a couple of tries) and examine the SMART logs - the only RED item was 1 sector being reallocated. But when I try to clone it to the new drive using Acronis True Image Home (2010), True Image can see the drive, the partitions, and the contents, but when it goes to actually do the clone, it says "Failed to move. Make sure the destination disk is not smaller than the source disk, and that there are not errors on the disk" (or something like that). What are some other options for simply cloning the failing drive? I'd like to clone the entire disk, but am willing to do it partition by partition if necessary. Was this a known failing of the 2010 edition of ATI, or is it really something hosed in my system? Would upgrading to the 2012 edition be likely to work any better? (I'd download the trial and try it out, but if I remember right, the cloning operation is disabled in the trial version, and I don't have enough free disk space to make an entire image.) What are some other cloning software packages if ATI won't work? Note that I'm only looking to clone the disk, not make an image as a backup - I use Ghost for that, and can fall back to it if I have to. It looks to me like Clonezilla would do the job. Any recommendations? Thanks, and if this duplicates other questions, I apologize.
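
    One commonly suggested free option, sketched with hypothetical device names (the log file lets you stop and resume): GNU ddrescue does a fast first pass and then retries the bad areas, which suits a drive with a few pending sectors:

        ddrescue -f -n /dev/sda /dev/sdb rescue.log      # fast pass, skip problem areas
        ddrescue -f -d -r3 /dev/sda /dev/sdb rescue.log  # direct I/O, retry bad areas 3 times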

  • Computer not finding hard drives on boot -sometimes-

    - by todd.pund
    Computer specs: Mobo: Gigabyte UltraDurable 3 - GA-970A-UD3. Processor: first-gen i7, 3.2 GHz. RAM: 8 GB Kingston DDR3 1066. Video card: EVGA NVidia GTX 460 1GB. Hard drives: 500 GB 7200 rpm x2 (can't remember the brand, sorry - I'm at work). Last week my developer preview of Windows 8 ran out, so I put my copy of Windows 7 back on the computer. The computer at that point started suffering from frequent freezing and crashing. When I rebooted the computer, sometimes it wouldn't find the system HD at all; looking at the POST screen, it seemed not to find either of the HDs. Then yesterday, when turning on the computer, I just got GRUB as a message (not a GRUB prompt, just GRUB). I haven't had a dual boot of Linux for at least a year. I loaded the Windows 7 recovery console from the disk and ran:

        bootrec /fixboot
        bootrec /fixmbr

    which did not help. At that point I just installed Ubuntu 13.04 over the Windows 7 install and still got the GRUB message at POST. I went into the BIOS and switched the hard drive priorities, and then it loaded into Ubuntu fine. For several days everything was just hunky-dory, until I installed the Ubuntu version of Steam, installed Portal, and tried to run it. At that point the computer froze, and after a hard reboot it couldn't find the hard disks again. Then, after restarting, the system loaded up fine and there have been no issues since (I have not tried to launch Portal again). My next thought is to remove the system hard drive and try to use the secondary as the master, to see if the primary HD is bad. I'm sorry if this has been confusing; I'll answer any questions I can. Any thoughts?
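
    Before swapping drives, a hedged sanity check from the Ubuntu side (smartmontools package; device names hypothetical) to see whether either disk reports reallocated or pending sectors:

        sudo smartctl -H /dev/sda                                    # overall health verdict
        sudo smartctl -A /dev/sda | grep -iE 'realloc|pending|uncorrect'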

  • Any ideas why Ettercap filters aren't seeing packet data?

    - by Bryan
    I'm using an Ettercap filter to detect a query response coming back from a particular service on a remote machine. When I see a response from the service, I search the packet data to check whether one offset holds a specific value, and if so I change the value at another offset. Trouble is, on a new virtual machine I built, my Ettercap filter no longer gets any data in the DATA.data variable available to it:

        if (ip.proto == TCP && tcp.src == 17867) {
            msg("Response seen!\n");
            if (DATA.data + 2 == "\0x01") {
                msg("Flag detected!\n");
                DATA.data + 5 = 0x09;
            }
        }

    The filter is getting applied to the traffic, because "Response seen!" messages get printed by Ettercap. However, "Flag detected!" messages do not. I think DATA.data is indeed empty, because if I change my second "if" statement to check for DATA.data == "" then the "Flag detected!" message gets printed. Any ideas why this may be happening? Also, if this is the wrong site to be asking questions like this, please let me know; I wasn't sure if it fit better here or somewhere like Super User or Server Fault. By the way, this is a cross-post from Stack Overflow... I should have posted on this forum instead, I think. :)
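
    A hedged aside rather than a diagnosis: the filter source must be recompiled and reloaded on the new machine, and C-style hex escapes are normally written "\x01" rather than "\0x01", which is worth double-checking. The filename and target specification below are hypothetical:

        etterfilter response.filter -o response.ef    # compile the filter source
        ettercap -T -q -F response.ef -M arp // //    # load it in an ARP MITM session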

  • Expanding iSCSI LUNs (NTFS)

    - by Fatih
    I have a 4 TB iSCSI LUN formatted as NTFS in Windows 2008. I've shared this formatted volume as a folder over SMB. When the capacity of this volume is no longer enough, I'll have to add more iSCSI LUNs, but the end users must still see only the folder that I shared before. So, if I expand the NTFS volume that is currently 4 TB with more iSCSI LUNs (for example, two more 4 TB LUNs), and one of the LUNs fails or goes missing, will all of my data in the folder be lost? I imagine that an expanded NTFS volume is like RAID 0 (striped); if it is, then all my data will be lost when one of the LUNs fails or goes missing. In brief, there are two questions here: 1. What happens if one of the LUNs goes missing in an expanded NTFS volume? 2. Is there another way to merge all the iSCSI LUNs into a single folder, so that the users don't see any extra folders even if I add extra iSCSI LUNs to the file server? (I'm not talking about DFS.) Regards.
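
    A hedged note plus sketch: extending a volume onto a second disk in Windows creates a spanned volume, which is concatenation (JBOD) rather than RAID 0 striping, but the consequence is similar - losing any member LUN takes the whole volume offline. Volume and disk numbers below are hypothetical:

        diskpart
        DISKPART> list volume
        DISKPART> select volume 2
        DISKPART> extend disk=2    # span the selected volume onto the new LUN (disk 2)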

  • Excel controls not visible for certain users

    - by Nossidge
    One of the users of an Excel program I've written is having a weird problem: none of the control objects (command button, ComboBox, etc.) are visible to him when he opens the file on his laptop. He is using Excel 2003, the same version I used to create the program, and enables macros using the pop-up when the file loads. I have Googled this and have found these people who seem to be having the exact same problem, with various versions of Excel; unfortunately, none of their questions were answered. I can't really explain it any better than the user himself: "If I enter design mode and pull a control from the control toolbar onto a sheet, all I see are the drag handles. When not in design mode I have to feel around with the mouse and can click the button, which executes the button click code correctly and opens another sheet, where again I have to feel around for the buttons to return me to the original sheet. The button I managed to click is now visible, but as soon as I click anywhere on the sheet it disappears. I have verified that the Visible property of the buttons is set and that Show All Objects on the Options View tab is selected. If I pull buttons from the Forms toolbar onto a sheet, they are visible. If I try to find objects using F5 when not in design mode, Excel reports no objects on the sheet." So, Super Users, can you help? UPDATE: Thanks for your replies, but much like the person in the ozgrid link, the problem has gone away. Not sure why it went, but I can confirm that the user rebooted again and also started up other Excel files that didn't contain controls in the interim. Perhaps that fixed it, or maybe it'll be back again. I'll keep updating with progress, and close if the problem doesn't reoccur for the next few days. Thanks again.

  • Which default Database Systems come installed in Microsoft VS2010 Express?

    - by Tonygts
    Appreciate all advice on the following questions: 1. Which database systems (MS SQL 2008, MS SQL Compact, or others) come installed with the VS2010 Express edition? 2. SQL Server 2008 R2 Express is free; can we install it and integrate it with VS2010 Express? 3. How do I uninstall the databases that already come installed? I have installed VS2010 Express on Windows 7; just the VS2010 components (VB, C#, C++ and Web Developer), without installing any other things like SQL Express. In the Control Panel's Programs & Features window, the installed list is shown below:

        Microsoft SQL Server 2008 Setup Support Files
        Microsoft SQL Server 2008 Browser
        Microsoft SQL Server VSS Writer
        Microsoft SQL Server Database Publishing Wizard 1.4
        Microsoft ASP.NET MVC2 - VWD Express 2010 Tools
        Microsoft SQL Server 2008 Management Objects
        Microsoft SQL Server Compact 3.5 SP2 ENU
        Microsoft SQL Server System CLR Types
        Microsoft Silverlight 3 SDK
        Microsoft ASP.NET MVC 2
        Microsoft Visual Studio 2010 ADO.NET Entity Framework Tools
        Visual Studio 2010 Tools for SQL Server Compact 3.5 SP2 ENU
        Web Deployment Tool
        Microsoft Visual Web Developer 2010 Express - ENU
        Microsoft Visual C++ 2010 Express - ENU
        Microsoft Visual C# 2010 Express - ENU
        Microsoft Visual Basic 2010 Express - ENU
        Microsoft SQL Server 2008

    As you can see, Microsoft SQL Server 2008 (last line) and, near the top, Microsoft SQL Server Compact 3.5 SP2 ENU and many of their related SQL components, such as Microsoft SQL Server 2008 Management Objects, are also installed. These were actually installed by installing VS2010 Express, but I have no idea how to use them or verify their valid existence from VS2010. Also, do I have to uninstall them before I install SQL Server 2008 R2, which is the latest version, I believe? And what tool is needed to manage and create data sources and tables?

  • Cloning to a smaller hard drive with DDRescue

    - by krebshack
    I am currently working with a 700 GB Seagate hard drive that's beginning to fail; I'll call it "SDB" from now on. I'd like to clone it while I'm still able to. However, the only hard drive I have available is a 500 GB WD drive; I'll call it "SDC" from now on. The partition scheme on SDB is as follows: 9.77 GB is allocated to a recovery partition and the remaining 688.87 GB to a Windows partition, both formatted as NTFS. There is no partition scheme on SDC. I know how to clone one hard drive to another using ddrescue, but I've only done it with hard drives of the same size. For your reference, I'll normally use a command like ddrescue -v -r 3 /dev/sdb /dev/sdc example.log. I'd like to know if it's possible to do this with ddrescue. I've read the GNU manual (http://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html) and haven't seen anything indicating that it is possible, so I'm just looking for confirmation that this impression is correct. If it's not possible, it would be helpful if any of y'all could make some workaround suggestions, but please don't feel obligated to do that: I don't want my one thread bogged down with too many questions.
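
    Since the 700 GB disk's partitions cannot fit on a 500 GB target as-is, here is a hedged partition-by-partition sketch using the ntfs-3g/ntfsprogs tools (device names and the 450G figure are hypothetical): shrink the NTFS filesystem first, recreate a matching smaller partition on SDC, then copy:

        ntfsresize --info /dev/sdb2          # how small can the Windows partition go?
        ntfsresize --size 450G /dev/sdb2     # shrink the filesystem to fit the new disk
        # recreate a smaller partition table on /dev/sdc (e.g. with fdisk), then:
        ntfsclone --rescue --overwrite /dev/sdc2 /dev/sdb2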

  • Windows 8 install app for multiple user accounts

    - by Robert Graves
    I purchased Adera episode 2 intending to play through it with my son. We each have our own user account on the same PC. When my son logged in, he was prompted to purchase the app which I had already purchased, installed, and played on the same PC. So I checked the Terms of Use. After selecting an app in the store, there is a Terms of Use link on the left side under the Install button (it is almost impossible to identify it as a link unless you put your mouse over it). The Terms of Use are standard across all apps in the store, not specific to particular apps. The terms indicate that an app may be installed on up to five devices, but say nothing about multiple user accounts on those devices. However, this Microsoft blog article indicates that it is allowed: "Say, for example, that your family has a shared PC. You have previously used your Microsoft account to purchase a game that all your kids like to play. You can install it for each of your kids by having each of them sign in to their Windows accounts on the shared PC, then launch the Store and sign in to the Store using your own Microsoft account. There, you'll see all your apps and you can re-install the app on your kid's Windows account. Installing apps on multiple user accounts on a shared PC still only counts as one of the five allowable PCs where you can install apps." So I have two questions: 1. Is it permissible under the Terms of Use to install the app under multiple accounts on the same device? 2. If so, how do I do so, given that my son has already signed into the store using his own Microsoft account?

  • Can a website see/know my MAC address even if I use a VPN?

    - by ilhan
    I have searched other results and read many of them, but I could not get enough information. My question is: can a website see my MAC address, or can it know that I'm the same person, under these conditions? I am using a VPN, and I use two IPs: the first one is my normal one, the second is the VPN's IP. I use two browsers to hide from browser fingerprinting, always in Incognito Mode: one for the normal IP, one for the VPN IP. I do not know whether the website uses cookies or not. But can they collect enough information to prove that these two identities belong to the same person? Is there any other way for them to see that I am the same person? I use different IPs and different browsers, both in incognito mode. I even changed one browser's language to English only, so even if they collect my info from the browsers, they will see two browsers using different languages. (Addition after edit:) So I have changed my IP and browser information, and the website can no longer use that information to prove that I am the same person using two accounts. Then let's come to the title: can they see my MAC address? I think it is the last way they could identify me, and that is my main question. I wrote the information above to mention that I changed IPs and took some precautions against browser fingerprinting (by the way, my VPN provider already offers a service that blocks it). I wrote all this because I read similar advice in some related questions, but my question is: can they see my MAC address (or anything else that could get me detected) despite all these precautions? And lastly, is there an extra way to stay anonymous that I can use? For example, can my system clock or anything else give away information? Thanks in advance.
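
    A hedged illustration of the core point: the MAC address lives only in the layer-2 Ethernet frame header, which is stripped and rewritten at every router hop, so only devices on your local network segment (e.g. your own router) ever see it; a website cannot receive it through normal browsing. On a Linux box you can watch those local frame headers directly:

        # -e prints link-level (MAC) headers; they change per hop, never reaching the website
        sudo tcpdump -e -n -c 5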

  • Remote reboot of Windows to Knoppix

    - by user64452
    I am attempting to develop an auditing application. This audit application will be deployed on Windows networks. The audit will need to discover hardware and software details of all machines attached to the network (including printers). I do not want to have to install this application on each workstation. The audit app needs to discover all the IP addresses of all the networked workstations. I have been prototyping this app for the last couple of months and have decided to try a new tack. Is this possible? a) You have a Windows network, minimum Windows XP SP3 and upwards. b) A maximum of 100 networked machines (if that matters). c) I need to remotely reboot each Windows machine in turn on the entire network and get it to start up using Unix, say Knoppix for example! d) However, the Knoppix live CD is only available from one of the networked machines. Questions... Morphology? Longevity? Incept dates? Cheers DD
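
    A hedged sketch of the reboot half only (hostname hypothetical; this needs admin rights on the target, and actually coming up in Knoppix would additionally require each machine to be set to boot from CD or PXE ahead of its local disk):

        :: reboot a remote Windows box immediately
        shutdown /r /t 0 /m \\WORKSTATION01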

  • iptables: How to combine DNAT and SNAT to use a secondary IP address?

    - by Que_273
    There are lots of questions on here about iptables DNAT/SNAT setups, but I haven't found one that solves my current problem. I have services bound to the IP address of eth0 (e.g. 192.168.0.20), and I also have an IP address on eth0:0 (192.168.0.40) which is shared with another server. Only one server is active, so this alias interface comes and goes depending on which server is active. In order to get traffic accepted by the service, a DNAT rule is used to change the destination IP:

        iptables -t nat -A PREROUTING -d 192.168.0.40 -p udp --dport 7100 -j DNAT --to-destination 192.168.0.20

    I also wish all outbound traffic from this service to appear to come from the shared IP, so that return responses will work in the event of an active-standby failover:

        iptables -t nat -A POSTROUTING -p udp --sport 7100 -j SNAT --to-source 192.168.0.40

    My problem is that the SNAT rule is not always run. Inbound traffic causes a connection tracking entry like this:

        [root]# conntrack -L -p udp
        udp 17 170 src=192.168.0.185 dst=192.168.0.40 sport=7100 dport=7100 src=192.168.0.20 dst=192.168.0.185 sport=7100 dport=7100 [ASSURED] mark=0 secmark=0 use=2

    which means the POSTROUTING chain is not run, and outbound traffic leaves with the real IP address as the source. I am thinking I can set up a NOTRACK rule in the raw table to prevent conntracking for this port number, but is there a better or more efficient way to make this work? Edit - alternative question: is there a way (in CentOS/Linux) to have an interface that can be bound to but not used, such that it can be attached to the network or detached when a shared IP address is swapped between servers?
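
    For reference, a sketch of the NOTRACK idea mentioned above, with a caveat that is easy to miss: the nat table itself relies on connection tracking, so exempting this traffic from conntrack also bypasses the DNAT/SNAT rules, and the address rewriting would then have to happen some other way:

        iptables -t raw -A PREROUTING -p udp --dport 7100 -j NOTRACK
        iptables -t raw -A OUTPUT -p udp --sport 7100 -j NOTRACK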

  • How can I monitor network traffic?

    - by WIndy Weather
    I have a home network with about 10 devices, including a Blu-ray player [Netflix] and both Windows and Linux machines. I need to collect network traffic statistics so that, if questions come up about how much traffic I'm using, I have the answer independent of my ISP. I've looked at DD-WRT, but I see that even buying a new router that will be supported is a problem, since I might get the wrong hardware revision. I have a DIR-655 and a DIR-501, neither of which is supported. I don't mind buying new hardware, but it looks like a crapshoot to get one that will work; DD-WRT looks like a bad solution unless someone knows of a place to get a router that is guaranteed to work. Does someone know of an Arduino or other SBC solution? I have plenty of NAT routers already, so I just need traffic statistics for external traffic. The network is Gigabit Ethernet inside and cable (soon to be DSL) outside. Oddly enough, the DIR-655 only gives me "packets", not bytes transferred. Thanks, ww
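
    One hedged SBC-style approach: put a small Linux box (or any always-on Linux machine) in the path between the modem and the LAN and let vnStat keep per-interface byte counters (interface name hypothetical; vnStat 1.x syntax):

        vnstat -u -i eth0    # create/update the database for the WAN-facing interface
        vnstat -i eth0 -m    # monthly totals in bytes, independent of the ISP's numbers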
