Search Results

Search found 18096 results on 724 pages for 'let'.


  • Windows memory logged on vs logged off

    - by Adi
    Let's say I power on my freshly installed Windows 7 x64 machine. After Windows boots up, a bunch of services start in the background and begin allocating memory. Then I enter my user/pass and Windows logs me in. Let's suppose I don't do anything else (I don't explicitly start any application) and I haven't installed any other apps, so it's a fresh install. My question is: how much memory is needed for all the UI & other stuff? Is it a good indicator to look in Task Manager, check all the processes started under my user name, and sum up the memory consumed by those processes to get the total amount of memory I am using just to stay logged on? Basically that is my question: how much memory is needed just to stay logged on? And if I log off, would all that memory be released back to the system so that the background services can benefit from it? Also, I assume the answer might differ between Windows flavors (?)
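
    For the "sum what Task Manager shows" part, here is a rough PowerShell sketch of that calculation (the user name is a placeholder, and working-set figures double-count shared pages, so treat the result as an estimate rather than an exact answer):

        # Sum the working sets of all processes owned by a given user (name assumed)
        (Get-WmiObject Win32_Process |
            Where-Object { ($_.GetOwner()).User -eq 'YourUserName' } |
            Measure-Object -Property WorkingSetSize -Sum).Sum / 1MB   # result in MB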

    Read the article

  • Windows 7 Not Recognizing Camera Nor iPhone as Camera

    - by taudep
    I've been struggling with this one for a few days. I recently upgraded an older computer to Windows 7 Home Premium. Neither my digital camera (a Canon SD1200IS) nor my iPhone is ever detected as a camera, and neither ever shows up as accessible in Explorer. With the Canon camera, no driver should be required; it's supposed to work with the default Windows 7 drivers. However, in the Control Panel's Device Manager I always see a yellow icon next to the "Canon Digital Camera" device. I've uninstalled the device and let Windows attempt to reinstall it, but it can never find a driver to install. With the iPhone it's very similar. One big difference, though, is that iTunes can see the iPhone and back it up, etc. But again, when I go to the Device Manager there's a yellow icon next to the iPhone. I've uninstalled iTunes, reinstalled, rebooted, deleted drivers, and let Windows try to reinstall the driver, but it can never find one. So there seems to be some correlation: my machine can't detect cameras properly, and it might even be a lower-level driver I'm struggling with. I know USB itself works, however, because I have an external drive hooked into the machine. I've searched the web and tried two hours' worth of fixes, without success. I feel like if I can get the Canon camera detected, then the iPhone will be on its way to being fixed too. BTW, I couldn't really find anything of use in the Event Viewer. Any and all suggestions welcome.

    Read the article

  • DNS failover in a two datacenter scenario

    - by wanson
    I'm trying to implement a low-cost solution for website high availability, and I'm looking for the downsides of the following scenario. I have two servers with the same configuration, content, and MySQL replication (dual-master). They are in different datacenters - let's call them serverA and serverB. Users use serverA - serverB is more of a backup. Now, I want to use DNS failover to switch users from serverA to serverB when serverA goes down. My idea is that I set up DNS servers (BIND/PowerDNS) on serverA and serverB - let's call them ns1.website.com and ns2.website.com (assuming I own website.com) - and then configure my domain to use them as its nameservers. Both DNS servers will return serverA's IP as my website's IP. If serverA goes down I can (either manually or automatically from serverB) change serverB's DNS configuration to return serverB's IP as the website's IP. Of course the TTL will be low, as it's supposed to be for DNS failover. I know that it may take some time to switch to serverB (DNS TTL, time to detect serverA's failure, serverB DNS reconfiguration, etc.), and that some small portion of users won't reach serverB anyway, and I'm OK with that. But what are the other downsides of such an approach? An alternative scenario is that ns1.website.com returns serverA's IP as the website's IP and ns2.website.com returns serverB's IP. But AFAIK clients do not always use the primary nameserver and sometimes use the secondary one, so some small portion of users would use serverB instead of serverA, which is not quite what I'd like. Can you confirm that DNS clients behave like that, and can you tell what percentage of clients would, statistically, use serverB instead of serverA? This scenario also has the downside that when serverA comes back up it will automatically be used as the website's primary server again, which is also a bad situation (cold cache, MySQL replication could have failed in the meantime, etc.), so I'm adding it only as a theoretical alternative. I was thinking about using professional DNS failover companies, but they charge per DNS request and the fees are very high (why?)
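
    To illustrate the "low TTL, swap the A record" part of the idea, a minimal zone-file sketch might look like this (the record name, TTL and addresses are assumptions, not taken from the question):

        ; in the website.com zone on ns1/ns2
        www    60    IN    A    192.0.2.10    ; serverA - on failover, change this to serverB's address (e.g. 198.51.100.10)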

    Read the article

  • BIND master/slave does not respond for queries for its slave

    - by Savas
    The systems are all CentOS 6.2. Let's say I have a masterdns with IP 10.2.1.2, authoritative for the 10.2.1.x subnet, and let's say it serves the domain example.com. I have another two subnets, 10.2.2.x and 10.1.2.x. Each one has its own DNS server, dns2 and dns1 respectively, and let's say these serve the domains dom2.example.com and dom1.example.com respectively. The masterdns server has slave zones for dns1 and dns2 and responds to requests OK. dns1 and dns2 have the masterdns zones as slaves too, and respond to requests OK. So masterdns has slave zones for all the subordinate domains of example.com. dns1 and dns2 each use masterdns as a forwarder (which in turn uses another DNS cache/proxy server) for resolution of public internet domain names. That works OK too. The problem is, and I cannot figure it out: why do queries at dns1 for hostnames in dom2.example.com not resolve? If I use nslookup - masterdns at the dns1 server (i.e., query masterdns directly), they resolve. If I use nslookup locally, meaning queries are sent to dns1, hosts in dom2.example.com do not resolve. Everything else works OK.
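
    For reference, the forwarding arrangement described above would normally look something like this in named.conf on dns1/dns2 (only a sketch of the setup as described, with the masterdns address taken from the question; whether "forward only" or "forward first" is in use is an assumption worth checking, since it changes whether the local server ever recurses on its own):

        options {
            forwarders { 10.2.1.2; };   // masterdns
            forward only;               // or "forward first"
        };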

    Read the article

  • How to debug silent hang on shutdown of Solaris 10?

    - by jblaine
    We're experiencing a mysterious hang on shutdown of a newly-imaged Oracle/Sun Solaris 10 SPARC box. It is repeatable (in the same spot, from what we can tell). We let it try to work itself out for 5-10 minutes on multiple occasions and it never progressed. I've never seen this happen before. The last thing displayed on the console is that syslogd was sent signal 15. Before we disabled snmpdx on the box, the last thing on the console was that snmpdx was sent signal 15 (after syslogd was sent signal 15). In Solaris days past, rare as such a hang was, I'd have had a better idea from experience where the problem might be, and could then have narrowed things down further with silly (but effective) debugging echo statements in /etc/*.d scripts. With SMF in the picture, I'm not really sure where to start. We forced a crash dump via sync at the {ok} prompt for later analysis, and then let the box come up because it's a production server and our scheduled outage window was closing. /var/adm/messages shows nothing of use. How would you debug this situation? mdb ::ps on the savecore shows the following processes were running at hang time (afsd is the OpenAFS client and that many are expected):

        > ::ps
        S    PID   PPID  PGID   SID  UID      FLAGS             ADDR NAME
        R      0      0     0     0    0 0x00000001 00000000018387c0 sched
        R    108      0     0     0    0 0x00020001 00000600110fe010 zpool-silmaril-p
        R      3      0     0     0    0 0x00020001 0000060010b29848 fsflush
        R      2      0     0     0    0 0x00020001 0000060010b2a468 pageout
        R      1      0     0     0    0 0x4a024000 0000060010b2b088 init
        R   1327      1  1327   329    0 0x4a024002 00000600176ab0c0 reboot
        R    747      1     7     7    0 0x42020001 0000060017f9d0e0 afsd
        R    749      1     7     7    0 0x42020001 00000600180104d0 afsd
        R    752      1     7     7    0 0x42020001 0000060017cb44b8 afsd
        R    754      1     7     7    0 0x42020001 0000060017fc8068 afsd
        R    756      1     7     7    0 0x42020001 0000060017fcb0e8 afsd
        R    760      1     7     7    0 0x42020001 00000600177f4048 afsd
        R    762      1     7     7    0 0x42020001 000006001800f8b0 afsd
        R    764      1     7     7    0 0x42020001 000006001800ec90 afsd
        R    378      1   378   378    0 0x42020000 0000060013aee480 inetd
        R      7      1     7     7    0 0x42020000 0000060010b28008 svc.startd
        R    329      7   329   329    0 0x4a024000 00000600110ff850 sh
        Z    317      7   317   317    0 0x4a014002 0000060013b3a490 sac

    Read the article

  • Looking for software to read PDFs/web pages aloud on OS X

    - by Clinton Blackmore
    I am looking for software that will read PDFs and web pages aloud for me under OS X 10.5, preferably something that is free. I am aware that you can make your Mac read to you by pressing a key combination, and it is pretty slick, but I really want something that:

    - will let me say "read this document", and then let me skip paragraphs and pause (instead of simply stopping and then restarting from the beginning);
    - will let me skip things that aren't relevant, like page headers, footers, and sidebars;
    - will let me rewind and listen to something again (either to think on it more deeply, or to understand what the text-to-speech engine was trying to say);
    - for a PDF with text in two columns, will let me read just one column at a time. (Right now if I make a selection, it gets both columns and reads from one and then from the other. If I could just select one column and read it, I'd be happier. [IIRC, Apple improved this in Snow Leopard so you can select a single column in a PDF.])

    I don't really expect one program to do both PDFs and web pages, but it would be nice.

    Read the article

  • Mod_rewrite to eliminate query strings

    - by Greg Frommer
    Hi everyone, I have been working on this for a while but I'm not finding exactly what I am looking for. I am writing a webapp to let my users create and publish pieces of HTML content in a domain and URL folder structure of their choosing. All of the content and requested URL structures are stored in a database. I have all of the code in my index.php (in the root folder) to access the database content, and based on the server name (and hopefully the folder structure) it will pick out the proper content from the DB and display it to the end-user's browser. So my situation looks like this: www.test.com/index.php?id=123234345 will display the proper page, but I want my users to be able to define a unique "page name" instead of using the numeric index (and I also want to hide the /index.php part). So what I would like the end-user to see is www.test.com/arbitrary-unique-keyword/keyword2/keyword3, which will invoke the index.php page in the root folder. Then I will use the PHP $_SERVER['PATH_INFO'] variable to match the requested folder structure up with the proper content in my database and display that. All the material I have found so far expects me to hard-code parts of the folder structure into the rules, but I think I want something simpler. So the question in a nutshell: how do I use mod_rewrite to let all "non-existent" folder paths be passed through to a main index.php residing in the root folder? (For all paths that DO exist, like calls to images, I want those to succeed and not be directed to index.php, obviously.) Thanks everyone, please let me know if I can clear anything up.
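
    A common sketch of this kind of front-controller rule (not necessarily the asker's final rules - the flags and the PATH_INFO-style target are assumptions) would be something along these lines in the site's .htaccess or vhost config:

        RewriteEngine On
        # let requests for real files and directories (images, css, ...) through untouched
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # everything else goes to index.php, with the original path available as PATH_INFO
        RewriteRule ^(.*)$ /index.php/$1 [L,QSA]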

    Read the article

  • How to make Project auto-estimate duration based on work?

    - by Bruno Brant
    This one has bothered me for a long while. I like to do estimates by thinking about how much work a certain task will take (I'm in the IT business), so let's say it takes 12 hours of work to build a program. Now let's say I tell Project that my start date is today. If I allocate one resource to this task, that should mean the task lasts 1.5 days, implying that it will end tomorrow. But right now that is not what it's doing: I say that the task will take 1 hour of work, and when I add a resource to it, it allocates the resource at a [13%] basis, which means the duration stays fixed - Project is trying to make the task last a full day. I have accomplished what I want on many occasions: I build a plan based on these rough effort estimates, then allocate tasks to resources; times conflict, so I level resources, and then Project magically tells me how long, in days, it will take. But every time I start estimating again, I end up having trouble making Project work like that.

    Read the article

  • How to disable SSH local port forwarding?

    - by SCO
    I have a server running Ubuntu and the OpenSSH daemon; let's call it S1. I use this server from client machines (let's call one of them C1) to create SSH reverse tunnels using remote port forwarding, e.g.: ssh -R 1234:localhost:23 login@S1. On S1 I use the default sshd_config file. From what I can see, anyone having the right credentials {login,pwd} on S1 can log into S1 and do both remote port forwarding and local port forwarding. Such credentials could be a certificate in the future, so in my understanding anyone grabbing the certificate could log into S1 from anywhere else (not necessarily C1) and hence create local port forwardings. To me, allowing local port forwarding is too dangerous, since it effectively creates a kind of public proxy. I'm looking for a way to disable only -L forwardings. I tried the following, but it disables both local and remote forwarding: AllowTcpForwarding No. I also tried the following, which will only allow -L to SX:1; it's better than nothing, but still not what I need, which is a "none" option: PermitOpen SX:1. So I'm wondering if there is a way to forbid all local port forwards by writing something like: PermitOpen none:none. Is the following a nice idea? PermitOpen localhost:1
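
    For what it's worth, a hedged sshd_config sketch of the two directives that usually come up for this (worth verifying against the OpenSSH version actually installed on S1):

        # refuse every -L destination; "none" is the documented keyword, no "none:none" needed,
        # and PermitOpen does not affect -R forwardings
        PermitOpen none
        # OpenSSH 6.2 and later also have a finer-grained setting that keeps -R while refusing -L:
        # AllowTcpForwarding remote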

    Read the article

  • How can I back up entire installations of a program, instead of just manually backing up individual files?

    - by NoCatharsis
    It seems pretty straightforward to back up individual files, such as pictures, saved games, or settings files - just copy them straight over to your 2nd HDD or to an online service like Dropbox. However, is there any way to back up entire installations of a program? For instance, my Firefox directory has a lot of personal customizations and add-ons. I don't want to go through each item and decide whether to back it up or let it go, so my next option is to copy out the entire directory for backup. But if I copy the entire directory back onto the HD after a format, it is not an integrated installation, and that seems like it could be troublesome: I would assume Windows could not detect the directory for uninstallation, or would not let you choose Firefox as your default browser, right? I'm no pro, but this sounds like a bad idea. So my question is whether there is a good way to preserve all necessary files while also preserving the full installation state of an application. This is not specific to Firefox - I would like to know how to do this for any application. Thank you.

    Read the article

  • How to place a virtual machine in DMZ?

    - by Giordano
    I have an Ubuntu 12.04 server running a few virtual machines with KVM. I would like to expose some of these virtual machines on the internet, to make it possible for customers to test the products we're developing and to make other products available for demo purposes. One of the server's NICs is configured with a public IP. However, before exposing anything on the web I would like to be sure that if one of the virtual machines gets compromised, the attacker cannot reach the rest of the hosts. What I would like to do is put these virtual machines into a DMZ. These are the steps I'm planning (a rough command sketch follows below):

    1. Create a tap interface on the virtualization host (let's say tap1).
    2. Create a bridge using tap1 and give it an IP in a subnet separate from the other hosts, let's say 10.0.0.1.
    3. Attach the DMZ virtual machines to the bridge and configure their IPs statically (10.0.0.2, 10.0.0.3, etc.).
    4. Using UFW, forbid any traffic from 10.0.0.0/24 to any of the internal hosts, allow traffic from the internal hosts towards 10.0.0.0/24, and expose the virtual machines on the web using port forwarding.

    Do you think this setup is safe? Can you suggest any improvement or a better/safer approach? Thanks in advance!
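
    A rough, untested sketch of steps 1-2 on the host (the interface and bridge names are assumptions, and a KVM setup would normally also need the VMs' definitions pointed at the new bridge):

        ip tuntap add dev tap1 mode tap        # step 1: the tap interface
        brctl addbr br-dmz                     # step 2: a bridge for the DMZ
        brctl addif br-dmz tap1
        ip addr add 10.0.0.1/24 dev br-dmz
        ip link set br-dmz up
        ip link set tap1 up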

    Read the article

  • Is there a screen sharing/remote desktop app for mac that lets you use a different host screen resolution?

    - by MarqueIV
    Ok, there are tons and tons of questions about remote desktop for mac and they're all being closed as duplicates. I however am specifically looking for one that will let me use a different resolution than the host, the way you can with Remote Desktop for Windows. For instance, when I connect to my 11" Macbook Air booted into Windows7 from my quad-screen desktop, also booted into Win7 using Microsoft's Remote Desktop Client, it blanks out the screen on the notebook, then virtualizes the video across all four of my desktop's monitors at their native resolutions (2560x1600, 2 x 1920x1200 and 1600x1200) and the notebook now acts as if it has four physical monitors connected to it. All of this from a notebook that only has a 1366 x 768 native resolution. Even when running OS X on the client running RDC, while it doesn't support multi-monitors like its Win counterpart, it still lets me run at the native resolution of the client screen of 2560x1600. Again, it just blanks out the host screen while doing so. However when using Mac's screen sharing, since that is just glorified VNC, it just mirrors what's already on the host's screen, meaning it will always be a single screen with the resolution of 1366x768. This of course makes sense since VNC is a mirroring solution, not a video-virtualizing one like RDC, but it means that on my quad-monitor setup, the remote window isn't even large enough to fill up a single monitor, let alone four (unless you have a client that can scale it up, but that's video scaling. It's still only 1366x768.) So what I'm looking for is if there is a solution on the Mac that lets me do the same thing as RDC in a Win environment. Don't care if I have to pay. I'd gladly pay several hundred dollars for this. I just need that specific feature. Note: People have suggested various VNC clients, but the VNC host still runs at 1366x768 so that will not work here. Ever. Also, people have suggested Synergy/Synergy+/Teleport and such which share the keyboard and mouse, not video. Completely different animal unrelated to what I'm looking for.

    Read the article

  • Get Safari to use different autocompletion on different URLs on same hostname

    - by Luke404
    I have a webserver publishing different services over the same SSL VirtualHost, the two most commonly used being phpMyAdmin and Cacti. These (and others) use 'cookie'-style authentication, asking for user and password in an HTML form (thus not using HTTP Authentication). Being on the same hostname, the Safari browser never managed stored passwords too well: if I log in to one app with user foo, and then go to the second app, it proposes user foo and its password in the login form. Changing just the username to bar used to be enough for Safari to autocomplete the correct password into its form field. Annoying, but I could live with it - usernames are short and easy to remember compared to the passwords we use. After the update to Safari 5 this no longer seems to be true: if I store in Safari (actually the user keychain on OS X) credentials for https://www.foobarbaz.com/app1 AND credentials for https://www.foobarbaz.com/app2, there seems to be no way for it to autocomplete each based on the URL. Even editing the keychain to add the path (it will store only the hostname by default) does not help. Is there anything I can do to make it work the way I want while still keeping everything on one hostname? Modifying things server-side is of course possible, but I can't switch the apps to HTTP Auth (and not every one supports it anyway) to use different 'realms'.

    Read the article

  • Setting up a low-cost image storage server with a 24x SSD array to get high IOPS?

    - by Nenad
    I want to build - let's name it - a low-cost Ra*san which would host the images for our social site (many millions of them). We have 5 sizes of every photo, at 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for the photo storage. To get HA I can add a second box and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I'm thinking of going with consumer MLC SSDs. I have read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think about my idea?

    - I'm not sure between RAID 6 and RAID 10 (more IOPS, but higher SSD cost).
    - Is ext4 OK for the filesystem?
    - Would you use 1 or 2 RAID controllers, with an expander backplane?

    If anyone has built something similar I would be happy to get real-world numbers. UPDATE: I have bought 12 (plus some spares) OCZ Talos 480 GB SAS SSD drives; they will be placed in a 12-bay DAS and attached to a PERC H800 controller (1 GB NV cache, manufactured by LSI, with FastPath). I plan to set up RAID 50 with ext4. If someone is wondering about benchmarks, let me know what you would like to see.

    Read the article

  • Anonymous file sharing without login window, from Windows 7 server to XP clients

    - by Niten
    I'm trying to provide machines on a small LAN with read-only, anonymous access to files shared from a Windows 7 workstation (let's call it WIN7SVR). In particular, I don't want clients to have to deal with a login window when they navigate to, e.g., \\WIN7SVR in Windows Explorer, but we do not have a domain and synchronizing accounts between the server and clients would be intractable. There are both Windows 7 and Windows XP clients that need access to these shares. I got this working for the Windows 7 clients by just enabling the Guest account on WIN7SVR and setting appropriate share permissions; other Windows 7 machines automatically try logging in as Guest, it seems, so their users don't see the login window. The problem is with the XP clients - they can access the server if the user enters "Guest" in the login window, but I don't want users to have to do that. From what I gather, in my limited understanding of Windows file sharing, this boils down to granting null sessions access to file shares on WIN7SVR, but I've had no success so far on that front. I've tried all of the following in the local group policy editor on the Windows 7 server:

    - set "Network access: Let Everyone permissions apply to anonymous users" to Enabled;
    - set "Network access: Restrict anonymous access to Named Pipes and Shares" to Disabled;
    - added the names of the corresponding shares to "Network access: Shares that can be accessed anonymously";
    - added "ANONYMOUS LOGON" to "Access this computer from the network" under User Rights Assignment.

    Any advice would be highly appreciated... I'm mostly a Unix guy, so I feel somewhat out of my league with Windows file sharing. I do understand that any sort of anonymous access to file shares isn't generally ideal from a security standpoint, but it's the most practical solution for us in this case, and access to our network is well enough controlled that share-level security isn't a concern.

    Read the article

  • Apache server rewrite rules: how to avoid "implicitly forcing redirect (rc=302)"?

    - by Olivier Pons
    Hi! I've got a very annoying problem: our webserver handles two domains (more actually, but let's say two for a simpler example): pretassur.fr and pretassuragentimmobilier.fr. Here's what I want to do: change (whatever1).pretassuragentimmobilier.fr(/whatever2) to (whatever1).pretassur.fr(/whatever2)?theme=agentimmobilier. So here's my rewrite rule:

        RewriteCond %{SERVER_NAME} (([a-z]+\.)*)pretassuragentimmobilier.(fr|com)
        RewriteRule ^(.+) http://%1pretassur.fr$1 [E=THEME:pretassur_agent,QSA]
        # if THEME not empty, set it :
        RewriteCond %{ENV:THEME} ^(.+)$
        RewriteRule (.*) $1?IDP=%{ENV:THEME} [QSA]

    The big (huge) problem is, have a look at the rewrite logs:

        [pretassurmandataireimmo.com] (5) => setting env variable 'THEME' to 'pretassur_mandataire'
        [pretassurmandataireimmo.com] => (2) implicitly forcing redirect (rc=302) with http://pretassur.fr/

    Aaaaaaaaarg! "Implicitly forcing redirect" = I don't want that! I want to rewrite internally to pretassur.fr, not make a real redirect! Right now, if you type http://pretassurmandataireimmo.com it is redirected to http://pretassur.fr/?IDP=pretassur_mandataire (try it). I don't want that! I want to display the page http://pretassur.fr/?IDP=pretassur_mandataire but without touching the original host! Any idea? Thanks a lot!
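
    One possible direction, as a sketch rather than a verified fix (it assumes both hostnames are served by the same virtual host, so no host change is actually needed): when the substitution does not start with a scheme and hostname, mod_rewrite keeps the rewrite internal instead of forcing the 302, so the theme parameter can be attached without the absolute URL:

        RewriteCond %{HTTP_HOST} (([a-z]+\.)*)pretassuragentimmobilier\.(fr|com) [NC]
        RewriteRule ^(.*)$ $1?IDP=pretassur_agent [QSA,L]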

    Read the article

  • Laptop Overheating with Windows 8

    - by Dany Khalife
    I recently installed Windows 8 on my HP G62 laptop and I have been noticing a very strange problem with it. Left alone for, let's say, 5 minutes, without even touching it, it starts to heat up and reaches about 60 degrees Celsius with absolutely no applications open (not just on the desktop, but overall). I dug in a little deeper and found out that Maintenance was running when the computer was idle, so I turned that off from the Task Scheduler, and while I was there I also turned off other services I did not need, hoping that would solve the problem. After a few days I noticed that the average temperature of my laptop had dropped from 55 to 48 degrees while working in Visual Studio. And just when I thought the problem had disappeared, it showed up again, only not after 5 minutes but more like 10... Here is what I have done so far: replacing the thermal paste on the CPU and the fan and cleaning the fan (this was about 6 months ago); using a laptop cooler; running a virus scan (I just formatted my laptop, so it would be really weird if I had already caught something, but who knows). Right now I believe it has something to do with my graphics driver (even though it IS up to date): looking closely at the screen, I can see the pixels slowly refresh (kind of like watching static on a TV), which I wasn't able to see on Windows 7. If you have any ideas, let me know. Thanks

    Read the article

  • Formatting a memory stick with two partitions?

    - by Marius
    I have a 16 GB memory stick which used to have a Linux partition. It therefore has two partitions: a 2 GB FAT32 partition and a 14 GB Linux boot partition. The Linux part stopped working, so I decided to reinstall it, but Windows can't see that partition. I tried formatting the whole disk, but I can only format one partition (the FAT32 one). There seems to be no way to combine the two partitions into one big one, and there seems to be no way for Windows to repartition the large part of the memory stick to put Linux on it. In the Windows partition manager, Windows sees the large unused partition and lets me delete it, but once I have deleted it, I'm not allowed to format it. I also cannot delete or resize the small partition. So, to summarize: I have a memory stick with two partitions; Windows only sees one of them and won't let me use the other one. I would like to combine the two partitions so I can install Linux on the memory stick again.
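
    One route people often take for stubborn removable media is diskpart from an elevated command prompt. A hedged sketch (the disk number below is only an example - check the output of "list disk" carefully first, because "clean" wipes the selected disk's partition table):

        diskpart
        list disk
        select disk 2
        clean
        create partition primary
        format fs=fat32 quick
        assign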

    Read the article

  • How browsers handle multiple IPs

    - by Sandman4
    Can someone direct me to information on the exact behavior of browsers when a browser gets multiple A records for a given hostname (say ip1 and ip2) and one of them is not accessible? I am interested in EXACT details, like (but not limited to):

    - Will the browser get 2 IPs from the OS, or only one?
    - Which IP will the browser try first (random, or always the first one)?
    - Now, let's say the browser started with the failed ip1: for how long will it keep trying ip1?
    - If the user hits "stop" while it waits for ip1 and then clicks refresh, which IP will the browser try?
    - What will happen when it times out - will it start trying ip2 or give an error? (And if an error, which IP will the browser try when the user clicks refresh?)
    - When the user clicks refresh, will any browser attempt a new DNS lookup?
    - Now let's assume the browser tried the working ip2 first: for the next page request, will it still use ip2, or might it switch IPs at random?
    - For how long do browsers keep IPs in their cache?
    - When a browser sends a new DNS request and gets the SAME IPs, will it CONTINUE to use the same known-to-be-working IP, or does the process start from scratch so it may try either of the two?

    Of course it may all be browser-dependent, and may also vary between versions and platforms; I'd be happy to have the maximum of details. The purpose of this: I'm trying to understand what exactly users will experience when round-robin-DNS-based balancing is used and one of the hosts fails. Please, I'm NOT asking about how bad DNS load balancing is, and please refrain from answering "don't do it", "it's a bad idea", "you need heartbeat/proxy/BGP/whatever" and so on.

    Read the article

  • DD-WRT Access Point as a Router

    - by Dzh
    Following a suggestion on this question asked on Network Engineering, I am asking it here. This is an extension of my previous question (I think it was deleted), where I claimed that DD-WRT was disabling its DHCP server once connected to the network. I was wrong, as it now seems that it is bridging itself with another wireless router connected in parallel. I have two Draytek 2820s and one Netgear WG602v3 with the latest DD-WRT. Let's call one wired-Draytek; it has wireless disabled. The other one, let's call it wireless-Draytek, is connected to wired-Draytek and has wireless with MAC filtering enabled. Once I connect the Netgear to the wired-Draytek, a client that connects to the Netgear is assigned an IP address from the wireless-Draytek. If its MAC address is not on the wireless-Draytek's list, the client is unable to obtain an IP address and has no connectivity at all, even with a manually assigned static IP configuration. To illustrate, this is how the network is set up:

        wired-Draytek ---------- wireless-Draytek
                     \_________ Netgear

    What I wish to have is that the Netgear issues IP addresses from its own IP pool and ignores the MAC filtering rules of the wireless-Draytek. It is kind of puzzling how they are bridging themselves (if they are) automatically. Thanks. UPDATE: It's not a home network; I gave you a somewhat simplified setup. If there is a better site on Stack Exchange to ask this, please let me know. The Drayteks are running stock firmware; it's only the Netgear that I've flashed, to get more stability. In addition to these routers, I also have three 3Com Baseline Switch 2824s, and another Draytek router with a ProSafe FS752TP PoE switch dedicated to VoIP phones. Wired-Draytek has IP 10.0.0.1 with DHCP disabled, as there is an AD DC issuing IP addresses. Wireless-Draytek has IP 1.1.1.1 with DHCP enabled. The Netgear has the default, 192.168.1.1. As per the suggestion, the specific question is: how do I isolate these two wireless routers?

    Read the article

  • Transferring files to a server IP and port

    - by Mason
    I need to transfer files from my local computer running Windows 7 to a server running Linux. I access the server with PuTTY over SSH at a specific IPv4 address and port number. I've attempted using the pscp command from my local computer but was denied access by the server ("Fatal: Network error: Connection refused"): c:\>pscp test.csv userid@IPv4_Address:Port# /path/destination_file_name. Either the server blocks all pscp attempts from unauthorized users (most likely my laptop included) or I used the command incorrectly. If you have experience using this command: where exactly will the file get transferred to? I'm assuming the destination path starts at my home directory on the server. Also, if you have any other methods of transferring the files, let me know. Update 1: I have also tried using WinSCP; however, I got permission denied for that as well. It looks like the server will not let me upload or save files. Solved: I had a complete lapse of memory and forgot about sudo (spent too much time with scripts the last 2 months), so I was able to change the permissions to allow external editing. Thanks for all the help, guys!
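
    For reference, a hedged sketch of the form pscp usually expects when the SSH daemon listens on a non-standard port (all values below are placeholders; the local file comes first, the remote target second, and the port goes in -P rather than after a colon):

        pscp -P 2222 test.csv userid@203.0.113.5:/home/userid/test.csv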

    Read the article

  • Windows network routing

    - by fabianvilers
    Hi! I'm working on my customer's premises and they let me connect my private laptop to a dedicated Wi-Fi for internet access, which is nice for external consultants. The only issue is that we can't connect to any remote server on port 25; I suppose this policy is set up to prevent infected computers from sending spam from their network. As you may have guessed, it's rather awkward that I can't send mail at all. Fortunately I have a 3G cell phone that I can connect to my laptop over Bluetooth, so when I want to send an e-mail I have to disconnect from Wi-Fi, connect my phone, send the e-mail, disconnect the phone and reconnect to Wi-Fi. Quite an overhead. My question is: how can I tell Windows 7 to use the Wi-Fi for every outgoing connection, but, if the connection is to port 25, use the cell phone network instead? With such a solution I could keep my phone connected all day without having to switch again and again. Thanks a lot for your answers. Fabian
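
    Plain Windows routing chooses a path by destination address, not by port, so a per-port rule isn't something the route table itself can express. As a hedged illustration of the closest destination-based workaround (both addresses below are placeholders for the mail server's IP and the 3G interface's gateway), one could add a host route for just the SMTP server:

        route add 203.0.113.25 mask 255.255.255.255 192.168.42.129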

    Read the article

  • Unix apt-get doesn't download from NFS location

    - by pravesh
    I switched to Unix 3 months ago and am trying to understand the install process, and in particular apt-get. I am able to successfully download and install packages when I configure my repository with an http location in the /etc/apt/sources.list file, e.g.: deb http://web.myspqce.com/u/eng/rose/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free. This downloads the package (to /var/cache/apt/archives) and installs it when I use apt-get install. When I change the source location to file: instead of http: (an NFS mount point), the package gets installed but is NOT downloaded into /var/cache/apt/archives: deb file:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free. Please let me know if there is any configuration or setting I have to make so that apt-get both downloads and installs the package when I use (NFS) file:/ instead of http:/ in sources.list. To achieve this I can use apt-get --download-only and then apt-get install, i.e. download and install in two separate calls, but I want to know why the package is not downloaded by apt-get install, but only installed, when file:/ is used in sources.list.
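
    For completeness, the two-step workaround mentioned above would look roughly like this (the package name is a placeholder; -d is the short form of --download-only):

        apt-get -d install somepackage    # fetch the .deb into /var/cache/apt/archives only
        apt-get install somepackage      # then install it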

    Read the article

  • How do I count the times each number appears in columns of numbers?

    - by Andy C.
    I am sure this must be easy, but I am inexperienced. The best way to think of my problem is as trying to sort and then count lottery numbers. To stay simple, let's use a Pick 3 game and look at 10 drawings, splitting each drawn number into a separate column:

        DATE   BALL#1  BALL#2  BALL#3
        3/1      1       3       5
        3/2      3       7       8
        3/3      2       2       1
        3/4      5       7       6
        3/5      2       3       1
        3/6      0       5       9
        3/7      3       7       0
        3/8      6       8       4
        3/9      2       4       3
        3/10     7       1       2

    I would like to be able to build formulas into cells that tell me how many times each number appeared overall, and how many times it appeared in each position, like this (using the data above):

        Number  Overall Count  Ball#1 Count  Ball#2 Count  Ball#3 Count
        0             2              1             0             1
        1             4              1             1             2

    (That is, the number zero appears twice overall, and came up once as the first number drawn, zero times as the middle ball, and once as the third ball. Likewise, the number 1 was drawn four times in our 10-day period: it was the first ball once, the second ball once and the third ball twice.) And so on. All help appreciated. I have access to Excel and Microsoft Works, or of course a Google Docs way to handle this would also work. Thanks for any help.
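
    A hedged sketch of the kind of Excel formulas that produce those counts (the cell ranges are assumptions based on the layout above: dates in A2:A11, the three ball columns in B2:D11, and the number being counted sitting in F2):

        Overall count:  =COUNTIF($B$2:$D$11, F2)
        Ball#1 count:   =COUNTIF($B$2:$B$11, F2)
        Ball#2 count:   =COUNTIF($C$2:$C$11, F2)
        Ball#3 count:   =COUNTIF($D$2:$D$11, F2)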

    Read the article

  • Linux security: The dangers of executing malicious code as a standard user

    - by AndreasT
    Slipping some (non-root) user a piece of malicious code that he or she then executes might be considered one of the most serious security breaches possible. (The only worse one I can see is actually gaining access to the root user.) What can an attacker effectively do once he/she gets a standard user (let's say a normal Ubuntu user) to execute code? Where would an attacker go from there? What would that piece of code do? Let's say that the user is not stupid enough to be lured into entering the root/sudo password into a form/program she doesn't know, and that only software from trusted sources is installed. The way I see it, there is not really much one could do, is there? Addition: I partially ask this because I am thinking of granting some people shell (non-root) access to my server. They should have normal access to programs, and I want them to be able to compile programs with gcc, so there will definitely be arbitrary code running in user space...

    Read the article
