Search Results

Search found 21454 results on 859 pages for 'via'.

  • How to encode content to send it via jQuery to a PHP file?

    - by phpheini
    I am trying to send a form to a PHP file via jQuery. The problem is that the content, which has to be sent to the PHP file, contains slashes (/), since there is BB code inside. So I tried the following:

        $.ajax({
            type: "POST",
            url: "create.php",
            data: "content=" + encodeURIComponent(content),
            cache: false,
            success: function(message) {
                $("#somediv").html(message);
            }
        });

    In the PHP file I use rawurldecode to decode the content and get my BB codes back, which I can then transform into HTML. The problem is that as soon as I put in the encodeURIComponent(), the output is: [object HTMLTextAreaElement]. What does that mean, and where is my mistake? Thanks for your help! phpheini
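
    That error message suggests that content here is the <textarea> DOM element itself rather than its text: coercing a DOM node to a string yields "[object HTMLTextAreaElement]". A minimal sketch of the likely fix, assuming the textarea's id is "content" (the actual id may differ):

        // Pass the field's value, not the element itself.
        var content = $("#content").val();

        $.ajax({
            type: "POST",
            url: "create.php",
            data: { content: content },  // jQuery URL-encodes object values itself
            cache: false,
            success: function(message) {
                $("#somediv").html(message);
            }
        });

    On the PHP side, $_POST["content"] then arrives already decoded, so the rawurldecode call becomes unnecessary.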

  • PXE boot Linux: PXE-E51: No DHCP or proxyDHCP offers were received

    - by athspk
    I am trying to have an Ubuntu box (192.168.10.9) acting as a PXE server, but I am having trouble getting DHCP to work. The PXE server is connected to a SOHO router (192.168.10.1) acting as a switch. I have disabled the DHCP server on the router.

        $ dhcpd --version
        isc-dhcpd-4.2.4

    The contents of /etc/dhcp/dhcpd.conf:

        ddns-update-style none;
        option domain-name-servers 192.168.10.1;
        default-lease-time 3600;
        max-lease-time 7200;
        authoritative;
        log-facility local7;
        allow booting;
        allow bootp;

        subnet 192.168.10.0 netmask 255.255.255.0 {
            range dynamic-bootp 192.168.10.101 192.168.10.200;
            option routers 192.168.10.1;
            option broadcast-address 192.168.10.255;
            next-server 192.168.10.9;
            filename "/tftpboot/pxelinux.0";
        }

    The contents of /etc/default/isc-dhcp-server:

        INTERFACES="eth0"

    When the client boots, it tries to get an IP address from the server but fails with the following error message: PXE-E51: No DHCP or proxyDHCP offers were received. On the server side, I was tailing /var/log/syslog while the client tried to boot:

        Dec 4 12:57:10 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:11 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:12 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:12 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:17 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:17 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:25 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec 4 12:57:25 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0

    Please advise. Thanks in advance.
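
    Since the server logs a DHCPOFFER for every DHCPDISCOVER yet the client reports receiving none, the offers are apparently being lost between the two machines (a switch dropping broadcasts, a second NIC, or a server-side firewall are typical culprits). A capture run on the server while the client boots shows whether the offers actually leave the wire; a diagnostic sketch, assuming eth0 is the interface facing the client:

        # Watch DHCP traffic in both directions (ports 67/68), with link-level
        # headers printed so the destination MAC of each offer is visible.
        sudo tcpdump -i eth0 -n -e udp port 67 or udp port 68

    If the offers appear here but never reach the client, the problem lies in the path (or in iptables on the server) rather than in dhcpd.conf.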

  • How can I use a Linksys WPSM54G print server as a bridge for another machine AND also share the printer?

    - by user26453
    I have a Linksys WPSM54G currently sharing a printer via the USB port with the rest of my network via the wireless. Is there any way to set it up so that the ethernet port is bridged over the wireless adapter portion? I.e., can I uplink another machine or switch into the network via the WPSM54G's ethernet port?

    Update: The network architecture is as follows:

    (1) Linksys WRT54G router that serves as a router, DHCP server, and wireless access point for the network, in a fairly standard configuration
    (3) Laptops that are used throughout the house via wifi
    (1) Linksys WPSM54G print server that connects via wireless to the router, in a separate room, with a printer attached to the print server's USB port, along with
    (1) Un-networked desktop in the same room

    Since the printer is plugged into the USB port of the WPSM54G, I am wondering if I can connect the desktop to the ethernet port of the WPSM54G and have it bridged over the wifi to the router. The twist here is that the ethernet port is initially used to connect the wireless print server to the router (for configuration; you can't configure it wirelessly if you are initially on an encrypted network). Now, instead of using that ethernet port as a way to connect the print server to the network (via the router), I want to use it as a way to connect another computer to the network, in effect bridging into the router via the print server, while still sharing the printer (attached via USB) through the print server. If this is not clear, please comment. To be clear, the computer I want to connect/bridge into the network does not have a wireless card, is far from the router, and I do not want to lay ethernet cable to connect it. While I could certainly buy a legitimate wireless bridge to accomplish this, I figured that since the print server already has an ethernet port, I would see if I can't use that.

  • How do I make some files on my machine accessible via HTTP using Apache?

    - by Lazer
    I did a wget on the source and built the Apache binaries correctly. Now what do I need to do to get some documents accessible over HTTP (start some services?)? Also, do I need to group all the files I want to make accessible in some directory, and make the directory and its contents accessible, or can I just make the individual documents available? I will be providing these links to my colleagues and do not want them to be down, so I need to make sure that the Apache services come up automatically after a reboot. Does Apache have some inbuilt support for this?
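
    A minimal sketch of the pieces involved, assuming a source build installed under /usr/local/apache2 (adjust paths to your layout). Files are served from the DocumentRoot, or from any directory mapped in with Alias, so grouping the documents under one directory is the simplest approach:

        # /usr/local/apache2/conf/httpd.conf (excerpt)
        # Serve /usr/local/share/docs at http://server/docs/
        Alias /docs/ "/usr/local/share/docs/"
        <Directory "/usr/local/share/docs">
            Require all granted   # Apache 2.4; on 2.2 use: Order allow,deny / Allow from all
        </Directory>

    The server is then started and stopped with apachectl:

        /usr/local/apache2/bin/apachectl start

    Apache itself has no built-in "start at boot" mechanism for a source build; the usual approach is an init script (or, on newer systems, a systemd unit) that calls apachectl. Distribution packages ship this for you, which is one argument for installing Apache from the package manager instead of from source.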

  • Trying to configure domain-based access via htaccess file.

    - by kenja
    I've created an account with no-ip.com that registers my IP under a subdomain of their service. When I do an nslookup, I see that the service is working and that my domain resolves. Now I want to grant that subdomain access to the admin site on our server, which is protected by htaccess IP restrictions. When I add the new domain to my rules, it does not work. Am I doing something wrong? I'm basically trying to make it so that my laptop can log in no matter where I am, while still preventing all other IPs from accessing the site.

        ## password begin ##
        AuthName "Restricted Access"
        AuthUserFile /usr/www/users/site/.passwd
        AuthType Basic
        Require valid-user

        Order deny,allow
        Deny from all
        Allow from 69.1.122.161 mysubdomain.no-ip.org
        Satisfy All
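
    One likely cause: when Allow from is given a hostname, Apache does not resolve the name forward; it does a double reverse lookup on the connecting client's IP (PTR record, then a confirming forward lookup) and compares the result against the name. Dynamic DNS hosts such as no-ip.org subdomains almost never have a matching PTR record, so the match fails even though nslookup on the name works. A common workaround is a cron job that resolves the dynamic name and regenerates the Allow line; a sketch, with hypothetical paths and a hypothetical @DYNIP@ template marker:

        #!/bin/sh
        # Resolve the dynamic hostname to its current address and rebuild
        # the .htaccess from a template containing the @DYNIP@ marker.
        IP=$(dig +short mysubdomain.no-ip.org | tail -n 1)
        [ -n "$IP" ] && sed "s/@DYNIP@/$IP/" /usr/www/users/site/htaccess.template \
            > /usr/www/users/site/.htaccess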

  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff, and I'm afraid I will never get it right), CBQ is not recommended, and basically HTB is the way to go for most people.

    Our internal network has three "segments" and I'd like to share bandwidth more or less equally between those (at least in the beginning). Further, I must prioritize traffic according to at least three kinds of traffic (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either. The question is, what makes more sense and also guarantees better realtime throughput:

    1. Creating one class per segment, each having the same rate (priority doesn't matter for classes that are not leaves, according to the HTB developer), and each of these classes has three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates).

    2. Having one class per priority level on top, each having a different rate (again, priority won't matter), and each having 3 sub-classes, one per segment, whereas all 3 in the realtime class have highest prio, those in the bulk class have lowest prio, and so on.

    I'll try to make this clearer with the following ASCII art image:

        Case 1:

        root --+--> Segment A
               |    +--> High Prio
               |    +--> Normal Prio
               |    +--> Low Prio
               |
               +--> Segment B
               |    +--> High Prio
               |    +--> Normal Prio
               |    +--> Low Prio
               |
               +--> Segment C
                    +--> High Prio
                    +--> Normal Prio
                    +--> Low Prio

        Case 2:

        root --+--> High Prio
               |    +--> Segment A
               |    +--> Segment B
               |    +--> Segment C
               |
               +--> Normal Prio
               |    +--> Segment A
               |    +--> Segment B
               |    +--> Segment C
               |
               +--> Low Prio
                    +--> Segment A
                    +--> Segment B
                    +--> Segment C

    Case 1 seems like the way most people would do it, but unless I am misreading the HTB implementation details, Case 2 may offer better prioritizing. The HTB manual says that if a class has hit its rate, it may borrow from its parent, and when borrowing, classes with higher priority always get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree-level are always preferred over those on a higher tree level, regardless of priority.

    Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone), and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen?

    Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send; since A-High Prio has the higher priority, it will always be scheduled first, till it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, this class is now served first, till it also hits its rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again and now Segment A-High Prio is going to win again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of bandwidth spare, as Segment C is not using any of its guaranteed traffic), but again, it has to wait for Segment B-Low Prio to also reach the root level. Once that happens, priority is taken into account again, and this time Segment A-High Prio will get all the bandwidth left over from Segment C.

    Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send; again, High Prio-Segment A is going to win, as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B again to also hit its rate. Once both have hit their rate and both have to borrow, High Prio-Segment A will win again, until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which has again plenty of bandwidth left (all bandwidth of Normal Prio is unused at the moment), but it has to wait again until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all the bandwidth root has left over.

    Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left that it could borrow. However, in Case 2 it seems like the realtime traffic has to wait less than in Case 1, since it only has to wait till the bulk traffic's rate is hit, which is most likely less than the rate of a whole segment (and in Case 1 that is the rate it has to wait for). Or am I totally wrong here?

    I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not somehow limited. Starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but when doing so, a single priority class cannot saturate the link on its own any longer, even if all other priority classes are empty; the TBF will prevent that from happening. And this is also sub-optimal, since why shouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment?

    Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer, it would be such an easy task if I could simply write my own scheduler (which I'm not allowed to do).
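
    For reference, a minimal sketch of what the Case 2 tree looks like in tc syntax, with made-up numbers (a 10 Mbit line, device eth0; real rates and a leaf qdisc such as sfq would still need tuning):

        DEV=eth0
        # root qdisc; unclassified traffic falls through to leaf 1:33 (bulk)
        tc qdisc add dev $DEV root handle 1: htb default 33
        tc class add dev $DEV parent 1: classid 1:1 htb rate 10mbit ceil 10mbit

        # one inner class per priority level (prio is irrelevant on non-leaves)
        tc class add dev $DEV parent 1:1 classid 1:10 htb rate 4mbit ceil 10mbit
        tc class add dev $DEV parent 1:1 classid 1:20 htb rate 4mbit ceil 10mbit
        tc class add dev $DEV parent 1:1 classid 1:30 htb rate 2mbit ceil 10mbit

        # per-segment leaves; prio 0 is offered spare bandwidth first
        for i in 1 2 3; do
            tc class add dev $DEV parent 1:10 classid 1:1$i htb rate 1300kbit ceil 10mbit prio 0
            tc class add dev $DEV parent 1:20 classid 1:2$i htb rate 1300kbit ceil 10mbit prio 1
            tc class add dev $DEV parent 1:30 classid 1:3$i htb rate 600kbit ceil 10mbit prio 2
        done

    Traffic would then be steered into the leaves with tc filters (e.g. u32 matches on source subnet for the segment and on DSCP or ports for the priority level).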

  • What do I need in order to extract and combine text files from multiple ZIP files, via command line?

    - by Iszi
    I've got an interesting scripting challenge in front of me. I'm fairly certain there's a way to do it, but I feel like I'm probably lacking some particular tools and/or functional knowledge. There are some fifty-plus ZIP files that each contain, among other things, text files that need to be merged with one another. The structure is something like this:

        C:\Reports\FirstJob-1.zip
        |-MyName
          |-FirstJob
            |-1
              |-[Some other folders]
              |-TXTReports
                |-English
                  |-[Some other files]
                  |-Report.txt

        C:\Reports\FirstJob-2.zip
        |-MyName
          |-FirstJob
            |-1
              |-[Some other folders]
              |-TXTReports
                |-English
                  |-[Some other files]
                  |-Report.txt

        C:\Reports\SecondJob-1.zip
        |-MyName
          |-SecondJob
            |-1
              |-[Some other folders]
              |-TXTReports
                |-English
                  |-[Some other files]
                  |-Report.txt

    If I had all the Report.txt files in one regular folder, and uniquely named, I could probably just write a FOR statement that targets *.txt and runs something like type filename.txt >> Consolidated.txt on each. However, these all have the same file name and are embedded deep within separate ZIP files. The potentially useful tools I currently have at my disposal are Windows XP Professional SP3, PowerShell, and WinZip. I'd rather not download or install anything else, but I do understand that third-party tools (or additional tools from Microsoft or WinZip) may be necessary. Whatever tools I use should run natively in Windows; I really don't want to have to mess with Cygwin or other emulators on this system. At the very least, I need a tool that will allow me to analyze and manipulate ZIP files from the command line. Also, are there any other particular complications to this that I've not yet thought of?
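
    PowerShell on XP can drive the Windows shell's built-in ZIP handler through COM, which avoids any extra installs. A sketch under the layout above (paths and the temp folder are assumptions; the Job/number segments are simply walked from each archive's own folder tree):

        # Append every Report.txt found inside the ZIPs to one consolidated file.
        $shell = New-Object -ComObject Shell.Application
        $out = "C:\Reports\Consolidated.txt"
        $tmp = "C:\Reports\tmp"
        New-Item -ItemType Directory -Path $tmp -Force | Out-Null

        # Recursively search a shell Folder (here: a ZIP) for items with a given name.
        function Find-InZip($folder, $name) {
            foreach ($item in $folder.Items()) {
                if ($item.IsFolder) { Find-InZip $item.GetFolder $name }
                elseif ($item.Name -eq $name) { $item }
            }
        }

        Get-ChildItem "C:\Reports\*.zip" | ForEach-Object {
            $zip = $shell.NameSpace($_.FullName)
            foreach ($report in @(Find-InZip $zip "Report.txt")) {
                # 0x14 = no progress UI, overwrite existing files
                $shell.NameSpace($tmp).CopyHere($report, 0x14)
                $extracted = Join-Path $tmp "Report.txt"
                # CopyHere is asynchronous; wait for the file to materialize
                while (-not (Test-Path $extracted)) { Start-Sleep -Milliseconds 200 }
                Get-Content $extracted | Add-Content $out
                Remove-Item $extracted
            }
        }

    Extracting each file to the same temp name and appending immediately sidesteps the "all files are named Report.txt" collision problem.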

  • How can ICS in Windows 7 be managed via command line, scripts, config files, etc.?

    - by Skya
    I've been using ICS successfully for years, but now I'm looking for a way to control it through something other than the GUI in Control Panel\Network and Internet\Network Connections - Connection Properties. I want to do everything that the encircled checkbox does, without touching the GUI. But what does the checkbox do? Microsoft doesn't provide specific information, and the most helpful forum post I've found is from 2003. Assuming that some of the advice is still valid, I've come to the conclusion that ICS is broken down into six parts that have to be set up individually:

    1. the SharedAccess service
    2. interface settings
    3. firewall rules
    4. a static route
    5. dnsproxy
    6. autodhcp

    I've already learned that the service can be started/stopped with the command net start/stop SharedAccess, and that netsh is a good tool for changing the interface settings and the firewall rules. But I don't understand how ICS handles routing and DNS. All hosts in my network are configured statically, so I don't care much about autodhcp. Thanks for your help!

    EDIT: I've spent the whole day scanning through ProcMon, and I've seen reads/writes to both the registry and the filesystem, but it is difficult to determine which parts of it actually make ICS work. I'm trying to look for an API instead. I'm looking into this right now, but I still want to know more about the inner workings.
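
    The checkbox is backed by a documented COM API: the HNetCfg.HNetShare object (INetSharingManager), which scripts can call instead of poking at registry keys. A sketch in PowerShell (run elevated), assuming the two connections are named "External" and "Internal" as they appear in Network Connections:

        $mgr = New-Object -ComObject HNetCfg.HNetShare

        foreach ($conn in $mgr.EnumEveryConnection) {
            # Parameterized COM properties need .Invoke() from PowerShell
            $props = $mgr.NetConnectionProps.Invoke($conn)
            $cfg = $mgr.INetSharingConfigurationForINetConnection.Invoke($conn)
            switch ($props.Name) {
                'External' { $cfg.EnableSharing(0) }   # 0 = ICSSHARINGTYPE_PUBLIC
                'Internal' { $cfg.EnableSharing(1) }   # 1 = ICSSHARINGTYPE_PRIVATE
            }
        }

    Enabling sharing through this interface is what flips the checkbox, and it takes care of the dependent pieces (service start, the 192.168.137.1 address on the private side, autodhcp and dnsproxy) the same way the GUI does.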

  • Difference between adding MIME types in IIS via Websites vs Local Computer?

    - by Alex Key
    What is the difference between adding MIME types in these 2 different situations in IIS 6 Manager?

    1. Right-click on the computer name (local computer) -> Properties -> MIME Types
    2. Right-click on the "Web Sites" folder -> Properties -> HTTP Headers -> MIME Types

    I'm guessing that perhaps option 1 adds MIME types for FTP also? However, if that were true, I'd expect to be able to add MIME types specifically in the properties of FTP (and not just websites). Thanks for your help.

  • How to whitelist external access to an internal webserver via Cisco ACLs?

    - by Josh
    This is our company's internet gateway router. This is what I want to accomplish on our Cisco 2691 router: All employees need to have unrestricted access to the internet (I've blocked Facebook with an ACL, but other than that, full access). There is an internal webserver that should be accessible from any internal IP address, but only from a select few external IP addresses. Basically, I want to whitelist access from outside the network. I don't have a hardware firewall appliance. Until now, the webserver has not needed to be accessible externally... or in any case, the occasional VPN has sufficed when needed. As such, the following config has been sufficient:

        access-list 106 deny   ip 66.220.144.0 0.0.7.255 any
        access-list 106 deny   ip ... (and so on for the Facebook blocking)
        access-list 106 permit ip any any
        !
        interface FastEthernet0/0
         ip address x.x.x.x 255.255.255.248
         ip access-group 106 in
         ip nat outside

    (fa0/0 is the interface with the public IP.) However, when I add...

        ip nat inside source static tcp 192.168.0.52 80 x.x.x.x 80 extendable

    ...in order to forward web traffic to the webserver, that just opens it up entirely. That much makes sense to me. This is where I get stumped, though. If I add a line to the ACL to explicitly permit (whitelist) an IP range... something like this:

        access-list 106 permit tcp x.x.x.x 0.0.255.255 192.168.0.52 0.0.0.0 eq 80

    ...how do I then block other external access to the webserver while still maintaining unrestricted internet access for internal employees? I tried removing the access-list 106 permit ip any any. That ended up being a very short-lived config :) Would something like access-list 106 permit ip 192.168.0.0 0.0.0.255 any on an "outside-inbound" work?
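
    A sketch of one way to order ACL 106, relying on the fact that on IOS an inbound ACL on the outside interface is evaluated before the static NAT un-translation, so the rules should match the public address (x.x.x.x) rather than 192.168.0.52; the whitelist range 203.0.113.0/24 is a placeholder:

        access-list 106 deny   ip 66.220.144.0 0.0.7.255 any
        ! ... the rest of the Facebook blocking ...
        ! whitelist: the chosen external range may reach the webserver's public address
        access-list 106 permit tcp 203.0.113.0 0.0.0.255 host x.x.x.x eq 80
        ! everyone else may not
        access-list 106 deny   tcp any host x.x.x.x eq 80
        ! all remaining traffic (employees' outbound sessions and their replies) stays open
        access-list 106 permit ip any any

    The deny line only carves the webserver's port 80 out of the final permit ip any any, so internal users' internet access is unaffected.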

  • If Nvidia Shield can stream a game via WiFi (~150-300Mbps), where is the 1-10Gbps wired streaming?

    - by Enigma
    Facts: It is surprising and uncharacteristic that a wireless game-streaming solution is the first* to hit the market, when a 1000 Mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. 150-300 Mbps WiFi is in no way superior to a 1000 Mbps+ LAN connection, aside from wireless mobility. Throughout time (since the internet was created), wired services have always** come first, yet in this particular case the opposite seems to be true. We had wired internet first, wired audio streaming, and wired video streaming, all before their wireless counterparts. Why? Largely because the wireless bandwidth was, and is, inferior. Even today, despite being significantly better and capable of a lot more, it is still inferior to a wired connection.

    Situation:

        Chief among these is that NVIDIA's Shield handheld game console will be getting a microconsole-like mode, dubbed "Shield Console Mode", that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn't have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality.
        - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode

    ^ This is not acceptable to me, for a number of reasons, not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong of a solution that is. They need a second product for this solution, without the screen or controller, for it to make sense... at which point you're just buying a little computer that does what most other, larger computers do better. While this secondary product would provide a wired connection, it still shouldn't be necessary to purchase a Shield to have this benefit. Not only this, but Intel's WiDi claims game-streaming support as well - wirelessly.

    Where is the wired streaming? All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback, so by any logical comparison one (Nvidia especially) should have no difficulty in creating an application for PCs (win32/64 environment) that does the exact same thing their Android app does. I have 2 video cards capable of streaming (encoding) H.264, so by rights they must be capable of decoding it, I would think. I should be able to stream to my second desktop or my laptop, both of which by hardware comparison are superior to the Shield. I haven't found anything stating plans to allow non-Shield owners to do this. Can a third party create this software, or does it hinge on some limitation that only Nvidia can overcome?

    Reiteration of questions:

    1. Is there a technical reason (not marketing) why Nvidia opted to bottleneck the streaming service with a wireless connection, limiting the resolution to 720p and introducing intermittent video choppiness, when on a wired connection one could achieve, presumably, 1080p with significantly less or zero choppiness?

    2. Is there anything limiting developers from creating a PC/desktop application emulating the same H.264 decoding functionality, circumventing the need to get an Nvidia Shield altogether? (It is not a matter of being too cheap to support Nvidia - I have many Nvidia cards that aren't being used. One should not have to purchase specialty hardware when equivalent hardware already exists.)

    The same questions go for Intel WiDi also. I am just utterly perplexed that there are wireless live-streaming solutions and yet no wired one. How on earth can wireless be the go-to transmission medium? Is there another solution that takes advantage of H.264 video compression allowing live streaming over a wired connection?

    (*) - Perhaps this isn't the first, but afaik it is the first complete package.
    (**) - I can't back that up with hard evidence/links, but someone probably could.

    Edit: Maybe this will be the solution I am looking for, but I still find it hard to believe that they would be the first, and after wireless solutions already exist:

        In-home Streaming
        You can play all your Windows and Mac games on your SteamOS machine, too. Just turn on your existing computer and run Steam as you always have - then your SteamOS machine can stream those games over your home network straight to your TV!
        - http://store.steampowered.com/livingroom/SteamOS/

  • Remapping Home/End from PC to Mac via Synergy is not client-specific

    - by DtBeloBrown
    This question asks about the End key, but the answers give no examples: http://superuser.com/questions/60052/what-key-works-like-end-using-a-mac-with-synergy. If they had, I am guessing that they would likely have run into this problem. Adding lines like the bottom two of this:

        section: options
            keystroke(End) = keystroke(Control+Right,myiMac)
            keystroke(Home) = keystroke(Control+Left,myiMac)

    to my synergy.sgc in MyDocuments on the WinXP machine works, but it causes the keys to stop functioning on the WinXP machine. Unacceptable. I next tried a compromise:

        keystroke(End) = keystroke(Control+Right,myiMac); keystroke(End,myPc)
        keystroke(Home) = keystroke(Control+Left,myiMac); keystroke(Home,myPc)

    expecting that to broadcast the keystrokes to both machines regardless of which one was the active screen. That and many other variations did not work. What am I doing wrong? Has someone actually done this?

  • How to connect to a Linux server via SSH using Lazarus?

    - by Altar
    Hi, I need to work with (run/edit/delete files on) 12 Linux computer nodes in a cluster. Usually I use SSH to connect to those nodes. How can I do this from Lazarus? I have tried the 2nd and 3rd TProcess examples posted on Lazarus' wiki page (http://wiki.lazarus.freepascal.org/Executing%5FExternal%5FPrograms), but they are not working with SSH (though they do work with 'ls'). Any ideas how to make SSH work with Lazarus? (I am on Linux.)
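
    A common stumbling block here is that ssh prompts for a password on a TTY, which a TProcess pipe cannot answer; with key-based authentication in place, running a remote command non-interactively works fine. A sketch, with a hypothetical user/host and remote command:

        uses
          Classes, SysUtils, Process;

        var
          P: TProcess;
          OutLines: TStringList;
        begin
          P := TProcess.Create(nil);
          OutLines := TStringList.Create;
          try
            P.Executable := 'ssh';
            // BatchMode makes ssh fail instead of hanging on a password prompt,
            // so this relies on key-based authentication being set up.
            P.Parameters.Add('-o'); P.Parameters.Add('BatchMode=yes');
            P.Parameters.Add('user@node01');
            P.Parameters.Add('ls /var/data');
            // Fine for small outputs; read the pipe incrementally for large ones.
            P.Options := [poUsePipes, poWaitOnExit];
            P.Execute;
            OutLines.LoadFromStream(P.Output);
            WriteLn(OutLines.Text);
          finally
            OutLines.Free;
            P.Free;
          end;
        end.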

  • How to upgrade XBMC Live from 9.04.1 to 9.11 via command line?

    - by sunpech
    I've been unable to do a fresh install of XBMC Live 9.11 to my hard drive; every time, it fails at the "Install System" step. But I am able to get XBMC Live 9.04.1 to install successfully. How do I upgrade XBMC Live 9.04.1 to 9.11? I understand that Ctrl+Shift+F2 brings up the command line, but what is the next set of commands to run?
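
    XBMC Live is a regular Ubuntu system underneath, so the upgrade is driven by apt. A rough sketch of the shape of it, assuming the project's 9.11 ("Camelot") packages are published for your installed base (the exact repository line is something to confirm for your release):

        sudo apt-get update
        sudo apt-get upgrade
        # then pull the newer XBMC packages
        sudo apt-get install xbmc xbmc-live

    Note that 9.04.1 and 9.11 were built on different Ubuntu bases, so a full dist-upgrade of the underlying OS (pointing /etc/apt/sources.list at the newer release, then sudo apt-get dist-upgrade) may be needed first; given how invasive that is, fixing whatever makes the 9.11 installer fail at "Install System" (e.g. re-downloading the image or trying different media) may be less work.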

  • Do you need to advertise an AFP service via Avahi for an Ubuntu Server to show up in OSX Finder?

    - by James
    I am only advertising an NFS share plus the "model", and I don't want to install extra services on the server unless I have to (i.e. netatalk), as it is used solely for NFS exports. Currently there is no entry in Finder under "Shared" with the Avahi config below.

        serveradmin@FILESERVER:/etc/avahi/services$ cat nfs.service
        <?xml version="1.0" standalone='no'?><!--*-nxml-*-->
        <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
        <service-group>
          <name replace-wildcards="yes">%h</name>
          <service>
            <type>_nfs._tcp</type>
            <port>2049</port>
            <txt-record>path=/Volumes/StoragePool</txt-record>
          </service>
          <service>
            <type>_device-info._tcp</type>
            <port>0</port>
            <txt-record>model=Xserve</txt-record>
          </service>
        </service-group>

    Server: Ubuntu 12.04.01 x64. Clients: OSX 10.6.8, 10.7.5, 10.8.2. The goal is to advertise that NFS share, then assign a really old model code of Mac, like a PowerMac, and switch out the icon for a more "LinuxServer-y" one, plus allow users to connect to NFS in a manner they are familiar with, like our other Xserve servers. I think Avahi is working in general, as if I do nfs://FILESERVER.local/Volumes/StoragePool it will connect fine. Any ideas?
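
    To answer the title question: yes, effectively. Finder's "Shared" sidebar is populated from AFP and SMB advertisements, not from _nfs._tcp, which is why the share connects by URL but never appears in the sidebar. The usual trick, without installing netatalk, is to advertise the AFP service type anyway; a sketch of such a service file (saved as e.g. /etc/avahi/services/afp.service, a hypothetical name), with the caveat that clicking the entry makes Finder attempt AFP on port 548, so this remains cosmetic unless an AFP daemon actually answers:

        <?xml version="1.0" standalone='no'?>
        <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
        <service-group>
          <name replace-wildcards="yes">%h</name>
          <service>
            <type>_afpovertcp._tcp</type>
            <port>548</port>
          </service>
        </service-group>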

  • Issues with ASP.NET via Apache/mod_mono on Ubuntu.

    - by Matthew Scharley
    I run an Ubuntu test server, and my deployment system is also Ubuntu. I've recently been trying to get ASP.NET to work on my test server so that we can take it live. I managed to get it installed and configured properly, and my application is installed and running, but I can't get anything to work. The error I keep receiving is below; if anyone has any clue what might be going on, it would be greatly appreciated.

        Server Error in '/' Application

        Standard output has not been redirected or process has not been started.

        Description: HTTP 500. Error processing request.

        Stack Trace:
        System.InvalidOperationException: Standard output has not been redirected or process has not been started.
          at System.Diagnostics.Process.CancelErrorRead () [0x00000]
          at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:CancelErrorRead ()
          at Mono.CSharp.CSharpCodeCompiler.CompileFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at Mono.CSharp.CSharpCodeCompiler.CompileAssemblyFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at System.CodeDom.Compiler.CodeDomProvider.CompileAssemblyFromFile (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000]
          at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath, System.CodeDom.Compiler.CompilerParameters options) [0x00000]
          at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath) [0x00000]
          at System.Web.Compilation.BuildManager.BuildAssembly (System.Web.VirtualPath virtualPath) [0x00000]
          at System.Web.Compilation.BuildManager.GetCompiledType (System.String virtualPath) [0x00000]
          at System.Web.HttpApplicationFactory.InitType (System.Web.HttpContext context) [0x00000]

        Version information: Mono Version: 2.0.50727.42; ASP.NET Version: 2.0.50727.42
        Apache/2.2.11 (Ubuntu) mod_mono/2.0 PHP/5.2.6-3ubuntu4.2 with Suhosin-Patch Server at dev Port 80

    PS: I had to add three DLLs to the /bin directory in my application, copying them from Windows because I couldn't find them in any of Mono's packages. This might or might not be causing problems, I don't know. The list that I had to add is: System.Web.Abstractions, System.Web.Routing, System.Web.Mvc.

  • Giving VPN connections access to all locations?

    - by Jeff
    I have asked a similar question before but didn't get any answers, so I am going to try to rephrase. I have 4 locations: corporate and 3 remotes. When you are at the corporate location, you have full access to all networks:

        192.168.3.x
        192.168.2.x
        192.168.1.x
        192.168.0.x

    All locations are connected via site-to-site VPN with the corporate location. If you are at a remote location, you have access to that location and the corporate location. The corporate location handles all VPN traffic. However, when you VPN into the corporate location, you cannot see outside the corporate location. Can anyone provide some information or a link explaining how to allow the VPN users to see all locations? Thanks.

    Static route configuration:

        Gateway of last resort is 207.255.x.1 to network 0.0.0.0

        C    207.255.x.0 255.255.255.0 is directly connected, outside
        S    10.0.1.6 255.255.255.255 [1/0] via 207.255.x.1, outside
        S    10.0.1.5 255.255.255.255 [1/0] via 207.255.x.1, outside
        S    192.168.0.0 255.255.255.0 [1/0] via 192.168.0.1, inside
        C    192.168.1.0 255.255.255.0 is directly connected, inside
        S    192.168.2.0 255.255.255.0 [1/0] via 192.168.2.1, inside
        S    192.168.3.0 255.255.255.0 [1/0] via 192.168.3.1, inside
        S*   0.0.0.0 0.0.0.0 [1/0] via 207.255.x.1, outside
                             [1/0] via 192.168.1.1, outside
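
    Judging by the outside/inside interface names and the route format, this looks like a PIX/ASA at corporate. For remote-access VPN users to reach the spoke sites, their traffic has to re-enter the interface it arrived on ("hairpinning"), which is disabled by default, and the spoke networks have to be in the clients' split-tunnel list. A sketch of the relevant ASA-style configuration, with hypothetical ACL and group-policy names:

        ! allow traffic to enter and leave the same (outside) interface
        same-security-traffic permit intra-interface

        ! make sure the VPN clients tunnel traffic for all four networks
        access-list SPLIT-TUNNEL standard permit 192.168.0.0 255.255.255.0
        access-list SPLIT-TUNNEL standard permit 192.168.1.0 255.255.255.0
        access-list SPLIT-TUNNEL standard permit 192.168.2.0 255.255.255.0
        access-list SPLIT-TUNNEL standard permit 192.168.3.0 255.255.255.0

        group-policy RA-POLICY attributes
          split-tunnel-policy tunnelspecified
          split-tunnel-network-list value SPLIT-TUNNEL

    The site-to-site tunnels' crypto ACLs (and any NAT-exemption rules) also need to include the VPN client pool, so each spoke knows to send traffic for the clients back through its tunnel.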

  • How to connect a VM running on an ESXi host to that host via a VMkernel NIC?

    - by Zac B
    Say I have an ESXi (5.0) host that runs a Linux distribution which hosts iSCSI targets, which contain the images for other VMs that the host will run. When it's used, I'll start the host first, then the iSCSI server, and then refresh all storage targets/HBAs in order to see the provided shares as online. I know it's a strange puzzle-box solution, but I was told to implement it. The ESXi host itself has a gigabit NIC which connects to the outside world. The guest OS (CentOS) supports VMXNET3, however, and if I can, I'd like to use its VMXNET3 NIC to serve iSCSI to the ESXi host. How should I go about doing this? I went to create a new virtual network and selected "VMkernel", as it suggested that I use that type of network for SAN traffic, but it is apparently not set up for "self-hosted" SAN hosts, as the new network did not appear as an option to attach the CentOS box's VMXNET3 NIC to. How should I best connect an iSCSI host out to its "parent" ESXi host, if I need a) a 10 Gb connection, and (optionally) b) a VMkernel network for it?
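
    In ESXi's networking model, a VMkernel port and a VM port group are separate constructs: a VM's NIC can only attach to a VM port group, while the software iSCSI initiator binds to a VMkernel port. Both can live on the same vSwitch with no physical uplink, giving a host-internal, 10 Gb-class path. A sketch of the layout, with hypothetical names and addresses:

        vSwitch1 (no physical uplink required)
        |-- VMkernel port "iSCSI-vmk" - e.g. 10.10.10.1/24, used by the ESXi software iSCSI initiator
        |-- VM port group "iSCSI-net" - attach the CentOS VM's VMXNET3 NIC here, e.g. 10.10.10.2/24

    The earlier attempt presumably failed because only the VMkernel port was created; adding a VM port group on the same vSwitch is the missing piece that gives the guest's NIC somewhere to attach.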

  • Websites down, EC2 inaccessible via SSH, CPU utilisation 100% for the last few hours - what should I do?

    - by fuzzybee
    I have multiple websites hosted on one single EC2 instance. One website, "abc", was down for a few hours; it sometimes threw a database connection error and sometimes just took too long to respond. One website, "def", was incredibly slow but still up and running. The rest of the websites had the same symptoms as "abc". I can afford 15 minutes or less of downtime for "def". Should I then (in the AWS console) reboot my instance, or create an AMI image from my instance, launch it, and associate my elastic IP to the new instance, or "launch more like this"?

    Background on what may have happened to my EC2: The last time I made changes was 21 hours ago. A cronjob to create snapshots ran around 19 hours ago, and it has been running for a long time. Google Analytics shows traffic to my websites such as kidlander.sg has been nothing exceptional. Are there any other actions I should take, or better options I could have? (I have already contacted AWS support, but their turnaround is 12 hours, so I appreciate all the help I could get.)

    Update: I got everything back up and running, and CPU utilisation is back to normal, around 30%. There is one difference between "def" and "abc" (as well as my other websites):

    - "def"'s database is hosted on RDS
    - "abc"'s database is hosted on an EC2 instance (different from my web server instance) configured by myself

    Nevertheless, I checked the EC2 instance I'm using as the MySQL server yesterday, and it was absolutely fine during the incident: low CPU utilisation, and I could log in using the linux command line.

  • MacBook Pro for Windows development via virtualization. Performance?

    - by webworm
    I am a Windows/web developer by profession, and I have been considering a MacBook Pro as a replacement for my current development machine. I am impressed by the build quality, the uni-body construction, and the performance specs of the MacBook Pro. I am specifically interested in the 13.3" MacBook Pro running a Core 2 Duo 2.4 GHz processor with 4 GB RAM. What I am wondering is this... what performance can I expect running SQL Server 2008, IIS, and Visual Studio 2010 within a virtual environment (VMware Fusion and Windows 7) on the above-mentioned MacBook Pro? I like the 13.3" model, as the size is more portable, but am I expecting too much from a Core 2 Duo processor? Would I need to look at the next step up in MacBook Pro, using the Core i5 processor? Thanks!
