Search Results

Search found 45843 results on 1834 pages for 'network access'.


  • Organization &amp; Architecture UNISA Studies &ndash; Chap 6

    - by MarkPearl
    Learning Outcomes
    - Discuss the physical characteristics of magnetic disks
    - Describe how data is organized and accessed on a magnetic disk
    - Discuss the parameters that play a role in the performance of magnetic disks
    - Describe different optical memory devices

    Magnetic Disk
    The way data is stored on and retrieved from magnetic disks: data is recorded on and later retrieved from the disk via a conducting coil named the head (in many systems there are two heads). The write mechanism exploits the fact that electricity flowing through a coil produces a magnetic field. Electric pulses are sent to the write head, and the resulting magnetic patterns are recorded on the surface below, with different patterns for positive and negative currents.

    The physical characteristics of a magnetic disk: summarize from the book.

    The factors that play a role in the performance of a disk:
    - Seek time – the time it takes to position the head at the track
    - Rotational delay / latency – the time it takes for the beginning of the sector to reach the head
    - Access time – the sum of the seek time and rotational delay
    - Transfer time – the time it takes to transfer data
    For example, a drive with a 4 ms average seek time spinning at 7,200 rpm has an average rotational delay of about 4.2 ms (half a revolution, 0.5 x 60/7200 s), giving an average access time of roughly 8.2 ms before transfer time is added.

    RAID
    The rate of improvement in secondary storage performance has been considerably less than the rate for processors and main memory, so secondary storage has become a bit of a bottleneck. RAID works on the concept that if one disk can only be pushed so far, additional gains in performance are to be had by using multiple parallel components. Points to note about RAID:
    - RAID is a set of physical disk drives viewed by the operating system as a single logical drive
    - Data is distributed across the physical drives of an array in a scheme known as striping
    - Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure (not supported by RAID 0 or RAID 1)
    Note that increasing the number of drives increases the probability of failure. To compensate for this decreased reliability, RAID stores parity information that enables the recovery of data lost due to a disk failure.
    The RAID scheme consists of 7 levels:
    - Level 0 – Striping (non-redundant). Disks required: N. Data availability: lower than a single disk. Large I/O data transfer capacity: very high. Small I/O request rate: very high for both read and write.
    - Level 1 – Mirroring. Disks required: 2N. Data availability: higher than RAID 2–5 but lower than RAID 6. Large I/O data transfer capacity: higher than a single disk. Small I/O request rate: up to twice that of a single disk for read.
    - Level 2 – Parallel access, redundant via Hamming code. Disks required: N + m. Data availability: much higher than a single disk. Large I/O data transfer capacity: highest of all listed alternatives. Small I/O request rate: approximately twice that of a single disk.
    - Level 3 – Parallel access, bit-interleaved parity. Disks required: N + 1. Data availability: much higher than a single disk. Large I/O data transfer capacity: highest of all listed alternatives. Small I/O request rate: approximately twice that of a single disk.
    - Level 4 – Independent access, block-interleaved parity. Disks required: N + 1. Data availability: much higher than a single disk. Large I/O data transfer capacity: similar to RAID 0 for read, significantly lower than a single disk for write. Small I/O request rate: similar to RAID 0 for read, significantly lower than a single disk for write.
    - Level 5 – Independent access, block-interleaved distributed parity. Disks required: N + 1. Data availability: much higher than a single disk. Large I/O data transfer capacity: similar to RAID 0 for read, lower than a single disk for write. Small I/O request rate: similar to RAID 0 for read, generally lower than a single disk for write.
    - Level 6 – Independent access, block-interleaved dual distributed parity. Disks required: N + 2. Data availability: highest of all listed alternatives. Large I/O data transfer capacity: similar to RAID 0 for read, lower than RAID 5 for write. Small I/O request rate: similar to RAID 0 for read, significantly lower than RAID 5 for write.
    Read pages 215–221 for a detailed explanation of the RAID levels (a small illustration of the parity idea appears at the end of this summary).

    Optical Memory
    There are a variety of optical-disk systems available; read through the table on pages 222–223. Some of the devices include: CD, CD-ROM, CD-R, CD-RW, DVD, DVD-R, DVD-RW, and Blu-ray DVD.

    Magnetic Tape
    Most modern systems use serial recording – data is laid out as a sequence of bits along each track. The typical technique used in serial recording is referred to as serpentine recording. In this technique, when data is being recorded, the first set of bits is recorded along the whole length of the tape. When the end of the tape is reached, the heads are repositioned to record a new track, and the tape is again recorded along its whole length, this time in the opposite direction. That process continues back and forth until the tape is full. To increase speed, the read-write head is capable of reading and writing a number of adjacent tracks simultaneously. Data is still recorded serially along individual tracks, but blocks in sequence are stored on adjacent tracks. A tape drive is a sequential access device. Magnetic tape was the first kind of secondary memory. It is still widely used as the lowest-cost, slowest member of the memory hierarchy.
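
    As a small illustration of the parity idea behind the block-interleaved levels above (my own example, not taken from the chapter): with data blocks B1, B2 and B3 striped across three disks, the parity disk stores P = B1 XOR B2 XOR B3. If the disk holding B2 fails, its contents can be rebuilt as B2 = P XOR B1 XOR B3, because XOR-ing the same value twice cancels it out. RAID 6 keeps two independent check blocks, which is why it needs N + 2 disks and can survive two simultaneous disk failures.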

    Read the article

  • postfix smtp_fallback_relay for deferred messages to a single domain

    - by EdwardTeach
    I use Postfix to send messages to a mail server outside my organization which frequently rejects/defers my mail. My Postfix server sees that these messages are deferred and tries again, eventually getting through. Final delivery can take up to an hour, which makes my users unhappy. In comparison, mail from my Postfix server to other hosts works normally. I have now found out about a second, unofficial MX for this domain that does not reject/defer mail. This second MX does not appear when doing a DNS MX query for the domain. Therefore, for the problem domain I would like to use this second MX as a fallback. That is: whenever mail is deferred by the primary MX, try again on the unofficial second MX. I see that there is already a postfix configuration "smtp_fallback_relay". However the documentation seems to indicate that I can not restrict usage of the fallback to a single domain. The documentation also doesn't mention deferred message handling. So is there a way to configure a single-domain, deferred-retry fallback host in Postfix? For reference, I am including my postconf output (the host names and ip addresses are fake):

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases, hash:/etc/postfix/legacy_mailman, ldap:/etc/postfix/ldap-aliases.cf
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        default_destination_concurrency_limit = 2
        inet_interfaces = all
        inet_protocols = all
        local_destination_concurrency_limit = 2
        local_recipient_maps = $alias_maps
        mailbox_size_limit = 0
        mydestination = myhost.my.network, localhost.my.network, localhost, my.network
        myhostname = myhost.my.network
        mynetworks = 127.0.0.0/8, [::ffff:127.0.0.0]/104, [::1]/128, 10.10.10.0/24
        myorigin = my.network
        readme_directory = no
        recipient_delimiter = +
        relay_domains = $mydestination
        relayhost =
        smtp_fallback_relay = the.problem.host
        smtp_header_checks =
        smtpd_banner = $myhostname ESMTP $mail_name
        virtual_alias_maps = hash:/etc/postfix/virtual
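
    One direction that might work (a sketch only; the transport name, domain and backup host below are placeholders, and whether smtp_fallback_relay fires on 4xx deferrals from the primary MX, rather than only on unreachable destinations, would need testing): route just the problem domain through a dedicated smtp transport and set the fallback relay on that transport, so it never affects other destinations.

        # /etc/postfix/master.cf – extra delivery transport (the name is arbitrary)
        fallbacksmtp unix -       -       n       -       -       smtp
            -o smtp_fallback_relay=[unofficial-mx.problemdomain.example]

        # /etc/postfix/main.cf
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport – only the problem domain uses the new transport
        problemdomain.example    fallbacksmtp:

    followed by postmap /etc/postfix/transport and postfix reload.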

    Read the article

  • faking NAT with a VMware distributed switch across multiple hosts

    - by romant
    I need to construct a NAT for certain machines within the network, and I wish to do this with a dvSwitch, as that seems the logical way of attacking the problem given that there are just under 30 hosts in this scenario. In order for the NAT'ed VMs to have access to the 'real' network, I am providing a 'router' VM, which will have access to the WAN/outside network and will also act as the DHCP server for the NAT'ed machines. Problem space: when the machines connected to the NAT interface and the router are on the same host, they get an IP from the router VM and work perfectly (routed outside). Unfortunately, machines on other hosts that are connected to the dvSwitch do not get an IP, and tcpdump shows no network data getting through across the hosts within the dvSwitch. Has anyone achieved a NAT solution using a dvSwitch before that they could share?! Thank you. EDIT: Including the diagram.

    Read the article

  • "Half" ssh authorization to a server with git repository

    - by hsz
    Hello! Currently I have purchased web hosting with ssh access. I have created a git repository on it, and if I put my public key in the ~/.ssh/authorized_keys file I have access to that repo: I can push/pull data, etc. This solution allows access for every user that has his public key in the authorized_keys file. But there is one thing that I want to avoid: every user can also log in to the server and has access to the whole ssh account. Is it possible to create a blacklist of users' keys that will not have access to ssh? I see it this way:
    - user logs in to git - OK, allow it for everyone
    - user logs in to the ssh account - ~/.profile is hooked and calls a custom script, which:
      - checks the user's public key
      - if the public key is in ~/.ssh/blacklist_keys, calls bash exit/logout
    Is it possible in any way?
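
    A common alternative to the blacklist idea (a sketch; the key material and comment are placeholders): instead of hooking ~/.profile, restrict the individual keys in ~/.ssh/authorized_keys to Git with a forced command, for example:

        command="git-shell -c \"$SSH_ORIGINAL_COMMAND\"",no-port-forwarding,no-pty,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAAB3... user@example

    Keys without the command= prefix keep full shell access; keys with it can only run git-upload-pack, git-receive-pack and git-upload-archive, so clone/push/pull still work but an interactive login is refused.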

    Read the article

  • ipv6 auto configuration not working in ubuntu natty

    - by allan ruin
    In Win7 my computer automatically gets an IPv6 global address and can use the IPv6 network, but in Ubuntu Natty I can't find out how to get stateless autoconfiguration to work. My network is a university campus network, so I don't need tunnels. I figure that if something can be accomplished silently and successfully in Windows, it shouldn't be impossible in Linux. I can manually edit /etc/network/interfaces and use a static IPv6 address, and I can use IPv6 that way, but I just want to use auto-configuration. I found this post: How to disable autoconfiguration on IPv6 in Linux? and tried

        sudo sysctl -w net.ipv6.conf.all.autoconf=1
        sudo sysctl -w net.ipv6.conf.all.accept_ra=1

    but no luck.
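
    For what it's worth, a minimal sketch of letting ifupdown handle SLAAC (the interface name eth0 is an assumption; the existing IPv4 stanza stays as it is):

        # /etc/network/interfaces – add alongside the existing inet stanza
        auto eth0
        iface eth0 inet6 auto      # stateless address autoconfiguration from router advertisements

    It may also matter that the sysctls are set per interface (net.ipv6.conf.eth0.accept_ra and net.ipv6.conf.eth0.autoconf) rather than only under conf.all, and that they are in place before the interface comes up.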

    Read the article

  • How to go about scheduling a task in windows 7 to change wireless connection

    - by Skindeep2366
    This may or may not be something that can be done. I cannot find anything on the wireless connection manager built into Windows 7, let alone methods for passing parameters into it. The problem is as follows: I have 2 wireless routers. One provides internet access, the other provides sole access to the local network. Every day at 4am the main system creates a backup in 2 locations: one is an external USB drive, the other is a location on the network. This is all cool if someone remembers to change over to the local network router before leaving. But if it is forgotten, the roof will collapse, the walls will burn, and I will be... well, you get the idea. Solution: there is already a custom event that fires an automated backup program at 4am every day. I need some way to force the wireless network to use the correct connection at, say, 3:58am every day. Any ideas????
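
    One possible sketch (the profile name LocalNet and the script path are hypothetical, and this assumes a saved wireless profile for the local-network router already exists): switch profiles with netsh from a scheduled task.

        rem C:\scripts\switch-wifi.bat – connect to the local-network router's saved profile
        netsh wlan connect name=LocalNet

        rem register it to run daily at 3:58 AM, two minutes before the backup fires
        schtasks /create /tn SwitchWifiForBackup /tr C:\scripts\switch-wifi.bat /sc daily /st 03:58

    A second scheduled task could switch back to the internet router's profile once the backup window has passed.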

    Read the article

  • (manually configured) kernel update leaves wireless in a mess

    - by Mala
    I recently upgraded my kernel from 2.6.31-gentoo-r6 to 2.6.32-gentoo-r7. In both cases, I configured everything manually. However, since the upgrade, my wireless card appears to be on the fritz. It will connect to networks just fine, and remain connected, but can only access the internet (and other hosts on the network) for about 3 seconds after connecting. Reconnecting to the network appears to fix the problem... for another 3 seconds or so. The problem is "solved" by booting into the older kernel. The relevant lspci entry is

        02:00.0 Network controller: Intel Corporation PRO/Wireless 5300 AGN [Shiloh] Network Connection

    I'm pretty sure I have the correct drivers enabled in the kernel:

        Device Drivers --->
          Network device support --->
            Wireless LAN (IEEE 802.11) --->
              <*> Intel Wireless Wifi
              [*]   Enable LED support in iwlagn and iwl3945 drivers
              [*]   Enable Spectrum Measurement in iwlagn driver
              [*]   Enable full debugging output in iwlagn and iwl3945 drivers
              <*>   Intel Wireless WiFi Next Gen AGN (iwlagn)
              [*]     Intel Wireless WiFi 4965AGN
              [*]     Intel Wireless WiFi 5000AGN; Intel WiFi Link 1000, 6000, and 6050 Series

    I tried with the other intel drivers enabled as well (iwl3945) and no difference. Is there something stupid I'm missing? Is there something I have to recompile after upgrading the kernel (a la nvidia)? Thanks Mala
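
    A quick way to rule out a configuration or firmware difference between the two kernels (the /usr/src paths are assumptions based on a typical Gentoo layout):

        # compare the wireless-related options of the old and new configs
        diff <(grep -iE 'iwl|mac80211|cfg80211|rfkill' /usr/src/linux-2.6.31-gentoo-r6/.config) \
             <(grep -iE 'iwl|mac80211|cfg80211|rfkill' /usr/src/linux-2.6.32-gentoo-r7/.config)

        # check whether the microcode loads cleanly under the new kernel
        dmesg | grep -iE 'iwl|firmware'

    Unlike the nvidia module there is nothing to recompile for iwlagn itself, but the iwlwifi-5000 microcode does have to be present in /lib/firmware in a version the newer driver accepts.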

    Read the article

  • Missing driver ASUS PCE-N53 11n N600 PCI-E Adapter

    - by oyse
    I have problems getting an Asus PCE-N53 11n N600 PCI-E adapter card to work on my desktop computer. As far as I can tell, no drivers are installed for the card. I know I can manually download the drivers directly from Asus, but I would rather not go that route. If anyone knows of any packages or other things I can do to make this work, it would be much appreciated. Some system details:

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04.1 LTS
        Release:        12.04
        Codename:       precise

        $ sudo lshw -C network
          *-network
               description: Ethernet interface
               product: RTL8111/8168B PCI Express Gigabit Ethernet controller
               vendor: Realtek Semiconductor Co., Ltd.
               physical id: 0
               bus info: pci@0000:03:00.0
               logical name: eth0
               version: 06
               serial: d4:3d:7e:03:b9:1d
               size: 100Mbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl8168e-3_0.0.4 03/27/12 ip=192.168.0.173 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
               resources: irq:43 ioport:d000(size=256) memory:f2104000-f2104fff memory:f2100000-f2103fff
          *-network UNCLAIMED
               description: Network controller
               product: Ralink corp.
               vendor: Ralink corp.
               physical id: 0
               bus info: pci@0000:04:00.0
               version: 00
               width: 32 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list
               configuration: latency=0
               resources: memory:f7100000-f710ffff

        $ lsmod
        Module                  Size   Used by
        nvidia                12319264 51
        vesafb                  13844  1
        snd_hda_codec_hdmi      32474  1
        joydev                  17693  0
        bnep                    18281  2
        rfcomm                  47604  0
        bluetooth              180104  10 bnep,rfcomm
        snd_hda_codec_realtek  224173  1
        snd_seq_midi            13324  0
        ppdev                   17113  0
        snd_rawmidi             30748  1 snd_seq_midi
        usbhid                  47199  0
        hid                     99559  1 usbhid
        nouveau                774641  0
        parport_pc              32866  1
        snd_hda_intel           33773  5
        ttm                     76949  1 nouveau
        snd_hda_codec          127706  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
        drm_kms_helper          46978  1 nouveau
        drm                    242038  3 nouveau,ttm,drm_kms_helper
        snd_seq_midi_event      14899  1 snd_seq_midi
        snd_hwdep               13668  1 snd_hda_codec
        snd_seq                 61896  2 snd_seq_midi,snd_seq_midi_event
        i2c_algo_bit            13423  1 nouveau
        mxm_wmi                 12979  1 nouveau
        wmi                     19256  1 mxm_wmi
        mac_hid                 13253  0
        snd_pcm                 97188  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
        psmouse                 97362  0
        video                   19596  1 nouveau
        snd_timer               29990  2 snd_seq,snd_pcm
        snd_seq_device          14540  3 snd_seq_midi,snd_rawmidi,snd_seq
        snd                     78855  20 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_rawmidi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_seq,snd_pcm,snd_timer,snd_seq_device
        serio_raw               13211  0
        soundcore               15091  1 snd
        snd_page_alloc          18529  2 snd_hda_intel,snd_pcm
        mei                     41616  0
        lp                      17799  0
        parport                 46562  3 ppdev,parport_pc,lp
        r8169                   62099  0
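
    A couple of diagnostics that might narrow it down (the rt2800pci guess is an assumption; the PCE-N53 is a Ralink-based card, and which in-kernel driver covers it depends on the exact chipset and kernel):

        # exact PCI vendor:device ID of the unclaimed Ralink controller
        lspci -nn | grep -i ralink

        # see whether an in-tree Ralink driver will claim it
        sudo modprobe rt2800pci
        dmesg | tail -n 20

    If no stock driver claims the device on the 12.04 kernel, the remaining options are usually a newer kernel (e.g. via a backported/HWE stack) or the vendor driver, which is what the question was hoping to avoid.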

    Read the article

  • localhost name error with linux machines

    - by coderex
    Hi, CASE 1: I have an Ubuntu machine with the name midhun.local. I can access it at http://midhun.local/svn, but it can't be reached from other machines (both Windows and Linux) through this hostname; it does work with http://192.168.1.192/svn. CASE 2: I have another machine (Windows) with the hostname myname, serving on port 555. In this case I can access https://myname:555/svn from other Windows machines with that same URL, but if I try from a Linux machine the same URL does not work; https://192.1.168.111:555/svn works instead. How can I solve this? I need to access each server via the same name from any machine on the LAN. How is this possible? Thanks in advance!!
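
    One low-tech sketch that sidesteps the name-resolution differences (the Windows hosts-file path is the standard location; the Windows box's address is left as a placeholder): give every machine a static hosts entry for the names it needs.

        # /etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows
        192.168.1.192         midhun.local
        <ip-of-windows-box>   myname

    The likely explanation for the asymmetry: .local names resolve on Ubuntu via Avahi/mDNS, which Windows does not speak without extra software, while the Windows name probably relies on NetBIOS, which Linux does not resolve by default; static hosts entries (or a small internal DNS zone) cover both cases.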

    Read the article

  • Turnkey with LightSwitch

    - by Laila
    Microsoft has long wanted to find a replacement for Microsoft Access. The best attempt yet, which is due out in, or before, September, is Visual Studio LightSwitch, with which it is said to be as 'easy as flipping a switch' to use Silverlight to create simple form-driven business applications. It is easy to get confused by the various initiatives from Microsoft. No, this isn't WebMatrix. There is no 'Razor', for this isn't meant for cute little ecommerce sites, but is designed to build simple database applications of the card-box type. It is more clearly a .NET-based solution to the problem that every business seems to suffer from: the plethora of Access-based and Excel-based 'private' and departmental database applications. These are a nightmare for any IT department since they are often 'stealth' applications built by the business in the teeth of opposition from the IT department zealots. As they are undocumented, it is scarily easy to bring a whole department into disarray by decommissioning a PC tucked under a desk somewhere. With LightSwitch, it is easy to re-write such applications in a standard, maintainable way, using a SQL Server database, deployed somewhere reasonably safe such as Azure. Even SharePoint or Windows Communication Foundation can be used as data sources. Oracle's ApEx has taken off remarkably well, and has shaken the perception that, for the business user, Oracle must remain a mystic force accessible only to the priests and acolytes. Microsoft, by comparison, had only Access, which was first released in 1992, the year of the Madonna conical bustier. It looks just as dated. Microsoft badly needed an entirely new solution to the same business requirement that led to Access's and FoxPro's long-time popularity, but one with the same allure as ApEx. LightSwitch is sound in its ideas, and comfortingly conventional in its architecture. By giving easy access to SQL Server databases, and providing a 'thumb and blanket' migration path to Access-heads, LightSwitch seems likely to offer a simple way of pulling more Microsoft users into the .NET community. If Microsoft puts its weight behind it, then it will give the many Silverlight developers some glimmer of hope that Microsoft is capable of seeing its .NET revolution through.

    Read the article

  • Debugging Windows PC freeze

    - by Violet Giraffe
    I have a problem with my computer and would appreciate any hints/ideas. It usually begins not immediately after booting Windows, but at some unpredictable point in time, which doesn't seem to correlate with any specific actions of mine. The first sign of a problem is the System process starting to consume 25% CPU time steadily. I have a quad-core CPU, so it might be one thread working non-stop. At this point micro-freezes start to occur - the screen stops refreshing, but if I have, say, a music player running, it continues playing. If I try to do something between the freezes, like open the Start menu, it will freeze completely and forever. If I press the reset button the PC will shut down and then start cold, as opposed to the usual reset behavior (which doesn't include the PC shutting down). I have noticed that a full restart upon reset is usual for hardware problems, but I think this problem isn't related to the motherboard, CPU, RAM or video adapter at least. It certainly isn't caused by overheating. One very important note is that it seems to be related to the Windows hosted WLAN network: I have a USB Wi-Fi dongle and have configured a hosted network to share the cable Internet connection with Wi-Fi devices. I am not 100% certain there's a strong connection, but in 9 or 10 cases when I enabled the network (by executing netsh wlan start hostednetwork), it did freeze eventually (sometimes within minutes of starting the network, sometimes within hours), and on at least 10 days when I didn't start the network it never froze, no matter how I used the computer. There are no critical/error entries in the events log that I can suspect as being related, only regular stuff like "driver not loaded". I have found no critical/error events that are logged around the time of the freeze and are not logged during a normal boot without starting the WLAN.

    Read the article

  • Sharing internet connection from Windows XP using wi-fi router

    - by Darius
    Hi, I have a network configuration like this:
    - Ethernet cable from the ISP connected to a Windows XP machine, configured with static IP 192.168.0.3
    - Another ethernet connection from the XP machine's 2nd network adapter to a Wi-Fi router (D-Link Airport G+)
    - XP set to "Share internet connection", with the 2nd adapter configured statically as 192.169.0.1
    - The D-Link Airport Wi-Fi router also configured as a "static connection", its IP set to 192.169.0.2 and its default gateway set to 192.169.0.1; the network mask everywhere is /24
    - A laptop computer connected to the router with static IP 192.169.0.3
    The problems are:
    - The XP machine sees the router (it's able to ping it and access it via the web admin tool)
    - The router somehow cannot ping the XP machine (using the tool provided by the web-based admin tool)
    - The laptop computer cannot ping anything and cannot be pinged
    - The router is only accessible when the ethernet cable is connected to one of the router's 1-4 LAN ports; when I connect it via the "WAN" port (which I believe is the proper one) it's not visible from the XP machine
    If you have similar experience with configuring a network like this I would really appreciate your help. I cannot use the Wi-Fi router with the ISP cable itself.

    Read the article

  • Why does my DSL modem now need a reboot each time for my laptop to connect?

    - by msorens
    I have a rather peculiar home networking issue. For some time my home network was purring along fine. I could turn on either of my laptops and they would quickly find and connect to my DSL modem (and thence the internet). Several days ago I unplugged my DSL modem for the first time in months. Upon turning it back on and waiting for the boot to finish, the lights on the panel indicated the DSL modem was fully operational, just as before. But that's not what happened. Not at all. Now when I turn on my Win7 laptop, the network icon in my system tray shows a small starburst; hovering over it the tooltip states "Not connected; connections are available". Clicking it lists several nearby networks including my own network showing a strong signal. If I click to connect, it attempts a connection but then I get a dialog stating "Windows was unable to connect to MyNet.". Turning off wireless on my laptop and turning it back on yields no difference. Running the network troubleshooter (which includes doing a repair on the network connection) yields no difference. The only remedy is to reboot the DSL modem (i.e. unplug it, wait a few seconds, then plug it back in). As soon as it goes online my laptop finds it and connects properly. To add one more twist to the story, this happened to me once before, several months ago. After a couple weeks, the situation resolved itself(!). Everything started working properly again, due to nothing I did. Final note: this problem only affects the wireless connection to the DSL modem. My desktop computer, connected via hardline to the DSL modem, connects fine when I turn it on. Any thoughts on why this is happening or how to fix it?

    Read the article

  • Cisco ASA 5505 (8.05): asymmetrical group-policy filter on an L2L IPSec tunnel

    - by gravyface
    I'm trying to find a way to set up a bi-directional L2L IPsec tunnel, but with differing group-policy filter ACLs for the two sides. I have the following filter ACL set up, applied, and working on my tunnel-group:

        access-list ACME_FILTER extended permit tcp host 10.0.0.254 host 192.168.0.20 eq 22
        access-list ACME_FILTER extended permit icmp host 10.0.0.254 host 192.168.0.20

    According to the docs, VPN filters are bi-directional; you always specify the remote host first (10.0.0.254), followed by the local host and (optionally) the port number. However, I do not want the remote host to be able to access my local host's TCP port 22 (SSH), because there's no requirement to do so; there's only a requirement for my host to access the remote host's SFTP server, not vice versa. But since these filter ACLs are bidirectional, line 1 is also permitting the remote host to access my host's SSH server. The documentation I'm reading doesn't make it clear whether this is possible; help/clarification much appreciated.
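
    For what it's worth, one approach sometimes suggested for vpn-filter ACLs (untested here, and worth verifying against the ASA 8.0 configuration guide) is to match the remote server's well-known port as the source port, so that only return traffic for sessions the local host initiates is matched, and to never add a line with destination port 22 on the local host:

        access-list ACME_FILTER extended permit tcp host 10.0.0.254 eq 22 host 192.168.0.20
        access-list ACME_FILTER extended permit icmp host 10.0.0.254 host 192.168.0.20

    If the filter really is evaluated that way, the remote host has no permitted path to TCP/22 on 192.168.0.20, while locally-initiated SFTP sessions to the remote host's port 22 still work.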

    Read the article

  • Just a few questions about Hyper-V virtual machines and clustering

    - by René Kåbis
    I have been using Microsoft’s Hyper-V technology for a little while now, but I am just now dipping my toe into clustering. In particular, I am trying to implement a fault-tolerant SQL DB. This involves setting up two VMs, clustering them via Failover Cluster, and then installing SQL Server in some fashion. I have two physical machines - one high-end and rather beefy “heavy lifter” to contain the majority of the VMs, and another “backup” (a repurposed desktop) to hold the essential “secondary” (or failover) AD-DC, SQL and FS VMs. The main reason why I find the failover cluster at the VM level so attractive is that it presents a single IP and DNS entry to the network as a whole - if one machine (physical or virtual) goes down, you might lose some pings and the connections get reset, but the network applications (the Microsoft RMS connection to the backend SQL) can still connect to a viable DB without having to mess around with the settings at all. My first question is in terms of SQL Server itself. If I have a cluster between two VMs, does it make more sense to install SQL Server in a Failover Cluster configuration, or should I simply install it in a stand-alone config and mirror the DBs? For example, this post suggests just mirroring the DBs, but do I just mirror standalone DBs on standalone VMs, or can I get the network and failover benefits of clustered VMs while still utilizing (on each clustered VM) standalone DBs that have been mirrored between each other? As well, I have come across a lot of documentation about SQL clustering, but most of it assumes a number of physical machines (more than two) to hold not only the actual SQL VMs but also the quorum and witness stores. I will not be able to muster more than two physical machines. As such, I will have to be satisfied with a VM cluster that does not exceed two VMs (one for each physical machine). Another issue involves MSDTC - the Distributed Transaction Coordinator. When attempting to install the SQL Failover Cluster (I never completed it for this reason), it threw a hissy fit because MSDTC had not been clustered. Search as I might, I have not yet found a way to do so under Windows Server 2012 R2. I have found plenty of docs for Windows 2008 and 2008 R2, but those instructions don’t align with 2012 R2 (at least, not in a way that allows me to successfully cluster MSDTC). Plus, some of the instructions that I have found for SQL Server Failover Cluster installation suggest that a third “network device” - shared network storage (a SAN) - is required for the DB itself (and other functionality). I do not have this, and won’t be getting this. Most of my storage exists on the “heavy lifter” that was designed for all of the “primary” VMs. If that physical machine goes down, so does the storage. The secondary server does have enough resources for an AD-DC server, an SQL server and a file server, so it will handle the “secondary” failover versions of those VMs (clustered or not). My final question involves file servers. If I cluster file servers between two VMs (one on my “heavy lifter” and another on my “backup”), how do I mirror the data between them? Clustering VMs only provides a single point of access on the network for a resource; it doesn’t exactly replicate data between the two - that is left to the services that serve up that data. I am unsure how I can ensure that file server data between two clustered file server VMs can be properly mirrored. Remember, I only have two devices to be used here - my primary machine and a backup secondary.
There is no chance of me obtaining a SAN or any other type of network attached storage. What exists on the machines must act as the storage. Thanks in advance for any suggestions.

    Read the article

  • RequestContextHolder.currentRequestAttributes() and accessing HTTP Session

    - by Umesh Awasthi
    I need to access the HTTP session for fetching as well as storing some information. I am using Spring MVC for my application, and I have 2 options here:
    - Use the Request/Session in my Controller method and do my work
    - Use RequestContextHolder to access session information
    I am separating some calculation logic from the Controller and want to access session information in this new layer, and for that I have 2 options:
    - Pass the session or Request object to the method in the other layer and perform my work
    - Use RequestContextHolder.currentRequestAttributes() to access the request/session and perform my work
    I am not sure which is the right way to go. With the second approach, I can see that the method calls will be cleaner and I need not pass the request/session each time.
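
    For reference, a minimal sketch of the second option (the surrounding service class is hypothetical; the Spring types are the standard ones):

        import javax.servlet.http.HttpSession;
        import org.springframework.web.context.request.RequestContextHolder;
        import org.springframework.web.context.request.ServletRequestAttributes;

        public class CalculationService {
            public void storeResult(String key, Object value) {
                // RequestContextHolder exposes the request bound to the current thread by the DispatcherServlet
                ServletRequestAttributes attrs =
                        (ServletRequestAttributes) RequestContextHolder.currentRequestAttributes();
                HttpSession session = attrs.getRequest().getSession();
                session.setAttribute(key, value);
            }
        }

    Note that this only works on the thread that is handling the web request (otherwise currentRequestAttributes() throws IllegalStateException), which is one argument for passing the request/session in explicitly if the calculation layer might ever run asynchronously.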

    Read the article

  • How do I force my Graphite Airport to distribute 192.168.x addresses instead of 10.x.x addresses?

    - by William
    I'm sharing a single IP address but there's no way to change what network it uses for distributing the private addresses via DHCP. My problem is that my VPN to work already uses the 10.x.x network, so I'd like my home to use 192.168.x. I've tried messing with the Network tab within the Airport Admin Utility for Graphite and Snow but nothing works. I'm hoping there's some way to hack the configuration file.

    Read the article

  • Connecting to wireless networks from command line

    - by Balaji
    I need to write a shell script which connects to one of the two available wi-fi networks; one is an unsecured connection and the other is a secured connection. My question has 2 parts:

    1. How do I connect to the unsecured (unencrypted, no password required) connection from the command line (or by executing a shell script) when I'm connected to the secured connection? I followed the steps in http://www.ubuntugeek.com/how-to-troubleshoot-wireless-network-connection-in-ubuntu.html for the unsecured connection. I put all the commands in a script and executed it (I made sure that the interface name and essid are correct):

        sudo dhclient -r wlan0
        sudo ifconfig wlan0 up
        sudo iwconfig wlan0 essid "UAPublic"
        sudo iwconfig wlan0 mode Managed
        sudo dhclient wlan0

    But nothing happens - I'm not disconnected from the current network and not connected to the new one.

    2. When I want to connect to the secure wi-fi network, I understand from http://askubuntu.com/a/138476/70665 that I need to use wpa_supplicant. But I enter a lot of details in the interface when I connect via the UI:

        security: WPA and WPA2 Enterprise
        authentication: PEAP
        CA certificate: Equifax...
        PEAP version: automatic
        inner authentication: MSCHAPv2
        username:
        password:

    How do I use wpa_supplicant to specify all these details on the command line? The conf file

        network={
            ssid="ssid_name"
            psk="password"
        }

    doesn't work for me.
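
    For the second part, a wpa_supplicant network block for PEAP/MSCHAPv2 usually looks something like this (a sketch; the SSID, credentials and CA-certificate path are placeholders that have to match the real network):

        network={
            ssid="YourSecureSSID"
            key_mgmt=WPA-EAP
            eap=PEAP
            identity="your_username"
            password="your_password"
            ca_cert="/etc/ssl/certs/Equifax_Secure_CA.pem"
            phase2="auth=MSCHAPV2"
        }

    started with something like sudo wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf -D wext, followed by sudo dhclient wlan0. The psk= line from the question only applies to WPA-PSK networks, which is why it does nothing for an enterprise (EAP) network.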

    Read the article

  • How do I troubleshoot an IPsec tunnel (from a cellular router to a public server)?

    - by Hanno Fietz
    I'm new to IPsec and struggling with a setup that might soon be widely used in our operations (provided I understand it, eventually...). A cellular router (a black box by netModule; from its log messages it seems to be running Linux and Openswan) connects a sensor network on customers' sites with our public server. We need to be able to connect into the local network, so I had the cell provider give me a public IP (a dynamic one). The way their setup works, the public IPs only allow IPsec traffic. I set up Openswan on our Ubuntu server (running Jaunty). This is my connection config from /etc/ipsec.conf:

        conn gprs-field-devices
            left=my.pub.lic.ip
            [email protected]
            #leftsubnet=192.168.1.129/25
            right=%any
            [email protected]
            #rightsubnet=192.168.1.1/25
            #rightnexthop=%defaultroute
            auto=add

    On the router, all I have is the web UI, in which I made the following settings:
    - "Remote endpoint": public IP of the server, same as "left" above
    - "Local Network Address": 192.168.1.1
    - "Local Network Mask": 255.255.255.128
    - "Remote Network Address": 192.168.1.129
    - "Remote Network Mask": 255.255.255.128

    The pluto process on the server is listening for connections on port 500. It can't open a tunnel, obviously, because it doesn't know at which IP the client is. I set up a passphrase as a PSK for @field.econemon.com in /etc/ipsec.secrets and also configured it in the router (which doesn't seem to support certificates). My problem is, nothing happens. The router just says IPsec is "down". When I copy-paste the IP into ipsec.conf (for "right="), and ask the server to ipsec auto --up gprs-field-devices, it just hangs until I press Ctrl-C. Is there anything wrong with my setup? How can I debug this further? My router gives the following log lines, which seem related but don't tell me anything:

        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.secrets"
        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.d/hostkey.secrets"
        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: loading secrets from "/etc/ipsec.d/netbox0.secrets"
        Feb 21 23:08:20 Netbox authpriv.warn pluto[2497]: "netbox00" #1: initiating Main Mode
        Feb 21 23:08:20 Netbox daemon.err ipsec__plutorun: 104 "netbox00" #1: STATE_MAIN_I1: initiate
        Feb 21 23:08:20 Netbox daemon.err ipsec__plutorun: ...could not start conn "netbox00"
        Feb 21 23:08:22 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: ignoring informational payload, type NO_PROPOSAL_CHOSEN
        Feb 21 23:08:22 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: received and ignored informational message
        Feb 21 23:08:28 Netbox user.warn parrot.system_controller[762]: IPSECCTRLR: Tunnel 0 is down for 0 seconds
        Feb 21 23:08:40 Netbox user.warn parrot.system_controller[762]: IPSECCTRLR: Tunnel 0 is down for 10 seconds
        Feb 21 23:08:52 Netbox authpriv.warn pluto[2497]: packet from 188.40.57.4:500: ignoring informational payload, type NO_PROPOSAL_CHOSEN
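
    Two things that might be worth checking (a sketch under assumptions, not a verified fix): Openswan authenticates with RSA signatures by default, so a PSK-only setup usually needs authby=secret spelled out, and an authentication/proposal mismatch during main mode often shows up on the initiator exactly as NO_PROPOSAL_CHOSEN. The commented-out subnet lines also mean the server would negotiate a host-to-host tunnel while the router is configured for the two /25 subnets. Something along these lines on the server side:

        conn gprs-field-devices
            authby=secret
            left=my.pub.lic.ip
            leftsubnet=192.168.1.129/25
            right=%any
            rightsubnet=192.168.1.1/25
            auto=add

    ipsec auto --status on the server and plutodebug=control in the "config setup" section are also useful for seeing exactly which phase 1 / phase 2 proposal the responder is rejecting.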

    Read the article

  • What is the current state of Ubuntu's transition from init scripts to Upstart? [migrated]

    - by Adam Eberlin
    What is the current state of Ubuntu's transition from init.d scripts to Upstart? I was curious, so I compared the contents of /etc/init.d/ to /etc/init/ on one of our development machines, which is running Ubuntu 12.04 LTS Server.

        # /etc/init.d/                # /etc/init/
        acpid                         acpid.conf
        apache2                       ---------------------------
        apparmor                      ---------------------------
        apport                        apport.conf
        atd                           atd.conf
        bind9                         ---------------------------
        bootlogd                      ---------------------------
        cgroup-lite                   cgroup-lite.conf
        ---------------------------   console.conf
        console-setup                 console-setup.conf
        ---------------------------   container-detect.conf
        ---------------------------   control-alt-delete.conf
        cron                          cron.conf
        dbus                          dbus.conf
        dmesg                         dmesg.conf
        dns-clean                     ---------------------------
        friendly-recovery             ---------------------------
        ---------------------------   failsafe.conf
        ---------------------------   flush-early-job-log.conf
        ---------------------------   friendly-recovery.conf
        grub-common                   ---------------------------
        halt                          ---------------------------
        hostname                      hostname.conf
        hwclock                       hwclock.conf
        hwclock-save                  hwclock-save.conf
        irqbalance                    irqbalance.conf
        killprocs                     ---------------------------
        lxc                           lxc.conf
        lxc-net                       lxc-net.conf
        module-init-tools             module-init-tools.conf
        ---------------------------   mountall.conf
        ---------------------------   mountall-net.conf
        ---------------------------   mountall-reboot.conf
        ---------------------------   mountall-shell.conf
        ---------------------------   mounted-debugfs.conf
        ---------------------------   mounted-dev.conf
        ---------------------------   mounted-proc.conf
        ---------------------------   mounted-run.conf
        ---------------------------   mounted-tmp.conf
        ---------------------------   mounted-var.conf
        networking                    networking.conf
        network-interface             network-interface.conf
        network-interface-container   network-interface-container.conf
        network-interface-security    network-interface-security.conf
        newrelic-sysmond              ---------------------------
        ondemand                      ---------------------------
        plymouth                      plymouth.conf
        plymouth-log                  plymouth-log.conf
        plymouth-splash               plymouth-splash.conf
        plymouth-stop                 plymouth-stop.conf
        plymouth-upstart-bridge       plymouth-upstart-bridge.conf
        postgresql                    ---------------------------
        pppd-dns                      ---------------------------
        procps                        procps.conf
        rc                            rc.conf
        rc.local                      ---------------------------
        rcS                           rcS.conf
        ---------------------------   rc-sysinit.conf
        reboot                        ---------------------------
        resolvconf                    resolvconf.conf
        rsync                         ---------------------------
        rsyslog                       rsyslog.conf
        screen-cleanup                screen-cleanup.conf
        sendsigs                      ---------------------------
        setvtrgb                      setvtrgb.conf
        ---------------------------   shutdown.conf
        single                        ---------------------------
        skeleton                      ---------------------------
        ssh                           ssh.conf
        stop-bootlogd                 ---------------------------
        stop-bootlogd-single          ---------------------------
        sudo                          ---------------------------
        ---------------------------   tty1.conf
        ---------------------------   tty2.conf
        ---------------------------   tty3.conf
        ---------------------------   tty4.conf
        ---------------------------   tty5.conf
        ---------------------------   tty6.conf
        udev                          udev.conf
        udev-fallback-graphics        udev-fallback-graphics.conf
        udev-finish                   udev-finish.conf
        udevmonitor                   udevmonitor.conf
        udevtrigger                   udevtrigger.conf
        ufw                           ufw.conf
        umountfs                      ---------------------------
        umountnfs.sh                  ---------------------------
        umountroot                    ---------------------------
        ---------------------------   upstart-socket-bridge.conf
        ---------------------------   upstart-udev-bridge.conf
        urandom                       ---------------------------
        ---------------------------   ureadahead.conf
        ---------------------------   ureadahead-other.conf
        ---------------------------   wait-for-state.conf
        whoopsie                      whoopsie.conf

    To be honest, I'm not entirely sure if I'm interpreting the division of responsibilities properly, as I didn't expect to see any overlap (of what framework handles which services). So I was quite surprised to learn that there was a significant amount of overlap in service references, in addition to being unable to discern which of the two was intended to be the primary service framework. Why does there seem to be a fair amount of redundancy in individual service handling between init.d and Upstart? Is something else at play here that I'm missing? What is preventing Upstart from completely taking over from init.d? Is there some functionality that certain daemons require which Upstart does not yet have, and which is preventing some services from converting? Or is it something else entirely?
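
    As a side note, a quick way to see which system actually owns a given service on a machine like this (service names below are just examples):

        # Upstart jobs and their current state
        initctl list | grep -E '^(ssh|cron) '

        # does the service have an Upstart job, a SysV script, or both?
        ls -l /etc/init/ssh.conf /etc/init.d/ssh 2>/dev/null

    For services that have an /etc/init/*.conf job, the matching /etc/init.d entry on 12.04 is typically just a symlink to /lib/init/upstart-job, kept so that the old service and /etc/init.d habits keep working.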

    Read the article

  • Connect to wired and wireless networks at same time, Ubuntu

    - by Gary Chambers
    Currently, I have a media PC running Ubuntu 10.04 that I am trying to connect via a wired network cable directly to a NAS box, and wirelessly to the router. This works no problem after I run sudo /etc/init.d/networking restart, but I can't get both interfaces to come up on system startup. My /etc/network/interfaces file reads as follows:

        auto eth0
        iface eth0 inet static
            address 10.0.1.2
            netmask 255.255.254.0
            broadcast 10.0.1.255
            network 10.0.1.0

        auto wlan2
        iface wlan2 inet dhcp

    As I say, I know this works, because I can get it to work by restarting the network interfaces, but I can't bring them both up on system startup. Does anyone know why this might be?
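
    One possible gap (an assumption; it depends on whether ifupdown or NetworkManager is meant to own wlan2): the wlan2 stanza has no wireless settings, so at boot ifupdown runs DHCP before the card has associated with any access point, which might explain why a later restart works. A sketch of a self-contained stanza for a WPA-protected network (SSID and passphrase are placeholders):

        auto wlan2
        iface wlan2 inet dhcp
            wpa-ssid "YourHomeSSID"
            wpa-psk  "your-passphrase"

    If NetworkManager is managing wlan2 instead, the usual advice is the opposite: leave wlan2 out of /etc/network/interfaces entirely and let NetworkManager bring it up.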

    Read the article

  • Book Review – Getting Started With OAuth 2.0

    - by Lori Lalonde
    Getting Started With OAuth 2.0, by Ryan Boyd, provides an introduction to the latest version of the OAuth protocol. The author starts off by exploring the origins of OAuth, along with its importance, and why developers should care about it. The bulk of this book involves a discussion of the various authorization flows that developers will need to consider when developing applications that will incorporate OAuth to manage user access and authorization. The author explains in detail which flow is appropriate to use based on the application being developed, as well as how to implement each type with step-by-step examples. Note that the examples in the book are focused on the Google and Facebook APIs. Personally, I would have liked to see some examples with the Twitter API as well. In addition to that, the author also discusses security considerations, error handling (what is returned if the access request fails), and access tokens (when are access tokens refreshed, and how access can be revoked). This book provides a good starting point for those developers looking to understand what OAuth is and how they can leverage it within their own applications. The book wraps up with a list of tools and libraries that are available to further assist the developer in exploring the APIs supporting the OAuth specification. I highly recommend this book as a must-read for developers at all levels that have not yet been exposed to OAuth. The eBook format of this book was provided free through O'Reilly's Blogger Review program. This book can be purchased from the O'Reilly book store at: : http://shop.oreilly.com/product/0636920021810.do

    Read the article

  • Trixbox CentOS Default GW Problem (Multi-homed server)

    - by slashp
    I'm having an issue with a CentOS Trixbox server which is dual-homed (one private-facing NIC [eth1], one internet-facing NIC [eth0]). I can't seem to get the default gateway set properly to our ISP's gateway via eth0. I've modified /etc/sysconfig/network to contain both a GATEWAY and a GATEWAYDEV line, and removed the GATEWAY line from /etc/sysconfig/network-scripts/ifcfg-eth1 (as well as from /etc/sysconfig/network-scripts/ifcfg-eth0). No default gateway shows up in the routing table unless it's specified in the ifcfg-eth1 file (which is both the wrong interface and the wrong gateway IP); otherwise, the routing table simply does not contain a default gateway. Any ideas would be greatly appreciated! Thanks! EDIT: Just realized that when attempting to add the default gateway manually using the route add command, I receive an error stating: SIOCADDRT: Network is unreachable. I know this error can occur when your default gateway and interface IP address are not on the same subnet; in this case, the public IP address of eth0 is a /29.
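
    A sketch of the configuration that normally produces the wanted default route on CentOS (the gateway address 203.0.113.9 is a placeholder and has to fall inside eth0's /29):

        # /etc/sysconfig/network
        NETWORKING=yes
        GATEWAY=203.0.113.9
        GATEWAYDEV=eth0

        # sanity checks after "service network restart"
        ip addr show eth0     # does the /29 on eth0 actually contain the gateway address?
        ip route show         # is there now a "default via 203.0.113.9 dev eth0" line?

    The SIOCADDRT error when adding the route by hand is the usual symptom of the gateway not being covered by any interface's subnet, so double-checking the NETMASK/PREFIX in ifcfg-eth0 against the addresses the ISP assigned is probably the first step.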

    Read the article

  • Prevent users from Router 2 seeing Router 1 computers

    - by Patrick Robert Shea O'Connor
    I've got 2 Netgear N300 (WNR2000v3) routers. Here's my setup: modem -> Router 1 (private users) -> Router 2 (public wireless users on its "Guest" network). I want to prevent users who are connected to Router 2's "Guest" network from accessing anything that is connected to Router 1. There is an option when setting up the "Guest" network called "Allow guest to access My Local Network" which I thought, if unchecked, would do this very thing; however, I can still access files and such on computers connected to Router 1. Router 1 assigns 192.0.0.x IP addresses and Router 2 assigns 10.0.0.x IP addresses, so how can they even see each other? Do I need to change the subnet or something else?

    Read the article

  • Proper policy for user setup

    - by Dave Long
    I am still fairly new to linux hosting and am currently working on some policies for our production ubuntu servers. The servers are public facing webservers with ssh access from the public network and database servers with ssh access from the internal private network. We are a small hosting company so in the past with windows servers we used one user account and one password that each of us used internally. Anyone outside of the company who needed to access the server for FTP or anything else had their own user account. Is that okay to do in the linux world, or would most people recommend using individual accounts for each person who needs to access the server?

    Read the article
