Search Results

Search found 5294 results on 212 pages for 'multi wan'.


  • Inbound SIP calls through Cisco 881 NAT hang up after a few seconds

    - by MasterRoot24
    I've recently moved to a Cisco 881 router for my WAN link. I was previously using a Cisco Linksys WAG320N as my modem/router/WiFi AP/NAT firewall. The WAG320N is now running in bridged mode, so it's simply acting as a modem with one of its LAN ports connected to FE4 WAN on my Cisco 881. The Cisco 881 gets a DHCP-provided IP from my ISP. My LAN is part of the default Vlan 1 (192.168.1.0/24). General internet connectivity is working great, and I've managed to set up static NAT rules for my HTTP/HTTPS/SMTP/etc. services which are running on my LAN. I don't know whether it's worth mentioning that I've opted to use an NVI NAT setup (ip nat enable as opposed to the traditional ip nat outside/ip nat inside). My reason for this is that NVI allows NAT loopback from my LAN to the WAN IP and back into the necessary server on the LAN. I run an Asterisk 1.8 PBX on my LAN, which connects to a SIP provider on the internet. Both inbound and outbound calls through the old setup (WAG320N providing routing/NAT) worked fine. However, since moving to the Cisco 881, inbound calls drop after around 10 seconds, whereas outbound calls work fine. The following message is logged on my Asterisk PBX: [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3641 retrans_pkt: Retransmission timeout reached on transmission [email protected] for seqno 1 (Critical Response) -- See https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions Packet timed out after 6528ms with no response [Dec 9 15:27:45] WARNING[27734]: chan_sip.c:3670 retrans_pkt: Hanging up call [email protected] - no reply to our critical packet (see https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions). (I know that this is quite a common issue - I've spent the best part of 2 days solid on this, trawling Google.) I've done as I am told and checked https://wiki.asterisk.org/wiki/display/AST/SIP+Retransmissions. Referring to the section "Other SIP requests" in the page linked above, I believe the hangup to be caused by the ACK from my SIP provider not being passed back through NAT to Asterisk on my PBX. I tried to ascertain this by dumping the packets on my WAN interface on the 881. I managed to obtain a PCAP dump of packets in/out of my WAN interface. Here's an example of an ACK being received by the router from my provider: 689 21.219999 193.x.x.x 188.x.x.x SIP 502 Request: ACK sip:[email protected] | However, a SIP trace on the Asterisk server shows that there are no ACKs received in response to the 200 OK from my PBX: http://pastebin.com/wwHpLPPz In the past, I have been strongly advised to disable any sort of SIP ALG on routers and/or firewalls, and the many posts regarding this issue on the internet seem to support this. I believe that on Cisco IOS the config command to disable the SIP ALG is no ip nat service sip udp port 5060; however, this doesn't appear to help the situation. To confirm that the config setting is set: Router1#show running-config | include sip no ip nat service sip udp port 5060 Another interesting twist: for a short period of time, I tried another provider. Luckily, my trial account with them is still available, so I reverted my Asterisk config back to the revision before I integrated with my current provider. I then dialled in to the DDI associated with the trial trunk and the call didn't get hung up and I didn't get the error above! To me, this points at the provider; however, I know that, like all providers, they will say "There are no issues with our SIP proxies - it's your firewall."
I'm tempted to agree with this, as this issue was not apparent with the old WAG320N router when it was doing the NAT'ing. I'm sure you'll want to see my running-config too: ! ! Last configuration change at 15:55:07 UTC Sun Dec 9 2012 by xxx version 15.2 no service pad service tcp-keepalives-in service tcp-keepalives-out service timestamps debug datetime msec localtime show-timezone service timestamps log datetime msec localtime show-timezone no service password-encryption service sequence-numbers ! hostname Router1 ! boot-start-marker boot-end-marker ! ! security authentication failure rate 10 log security passwords min-length 6 logging buffered 4096 logging console critical enable secret 4 xxx ! aaa new-model ! ! aaa authentication login local_auth local ! ! ! ! ! aaa session-id common ! memory-size iomem 10 ! crypto pki trustpoint TP-self-signed-xxx enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-xxx revocation-check none rsakeypair TP-self-signed-xxx ! ! crypto pki certificate chain TP-self-signed-xxx certificate self-signed 01 quit no ip source-route no ip gratuitous-arps ip auth-proxy max-login-attempts 5 ip admission max-login-attempts 5 ! ! ! ! ! no ip bootp server ip domain name dmz.merlin.local ip domain list dmz.merlin.local ip domain list merlin.local ip name-server x.x.x.x ip inspect audit-trail ip inspect udp idle-time 1800 ip inspect dns-timeout 7 ip inspect tcp idle-time 14400 ip inspect name autosec_inspect ftp timeout 3600 ip inspect name autosec_inspect http timeout 3600 ip inspect name autosec_inspect rcmd timeout 3600 ip inspect name autosec_inspect realaudio timeout 3600 ip inspect name autosec_inspect smtp timeout 3600 ip inspect name autosec_inspect tftp timeout 30 ip inspect name autosec_inspect udp timeout 15 ip inspect name autosec_inspect tcp timeout 3600 ip cef login block-for 3 attempts 3 within 3 no ipv6 cef ! ! multilink bundle-name authenticated license udi pid CISCO881-SEC-K9 sn ! ! username xxx privilege 15 secret 4 xxx username xxx secret 4 xxx ! ! ! ! ! ip ssh time-out 60 ! ! ! ! ! ! ! ! ! interface FastEthernet0 no ip address ! interface FastEthernet1 no ip address ! interface FastEthernet2 no ip address ! interface FastEthernet3 switchport access vlan 2 no ip address ! interface FastEthernet4 ip address dhcp no ip redirects no ip unreachables no ip proxy-arp ip nat enable duplex auto speed auto ! interface Vlan1 ip address 192.168.1.1 255.255.255.0 no ip redirects no ip unreachables no ip proxy-arp ip nat enable ! interface Vlan2 ip address 192.168.0.2 255.255.255.0 ! ip forward-protocol nd ip http server ip http access-class 1 ip http authentication local ip http secure-server ip http timeout-policy idle 60 life 86400 requests 10000 ! ! no ip nat service sip udp port 5060 ip nat source list 1 interface FastEthernet4 overload ip nat source static tcp x.x.x.x 80 interface FastEthernet4 80 ip nat source static tcp x.x.x.x 443 interface FastEthernet4 443 ip nat source static tcp x.x.x.x 25 interface FastEthernet4 25 ip nat source static tcp x.x.x.x 587 interface FastEthernet4 587 ip nat source static tcp x.x.x.x 143 interface FastEthernet4 143 ip nat source static tcp x.x.x.x 993 interface FastEthernet4 993 ip nat source static tcp x.x.x.x 1723 interface FastEthernet4 1723 ! ! logging trap debugging logging facility local2 access-list 1 permit 192.168.1.0 0.0.0.255 access-list 1 permit 192.168.0.0 0.0.0.255 no cdp run ! ! ! ! control-plane ! ! banner motd Authorized Access only ! 
line con 0 login authentication local_auth length 0 transport output all line aux 0 exec-timeout 15 0 login authentication local_auth transport output all line vty 0 1 access-class 1 in logging synchronous login authentication local_auth length 0 transport preferred none transport input telnet transport output all line vty 2 4 access-class 1 in login authentication local_auth length 0 transport input ssh transport output all ! ! end ...and, if it's of any use, here's my Asterisk SIP config: [general] context=default ; Default context for calls allowoverlap=no ; Disable overlap dialing support. (Default is yes) udpbindaddr=0.0.0.0 ; IP address to bind UDP listen socket to (0.0.0.0 binds to all) ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060) tcpenable=no ; Enable server for incoming TCP connections (default is no) tcpbindaddr=0.0.0.0 ; IP address for TCP server to bind to (0.0.0.0 binds to all interfaces) ; Optionally add a port number, 192.168.1.1:5062 (default is port 5060) srvlookup=yes ; Enable DNS SRV lookups on outbound calls ; Note: Asterisk only uses the first host ; in SRV records ; Disabling DNS SRV lookups disables the ; ability to place SIP calls based on domain ; names to some other SIP users on the Internet ; Specifying a port in a SIP peer definition or ; when dialing outbound calls will supress SRV ; lookups for that peer or call. directmedia=no ; Don't allow direct RTP media between extensions (doesn't work through NAT) externhost=<MY DYNDNS HOSTNAME> ; Our external hostname to resolve to IP and be used in NAT'ed packets localnet=192.168.1.0/24 ; Define our local network so we know which packets need NAT'ing qualify=yes ; Qualify peers by default dtmfmode=rfc2833 ; Set the default DTMF mode disallow=all ; Disallow all codecs by default allow=ulaw ; Allow G.711 u-law allow=alaw ; Allow G.711 a-law ; ---------------------- ; SIP Trunk Registration ; ---------------------- ; Orbtalk register => <MY SIP PROVIDER USER NAME>:[email protected]/<MY DDI> ; Main Orbtalk number ; ---------- ; Trunks ; ---------- [orbtalk] ; Main Orbtalk trunk type=peer insecure=invite host=sipgw3.orbtalk.co.uk nat=yes username=<MY SIP PROVIDER USER NAME> defaultuser=<MY SIP PROVIDER USER NAME> fromuser=<MY SIP PROVIDER USER NAME> secret=xxx context=inbound I really don't know where to go with this. If anyone can help me find out why these calls are being dropped off, I'd be grateful if you could chime in! Please let me know if any further info is required.
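
    One way to narrow this down further is to watch port 5060 directly on the Asterisk host, so you can tell whether the provider's ACK dies inside the 881 or actually reaches the PBX. Below is a minimal Python 3 sketch of such a check (Linux only, run as root on the PBX; tcpdump -n port 5060 would give the same information). It assumes plain Ethernet framing and simply prints the request line of every inbound SIP datagram:

        import socket

        # Raw AF_PACKET socket: see every frame the PBX receives (Linux, root required).
        ETH_P_ALL = 0x0003
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))

        while True:
            frame, _ = s.recvfrom(65535)
            if len(frame) < 34 or int.from_bytes(frame[12:14], "big") != 0x0800:
                continue                          # not IPv4
            ip = frame[14:]                       # skip the 14-byte Ethernet header
            if ip[9] != 17:                       # protocol 17 = UDP
                continue
            ihl = (ip[0] & 0x0F) * 4              # IP header length in bytes
            udp = ip[ihl:]
            if int.from_bytes(udp[2:4], "big") != 5060:
                continue                          # not addressed to SIP
            request_line = udp[8:].split(b"\r\n", 1)[0]
            print(request_line.decode(errors="replace"))

    If the ACKs never show up here while the router's WAN capture clearly receives them, that confirms the 881 is eating them; if they do arrive but Asterisk still times out, the NAT is probably off the hook and the dialog/port matching on the PBX side is the next thing to look at.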

    Read the article

  • openwrt uses a single interface bridge?

    - by timbo
    My understanding of bridging is that it ties together two interfaces at layer 2. I am looking at a Ubiquiti Nanostation2 running OpenWRT that has an Ethernet port 'eth0' and a wifi port 'ath0'. The Ethernet port (the 'wan' port) is not part of the bridge, and the bridge is just a single interface. Can anyone clarify this? It seems very different from Ubuntu. /etc/config/network:

        config 'interface' 'loopback'
            option 'ifname' 'lo'
            option 'proto' 'static'
            option 'ipaddr' '127.0.0.1'
            option 'netmask' '255.0.0.0'

        config 'interface' 'wan'
            option 'ifname' 'eth0'
            option 'proto' 'dhcp'

        config 'interface' 'wifi'
            option 'ipaddr' '192.168.13.1'
            option 'type' 'bridge'
            option 'proto' 'static'
            option 'netmask' '255.255.255.0'
            option 'ifname' 'wifi0'

    Read the article

  • Upgrading from MVC 1.0 to MVC2 in Visual Studio 2010 and VS2008.

    - by Sam Abraham
    With MVC2 officially released, I was involved in a few conversations regarding the feasibility of upgrading existing MVC 1.0 projects to quickly leverage the newly introduced MVC features. Luckily, Microsoft has proactively addressed this question for both Visual Studio 2008 and 2010 and many online resources discussing the upgrade process are a "Bing/Google Search" away. As I will happen to be speaking about MVC2 and Visual Studio 2010 at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (Check http://www.fladotnet.com for more info.), I decided to include a quick demo on upgrading the NerdDinner project (which I consider the "Hello MVC World" project) from MVC 1.0 to MVC2 using Visual studio 2010 to demonstrate how simple the upgrade process is. In the next few lines, I will be briefly touching on upgrading to MVC2 for Visual Studio 2008 then discussing, in more detail, the upgrade process using Visual Studio 2010 while highlighting the advantage of its multi-targeting support. Using Visual Studio 2008 SP1 For upgrading to MVC2 Using VS2008 SP1, a Microsoft White Paper [1] presents two approaches:  1- Using a provided automated upgrade tool, 2-Manually upgrading the project. I personally prefer using the automated tool although it comes with an "AS IS" disclaimer. For those brave souls, or those who end up with no luck using the tool, detailed manual upgrade steps are also provided as a second option. Backing up the project in question is a must regardless of which route one would take to upgrade. Using Visual Studio 2010 Life is much easier for developers who already adopted Visual Studio 2010. Simply opening the MVC 1.0 solution file brings up the upgrade wizard as shown in figures 1, 2, 3 and 4. As we proceed with the upgrade process, the wizard requests confirmation on whether we choose to upgrade our target framework version to .Net 4.0 or keep the existing .Net 3.5 (Figure 5). VS2010 does a good job with multi-targeting where we can still develop .Net 3.5 applications while leveraging all the new bells and whistles that VS2010 brings to the table (Multi-targeting enables us to develop with as early as .Net 2.0 in VS2010) Figure 1 - Open Solution File Using VS2010   Figure 2 - VS2010 Conversion Wizard Figure 3- Ready To Convert To VS2010 Confirmation Screen Figure 4 - VS2010 Solution Conversion Progress Figure 5 - Confirm Target Framework Upgrade In an attempt to make my demonstration realistic, I decided to opt to keep the project targeted to the .Net 3.5 Framework.  After the successful completion of the conversion process,  a quick sanity check revealed that the NerdDinner project is still targeted to the .Net 3.5 framework as shown in figure 6. Inspecting the Web.Config revealed that the MVC DLL version our code compiles against has been successfully upgraded to 2.0 (Figure 7) and hence we should now be able to leverage the newly introduced features in MVC2 and VS2010 with no effort or time invested on modifying existing code. Figure 6- Confirm Target Framework Remained .Net 3.5  Figure 7 - Confirm MVC DLL Version Has Been Upgraded In Conclusion, Microsoft has empowered developers with the tools necessary to quickly and seamlessly upgrade their MVC solutions to the newly released MVC2. The multi-targeting feature in Visual Studio 2010 enables us to adopt this latest and greatest development tool while supporting development in as early as .Net 2.0. References 1. 
"Upgrading an ASP.NET MVC 1.0 Application to ASP.NET MVC 2" http://www.asp.net/learn/whitepapers/aspnet-mvc2-upgrade-notes

    Read the article

  • Threading Overview

    - by ACShorten
    One of the major features of the batch framework is the ability to support multi-threading. The multi-threading support allows a site to increase throughput on an individual batch job by splitting the total workload across multiple individual threads. This means each thread has fine-grained control over a segment of the total data volume at any time. The idea behind the threading is based upon the notion that "many hands make light work". Each thread takes a segment of data in parallel and operates on that smaller set. The object identifier allocation algorithm built into the product randomly assigns keys to help ensure an even distribution of the numbers of records across the threads and to minimize resource and lock contention. The best way to visualize the concept of threading is to use a "pie" analogy. Imagine the total workset for a batch job is a "pie". If you split that pie into equal-sized segments, each segment would represent an individual thread. The concept of threading has advantages and disadvantages: Smaller elapsed runtimes - Jobs that are multi-threaded finish earlier than jobs that are single-threaded. With smaller amounts of work to do, jobs with threading will finish earlier. Note: The elapsed runtime of the threads is rarely proportional to the number of threads executed. Even though contention is minimized, some contention does exist for resources, which can adversely affect runtime. Threads can be managed individually – Each thread can be started individually and can also be restarted individually in case of failure. If you need to rerun thread X then that is the only thread that needs to be resubmitted. Threading can be somewhat dynamic – The number of threads that are run on any instance can be varied, as the thread number and thread limit are parameters passed to the job at runtime. They can also be configured using the configuration files outlined in this document and the relevant manuals. Note: Threading is not dynamic after the job has been submitted. Failure risk due to data issues with threading is reduced – As mentioned earlier, individual threads can be restarted in case of failure. This limits the risk to the total job if there is a data issue with a particular thread or a group of threads. Number of threads is not infinite – As with any resource, there is a theoretical limit. While the thread limit can be up to 1000 threads, the number of threads you can physically execute will be limited by the CPU and IO resources available to the job at execution time. Theoretically, with the object identifiers evenly spread across the threads, the elapsed runtimes of the threads should all be the same. In other words, when executing in multiple threads, theoretically all the threads should finish at the same time. Whilst this is possible, it is also possible that individual threads may take longer than other threads for the following reasons: Workloads within the threads are not always the same - Whilst each thread is operating on roughly the same number of objects, the amount of processing for each object is not always the same. For example, an account may have a more complex rate which requires more processing, or a meter may have a complex amount of configuration to process. If a thread has a higher proportion of objects with complex processing it will take longer than a thread with simple processing. The amount of processing is dependent on the configuration of the individual data for the job.
Data may be skewed – Even though the object identifier generation algorithm attempts to spread the object identifiers across threads, there are some jobs that use additional factors to select records for processing. If any of those factors exhibit any data skew, then certain threads may finish later. For example, if more accounts are allocated to a particular part of a schedule, then threads in that schedule may finish later than other threads. Threading is important to the success of individual jobs. For more guidelines and techniques for optimizing threading, refer to the Multi-Threading Guidelines section of the Batch Best Practices for Oracle Utilities Application Framework based products whitepaper (Doc Id: 836362.1), available from My Oracle Support.

    Read the article

  • Is span monitoring on Cisco ASA 5520 possible?

    - by Brent
    From what I have read, you can use the switchport monitor command on ASA 5505s to set up a SPAN port, because the back of the ASA is actually a switch. On my 5520, I do not see the switchport command listed when issuing a ? via the CLI. How do people monitor traffic on non-5505s? My goal is to connect our IDS/IPS device, which is running in promiscuous mode, to an Ethernet port on the 5520 to monitor WAN traffic. I do not want to have to pass the WAN traffic through a switch, as it would require me to get two (for redundancy) STP/switchport-capable switches. Guide to setting up switchport access on a 5505: http://www.wr-mem.com/?p=66

    Read the article

  • Redirecting subsite on same domain to other IIS using HTTPS

    - by Alberto
    I've seen many similar questions (and answers) on this subject, but none seem to cover exactly the situation I am facing, which is weird since I don't think it is that special, so forgive me if I haven't searched enough. Anyway: I have two websites on two IIS 7 servers, one facing the WAN and one on the LAN. The WAN-facing one is already HTTPS-only. I want to add the second website, but on the same HTTPS domain and SSL certificate, so that it becomes a subsite like https://www.domain.com/subsite. How can I do a redirect or rewrite on the first IIS to the second one to make this work? I don't think there is a standard IIS feature that can do this. ISA Server is not an option currently. But maybe another extension to IIS exists? I've done this numerous times on Apache, and am about to ditch IIS for Apache.
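
    For what it's worth, the usual way to get this on IIS 7 without ISA is the URL Rewrite module plus Application Request Routing (ARR) acting as a reverse proxy on the WAN-facing server. A rough sketch of the rule is below; internal-server is a placeholder for the LAN box, and ARR's proxy mode has to be enabled for the Rewrite action to cross machines:

        <system.webServer>
          <rewrite>
            <rules>
              <!-- Proxy https://www.domain.com/subsite/* to the internal IIS -->
              <rule name="subsite reverse proxy" stopProcessing="true">
                <match url="^subsite/(.*)" />
                <action type="Rewrite" url="http://internal-server/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    The public certificate stays bound to the front website only; the hop to the internal server can stay plain HTTP on the LAN or be HTTPS if required.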

    Read the article

  • Chain Gateways on the LAN

    - by Black2night
    I installed m0n0wall in a virtualized environment. I have 10 PCs connected to a router (192.168.1.0/24) which connects them to the internet through PPPoE; the problem is that this router does not have QoS. So what I want to do is the following: let all the PCs get their IPs from the router, but with m0n0wall as their default gateway. The m0n0wall will have 2 interfaces (LAN 192.168.1.20, and WAN 192.168.1.21 with default gateway 192.168.1.1). Now, when any PC wants to access the internet, it should go through m0n0wall, and m0n0wall will forward the connection to the default gateway through the WAN interface, which is the PPPoE running on the router (192.168.1.1). The big question: is this scenario possible or not, and what do you suggest? Thanks

    Read the article

  • DHCP for Multiple Subnets

    - by TheD
    So this is the current setup - essentially, I would like to get my DHCP server serving DHCP requests for two separate subnets. A Netgear DG834G acting as a modem is connected to a SonicWall Pro 2040: X0 - LAN - 192.168.1.0/24; X1 - WAN - <WAN-IP>; X2 - WLAN - 192.168.10.0/24. At the moment, I have a 2008 R2 server with DHCP installed, with an IP address in the 192.168.1.0/24 range, handling DHCP fine for this subnet. The SonicWall is configured correctly - anything connected to the WLAN has Full Allow to anything in the LAN, and vice versa - but WLAN clients will not lease an IP from my server. I've also added another IP address to the server, so the physical NIC now has two IPs, 192.168.1.2 and 192.168.10.2, with a DHCP scope configured for each. Still no luck! Any ideas? Thanks!

    Read the article

  • iptables question

    - by RubyFreak
    I have a small network with one valid IP and a firewall with 3 network interfaces (LAN, WAN, DMZ). I want to enable PAT on this valid IP to redirect HTTP traffic to a server in my DMZ (done). I want to enable MASQ on this IP for traffic that comes from my LAN (done). I also want to be able to access my HTTP server in the DMZ from my LAN (only partially working). Question: in the above scenario, I cannot access my HTTP server in the DMZ from my LAN, since it is published on the IP used by the MASQ (the only valid IP that I have). What would be the best option to solve this problem? Network interfaces: eth0 (WAN), eth1 (DMZ), eth2 (LAN).

        /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        /sbin/iptables -A FORWARD -o eth1 -d 2.2.2.2 -p tcp --dport 80 -j ACCEPT
        /sbin/iptables -t nat -A PREROUTING -i eth0 -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to 2.2.2.2
        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        /sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        /sbin/iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
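
    The usual fix for this is hairpin NAT ("NAT loopback"): DNAT LAN clients that aim at the public IP, and SNAT that hairpinned traffic so the DMZ server's replies come back through the firewall instead of going straight to the LAN host. A sketch using the placeholder addresses from the question, plus two assumptions - 192.168.0.0/24 as the LAN subnet and 2.2.2.1 as the firewall's own DMZ address:

        # LAN clients that connect to the public IP get sent to the DMZ web server
        /sbin/iptables -t nat -A PREROUTING -i eth2 -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to 2.2.2.2
        # Rewrite the source so the server answers the firewall, not the LAN host directly
        /sbin/iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth1 -d 2.2.2.2 -p tcp --dport 80 -j SNAT --to 2.2.2.1
        # Allow the forwarded LAN -> DMZ traffic
        /sbin/iptables -A FORWARD -i eth2 -o eth1 -d 2.2.2.2 -p tcp --dport 80 -j ACCEPT

    The cleaner alternative is split DNS: have the internal DNS hand out 2.2.2.2 for the site's hostname so LAN clients never touch the public IP at all.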

    Read the article

  • dd-wrt switch for PfSense

    - by Kmao
    I currently have eth2 on my pfSense box set up and configured as 192.168.1.1; it has DHCP set up with an allocation of 192.168.1.10 - 192.168.1.245. On my DD-WRT box, I disabled the WAN and set it to act as a port for the switch. I disabled DHCP, dnsmasq, the SPI firewall and wlan0, and set a static IP of 192.168.1.10 for the router. pfSense is plugged into lan0 and a PC is plugged into lan1 (the WAN port is empty). I have followed a few different guides, but I can't seem to get my router to act as a switch. Has anyone had success using DD-WRT as a switch while using pfSense as your DHCP/DNS/gateway? Any advice would help :)

    Read the article

  • Process for migrating Dropbox to SpiderOak

    - by Marcel Janus
    I want to move my data from Dropbox to SpiderOak. I have 3 computers running Dropbox, but I have a poor WAN connection with very limited upload bandwidth. So I thought that, as a first step, I would install the Dropbox client on my internet-facing server and download my Dropbox data there. Then I would upload/back up my data from this server, which has a broadband connection, to SpiderOak. After the backup is completed, I would set up the sync between my 3 computers so that they will not have to upload the data again. Will this process work so that I don't have to upload my data again over my WAN connection at home?

    Read the article

  • unreachable subnet in one direction

    - by Carl Michaud
    I'm unable to route traffic to a subnet in my home. Here's my topology: INET <- Router A <- Router B <- WiFi AP, where: Router A - ARRIS TG862G/CT (from Comcast) - WAN DHCP, LAN 10.0.0.99; Router B - Linksys WRT400N with DD-WRT - WAN 10.0.0.12, LAN 10.0.1.1. I was able to use DD-WRT to configure a static route for the 10.0.1.x network back to the 10.0.0.x network, but I'm not sure if I can do the reverse on the ARRIS router and was looking for suggestions on how to fix that. I've set things up this way because I am using OpenDNS to manage internet content for my kids, and of course I wasn't able to configure DNS to my liking on Router A either. So at present I'm using Router A on 10.0.0.x to provide a private unfiltered WiFi AP (no SSID broadcast) that I use for Netflix only, and then I use Router B to provide a filtered WiFi AP for the rest of my WiFi devices on 10.0.1.x. However, I would like to be able to connect to my 10.0.1.x devices from the internet via dynamic DNS, but I can't see anything behind Router B this way. Thoughts?

    Read the article

  • How can I setup a Firewall without NAT?

    - by SRobertJames
    We have 16 IP addresses from our ISP, and are setting up a SonicWall Firewall. I'd like to have the SonicWall do NAT for the LAN, but act as a firewall only (no NAT) for the servers which are using some of the 16 addresses. How do I set this up? If I set the WAN's subnet to include the 16 IPs, the SonicWall won't route the traffic to the LAN interface. Should I set the WAN subnet to only include the ones we are dedicating for NAT, and then keep the others on the LAN? Related point: How can I set multiple IP addresses for a SonicWall LAN interface?

    Read the article

  • Creating Ubuntu Browser App Frames

    - by user73006
    After watching the video I was inspired to create a browser, but I'm stuck at one point; could you please help me with this? Requirement: like you showed in your video, I want to create multiple buttons in my toolbar, each of which will open a second toolbar or popup window. From that popup window I want to select a specific button which will open my required browser. Question: as shown in your video, I created a new button, and if I try to open a new link using it, that works; but now I want to display a toolbar or popup window once anyone clicks on that button - how can I do that? The second toolbar needs to be activated only after clicking on that button. Things I tried: as per my understanding I created a second toolbar and created a button on it; now I want to know how to link that toolbar to my browser toolbar button. I tried doing this by passing the Signal property on the second toolbar in Quickly, but something is missing. My code:

        class TvbrowserWindow(Window):
            gtype_name = "TvbrowserWindow"

            def finish_initializing(self, builder):  # pylint: disable=E1002
                """Set up the main window"""
                super(TvbrowserWindow, self).finish_initializing(builder)
                self.AboutDialog = AboutTvbrowserDialog
                self.PreferencesDialog = PreferencesTvbrowserDialog
                # Code for other initialization actions should be added here.
                self.refreshbutton = self.builder.get_object("refreshbutton")
                self.SONY = self.builder.get_object("SONY")
                self.urlentry = self.builder.get_object("urlentry")
                self.scrolledwindow1 = self.builder.get_object("scrolledwindow1")
                self.webview = WebKit.WebView()
                self.scrolledwindow1.add(self.webview)
                self.webview.show()

            def on_refreshbutton_clicked(self, widget):
                print "refresh"

            def on_urlentry_activate(self, widget):
                url = widget.get_text()
                print url
                self.webview.open(url)
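
    A stripped-down PyGObject/GTK 3 sketch of the "button reveals a second toolbar" idea, independent of the Quickly template above (the widget names here are invented):

        import gi
        gi.require_version("Gtk", "3.0")
        from gi.repository import Gtk

        class Demo(Gtk.Window):
            def __init__(self):
                Gtk.Window.__init__(self, title="toolbar demo")
                box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=0)
                self.add(box)

                main_bar = Gtk.Toolbar()
                self.channel_bar = Gtk.Toolbar()        # second toolbar, hidden at start
                box.pack_start(main_bar, False, False, 0)
                box.pack_start(self.channel_bar, False, False, 0)

                sony = Gtk.ToolButton(label="SONY")
                sony.connect("clicked", self.on_sony_clicked)
                main_bar.insert(sony, -1)

                channel = Gtk.ToolButton(label="Channel 1")
                channel.connect("clicked", lambda b: print("open the channel URL here"))
                self.channel_bar.insert(channel, -1)

            def on_sony_clicked(self, button):
                # The second toolbar only becomes visible after the SONY button is clicked.
                self.channel_bar.show()

        win = Demo()
        win.connect("destroy", Gtk.main_quit)
        win.show_all()
        win.channel_bar.hide()                          # start with the channel bar hidden
        Gtk.main()

    In the Quickly-generated class, the same pattern would mean connecting the SONY button's clicked signal (in finish_initializing or in the Glade file) to a handler that calls show() on the second toolbar.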

    Read the article

  • Disable ALTQ for internal network traffic

    - by javanix
    I currently have a FreeBSD 8.2 media server set up on my LAN that I use to stream my music from. I also have an SSH login that I use to do file transfers to and from this server remotely. I would like to set up ALTQ (and have gotten this working) to limit my outgoing bandwidth from the server for SSH traffic. However, configuring ALTQ this way is also limiting my internal traffic (and thus interfering with my music streaming) since I am only using a single network interface. Can anyone show me how I would use PF and ALTQ to limit outgoing WAN traffic while allowing all internal LAN traffic to go through unhindered?

        ext_if="eth0"
        int_if="eth0"
        altq on eth0 cbq bandwidth 1Mb queue { std, ssh }
        queue std bandwidth 80% cbq(default)
        queue ssh bandwidth 20% cbq(ecn)
        pass out on eth0 proto tcp to port 22 queue ssh

    eth0 is my LAN interface, my total WAN bandwidth on my cable connection is 1Mb/s, and my internal network is 10/100.
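
    One way around this is to declare ALTQ at the card's real speed and only steer WAN-bound traffic into the small queues, leaving LAN-to-LAN streaming in a full-speed default queue. A sketch along those lines, assuming the LAN is 192.168.0.0/24 and that eth0 really links at 100Mb (adjust both, and treat the pf syntax as a starting point rather than a tested config):

        ext_if = "eth0"
        lan    = "192.168.0.0/24"

        # ALTQ sized to the physical interface, not the 1Mb/s uplink
        altq on $ext_if cbq bandwidth 100Mb queue { lan_full, wan_std, wan_ssh }
        queue lan_full bandwidth 99Mb cbq(default)   # local streaming stays here
        queue wan_std  bandwidth 800Kb cbq           # bulk of the 1Mb/s uplink
        queue wan_ssh  bandwidth 200Kb cbq(ecn)      # remote SSH/SCP transfers

        # Last matching rule wins, so the port 22 rule overrides the generic WAN rule
        pass out on $ext_if to $lan queue lan_full
        pass out on $ext_if to ! $lan queue wan_std
        pass out on $ext_if proto tcp to ! $lan port 22 queue wan_ssh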

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied. Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At this point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program. At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are 2^n possible "interleavings" of those instructions. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic, bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads. So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues, if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.

    Read the article

  • How to setup an Openvpn server with two gateways to internet

    - by fourat
    I have an OpenVPN server behind two WAN interfaces, eth1 and eth2, where eth1 is the default gateway and eth2 is where OpenVPN binds. The problem is that my OpenVPN server replies to the OpenVPN client via the default gateway (through eth1), and the TCP negotiation is lost before any tunnel is established. Here's what's happening: WAN client -----> eth2 ----> openvpn -----> eth1 ----> lost and not delivered back to client. Is there a way to tell OpenVPN to stick to eth2 and use it for all traffic?
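
    A common fix is source-based policy routing on the server, so that anything sourced from eth2's address is routed back out via eth2's gateway rather than the default one. A sketch with made-up addresses (eth2 = 198.51.100.10/24 with upstream gateway 198.51.100.1 - substitute the real second-WAN values):

        # Create a second routing table and send eth2-sourced replies through it
        echo "200 wan2" >> /etc/iproute2/rt_tables
        ip route add default via 198.51.100.1 dev eth2 table wan2
        ip rule add from 198.51.100.10/32 table wan2
        ip route flush cache

    OpenVPN's multihome option (which makes the daemon reply from the address a request arrived on) is also worth setting when listening on a multi-homed box, but the routing rule above is what actually steers the reply packets out of the right interface.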

    Read the article

  • SQL 2008 R2 Mirroring Issue

    - by CWL
    Windows 2008 R2 with SQL 2008 R2 - using mirroring of a database across the WAN in an HA setup with one witness. One issue I am having is that during a failure (every so often) the system fails over, or tries to, but leaves both databases in a Restoring state. My guess is the failover issue happens when the WAN bounces and the systems get confused. The usual fix is to reboot the SQL servers. Has anyone seen this type of failure? While this does not happen often, it does cause an issue and a concern that the HA cannot be fully trusted.

    Read the article

  • update jframe in java or revalidate/repaint/ panel

    - by user1516251
    How to update a java frame with changed content I want to update a frame or just the panel with updated content. What do I use for this Here is where i want to revalidate the frame or repaint mainpanel or whatever will work I have tried a number of things, but none of them have worked. public void actionPerformed(ActionEvent e) { //System.out.println(e.getActionCommand()); if (e.getActionCommand().equals("advance")) { multi--; // Revalidate update repaint here <<<<<<<<<<<<<<<<<<< } else if (e.getActionCommand().equals("reverse")) { multi++; // Revalidate update repaint here <<<<<<<<<<<<<<<<<<< } else { openURL(e.getActionCommand()); } } Here is the whole java file /* * * */ package build; import java.lang.reflect.Method; import javax.swing.JOptionPane; import java.util.Arrays; import java.util.*; import java.util.ArrayList; import javax.swing.*; import javax.swing.AbstractButton; import javax.swing.JScrollPane; import javax.swing.JButton; import javax.swing.JPanel; import javax.swing.JFrame; import javax.swing.ImageIcon; import java.awt.*; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.awt.event.KeyEvent; /* * ButtonDemo.java requires the following files: * images/right.gif * images/middle.gif * images/left.gif */ public class StockTable extends JPanel implements ActionListener { static int multi = 1; int roll = 0; static TextVars textvars = new TextVars(); static final String[] browsers = { "firefox", "opera", "konqueror", "epiphany", "seamonkey", "galeon", "kazehakase", "mozilla", "netscape" }; JFrame frame; JPanel mainpanel, panel1, panel2, panel3, panel4, panel2left, panel2center, panel2right; JButton stknames_btn[] = new JButton[textvars.getNumberOfStocks()]; JLabel label[] = new JLabel[textvars.getNumberOfStocks()]; JLabel headlabel, dayspan, namelabel; JRadioButton radioButton; JButton button; JScrollPane scrollpane; int wid = 825; public JPanel createContentPane() { mainpanel = new JPanel(); mainpanel.setPreferredSize(new Dimension(wid, 800)); mainpanel.setLayout(new GridBagLayout()); GridBagConstraints c = new GridBagConstraints(); panel1 = new JPanel(); panel1.setPreferredSize(new Dimension(wid, 25)); c.gridx = 0; c.gridy = 0; c.insets = new Insets(0,0,0,0); mainpanel.add(panel1, c); // Panel 2------------ panel2 = new JPanel(); panel2.setPreferredSize(new Dimension(wid, 51)); c.gridx = 0; c.gridy = 1; c.insets = new Insets(0,0,0,0); mainpanel.add(panel2, c); panel2left = new JPanel(); panel2left.setPreferredSize(new Dimension(270, 51)); c.gridx = 0; c.gridy = 1; c.insets = new Insets(0,0,0,0); panel2.add(panel2left, c); panel2center = new JPanel(); panel2center.setPreferredSize(new Dimension(258, 51)); c.gridx = 1; c.gridy = 1; c.insets = new Insets(0,0,0,0); panel2.add(panel2center, c); panel2right = new JPanel(); panel2right.setPreferredSize(new Dimension(270, 51)); c.gridx = 2; c.gridy = 1; c.insets = new Insets(0,0,0,0); panel2.add(panel2right, c); // ------------------ panel3 = new JPanel(); panel3.setLayout(new GridBagLayout()); scrollpane = new JScrollPane(panel3); scrollpane.setPreferredSize(new Dimension(wid, 675)); c.gridx = 0; c.gridy = 2; c.insets = new Insets(0,0,0,0); mainpanel.add(scrollpane, c); ImageIcon leftButtonIcon = createImageIcon("images/right.gif"); //b1 = new JButton("Disable middle button", leftButtonIcon); //b1.setVerticalTextPosition(AbstractButton.CENTER); //b1.setHorizontalTextPosition(AbstractButton.LEADING); //aka LEFT, for left-to-right locales //b1.setMnemonic(KeyEvent.VK_D); 
//b1.setActionCommand("disable"); //Listen for actions on buttons 1 //b1.addActionListener(this); //b1.setToolTipText("Click this button to disable the middle button."); //Add Components to this container, using the default FlowLayout. //add(b1); headlabel = new JLabel("hellorow1"); c.gridx = 0; c.gridy = 0; c.insets = new Insets(0, 0, 0, 0); panel1.add(headlabel, c); radioButton = new JRadioButton("Percentage"); c.gridx = 2; c.gridy = 0; c.insets = new Insets(0, 0, 0, 0); panel1.add(radioButton, c); radioButton = new JRadioButton("Days Range"); c.gridx = 3; c.gridy = 0; c.insets = new Insets(0, 0, 0, 0); panel1.add(radioButton, c); radioButton = new JRadioButton("Open / Close"); c.gridx = 4; c.gridy = 0; c.insets = new Insets(0, 0, 0,0 ); panel1.add(radioButton, c); button = new JButton("<<"); button.setPreferredSize(new Dimension(50, 50)); button.setActionCommand("reverse"); button.addActionListener(this); c.gridx = 0; c.gridy = 1; c.insets = new Insets(0, 0, 0, 0); panel2left.add(button, c); dayspan = new JLabel("hellorow2"); dayspan.setHorizontalAlignment(JLabel.CENTER); dayspan.setVerticalAlignment(JLabel.CENTER); dayspan.setPreferredSize(new Dimension(270, 50)); c.gridx = 1; c.gridy = 1; c.insets = new Insets(0, 0, 0, 0); panel2center.add(dayspan, c); button = new JButton(">>"); button.setPreferredSize(new Dimension(50, 50)); button.setActionCommand("advance"); button.addActionListener(this); if (multi == 0) { button.setEnabled(false); } else { button.setEnabled(true); } c.gridx = 2; c.gridy = 1; c.insets = new Insets(0, 0, 0, 0); panel2right.add(button, c); int availSpace_int = textvars.getStocks().size()-textvars.getNumberOfStocks()*7; ArrayList<String[]> stocknames = textvars.getStockNames(); ArrayList<String[]> stocks = textvars.getStocks(); for (int column = 0; column < 8; column++) { for (int row = 0; row < textvars.getNumberOfStocks(); row++) { if (column==0) { if (row==0) { namelabel = new JLabel(stocknames.get(0)[0]); namelabel.setVerticalAlignment(JLabel.CENTER); namelabel.setHorizontalAlignment(JLabel.CENTER); namelabel.setPreferredSize(new Dimension(100, 25)); c.gridx = column; c.gridy = row; c.insets = new Insets(0, 0, 0, 0); panel3.add(namelabel, c); } else { stknames_btn[row] = new JButton(stocknames.get(row)[0], leftButtonIcon); stknames_btn[row].setVerticalTextPosition(AbstractButton.CENTER); stknames_btn[row].setActionCommand(stocknames.get(row)[1]); stknames_btn[row].addActionListener(this); stknames_btn[row].setToolTipText("go to Google Finance "+stocknames.get(row)[0]); stknames_btn[row].setPreferredSize(new Dimension(100, 25)); c.gridx = column; c.gridy = row; c.insets = new Insets(0, 0, 0, 0); //scrollpane.add(stknames[row], c); panel3.add(stknames_btn[row], c); } } else { label[row]= new JLabel(textvars.getStocks().get(columnMulti(multi))[1]); label[row].setBorder(BorderFactory.createLineBorder(Color.black)); label[row].setVerticalAlignment(JLabel.CENTER); label[row].setHorizontalAlignment(JLabel.CENTER); label[row].setPreferredSize(new Dimension(100, 25)); c.gridx = column; c.gridy = row; c.insets = new Insets(0,0,0,0); panel3.add(label[row], c); } } } return mainpanel; } public void actionPerformed(ActionEvent e) { //System.out.println(e.getActionCommand()); if (e.getActionCommand().equals("advance")) { multi--; } else if (e.getActionCommand().equals("reverse")) { multi++; } else { openURL(e.getActionCommand()); } } /** Returns an ImageIcon, or null if the path was invalid. 
*/ protected static ImageIcon createImageIcon(String path) { java.net.URL imgURL = StockTable.class.getResource(path); if (imgURL != null) { return new ImageIcon(imgURL); } else { System.err.println("Couldn't find file: " + path); return null; } } public static void openURL(String url) { String osName = System.getProperty("os.name"); try { if (osName.startsWith("Mac OS")) { Class<?> fileMgr = Class.forName("com.apple.eio.FileManager"); Method openURL = fileMgr.getDeclaredMethod("openURL", new Class[] {String.class}); openURL.invoke(null, new Object[] {url}); } else if (osName.startsWith("Windows")) { Runtime.getRuntime().exec("rundll32 url.dll,FileProtocolHandler " + url); } else { //assume Unix or Linux boolean found = false; for (String browser : browsers) if (!found) { found = Runtime.getRuntime().exec( new String[] {"which", browser}).waitFor() == 0; if (found) Runtime.getRuntime().exec(new String[] {browser, url}); } if (!found) throw new Exception(Arrays.toString(browsers)); } } catch (Exception e) { JOptionPane.showMessageDialog(null, "Error attempting to launch web browser\n" + e.toString()); } } int reit = 0; int start = textvars.getStocks().size()-((textvars.getNumberOfStocks()*5)*7)-1; public int columnMulti(int multi) { reit++; start++; if (reit == textvars.getNumberOfStocks()) { reit = 0; start=start+64; } //start = start - (multi*(textvars.getNumberOfStocks())); return start; } /** * Create the GUI and show it. For thread safety, * this method should be invoked from the * event-dispatching thread. */ private static void createAndShowGUI() { //Create and set up the window. JFrame frame = new JFrame("Stock Table"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); //Create and set up the content pane. StockTable newContentPane = new StockTable(); //newContentPane.setOpaque(true); //content panes must be opaque //frame.setContentPane(newContentPane); frame.setContentPane(newContentPane.createContentPane()); frame.setSize(800, 800); //Display the window. frame.pack(); frame.setVisible(true); } public static void main(String[] args) { //Schedule a job for the event-dispatching thread: //creating and showing this application's GUI. javax.swing.SwingUtilities.invokeLater(new Runnable() { public void run() { createAndShowGUI(); } }); } }
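
    The usual pattern for this is: change the underlying data, update or rebuild the affected components, then call revalidate() and repaint() on their container. A small self-contained sketch (names invented; only the idea carries over to StockTable):

        import javax.swing.*;

        public class RefreshDemo {
            private static int multi = 1;

            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JFrame frame = new JFrame("refresh demo");
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    JPanel panel = new JPanel();
                    JLabel offset = new JLabel("offset: " + multi);
                    JButton advance = new JButton(">>");
                    advance.addActionListener(e -> {
                        multi--;                            // same bookkeeping as "advance"
                        offset.setText("offset: " + multi); // update (or remove/re-add) components
                        panel.revalidate();                 // recompute the layout after changes
                        panel.repaint();                    // schedule a repaint of the panel
                    });
                    panel.add(offset);
                    panel.add(advance);
                    frame.setContentPane(panel);
                    frame.pack();
                    frame.setVisible(true);
                });
            }
        }

    Applied to the code above, the advance/reverse branches would typically call panel3.removeAll(), re-add the labels for the new value of multi, and then panel3.revalidate() and panel3.repaint(), rather than rebuilding the whole frame.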

    Read the article

  • ob_start() is partially capturing data

    - by AAA
    I am using the following code: PHP: // Generate Guid function NewGuid() { $s = strtoupper(uniqid(rand(),true)); $guidText = substr($s,0,8) . '-' . substr($s,8,4) . '-' . substr($s,12,4). '-' . substr($s,16,4). '-' . substr($s,20); return $guidText; } // End Generate Guid $Guid = NewGuid(); $alphabet = '123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ'; function base_encode($num, $alphabet) { $base_count = strlen($alphabet); $encoded = ''; while ($num >= $base_count) { $div = $num/$base_count; $mod = ($num-($base_count*intval($div))); $encoded = $alphabet[$mod] . $encoded; $num = intval($div); } if ($num) $encoded = $alphabet[$num] . $encoded; return $encoded; } function base_decode($num, $alphabet) { $decoded = 0; $multi = 1; while (strlen($num) > 0) { $digit = $num[strlen($num)-1]; $decoded += $multi * strpos($alphabet, $digit); $multi = $multi * strlen($alphabet); $num = substr($num, 0, -1); } return $decoded; } ob_start(); echo base_encode($Guid, $alphabet); //should output: bUKpk $theid = ob_get_contents(); ob_get_clean(); The problem: When i echo $theid, it shows the complete entry, but as it is being inserted into the database, only the first entry in the sequence gets inserted, for example for the entry buKPK, only 'b' is being inserted not the rest.

    Read the article

  • Windows Build System: How to build a project (from its source code) which doesn't have *.sln or Visu

    - by claws
    I'm facing this problem: I need to build the support libraries (zlib, libtiff, libpng, libxml2, libiconv) with the "Multithreaded DLL" (/MD) and "Multithreaded DLL Debug" (/MDd) run-time options. But the problem is there is no direct way - I mean there is no *.sln / *.vcproj file which I can open in Visual C++ and build. I'm familiar with the GNU build system:

        $ ./configure --with-all-sorts-of-required-switches
        $ ./make
        $ ./make install

    During my search I've encountered something called CMake, which generates *.vcproj and *.sln files, but for that a CMakeLists.txt is required, and not all projects provide a CMakeLists.txt. I've never compiled anything from the Visual C++ command line. Generally, most projects provide a makefile. Now how do I generate a *.vcproj / *.sln from this? Can I compile with MinGW's make (mingw-make)? If I can, how do I set the different run-time library options ("Multi-Threaded" (/MT), "Multi-Threaded Debug" (/MTd), "Multi-Threaded DLL" (/MD), "Multi-Threaded DLL Debug" (/MDd))? I don't know what other ways are available. Please throw some light on this.

    Read the article

  • Parsing a JSON feed from YQL using jQuery

    - by Keith
    I am using YQL's query.multi to grab multiple feeds so I can parse a single JSON feed with jQuery and reduce the number of connections I'm making. In order to parse a single feed, I need to be able to check the type of result (photo, item, entry, etc) so I can pull out items in specific ways. Because of the way the items are nested within the JSON feed, I'm not sure the best way to loop through the results and check the type and then loop through the items to display them. Here is a YQL (http://developer.yahoo.com/yql/console/) query.multi example and you can see three different result types (entry, photo, and item) and then the items nested within them: select * from query.multi where queries= "select * from twitter.user.timeline where id='twitter'; select * from flickr.photos.search where has_geo='true' and text='san francisco'; select * from delicious.feeds.popular" or here is the JSON feed itself: http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20query.multi%20where%20queries%3D%22select%20*%20from%20flickr.photos.search%20where%20user_id%3D'23433895%40N00'%3Bselect%20*%20from%20delicious.feeds%20where%20username%3D'keith.muth'%3Bselect%20*%20from%20twitter.user.timeline%20where%20id%3D'keithmuth'%22&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=
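
    Assuming the nesting matches the console output above (each sub-query's payload arriving as one element of query.results.results), one rough way to branch on the result type looks like this - yqlUrl stands in for the long URL, and [].concat() just normalises "single object vs. array" before looping:

        var yqlUrl = "http://query.yahooapis.com/v1/public/yql?q=...&format=json";
        $.getJSON(yqlUrl + "&callback=?", function (data) {
            $.each(data.query.results.results, function (i, result) {
                if (result.photo) {
                    $.each([].concat(result.photo), function (j, photo) {
                        // render a Flickr photo (farm/server/id/secret fields)
                    });
                } else if (result.entry) {
                    $.each([].concat(result.entry), function (j, entry) {
                        // render a tweet
                    });
                } else if (result.item) {
                    $.each([].concat(result.item), function (j, item) {
                        // render a Delicious link
                    });
                }
            });
        });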

    Read the article

  • txt file read/overwrite/append. Is this feasible? (Visual C#)

    - by Arcadian
    Hi, I'm writing a program for some data entry I have to do periodically. I have begun testing a few things that the program will have to do, but I'm not sure about this part. What I need this part to do is: read a .txt file of data; take the first 12 characters from each line; take the first 12 characters from each line of the data that has been entered in a multi-line text box; compare the two lists line by line; if one of the 12-character blocks from the multi-line text box matches one of the blocks in the .txt file, then overwrite that entire line (only 17 characters in total); if one of the 12-character blocks from the multi-line text box does NOT match any of the blocks in the .txt file, then append that entire line to the file. That's all it has to do. I'll do an example:

        TXT FILE:
        G01:78:08:32 JG05
        G08:80:93:10 JG02
        G28:58:29:28 JG04

        MULTI-LINE TEXT BOX:
        G01:78:08:32 JG06
        G28:58:29:28 JG03
        G32:10:18:14 JG01
        G32:18:50:78 JG07

        RESULTING TXT FILE:
        G01:78:08:32 JG06
        G08:80:93:10 JG02
        G28:58:29:28 JG03
        G32:10:18:14 JG01
        G32:18:50:78 JG07

    As you can see, lines 1 and 3 were overwritten, line 2 was left alone as it did not match any blocks in the text box, and lines 4 and 5 were appended to the file. That's all I want it to do. How do I go about this? Thanks in advance
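
    A minimal sketch of that merge in C# (file paths are placeholders, the second file stands in for the multi-line textbox contents, and lines are assumed to be the fixed 17-character records described above):

        using System.Collections.Generic;
        using System.IO;

        class RecordMerger
        {
            static void Main()
            {
                string dataPath = @"C:\data\records.txt";                  // the .txt data file
                string[] boxLines = File.ReadAllLines(@"C:\data\box.txt"); // stand-in for textBox1.Lines

                // Index the existing file by its first 12 characters, keeping the original order.
                var order = new List<string>();
                var byKey = new Dictionary<string, string>();
                foreach (string line in File.ReadAllLines(dataPath))
                {
                    string key = line.Substring(0, 12);
                    order.Add(key);
                    byKey[key] = line;
                }

                // Matching keys overwrite the old line; unknown keys are appended.
                foreach (string line in boxLines)
                {
                    if (line.Length < 12) continue;
                    string key = line.Substring(0, 12);
                    if (!byKey.ContainsKey(key)) order.Add(key);
                    byKey[key] = line;
                }

                var output = new List<string>();
                foreach (string key in order) output.Add(byKey[key]);
                File.WriteAllLines(dataPath, output.ToArray());
            }
        }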

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking.  The results were exactly what I would have hoped, that the ConcurrentQueue was faster with multi-threading for most all situations.  In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified. So I set out to see what the improvements would be for the ConcurrentDictionary, would it have the same performance benefits as the ConcurrentQueue did?  Well, after running some tests and multiple tweaks and tunes, I have good and bad news. But first, let's look at the tests.  Obviously there's many things we can do with a dictionary.  One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache.  So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor.  This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit").  However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss").  So here's the Accessor that will run the tests: 1: internal class Accessor 2: { 3: public int Hits { get; set; } 4: public int Misses { get; set; } 5: public Func<int, string> GetDelegate { get; set; } 6: public Action<int, string> AddDelegate { get; set; } 7: public int Iterations { get; set; } 8: public int MaxRange { get; set; } 9: public int Seed { get; set; } 10:  11: public void Access() 12: { 13: var randomGenerator = new Random(Seed); 14:  15: for (int i=0; i<Iterations; i++) 16: { 17: // give a wide spread so will have some duplicates and some unique 18: var target = randomGenerator.Next(1, MaxRange); 19:  20: // attempt to grab the item from the cache 21: var result = GetDelegate(target); 22:  23: // if the item doesn't exist, add it 24: if(result == null) 25: { 26: AddDelegate(target, target.ToString()); 27: Misses++; 28: } 29: else 30: { 31: Hits++; 32: } 33: } 34: } 35: } Note that so I could test different implementations, I defined a GetDelegate and AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test: Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object. Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access. ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely. So the approach to each of these is also fairly straight-forward.  Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock: 1: var addDelegate = (key,val) => 2: { 3: lock (_mutex) 4: { 5: _dictionary[key] = val; 6: } 7: }; 8: var getDelegate = (key) => 9: { 10: lock (_mutex) 11: { 12: string val; 13: return _dictionary.TryGetValue(key, out val) ? val : null; 14: } 15: }; Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary. 
Now, for the Dictionary with ReadWriteLockSlim it's a little more complex: 1: var addDelegate = (key,val) => 2: { 3: _readerWriterLock.EnterWriteLock(); 4: _dictionary[key] = val; 5: _readerWriterLock.ExitWriteLock(); 6: }; 7: var getDelegate = (key) => 8: { 9: string val; 10: _readerWriterLock.EnterReadLock(); 11: if(!_dictionary.TryGetValue(key, out val)) 12: { 13: val = null; 14: } 15: _readerWriterLock.ExitReadLock(); 16: return val; 17: }; And finally, the ConcurrentDictionary, which since it does all it's own concurrency control, is remarkably elegant and simple: 1: var addDelegate = (key,val) => 2: { 3: _concurrentDictionary[key] = val; 4: }; 5: var getDelegate = (key) => 6: { 7: string s; 8: return _concurrentDictionary.TryGetValue(key, out s) ? s : null; 9: };                    Then, I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to Access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete.  Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances.  All times are in milliseconds. 1: Dictionary with Mutex Locking 2: --------------------------------------------------- 3: Accessors Mostly Misses Mostly Hits 4: 1 7916 3285 5: 10 8293 3481 6: 100 8799 3532 7: 1000 8815 3584 8:  9:  10: Dictionary with ReaderWriterLockSlim Locking 11: --------------------------------------------------- 12: Accessors Mostly Misses Mostly Hits 13: 1 8445 3624 14: 10 11002 4119 15: 100 11076 3992 16: 1000 14794 4861 17:  18:  19: Concurrent Dictionary 20: --------------------------------------------------- 21: Accessors Mostly Misses Mostly Hits 22: 1 17443 3726 23: 10 14181 1897 24: 100 15141 1994 25: 1000 17209 2128 The first test I did across the board is the Mostly Misses category.  The mostly misses (more adds because data requested was not in the dictionary) shows an interesting trend.  In both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution.  But this got me thinking, and a little research seemed to confirm it, maybe the ConcurrentDictionary is more optimized to concurrent "gets" than "adds".  So since the ratio of misses to hits were 2 to 1, I decided to reverse that and see the results. So I tweaked the data so that the number of keys were much smaller than the number of iterations to give me about a 2 to 1 ration of hits to misses (twice as likely to already find the item in the cache than to need to add it).  And yes, indeed here we see that the ConcurrentDictionary is indeed faster than the standard Dictionary here.  I have a strong feeling that as the ration of hits-to-misses gets higher and higher these number gets even better as well.  This makes sense since the ConcurrentDictionary is read-optimized. Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement, I think this is largely because on the 10,000,000 hit test it quickly ramped up to the correct capacity and concurrency and thus the impact was limited to the first few milliseconds of the run. So what does this tell us?  Well, as in all things, ConcurrentDictionary is not a panacea.  It won't solve all your woes and it shouldn't be the only Dictionary you ever use.  So when should we use each? Use System.Collections.Generic.Dictionary when: You need a single-threaded Dictionary (no locking needed). 
You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed). You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed). And use System.Collections.Concurrent.ConcurrentDictionary when: You need a multi-threaded Dictionary where the reads are far more prevalent than the writes. You need to be able to iterate over the collection without locking it, even if it's being modified. Both Dictionaries have their strong suits; I have a feeling this is just one of those cases where you need to know from design what you hope to use it for, and make your decision based on those criteria.

    Read the article

  • How to safely reboot via First Boot script

    - by unixman
    With the cost and performance benefits of the SPARC T4 and SPARC T5 systems undeniably validated, the banking sector is actively moving to Solaris 11.  I was recently asked to help a banking customer of ours look at migrating some of their Solaris 10 logic over to Solaris 11.  While we've introduced a number of holistic improvements in Solaris 11, in terms of how we ease long-term software lifecycle management, it is important to appreciate that customers may not be able to move all of their Solaris 10 scripts and procedures at once; there are years of scripts that reflect fine-tuned requirements of proprietary banking software that gets layered on top of the operating system. One of these requirements is to go through a cycle of reboots, after the system is installed, in order to ensure appropriate software dependencies and various configuration files are in-place. While Solaris 10 introduced a facility that aids here, namely SMF, many of our customers simply haven't yet taken the time to take advantage of this - proceeding with logic that, while functional, without further analysis has an appearance of not being optimal in terms of taking advantage of all the niceties bundled in Solaris 11 at no extra cost. When looking at Solaris 11, we recognize that one of the vehicles that bridges the gap between getting the operating system image payload delivered, and the customized banking software installed, is a notion of a First Boot script.  I had a working example of this at one of the Oracle OpenWorld sessions a few years ago - we've since improved our documentation and have introduced sections where this is described in better detail.   If you're looking at this for the first time and you've not worked with IPS and SMF previously, you might get the sense that the tasks are daunting.   There is a set of technologies involved that are jointly engineered in order to make the process reliable, predictable and extensible. As you go down the path of writing your first boot script, you'll be faced with a need to wrap it into a SMF service and then packaged into a IPS package. The IPS package would then need to be placed onto your IPS repository, in order to subsequently be made available to all of your AI (Automated Install) clients (i.e. the systems that you're installing Solaris and your software onto).     With this blog post, I wanted to create a single place that outlines the entire process (simplistically), and provide a hint of how a good old "at" command may make the requirement of forcing an initial reboot handy. The syntax and references to commands here is based on running this on a version of Solaris 11 that has been updated since its initial release in 2011 (i.e. I am writing this on Solaris 11.1) Assuming you've built an AI server (see this How To article for an example), you might be asking yourself: "Ok, I've got some logic that I need executed AFTER Solaris is deployed and I need my own little script that would make that happen. How do I go about hooking that script into the Solaris 11 AI framework?"  You might start here, in Chapter 13 of the "Installing Oracle Solaris 11.1 Systems" guide, which talks about "Running a Custom Script During First Boot".  And as you do, you'll be confronted with command that might be unfamiliar to you if you're new to Solaris 11, like our dear new friend: svcbundle svcbundle is an aide to creating manifests and profiles.  It is awesome, but don't let its awesomeness overwhelm you. 
(See this How To article by my colleague Glynn Foster for a nice working example.) In order to get your script's logic integrated into the Solaris 11 deployment process, you need to wrap your (shell) script into two manifests: an SMF service manifest and an IPS package manifest. ...and if you're new to XML, well then - buckle up. We have some examples of small first boot scripts shown here, as templates to build upon. The structure of the script, particularly how it uses the SMF interfaces, is key; I won't go into that here as it is covered nicely in the doc link above.

Let's say your script ends up looking like this (by the way: if things appear cut off in your browser, just select them, copy and paste into your editor and they'll be grabbed - the source gets captured even though the browser may not render it "correctly" - ah, computers).

#!/bin/sh
# Load SMF shell support definitions
. /lib/svc/share/smf_include.sh

# If nothing to do, exit with temporary disable
completed=`svcprop -p config/completed site/first-boot-script-svc:default`
[ "${completed}" = "true" ] && \
    smf_method_exit $SMF_EXIT_TEMP_DISABLE completed "Configuration completed"

# Obtain the active BE name from beadm: The active BE on reboot has an R in
# the third column of 'beadm list' output. Its name is in column one.
bename=`beadm list -Hd | nawk -F ';' '$3 ~ /R/ {print $1}'`
beadm create ${bename}.orig
echo "Original boot environment saved as ${bename}.orig"

# ---- Place your one-time configuration tasks here ----
# For example, if you have to pull some files from your own pre-existing system:
/usr/bin/wget -P /var/tmp/ $PULL_DOWN_ADDITIONAL_SCRIPTS_FROM_A_CORPORATE_SYSTEM
/usr/bin/chmod 755 /var/tmp/$SCRIPTS_THAT_GOT_PULLED_DOWN_IN_STEP_ABOVE
# Clearly the above 2 lines represent some logic that you'd have to customize to fit your needs.
#
# Perhaps additional things you may want to do here might be of use, like
# (gasp!) configuring the ssh server for root login and X11 forwarding (for testing), and the like...
#
# Oh, and by the way: after we're done executing all of our proprietary scripts we need to reboot
# the system, in accordance with our operational software requirements, to ensure all layered bits
# get initialized properly and pull in their own modules and components in the right sequence.
# We need to set a "time bomb" reboot that takes place upon completion of this script.
# We already know that *this* script depends on the multi-user-server SMF milestone, so it should be
# safe for us to schedule a reboot for 5 minutes from now. The "at" job gets scheduled in the queue
# while our little script continues through the rest of the logic.
/usr/bin/at now + 5 minutes <<REBOOT
/usr/bin/sync
/usr/sbin/reboot
REBOOT

# ---- End of your customizations ----

# Record that this script's work is done
svccfg -s site/first-boot-script-svc:default setprop config/completed = true
svcadm refresh site/first-boot-script-svc:default
smf_method_exit $SMF_EXIT_TEMP_DISABLE method_completed "Configuration completed"

...and you're happy with it and are ready to move on. Where do you go and what do you do? The next step is creating the IPS package for your script. Since running the logic of your script constitutes a service, you need to create a service manifest. This is described in the middle of Chapter 13, in the section "Creating an IPS package for the script and service".
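If the "at"-based time bomb is new to you, it can be worth trying the pattern interactively on a scratch Solaris 11 box before baking it into an image. A minimal sketch follows; the logger message is just a harmless stand-in for the reboot, and the job ID reported by atq will differ on your system.

$ echo "/usr/bin/logger first-boot-at-test" | at now + 5 minutes   # queue a harmless job 5 minutes out
$ atq                                                              # confirm the job is sitting in the queue
$ at -r <job-id>                                                   # remove the test job once you're satisfied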
Assuming the name of your shell script is first-boot-script.sh, you could end up doing the following:

$ cd some_working_directory_for_this_project
$ mkdir -p proto/lib/svc/manifest/site
$ mkdir -p proto/opt/site
$ cp first-boot-script.sh proto/opt/site

Then you would create the service manifest file like so:

$ svcbundle -s service-name=site/first-boot-script-svc \
    -s start-method=/opt/site/first-boot-script.sh \
    -s instance-property=config:completed:boolean:false -o \
    first-boot-script-svc-manifest.xml

...as described here, and place it into the directory hierarchy above. But before you place it into the directory, make sure to inspect the manifest and adjust the service dependencies appropriately. That is to say, you want to specify which milestone should be reached before your service runs. There's a <dependency> section that looks like this before you modify it:

<dependency restart_on="none" type="service" name="multi_user_dependency" grouping="require_all">
    <service_fmri value="svc:/milestone/multi-user"/>
</dependency>

So if you'd like to have your service run AFTER the multi-user-server milestone has been reached (i.e. later, as multi-user-server has more dependencies than multi-user, and our intent to reboot the system may have significant ramifications if done prematurely), you would modify that section to read:

<dependency restart_on="none" type="service" name="multi_user_server_dependency" grouping="require_all">
    <service_fmri value="svc:/milestone/multi-user-server"/>
</dependency>

Save the file and validate it:

$ svccfg validate first-boot-script-svc-manifest.xml

Assuming there are no errors returned, copy the file over into the directory hierarchy:

$ cp first-boot-script-svc-manifest.xml proto/lib/svc/manifest/site

Now that we've created the service manifest (.xml), create the package manifest (.p5m) file, named first-boot-script.p5m, and populate it as follows:

set name=pkg.fmri value=first-boot-script@1.0,5.11-0
set name=pkg.summary value="AI first-boot script"
set name=pkg.description value="Script that runs at first boot after AI installation"
set name=info.classification value=\
    "org.opensolaris.category.2008:System/Administration and Configuration"
file lib/svc/manifest/site/first-boot-script-svc-manifest.xml \
    path=lib/svc/manifest/site/first-boot-script-svc-manifest.xml owner=root \
    group=sys mode=0444
dir path=opt/site owner=root group=sys mode=0755
file opt/site/first-boot-script.sh path=opt/site/first-boot-script.sh \
    owner=root group=sys mode=0555
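One optional step, not strictly required by the doc chapter above: before publishing, you can run the package manifest through pkglint (part of the bundled packaging tools, assuming they are installed on your build machine) to catch obvious mistakes. A minimal sketch, run from the project's working directory:

$ pkglint first-boot-script.p5m        # basic lint of the package manifest
  (add -r <repository URI> if you also want the checks run against a reference repository)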
Now we are going to publish this package into an IPS repository. If you don't have one yet, don't worry - you have two choices: you can either publish this package into your mirror of the Oracle Solaris IPS repo, or create your own customized repo. The best practice is to create your own customized repo, leaving your mirror of the Oracle Solaris IPS repo untouched. From this point you have two choices as well: the repo can be made accessible to your clients via HTTP or via NFS. Since HTTP is how the default Solaris repo is accessed, we'll go with HTTP for your own IPS repo too. This nice and comprehensive How To by Albert White describes how to create multiple internal IPS repos for Solaris 11. We'll zero in on the basic elements for our needs here: we'll create the IPS repo directory structure hanging off a separate ZFS file system and tie it into an instance of pkg.depotd. We do this because we want our IPS repo to be accessible to our AI clients through HTTP, and the pkg.depotd SMF service bundled in Solaris 11 can help us do this. We proceed as follows:

# zfs create rpool/export/MyIPSrepo
# pkgrepo create /export/MyIPSrepo
# svccfg -s pkg/server add MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpg pkg application
# svccfg -s pkg/server:MyIPSrepo setprop pkg/port=10081
# svccfg -s pkg/server:MyIPSrepo setprop pkg/inst_root=/export/MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpg general framework
# svccfg -s pkg/server:MyIPSrepo addpropvalue general/complete astring: MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpropvalue general/enabled boolean: true
# svccfg -s pkg/server:MyIPSrepo setprop pkg/readonly=true
# svccfg -s pkg/server:MyIPSrepo setprop pkg/proxy_base = astring: http://your_internal_websrvr/MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo setprop pkg/threads = 200
# svcadm refresh application/pkg/server:MyIPSrepo
# svcadm enable application/pkg/server:MyIPSrepo

Now that the IPS repo is created, we need to publish our package into it:

# pkgsend publish -d ./proto -s /export/MyIPSrepo first-boot-script.p5m

If you find yourself making changes to your script, remember to up-rev the version in the .p5m file (which is your IPS package manifest) and re-publish the IPS package. Next, you need to go to your AI install server (which might be the same machine) and modify the AI manifest to include a reference to your newly created package. We do that by listing an additional publisher, which would look like this (replacing the IP address and port with your own, from the "svccfg" commands up above):

<publisher name="firstboot">
    <origin name="http://192.168.1.222:10081"/>
</publisher>

Further down, in the <software_data action="install"> section, add:

<name>pkg:/first-boot-script</name>

Make sure to update your Automated Install service with the new AI manifest via the installadm update-manifest command. Don't forget to boot your client from the network to watch the entire process unfold and your script get tested. Once the system makes the initial reboot, the first boot script will be executed, whatever logic you've specified in it will run, and a nice reboot will follow. When the system comes up, your service should stay in a disabled state, as specified by the trailing lines of your SMF script - this is normal and should be left as-is, as it helps provide an audit trail for you.

Because the reboot is quite a significant action for the system, you may want to add additional logic to the script that places, and then checks for the presence of, certain lock files in order to avoid doing a reboot unnecessarily (a small sketch of one way to do this follows at the end of this post). Alternatively, you may want to remove the SMF service entirely, if you're worried that someone might accidentally enable it later, even though its role in life is only to run once upon the system's first boot.

That is how I spent a good chunk of my pre-Halloween time this week; hope yours was just as SPARCkly^H^H^H^H fun!
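P.S. If you do go the lock-file route mentioned above, here is a minimal sketch of what such a guard might look like in the script, wrapped around the "at" job. The marker file /var/tmp/first-boot-reboot.done is purely hypothetical - pick whatever path fits your own conventions.

# Only queue the reboot if a previous run of this script has not already done so
if [ ! -f /var/tmp/first-boot-reboot.done ]; then
    /usr/bin/touch /var/tmp/first-boot-reboot.done
    /usr/bin/at now + 5 minutes <<REBOOT
/usr/bin/sync
/usr/sbin/reboot
REBOOT
fi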

    Read the article
