Search Results

Search found 7473 results on 299 pages for 'usage statistics'.


  • Can ping IP address and nslookup hostname but cannot ping hostname

    - by jao
    On a windows 2003 server I can nslookup www.google.com which returns Server: localhost Address: 127.0.0.1 Non-authoritative answer: Name: www.l.google.com Addresses: 74.125.79.104, 74.125.79.147, 74.125.79.99 Aliases: www.google.com I can then ping 74.125.79.104: Pinging 74.125.79.104 with 32 bytes of data: Reply from 74.125.79.104: bytes=32 time=16ms TTL=54 Reply from 74.125.79.104: bytes=32 time=32ms TTL=54 Reply from 74.125.79.104: bytes=32 time=15ms TTL=54 Reply from 74.125.79.104: bytes=32 time=15ms TTL=54 Ping statistics for 74.125.79.104: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 15ms, Maximum = 32ms, Average = 19ms But I cannot ping www.google.com: Ping request could not find host www.google.com. Please check the name and try again. (this one is different from the other question in that this one has a TLD, it is not a local domain.) Update: I am running a dns server at localhost (127.0.0.1). Even when I change it to use for example opendns, it still can nslookup hostname and ping ip address, but not ping hostname. So what is wrong? Update 2: here is the ipconfig /all result: Windows IP Configuration Host Name . . . . . . . . . . . . : SERVER Primary Dns Suffix . . . . . . . : NETWORK.local Node Type . . . . . . . . . . . . : Unknown IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : NETWORK.local Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet #2 Physical Address. . . . . . . . . : 00-0F-1F-56-3B-AA DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.7.2 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.7.1 DNS Servers . . . . . . . . . . . : 127.0.0.1 Update 3: Thanks everyone for their help and suggestions. I appreciate that. Ipconfig /flushdns returns: Sucessfully flushed the DNS resolver cache Ipconfig /displaydns returns: 2.7.168.192.in-addr.arpa ---------------------------------------- Record Name . . . . . : 2.7.168.192.in-addr.arpa. Record Type . . . . . : 12 Time To Live . . . . : 0 Data Length . . . . . : 4 Section . . . . . . . : Answer PTR Record . . . . . : webserver.mydomainname.com 1.0.0.127.in-addr.arpa ---------------------------------------- Record Name . . . . . : 1.0.0.127.in-addr.arpa. Record Type . . . . . : 12 Time To Live . . . . : 0 Data Length . . . . . : 4 Section . . . . . . . : Answer PTR Record . . . . . : localhost Update 4: Wireshark shows the following: 3 11.540542 208.67.220.220 192.168.7.2 DNS Standard query response A 74.125.79.99 A 74.125.79.104 A 74.125.79.147 6 42.056794 192.168.7.2 192.168.7.255 NBNS Name query NB WWW.GOOGLE.COM<00> which is weird: when I ping, it sends a packet to 192.168.7.255 instead of asking the DNS server for an address
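    The last observation is the key one: nslookup queries the DNS server directly, while ping goes through the Windows resolver, and the capture shows the resolver skipping DNS and broadcasting a NetBIOS name query instead, so the breakage is on the client's resolver side (DNS Client service, node type, hosts file) rather than in DNS itself. A quick way to see both paths side by side is sketched below in Java purely for illustration; it is not from the thread, and the OpenDNS address is the one seen in the Wireshark capture.

        import javax.naming.Context;
        import javax.naming.directory.Attributes;
        import javax.naming.directory.InitialDirContext;
        import java.net.InetAddress;
        import java.util.Hashtable;

        public class ResolverCheck {
            public static void main(String[] args) throws Exception {
                String host = "www.google.com";

                // 1) What the OS resolver (the path ping uses) returns.
                try {
                    System.out.println("OS resolver: " + InetAddress.getByName(host).getHostAddress());
                } catch (Exception e) {
                    System.out.println("OS resolver failed: " + e);
                }

                // 2) A direct DNS query against a specific server (the path nslookup uses).
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
                env.put(Context.PROVIDER_URL, "dns://208.67.220.220"); // OpenDNS, as in the capture
                Attributes attrs = new InitialDirContext(env).getAttributes(host, new String[]{"A"});
                System.out.println("Direct DNS A records: " + attrs.get("A"));
            }
        }

    If the first lookup fails while the second succeeds, the problem is in the client's resolver configuration rather than in the DNS server being queried.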

  • All downloads being interrupted

    - by Jake
    System: Windows 7 Professional 64bit. 8GB RAM, Intel i5-2400 CPU, +300GB free on the hard drive. AVG Internet Security 2012 (enabled & disabled, with firewall enabled and disabled - no effect for either). This computer is less than a year old. Network: This problem is occurring on a single computer on a network with multiple computers. The router is a Motorola Netopia 3347-02 (DSL Modem/Wireless Router combined). The computer is plugged in directly to the modem, other computers are using the wireless successfully. The router has been reset. The only thing odd about the connection between the router and computer is that it is configured to allow RDP through, so it is assigned a static IP by the router and port forwarding is enabled for port 3389. Also, though I doubt it matters, a second wireless router is active behind this router providing a second network that some computers in the area use without issues. Details: All downloads initiated on this specific computer eventually fail, this includes streaming from youtube, specialized downloads (itunes), downloads from websites, FTP downloads, etc. Failure occurs with all browsers, but in chrome this is the process it takes: 1) Download begins normally, 2) At some point between (observed) 7MBs and 229MBs the download stops progressing (at this point, if watching chrome's task manager, you can see the network activity for the downloading tab drop to 0kps), 3) for some time the download sits there still attempting to complete, but will eventually display "123,049,871/0 B, Interrupted" (where the number is whatever it actually got to). The file I am using to test this is a very large .zip file located on a server I control, but the problem seems to occur on any site. The amount downloaded is completely random, and seems to be more time-based than anything (if I start a download immediately after the last one fails, it tends to get further than the last one). Small files can get through for this reason, though they can fail as well. In a test where I simultaneously downloaded the same file via HTTP (chrome) and FTP (windows explorer), both downloads failed at the same instant, though explorer displayed "Connection timed out" several minutes before chrome finally showed the download as interrupted. 
Other things I have tried based on advice given to people with similar/identical problems: Setting my MTU to 1492 (as described here: http://blog.thecompwiz.com/2011/08/networking-issues.html) Disabling write caching to the hard drive storing the download on an external device successfully transmitted +1GB file from one computer on the same network to this computer disabling indexing in the folder the download was being stored in disabling all security software checked to make sure all drivers were up to date read about 50 accounts with nearly exact descriptions of what I'm experiencing, none of which had a solution given Running Processes: Image Name PID Session Name Session# Mem Usage ========================= ======== ================ =========== ============ System Idle Process 0 Services 0 24 K System 4 Services 0 104,836 K smss.exe 332 Services 0 1,276 K csrss.exe 764 Services 0 5,060 K wininit.exe 820 Services 0 4,748 K csrss.exe 844 Console 1 23,764 K services.exe 876 Services 0 11,856 K lsass.exe 892 Services 0 14,420 K lsm.exe 900 Services 0 7,820 K winlogon.exe 944 Console 1 7,716 K svchost.exe 428 Services 0 12,744 K svchost.exe 796 Services 0 12,240 K svchost.exe 1036 Services 0 22,372 K svchost.exe 1084 Services 0 174,132 K svchost.exe 1112 Services 0 56,144 K svchost.exe 1288 Services 0 18,640 K svchost.exe 1404 Services 0 29,616 K spoolsv.exe 1576 Services 0 25,924 K svchost.exe 1616 Services 0 12,788 K AppleMobileDeviceService. 1728 Services 0 9,796 K avgwdsvc.exe 1820 Services 0 8,268 K mDNSResponder.exe 1844 Services 0 5,832 K w3dbsmgr.exe 1108 Services 0 43,760 K QBCFMonitorService.exe 1336 Services 0 16,408 K svchost.exe 2404 Services 0 28,240 K taskhost.exe 3020 Console 1 12,372 K dwm.exe 2280 Console 1 5,968 K explorer.exe 2964 Console 1 152,476 K WUDFHost.exe 3316 Services 0 6,740 K svchost.exe 3408 Services 0 5,556 K RAVCpl64.exe 3684 Console 1 13,864 K igfxtray.exe 3700 Console 1 7,804 K hkcmd.exe 3772 Console 1 7,868 K igfxpers.exe 3788 Console 1 10,940 K sidebar.exe 3836 Console 1 84,400 K chrome.exe 3964 Console 1 19,640 K pptd40nt.exe 4068 Console 1 5,156 K acrotray.exe 3908 Console 1 14,676 K avgtray.exe 3872 Console 1 9,508 K jusched.exe 4076 Console 1 4,412 K iTunesHelper.exe 1532 Console 1 87,308 K SearchIndexer.exe 3492 Services 0 36,948 K iPodService.exe 4136 Services 0 7,944 K BrccMCtl.exe 4276 Console 1 18,132 K splwow64.exe 4380 Console 1 32,600 K qbupdate.exe 4836 Console 1 24,236 K svchost.exe 4288 Services 0 20,700 K wmpnetwk.exe 3112 Services 0 9,516 K FNPLicensingService.exe 5248 Services 0 5,852 K QBW32.EXE 5508 Console 1 127,068 K QBDBMgrN.exe 5600 Services 0 42,252 K EXCEL.EXE 2512 Console 1 99,100 K LMS.exe 3188 Services 0 5,616 K UNS.exe 1600 Services 0 7,308 K axlbridge.exe 5260 Console 1 5,132 K chrome.exe 5888 Console 1 200,336 K chrome.exe 3536 Console 1 26,076 K chrome.exe 1952 Console 1 20,168 K chrome.exe 4596 Console 1 24,696 K chrome.exe 4292 Console 1 48,096 K chrome.exe 2796 Console 1 23,520 K Acrobat.exe 1240 Console 1 87,252 K 123w.exe 4892 Console 1 22,728 K calc.exe 1700 Console 1 12,636 K chrome.exe 1328 Console 1 28,888 K chrome.exe 3696 Console 1 47,012 K rundll32.exe 6320 Console 1 7,104 K chrome.exe 4928 Console 1 44,248 K AVGIDSAgent.exe 260 Services 0 12,940 K avgfws.exe 6052 Services 0 26,912 K avgnsa.exe 5064 Services 0 2,496 K avgrsa.exe 3088 Services 0 2,200 K avgcsrva.exe 2596 Services 0 380 K avgcsrva.exe 6948 Services 0 408 K StikyNot.exe 452 Console 1 14,772 K chrome.exe 4580 Console 1 28,200 K chrome.exe 4016 
Console 1 57,756 K svchost.exe 7140 Services 0 4,500 K chrome.exe 6264 Console 1 56,824 K chrome.exe 7008 Console 1 56,896 K chrome.exe 2224 Console 1 38,032 K taskhost.exe 612 Console 1 7,228 K chrome.exe 6000 Console 1 10,928 K chrome.exe 2568 Console 1 43,052 K chrome.exe 272 Console 1 75,988 K chrome.exe 7328 Console 1 53,240 K PaprPort.exe 7976 Console 1 137,152 K pplinks.exe 7500 Console 1 14,052 K ppscanmg.exe 5744 Console 1 18,996 K taskeng.exe 7388 Console 1 6,308 K SearchProtocolHost.exe 8024 Services 0 8,804 K SearchFilterHost.exe 7232 Services 0 7,848 K chrome.exe 8016 Console 1 37,440 K cmd.exe 7692 Console 1 3,096 K conhost.exe 7516 Console 1 5,872 K tasklist.exe 8160 Console 1 5,772 K WmiPrvSE.exe 7684 Services 0 6,400 K Any help with this would be greatly appreciated, I've been beating my head against a wall over this all day. This computer serves dual purpose as the main company document server and the Owner's work computer, it's fairly important it be fully functional and I cannot figure this out.

  • ASP.NET WebControl ITemplate child controls are null

    - by tster
    Here is what I want: I want a control to put on a page, which other developers can place form elements inside of to display the entities that my control is searching. I have the Searching logic all working. The control builds custom search fields and performs searches based on declarative C# classes implementing my SearchSpec interface. Here is what I've been trying: I've tried using ITemplate on a WebControl which implements INamingContainer I've tried implementing a CompositeControl The closest I can get to working is below. OK I have a custom WebControl [ AspNetHostingPermission(SecurityAction.Demand, Level = AspNetHostingPermissionLevel.Minimal), AspNetHostingPermission(SecurityAction.InheritanceDemand, Level = AspNetHostingPermissionLevel.Minimal), DefaultProperty("SearchSpecName"), ParseChildren(true), ToolboxData("<{0}:SearchPage runat=\"server\"> </{0}:SearchPage>") ] public class SearchPage : WebControl, INamingContainer { [Browsable(false), PersistenceMode(PersistenceMode.InnerProperty), DefaultValue(typeof(ITemplate), ""), Description("Form template"), TemplateInstance(TemplateInstance.Single), TemplateContainer(typeof(FormContainer))] public ITemplate FormTemplate { get; set; } public class FormContainer : Control, INamingContainer{ } public Control MyTemplateContainer { get; private set; } [Bindable(true), Category("Behavior"), DefaultValue(""), Description("The class name of the SearchSpec to use."), Localizable(false)] public virtual string SearchSpecName { get; set; } [Bindable(true), Category("Behavior"), DefaultValue(true), Description("True if this is query mode."), Localizable(false)] public virtual bool QueryMode { get; set; } private SearchSpec _spec; private SearchSpec Spec { get { if (_spec == null) { Type type = Assembly.GetExecutingAssembly().GetTypes().Where(t => t.Name == SearchSpecName).First(); _spec = (SearchSpec)Assembly.GetExecutingAssembly().CreateInstance(type.Namespace + "." + type.Name); } return _spec; } } protected override void CreateChildControls() { if (FormTemplate != null) { MyTemplateContainer = new FormTemplateContainer(this); FormTemplate.InstantiateIn(MyTemplateContainer); Controls.Add(MyTemplateContainer); } else { Controls.Add(new LiteralControl("blah")); } } protected override void RenderContents(HtmlTextWriter writer) { // <snip> } protected override HtmlTextWriterTag TagKey { get { return HtmlTextWriterTag.Div; } } } public class FormTemplateContainer : Control, INamingContainer { private SearchPage parent; public FormTemplateContainer(SearchPage parent) { this.parent = parent; } } then the usage: <tster:SearchPage ID="sp1" runat="server" SearchSpecName="TestSearchSpec" QueryMode="False"> <FormTemplate> <br /> Test Name: <asp:TextBox ID="testNameBox" runat="server" Width="432px"></asp:TextBox> <br /> Owner: <asp:TextBox ID="ownerBox" runat="server" Width="427px"></asp:TextBox> <br /> Description: <asp:TextBox ID="descriptionBox" runat="server" Height="123px" Width="432px" TextMode="MultiLine" Wrap="true"></asp:TextBox> </FormTemplate> </tster:SearchPage> The problem is that in the CodeBehind, the page has members descriptionBox, ownerBox and testNameBox. However, they are all null. Furthermore, FindControl("ownerBox") returns null as does this.sp1.FindControl("ownerBox"). I have to do this.sp1.MyTemplateContainer.FindControl("ownerBox") to get the control. 
How can I make it so that the C# Code Behind will have the controls generated and not null in my Page_Load event so that developers can just do this: testNameBox.Text = "foo"; ownerBox.Text = "bar"; descriptionBox.Text = "baz";

  • Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server)

    - by rahul286
    After debugging for 6-hours - I am giving this up :| We have a nginx+php-fpm+mysql in LAN with almost 100 wordpress (created and used by different designers/developers all working on test wordpres setup) We are using nginx without any issues from long. Today, all of a sudden - nginx started returning "504 Gateway Time-out" out of the blue... I checked nginx error log for a virtual host... 2010/09/06 21:24:24 [error] 12909#0: *349 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:11 [error] 12909#0: *349 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:11 [error] 12909#0: *443 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:12 [error] 12909#0: *443 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:08:32 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:33 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:40 [error] 12909#0: *1064 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:40 [error] 12909#0: *1064 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:24:44 [error] 12909#0: *1313 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:24:53 [error] 12909#0: *1313 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" As I run php-fpm on port 9000 via TCP mode, I ran "netstat | grep 9000" and noticed something unusual... 
(Pasting partial output here for ease of read) tcp 9 0 localhost:9000 localhost:36094 CLOSE_WAIT 14269/php5-fpm tcp 0 0 localhost:46664 localhost:9000 FIN_WAIT2 - tcp 1257 0 localhost:9000 localhost:36135 CLOSE_WAIT - tcp 1257 0 localhost:9000 localhost:36125 CLOSE_WAIT - tcp 9 0 localhost:9000 localhost:36102 CLOSE_WAIT 14268/php5-fpm tcp 0 0 localhost:46662 localhost:9000 FIN_WAIT2 - tcp 745 0 localhost:9000 localhost:46644 CLOSE_WAIT - tcp 0 0 localhost:46658 localhost:9000 FIN_WAIT2 - tcp 1265 0 localhost:9000 localhost:46607 CLOSE_WAIT - tcp 0 0 localhost:46672 localhost:9000 ESTABLISHED 12909/nginx: worker tcp 1257 0 localhost:9000 localhost:36119 CLOSE_WAIT - tcp 1265 0 localhost:9000 localhost:46613 CLOSE_WAIT - tcp 0 0 localhost:46646 localhost:9000 FIN_WAIT2 - tcp 1257 0 localhost:9000 localhost:36137 CLOSE_WAIT - tcp 0 0 localhost:46670 localhost:9000 ESTABLISHED 12909/nginx: worker tcp 1265 0 localhost:9000 localhost:46619 CLOSE_WAIT - tcp 1336 0 localhost:9000 localhost:46668 ESTABLISHED - tcp 0 0 localhost:46648 localhost:9000 FIN_WAIT2 - tcp 1336 0 localhost:9000 localhost:46670 ESTABLISHED - tcp 9 0 localhost:9000 localhost:36108 CLOSE_WAIT 14274/php5-fpm tcp 1336 0 localhost:9000 localhost:46684 ESTABLISHED - tcp 0 0 localhost:46674 localhost:9000 ESTABLISHED 12909/nginx: worker tcp 1336 0 localhost:9000 localhost:46666 ESTABLISHED - tcp 1257 0 localhost:9000 localhost:46648 CLOSE_WAIT - tcp 1336 0 localhost:9000 localhost:46678 ESTABLISHED - tcp 0 0 localhost:46668 localhost:9000 ESTABLISHED 12909/nginx: wo There are plenty of "CLOSE_WAIT" & "FIN_WAIT2" pairs as highlighted below (in above output): tcp 1337 0 localhost:9000 localhost:46680 CLOSE_WAIT - tcp 0 0 localhost:46680 localhost:9000 FIN_WAIT2 - Please note port 46680 in above. I enabled mysql slow queries error log, but it didn't work. As of now restarting php5-fpm every minute via a cronjob (see command below) keeping everything running "smoothly" but I hate patchwork and want to solve this... 1 * * * * service php5-fpm restart > /dev/null I searched extensively on Google - got no help. As mentioned, this a test-server in LAN, CPU load is never crossed 0.10 and memory usage is also below 25% (System has 2GB RAM and ubuntu-server installed) So if you find its time-confusing to help me out, please atleast drop a hint. Thanks in advance for help. -Rahul (note - this is reposting of - http://forum.nginx.org/read.php?11,127694) Update: I found answer, which is posted below.

  • Load-balancing between a Procurve switch and a server

    - by vlad
    Hello I've been searching around the web for this problem i've been having. It's similar in a way to this question: How exactly & specifically does layer 3 LACP destination address hashing work? My setup is as follows: I have a central switch, a Procurve 2510G-24, image version Y.11.16. It's the center of a star topology, there are four switches connected to it via a single gigabit link. Those switches service the users. On the central switch, I have a server with two gigabit interfaces that I want to bond together in order to achieve higher throughput, and two other servers that have single gigabit connections to the switch. The topology looks as follows: sw1 sw2 sw3 sw4 | | | | --------------------- | sw0 | --------------------- || | | srv1 srv2 srv3 The servers were running FreeBSD 8.1. On srv1 I set up a lagg interface using the lacp protocol, and on the switch I set up a trunk for the two ports using lacp as well. The switch showed that the server was a lacp partner, I could ping the server from another computer, and the server could ping other computers. If I unplugged one of the cables, the connection would keep working, so everything looked fine. Until I tested throughput. There was only one link used between srv1 and sw0. All testing was conducted with iperf, and load distribution was checked with systat -ifstat. I was looking to test the load balancing for both receive and send operations, as I want this server to be a file server. There were therefore two scenarios: iperf -s on srv1 and iperf -c on the other servers iperf -s on the other servers and iperf -c on srv1 connected to all the other servers. Every time only one link was used. If one cable was unplugged, the connections would keep going. However, once the cable was plugged back in, the load was not distributed. Each and every server is able to fill the gigabit link. In one-to-one test scenarios, iperf was reporting around 940Mbps. The CPU usage was around 20%, which means that the servers could withstand a doubling of the throughput. srv1 is a dell poweredge sc1425 with onboard intel 82541GI nics (em driver on freebsd). After troubleshooting a previous problem with vlan tagging on top of a lagg interface, it turned out that the em could not support this. So I figured that maybe something else is wrong with the em drivers and / or lagg stack, so I started up backtrack 4r2 on this same server. So srv1 now uses linux kernel 2.6.35.8. I set up a bonding interface bond0. The kernel module was loaded with option mode=4 in order to get lacp. The switch was happy with the link, I could ping to and from the server. I could even put vlans on top of the bonding interface. However, only half the problem was solved: if I used srv1 as a client to the other servers, iperf was reporting around 940Mbps for each connection, and bwm-ng showed, of course, a nice distribution of the load between the two nics; if I run the iperf server on srv1 and tried to connect with the other servers, there was no load balancing. I thought that maybe I was out of luck and the hashes for the two mac addresses of the clients were the same, so I brought in two new servers and tested with the four of them at the same time, and still nothing changed. I tried disabling and reenabling one of the links, and all that happened was the traffic switched from one link to the other and back to the first again. 
I also tried setting the trunk to "plain trunk mode" on the switch, and experimented with other bonding modes (roundrobin, xor, alb, tlb) but I never saw any traffic distribution. One interesting thing, though: one of the four switches is a Cisco 2950, image version 12.1(22)EA7. It has 48 10/100 ports and 2 gigabit uplinks. I have a server (call it srv4) with a 4 channel trunk connected to it (4x100), FreeBSD 8.0 release. The switch is connected to sw0 via gigabit. If I set up an iperf server on one of the servers connected to sw0 and a client on srv4, ALL 4 links are used, and iperf reports around 330Mbps. systat -ifstat shows all four interfaces are used. The cisco port-channel uses src-mac to balance the load. The HP should use both the source and destination according to the manual, so it should work as well. Could this mean there is some bug in the HP firmware? Am I doing something wrong?

  • Modern Java alternatives

    - by Ralph
    I'm not sure if stackoverflow is the best forum for this discussion. I have been a Java developer for 14 years and have written an enterprise-level (~500,000 line) Swing application that uses most of the standard library APIs. Recently, I have become disappointed with the progress that the language has made to "modernize" itself, and am looking for an alternative for ongoing development. I have considered moving to the .NET platform, but I have issues with using something the only runs well in Windows (I know about Mono, but that is still far behind Microsoft). I also plan on buying a new Macbook Pro as soon as Apple releases their new rumored Arrandale-based machines and want to develop in an environment that will feel "at home" in Unix/Linux. I have considered using Python or Ruby, but the standard Java library is arguably the largest of any modern language. In JVM-based languages, I looked at Groovy, but am disappointed with its performance. Rumor has it that with the soon-to-be released JDK7, with its InvokeDynamic instruction, this will improve, but I don't know how much. Groovy is also not truly a functional language, although it provides closures and some of the "functional" features on collections. It does not embrace immutability. I have narrowed my search down to two JVM-based alternatives: Scala and Clojure. Each has its strengths and weaknesses. I am looking for the stackoverflow readerships' opinions. I am not an expert at either of these languages; I have read 2 1/2 books on Scala and am currently reading Stu Halloway's book on Clojure. Scala is strongly statically typed. I know the dynamic language folks claim that static typing is a crutch for not doing unit testing, but it does provide a mechanism for compile-time location of a whole class of errors. Scala is more concise than Java, but not as much as Clojure. Scala's inter-operation with Java seems to be better than Clojure's, in that most Java operations are easier to do in Scala than in Clojure. For example, I can find no way in Clojure to create a non-static initialization block in a class derived from a Java superclass. For example, I like the Apache commons CLI library for command line argument parsing. In Java and Scala, I can create a new Options object and add Option items to it in an initialization block as follows (Java code): final Options options = new Options() { { addOption(new Option("?", "help", false, "Show this usage information"); // other options } }; I can't figure out how to the same thing in Clojure (except by using (doit...)), although that may reflect my lack of knowledge of the language. Clojure's collections are optimized for immutability. They rarely require copy-on-write semantics. I don't know if Scala's immutable collections are implemented using similar algorithms, but Rich Hickey (Clojure's inventor) goes out of his way to explain how that language's data structures are efficient. Clojure was designed from the beginning for concurrency (as was Scala) and with modern multi-core processors, concurrency takes on more importance, but I occasionally need to write simple non-concurrent utilities, and Scala code probably runs a little faster for these applications since it discourages, but does not prohibit, "simple" mutability. One could argue that one-off utilities do not have to be super-fast, but sometimes they do tasks that take hours or days to complete. I know that there is no right answer to this "question", but I thought I would open it up for discussion. 
If anyone has a suggestion for another JVM-based language that can be used for enterprise level development, please list it. Also, it is not my intent to start a flame war. Thanks, Ralph
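    As a side note on the commons-cli example above: the anonymous-subclass initializer block is only syntactic sugar, and the same Options object can be built with ordinary method calls, which is the form that translates directly into any JVM language (Clojure's doto macro expresses the same shape). A minimal Java sketch, with one hypothetical extra option beyond the one quoted in the post:

        import org.apache.commons.cli.Option;
        import org.apache.commons.cli.Options;

        public class CliOptions {
            static Options buildOptions() {
                // Equivalent to the anonymous-subclass initializer block in the post,
                // but without creating an extra subclass of Options.
                Options options = new Options();
                options.addOption(new Option("?", "help", false, "Show this usage information"));
                options.addOption(new Option("v", "verbose", false, "Verbose output")); // hypothetical extra option
                return options;
            }
        }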

  • Simple Convention Automapper for two-way Mapping (Entities to/from ViewModels)

    - by Omu
    UPDATE: this stuff has evolved into a nice project, see it at http://valueinjecter.codeplex.com check this out, I just wrote a simple automapper, it takes the value from the property with the same name and type of one object and puts it into another, and you can add exceptions (ifs, switch) for each type you may need so tell me what do you think about it ? I did it so I could do something like this: Product –> ProductDTO ProductDTO –> Product that's how it begun: I use the "object" type in my Inputs/Dto/ViewModels for DropDowns because I send to the html a IEnumerable<SelectListItem> and I receive a string array of selected keys back public void Map(object a, object b) { var pp = a.GetType().GetProperties(); foreach (var pa in pp) { var value = pa.GetValue(a, null); // property with the same name in b var pb = b.GetType().GetProperty(pa.Name); if (pb == null) { //no such property in b continue; } if (pa.PropertyType == pb.PropertyType) { pb.SetValue(b, value, null); } } } UPDATE: the real usage: the Build methods (Input = Dto): public static TI BuildInput<TI, T>(this T entity) where TI: class, new() { var input = new TI(); input = Map(entity, input) as TI; return input; } public static T BuildEntity<T, TI, TR>(this TI input) where T : class, new() where TR : IBaseAdvanceService<T> { var id = (long)input.GetType().GetProperty("Id").GetValue(input, null); var entity = LocatorConfigurator.Resolve<TR>().Get(id) ?? new T(); entity = Map(input, entity) as T; return entity; } public static TI RebuildInput<T, TI, TR>(this TI input) where T: class, new() where TR : IBaseAdvanceService<T> where TI : class, new() { return input.BuildEntity<T, TI, TR>().BuildInput<TI, T>(); } in the controller: public ActionResult Create() { return View(new Organisation().BuildInput<OrganisationInput, Organisation>()); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(OrganisationInput o) { if (!ModelState.IsValid) { return View(o.RebuildInput<Organisation,OrganisationInput, IOrganisationService>()); } organisationService.SaveOrUpdate(o.BuildEntity<Organisation, OrganisationInput, IOrganisationService>()); return RedirectToAction("Index"); } The real Map method public static object Map(object a, object b) { var lookups = GetLookups(); var propertyInfos = a.GetType().GetProperties(); foreach (var pa in propertyInfos) { var value = pa.GetValue(a, null); // property with the same name in b var pb = b.GetType().GetProperty(pa.Name); if (pb == null) { continue; } if (pa.PropertyType == pb.PropertyType) { pb.SetValue(b, value, null); } else if (lookups.Contains(pa.Name) && pa.PropertyType == typeof(LookupItem)) { pb.SetValue(b, (pa.GetValue(a, null) as LookupItem).GetSelectList(pa.Name), null); } else if (lookups.Contains(pa.Name) && pa.PropertyType == typeof(object)) { pb.SetValue(b, pa.GetValue(a, null).ReadSelectItemValue(), null); } else if (pa.PropertyType == typeof(long) && pb.PropertyType == typeof(Organisation)) { pb.SetValue(b, pa.GetValue<long>(a).ReadOrganisationId(), null); } else if (pa.PropertyType == typeof(Organisation) && pb.PropertyType == typeof(long)) { pb.SetValue(b, pa.GetValue<Organisation>(a).Id, null); } } return b; }
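    The Map method above is plain reflection over same-name, same-type properties, so the technique is not C#-specific. As a point of comparison only (this is not part of ValueInjecter), the same convention mapper can be sketched in Java with the java.beans introspector:

        import java.beans.Introspector;
        import java.beans.PropertyDescriptor;

        public final class ConventionMapper {
            // Copy every readable property of 'source' into a writable property of
            // 'target' that has the same name and the same type.
            public static void map(Object source, Object target) throws Exception {
                for (PropertyDescriptor src : Introspector.getBeanInfo(source.getClass()).getPropertyDescriptors()) {
                    if (src.getReadMethod() == null) continue;
                    PropertyDescriptor dst = find(target, src.getName());
                    if (dst != null && dst.getWriteMethod() != null
                            && dst.getPropertyType().equals(src.getPropertyType())) {
                        dst.getWriteMethod().invoke(target, src.getReadMethod().invoke(source));
                    }
                }
            }

            private static PropertyDescriptor find(Object bean, String name) throws Exception {
                for (PropertyDescriptor pd : Introspector.getBeanInfo(bean.getClass()).getPropertyDescriptors()) {
                    if (pd.getName().equals(name)) return pd;
                }
                return null;
            }
        }

    The type-specific branches in the real Map method (LookupItem, Organisation) correspond to the "exceptions" the post describes layering on top of this core loop.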

  • Is this question too hard for a seasoned C++ architect?

    - by Monomer
    Background Information We're looking to hire a seasoned C++ architect (10+years dev, of which at least 6years must be C++ ) for a high frequency trading platform. Job advert says STL, Boost proficiency is a must with preferences to modern uses of C++. The company I work for is a Fortune 500 IB (aka finance industry), it requires passes in all the standard SHL tests (numeric, vocab, spatial etc) before interviews can commence. Everyone on the team was given the task of coming up with one question to ask the candidates during a written/typed test, please note this is the second test provided to the candidates, the first being Advanced IKM C++ test, done in the offices supervised and without internet access. People passing that do the second test. After roughly 70 candidates, my question has been determined to be statistically the worst performing - aka least number of people attempted it, furthermore even less people were able to give meaningful answers. Please note, the second test is not timed, the candidate can literally take as long as they like (we've had one person take roughly 10.5hrs) My question to SO is this, after SHL and IKM adv c++ tests, backed up with at least 6+ years C++ development experience, is it still ok not to be able to even comment about let alone come up with some loose strategy for solving the following question. The Question There is a class C with methods foo, boo, boo_and_foo and foo_and_boo. Each method takes i,j,k and l clock cycles respectively, where i < j, k < i+j and l < i+j. class C { public: int foo() {...} int boo() {...} int boo_and_foo() {...} int foo_and_boo() {...} }; In code one might write: C c; . . int i = c.foo() + c.boo(); But it would be better to have: int i = c.foo_and_boo(); What changes or techniques could one make to the definition of C, that would allow similar syntax of the original usage, but instead have the compiler generate the latter. Note that foo and boo are not commutative. Possible Solution We were basically looking for an expression templates based approach, and were willing to give marks to anyone who had even hinted or used the phrase or related terminology. We got only two people that used the wording, but weren't able to properly describe how they accomplish the task in detail. We use such techniques all over the place, due to the use of various mathematical operators for matrix and vector based calculations, for example to decide when to use IPP or hand woven implementations at compile time for a particular architecture and many other things. The particular area of software development requires microsecond response times. I believe could/should be able to teach a junior such techniques, but given the assumed caliber of candidates I expected a little more. Is this really a difficult question? Should it be removed? Or are we just not seeing the right candidates?

  • Manipulating matrix operations (transpose, negation, addition, and multiplication) using functions in

    - by user292489
    I was trying to manipulate matrices in my input file using functions. My input file is: A 3 3 1 2 3 4 5 6 7 8 9 B 3 3 1 0 0 0 1 0 0 0 1 C 2 3 3 5 8 -1 -2 -3 D 3 5 0 0 0 1 0 1 0 1 0 1 0 1 0 0 1 E 1 1 10 F 3 10 1 0 2 0 3 0 4 0 5 0 0 2 3 -1 -3 -4 -3 8 3 7 0 0 0 4 6 5 8 2 -1 10 I am having trouble in implementing the functions that I declared. I assumed my program will perform those operations: transpose, negate, add, and multiply matices according to the users choice: /* once this program is compiled and executed, it will perform the basic matrix * operations: negation, transpose, addition, and multiplication. */ #include <stdio.h> #include <stdlib.h> #define MAX 10 int readmatrix(FILE *input, char martixname[6],int , mat[10][10], int i, int j); void printmatrix(char matrixname[6], int mat[10][10], int i, int j); void Negate(char matrixname[6], int mat[10][10], int i, int j); void add(char matrixname[6], int mat[10][10],int i, int k); void multiply(char matrixname[], int mat[][10], char A[], int i, int k); void transpose (char matrixname[], int mat[][10], char A[], int); void printT(int mat[][10], int); int selctoption(); char selectmatrix(); int main(int argc, char *argv[]) { char matrixtype[6]; int mat[][10]; FILE *filein; int size; int optionop; int matrixop; int option; if (argc != 2) { printf("Usage: executable input.\n"); exit(0); } filein = fopen(argv[1], "r"); if (!filein) { printf("ERROR: input file not found.\n"); exit (0); } size = readmatrix (filein, matrixtype); printmatrix(matrix[][10], size); option = selectoption(); matrixtype = selectmatrix(); //printf("You have: %5.2f ", deposit); optionop = readmatrix(option, matrix[][10], size); if (choiceop == 6) { printf("Thanks for using the matrix operation program.\n"); exit(0); } printf("Please select from the following matrix operations:\n") printf("\t1. Print matrix\n"); printf("\t2. Negate matrix\n"); printf("\t3. Transpose matrix\n"); printf("\t4. Add matrices\n"); printf("\t5. Multiply matrices\n"); printf("\t6. Quit\n"); fclose(filein); return 0; } do { printf("Please select option(1-%d):", optionop); scanf("%d", &matrixop); } while(matrixop <= 0 || matrixop > optionop); void readmatrix (FILE *in, int mat[][10], char A[], int i, int j) { int i=0,j = 0; while (fscanf(in, "%d", &mat[i][j]) != EOF) return 0; } // I would appreciate anyone's feedback.
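    Setting the C compilation problems aside for a moment, the core of the assignment is reading a labelled matrix (name, rows, cols, values) and applying the four operations. Purely as an illustration of what transpose and multiplication should produce, and not as a fix for the C program above, here is a compact reference sketch in Java:

        public class MatrixOps {
            // Transpose: result[j][i] = m[i][j]
            static int[][] transpose(int[][] m) {
                int rows = m.length, cols = m[0].length;
                int[][] t = new int[cols][rows];
                for (int i = 0; i < rows; i++)
                    for (int j = 0; j < cols; j++)
                        t[j][i] = m[i][j];
                return t;
            }

            // Multiplication: a is r x n, b is n x c, result is r x c
            static int[][] multiply(int[][] a, int[][] b) {
                int r = a.length, n = b.length, c = b[0].length;
                int[][] p = new int[r][c];
                for (int i = 0; i < r; i++)
                    for (int k = 0; k < n; k++)
                        for (int j = 0; j < c; j++)
                            p[i][j] += a[i][k] * b[k][j];
                return p;
            }
        }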

  • Converting AES encryption token code in C# to PHP

    - by joey
    Hello, I have the following .Net code which takes two inputs. 1) A 128 bit base 64 encoded key and 2) the userid. It outputs the AES encrypted token. I need the php equivalent of the same code, but dont know which corresponding php classes are to be used for RNGCryptoServiceProvider,RijndaelManaged,ICryptoTransform,MemoryStream and CryptoStream. Im stuck so any help regarding this would be really appreciated. using System; using System.Text; using System.IO; using System.Security.Cryptography; class AESToken { [STAThread] static int Main(string[] args) { if (args.Length != 2) { Console.WriteLine("Usage: AESToken key userId\n"); Console.WriteLine("key Specifies 128-bit AES key base64 encoded supplied by MediaNet to the partner"); Console.WriteLine("userId specifies the unique id"); return -1; } string key = args[0]; string userId = args[1]; StringBuilder sb = new StringBuilder(); // This example code uses the magic string “CAMB2B”. The implementer // must use the appropriate magic string for the web services API. sb.Append("CAMB2B"); sb.Append(args[1]); // userId sb.Append('|'); // pipe char sb.Append(System.DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ssUTC")); //timestamp Byte[] payload = Encoding.ASCII.GetBytes(sb.ToString()); byte[] salt = new Byte[16]; // 16 bytes of random salt RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider(); rng.GetBytes(salt); // the plaintext is 16 bytes of salt followed by the payload. byte[] plaintext = new byte[salt.Length + payload.Length]; salt.CopyTo(plaintext, 0); payload.CopyTo(plaintext, salt.Length); // the AES cryptor: 128-bit key, 128-bit block size, CBC mode RijndaelManaged cryptor = new RijndaelManaged(); cryptor.KeySize = 128; cryptor.BlockSize = 128; cryptor.Mode = CipherMode.CBC; cryptor.GenerateIV(); cryptor.Key = Convert.FromBase64String(args[0]); // the key byte[] iv = cryptor.IV; // the IV. // do the encryption ICryptoTransform encryptor = cryptor.CreateEncryptor(cryptor.Key, iv); MemoryStream ms = new MemoryStream(); CryptoStream cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write); cs.Write(plaintext, 0, plaintext.Length); cs.FlushFinalBlock(); byte[] ciphertext = ms.ToArray(); ms.Close(); cs.Close(); // build the token byte[] tokenBytes = new byte[iv.Length + ciphertext.Length]; iv.CopyTo(tokenBytes, 0); ciphertext.CopyTo(tokenBytes, iv.Length); string token = Convert.ToBase64String(tokenBytes); Console.WriteLine(token); return 0; } } Please help. Thank You.
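    Before porting, it helps to pin down exactly what the C# code produces: token = Base64( IV || AES-128-CBC( key, IV, 16 random salt bytes || "CAMB2B" + userId + "|" + timestamp ) ), with PKCS#7 padding (RijndaelManaged's default). As a neutral reference only, and not the requested PHP, the same construction looks like this in Java with javax.crypto:

        import javax.crypto.Cipher;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.security.SecureRandom;
        import java.util.Base64;

        public class AesToken {
            static String buildToken(String base64Key, String userId, String timestamp) throws Exception {
                byte[] payload = ("CAMB2B" + userId + "|" + timestamp).getBytes(StandardCharsets.US_ASCII);

                byte[] salt = new byte[16];                       // 16 bytes of random salt
                new SecureRandom().nextBytes(salt);
                byte[] plaintext = new byte[salt.length + payload.length];
                System.arraycopy(salt, 0, plaintext, 0, salt.length);
                System.arraycopy(payload, 0, plaintext, salt.length, payload.length);

                byte[] iv = new byte[16];                         // random IV, 128-bit block
                new SecureRandom().nextBytes(iv);

                // AES-128, CBC mode, PKCS#5/PKCS#7 padding - matches the RijndaelManaged defaults
                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE,
                        new SecretKeySpec(Base64.getDecoder().decode(base64Key), "AES"),
                        new IvParameterSpec(iv));
                byte[] ciphertext = cipher.doFinal(plaintext);

                byte[] token = new byte[iv.length + ciphertext.length];  // token = IV || ciphertext
                System.arraycopy(iv, 0, token, 0, iv.length);
                System.arraycopy(ciphertext, 0, token, iv.length, ciphertext.length);
                return Base64.getEncoder().encodeToString(token);
            }
        }

    A PHP port then needs to reproduce the same byte layout and padding, e.g. with mcrypt_encrypt plus manual PKCS#7 padding on older installs, or openssl_encrypt on newer PHP.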

  • multiple-inheritance substitution

    - by Luigi
    I want to write a module (framework specific), that would wrap and extend Facebook PHP-sdk (https://github.com/facebook/php-sdk/). My problem is - how to organize classes, in a nice way. So getting into details - Facebook PHP-sdk consists of two classes: BaseFacebook - abstract class with all the stuff sdk does Facebook - extends BaseFacebook, and implements parent abstract persistance-related methods with default session usage Now I have some functionality to add: Facebook class substitution, integrated with framework session class shorthand methods, that run api calls, I use mostly (through BaseFacebook::api()), authorization methods, so i don't have to rewrite this logic every time, configuration, sucked up from framework classes, insted of passed as params caching, integrated with framework cache module I know something has gone very wrong, because I have too much inheritance that doesn't look very normal.Wrapping everything in one "complex extension" class also seems too much. I think I should have few working togheter classes - but i get into problems like: if cache class doesn't really extend and override BaseFacebook::api() method - shorthand and authentication classes won't be able to use the caching. Maybe some kind of a pattern would be right in here? How would you organize these classes and their dependencies? EDIT 04.07.2012 Bits of code, related to the topic: This is how the base class of Facebook PHP-sdk: abstract class BaseFacebook { // ... some methods public function api(/* polymorphic */) { // ... method, that makes api calls } public function getUser() { // ... tries to get user id from session } // ... other methods abstract protected function setPersistentData($key, $value); abstract protected function getPersistentData($key, $default = false); // ... few more abstract methods } Normaly Facebook class extends it, and impelements those abstract methods. I replaced it with my substitude - Facebook_Session class: class Facebook_Session extends BaseFacebook { protected function setPersistentData($key, $value) { // ... method body } protected function getPersistentData($key, $default = false) { // ... method body } // ... implementation of other abstract functions from BaseFacebook } Ok, then I extend this more with shorthand methods and configuration variables: class Facebook_Custom extends Facebook_Session { public funtion __construct() { // ... call parent's constructor with parameters from framework config } public function api_batch() { // ... a wrapper for parent's api() method return $this->api('/?batch=' . json_encode($calls), 'POST'); } public function redirect_to_auth_dialog() { // method body } // ... more methods like this, for common queries / authorization } I'm not sure, if this isn't too much for a single class ( authorization / shorthand methods / configuration). Then there comes another extending layer - cache: class Facebook_Cache extends Facebook_Custom { public function api() { $cache_file_identifier = $this->getUser(); if(/* cache_file_identifier is not null and found a valid file with cached query result */) { // return the result } else { try { // call Facebook_Custom::api, cache and return the result } catch(FacebookApiException $e) { // if Access Token is expired force refreshing it parent::redirect_to_auth_dialog(); } } } // .. some other stuff related to caching } Now this pretty much works. New instance of Facebook_Cache gives me all the functionality. 
Shorthand methods from Facebook_Custom use caching, because Facebook_Cache overrides the api() method. But here is what is bothering me: I think it's too much inheritance. It's all very tightly coupled - look how I had to specify 'Facebook_Custom::api' instead of 'parent::api' to avoid an api() method loop when extending the Facebook_Cache class. Overall mess and ugliness. So again, this works, but I'm just asking about patterns / ways of doing this in a cleaner and smarter way.
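    One common way out of a chain like this is to keep caching out of the hierarchy entirely and wrap api() with a decorator, so the shorthand/configuration layer never needs to know where results come from. A sketch of that shape, written in Java only to illustrate the pattern (all names are made up):

        // The surface the rest of the code depends on.
        interface FacebookApi {
            Object api(Object... args);
        }

        // The decorator: caching wraps any FacebookApi, including the shorthand/config layer,
        // instead of sitting at the bottom of an inheritance chain.
        class CachingFacebookApi implements FacebookApi {
            private final FacebookApi inner;
            private final java.util.Map<String, Object> cache = new java.util.HashMap<>();

            CachingFacebookApi(FacebookApi inner) {
                this.inner = inner;
            }

            @Override
            public Object api(Object... args) {
                String key = java.util.Arrays.deepToString(args);
                return cache.computeIfAbsent(key, k -> inner.api(args));
            }
        }

    With composition like this, the "Facebook_Custom::api vs parent::api" loop cannot happen, because the cached and uncached objects are two separate instances rather than two layers of one class.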

  • HP-UX: libstd_v2 in stack trace of JNI code compiled with g++

    - by Miguel Rentes
    Hello, uname -mr: B.11.23 ia64 g++ --version: g++ (GCC) 4.4.0 java -version: Java(TM) SE Runtime Environment (build 1.6.0.06-jinteg_20_jan_2010_05_50-b00) Java HotSpot(TM) Server VM (build 14.3-b01-jre1.6.0.06-rc1, mixed mode) I'm trying to run a Java application that uses JNI. It is crashing inside the JNI code with the following (abbreviated) stack trace: (0) 0xc0000000249353e0 VMError::report_and_die{_ZN7VMError14report_and_dieEv} + 0x440 at /CLO/Components/JAVA_HOTSPOT/Src/src/share/vm/utilities/vmError.cpp:738 [/opt/java6/jre/lib/IA64W/server/libjvm.so] (1) 0xc000000024559240 os::Hpux::JVM_handle_hpux_signal{_ZN2os4Hpux22JVM_handle_hpux_signalEiP9 __siginfoPvi} + 0x760 at /CLO/Components/JAVA_HOTSPOT/Src/src/os_cpu/hp-ux_ia64/vm/os_hp-ux_ia64.cpp:1051 [/opt/java6/jre/lib/IA64W/server/libjvm.so] (2) 0xc0000000245331c0 os::Hpux::signalHandler{_ZN2os4Hpux13signalHandlerEiP9__siginfoPv} + 0x80 at /CLO/Components/JAVA_HOTSPOT/Src/src/os/hp-ux/vm/os_hp-ux.cpp:4295 [/opt/java6/jre/lib/IA64W/server/libjvm.so] (3) 0xe00000010e002620 ---- Signal 11 (SIGSEGV) delivered ---- (4) 0xc0000000000d2d20 __pthread_mutex_lock + 0x400 at /ux/core/libs/threadslibs/src/common/pthreads/mutex.c:3895 [/usr/lib/hpux64/libpthread.so.1] (5) 0xc000000000342e90 __thread_mutex_lock + 0xb0 at ../../../../../core/libs/libc/shared_em_64/../core/threads/wrappers1.c:273 [/usr/lib/hpux64/libc.so.1] (6) 0xc00000000177dff0 _HPMutexWrapper::lock{_ZN15_HPMutexWrapper4lockEPv} + 0x90 [/usr/lib/hpux64/libstd_v2.so.1] (7) 0xc0000000017e9960 std::basic_string,std::allocator{_ZNSsC1ERKSs} + 0x80 [/usr/lib/hpux64/libstd_v2.so.1] (8) 0xc000000008fd9fe0 JniString::str{_ZNK9JniString3strEv} + 0x50 at eg_handler_jni.cxx:50 [/soft/bus-7_0/lib/libbus_registry_jni.so.7.0.0] (9) 0xc000000008fd7060 pt_efacec_se_aut_frk_cmp_registry_REGHandler::getKey{_ZN44pt_efacec_se_aut_frk_cmp_registry_REGHandler6getKeyEP8_jstringi} + 0xa0 [/soft/bus-7_0/lib/libbus_registry_jni.so.7.0.0] (10) 0xc000000008fd17f0 Java_pt_efacec_se_aut_frk_cmp_registry_REGHandler_getKey__Ljava_lang_String_2I + 0xa0 [/soft/bus-7_0/lib/libbus_registry_jni.so.7.0.0] (11) 0x9fffffffdf400ed0 Internal error (-3) while unwinding stack [/CLO/Components/JAVA_HOTSPOT/Src/src/os_cpu/hp-ux_ia64/vm/thread_hp-ux_ia64.cpp:142] This JNI code and dependencies are being compiled using g++, are multithreaded and 64 bit (-pthread -mlp64 -shared -fPIC). 
The LD_LIBRARY_PATH environment variable is set the dependencies location, and running ldd on the JNI shared libraries finds them all: ldd libbus_registry_jni.so: libefa-d.so.7 = /soft/bus-7_0/lib/libefa-d.so.7 libbus_registry-d.so.7 = /soft/bus-7_0/lib/libbus_registry-d.so.7 libboost_thread-gcc44-mt-d-1_41.so = /usr/local/lib/libboost_thread-gcc44-mt-d-1_41.so libboost_system-gcc44-mt-d-1_41.so = /usr/local/lib/libboost_system-gcc44-mt-d-1_41.so libboost_regex-gcc44-mt-d-1_41.so = /usr/local/lib/libboost_regex-gcc44-mt-d-1_41.so librt.so.1 = /usr/lib/hpux64/librt.so.1 libstdc++.so.6 = /opt/hp-gcc-4.4.0/lib/gcc/ia64-hp-hpux11.23/4.4.0/../../../hpux64/libstdc++.so.6 libm.so.1 = /usr/lib/hpux64/libm.so.1 libgcc_s.so.0 = /opt/hp-gcc-4.4.0/lib/gcc/ia64-hp-hpux11.23/4.4.0/../../../hpux64/libgcc_s.so.0 libunwind.so.1 = /usr/lib/hpux64/libunwind.so.1 librt.so.1 = /usr/lib/hpux64/librt.so.1 libm.so.1 = /usr/lib/hpux64/libm.so.1 libunwind.so.1 = /usr/lib/hpux64/libunwind.so.1 libdl.so.1 = /usr/lib/hpux64/libdl.so.1 libunwind.so.1 = /usr/lib/hpux64/libunwind.so.1 libc.so.1 = /usr/lib/hpux64/libc.so.1 libuca.so.1 = /usr/lib/hpux64/libuca.so.1 Looking at the stack trace, it seams odd that, although ldd list g++'s libstdc++ is being used, the std:string copy c'tor being reported as used is the one in libstd_v2, the implementation provided by aCC. The crash happens in the following code, when method str() returns: class JniString { std::string m_utf8; public: JniString(JNIEnv* env, jstring instance) { const char* utf8Chars = env-GetStringUTFChars(instance, 0); if (utf8Chars == 0) { env-ExceptionClear(); // RPF throw std::runtime_error("GetStringUTFChars returned 0"); } m_utf8.assign(utf8Chars); env-ReleaseStringUTFChars(instance, utf8Chars); } std::string str() const { return m_utf8; } }; Simultaneous usage of the two C++ implementations could likely be a reason for the crash, but that should not be happening. Any ideas?

  • OS X won't see Windows 7 on the network (and vice versa)

    - by meds
    I've enabled SMB sharing in OS X Lion and have added folders to share, it says 'Windows Sharing: On' with a green circle next to it (from the sharing window) and that to access the volume I will need to to go to \\192.168.0.17. It also says that the OS X should be visible as 'macbook' in the network. Both my WIndows 7 and OS X are connected to the same network, yet when I try to go to \\192.168.0.17 or from the Mac try to go to my Windows system (smb://192.168.0.6) the two OSs don't see each other. Any ideas why? Attempting to ping the Mac from Windows results in this output in the command prompt: Pinging 192.168.0.17 with 32 bytes of data: Reply from 192.168.0.6: Destination host unreachable. Request timed out. Request timed out. Request timed out. Ping statistics for 192.168.0.17: Packets: Sent = 4, Received = 1, Lost = 3 (75% loss), ipconfig in Windows is: Wireless LAN adapter Wireless Network Connection: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::8918:efd1:b05c:890f%21 IPv4 Address. . . . . . . . . . . : 192.168.0.6 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.0.1 Ethernet adapter Bluetooth Network Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Ethernet adapter VMware Network Adapter VMnet1: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::98ab:63fc:3c07:d837%13 IPv4 Address. . . . . . . . . . . : 192.168.74.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Ethernet adapter VMware Network Adapter VMnet8: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::80ff:c575:7b50:3a10%14 IPv4 Address. . . . . . . . . . . : 192.168.21.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Tunnel adapter isatap.{2E97D0AE-9E18-4072-AC23-1979BA0DCB79}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter isatap.{E260CE43-E9A7-4DE0-A88E-4EAFF68ACDDB}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter isatap.{A5130812-59CE-4DDF-9C35-9433BCED9831}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter Teredo Tunneling Pseudo-Interface: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter isatap.{134BCAE7-CFFF-4A98-8DA0-3708806AABEB}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Tunnel adapter isatap.{8D9E3B8F-161C-4ACE-B211-3EDD694416B2}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . 
: in OS X: lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384 options=3<RXCSUM,TXCSUM> inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128 gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280 stf0: flags=0<> mtu 1280 en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 options=2b<RXCSUM,TXCSUM,VLAN_HWTAGGING,TSO4> ether c8:2a:14:01:24:c1 media: autoselect (none) status: inactive en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether e0:f8:47:0c:fe:04 inet6 fe80::e2f8:47ff:fe0c:fe04%en1 prefixlen 64 scopeid 0x5 inet 192.168.0.17 netmask 0xffffff00 broadcast 192.168.0.255 media: autoselect status: active p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304 ether 02:f8:47:0c:fe:04 media: autoselect status: inactive fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078 lladdr 70:cd:60:ff:fe:d8:f1:32 media: autoselect <full-duplex> status: inactive

  • How to do inclusive range queries when only half-open range is supported (ala SortedMap.subMap)

    - by polygenelubricants
    On SortedMap.subMap This is the API for SortedMap<K,V>.subMap: SortedMap<K,V> subMap(K fromKey, K toKey) : Returns a view of the portion of this map whose keys range from fromKey, inclusive, to toKey, exclusive. This inclusive lower bound, exclusive upper bound combo ("half-open range") is something that is prevalent in Java, and while it does have its benefits, it also has its quirks, as we shall soon see. The following snippet illustrates a simple usage of subMap: static <K,V> SortedMap<K,V> someSortOfSortedMap() { return Collections.synchronizedSortedMap(new TreeMap<K,V>()); } //... SortedMap<Integer,String> map = someSortOfSortedMap(); map.put(1, "One"); map.put(3, "Three"); map.put(5, "Five"); map.put(7, "Seven"); map.put(9, "Nine"); System.out.println(map.subMap(0, 4)); // prints "{1=One, 3=Three}" System.out.println(map.subMap(3, 7)); // prints "{3=Three, 5=Five}" The last line is important: 7=Seven is excluded, due to the exclusive upper bound nature of subMap. Now suppose that we actually need an inclusive upper bound, then we could try to write a utility method like this: static <V> SortedMap<Integer,V> subMapInclusive(SortedMap<Integer,V> map, int from, int to) { return (to == Integer.MAX_VALUE) ? map.tailMap(from) : map.subMap(from, to + 1); } Then, continuing on with the above snippet, we get: System.out.println(subMapInclusive(map, 3, 7)); // prints "{3=Three, 5=Five, 7=Seven}" map.put(Integer.MAX_VALUE, "Infinity"); System.out.println(subMapInclusive(map, 5, Integer.MAX_VALUE)); // {5=Five, 7=Seven, 9=Nine, 2147483647=Infinity} A couple of key observations need to be made: The good news is that we don't care about the type of the values, but... subMapInclusive assumes Integer keys for to + 1 to work. A generic version that also takes e.g. Long keys is not possible (see related questions) Not to mention that for Long, we need to compare against Long.MAX_VALUE instead Overloads for the numeric primitive boxed types Byte, Character, etc, as keys, must all be written individually A special check need to be made for toInclusive == Integer.MAX_VALUE, because +1 would overflow, and subMap would throw IllegalArgumentException: fromKey > toKey This, generally speaking, is an overly ugly and overly specific solution What about String keys? Or some unknown type that may not even be Comparable<?>? So the question is: is it possible to write a general subMapInclusive method that takes a SortedMap<K,V>, and K fromKey, K toKey, and perform an inclusive-range subMap queries? Related questions Are upper bounds of indexed ranges always assumed to be exclusive? Is it possible to write a generic +1 method for numeric box types in Java? On NavigableMap It should be mentioned that there's a NavigableMap.subMap overload that takes two additional boolean variables to signify whether the bounds are inclusive or exclusive. Had this been made available in SortedMap, then none of the above would've even been asked. So working with a NavigableMap<K,V> for inclusive range queries would've been ideal, but while Collections provides utility methods for SortedMap (among other things), we aren't afforded the same luxury with NavigableMap. Related questions Writing a synchronized thread-safety wrapper for NavigableMap On API providing only exclusive upper bound range queries Does this highlight a problem with exclusive upper bound range queries? How were inclusive range queries done in the past when exclusive upper bound is the only available functionality?
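    For what it's worth, the overload mentioned at the end is the clean answer: TreeMap already implements NavigableMap, so where the concrete type allows it, an inclusive range query needs no key arithmetic at all. A small generic helper, sketched under the assumption that the map at hand is (or can be viewed as) a NavigableMap:

        import java.util.NavigableMap;
        import java.util.TreeMap;

        public class InclusiveRanges {
            // Works for any key type, with no "+1" tricks and no MAX_VALUE special case.
            static <K, V> NavigableMap<K, V> subMapInclusive(NavigableMap<K, V> map, K from, K to) {
                return map.subMap(from, true, to, true);
            }

            public static void main(String[] args) {
                NavigableMap<Integer, String> map = new TreeMap<>();
                map.put(1, "One"); map.put(3, "Three"); map.put(5, "Five");
                map.put(7, "Seven"); map.put(9, "Nine");
                System.out.println(subMapInclusive(map, 3, 7)); // {3=Three, 5=Five, 7=Seven}
            }
        }

    The synchronization point still stands, though: the wrappers in Collections available at the time only hand back SortedMap views, which is exactly the gap the last related question points at.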

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have this following pig script, which works perfectly using grunt shell (stored the results to HDFS without any issues); however, the last job (ORDER BY) failed if I ran the same script using Java EmbeddedPig. If I replace the ORDER BY job by others, such as GROUP or FOREACH GENERATE, the whole script then succeeded in Java EmbeddedPig. So I think it's the ORDER BY which causes the issue. Anyone has any experience with this? Any help would be appreciated! The Pig script: REGISTER pig-udf-0.0.1-SNAPSHOT.jar; user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float); simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score; grouped_user_similarity = GROUP simplified_user_similarity BY user_id; ordered_user_similarity = FOREACH grouped_user_similarity { sorted = ORDER simplified_user_similarity BY sim_score DESC; top = LIMIT sorted 10; GENERATE group, top; }; top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10); all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0); grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id; influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score; ordered_influence_scores = ORDER influence_scores BY influence_score DESC; STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage(); The error log from Pig: 12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job 12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3 12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job 12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission. 12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir. 12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- 
/var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml 12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null 12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100 12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720 12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680 12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004 java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117) at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210) Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248) at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153) at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112) ... 6 more 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete 12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed! 12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics: HadoopVersion PigVersion UserId StartedAt FinishedAt Features 0.20.2-cdh3u3 0.8.1-cdh3u3 cchuang 2012-04-05 10:00:34 2012-04-05 10:01:04 GROUP_BY,ORDER_BY Some jobs have failed! Stop running all dependent jobs Job Stats (time in seconds): JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MaxReduceTime MinReduceTime AvgReduceTime Alias Feature Outputs job_local_0001 0 0 0 0 0 0 0 0 all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity GROUP_BY job_local_0002 0 0 0 0 0 0 0 0 grouped_influence_scores,influence_scores GROUP_BY,COMBINER job_local_0003 0 0 0 0 0 0 0 0 ordered_influence_scores SAMPLER Failed Jobs: JobId Alias Feature Message Outputs job_local_0004 ordered_influence_scores ORDER_BY Message: Job failed! 
Error - NA /tmp/cc-test-results-1, Input(s): Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000" Output(s): Failed to produce result in "/tmp/cc-test-results-1" Counters: Total records written : 0 Total bytes written : 0 Spillable Memory Manager spill count : 0 Total bags proactively spilled: 0 Total records proactively spilled: 0 Job DAG: job_local_0001 -> job_local_0002, job_local_0002 -> job_local_0003, job_local_0003 -> job_local_0004, job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs
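
    Editor's note: one detail worth ruling out before blaming ORDER BY itself is visible in the log above -- the sampler output is written under hdfs://localhost/tmp/temp..., yet the failing job runs inside LocalJobRunner and looks for file:/Users/..., which suggests the embedded run fell back to local mode while the script's paths live on HDFS. A hedged sketch of pinning the embedded PigServer to the same cluster the grunt shell uses (the host/port values are placeholders, not taken from the question):

        import java.util.Properties;
        import org.apache.pig.ExecType;
        import org.apache.pig.PigServer;

        public class EmbeddedPigSketch {
            public static void main(String[] args) throws Exception {
                // Make the embedded run use the same filesystem and job tracker as grunt,
                // so the ORDER BY sampler file and the job that reads it agree on a scheme.
                Properties props = new Properties();
                props.setProperty("fs.default.name", "hdfs://localhost:8020"); // placeholder
                props.setProperty("mapred.job.tracker", "localhost:8021");     // placeholder

                PigServer pig = new PigServer(ExecType.MAPREDUCE, props);
                pig.setBatchOn();
                pig.registerScript("similarity.pig"); // hypothetical file holding the script above
                pig.executeBatch();
            }
        }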

    Read the article

  • Clipplanes, vertex shaders and hardware vertex processing in Direct3D 9

    - by Igor
    Hi, I have an issue with clipplanes in my application that I can reproduce in a sample from DirectX SDK (February 2010). I added a clipplane to the HLSLwithoutEffects sample: ... D3DXPLANE g_Plane( 0.0f, 1.0f, 0.0f, 0.0f ); ... void SetupClipPlane(const D3DXMATRIXA16 & view, const D3DXMATRIXA16 & proj) { D3DXMATRIXA16 m = view * proj; D3DXMatrixInverse( &m, NULL, &m ); D3DXMatrixTranspose( &m, &m ); D3DXPLANE plane; D3DXPlaneNormalize( &plane, &g_Plane ); D3DXPLANE clipSpacePlane; D3DXPlaneTransform( &clipSpacePlane, &plane, &m ); DXUTGetD3D9Device()->SetClipPlane( 0, clipSpacePlane ); } void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext ) { // Update the camera's position based on user input g_Camera.FrameMove( fElapsedTime ); // Set up the vertex shader constants D3DXMATRIXA16 mWorldViewProj; D3DXMATRIXA16 mWorld; D3DXMATRIXA16 mView; D3DXMATRIXA16 mProj; mWorld = *g_Camera.GetWorldMatrix(); mView = *g_Camera.GetViewMatrix(); mProj = *g_Camera.GetProjMatrix(); mWorldViewProj = mWorld * mView * mProj; g_pConstantTable->SetMatrix( DXUTGetD3D9Device(), "mWorldViewProj", &mWorldViewProj ); g_pConstantTable->SetFloat( DXUTGetD3D9Device(), "fTime", ( float )fTime ); SetupClipPlane( mView, mProj ); } void CALLBACK OnFrameRender( IDirect3DDevice9* pd3dDevice, double fTime, float fElapsedTime, void* pUserContext ) { // If the settings dialog is being shown, then // render it instead of rendering the app's scene if( g_SettingsDlg.IsActive() ) { g_SettingsDlg.OnRender( fElapsedTime ); return; } HRESULT hr; // Clear the render target and the zbuffer V( pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB( 0, 45, 50, 170 ), 1.0f, 0 ) ); // Render the scene if( SUCCEEDED( pd3dDevice->BeginScene() ) ) { pd3dDevice->SetVertexDeclaration( g_pVertexDeclaration ); pd3dDevice->SetVertexShader( g_pVertexShader ); pd3dDevice->SetStreamSource( 0, g_pVB, 0, sizeof( D3DXVECTOR2 ) ); pd3dDevice->SetIndices( g_pIB ); pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, D3DCLIPPLANE0 ); V( pd3dDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0, g_dwNumVertices, 0, g_dwNumIndices / 3 ) ); pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, 0 ); RenderText(); V( g_HUD.OnRender( fElapsedTime ) ); V( pd3dDevice->EndScene() ); } } When I rotate the camera I have different visual results when using hardware and software vertex processing. In software vertex processing mode or when using the reference device the clipping plane works fine as expected. In hardware mode it seems to rotate with the camera. If I remove the call to RenderText(); from OnFrameRender then hardware rendering also works fine. Further debugging reveals that the problem is in ID3DXFont::DrawText. I have this issue in Windows Vista and Windows 7 but not in Windows XP. I tested the code with the latest NVidia and ATI drivers in all three OSes on different PCs. Is it a DirectX issue? Or incorrect usage of clipplanes? Thanks Igor

    Read the article

  • Changing html <-> ajax <-> php/mysql to a threaded approach

    - by Saif Bechan
    I have an application that needs to be updated in real time. There are various counters and other information that have to come from the database, and the system needs to be up to date for the user. My approach now is just a normal ajax request every second to get the new values from the database. There is a JavaScript loop which runs every second getting the values through ajax. This works fine, but I think it's very inefficient. The problem: There is an ajax script that loops every second requesting data from php # On the server it has to load the PHP interpreter The PHP file has to get the data and format it correctly # PHP has to make a connection with the mysql database Work with the database (reads, never writes) Format the data so it can be sent Send the data back to the browser # Close the database connection, and close the php interpreter Last, the browser has to read these values and update the various html parts. Now with this approach it has to load the interpreter and make a db connection every second. I was thinking of a way to make this more efficient, and maybe use a threaded approach. Threaded approach: Do a post to the PHP when you enter the page and keep the connection alive In PHP only load the interpreter once, and make a connection to the DB once Every second send an ajax response to the JavaScript listener The JavaScript listener then just changes values as the response from php arrives. I think this approach will be a great optimization to the server load and overall performance. But I can spot some weak points in the system and I need some help with these. Problems with the approach: PHP execution time limit I don't think PHP is designed for such a setup. I know there is a time limit on php script execution. I don't know if an everlasting loop in PHP will cause any serious cpu/memory problems. Sending ajax requests without breaking I don't know if it is possible to have just one ajax post action that stays open and keeps accepting data. User exits the page What will happen when the user exits the page and the PHP script is still going? Will it go on forever? Security issues So far I can't think of any security issues. Almost every setup you use has some security issues. Maybe there are some with this solution I do not know of. Open to other solutions I really want to change the setup as it is now and move to a threaded approach or better. If someone has a better approach to tackle this I definitely want to hear it. Maybe the usage of some other scripts is better suited for having an ongoing runtime. I only know php and java so any suggestions are welcome and I am willing to dig through. I know there are things like perl, python etcetera that are used for this type of threaded setup, but I don't know which one is best suited. When using another script: If the best way is to go with another type of script like perl, python etcetera, I do have some criteria. The script has to be accessible via ajax post If it accepts some kind of json encode/decode it would be nice The script has to be able to access the session file This is essential because I need to know if the user is logged in The script has to be able to easily talk to MySQL All comments are welcome, and I hope this question is helpful to others also. Cheers!
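
    Editor's note: since the asker mentions knowing Java, here is a rough sketch of the long-polling variant of this idea -- the browser sends one ajax request, the server holds it until something changes (or a timeout passes), answers, and the browser immediately re-requests. This avoids one request per second without keeping a response stream open forever, at the cost of tying up one server thread per waiting client. The servlet below is illustrative only; the version counter stands in for whatever the database query returns:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class CounterLongPollServlet extends HttpServlet {

            // Stand-in for "the current state of the counters in MySQL"; some other part
            // of the application would bump this when the underlying data changes.
            private volatile long currentVersion = 0;

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                String v = req.getParameter("version");
                long clientVersion = (v == null) ? -1 : Long.parseLong(v);
                long deadline = System.currentTimeMillis() + 30000; // give up after 30s; client re-polls

                try {
                    // Hold the request until the data moves past what the client already has.
                    while (currentVersion == clientVersion && System.currentTimeMillis() < deadline) {
                        Thread.sleep(500);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }

                resp.setContentType("application/json");
                resp.getWriter().write("{\"version\": " + currentVersion + "}");
            }
        }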

    Read the article

  • My Dijit DateTimeCombo widget doesn't send selected value on form submission

    - by david bessire
    i need to create a Dojo widget that lets users specify date & time. i found a sample implementation attached to an entry in the Dojo bug tracker. It looks nice and mostly works, but when i submit the form, the value sent by the client is not the user-selected value but the value sent from the server. What changes do i need to make to get the widget to submit the date & time value? Sample usage is to render a JSP with basic HTML tags (form & input), then dojo.addOnLoad a function which selects the basic elements by ID, adds dojoType attribute, and dojo.parser.parse()-es the page. Thanks in advance. The widget is implemented in two files. The application uses Dojo 1.3. File 1: DateTimeCombo.js dojo.provide("dojox.form.DateTimeCombo"); dojo.require("dojox.form._DateTimeCombo"); dojo.require("dijit.form._DateTimeTextBox"); dojo.declare( "dojox.form.DateTimeCombo", dijit.form._DateTimeTextBox, { baseClass: "dojoxformDateTimeCombo dijitTextBox", popupClass: "dojox.form._DateTimeCombo", pickerPostOpen: "pickerPostOpen_fn", _selector: 'date', constructor: function (argv) {}, postMixInProperties: function() { dojo.mixin(this.constraints, { /* datePattern: 'MM/dd/yyyy HH:mm:ss', timePattern: 'HH:mm:ss', */ datePattern: 'MM/dd/yyyy HH:mm', timePattern: 'HH:mm', clickableIncrement:'T00:15:00', visibleIncrement:'T00:15:00', visibleRange:'T01:00:00' }); this.inherited(arguments); }, _open: function () { this.inherited(arguments); if (this._picker!==null && (this.pickerPostOpen!==null && this.pickerPostOpen!=="")) { if (this._picker.pickerPostOpen_fn!==null) { this._picker.pickerPostOpen_fn(this); } } } } ); File 2: _DateTimeCombo.js dojo.provide("dojox.form._DateTimeCombo"); dojo.require("dojo.date.stamp"); dojo.require("dijit._Widget"); dojo.require("dijit._Templated"); dojo.require("dijit._Calendar"); dojo.require("dijit.form.TimeTextBox"); dojo.require("dijit.form.Button"); dojo.declare("dojox.form._DateTimeCombo", [dijit._Widget, dijit._Templated], { // invoked only if time picker is empty defaultTime: function () { var res= new Date(); res.setHours(0,0,0); return res; }, // id of this table below is the same as this.id templateString: " <table class=\"dojoxDateTimeCombo\" waiRole=\"presentation\">\ <tr class=\"dojoxTDComboCalendarContainer\">\ <td>\ <center><input dojoAttachPoint=\"calendar\" dojoType=\"dijit._Calendar\"></input></center>\ </td>\ </tr>\ <tr class=\"dojoxTDComboTimeTextBoxContainer\">\ <td>\ <center><input dojoAttachPoint=\"timePicker\" dojoType=\"dijit.form.TimeTextBox\"></input></center>\ </td>\ </tr>\ <tr><td><center><button dojoAttachPoint=\"ctButton\" dojoType=\"dijit.form.Button\">Ok</button></center></td></tr>\ </table>\ ", widgetsInTemplate: true, constructor: function(arg) {}, postMixInProperties: function() { this.inherited(arguments); }, postCreate: function() { this.inherited(arguments); this.connect(this.ctButton, "onClick", "_onValueSelected"); }, // initialize pickers to calendar value pickerPostOpen_fn: function (parent_inst) { var parent_value = parent_inst.attr('value'); if (parent_value !== null) { this.setValue(parent_value); } }, // expects a valid date object setValue: function(value) { if (value!==null) { this.calendar.attr('value', value); this.timePicker.attr('value', value); } }, // return a Date constructed date in calendar & time in time picker. 
getValue: function() { var value = this.calendar.attr('value'); var result=value; if (this.timePicker.value !== null) { if ((this.timePicker.value instanceof Date) === true) { result.setHours(this.timePicker.value.getHours(), this.timePicker.value.getMinutes(), this.timePicker.value.getSeconds()); return result; } } else { var defTime=this.defaultTime(); result.setHours(defTime.getHours(), defTime.getMinutes(), defTime.getSeconds()); return result; } }, _onValueSelected: function() { var value = this.getValue(); this.onValueSelected(value); }, onValueSelected: function(value) {} });

    Read the article

  • Can the STREAM and GUPS (single CPU) benchmarks use non-local memory in a NUMA machine?

    - by osgx
    Hello I want to run some tests from HPCC, STREAM and GUPS. They will test memory bandwidth, latency, and throughput (in term of random accesses). Can I start Single CPU test STREAM or Single CPU GUPS on NUMA node with memory interleaving enabled? (Is it allowed by the rules of HPCC - High Performance Computing Challenge?) Usage of non-local memory can increase GUPS results, because it will increase 2- or 4- fold the number of memory banks, available for random accesses. (GUPS typically limited by nonideal memory-subsystem and by slow memory bank opening/closing. With more banks it can do update to one bank, while the other banks are opening/closing.) Thanks. UPDATE: (you may nor reorder the memory accesses that the program makes). But can compiler reorder loops nesting? E.g. hpcc/RandomAccess.c /* Perform updates to main table. The scalar equivalent is: * * u64Int ran; * ran = 1; * for (i=0; i<NUPDATE; i++) { * ran = (ran << 1) ^ (((s64Int) ran < 0) ? POLY : 0); * table[ran & (TableSize-1)] ^= stable[ran >> (64-LSTSIZE)]; * } */ for (j=0; j<128; j++) ran[j] = starts ((NUPDATE/128) * j); for (i=0; i<NUPDATE/128; i++) { /* #pragma ivdep */ for (j=0; j<128; j++) { ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0); Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)]; } } The main loop here is for (i=0; i<NUPDATE/128; i++) { and the nested loop is for (j=0; j<128; j++) {. Using 'loop interchange' optimization, compiler can convert this code to for (j=0; j<128; j++) { for (i=0; i<NUPDATE/128; i++) { ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0); Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)]; } } It can be done because this loop nest is perfect loop nest. Is such optimization prohibited by rules of HPCC?

    Read the article

  • Having a problem with C++ file handling

    - by caramel1991
    Our lecturer has given us a task,I've attempted it and try every single effort I can,but I still struggle with one of the problem in it,here goes the question: The company you work at receives a monthly report in a text format. The report contains the following information. • Department Name • Head of Department Name • Month • Minimum spending of the month • Maximum spending of the month Your program is to obtain the name of the input file from the user. Implement a structure to represent the data: Once the file has been read into your program, print out the following statistics for the user: • List which department has the minimum spending per month by month • List which department has the minimum spending by month by month Write the information into a file called “MaxMin.txt” Then do a processing so that the Department Name, Head of Department Name, Minimum spending and Maximum spending are written to separate files based on the month, eg Jan, Feb, March and so on. and of course our lecturer does send us a text file with the content: Engineering Bill Jan 2000 15000 IT Jack Jan 300 20000 HR Jill Jan 1500 10000 Engineering Bill Feb 5000 45000 IT Jack Feb 4500 7000 HR Jill Feb 5600 60000 Engineering Bill Mar 5000 45000 IT Jack Mar 4500 7000 HR Jill Mar 5600 60000 Engineering Bill Apr 5000 45000 IT Jack Apr 4500 7000 HR Jill Apr 5600 60000 Engineering Bill May 2000 15000 IT Jack May 300 20000 HR Jill May 1500 10000 Engineering Bill Jun 2000 15000 IT Jack Jun 300 20000 HR Jill Jun 1500 10000 and here's the c++ code I've written ue#include include include using namespace std; struct Record { string depName; string head; string month; float max; float min; string name; }myRecord[19]; int main () { string line; ofstream minmax,jan,feb,mar,apr,may,jun; char a[50]; char b[50]; int i = 0,j,k; float temp; //float maxjan=myRecord[0].max,maxfeb=myRecord[0].max,maxmar=myRecord[0].max,maxapr=myRecord[0].max,maxmay=myRecord[0].max,maxjune=myRecord[0].max; float minjan=myRecord[1].min,minfeb=myRecord[1].min,minmar=myRecord[1].min,minapr=myRecord[1].min,minmay=myRecord[1].min,minjune=myRecord[1].min; float maxjan=0,maxfeb=0,maxmar=0,maxapr=0,maxmay=0,maxjune=0; //float minjan=0,minfeb=0,minmar=0,minapr=0,minmay=0,minjune=0; string maxjanDep,maxfebDep,maxmarDep,maxaprDep,maxmayDep,maxjunDep; string minjanDep,minfebDep,minmarDep,minaprDep,minmayDep,minjunDep; cout<<"Enter file name: "; cina; ifstream myfile (a); //minmax.open ("MaxMin.txt"); if (myfile.is_open()){ while (! myfile.eof()){ myfilemyRecord[i].depNamemyRecord[i].headmyRecord[i].monthmyRecord[i].minmyRecord[i].max; cout << myRecord[i].depName<<"\t"< cout<<"Enter file name: "; cinb; ifstream myfile1 (b); minmax.open ("MaxMin.txt"); jan.open ("Jan.txt"); feb.open ("Feb.txt"); mar.open ("March.txt"); apr.open ("April.txt"); may.open ("May.txt"); jun.open ("Jun.txt"); if (myfile1.is_open()){ while (! 
myfile1.eof()){ myfile1myRecord[i].depNamemyRecord[i].headmyRecord[i].monthmyRecord[i].minmyRecord[i].max; if (myRecord[i].month == "Jan"){ jan<< myRecord[i].depName<<"\t"< if (maxjan< myRecord[i].max){ maxjan=myRecord[i].max; maxjanDep=myRecord[i].depName;} //if (minjan myRecord[i].min){ // minjan=myRecord[i].min; //minjanDep=myRecord[i].depName; //} for (k=1;k<=3;k++){ for (j=0;j<2;j++){ if (myRecord[j].minmyRecord[j+1].min){ temp=myRecord[j].min; myRecord[j].min=myRecord[j+1].min; myRecord[j+1].min=temp; minjanDep=myRecord[j].depName; }}} } if (myRecord[i].month == "Feb"){ feb<< myRecord[i].depName<<"\t"< //if (minfebmyRecord[i].min){ //minfeb=myRecord[i].min; //minfebDep=myRecord[i].depName; //} for (k=1;k<=3;k++){ for (j=0;j<2;j++){ if (myRecord[j].minmyRecord[j+1].min){ temp=myRecord[j].min; myRecord[j].min=myRecord[j+1].min; myRecord[j+1].min=temp; minfebDep=myRecord[j+1].depName; }}} } if (myRecord[i].month == "Mar"){ mar<< myRecord[i].depName<<"\t"< if (myRecord[i].month == "Apr"){ apr<< myRecord[i].depName<<"\t"< if (minaprmyRecord[i].min){ minapr=myRecord[i].min; minaprDep=myRecord[i].min;} } if (myRecord[i].month == "May"){ may< if (minmaymyRecord[i].min){ minmay=myRecord[i].min; minmayDep=myRecord[i].depName;} } if (myRecord[i].month == "Jun"){ jun<< myRecord[i].depName<<"\t"< if (minjunemyRecord[i].min){ minjune=myRecord[i].min; minjunDep=myRecord[i].depName;} } i++; myfile.close(); } minmax<<"department that has maximum spending at jan "< else{ cout << "Unable to open file"< } sorry inside that code ue#include should has iostream along with another two #include fstream and string,but at here it was treated as html tag,so i can't type it. my problem here is,I can't seems to get the minimum spending,I've try all I can but I'm still lingering on it,any idea??THANK YOU!
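
    Editor's note: the assignment itself is C++, but the bookkeeping the poster is struggling with (track, per month, the record with the smallest minimum) is language-independent. Purely to illustrate the shape of the solution, here is the bare logic sketched in Java -- read one record at a time and only remember the best record seen so far for each month; the file name is hypothetical:

        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.Scanner;

        public class SpendingReport {
            static class Record {
                String dept, head, month;
                double min, max;
            }

            public static void main(String[] args) throws IOException {
                Map<String, Record> minByMonth = new HashMap<String, Record>();

                Scanner in = new Scanner(new FileReader("report.txt")); // hypothetical input file
                while (in.hasNext()) {
                    Record r = new Record();
                    r.dept = in.next();
                    r.head = in.next();
                    r.month = in.next();
                    r.min = in.nextDouble();
                    r.max = in.nextDouble();

                    // Keep this record only if it beats the current minimum for its month.
                    Record best = minByMonth.get(r.month);
                    if (best == null || r.min < best.min) {
                        minByMonth.put(r.month, r);
                    }
                }
                in.close();

                for (Map.Entry<String, Record> e : minByMonth.entrySet()) {
                    System.out.println(e.getKey() + ": minimum spending is in " + e.getValue().dept);
                }
            }
        }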

    Read the article

  • Manipulating matrix operations (transpose, negation, addition, and multiplication) using functions in C

    - by user292489
    i was trying to manuplate matrices in my input file using functions. my input file is, A 3 3 1 2 3 4 5 6 7 8 9 B 3 3 1 0 0 0 1 0 0 0 1 C 2 3 3 5 8 -1 -2 -3 D 3 5 0 0 0 1 0 1 0 1 0 1 0 1 0 0 1 E 1 1 10 F 3 10 1 0 2 0 3 0 4 0 5 0 0 2 3 -1 -3 -4 -3 8 3 7 0 0 0 4 6 5 8 2 -1 10 i am having trouble in impementing the funcitons that i declared. i assumed my program will perform those operations: transpose, negate, add, and mutiply matices according to the users choise: /* once this program is compliled and excuted, it will perform the basic matrix operations: negation, transpose,a\ ddition, and multiplication. */ #include <stdio.h> #include <stdlib.h> #define MAX 10 int readmatrix(FILE *input, char martixname[6],int , mat[10][10], int i, int j); void printmatrix(char matrixname[6], int mat[10][10], int i, int j); void Negate(char matrixname[6], int mat[10][10], int i, int j); void add(char matrixname[6], int mat[10][10],int i, int k); void multiply(char matrixname[], int mat[][10], char A[], int i, int k); void transpose (char matrixname[], int mat[][10], char A[], int); void printT(int mat[][10], int); int selctoption(); char selectmatrix(); int main(int argc, char *argv[]) { char matrixtype[6]; int mat[][10]; FILE *filein; int size; int optionop; int matrixop; int option; if (argc != 2) { printf("Usage: excutable input.\n"); exit (0); } filein = fopen(argv[1], "r"); if (!filein) { printf("ERROR: input file not found.\n"); exit (0); } size = readmatrix (filein, matrixtype); printmatrix(matrix[][10], size); option = selectoption(); matrixtype = selectmatrix(); //printf("You have: %5.2f ", deposit); optionop = readmatrix(option, matrix[][10], size); if (choiceop == 6) { printf("Thanks for using the matrix operation program.\n"); exit(0); } printf("Please select from the following matrix operations:\n") printf("\t1. Print matrix\n"); printf("\t2. Negate matrix\n"); printf("\t3. Transpose matrix\n"); printf("\t4. Add matrices\n"); printf("\t5. Multiply matrices\n"); printf("\t6. Quit\n"); fclose(filein); return 0; } do { printf("Please select option(1-%d):", optionop); scanf("%d", &matrixop); }while(matrixop <= 0 || matrixop > optionop); void readmatrix (FILE *in, int mat[][10], char A[], int i, int j) { int i=0,j = 0; while (fscanf(in, "%d", &mat[i][j]) != EOF) return 0; } // i would apprtaite anyones feedback. //thank you!

    Read the article

  • bind() fails with Windows socket error 10038

    - by herrturtur
    I'm trying to write a simple program that will receive a string of max 20 characters and print that string to the screen. The code compiles, but I get a bind() failed: 10038. After looking up the error number on msdn (socket operation on nonsocket), I changed some code from int sock; to SOCKET sock which shouldn't make a difference, but one never knows. Here's the code: #include <iostream> #include <winsock2.h> #include <cstdlib> using namespace std; const int MAXPENDING = 5; const int MAX_LENGTH = 20; void DieWithError(char *errorMessage); int main(int argc, char **argv) { if(argc!=2){ cerr << "Usage: " << argv[0] << " <Port>" << endl; exit(1); } // start winsock2 library WSAData wsaData; if(WSAStartup(MAKEWORD(2,0), &wsaData)!=0){ cerr << "WSAStartup() failed" << endl; exit(1); } // create socket for incoming connections SOCKET servSock; if(servSock=socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)==INVALID_SOCKET) DieWithError("socket() failed"); // construct local address structure struct sockaddr_in servAddr; memset(&servAddr, 0, sizeof(servAddr)); servAddr.sin_family = AF_INET; servAddr.sin_addr.s_addr = INADDR_ANY; servAddr.sin_port = htons(atoi(argv[1])); // bind to the local address int servAddrLen = sizeof(servAddr); if(bind(servSock, (SOCKADDR*)&servAddr, servAddrLen)==SOCKET_ERROR) DieWithError("bind() failed"); // mark the socket to listen for incoming connections if(listen(servSock, MAXPENDING)<0) DieWithError("listen() failed"); // accept incoming connections int clientSock; struct sockaddr_in clientAddr; char buffer[MAX_LENGTH]; int recvMsgSize; int clientAddrLen = sizeof(clientAddr); for(;;){ // wait for a client to connect if((clientSock=accept(servSock, (sockaddr*)&clientAddr, &clientAddrLen))<0) DieWithError("accept() failed"); // clientSock is connected to a client // BEGIN Handle client cout << "Handling client " << inet_ntoa(clientAddr.sin_addr) << endl; if((recvMsgSize = recv(clientSock, buffer, MAX_LENGTH, 0)) <0) DieWithError("recv() failed"); cout << "Word in the tubes: " << buffer << endl; closesocket(clientSock); // END Handle client } } void DieWithError(char *errorMessage) { fprintf(stderr, "%s: %d\n", errorMessage, WSAGetLastError()); exit(1); }

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009: On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns. Has that amount of transparency happened? Is there a technet article somewhere? If my score was limited by my Memory subscore of 5.9. A nieve person would suggest: Buy a faster RAM Which is wrong of course. From the Windows help: If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9. You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What i am looking for is attributes required to achieve a certain score, or move beyond a current limitation. The information i've been able to compile so far, chiefly from 3 Windows blog entries, and an article: Memory subscore Score Conditions ======= ================================ 1.0 < 256 MB 2.0 < 500 MB 2.9 <= 512 MB 3.5 < 704 MB 3.9 < 944 MB 4.5 <= 1.5 GB 5.9 < 4.0GB-64MB on a 64-bit OS Windows Vista highest score 7.9 Windows 7 highest score Graphics Subscore Score Conditions ======= ====================== 1.0 doesn't support DX9 1.9 doesn't support WDDM 4.9 does not support Pixel Shader 3.0 5.9 doesn't support DX10 or WDDM1.1 Windows Vista highest score 7.9 Windows 7 highest score Gaming graphics subscore Score Result ======= ============================= 1.0 doesn't support D3D 2.0 supports D3D9, DX9 and WDDM 5.9 doesn't support DX10 or WDDM1.1 Windows Vista highest score 6.0-6.9 good framerates (e.g. 40-50fps) at normal resoltuions (e.g. 1280x1024) 7.0-7.9 even higher framerates at even higher resolutions 7.9 Windows 7 highest score Processor subscore Score Conditions ======= ========================================================================== 5.9 Windows Vista highest score 6.0-6.9 many quad core processors will be able to score in the high 6 low 7 ranges 7.0+ many quad core processors will be able to score in the high 6 low 7 ranges 7.9 8-core systems will be able to approach 8.9 Windows 7 highest score Primary hard disk subscore (note) Score Conditions ======= ======================================== 1.9 Limit for pathological drives that stop responding when pending writes 2.0 Limit for pathological drives that stop responding when pending writes 2.9 Limit for pathological drives that stop responding when pending writes 3.0 Limit for pathological drives that stop responding when pending writes 5.9 highest you're likely to see without SSD Windows Vista highest score 7.9 Windows 7 highest score Bonus Chatter You can find your WEI detailed test results in: C:\Windows\Performance\WinSAT\DataStore e.g. 
2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml <WinSAT> <WinSPR> <DiskScore>5.9</DiskScore> </WinSPR> <Metrics> <DiskMetrics> <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput> <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput> <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness> </DiskMetrics> </Metrics> </WinSAT> Pre-emptive snarky comment: "WEI is useless, it has no relation to reality" Fine, how do i increase my hard-drive's random I/O throughput? Update - Amount of memory limits rating Some people don't believe Microsoft's statement that having less than 4GB of RAM on a 64-bit edition of Windows doesn't limit the rating to 5.9: And from xxx.Formal.Assessment (Recent).WinSAT.xml: <WinSPR> <LimitsApplied> <MemoryScore> <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied> </MemoryScore> </LimitsApplied> </WinSPR> References Windows Vista Team Blog: Windows Experience Index: An In-Depth Look Understand and improve your computer's performance in Windows Vista Engineering Windows 7 Blog: Engineering the Windows 7 “Windows Experience Index”
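
    Editor's note: since the entry points at the raw WinSAT files, here is a small sketch of pulling the numbers back out of one of those DataStore XML files programmatically. The file name is the one quoted above; reading that folder may require elevation, and the exact elements vary between assessment files:

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;

        public class WinSatScore {
            public static void main(String[] args) throws Exception {
                // Example file from the entry above; pick the newest assessment file in the folder.
                File f = new File("C:\\Windows\\Performance\\WinSAT\\DataStore\\"
                        + "2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml");

                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(f);
                XPath xp = XPathFactory.newInstance().newXPath();

                String diskScore = xp.evaluate("/WinSAT/WinSPR/DiskScore", doc);
                String randomRead = xp.evaluate(
                        "/WinSAT/Metrics/DiskMetrics/AvgThroughput[@kind='Random Read']", doc);

                System.out.println("Disk subscore:      " + diskScore);
                System.out.println("Random read (MB/s): " + randomRead);
            }
        }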

    Read the article

  • Overriding Object.Equals() instance method in C#; now Code Analysis / FxCop warning CA2218: "should also redefine GetHashCode"

    - by Chris W. Rea
    I've got a complex class in my C# project on which I want to be able to do equality tests. It is not a trivial class; it contains a variety of scalar properties as well as references to other objects and collections (e.g. IDictionary). For what it's worth, my class is sealed. To enable a performance optimization elsewhere in my system (an optimization that avoids a costly network round-trip), I need to be able to compare instances of these objects to each other for equality – other than the built-in reference equality – and so I'm overriding the Object.Equals() instance method. However, now that I've done that, Visual Studio 2008's Code Analysis a.k.a. FxCop, which I keep enabled by default, is raising the following warning: warning : CA2218 : Microsoft.Usage : Since 'MySuperDuperClass' redefines Equals, it should also redefine GetHashCode. I think I understand the rationale for this warning: If I am going to be using such objects as the key in a collection, the hash code is important. i.e. see this question. However, I am not going to be using these objects as the key in a collection. Ever. Feeling justified to suppress the warning, I looked up code CA2218 in the MSDN documentation to get the full name of the warning so I could apply a SuppressMessage attribute to my class as follows: [SuppressMessage("Microsoft.Naming", "CA2218:OverrideGetHashCodeOnOverridingEquals", Justification="This class is not to be used as key in a hashtable.")] However, while reading further, I noticed the following: How to Fix Violations To fix a violation of this rule, provide an implementation of GetHashCode. For a pair of objects of the same type, you must ensure that the implementation returns the same value if your implementation of Equals returns true for the pair. When to Suppress Warnings ----- Do not suppress a warning from this rule. [arrow & emphasis mine] So, I'd like to know: Why shouldn't I suppress this warning as I was planning to? Doesn't my case warrant suppression? I don't want to code up an implementation of GetHashCode() for this object that will never get called, since my object will never be the key in a collection. If I wanted to be pedantic, instead of suppressing, would it be more reasonable for me to override GetHashCode() with an implementation that throws a NotImplementedException? Update: I just looked this subject up again in Bill Wagner's good book Effective C#, and he states in "Item 10: Understand the Pitfalls of GetHashCode()": If you're defining a type that won't ever be used as the key in a container, this won't matter. Types that represent window controls, web page controls, or database connections are unlikely to be used as keys in a collection. In those cases, do nothing. All reference types will have a hash code that is correct, even if it is very inefficient. [...] In most types that you create, the best approach is to avoid the existence of GetHashCode() entirely. ... that's where I originally got this idea that I need not be concerned about GetHashCode() always.
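
    Editor's note: for readers wondering what the fix the rule asks for would actually look like, the contract is the same one Java attaches to Object.hashCode(): whatever fields equality compares, the hash must be derived from (a subset of) those same fields, so that equal objects always hash alike. A minimal sketch of the pattern, written here in Java terms with invented fields since the members of MySuperDuperClass aren't shown:

        public final class SuperDuper {
            private final int id;
            private final String name; // assumed non-null for brevity

            public SuperDuper(int id, String name) {
                this.id = id;
                this.name = name;
            }

            @Override
            public boolean equals(Object o) {
                if (this == o) return true;
                if (!(o instanceof SuperDuper)) return false;
                SuperDuper other = (SuperDuper) o;
                return id == other.id && name.equals(other.name);
            }

            @Override
            public int hashCode() {
                // Derived from the same fields equals() compares, so equal objects hash alike;
                // cheap to write even if the object is never used as a hash key.
                return 31 * id + name.hashCode();
            }
        }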

    Read the article

  • "Row not found or changed" Problem

    - by winston schröder
    Hi there, I'm working on a SQL CE Database and get into the "Row nor found or changed" exception. The exception only occurs when I try to update. On the first Run after the insert it shows up a MemberChangeConflict which says, that my Column Created_at has in all three values (current, original, database) the same. But in a second attempt it doesn't appear anymore. The DataContext is instanciated on Startup and freed on Exit of my Local(!) Application. I use a sqlmetal generated mapping and code file. In the map I added some Associations and set the timemstamp columns UpdateCheck property to Always while all other have the setting never. The Timestamp Column is marked as isVersion="true", the Id Column as Primary Key. Since I don't dispose the datacontext I expected to be using implicit transaction. When I run the SubmitChanges Method within a TransactionScope. Can anyone tell me how I can update the timestamp within the code ? I know about the Problems one has to deal with if you dispose the datacontext. So I decided not to do this since I use a Single User Local DB Cache File. (I did already use a version where I disposed the datacontext after every usage, but this version had a real bad performance and error rate, so I decided to choose the other variant.) LibDB.Client.Vehicles tmp = null; try { tmp = e.Parameter as LibDB.Client.Vehicles; if (tmp == null) return; if (!this._dc.Vehicles.Contains(tmp)) { this._dc.Vehicles.Attach(tmp); } this.ShowChangesReport(this._dc.GetChangeSet()); using (TransactionScope ts = new TransactionScope()) { try { this._dc.SubmitChanges(); ts.Complete(); } catch (ChangeConflictException cce) { Console.WriteLine("Optimistic concurrency error."); Console.WriteLine(cce.Message); Console.ReadLine(); foreach (ObjectChangeConflict occ in this._dc.ChangeConflicts) { MetaTable metatable = this._dc.Mapping.GetTable(occ.Object.GetType()); LibDB.Client.Vehicles entityInConflict = (LibDB.Client.Vehicles)occ.Object; Console.WriteLine("Table name: {0}", metatable.TableName); Console.Write("Vin: "); Console.WriteLine(entityInConflict.Vin); foreach (MemberChangeConflict mcc in occ.MemberConflicts) { object currVal = mcc.CurrentValue; object origVal = mcc.OriginalValue; object databaseVal = mcc.DatabaseValue; MemberInfo mi = mcc.Member; Console.WriteLine("Member: {0}", mi.Name); Console.WriteLine("current value: {0}", currVal); Console.WriteLine("original value: {0}", origVal); Console.WriteLine("database value: {0}", databaseVal); } throw cce; } } catch (Exception ex) { this.ShowChangeConflicts(this._dc.ChangeConflicts); Console.WriteLine(ex.Message); } } this.ShowChangesReport(this._dc.GetChangeSet());

    Read the article

< Previous Page | 279 280 281 282 283 284 285 286 287 288 289 290  | Next Page >