Search Results

Search found 5137 results on 206 pages for 'i like traffic lights'.


  • How do I configure a C# web service client to send HTTP request header and body in parallel?

    - by Christopher
    Hi, I am using a traditional C# web service client generated in VS2008 / .NET 3.5, inheriting from SoapHttpClientProtocol. It connects to a remote web service written in Java. All configuration is done in code during client initialization, and can be seen below:

        ServicePointManager.Expect100Continue = false;
        ServicePointManager.DefaultConnectionLimit = 10;
        var client = new APIService {
            EnableDecompression = true,
            Url = _url + "?guid=" + Guid.NewGuid(),
            Credentials = new NetworkCredential(user, password, null),
            PreAuthenticate = true,
            Timeout = 5000 // 5 sec
        };

    It all works fine, but the time taken to execute the simplest method call is almost double the network ping time, whereas a Java test client takes roughly the same as the network ping time:

        C# client     ~ 550 ms
        Java client   ~ 340 ms
        Network ping  ~ 300 ms

    After analyzing the TCP traffic for a session I discovered the following. The C# client sent TCP packets in this sequence:

        1. Client sends HTTP headers in one packet.
        2. Client waits for TCP ACK from server.
        3. Client sends HTTP body in one packet.
        4. Client waits for TCP ACK from server.

    The Java client sent TCP packets in this sequence:

        1. Client sends HTTP headers in one packet.
        2. Client sends HTTP body in one packet.
        3. Client receives ACK for first packet.
        4. Client receives ACK for second packet.

    Is there any way to configure the C# web service client to send the header and body back-to-back as the Java client appears to? Any help or pointers much appreciated.
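    The "send, wait for ACK, send" pattern above is the classic symptom of Nagle's algorithm colliding with delayed ACKs when a request is written as two small writes (headers, then body). On the .NET side, setting ServicePointManager.UseNagleAlgorithm = false alongside Expect100Continue = false is worth trying. As a hedged, language-neutral illustration of why disabling Nagle removes the extra round trip, here is a minimal Python sketch (the endpoint and payload are placeholders, not taken from the question):

        # Minimal sketch: a request issued as two small writes, with Nagle
        # optionally disabled. With Nagle on, the second small send() is
        # typically held back until the first segment is acknowledged.
        import socket

        HOST, PORT = "service.example.com", 80   # hypothetical endpoint

        def post_in_two_writes(disable_nagle: bool) -> bytes:
            with socket.create_connection((HOST, PORT), timeout=5) as s:
                if disable_nagle:
                    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
                body = b"<soap:Envelope>...</soap:Envelope>"
                headers = (
                    "POST /api HTTP/1.1\r\n"
                    f"Host: {HOST}\r\n"
                    "Content-Type: text/xml\r\n"
                    f"Content-Length: {len(body)}\r\n"
                    "Connection: close\r\n"
                    "\r\n"
                ).encode("ascii")
                s.sendall(headers)  # small write #1: the header packet
                s.sendall(body)     # small write #2: the body packet
                chunks = []
                while True:
                    chunk = s.recv(4096)
                    if not chunk:
                        break
                    chunks.append(chunk)
                return b"".join(chunks)

    Timing the call with disable_nagle=True versus False against a high-latency host should show roughly one round trip of difference, which matches the gap between the C# and Java clients observed above.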

    Read the article

  • Flush kernel's TCP buffer with `MSG_MORE`-flagged packets

    - by timn
    send()'s man page reveals the MSG_MORE flag, which is asserted to act like TCP_CORK. I have a wrapper function around send():

        int SocketConnection_Write(SocketConnection *this, void *buf, int len) {
            errno = 0;

            int sent = send(this->fd, buf, len, MSG_NOSIGNAL);

            if (errno == EPIPE || errno == ENOTCONN) {
                throw(exc, &SocketConnection_NotConnectedException);
            } else if (errno == ECONNRESET) {
                throw(exc, &SocketConnection_ConnectionResetException);
            } else if (sent != len) {
                throw(exc, &SocketConnection_LengthMismatchException);
            }

            return sent;
        }

    Assuming I want to use the kernel buffer, I could go with TCP_CORK, enable it whenever necessary and then disable it to flush the buffer. But on the other hand, that creates the need for an additional system call. Thus, using MSG_MORE seems more appropriate to me. I'd simply change the above send() line to:

        int sent = send(this->fd, buf, len, MSG_NOSIGNAL | MSG_MORE);

    According to lwn.net, packets will be flushed automatically if they are large enough:

        If an application sets that option on a socket, the kernel will not send out
        short packets. Instead, it will wait until enough data has shown up to fill a
        maximum-size packet, then send it. When TCP_CORK is turned off, any remaining
        data will go out on the wire.

    But this section only refers to TCP_CORK. Now, what is the proper way to flush MSG_MORE packets? I can only think of two possibilities:

        1. Call send() with an empty buffer and without MSG_MORE being set
        2. Re-apply the TCP_CORK option as described on this page

    Unfortunately the whole topic is very poorly documented and I couldn't find much on the Internet. I am also wondering how to check that everything works as expected. Obviously running the server through `strace' is not an option, so the simplest way would be to use `netcat' and then look at its `strace' output? Or will the kernel handle traffic differently when it is transmitted over a loopback interface?
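    For what it's worth, the cork/flush cycle the LWN quote describes can be sketched from user space without committing to the C wrapper above. A minimal, Linux-only Python sketch, assuming the platform exposes socket.TCP_CORK and socket.MSG_MORE, and assuming a hypothetical peer on localhost:8000:

        # Minimal Linux-only sketch: batch several small writes, then flush.
        # Corking holds partial segments in the kernel; clearing TCP_CORK (or
        # sending one final chunk without MSG_MORE) lets the buffered data out.
        import socket

        def corked_send(parts, host="127.0.0.1", port=8000):   # hypothetical endpoint
            with socket.create_connection((host, port)) as s:
                s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)   # start batching
                for part in parts:
                    s.sendall(part)                                    # stays in the kernel buffer
                s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)   # flush to the wire

        def msg_more_send(parts, host="127.0.0.1", port=8000):
            # MSG_MORE variant; the constant is only exposed on Linux builds.
            with socket.create_connection((host, port)) as s:
                for part in parts[:-1]:
                    s.send(part, socket.MSG_MORE)   # hint: more data is coming
                s.send(parts[-1])                   # final send without the flag flushes

        if __name__ == "__main__":
            corked_send([b"HTTP/1.0 200 OK\r\n", b"Content-Length: 2\r\n\r\n", b"ok"])

    This mirrors the second option in the list above: the last write issued without MSG_MORE (or clearing TCP_CORK) is what pushes the pending partial segment out. Watching the peer with tcpdump on the loopback interface still shows real segment boundaries, so it is a reasonable way to verify the behaviour.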

    Read the article

  • Using PHP cURL with an HTTP Debugging Proxy

    - by Kane
    I'm using the app "Fiddler" to debug a GET attempt to a website via PHP cURL. In order to see the cURL traffic I had to specify that the cURL connection use the Fiddler proxy (see code below). $ch = curl_init(); curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, 1); curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:8888'); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5); curl_setopt($ch, CURLOPT_TIMEOUT, 10); curl_setopt($ch, CURLOPT_HEADERFUNCTION, 'read_header'); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_USERAGENT, $user_agent); curl_setopt($ch, CURLOPT_REFERER, "http://domain.com"); curl_setopt($ch, CURLOPT_HTTPHEADER, $headers); curl_setopt($ch, CURLOPT_COOKIEJAR, "my_cookies.txt"); curl_setopt($ch, CURLOPT_COOKIEFILE, "my_cookies.txt"); curl_setopt($ch, CURLOPT_URL, "http://domain.com"); $response = curl_exec($ch); But the problem is that in Fiddler I can only see this: Request (domain.com is just an alias): CONNECT domain.com:80 HTTP/1.1 Response: HTTP/1.1 200 Blind-Connection Established If I manually load the website in a browser Fiddler gives me WAY more information. I can see the cookies, the header information, and what I'm receiving via the GET. Any ideas why Fiddler can't see more useful information from PHP cURL? Edit: I tried turning on the "Enable HTTPS Decryption" option inside Tools / Fiddler Options / HTTPS (which I'm not sure why I'd need to use as I didn't tell cURL to use HTTPS). Unfortunately, by changing this setting I now get a Response of: HTTP/1.1 502 Connection failed Edit: If it helps, the app "Charles" shows me WAY more information than Fiddler, but I really want to figure out Fiddler since I like it better.

    Read the article

  • Flash movies in inactive browser tabs pause or don't execute in real time

    - by ZenBlender
    I'm noticing some unexpected behavior. Some time in the last few months, a change in either Firefox, the Flash player, or both, has made it so that Flash movies in inactive browser tabs no longer execute in real time. They appear to still execute, but only in bursts, and not in a predictable way.

    This is a problem because I develop a Flash-based (ActionScript 2.0, Flash CS3) multiplayer game that maintains a network connection and allows players to chat, etc. Many of our players complain about Firefox crashing while playing the game. I have noticed it too; not too frequently, but it crashes several times a week. (Firefox crashes; I do not get a message from the Flash player that indicates an infinite loop or a problem in my code.)

    My theory is that this new behavior is causing crashes when there is a lot of activity in my game, leading to lots of unhandled network traffic for my game getting buffered before Firefox/Flash gives it a chance to execute. Maybe this leads to a buffer overflow or missing packets, and as a result, something crashes. At times I will switch back to the tab that is running my game and discover a display bug, which looks as though Flash has simply failed to execute something that it was supposed to.

    I would assume this new behavior is on purpose, for example to prevent all the Flash-based advertisements in inactive tabs from executing and therefore killing performance. In a quick test on Chrome (5.0.342.9 beta), this "pausing" of Flash seems to be there as well, but somehow it seems much less of a problem. My users have only complained about Firefox crashing, not other browsers.

    My machine:

        Windows 7 x64
        Firefox 3.6.3
        Flash Player 10.1.50.426

    My game: triplejack.com

    Any ideas? Ideally I'd like to disable this behavior for my Flash game so it can execute in real time even when in an inactive tab. Thanks for any help!

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design solution to implement most of the business logic in CLR stored procedures?

    I have read much about them recently but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not.

    For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to these numbers (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc.

    This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.

        - The Data Layer would access the database.
        - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
        - The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking of this design, I can see that most of the business rules may be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution.

    Pros:

        - The business logic runs close to the data, meaning less network traffic.
        - All data is processed at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:

        - Scattering of the business logic: some part is here, some part is there.
        - Questionable design solution; may encounter unknown problems.
        - Difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?

    Read the article

  • Make a local-only daemon listen on a different interface (using iptables port forwarding)?

    - by UniIsland
    I have a daemon program which listens on 127.0.0.1:8000. I need to access it when I connect to my box over VPN, so I want it to listen on the ppp0 interface too.

    I've tried the "ssh -L" method. It works, but I don't think it's the right way to do it, since it leaves an extra ssh process running in the background.

    I tried the "netcat" method. It exits when the connection is closed, so it is not a valid way of "listening".

    I also tried several iptables rules. None of them worked. I'm not listing here all the rules I've used, but for example:

        iptables -A FORWARD -j ACCEPT
        iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 8000 -j DNAT --to-destination 127.0.0.1:8000

    The above ruleset doesn't work. I have net.ipv4.ip_forward set to 1.

    Does anyone know how to redirect traffic from the ppp interface to lo? Say, listen on "192.168.45.1:8000 (ppp0)" as well as "127.0.0.1:8000 (lo)". There's no need to alter the port. Thanks.
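    The DNAT rule above tends to fail because the kernel refuses to route packets arriving on ppp0 to a 127.0.0.1 destination (loopback destinations from other interfaces are treated as martian). Newer kernels expose a route_localnet sysctl for exactly this case; the other common answers are binding the daemon to 0.0.0.0, or running a small user-space relay, which is what "ssh -L" does minus the ssh process. A hedged Python sketch of the relay approach (addresses taken from the question):

        # Minimal user-space relay: accept on the VPN-facing address and forward
        # each connection to the loopback-only daemon on 127.0.0.1:8000.
        import socket
        import threading

        LISTEN_ADDR = ("192.168.45.1", 8000)   # ppp0 address from the question
        TARGET_ADDR = ("127.0.0.1", 8000)      # the loopback-only daemon

        def pipe(src, dst):
            try:
                while True:
                    data = src.recv(4096)
                    if not data:
                        break
                    dst.sendall(data)
            except OSError:
                pass
            finally:
                dst.close()

        def handle(client):
            upstream = socket.create_connection(TARGET_ADDR)
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

        def main():
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(LISTEN_ADDR)
            srv.listen(16)
            while True:
                client, _ = srv.accept()
                handle(client)

        if __name__ == "__main__":
            main()

    This is only a sketch (no logging, no shutdown handling), but it keeps the daemon itself bound to loopback while exposing it on the VPN address.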

    Read the article

  • Correct escaping of delimited identifiers in SQL Server without using QUOTENAME

    - by Ross Bradbury
    Is there anything else that the code must do to sanitize identifiers (table, view, column) other than to wrap them in double quotation marks and "double up" any double quotation marks present in the identifier name? References would be appreciated.

    I have inherited a code base that has a custom object-relational mapping (ORM) system. SQL cannot be written in the application, but the ORM must still eventually generate the SQL to send to SQL Server. All identifiers are quoted with double quotation marks:

        string QuoteName(string identifier)
        {
            return "\"" + identifier.Replace("\"", "\"\"") + "\"";
        }

    If I were building this dynamic SQL in SQL, I would use the built-in SQL Server QUOTENAME function:

        declare @identifier nvarchar(128);
        set @identifier = N'Client"; DROP TABLE [dbo].Client; --';

        declare @delimitedIdentifier nvarchar(258);
        set @delimitedIdentifier = QUOTENAME(@identifier, '"');

        print @delimitedIdentifier;
        -- "Client""; DROP TABLE [dbo].Client; --"

    I have not found any definitive documentation about how to escape quoted identifiers in SQL Server. I have found Delimited Identifiers (Database Engine) and I also saw this Stack Overflow question about sanitizing. If I were to have to call the QUOTENAME function just to quote the identifiers, that is a lot of traffic to SQL Server that should not be needed.

    The ORM seems to be pretty well thought out with regard to SQL injection. It is in C# and predates the nHibernate port and Entity Framework, etc. All user input is sent using ADO.NET SqlParameter objects; it is just the identifier names that I am concerned about in this question. This needs to work on SQL Server 2005 and 2008.
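    For what it's worth, the doubling rule in QuoteName above matches what QUOTENAME does, with one extra guard worth copying: QUOTENAME returns NULL for identifiers longer than 128 characters (sysname), so rejecting over-long names client-side keeps the two in step. A minimal Python sketch of the same logic, including the bracket-delimited variant, purely for illustration:

        # Hedged sketch of the doubling rule QUOTENAME applies, in client code.
        # Assumes identifiers are delimited with double quotes (QUOTED_IDENTIFIER ON);
        # the bracket variant is included for comparison.
        def quote_name(identifier: str, quote: str = '"') -> str:
            if identifier is None or len(identifier) > 128:       # mirror sysname's limit
                raise ValueError("invalid identifier")
            if quote == '"':
                return '"' + identifier.replace('"', '""') + '"'
            if quote == '[':
                return '[' + identifier.replace(']', ']]') + ']'  # only ] needs doubling
            raise ValueError("unsupported delimiter")

        assert quote_name('Client"; DROP TABLE [dbo].Client; --') == \
            '"Client""; DROP TABLE [dbo].Client; --"'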

    Read the article

  • Optimized Publish/Subscribe JMS Broker Cluster and Conflicting Posts on Stack Overflow for the Answer

    - by Gene
    Hi,

    I am looking to build a publish/subscribe distributed messaging framework that can manage huge volumes of message traffic with some intelligence at the broker level. I don't know if there's a topology that describes this, but this is the model I'm going after:

    EXAMPLE MODEL A

    A) There are two running message brokers (ideally all on localhost if possible, for easier demo-ing): Broker-A and Broker-B.

    B) Each broker will have 2 listeners and 1 publisher. Example figure:

        [subscriber A1, subscriber A2, publisher A1] <-- BrokerA <-- BrokerB <-- [publisher B1, subscriber B1, subscriber B2]

    IF a message-X is published to Broker-A and there are no subscribers for it among the listeners on Broker-B (via criteria in message selectors or broker routing rules), then that message-X will never be published to Broker-B. ELSE, Broker-A will publish the message to Broker-B, where one of the Broker-B listeners/subscribers/services is expecting that message based on the subscription criteria.

    Is clustering the correct approach? At first, I concluded that the "broker clustering" concept is what I needed to support this. However, as I have come to understand it, the typical use of clustering entails either:

        - message redundancy across all brokers, or
        - the Competing Consumers pattern

    and neither of these satisfies the requirement in EXAMPLE MODEL A.

    What is the correct approach? My question is, does anyone know of a JMS implementation that supports the model I described? I scanned through all the Stack Overflow post titles for the search "JMS and Cluster". I found these two informative, but seemingly conflicting posts.

    Says EXAMPLE MODEL A is/should be implicitly supported:
    http://stackoverflow.com/questions/2255816/jms-consumer-with-activemq-network-of-brokers
    "this means you pick a broker, connect to it, and let the broker network sort it out amongst themselves. In theory."

    Says EXAMPLE MODEL A is NOT supported:
    http://stackoverflow.com/questions/2017520/how-does-a-jms-topic-subscriber-in-a-clustered-application-server-recieve-message
    "All the instances of PropertiesSubscriber running on different app servers WILL get that message."

    Any suggestions would be greatly appreciated. Thanks very much for reading my post,
    Gene

    Read the article

  • How to stop MVC caching the results of invoking an action method?

    - by Trey Carroll
    I am experiencing a problem with IE caching the results of an action method.

    Other articles I found were related to security and the [Authorize] attribute. This problem has nothing to do with security. This is a very simple "record a vote, grab the average, return the avg and the number of votes" method. The only slightly interesting thing about it is that it is invoked via Ajax and returns a Json object. I believe that it is the Json object that is getting cached.

    When I run it from Firefox and watch the XHR traffic with Firebug, everything works perfectly. However, under IE 8 the "throbber" graphic never has time to show up, and the page elements that display the "new" avg and count that are being injected into the page with jQuery are never different.

    I need a way to tell MVC to never cache this action method. This article seems to address the problem, but I cannot understand it: http://stackoverflow.com/questions/1441467/prevent-caching-of-attributes-in-asp-net-mvc-force-attribute-execution-every-tim

    I need a bit more context for the solution to understand how to extend AuthorizationAttribute. Please address your answer as if you were speaking to someone who lacks a deep understanding of MVC, even if that means replying with an article on some basics/prerequisites that are required.

    Thanks,
    Trey Carroll
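    The behaviour described is IE aggressively caching Ajax GET responses unless the response explicitly forbids it; the commonly cited fixes are [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")] on the action, or cache: false in the jQuery $.ajax call, both of which boil down to the same HTTP caching headers. As a framework-neutral illustration, here is a hedged Python sketch of the headers a non-cacheable JSON response should carry (data and port are placeholders):

        # Minimal sketch of the response headers that stop IE from caching an Ajax GET.
        # The framework differs (ASP.NET MVC in the question), but the headers are the fix.
        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class VoteHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                payload = json.dumps({"avg": 4.2, "votes": 17}).encode()   # placeholder data
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Cache-Control", "no-cache, no-store, must-revalidate")
                self.send_header("Pragma", "no-cache")   # legacy IE / HTTP 1.0 proxies
                self.send_header("Expires", "0")
                self.send_header("Content-Length", str(len(payload)))
                self.end_headers()
                self.wfile.write(payload)

        if __name__ == "__main__":
            HTTPServer(("127.0.0.1", 8080), VoteHandler).serve_forever()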

    Read the article

  • Multiple connections in a single SSH SOCKS 5 Proxy

    - by Elie Zedeck
    Hey guys,

    My first question here on Stack Overflow: what do I need to do so that the SSH SOCKS 5 proxy (SSH2) will allow multiple connections?

    What I have noticed is that when I load a page in Firefox (already configured to use the SOCKS 5 proxy), it loads everything one by one. It can be perceived with the bare eye, and I also confirmed it through the use of Firebug's NET tab, which logs the connections that have been made. I have already configured some of the directives in the about:config page, like pipelining, persistent proxy connections, and a few other things, but I still get this kind of sequential load of resources, which is noticeably very slow.

        network.http.pipelining;true
        network.http.pipelining.maxrequests;8
        network.http.pipelining.ssl;true
        network.http.proxy.pipelining;true
        network.http.max-persistent-connections-per-proxy;100
        network.proxy.socks_remote_dns;true

    My ISP sucks because during the day it intentionally breaks connections on a random basis, and so it is impossible to actually accomplish meaningful work without a lot of browser refreshes or hitting the F5 key. That is why I started to look for solutions to this. SSH's dynamic port forwarding is the best solution I have found to date, because it has some pretty good compression which saves a lot of useless traffic, and it is also secure. The only thing remaining is to get it to have multiple connections running through it.

    Thanks for all the inputs.

    Read the article

  • Can nginx be used as a reverse proxy for a backend websocket server?

    - by John Reilly
    We're working on a Ruby on Rails app that needs to take advantage of html5 websockets. At the moment, we have two separate "servers" so to speak: our main app running on nginx+passenger, and a separate server using Pratik Naik's Cramp framework (which is running on Thin) to handle the websocket connections. Ideally, when it comes time for deployment, we'd have the rails app running on nginx+passenger, and the websocket server would be proxied behind nginx, so we wouldn't need to have the websocket server running on a different port. Problem is, in this setup it seems that nginx is closing the connections to Thin too early. The connection is successfully established to the Thin server, then immediately closed with a 200 response code. Our guess is that nginx doesn't realize that the client is trying to establish a long-running connection for websocket traffic. Admittedly, I'm not all that savvy with nginx config, so, is it even possible to configure nginx to act as a reverse proxy for a websocket server? Or do I have to wait for nginx to offer support for the new websocket handshake stuff? Assuming that having both the app server and the websocket server listening on port 80 is a requirement, might that mean I have to have Thin running on a separate server without nginx in front for now? Thanks in advance for any advice or suggestions. :) -John

    Read the article

  • What are the attack vectors for passwords sent over http?

    - by KevinM
    I am trying to convince a customer to pay for SSL for a web site that requires login. I want to make sure I correctly understand the major scenarios in which someone can see the passwords that are being sent.

    My understanding is that anyone at any of the hops along the way can use a packet analyzer to view what is being sent. This seems to require that the hacker (or their malware/botnet) be on the same subnet as one of the hops the packet takes to arrive at its destination. Is that right?

    Assuming some flavor of this subnet requirement holds true, do I need to worry about all the hops or just the first one? The first one I can obviously worry about if the user is on a public Wi-Fi network, since anyone could be listening in. Should I be worried about what's going on in the subnets that packets travel across beyond this? I don't know a ton about network traffic, but I would assume it's flowing through the data centers of major carriers and there aren't a lot of juicy attack vectors there, but please correct me if I am wrong.

    Are there other vectors to be worried about outside of someone listening with a packet analyzer?

    I am a networking and security noob, so please feel free to set me straight if I am using the wrong terminology in any of this.

    Read the article

  • Why do people say that Ruby is slow?

    - by stephen murdoch
    I like Ruby on Rails and I use it for all my web development projects. A few years ago there was a lot of talk about Rails being a memory hog and about how it didn't scale very well but these suggestions were put to bed by Gregg Pollack here. Lately though, I've been hearing people saying that Ruby itself is slow. Why is Ruby considered slow? I do not find Ruby to be slow but then again, I'm just using it to make simple CRUD apps and company blogs. What sort of projects would I need to be doing before I find Ruby becoming slow? Or is this slowness just something that affects all programming languages? What are your options as a Ruby programmer if you want to deal with this "slowness"? Which version of Ruby would best suit an application like Stack Overflow where speed is critical and traffic is intense? The questions are subjective, and I realise that architectural setup (EC2 vs standalone servers etc) makes a big difference but I'd like to hear what people think about Ruby being slow. Finally, I can't find much news on Ruby 2.0 - I take it we're a good few years away from that then?

    Read the article

  • What is the correct way to implement a massive hierarchical, geographical search for news?

    - by Philip Brocoum
    The company I work for is in the business of sending press releases. We want to make it possible for interested parties to search for press releases based on a number of criteria, the most important being location. For example, someone might search for all news sent to New York City, Massachusetts, or ZIP code 89134, sent from a governmental institution, under the topic of "traffic". Or whatever. The problem is, we've sent, literally, hundreds of thousands of press releases. Searching is slow and complex. For example, a press release sent to Queens, NY should show up in the search I mentioned above even though it wasn't specifically sent to New York City, because Queens is a subset of New York City. We may also want to implement "and" and "or" and negation and text search to the query to create complex searches. These searches also have to be fast enough to function as dynamic RSS feeds. I really don't know anything about search theory, or how it's properly done. The way we are getting by right now is using a data mart to store the locations the releases were sent to in a single table. However, because of the subset thing mentioned above, the data mart is gigantic with millions of rows. And we haven't even implemented cities yet, and there are about 50,000 cities in the United States, which will exponentially increase the size of the data mart by so much I'm afraid it just won't work anymore. Anyway, I realize this is not a simple question and there won't be a "do this" answer. However, I'm hoping one of you can point me in the right direction where I can learn about how massive searches are done? Because I really know nothing about it. And such a search engine is turning out to be incredibly difficult to make. Thanks! I know there must be a way because if Google can search the entire internet we must be able to search our own database :-)
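    One standard way to handle the "Queens is inside New York City" containment problem without exploding a data mart is to index each release under the full ancestor chain of every location it was sent to (a materialized-path or closure-style index), so a query for any level of the hierarchy becomes a direct lookup, and other criteria combine by intersection. A hedged Python sketch of the idea; the location paths, ids, and topics are made up for illustration:

        # Sketch: index each release under every ancestor of its target locations,
        # so searching any level of the hierarchy (country, state, city, borough,
        # ZIP) is a direct set lookup, and criteria combine by set intersection.
        from collections import defaultdict

        # Hypothetical ancestor chains; in practice these come from a geo hierarchy table.
        ANCESTORS = {
            "Queens": ["US", "US/NY", "US/NY/New York City", "US/NY/New York City/Queens"],
            "89134":  ["US", "US/NV", "US/NV/Las Vegas", "US/NV/Las Vegas/89134"],
        }

        by_location = defaultdict(set)   # location path -> release ids
        by_topic = defaultdict(set)      # topic -> release ids

        def index_release(release_id, locations, topics):
            for loc in locations:
                for path in ANCESTORS[loc]:
                    by_location[path].add(release_id)
            for topic in topics:
                by_topic[topic].add(release_id)

        def search(location_path, topic):
            return by_location[location_path] & by_topic[topic]

        index_release(101, ["Queens"], ["traffic"])
        index_release(102, ["89134"], ["traffic"])
        print(search("US/NY/New York City", "traffic"))   # {101}: Queens is inside NYC

    The same shape maps onto an inverted index in a search engine (Lucene/Solr style) or onto a closure table in SQL; either way the row count grows with the depth of the hierarchy per release, not with the number of cities in the country.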

    Read the article

  • How to protect/monitor your site against crawling by a malicious user

    - by deathy
    Situation:

        - Site with content protected by username/password (not all controlled, since they can be trial/test users)
        - A normal search engine can't get at it because of the username/password restrictions
        - A malicious user can still log in and pass the session cookie to a "wget -r" or something else

    The question would be: what is the best solution to monitor such activity and respond to it (considering the site policy is no crawling/scraping allowed)?

    I can think of some options:

        1. Set up some traffic monitoring solution to limit the number of requests for a given user/IP.
        2. Related to the first point: automatically block some user-agents. (Evil :))
        3. Set up a hidden link that, when accessed, logs out the user and disables his account. (Presumably this would not be accessed by a normal user, since he wouldn't see it to click it, but a bot will crawl all links.)

    For point 1: do you know of a good already-implemented solution? Any experiences with it? One problem would be that some false positives might show up for very active but human users.

    For point 3: do you think this is really evil? Or do you see any possible problems with it? Also accepting other suggestions.
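    For point 1, the usual building block is a sliding-window counter keyed by user or IP, checked on every request before the page is served; requests above the threshold get throttled or flagged for review rather than hard-blocked, which keeps the false positives for very active humans manageable. A minimal Python sketch (thresholds are illustrative, not recommendations):

        # Minimal sliding-window rate limiter keyed by user id or IP.
        import time
        from collections import defaultdict, deque

        WINDOW_SECONDS = 60
        MAX_REQUESTS = 120          # ~2 pages/second sustained starts to look bot-like

        _hits = defaultdict(deque)  # key -> timestamps of recent requests

        def allow_request(key, now=None):
            now = time.time() if now is None else now
            window = _hits[key]
            while window and window[0] <= now - WINDOW_SECONDS:
                window.popleft()                 # drop hits outside the window
            if len(window) >= MAX_REQUESTS:
                return False                     # over the limit: throttle or flag
            window.append(now)
            return True

        # usage in a request handler (pseudo):
        #   if not allow_request(current_user_id or client_ip):
        #       return respond_with_429_or_flag_for_review()

    In production this state usually lives in memcached or the database rather than process memory, but the window logic is the same.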

    Read the article

  • Updating a Safari Extension?

    - by Ricky Romero
    Hi there,

    I'm writing a simple Safari Extension, and I'm trying to figure out how to get the update mechanism working. Apple's documentation here is delightfully vague:
    http://developer.apple.com/safari/library/documentation/Tools/Conceptual/SafariExtensionGuide/UpdatingExtensions/UpdatingExtensions.html

    And here's my manifest, based on that documentation:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Extension Updates</key>
            <array>
                <dict>
                    <key>CFBundleIdentifier</key>
                    <string>net.rickyromero.safari.shutup</string>
                    <key>Team Identifier</key>
                    <string>TMM5P68287</string>
                    <key>CFBundleVersion</key>
                    <string>1</string>
                    <key>CFBundleShortVersionString</key>
                    <string>1.0</string>
                    <key>URL</key>
                    <string>http://rickyromero.net/misc/SafariExtensions/ShutUp.safariextz</string>
                </dict>
            </array>
        </dict>
        </plist>

    I don't know where to get "YourCertifcateID," for example. And when I increment the values for CFBundleVersion and CFBundleShortVersionString, it doesn't trigger an update. I know Safari is hitting my manifest, though, because I'm watching HTTP traffic.

    Thoroughly stumped. Any ideas, guys?

    Read the article

  • RIA: how to get functionality, not data

    - by Budda
    On the server side I have the following class:

        public class Customer
        {
            [Key]
            public int Id { get; set; }

            public string FirstName { get; set; }
            public string SecondName { get; set; }

            public string FullName
            {
                get { return string.Concat(FirstName, " ", SecondName); }
            }
        }

    The problem is that each field is calculated and transferred to the client, for example the 'FullName' property:

        [DataMember()]
        [Editable(false)]
        [ReadOnly(true)]
        public string FullName
        {
            get { return this._fullName; }
            set
            {
                if ((this._fullName != value))
                {
                    this.ValidateProperty("FullName", value);
                    this.OnFullNameChanging(value);
                    this._fullName = value;
                    this.RaisePropertyChanged("FullName");
                    this.OnFullNameChanged();
                }
            }
        }

    Instead of transferring the data (which consumes traffic and in some cases introduces significant overhead), I would like to have the calculation done on the client side. Is this possible without manual duplication of the property implementation?

    Thank you.

    Read the article

  • PF, load balanced gateways, and Squid

    - by Santa
    Hi,

    So I have a FreeBSD router running PF and Squid, and it has three network interfaces: two connected to upstream providers (em0 and em1 respectively), and one for the LAN (re0) that we serve. There is some load balancing configured with PF. Basically, it routes all traffic to ports 1-1024 through one interface (em0) and everything else through the other (em1).

    Now, I have a Squid proxy also running on the box that transparently redirects any HTTP request from the LAN to port 3128 on 127.0.0.1. Since Squid redirects this request to HTTP outside, it should follow the load balancing rule through em0, no? The problem is, when we tested it out (by browsing from a computer in the LAN to http://whatismyip.com), it reports the external IP of the em1 interface! When we turn Squid off, the external IP of em0 is reported, as expected.

    How do I make Squid behave with the load balancing rule that we have set up? Here are the related settings in /etc/pf.conf that I have:

        ext_if1="em1" # DSL
        ext_if2="em0" # T1
        int_if="re0"

        ext_gw1="x.x.x.1"
        ext_gw2="y.y.y.1"

        int_addr="10.0.0.1"
        int_net="10.0.0.0/16"

        dsl_ports = "1024:65535"
        t1_ports = "1:1023"

        ...

        squid=3128
        rdr on $int_if inet proto tcp from $int_net \
            to any port 80 -> 127.0.0.1 port $squid
        pass in quick on $int_if route-to lo0 inet proto tcp \
            from $int_net to 127.0.0.1 port $squid keep state

        ...

        # load balancing
        pass in on $int_if route-to ($ext_if1 $ext_gw1) \
            proto tcp from $int_net to any port $dsl_ports keep state
        pass in on $int_if route-to ($ext_if1 $ext_gw1) \
            proto udp from $int_net to any port $dsl_ports
        pass in on $int_if route-to ($ext_if2 $ext_gw2) \
            proto tcp from $int_net to any port $t1_ports keep state
        pass in on $int_if route-to ($ext_if2 $ext_gw2) \
            proto udp from $int_net to any port $t1_ports

    Thanks!

    Read the article

  • Being able to solve Google Code Jam problem sets

    - by JPro
    This is not a homework question, but rather my intention to know if this is what it takes to learn programming. I keep logging into TopCoder not to actually participate but to get a basic understanding of how the problems are solved. But honestly, I don't understand what the problem is asking or how to translate the problem into an algorithm that can solve it.

    Just now I happened to look at the ACM ICPC 2010 World Finals, which is being held in China. The teams were given problem sets, and one of them is this:

        Given at most 100 points on a plane with distinct x-coordinates, find the shortest cycle that
        passes through each point exactly once, goes from the leftmost point always to the right until
        it reaches the rightmost point, then goes always to the left until it gets back to the leftmost
        point. Additionally, two points are given such that the path from left to right contains the
        first point, and the path from right to left contains the second point.

        This seems to be a very simple DP: after processing the last k points, and with the first path
        ending in point a and the second path ending in point b, what is the smallest total length to
        achieve that? This is O(n^2) states, transitions in O(n). We deal with the two special points
        by forcing the first path to contain the first one, and the second path to contain the second
        one.

    Now I have no idea what I am supposed to solve after reading the problem set. And there's another one, from Google Code Jam:

        Problem

        In a big, square room there are two point light sources: one is red and the other is green.
        There are also n circular pillars. Light travels in straight lines and is absorbed by walls
        and pillars. The pillars therefore cast shadows: they do not let light through. There are
        places in the room where no light reaches (black), where only one of the two light sources
        reaches (red or green), and places where both lights reach (yellow). Compute the total area
        of each of the four colors in the room. Do not include the area of the pillars.

        Input

        * One line containing the number of test cases, T. Each test case contains, in order:
        * One line containing the coordinates x, y of the red light source.
        * One line containing the coordinates x, y of the green light source.
        * One line containing the number of pillars n.
        * n lines describing the pillars. Each contains 3 numbers x, y, r. The pillar is a disk with
          the center (x, y) and radius r.

        The room is the square described by 0 <= x, y <= 100. Pillars, room walls and light sources
        are all disjoint; they do not overlap or touch.

        Output

        For each test case, output:

        Case #X:
        black area
        red area
        green area
        yellow area

    Is it required that people who program should be able to solve these types of problems? I would appreciate it if anyone could help me interpret the Google Code Jam problem set, as I wish to participate in this year's Code Jam to see if I can do anything or not. Thanks.
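    To make the quoted DP a bit more concrete: if you drop the two special points, the first problem is the classic "bitonic tour". Sort the points by x-coordinate and let best[i][j] (with i < j) be the shortest pair of disjoint paths that both start at the leftmost point, end at points i and j, and together cover points 0..j; each new point either extends the rightward path by one step or becomes the new end of the other path. A hedged Python sketch of that O(n^2) recurrence (not a contest-ready solution, and the special-point constraint is left out for brevity):

        # Bitonic-tour DP: shortest left-to-right-then-back cycle over points
        # with distinct x-coordinates.
        from math import hypot

        def bitonic_tour_length(points):
            pts = sorted(points)                      # sort by x (distinct x assumed)
            n = len(pts)
            if n == 2:
                return 2 * hypot(pts[0][0] - pts[1][0], pts[0][1] - pts[1][1])

            def d(i, j):
                return hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1])

            INF = float("inf")
            best = [[INF] * n for _ in range(n)]
            best[0][1] = d(0, 1)
            for j in range(2, n):
                for i in range(j - 1):                # extend the right-moving path by point j
                    best[i][j] = best[i][j - 1] + d(j - 1, j)
                # point j-1 becomes the end of the other path
                best[j - 1][j] = min(best[k][j - 1] + d(k, j) for k in range(j - 1))
            return best[n - 2][n - 1] + d(n - 2, n - 1)

        print(bitonic_tour_length([(0, 0), (1, 1), (2, -1), (3, 0)]))

    Handling the two special points means tracking which of the two paths each of them ended up on and discarding states that violate the assignment, which is what the quoted analysis refers to by "forcing" the paths.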

    Read the article

  • Login Website, curious Cookie Problem

    - by Collin Peters
    Hello,

    Language: C#
    Development environment: Visual Studio 2008

    Sorry if the English is not perfect. I want to log in to a website and get some data from there. My problem is that the cookies do not work. Every time, the website says that I should activate cookies, but I activated the cookies through a CookieContainer. I sniffed the traffic several times for the login process and I see no problem there. I tried different methods to log in and I searched to see whether someone else has this problem, but no results...

    The login page is "www.uploaded.to". Here is my code to log in, in short form:

        private void login()
        {
            //Global CookieContainer for all the Cookies
            CookieContainer _cookieContainer = new CookieContainer();

            //First Login to the Website
            HttpWebRequest _request1 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login");
            _request1.Method = "POST";
            _request1.CookieContainer = _cookieContainer;

            string _postData = "email=XXXXX&password=XXXXX";
            byte[] _byteArray = Encoding.UTF8.GetBytes(_postData);

            Stream _reqStream = _request1.GetRequestStream();
            _reqStream.Write(_byteArray, 0, _byteArray.Length);
            _reqStream.Close();

            HttpWebResponse _response1 = (HttpWebResponse)_request1.GetResponse();
            _response1.Close();

            //########################
            //Follow the Link from Request1
            HttpWebRequest _request2 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login?coo=1");
            _request2.Method = "GET";
            _request2.CookieContainer = _cookieContainer;

            HttpWebResponse _response2 = (HttpWebResponse)_request2.GetResponse();
            _response2.Close();

            //#######################
            //Get the Data from the Page after Login
            HttpWebRequest _request3 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/home");
            _request3.Method = "GET";
            _request3.CookieContainer = _cookieContainer;

            HttpWebResponse _response3 = (HttpWebResponse)_request3.GetResponse();
            _response3.Close();
        }

    I have been stuck on this problem for many weeks and I have found no solution that works. Please help...
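    A common cause of "activate cookies" loops is that the cookie is set on a redirect response, or on a different host/path than the follow-up requests use, so the container never replays it; comparing the Set-Cookie headers from the sniffed browser session against the HttpWebRequest session usually shows the difference. For comparison, here is a minimal Python sketch of the same three-step flow with a cookie-aware session (the URLs and form field names mirror the question and are not verified; requests is a third-party package):

        # Hedged sketch of the login flow with a cookie-aware session: the session
        # stores cookies set by the login response and replays them on later requests.
        import requests   # pip install requests

        session = requests.Session()
        session.headers["User-Agent"] = "Mozilla/5.0"   # some sites reject blank agents

        login = session.post("http://uploaded.to/login",
                             data={"email": "XXXXX", "password": "XXXXX"},
                             allow_redirects=True)
        print(login.status_code, session.cookies.get_dict())   # should show a session cookie

        home = session.get("http://uploaded.to/home")
        print("logged in" if "logout" in home.text.lower() else "still anonymous")

    If the cookie dictionary comes back empty here too, the server is keying the session on something else (a hidden form field, a user-agent check, or JavaScript), which narrows down what the C# code is missing.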

    Read the article

  • Building an *efficient* if/then interface for non-technical users to build flow-control in PHP

    - by Brendan
    I am currently building an internal tool to be used by our management to control the flow of traffic. I have built an if/then interface allowing the user to set conditions for certain outcomes, however it is inefficient to use the switch statement to control the flow. How can I improve the efficiency of my code? Example of code: if($previous['route_id'] == $condition['route_id'] && $failed == 0) //if we have not moved on to a new set of rules and we haven't failed yet { switch($condition['type']) { case 0 : $type = $user['hour']; break; case 1 : $type = $user['location']['region_abv']; break; case 2 : $type = $user['referrer_domain']; break; case 3 : $type = $user['affiliate']; break; case 4 : $type = $user['location']['country_code']; break; case 5 : $type = $user['location']['city']; break; } $type = strtolower($type); $condition['value'] = strtolower($condition['value']); switch($condition['operator']) { case 0 : if($type == $condition['value']); else $failed = '1'; break; case 1 : if($type != $condition['value']); else $failed = '1'; break; case 2 : if($type > $condition['value']); else $failed = '1'; break; case 3 : if($type >= $condition['value']); else $failed = '1'; break; case 4 : if($type < $condition['value']); else $failed = '1'; break; case 5 : if($type <= $condition['value']); else $failed = '1'; break; } }

    Read the article

  • How do I convert Data::Dumper output back into a Perl data structure?

    - by newbee_me
    Hi all!

    I was wondering if you could shed some light on the code I've been working on for a couple of days. I've been trying to convert a Perl-parsed hash back to XML using the XMLout() and XMLin() methods, and it has been quite successful with this format:

        #!/usr/bin/perl -w
        use strict;

        # use module
        use IO::File;
        use XML::Simple;
        use XML::Dumper;
        use Data::Dumper;

        my $dump = new XML::Dumper;
        my ( $data, $VAR1 );

        Topology:$VAR1 = {
            'device' => {
                'FOC1047Z2SZ' => {
                    'ChassisID' => '2009-09',
                    'Error' => undef,
                    'Group' => {
                        'ID' => 'A1',
                        'Type' => 'Base'
                    },
                    'Model' => 'CATALYST',
                    'Name' => 'CISCO-SW1',
                    'Neighbor' => {},
                    'ProbedIP' => 'TEST',
                    'isDerived' => 0
                }
            },
            'issues' => [ 'TEST' ]
        };

        # create object
        my $xml = new XML::Simple (NoAttr=>1, RootName=>'data', SuppressEmpty => 'true');

        # convert Perl array ref into XML document
        $data = $xml->XMLout($VAR1);

        # reads an XML file
        my $X_out = $xml->XMLin($data);

        # access XML data
        print Dumper($data);
        print "STATUS: $X_out->{issues}\n";
        print "CHASSIS ID: $X_out->{device}{ChassisID}\n";
        print "GROUP ID: $X_out->{device}{Group}{ID}\n";
        print "DEVICE NAME: $X_out->{device}{Name}\n";
        print "DEVICE NAME: $X_out->{device}{name}\n";
        print "ERROR: $X_out->{device}{error}\n";

    I can access all the elements in the XML with no problem. But when I try to create a file that will house the parsed hash, a problem arises because I can't seem to access all the XML elements. I guess I wasn't able to unparse the file with the following code:

        #!/usr/bin/perl -w
        use strict;

        # use module
        use IO::File;
        use XML::Simple;
        use XML::Dumper;
        use Data::Dumper;

        my $dump = new XML::Dumper;
        my ( $data, $VAR1, $line_Holder );

        #this is the file that contains the parsed hash
        my $saveOut = "C:/parsed_hash.txt";
        my $result_Holder = IO::File->new($saveOut, 'r');
        while ($line_Holder = $result_Holder->getline){
            print $line_Holder;
        }

        # create object
        my $xml = new XML::Simple (NoAttr=>1, RootName=>'data', SuppressEmpty => 'true');

        # convert Perl array ref into XML document
        $data = $xml->XMLout($line_Holder);

        # reads an XML file
        my $X_out = $xml->XMLin($data);

        # access XML data
        print Dumper($data);
        print "STATUS: $X_out->{issues}\n";
        print "CHASSIS ID: $X_out->{device}{ChassisID}\n";
        print "GROUP ID: $X_out->{device}{Group}{ID}\n";
        print "DEVICE NAME: $X_out->{device}{Name}\n";
        print "DEVICE NAME: $X_out->{device}{name}\n";
        print "ERROR: $X_out->{device}{error}\n";

    Do you have any idea how I could access the $VAR1 inside the text file?

    Regards,
    newbee_me

    Read the article

  • TCP: Address already in use exception - possible causes for client port? NO PORT EXHAUSTION

    - by TomTom
    Hello,

    Stupid problem. I get those from a client connecting to a server. Sadly, the setup is complicated, making debugging complex, and we are running out of options.

    The environment:

        - Client/server system, both running on the same machine. The client is actually a service doing some database manipulation at specific times.
        - The connection comes from C#, going through OleDb to an EasySoft JDBC driver to a custom-written JDBC server that then hosts logic in C++. Yeah, complex, but the third-party supplier decided to expose the extension mechanisms for their server through a JDBC interface. Not a lot can be done here ;)

    The symptom: at (ir)regular intervals we get an "Address already in use: connect" reported from the JDBC driver. They seem to come from one particular service we run.

    Now, I did read all the stuff about port exhaustion. This is why we have a little tool running now that counts ports and their states every minute. Last time this happened, we had an astonishing 370 ports in use, with the count rising to about 900 AFTER the error. We already patched the registry (it is a Windows machine) to allow more than the standard 5000 client ports, but even then, we are far, far from that limit to start with.

    Which is why I am asking here. Anyone an idea what ELSE could cause this?

    It is a Windows 2003 Server machine, 64 bit. The only other thing I can see that may cause it (but this functionality is supposedly disabled) is Symantec Endpoint Protection, which is installed on the server; being capable of acting as a firewall, it could possibly intercept network traffic. I don't want to open a can of worms by pointing to Symantec prematurely (if pointing to Symantec can ever be seen as such).

    So, anyone an idea what else may be the cause? Thanks

    Read the article

  • How should I implement lazy session creation in PHP?

    - by Adam Franco
    By default, PHP's session handling mechanisms set a session cookie header and store a session even if there is no data in the session. If no data is set in the session, then I don't want a Set-Cookie header sent to the client in the response, and I don't want an empty session record stored on the server. If data is added to $_SESSION, then the normal behavior should continue.

    My goal is to implement lazy session creation behavior of the sort used by Drupal 7 and Pressflow, where no session is stored (and no session cookie header is sent) unless data is added to the $_SESSION array during application execution. The point of this behavior is to allow reverse proxies such as Varnish to cache and serve anonymous traffic while letting authenticated requests pass through to Apache/PHP. Varnish (or another proxy server) is configured to pass through any requests without cookies, assuming correctly that if a cookie exists then the request is for a particular client.

    I have ported the session handling code from Pressflow that uses session_set_save_handler() and overrides the write handler to check for data in the $_SESSION array before saving, and I will write this up as a library and add an answer here if this is the best/only route to take.

    My question: while I can implement a fully custom session_set_save_handler() system, is there an easier way to get this lazy session creation behavior in a relatively generic way that would be transparent to most applications?
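    The underlying pattern, independent of PHP's handler API, is to defer both the Set-Cookie header and the storage write until something is actually put into the session; that is what keeps anonymous responses cookie-free and therefore cacheable by Varnish. A framework-neutral Python sketch of the idea, with made-up names and an in-memory store standing in for the real session backend:

        # Sketch of "lazy sessions": the handler gets a session dict, and a cookie
        # is only set (and the session only persisted) if the handler wrote to it.
        import uuid

        SESSION_STORE = {}   # stand-in for files/memcached/DB

        def run_request(handler, request_cookies):
            sid = request_cookies.get("SESSID")
            session = dict(SESSION_STORE.get(sid, {}))   # empty dict for anonymous users
            body = handler(session)

            response_headers = []
            if session:                                  # data was written: persist + set cookie
                sid = sid or uuid.uuid4().hex
                SESSION_STORE[sid] = session
                response_headers.append(("Set-Cookie", f"SESSID={sid}; Path=/; HttpOnly"))
            # no session data: no Set-Cookie header, so the proxy keeps treating it as anonymous
            return response_headers, body

        # anonymous page: cacheable, no cookie issued
        print(run_request(lambda s: "<h1>news</h1>", {}))
        # login page: writes to the session, so a cookie is issued
        print(run_request(lambda s: s.update(user="adam") or "welcome", {}))

    A custom session_set_save_handler() (or, in later PHP versions, a SessionHandlerInterface implementation) is essentially this check wired into the write callback, plus suppressing the cookie when nothing was written.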

    Read the article

  • Flex 3: should I provide prepared data to my component or make it process data before display?

    - by grapkulec
    I'm starting to learn a little Flex just for fun and maybe to prove that I can still learn something new :)

    I have an idea for a project, and one of its parts is a tree component which could display data in different ways depending on configuration.

    The idea

    There is a list of objects having properties like id, date, time, name, description. Sometimes the list should be displayed like this:

        first level:  date
        second level: time
        third level:  name

    and sometimes like this:

        first level:  year
        second level: month
        third level:  day
        fourth level: time and name

    By level I mean level of nesting, of course. So we can have years, that have months, that have days, that have hours, and so forth.

    The problem

    What could be the best way to do it? I mean, should I prepare data for different ways of nesting outside of the component, or even outside of Flex? I can do it at the web service level in C#, where I plan to have the database access layer, and send Flex nice and ready-to-display XML or an array of objects. But I wonder if that won't cause additional and maybe unnecessary network traffic.

    I tried to hack some code in my component to convert my data objects into XML or an ArrayCollection, but I don't know enough of Flex and got stuck on elimination of duplicates and on getting specific data by some key value. Usually to do such things I have STL with maps, sets and vectors, and I find Flex arrays and even Dictionary a little bit confusing (I've read the language reference and googled without any significant luck).

    The question

    So, to sum things up: should I give my tree component data prepared just for the chosen type of display, or should I try to do it internally inside the component (or some helper class written in ActionScript)?
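    Whichever side ends up doing it, the transformation itself is just a grouping pass over the flat list, driven by an ordered list of key functions; in ActionScript the same shape falls out of nested Object/Dictionary maps, and on the C# side it maps onto chained GroupBy calls. A hedged Python sketch of the grouping logic (field names are taken from the question, sample records are made up):

        # Build the nested tree from a flat record list. The grouping levels are
        # passed as key functions, so the same code produces either
        # "date / time / name" or "year / month / day / time+name".
        from collections import defaultdict

        def make_tree():
            return defaultdict(make_tree)

        def build_tree(records, level_keys):
            root = make_tree()
            for rec in records:
                node = root
                for key in level_keys:
                    node = node[key(rec)]
                node.setdefault("items", []).append(rec)   # leaves hold the records
            return root

        records = [
            {"id": 1, "date": "2010-05-01", "time": "09:30", "name": "standup"},
            {"id": 2, "date": "2010-05-01", "time": "14:00", "name": "review"},
        ]

        by_date_time_name = build_tree(
            records,
            [lambda r: r["date"], lambda r: r["time"], lambda r: r["name"]],
        )
        by_year_month_day = build_tree(
            records,
            [lambda r: r["date"][:4], lambda r: r["date"][5:7], lambda r: r["date"][8:10],
             lambda r: f'{r["time"]} {r["name"]}'],
        )

    Since the grouping is cheap relative to the transfer, sending the flat list once and regrouping client-side whenever the user switches views avoids both the duplicated server endpoints and the extra network traffic.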

    Read the article
