Search Results

Search found 4721 results on 189 pages for 'traffic'.

  • Flash movies in inactive browser tabs pause or don't execute in real time

    - by ZenBlender
    I'm noticing some unexpected behavior. Some time in the last few months, a change in either Firefox, the Flash player, or both, has made it so that Flash movies in inactive browser tabs no longer execute in real time. They appear to still execute, but only in bursts, and not in a predictable way.

    This is a problem because I develop a Flash-based (ActionScript 2.0, Flash CS3) multiplayer game that maintains a network connection and allows players to chat, etc. Many of our players complain about Firefox crashing while playing the game. I have noticed it too; not too frequently, but it crashes several times a week. (Firefox crashes; I do not get a message from the Flash player indicating an infinite loop or a problem in my code.)

    My theory is that this new behavior causes crashes when there is a lot of activity in my game, leading to lots of unhandled network traffic getting buffered before Firefox/Flash gives my game a chance to execute. Maybe this leads to a buffer overflow or missing packets, and as a result something crashes. At times I will switch back to the tab that is running my game and discover a display bug, which looks as though Flash has simply failed to execute something it was supposed to.

    I would assume this new behavior is on purpose, for example to prevent all the Flash-based advertisements in inactive tabs from executing and therefore killing performance. In a quick test on Chrome (5.0.342.9 beta), this "pausing" of Flash seems to be present as well, but somehow it seems much less of a problem. My users have only complained about Firefox crashing, not other browsers.

    My machine: Windows 7 x64, Firefox 3.6.3, Flash Player 10.1.50.426. My game: triplejack.com

    Any ideas? Ideally I'd like to disable this behavior for my Flash game so it can execute in real time even when in an inactive tab. Thanks for any help!

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design decision to implement most of the business logic in CLR stored procedures? I have read much about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not.

    For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line, apply some complex business rules according to those numbers (involving regex matching, pattern matching against data from many tables in the database, and such), and, as a result of this calculation, update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.

    - The Data Layer would access the database.
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
    - The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking about this design, I can see that most of the business rules could be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:

    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - All data is processed at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
    - Scattering of the business logic: some part is here, some part is there.
    - A questionable design solution; it may run into unknown problems.
    - It is difficult to implement a progress indicator for the processing task.

    I would like to hear your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
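
    For readers who have not seen SQL CLR before, a minimal sketch of the shape such a procedure takes is below. The table and procedure names are made up for illustration; the [SqlProcedure] attribute and the "context connection=true" connection string are the standard SQL CLR mechanisms, and the context connection runs in-process, which is where the "less network traffic" advantage comes from.

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public partial class StoredProcedures
        {
            [SqlProcedure]
            public static void ProcessImportedLines()
            {
                // The context connection stays inside the hosting SQL Server
                // process, so reading staged rows costs no network round trips.
                using (SqlConnection conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    using (SqlCommand cmd = new SqlCommand(
                        "SELECT LineId, RawText FROM dbo.ImportedLines", conn)) // hypothetical staging table
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // apply the regex/pattern-matching business rules here
                        }
                    }
                }
            }
        }

    Note that a progress indicator is indeed awkward in this model: the procedure runs as a single batch, so the client sees nothing until it returns.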

  • Correct escaping of delimited identifiers in SQL Server without using QUOTENAME

    - by Ross Bradbury
    Is there anything else the code must do to sanitize identifiers (table, view, column) other than to wrap them in double quotation marks and "double up" any double quotation marks present in the identifier name? References would be appreciated.

    I have inherited a code base that has a custom object-relational mapping (ORM) system. SQL cannot be written in the application, but the ORM must still eventually generate the SQL to send to SQL Server. All identifiers are quoted with double quotation marks:

        string QuoteName(string identifier)
        {
            // Note: the delimiters themselves must be escaped quotes ("\"")
            return "\"" + identifier.Replace("\"", "\"\"") + "\"";
        }

    If I were building this dynamic SQL in SQL, I would use the built-in SQL Server QUOTENAME function:

        declare @identifier nvarchar(128);
        set @identifier = N'Client"; DROP TABLE [dbo].Client; --';

        declare @delimitedIdentifier nvarchar(258);
        set @delimitedIdentifier = QUOTENAME(@identifier, '"');

        print @delimitedIdentifier;
        -- "Client""; DROP TABLE [dbo].Client; --"

    I have not found any definitive documentation about how to escape quoted identifiers in SQL Server. I have found Delimited Identifiers (Database Engine), and I also saw this Stack Overflow question about sanitizing. Having to call the QUOTENAME function just to quote the identifiers would be a lot of unnecessary traffic to SQL Server.

    The ORM seems to be pretty well thought out with regard to SQL injection. It is in C# and predates NHibernate, Entity Framework, etc. All user input is sent using ADO.NET SqlParameter objects; it is just the identifier names that I am concerned about in this question. This needs to work on SQL Server 2005 and 2008.

  • Optimized Publish/Subscribe JMS Broker Cluster, and Conflicting Stack Overflow Posts About the Answer

    - by Gene
    Hi,

    I am looking to build a publish/subscribe distributed messaging framework that can manage huge volumes of message traffic with some intelligence at the broker level. I don't know if there's a topology that describes this, but this is the model I'm going after:

    EXAMPLE MODEL A

    A) There are two running message brokers (ideally all on localhost if possible, for easier demoing): Broker-A and Broker-B.

    B) Each broker will have 2 listeners and 1 publisher. Example figure:

        [subscriber A1, subscriber A2, publisher A1] <-- Broker-A <-- Broker-B <-- [publisher B1, subscriber B1, subscriber B2]

    IF a message X is published to Broker-A and there are no subscribers for it among the listeners on Broker-B (via criteria in message selectors or broker routing rules), then message X is never forwarded to Broker-B. ELSE, Broker-A forwards the message to Broker-B, where one of the Broker-B listeners/subscribers/services is expecting that message based on its subscription criteria.

    Is clustering the correct approach? At first, I concluded that the "broker clustering" concept is what I needed to support this. However, as I have come to understand it, the typical use of clustering entails either message redundancy across all brokers, or the Competing Consumers pattern, and neither of these satisfies the requirement in EXAMPLE MODEL A.

    What is the correct approach? My question is: does anyone know of a JMS implementation that supports the model I described? I scanned through all the Stack Overflow post titles for the search "JMS and cluster" and found these two informative, but seemingly conflicting, posts:

    Says EXAMPLE MODEL A is (or should be) implicitly supported: http://stackoverflow.com/questions/2255816/jms-consumer-with-activemq-network-of-brokers ("this means you pick a broker, connect to it, and let the broker network sort it out amongst themselves. In theory.")

    Says EXAMPLE MODEL A is NOT supported: http://stackoverflow.com/questions/2017520/how-does-a-jms-topic-subscriber-in-a-clustered-application-server-recieve-message ("All the instances of PropertiesSubscriber running on different app servers WILL get that message.")

    Any suggestions would be greatly appreciated. Thanks very much for reading my post,
    Gene

  • Make a local-only daemon listen on a different interface (using iptables port forwarding)?

    - by UniIsland
    I have a daemon program which listens on 127.0.0.1:8000. I need to access it when I connect to my box over VPN, so I want it to listen on the ppp0 interface too.

    I've tried the "ssh -L" method. It works, but I don't think it's the right way to do it, since it leaves an extra ssh process running in the background. I tried the netcat method; it exits when the connection is closed, so it is not a valid way of "listening". I also tried several iptables rules, none of which worked. I'm not listing all the rules I've used here, but for example:

        iptables -A FORWARD -j ACCEPT
        iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 8000 -j DNAT --to-destination 127.0.0.1:8000

    The above ruleset doesn't work, even though I have net.ipv4.ip_forward set to 1.

    Does anyone know how to redirect traffic from a ppp interface to lo? That is, listen on 192.168.45.1:8000 (ppp0) as well as 127.0.0.1:8000 (lo). There's no need to alter the port. Thanks.

  • How to stop MVC caching the results of invoking an action method?

    - by Trey Carroll
    I am experiencing a problem with IE caching the results of an action method. Other articles I found were related to security and the [Authorize] attribute; this problem has nothing to do with security. This is a very simple "record a vote, grab the average, return the average and the number of votes" method. The only slightly interesting thing about it is that it is invoked via Ajax and returns a JSON object. I believe it is the JSON object that is getting cached.

    When I run it from Firefox and watch the XHR traffic with Firebug, everything works perfectly. However, under IE 8 the "throbber" graphic never has time to show up, and the page elements displaying the "new" average and count that are injected into the page with jQuery never change.

    I need a way to tell MVC to never cache this action method. This article seems to address the problem, but I cannot understand it: http://stackoverflow.com/questions/1441467/prevent-caching-of-attributes-in-asp-net-mvc-force-attribute-execution-every-tim

    I need a bit more context for the solution to understand how to extend AuthorizationAttribute. Please address your answer as if you were speaking to someone who lacks a deep understanding of MVC, even if that means replying with an article on some basics/prerequisites that are required.

    Thanks,
    Trey Carroll
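
    Before extending AuthorizationAttribute, it may be worth trying the built-in OutputCacheAttribute with storage disabled; availability of the NoStore property depends on the MVC version, so treat this as a sketch with a hypothetical controller and action:

        using System.Web.Mvc;

        public class VotesController : Controller
        {
            // Duration = 0 plus NoStore = true emits "no-cache, no-store"
            // headers, which stops IE from replaying the cached JSON
            // response for repeated Ajax calls.
            [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
            public ActionResult RecordVote(int itemId)
            {
                // ... record the vote, then recompute the average and count ...
                return Json(new { average = 0.0, count = 0 });   // placeholder values
            }
        }

    A common belt-and-braces addition on the client side is passing cache: false in the jQuery $.ajax options, which appends a timestamp parameter so IE treats every request as unique.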

  • Multiple connections in a single SSH SOCKS 5 Proxy

    - by Elie Zedeck
    Hey guys,

    My first question here on Stack Overflow: what do I need to do so that an SSH SOCKS 5 proxy (SSH2) will allow multiple simultaneous connections?

    What I have noticed is that when I load a page in Firefox (already configured to use the SOCKS 5 proxy), it loads everything one by one. This can be perceived with the bare eye, and I also confirmed it through Firebug's NET tab, which logs the connections that are made. I have already configured some of the directives in the about:config page, like pipelining, persistent proxy connections, and a few other things, but I still get this kind of sequential load of resources, which is noticeably very slow:

        network.http.pipelining;true
        network.http.pipelining.maxrequests;8
        network.http.pipelining.ssl;true
        network.http.proxy.pipelining;true
        network.http.max-persistent-connections-per-proxy;100
        network.proxy.socks_remote_dns;true

    My ISP sucks: during the day, it intentionally breaks connections on a random basis, so it is impossible to accomplish meaningful work without a lot of browser refreshes or hitting the F5 key. That is why I started looking for solutions. SSH's dynamic port forwarding is the best solution I have found to date, because it has some pretty good compression, which saves a lot of useless traffic, and it is also secure. The only thing remaining is to get multiple connections running through it.

    Thanks for all the input.

  • Can nginx be used as a reverse proxy for a backend websocket server?

    - by John Reilly
    We're working on a Ruby on Rails app that needs to take advantage of HTML5 websockets. At the moment, we have two separate "servers", so to speak: our main app running on nginx+Passenger, and a separate server using Pratik Naik's Cramp framework (which runs on Thin) to handle the websocket connections.

    Ideally, when it comes time for deployment, we'd have the Rails app running on nginx+Passenger, and the websocket server would be proxied behind nginx, so we wouldn't need to have the websocket server running on a different port.

    Problem is, in this setup it seems that nginx is closing the connections to Thin too early. The connection is successfully established to the Thin server, then immediately closed with a 200 response code. Our guess is that nginx doesn't realize that the client is trying to establish a long-running connection for websocket traffic.

    Admittedly, I'm not all that savvy with nginx config, so: is it even possible to configure nginx to act as a reverse proxy for a websocket server? Or do I have to wait for nginx to offer support for the new websocket handshake stuff? Assuming that having both the app server and the websocket server listening on port 80 is a requirement, might that mean I have to have Thin running on a separate server without nginx in front of it for now?

    Thanks in advance for any advice or suggestions. :)

    -John
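
    (For reference: at the time this was asked, stock nginx could not proxy the websocket handshake, so a separate port or a different front end really was the practical choice. Support for the HTTP/1.1 Upgrade mechanism landed later, in nginx 1.3.13; on such versions the canonical configuration looks like the sketch below, where the upstream name and location path are illustrative.)

        upstream websocket_backend {
            server 127.0.0.1:8080;   # the Thin/Cramp server
        }

        server {
            listen 80;

            location /websocket {
                proxy_pass http://websocket_backend;
                # Forward the HTTP/1.1 Upgrade handshake instead of closing
                # the long-lived connection:
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_read_timeout 3600s;   # keep idle websocket connections open
            }
        }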

  • Why do people say that Ruby is slow?

    - by stephen murdoch
    I like Ruby on Rails and I use it for all my web development projects. A few years ago there was a lot of talk about Rails being a memory hog and about how it didn't scale very well, but those suggestions were put to bed by Gregg Pollack here.

    Lately though, I've been hearing people say that Ruby itself is slow. Why is Ruby considered slow? I do not find Ruby to be slow, but then again, I'm just using it to make simple CRUD apps and company blogs. What sort of projects would I need to be doing before I found Ruby becoming slow? Or is this slowness something that affects all programming languages?

    What are your options as a Ruby programmer if you want to deal with this "slowness"? Which version of Ruby would best suit an application like Stack Overflow, where speed is critical and traffic is intense?

    The questions are subjective, and I realise that architectural setup (EC2 vs. standalone servers, etc.) makes a big difference, but I'd like to hear what people think about Ruby being slow. Finally, I can't find much news on Ruby 2.0; I take it we're a good few years away from that, then?

  • What are the attack vectors for passwords sent over http?

    - by KevinM
    I am trying to convince a customer to pay for SSL for a web site that requires login. I want to make sure I correctly understand the major scenarios in which someone can see the passwords that are being sent.

    My understanding is that any of the hops along the way can use a packet analyzer to view what is being sent. This seems to require that the hacker (or their malware/botnet) be on the same subnet as one of the hops the packet takes to arrive at its destination. Is that right?

    Assuming some flavor of this subnet requirement holds true, do I need to worry about all the hops or just the first one? The first one I can obviously worry about, since on a public Wi-Fi network anyone could be listening in. Should I be worried about what's going on in the subnets that packets travel across beyond that? I don't know a ton about network traffic, but I would assume it's flowing through the data centers of major carriers, where there aren't a lot of juicy attack vectors, but please correct me if I am wrong.

    Are there other vectors to be worried about outside of someone listening with a packet analyzer?

    I am a networking and security noob, so please feel free to set me straight if I have used the wrong terminology in any of this.

  • What is the correct way to implement a massive hierarchical, geographical search for news?

    - by Philip Brocoum
    The company I work for is in the business of sending press releases. We want to make it possible for interested parties to search for press releases based on a number of criteria, the most important being location. For example, someone might search for all news sent to New York City, Massachusetts, or ZIP code 89134, sent from a governmental institution, under the topic of "traffic". Or whatever.

    The problem is, we've sent, literally, hundreds of thousands of press releases, and searching is slow and complex. For example, a press release sent to Queens, NY should show up in the search I mentioned above even though it wasn't specifically sent to New York City, because Queens is a subset of New York City. We may also want to add "and", "or", negation, and text search to the query to create complex searches. These searches also have to be fast enough to power dynamic RSS feeds.

    I really don't know anything about search theory or how it's properly done. The way we are getting by right now is using a data mart that stores the locations each release was sent to in a single table. However, because of the subset issue mentioned above, the data mart is gigantic, with millions of rows. And we haven't even implemented cities yet; there are about 50,000 cities in the United States, which will increase the size of the data mart so much that I'm afraid it just won't work anymore.

    Anyway, I realize this is not a simple question and there won't be a "do this" answer. However, I'm hoping one of you can point me in the right direction, where I can learn how massive searches are done? Because I really know nothing about it, and such a search engine is turning out to be incredibly difficult to make. Thanks! I know there must be a way, because if Google can search the entire internet, we must be able to search our own database :-)

  • Updating a Safari Extension?

    - by Ricky Romero
    Hi there,

    I'm writing a simple Safari extension, and I'm trying to figure out how to get the update mechanism working. Apple's documentation here is delightfully vague: http://developer.apple.com/safari/library/documentation/Tools/Conceptual/SafariExtensionGuide/UpdatingExtensions/UpdatingExtensions.html

    And here's my manifest, based on that documentation:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Extension Updates</key>
            <array>
                <dict>
                    <key>CFBundleIdentifier</key>
                    <string>net.rickyromero.safari.shutup</string>
                    <key>Team Identifier</key>
                    <string>TMM5P68287</string>
                    <key>CFBundleVersion</key>
                    <string>1</string>
                    <key>CFBundleShortVersionString</key>
                    <string>1.0</string>
                    <key>URL</key>
                    <string>http://rickyromero.net/misc/SafariExtensions/ShutUp.safariextz</string>
                </dict>
            </array>
        </dict>
        </plist>

    I don't know where to get "YourCertifcateID", for example. And when I increment the values for CFBundleVersion and CFBundleShortVersionString, it doesn't trigger an update. I know Safari is hitting my manifest, though, because I'm watching the HTTP traffic.

    Thoroughly stumped. Any ideas, guys?

  • How to protect/monitor your site from crawling by a malicious user

    - by deathy
    Situation:

    - Site with content protected by username/password (not all of it controlled, since they can be trial/test users)
    - A normal search engine can't get at the content because of the username/password restrictions
    - A malicious user can still log in and pass the session cookie to a "wget -r" or something else

    The question is: what is the best solution to monitor such activity and respond to it (considering the site policy is that no crawling/scraping is allowed)?

    I can think of some options:

    1. Set up some traffic monitoring solution to limit the number of requests for a given user/IP.
    2. Related to the first point: automatically block some user agents. (Evil :))
    3. Set up a hidden link that, when accessed, logs out the user and disables his account. (Presumably this would not be accessed by a normal user, since he wouldn't see it to click it, but a bot will crawl all links.)

    For point 1, do you know of a good, already-implemented solution? Any experiences with it? One problem would be that some false positives might show up for very active but human users.

    For point 3: do you think this is really evil? Or do you see any possible problems with it? I am also accepting other suggestions.
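
    As an illustration of option 3 only, a trap endpoint might look something like the minimal PHP sketch below; the URL, markup, and the disable_account() helper are all hypothetical:

        // trap.php -- every page template carries an invisible anchor that
        // no human will see or click, but a crawler following every href
        // will hit:
        //   <a href="/trap.php" style="display:none" rel="nofollow">&nbsp;</a>

        session_start();

        if (isset($_SESSION['user_id'])) {
            disable_account($_SESSION['user_id']);   // hypothetical helper
        }

        session_destroy();                           // log the user out
        header('HTTP/1.0 403 Forbidden');

    One caution: a well-behaved user agent that prefetches links (or a browser extension) could trip the trap, so pairing it with rel="nofollow" and a robots.txt exclusion reduces collateral damage.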

  • RIA: how to get functionality, not data

    - by Budda
    On the server side I have the following class:

        public class Customer
        {
            [Key]
            public int Id { get; set; }

            public string FirstName { get; set; }
            public string SecondName { get; set; }

            public string FullName
            {
                get { return string.Concat(FirstName, " ", SecondName); }
            }
        }

    The problem is that each calculated field is evaluated on the server and transferred to the client as data, for example the 'FullName' property:

        [DataMember()]
        [Editable(false)]
        [ReadOnly(true)]
        public string FullName
        {
            get
            {
                return this._fullName;
            }
            set
            {
                if ((this._fullName != value))
                {
                    this.ValidateProperty("FullName", value);
                    this.OnFullNameChanging(value);
                    this._fullName = value;
                    this.RaisePropertyChanged("FullName");
                    this.OnFullNameChanged();
                }
            }
        }

    Instead of transferring the data (which consumes traffic and in some cases introduces significant overhead), I would like to have the calculation performed on the client side. Is this possible without manually duplicating the property implementation? Thank you.
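
    If this is WCF RIA Services, one mechanism worth checking is shared code files: a server file whose name ends in .shared.cs is copied into the client project and compiled on both sides, so the property is computed locally instead of serialized. A sketch, assuming the serialized FullName is removed from (or marked [Exclude] on) the server entity so the shared definition is the only one:

        // Customer.shared.cs -- the ".shared.cs" suffix tells RIA Services to
        // copy this file to the Silverlight project, so both tiers compile the
        // same implementation and nothing is transferred for FullName.
        public partial class Customer
        {
            public string FullName
            {
                get { return string.Concat(FirstName, " ", SecondName); }
            }
        }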

  • PF, load balanced gateways, and Squid

    - by Santa
    Hi,

    So I have a FreeBSD router running PF and Squid, and it has three network interfaces: two connected to upstream providers (em0 and em1, respectively), and one for the LAN (re0) that we serve. There is some load balancing configured with PF. Basically, it routes all traffic to ports 1-1024 through one interface (em0) and everything else through the other (em1).

    Now, I also have a Squid proxy running on the box that transparently redirects any HTTP request from the LAN to port 3128 on 127.0.0.1. Since Squid then issues its own HTTP request to the outside, that request should follow the load balancing rule through em0, no? The problem is, when we tested it (by browsing from a computer on the LAN to http://whatismyip.com), it reports the external IP of the em1 interface! When we turn Squid off, the external IP of em0 is reported, as expected.

    How do I make Squid behave with the load balancing rules we have set up? Here are the related settings in /etc/pf.conf:

        ext_if1="em1" # DSL
        ext_if2="em0" # T1
        int_if="re0"
        ext_gw1="x.x.x.1"
        ext_gw2="y.y.y.1"
        int_addr="10.0.0.1"
        int_net="10.0.0.0/16"
        dsl_ports = "1024:65535"
        t1_ports = "1:1023"
        ...
        squid=3128
        rdr on $int_if inet proto tcp from $int_net \
            to any port 80 -> 127.0.0.1 port $squid
        pass in quick on $int_if route-to lo0 inet proto tcp \
            from $int_net to 127.0.0.1 port $squid keep state
        ...
        # load balancing
        pass in on $int_if route-to ($ext_if1 $ext_gw1) \
            proto tcp from $int_net to any port $dsl_ports keep state
        pass in on $int_if route-to ($ext_if1 $ext_gw1) \
            proto udp from $int_net to any port $dsl_ports
        pass in on $int_if route-to ($ext_if2 $ext_gw2) \
            proto tcp from $int_net to any port $t1_ports keep state
        pass in on $int_if route-to ($ext_if2 $ext_gw2) \
            proto udp from $int_net to any port $t1_ports

    Thanks!
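
    A guess at the cause, for what it's worth: all of the route-to rules above are "pass in on $int_if", so they only match traffic being forwarded from the LAN. Squid's outgoing requests originate on the firewall itself, never pass in on re0, and therefore just follow the system default route (apparently via em1). If that is the diagnosis, an outbound rule in the spirit of the PF multi-WAN examples might help; an untested sketch:

        # reroute locally generated traffic (e.g. Squid's outgoing HTTP) that
        # would otherwise leave via the default-route interface
        pass out on $ext_if1 route-to ($ext_if2 $ext_gw2) \
            proto tcp from any to any port $t1_ports keep state

    An alternative that sidesteps PF entirely is Squid's tcp_outgoing_address directive, binding its outgoing connections to the em0 address.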

  • Website login, curious cookie problem

    - by Collin Peters
    Hello,

    Language: C#. Development environment: Visual Studio 2008. Sorry if the English is not perfect.

    I want to log in to a website and get some data from there. My problem is that the cookies do not work. Every time, the website says that I should activate cookies, even though I activated cookies through a CookieContainer. I sniffed the traffic several times during the login process and see no problem there. I tried different methods to log in, and I searched for others with this problem, but no results...

    The login page is "www.uploaded.to". Here is my code to log in, in short form:

        private void login()
        {
            // Global CookieContainer for all the cookies
            CookieContainer _cookieContainer = new CookieContainer();

            // First, log in to the website
            HttpWebRequest _request1 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login");
            _request1.Method = "POST";
            _request1.CookieContainer = _cookieContainer;

            string _postData = "email=XXXXX&password=XXXXX";
            byte[] _byteArray = Encoding.UTF8.GetBytes(_postData);
            Stream _reqStream = _request1.GetRequestStream();
            _reqStream.Write(_byteArray, 0, _byteArray.Length);
            _reqStream.Close();

            HttpWebResponse _response1 = (HttpWebResponse)_request1.GetResponse();
            _response1.Close();

            //########################
            // Follow the link from request 1
            HttpWebRequest _request2 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login?coo=1");
            _request2.Method = "GET";
            _request2.CookieContainer = _cookieContainer;
            HttpWebResponse _response2 = (HttpWebResponse)_request2.GetResponse();
            _response2.Close();

            //#######################
            // Get the data from the page after login
            HttpWebRequest _request3 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/home");
            _request3.Method = "GET";
            _request3.CookieContainer = _cookieContainer;
            HttpWebResponse _response3 = (HttpWebResponse)_request3.GetResponse();
            _response3.Close();
        }

    I have been stuck on this problem for many weeks and have found no solution that works. Please help...

  • Building an *efficient* if/then interface for non-technical users to build flow-control in PHP

    - by Brendan
    I am currently building an internal tool to be used by our management to control the flow of traffic. I have built an if/then interface allowing the user to set conditions for certain outcomes; however, using switch statements to evaluate the conditions is inefficient. How can I improve the efficiency of my code?

    Example of the code:

        if($previous['route_id'] == $condition['route_id'] && $failed == 0) // still on the same set of rules and we haven't failed yet
        {
            switch($condition['type'])
            {
                case 0 : $type = $user['hour']; break;
                case 1 : $type = $user['location']['region_abv']; break;
                case 2 : $type = $user['referrer_domain']; break;
                case 3 : $type = $user['affiliate']; break;
                case 4 : $type = $user['location']['country_code']; break;
                case 5 : $type = $user['location']['city']; break;
            }

            $type = strtolower($type);
            $condition['value'] = strtolower($condition['value']);

            switch($condition['operator'])
            {
                case 0 : if($type == $condition['value']); else $failed = '1'; break;
                case 1 : if($type != $condition['value']); else $failed = '1'; break;
                case 2 : if($type > $condition['value']); else $failed = '1'; break;
                case 3 : if($type >= $condition['value']); else $failed = '1'; break;
                case 4 : if($type < $condition['value']); else $failed = '1'; break;
                case 5 : if($type <= $condition['value']); else $failed = '1'; break;
            }
        }
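
    One way to flatten both switch statements is to drive them from lookup tables instead; a sketch, assuming PHP 5.3+ (for the anonymous functions) and the same $user/$condition structure as above:

        $typeMap = array(
            0 => $user['hour'],
            1 => $user['location']['region_abv'],
            2 => $user['referrer_domain'],
            3 => $user['affiliate'],
            4 => $user['location']['country_code'],
            5 => $user['location']['city'],
        );

        $operators = array(
            0 => function($a, $b) { return $a == $b; },
            1 => function($a, $b) { return $a != $b; },
            2 => function($a, $b) { return $a >  $b; },
            3 => function($a, $b) { return $a >= $b; },
            4 => function($a, $b) { return $a <  $b; },
            5 => function($a, $b) { return $a <= $b; },
        );

        $type  = strtolower($typeMap[$condition['type']]);
        $value = strtolower($condition['value']);

        // Mark the rule as failed unless the selected comparison holds.
        $compare = $operators[$condition['operator']];
        if (!$compare($type, $value)) {
            $failed = 1;
        }

    New condition types or operators then become one-line additions to an array instead of new case branches, which keeps the interface-driven rules easy to extend.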

  • How should I implement lazy session creation in PHP?

    - by Adam Franco
    By default, PHP's session handling mechanism sets a session cookie header and stores a session even if there is no data in the session. If no data is set in the session, then I don't want a Set-Cookie header sent to the client in the response, and I don't want an empty session record stored on the server. If data is added to $_SESSION, then the normal behavior should continue.

    My goal is to implement lazy session creation behavior of the sort found in Drupal 7 and Pressflow, where no session is stored (and no session cookie header is sent) unless data is added to the $_SESSION array during application execution. The point of this behavior is to allow reverse proxies such as Varnish to cache and serve anonymous traffic while letting authenticated requests pass through to Apache/PHP. Varnish (or another proxy server) is configured to cache and serve requests without cookies, and to pass any request with a cookie through to the backend, on the correct assumption that if a cookie exists then the request is for a particular client.

    I have ported the session handling code from Pressflow that uses session_set_save_handler() and overrides the write callback to check for data in the $_SESSION array before saving; I will write this up as a library and add an answer here if this is the best/only route to take.

    My question: while I can implement a fully custom session_set_save_handler() system, is there an easier way to get this lazy session creation behavior in a relatively generic way that would be transparent to most applications?
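
    For comparison with the Pressflow write-handler port, the other generic route is to defer session_start() itself until the application first writes to the session; a minimal sketch, assuming session access can be funneled through helper functions (names made up):

        function lazy_session_set($key, $value)
        {
            if (session_id() === '') {
                session_start();   // Set-Cookie is only emitted once data actually exists
            }
            $_SESSION[$key] = $value;
        }

        function lazy_session_get($key)
        {
            // Only resume a session the client already has; never create one here.
            if (session_id() === '' && isset($_COOKIE[session_name()])) {
                session_start();
            }
            return isset($_SESSION[$key]) ? $_SESSION[$key] : null;
        }

    The write-handler approach is more transparent to existing code, but only the deferred-start approach also suppresses the Set-Cookie header, since PHP sends the cookie at session_start() rather than at write time.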

  • TCP: Address already in use exception - possible causes for client port? NO PORT EXHAUSTION

    - by TomTom
    Hello,

    A stupid problem: I get "Address already in use: connect" exceptions from a client connecting to a server. Sadly, the setup is complicated, making debugging complex, and we have run out of options.

    The environment:
    - Client/server system, both running on the same machine. The client is actually a service doing some database manipulation at specific times.
    - The connection goes from C# through OleDb to an EasySoft JDBC driver to a custom-written JDBC server that then hosts logic in C++. Yes, complex, but the third-party supplier decided to expose the extension mechanisms for their server through a JDBC interface. Not a lot can be done here ;)

    The symptom: at (ir)regular intervals we get an "Address already in use: connect" error from the JDBC driver. They seem to come from one particular service we run.

    Now, I did read all the stuff about port exhaustion. This is why we have a little tool running now that counts ports and their states every minute. Last time this happened, we had an astonishing 370 ports in use, with the count rising to about 900 AFTER the error. We already patched the registry (it is a Windows machine) to allow more than the standard 5,000 client ports, but even then, we are far, far from that limit to begin with.

    Which is why I am asking here: anyone have an idea what ELSE could cause this?

    It is a Windows 2003 Server machine, 64-bit. The only other thing I can see that may cause it (but this functionality is supposedly disabled) is Symantec Endpoint Protection, which is installed on the server; being capable of acting as a firewall, it could possibly intercept network traffic. I don't want to open a can of worms by pointing to Symantec prematurely (if pointing to Symantec can ever be seen as such).

    So, anyone have an idea what else may be the cause? Thanks

  • Flex 3: should I provide prepared data to my component or make it process data before display?

    - by grapkulec
    I'm starting to learn a little Flex, just for fun and maybe to prove that I can still learn something new :) I have an idea for a project, and one of its parts is a tree component which could display data in different ways depending on configuration.

    The idea: there is a list of objects having properties like id, date, time, name, description. Sometimes the list should be displayed like this:

    - first level: date
    - second level: time
    - third level: name

    and sometimes like this:

    - first level: year
    - second level: month
    - third level: day
    - fourth level: time and name

    By level I mean level of nesting, of course. So we can have years that have months, that have days, that have hours, and so forth.

    The problem: what would be the best way to do this? I mean, should I prepare the data for the chosen way of nesting outside of the component, or even outside of Flex? I can do it at the web service level in C#, where I plan to have the database access layer, and send Flex nice, ready-to-display XML or an array of objects. But I wonder if that won't cause additional and maybe unnecessary network traffic.

    I tried to hack some code in my component to convert my data objects into XML or an ArrayCollection, but I don't know enough of Flex and got stuck on eliminating duplicates and getting specific data by some key value. Usually to do such things I have STL with maps, sets, and vectors, and I find Flex arrays and even Dictionary a little bit confusing (I've read the language reference and googled without any significant luck).

    The question: so, to sum things up, should I give my tree component data prepared just for the chosen type of display, or should I try to do it internally inside the component (or some helper class written in ActionScript)?

  • How should I manage my many-to-many relationships?

    - by wes
    Hello all,

    I have a database containing a couple of tables: files and users. This relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables. Here's the schema of each table:

        files - file_id, file_name
        users - user_id, user_name
        users_files_ref - user_file_ref_id, user_id, file_id

    I'm using CodeIgniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem. Once I add a file to the files table, I will need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added, so that I can get the ID, and then use that ID to insert the new users_files_ref record.

    I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario. I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-)

    I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables; I'm just wondering how to manage the adding of file records for this scenario?

    Thanks for any help provided, it's much appreciated.
    -Wes
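
    For what it's worth, the usual pattern avoids the "grab the last file added" query entirely: MySQL tracks LAST_INSERT_ID() per connection, and CodeIgniter exposes it as $this->db->insert_id(), so concurrent uploads cannot collide. A sketch, with the variable names assumed:

        // Insert the file, then read back the auto-increment id for this connection.
        $this->db->insert('files', array('file_name' => $file_name));
        $file_id = $this->db->insert_id();   // per-connection, safe under heavy traffic

        // Use it for the join-table record.
        $this->db->insert('users_files_ref', array(
            'user_id' => $user_id,
            'file_id' => $file_id,
        ));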

  • Algorithm for finding the best routes for food distribution in game

    - by Tautrimas
    Hello,

    I'm designing a city-building game and have run into a problem. Imagine Sierra's Caesar III game mechanics: you have many city districts with one market each, and several granaries over the distance, connected by a directed weighted graph. The difference: people (here, cars) are units that form traffic jams (this is where the graph weights come in).

    Note: in the Caesar game series, people harvested food and stockpiled it in several big granaries, whereas many markets (small shops) took food from the granaries and delivered it to the citizens.

    The task: tell each district where it should be getting its food from, while taking the least time and minimizing congestion on the city's roads.

    Map example: (sample diagram omitted; described below)

    Suppose that the yellow districts need 7, 7, and 4 apples respectively, and the bluish granaries have 7 and 11 apples respectively. Suppose edge weights are proportional to their length. Then the solution should be something like the gray numbers indicated on the edges. E.g., the first district gets 4 apples from the 1st granary and 3 apples from the 2nd, while the last district gets 4 apples from only the 2nd granary. Here, the vertical roads are occupied to the max first, and then the remaining workers are sent along the diagonal paths.

    Question: what practical and very fast algorithm should I use? I was looking at some papers (Congestion Games: Optimization in Competition, etc.) describing congestion games, but could not get the big picture. Any help is very appreciated!

    P.S. I can post very few links and no images because of the new-user restriction.

  • Windows Form color changing

    - by Sef
    Hello,

    I am attempting to make a MasterMind program as a sort of exercise. It has:

    - a field of 40 picture boxes (10 rows of 4)
    - 6 buttons (red, green, orange, yellow, blue, purple)

    When I press one of these buttons (let's assume the red one), a picture box should turn red:

    - press the red button: the first picture box turns red
    - press the blue button: the second picture box turns blue
    - press the orange button: the third picture box turns orange

    and so on...

    My question is: how do I iterate through all these picture boxes? I can get it to work, but only if I write this for every single box, which would of course take me countless lines that contain basically the same thing:

        private void picRood_Click(object sender, EventArgs e)
        {
            UpdateDisplay();
            pb1.BackColor = System.Drawing.Color.Red;
        }

    I've made a previous, similar program that simulates a traffic light, where I could assign a value to each color (red 0, orange 1, green 2). Is something similar needed here, or how exactly do I address all those picture boxes and make them correspond to the proper button?

    Best regards.
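
    One way to avoid 40 near-identical handlers is to collect the picture boxes into an array once and keep a cursor for the next free box; a sketch, assuming the boxes are named pb1 through pb40 as in the snippet above (the form name is invented):

        using System;
        using System.Drawing;
        using System.Windows.Forms;

        public partial class MastermindForm : Form
        {
            private PictureBox[] boxes;
            private int nextBox = 0;

            private void MastermindForm_Load(object sender, EventArgs e)
            {
                // Collect pb1..pb40 once, in order, instead of naming them one by one.
                boxes = new PictureBox[40];
                for (int i = 0; i < boxes.Length; i++)
                {
                    boxes[i] = (PictureBox)Controls.Find("pb" + (i + 1), true)[0];
                }
            }

            private void SetNextBox(Color color)
            {
                if (nextBox < boxes.Length)
                {
                    boxes[nextBox++].BackColor = color;
                }
            }

            private void picRood_Click(object sender, EventArgs e)
            {
                UpdateDisplay();
                SetNextBox(Color.Red);
            }
        }

    Each color button then calls SetNextBox with its own Color value, so all six click handlers stay one line long.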

  • Problem with GWT behind a reverse proxy - either nginx or Apache

    - by Don Branson
    I'm having a problem with GWT when it's behind a reverse proxy. The backend app is deployed within a context; let's call it /context.

    The GWT app works fine when I hit it directly: http://host:8080/context/

    I can configure a reverse proxy in front of it. Here's my nginx example:

        upstream backend {
            server 127.0.0.1:8080;
        }
        ...
        location / {
            proxy_pass http://backend/context/;
        }

    But when I run through the reverse proxy, GWT gets confused, saying:

        2009-10-04 14:05:41.140:/:WARN: Login: ERROR: The serialization policy file '/C7F5ECA5E3C10B453290DE47D3BE0F0E.gwt.rpc' was not found; did you forget to include it in this deployment?
        2009-10-04 14:05:41.140:/:WARN: Login: WARNING: Failed to get the SerializationPolicy 'C7F5ECA5E3C10B453290DE47D3BE0F0E' for module 'https://hostname:444/'; a legacy, 1.3.3 compatible, serialization policy will be used. You may experience SerializationExceptions as a result.
        2009-10-04 14:05:41.292:/:WARN: StoryService: ERROR: The serialization policy file '/0445C2D48AEF2FB8CB70C4D4A7849D88.gwt.rpc' was not found; did you forget to include it in this deployment?
        2009-10-04 14:05:41.292:/:WARN: StoryService: WARNING: Failed to get the SerializationPolicy '0445C2D48AEF2FB8CB70C4D4A7849D88' for module 'https://hostname:444/'; a legacy, 1.3.3 compatible, serialization policy will be used. You may experience SerializationExceptions as a result.

    In other words, GWT isn't getting the word that it needs to prepend /context/ when looking for C7F5ECA5E3C10B453290DE47D3BE0F0E.gwt.rpc, but only when the request comes through the proxy.

    A workaround is to add the context to the URL for the web site:

        location /context/ {
            proxy_pass http://backend/context/;
        }

    but that means the context is now part of the URL that the user sees, and that's ugly. Anybody know how to make GWT happy in this case?

    Software versions: GWT 1.7.0 (same problem with 1.7.1); Jetty 6.1.21 (but the same problem existed under Tomcat); nginx 0.7.62 (same problem under Apache 2.x).

    I've looked at the traffic between the proxy and the backend using DonsProxy, but there's nothing noteworthy there.
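
    One workaround sometimes used for exactly this mismatch is to override RemoteServiceServlet.doGetSerializationPolicy() on the server and re-insert the stripped context into the module base URL before the policy file lookup. A sketch, with the servlet name invented and the context path hardcoded for illustration:

        import java.net.MalformedURLException;
        import java.net.URL;

        import javax.servlet.http.HttpServletRequest;

        import com.google.gwt.user.server.rpc.RemoteServiceServlet;
        import com.google.gwt.user.server.rpc.SerializationPolicy;

        public class ProxyAwareServlet extends RemoteServiceServlet {

            @Override
            protected SerializationPolicy doGetSerializationPolicy(
                    HttpServletRequest request, String moduleBaseURL, String strongName) {
                String adjusted = moduleBaseURL;
                try {
                    URL url = new URL(moduleBaseURL);
                    if (!url.getPath().startsWith("/context/")) {
                        // The proxy stripped the deployment context; put it back
                        // so the *.gwt.rpc policy file is resolved under /context/.
                        adjusted = url.getProtocol() + "://" + url.getAuthority()
                                + "/context" + url.getPath();
                    }
                } catch (MalformedURLException e) {
                    // Fall through and let the default lookup use the original URL.
                }
                return super.doGetSerializationPolicy(request, adjusted, strongName);
            }
        }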

  • Synchronizing files between Linux servers, through FTP

    - by Daniel Magliola
    I have the following configuration of servers:

    - 1 central Linux server, a VPS
    - 8 satellite Linux servers on "crappy shared hostings"

    I have a bunch of files that I need to have on all servers. Right now I'm copying them everywhere manually, but I want to be able to copy them to the central server and then have a scheduled process that runs every now and then and synchronizes them (only outwardly; there is no need to look for "new" files on the satellite servers).

    There are a couple of catches, though:

    - I can't have any custom software on the satellite servers, or do strange command-line things that auto-connect to them and send the files directly. I know this is the way these kinds of things are normally done, but the satellite servers are crappy shared hosting ones where I have absolutely no control over anything.
    - I need to send the files over FTP.
    - I also need to have, on my central server, a list of the files that are available on each of the satellite servers, to make sure they are ready before I send traffic to them.

    If I were to do this manually, the steps would be:

    1. get the list of files on a satellite server
    2. compare to my own, and send the files that are missing
    3. get the list of files again, and store it in my central database

    I'd like to know what tools are out there that can alleviate as much of this as possible: first the syncing, and then the "getting the list of files available on the other server". I'm going to be doing everything from PHP; I'm not sure if there are good tools to "use FTP from PHP", which I'm pretty sure I'll have to do, for step 3 at least.

    Thanks in advance for any ideas!
    Daniel
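
    For steps 1-3, PHP's built-in FTP extension is sufficient; a minimal sketch for one satellite, where the host, credentials, local path, and the save_file_list() helper are all placeholders:

        $conn = ftp_connect('satellite1.example.com');
        ftp_login($conn, 'user', 'pass');
        ftp_pasv($conn, true);   // passive mode plays nicer with shared hosts

        // 1. get the list of files the satellite already has
        $remote = ftp_nlist($conn, '.');

        // 2. send anything that exists locally but not remotely
        foreach (glob('/var/files/*') as $path) {
            if (!in_array(basename($path), $remote)) {
                ftp_put($conn, basename($path), $path, FTP_BINARY);
            }
        }

        // 3. re-list and store the result in the central database
        $available = ftp_nlist($conn, '.');
        save_file_list('satellite1', $available);   // hypothetical helper

        ftp_close($conn);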
