Search Results

Search found 1797 results on 72 pages for 'bandwidth measuring'.


  • BITS client fails to specify HTTP Range header

    - by user256890
    Our system is designed to deploy to regions with unreliable and/or insufficient network connections. We built our own fault-tolerant data replication service that uses BITS. Due to some security and maintenance requirements, we implemented our own ASP.NET file download service on the server side instead of just letting IIS serve up the files. When the BITS client makes an HTTP download request with a specified range of the file, our ASP.NET page pulls the requested file segment into memory and serves it up as the HTTP response. That is the theory. ;) This theory fails in an artificial lab scenario, but I will not let the system deploy to real-life scenarios unless we can overcome it. Lab scenario: I have the BITS client and IIS on the same developer machine, so I effectively have enormous network "bandwidth", and BITS is intelligent enough to detect that. As the BITS client discovers the unlimited bandwidth, it gets more and more "greedy". With each HTTP request, BITS asks for greater and greater file ranges (we are talking about downloading CD ISO files and videos), demanding 20-40MB inside a single HTTP request, a size I am not comfortable pulling into server memory in one go. I can overcome that simply by returning less than demanded; that part is OK. However, BITS then gets really "confident" and "arrogant" and demands the file WITHOUT specifying a download range at all, i.e., it wants the entire file in a single request, and this is where things go wrong. I do not know how to answer that request in the case of a 600MB file. If I just provide the first 1MB of the file, the BITS client keeps sending HTTP requests for the same file without a download range, hammering its point that it wants the entire file in one go. Since I am reluctant to provide the entire file, BITS gives up after several tries and reports an error. Any thoughts?
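
    For reference, a minimal sketch of the kind of range-capped handler described above (the handler name, the file lookup and the 4 MB cap are assumptions, not the poster's actual service). It honors the Range header when present, never returns more than a fixed window per request, and streams from disk instead of buffering the whole segment in memory; whether BITS will accept a 206/Content-Range reply to its range-less requests is exactly the open question here.

        using System;
        using System.IO;
        using System.Web;

        public class RangedDownloadHandler : IHttpHandler   // hypothetical name
        {
            private const int MaxChunkBytes = 4 * 1024 * 1024;   // cap what one request may pull from disk

            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Hypothetical lookup: map a query-string file name to a server path.
                string path = context.Server.MapPath("~/files/" + Path.GetFileName(context.Request["file"]));
                long fileLength = new FileInfo(path).Length;

                long start = 0;
                string range = context.Request.Headers["Range"];          // e.g. "bytes=0-4194303"
                if (!string.IsNullOrEmpty(range) && range.StartsWith("bytes="))
                {
                    // Ignores suffix and multi-range forms for brevity.
                    start = long.Parse(range.Substring(6).Split('-')[0]);
                }

                // Serve at most MaxChunkBytes, even if the client asked for more (or for everything).
                long count = Math.Min(MaxChunkBytes, fileLength - start);

                context.Response.StatusCode = 206;                        // Partial Content
                context.Response.ContentType = "application/octet-stream";
                context.Response.AddHeader("Accept-Ranges", "bytes");
                context.Response.AddHeader("Content-Range",
                    string.Format("bytes {0}-{1}/{2}", start, start + count - 1, fileLength));
                context.Response.AddHeader("Content-Length", count.ToString());

                using (FileStream fs = File.OpenRead(path))
                {
                    fs.Seek(start, SeekOrigin.Begin);
                    byte[] buffer = new byte[64 * 1024];
                    long remaining = count;
                    while (remaining > 0)
                    {
                        int read = fs.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
                        if (read <= 0) break;
                        context.Response.OutputStream.Write(buffer, 0, read);
                        remaining -= read;
                    }
                }
            }
        }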

    Read the article

  • How are you taking advantage of Multicore?

    - by tgamblin
    As someone in the world of HPC who came from the world of enterprise web development, I'm always curious to see how developers back in the "real world" are taking advantage of parallel computing. This is much more relevant now that all chips are going multicore, and it'll be even more relevant when there are thousands of cores on a chip instead of just a few. My questions are: How does this affect your software roadmap? I'm particularly interested in real stories about how multicore is affecting different software domains, so specify what kind of development you do in your answer (e.g. server side, client-side apps, scientific computing, etc). What are you doing with your existing code to take advantage of multicore machines, and what challenges have you faced? Are you using OpenMP, Erlang, Haskell, CUDA, TBB, UPC or something else? What do you plan to do as concurrency levels continue to increase, and how will you deal with hundreds or thousands of cores? If your domain doesn't easily benefit from parallel computation, then explaining why is interesting, too. Finally, I've framed this as a multicore question, but feel free to talk about other types of parallel computing. If you're porting part of your app to use MapReduce, or if MPI on large clusters is the paradigm for you, then definitely mention that, too. Update: If you do answer #5, mention whether you think things will change if there get to be more cores (100, 1000, etc) than you can feed with available memory bandwidth (seeing as how bandwidth is getting smaller and smaller per core). Can you still use the remaining cores for your application?

    Read the article

  • Can I load data (Google App Engine) from http://localhost:8100/remote_api?

    - by zjm1126
    i can download data from gae (http://zjm1126.appspot.com/remote_api), this is code: appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv and it successful : D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://zjm1 126.appspot.com/remote_api --filename=a.csv Downloading data records. [INFO ] Logging to bulkloader-log-20100618.162421 [INFO ] Throttling transfers: [INFO ] Bandwidth: 250000 bytes/second [INFO ] HTTP connections: 8/second [INFO ] Entities inserted/fetched/modified: 20/second [INFO ] Batch Size: 10 [INFO ] Opening database: bulkloader-progress-20100618.162421.sql3 [INFO ] Opening database: bulkloader-results-20100618.162421.sql3 [INFO ] Connecting to zjm1126.appspot.com/remote_api Please enter login credentials for zjm1126.appspot.com Email: [email protected] Password for [email protected]: [INFO ] Downloading kinds: [u'LogText', u'Greeting', u'Forum', u'Thread'] .... [INFO ] Have 0 entities, 0 previously transferred [INFO ] 0 entities (8804 bytes) transferred in 11.3 seconds so i want to know can load data from 127.0.0.1 , this is my code : appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv and the error is : D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://loca lhost:8100/remote_api --filename=a.csv Downloading data records. [INFO ] Logging to bulkloader-log-20100618.162325 [INFO ] Throttling transfers: [INFO ] Bandwidth: 250000 bytes/second [INFO ] HTTP connections: 8/second [INFO ] Entities inserted/fetched/modified: 20/second [INFO ] Batch Size: 10 [INFO ] Opening database: bulkloader-progress-20100618.162325.sql3 [INFO ] Opening database: bulkloader-results-20100618.162325.sql3 Please enter login credentials for localhost Email: [email protected] Password for [email protected]: [INFO ] Connecting to localhost:8100/remote_api [ERROR ] Exception during authentication Traceback (most recent call last): File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkload er.py", line 3169, in Run self.request_manager.Authenticate() File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkload er.py", line 1178, in Authenticate remote_api_stub.MaybeInvokeAuthentication() File "d:\Program Files\Google\google_appengine\google\appengine\ext\remote_api \remote_api_stub.py", line 542, in MaybeInvokeAuthentication datastore_stub._server.Send(datastore_stub._path, payload=None) File "d:\Program Files\Google\google_appengine\google\appengine\tools\appengin e_rpc.py", line 346, in Send f = self.opener.open(req) File "D:\Python25\lib\urllib2.py", line 387, in open response = meth(req, response) File "D:\Python25\lib\urllib2.py", line 498, in http_response 'http', request, response, code, msg, hdrs) File "D:\Python25\lib\urllib2.py", line 425, in error return self._call_chain(*args) File "D:\Python25\lib\urllib2.py", line 360, in _call_chain result = func(*args) File "D:\Python25\lib\urllib2.py", line 506, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 404: Not Found [INFO ] Authentication Failed so what should i do , thanks
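
    A hedged note on the 404: the development server only answers at /remote_api if the application it is running maps that handler, so check the app.yaml that dev_appserver.py on port 8100 is serving. With the Python SDK of that era the mapping looked like the sketch below (treat the exact lines as an assumption to verify against your SDK version); when the dev server then asks for credentials, any email with a blank password is accepted.

        handlers:
        - url: /remote_api
          script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
          login: admin

    Later SDK releases replaced this stanza with a builtins entry (remote_api: on).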

    Read the article

  • send message to a web service according to its schema

    - by hguser
    Hi: When I request a web service, it gives me a response that shows the required parameters and their schema, for example: the response of the web service with the description of the parameter. Then I organize the next request according to that parameter; for the parameter "bandWidth" I set it as follows: <InputParameter parameterID="bandWidth"> <value> <commonData> <swe:Category> <swe:quality> <swe:Text> <swe:value>low</swe:value> </swe:Text> </swe:quality> </swe:Category> </commonData> </value> </InputParameter> However, I get an exception: error information. I also tried the following format, and things do not change: <InputParameter parameterID="bandWidth"> <value> <commonData> <swe:Category> <swe:value>low</swe:value> </swe:Category> </commonData> </value> </InputParameter> So I wonder: how do I define the parameter to match the format the schema defines? The schema can be found here: The schema

    Read the article

  • What can I do to get Mozilla Firefox to preload the eventual image result?

    - by Dalal
    I am attempting to preload images using JavaScript. I have declared an array as follows with image links from different places: var imageArray = new Array(); imageArray[0] = new Image(); imageArray[1] = new Image(); imageArray[2] = new Image(); imageArray[3] = new Image(); imageArray[0].src = "http://www.bollywoodhott.com/wp-content/uploads/2008/12/arjun-rampal.jpg"; imageArray[1].src = "http://labelleetleblog.files.wordpress.com/2009/06/josie-maran.jpg"; imageArray[2].src = "http://1.bp.blogspot.com/_22EXDJCJp3s/SxbIcZHTHTI/AAAAAAAAIXc/fkaDiOKjd-I/s400/black-male-model.jpg"; imageArray[3].src = "http://www.iill.net/wp-content/uploads/images/hot-chick.jpg"; The image fade and transformation effects that I am doing using this array work properly for the first 3 images, but for the last one, imageArray[3], the actual image data of the image does not get preloaded and it completely ruins the effect, since the actual image data loads AFTERWARDS, only at the time it needs to be displayed, it seems. This happens because the last link http://www.iill.net/wp-content/uploads/images/hot-chick.jpg is not a direct link to the image. If you go to that link, your browser will redirect you to the ACTUAL location. Now, my image preloading code in Chrome works perfectly well, and the effects look great. Because it seems that Chrome preloads the actual data - the EVENTUAL image that is to be shown. This means that in Chrome if I preloaded an image that will redirect to 'stop stealing my bandwidth', then the image that gets preloaded is 'stop stealing my bandwidth'. How can I modify my code to get Firefox to behave the same way?

    Read the article

  • How do I handle freeing unmanaged structures on application close?

    - by LostKaleb
    I have a C# project in which I use several unmanaged C++ functions. What's more, I also have static IntPtr fields that I use as parameters for those functions. I know that whenever I use them, I should implement IDisposable in that class and use a destructor to invoke the Dispose method, where I free the IntPtr, as described on the MSDN page. public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } private void Dispose(bool disposing) { // Check to see if Dispose has already been called. if (!this.disposed) { if (disposing) { component.Dispose(); } CloseHandle(m_InstanceHandle); m_InstanceHandle = IntPtr.Zero; disposed = true; } } [System.Runtime.InteropServices.DllImport("Kernel32")] private extern static Boolean CloseHandle(IntPtr handle); However, when I terminate the application, I'm still left with a hanging process in Task Manager. I believe it must be related to the use of the MarshalAs attribute in my structures: [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)] public struct SipxAudioCodec { [MarshalAs(UnmanagedType.ByValTStr, SizeConst=32)] public string CodecName; public SipxAudioBandwidth Bandwidth; public int PayloadType; } When I create such a structure, should I also take care to free the space it allocates, using a destructor? [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)] public struct SipxAudioCodec { [MarshalAs(UnmanagedType.ByValTStr, SizeConst=32)] public string CodecName; public SipxAudioBandwidth Bandwidth; public int PayloadType; ~SipxAudioCodec() { Marshal.FreeGlobal(something...); } }
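
    Two points worth separating here, offered as general interop guidance rather than a diagnosis of the hanging process. A C# struct cannot declare a finalizer, so the ~SipxAudioCodec() above would not compile, and it is not needed: a field marked [MarshalAs(UnmanagedType.ByValTStr)] is copied into the unmanaged block by the interop marshaler, which releases its own temporary copies. The only memory you free by hand is memory you allocated yourself, as in this sketch (the helper is hypothetical; Marshal.FreeHGlobal is the actual API name):

        using System;
        using System.Runtime.InteropServices;

        static class NativeCodecInterop   // hypothetical helper, not the poster's code
        {
            // Allocate an unmanaged copy of the struct, hand it to native code, then free it.
            public static void CallWithCodec(SipxAudioCodec codec, Action<IntPtr> nativeCall)
            {
                IntPtr native = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(SipxAudioCodec)));
                try
                {
                    Marshal.StructureToPtr(codec, native, false);
                    nativeCall(native);               // e.g. the unmanaged C++ function
                }
                finally
                {
                    Marshal.FreeHGlobal(native);      // always released, even on exceptions
                }
            }
        }

    For m_InstanceHandle itself, deriving a small class from SafeHandleZeroOrMinusOneIsInvalid and putting the CloseHandle call in ReleaseHandle guarantees cleanup even if Dispose is never reached; and a process that lingers in Task Manager after exit is more often a thread or unmanaged message loop that has not finished than leaked marshaled memory.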

    Read the article

  • Regex to GENERATE thumbnails!?!?! (but that's crazy!)

    - by CryptoMonkey
    Hello everyone! So here is my situation, and the solution I've come up with to solve the problem. I have created an application that includes TinyMCE to allow users to create HTML content for publishing. The user can include images in their markup and drag/resize those images, affecting the final width/height attributes in the IMG tag. This is all great: users can include images and resize/relocate them to their desired appearance. But one big problem is that I am now sending a (possibly) much larger image to the client, only to have the browser resize it down to the requested width/height. All that bandwidth and lost load time... So my solution is to pre-process my users' markup, scanning all of the IMG tags and parsing out the height/width/src attributes, then set each img's src attribute to a phpThumb request with the parsed height/width passed into the thumbnail URL. This will produce my reduced-size image (optimising bandwidth at the expense of CPU and caching). What do you think about this solution? I've seen other posts where people were using mod_rewrite to do something similar, but I want to transform the content as the page is served rather than manipulate the image requests as they're being received. Any thoughts about this design? I need some help with the fine details, as my regex skills need some work, but I'm very short on time and promise to pay my technical knowledge debt soon. To make the regexes easier, I can be sure of some things: only img tags that need this processing will have existing width="" and height="" attributes (with the double quotes and lower-cased text, but I suppose matching case-insensitively would be better in case TinyMCE changes). So: a regex to match only the necessary img tags, and maybe another three regexes to extract the src, the width, and the height? Thanks everyone.
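
    As a starting point, a hedged sketch of the matching/rewriting step (written in C# here; the same expression drops into PHP's preg_replace_callback). It assumes the attributes appear in src/width/height order (use separate per-attribute extractions, as suggested above, if the order varies) and a phpThumb endpoint reachable as phpThumb.php with the usual src/w/h parameters. An HTML parser remains the more robust tool if the markup gets messier.

        using System;
        using System.Text.RegularExpressions;

        static class ThumbRewriter   // illustrative only
        {
            // One pass over the markup: capture src/width/height from img tags that carry all three,
            // then point src at the thumbnailer with the same dimensions.
            static readonly Regex ImgTag = new Regex(
                @"<img[^>]*?\bsrc=""(?<src>[^""]+)""[^>]*?\bwidth=""(?<w>\d+)""[^>]*?\bheight=""(?<h>\d+)""[^>]*?>",
                RegexOptions.IgnoreCase);

            public static string Rewrite(string html)
            {
                return ImgTag.Replace(html, m => string.Format(
                    "<img src=\"phpThumb.php?src={0}&w={1}&h={2}\" width=\"{1}\" height=\"{2}\" />",
                    Uri.EscapeDataString(m.Groups["src"].Value),
                    m.Groups["w"].Value,
                    m.Groups["h"].Value));
            }
        }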

    Read the article

  • How to reduce the number of threads running on a Jetty server instance?

    - by Thirst for Excellence
    I would like to reduce the number of live threads on the server in order to reduce the bandwidth consumed by data transfers (data pulled at application launch time) from my application to its clients. I made the settings below; is this enough to reduce the bandwidth consumption on the Jetty server? Please help me, anyone. 1) In jetty.xml: <Set name="ThreadPool"> <New class="org.eclipse.jetty.util.thread.QueuedThreadPool"> <Set name="minThreads">1</Set> <Set name="maxThreads">50</Set> </New> </Set> 2) In services-config.xml: <channel-definition id="my-longpolling-amf" class="mx.messaging.channels.AMFChannel"> <endpoint url="http://MyIp:8400/blazeds/messagebroker/amflongpolling" class="flex.messaging.endpoints.AMFEndpoint"/> <properties> <polling-enabled>true</polling-enabled> <polling-interval-seconds>1</polling-interval-seconds> <wait-interval-millis>60000</wait-interval-millis> <client-wait-interval-millis>1</client-wait-interval-millis> <max-waiting-poll-requests>50</max-waiting-poll-requests> </properties> </channel-definition>

    Read the article

  • Big Data – Is Big Data Relevant to me? – Big Data Questionnaires – Guest Post by Vinod Kumar

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions of SQL Server 7.0, Oracle 7.3 and other database technologies – he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice. I think the series from Pinal is a good one for anyone planning to start on Big Data journey from the basics. In my daily customer interactions this buzz of “Big Data” always comes up, I react generally saying – “Sir, do you really have a ‘Big Data’ problem or do you have a big Data problem?” Generally, there is a silence in the air when I ask this question. Data is everywhere in organizations – be it big data, small data, all data and for few it is bad data which is same as no data :). Wow, don’t discount me as someone who opposes “Big Data”, I am a big supporter as much as I am a critic of the abuse of this term by the people. In this post, I wanted to let my mind flow so that you can also think in the direction I want you to see these concepts. In any case, this is not an exhaustive dump of what is in my mind – but you will surely get the drift how I am going to question Big Data terms from customers!!! Is Big Data Relevant to me? Many of my customers talk to me like blank whiteboard with no idea – “why Big Data”. They want to jump into the bandwagon of technology and they want to decipher insights from their unexplored data a.k.a. unstructured data with structured data. So what are these industry scenario’s that come to mind? Here are some of them: Financials Fraud detection: Banks and Credit cards are monitoring your spending habits on real-time basis. Customer Segmentation: applies in every industry from Banking to Retail to Aviation to Utility and others where they deal with end customer who consume their products and services. Customer Sentiment Analysis: Responding to negative brand perception on social or amplify the positive perception. Sales and Marketing Campaign: Understand the impact and get closer to customer delight. Call Center Analysis: attempt to take unstructured voice recordings and analyze them for content and sentiment. Medical Reduce Re-admissions: How to build a proactive follow-up engagements with patients. Patient Monitoring: How to track Inpatient, Out-Patient, Emergency Visits, Intensive Care Units etc. Preventive Care: Disease identification and Risk stratification is a very crucial business function for medical. Claims fraud detection: There is no precise dollars that one can put here, but this is a big thing for the medical field. Retail Customer Sentiment Analysis, Customer Care Centers, Campaign Management. Supply Chain Analysis: Every sensors and RFID data can be tracked for warehouse space optimization. Location based marketing: Based on where a check-in happens retail stores can be optimize their marketing. Telecom Price optimization and Plans, Finding Customer churn, Customer loyalty programs Call Detail Record (CDR) Analysis, Network optimizations, User Location analysis Customer Behavior Analysis Insurance Fraud Detection & Analysis, Pricing based on customer Sentiment Analysis, Loyalty Management Agents Analysis, Customer Value Management This list can go on to other areas like Utility, Manufacturing, Travel, ITES etc. So as you can see, there are obviously interesting use cases for each of these industry verticals. These are just representative list. Where to start? 
A lot of times I try to quiz customers on a number of dimensions before starting a Big Data conversation. Are you getting the data you need the way you want it and in a timely manner? Can you get in and analyze the data you need? How quickly is IT to respond to your BI Requests? How easily can you get at the data that you need to run your business/department/project? How are you currently measuring your business? Can you get the data you need to react WITHIN THE QUARTER to impact behaviors to meet your numbers or is it always “rear-view mirror?” How are you measuring: The Brand Customer Sentiment Your Competition Your Pricing Your performance Supply Chain Efficiencies Predictive product / service positioning What are your key challenges of driving collaboration across your global business?  What the challenges in innovation? What challenges are you facing in getting more information out of your data? Note: Garbage-in is Garbage-out. Hold good for all reporting / analytics requirements Big Data POCs? A number of customers get into the realm of setting a small team to work on Big Data – well it is a great start from an understanding point of view, but I tend to ask a number of other questions to such customers. Some of these common questions are: To what degree is your advanced analytics (natural language processing, sentiment analysis, predictive analytics and classification) paired with your Big Data’s efforts? Do you have dedicated resources exploring the possibilities of advanced analytics in Big Data for your business line? Do you plan to employ machine learning technology while doing Advanced Analytics? How is Social Media being monitored in your organization? What is your ability to scale in terms of storage and processing power? Do you have a system in place to sort incoming data in near real time by potential value, data quality, and use frequency? Do you use event-driven architecture to manage incoming data? Do you have specialized data services that can accommodate different formats, security, and the management requirements of multiple data sources? Is your organization currently using or considering in-memory analytics? To what degree are you able to correlate data from your Big Data infrastructure with that from your enterprise data warehouse? Have you extended the role of Data Stewards to include ownership of big data components? Do you prioritize data quality based on the source system (that is Facebook/Twitter data has lower quality thresholds than radio frequency identification (RFID) for a tracking system)? Do your retention policies consider the different legal responsibilities for storing Big Data for a specific amount of time? Do Data Scientists work in close collaboration with Data Stewards to ensure data quality? How is access to attributes of Big Data being given out in the organization? Are roles related to Big Data (Advanced Analyst, Data Scientist) clearly defined? How involved is risk management in the Big Data governance process? Is there a set of documented policies regarding Big Data governance? Is there an enforcement mechanism or approach to ensure that policies are followed? Who is the key sponsor for your Big Data governance program? (The CIO is best) Do you have defined policies surrounding the use of social media data for potential employees and customers, as well as the use of customer Geo-location data? How accessible are complex analytic routines to your user base? 
What is the level of involvement with outside vendors and third parties in regard to the planning and execution of Big Data projects? What programming technologies are utilized by your data warehouse/BI staff when working with Big Data? These are some of the important questions I ask each customer who is actively evaluating Big Data trends for their organizations. These questions give you a sense of direction where to start, what to use, how to secure, how to analyze and more. Sign off Any Big data is analysis is incomplete without a compelling story. The best way to understand this is to watch Hans Rosling – Gapminder (2:17 to 6:06) videos about the third world myths. Don’t get overwhelmed with the Big Data buzz word, the destination to what your data speaks is important. In this blog post, we did not particularly look at any Big Data technologies. This is a set of questionnaire one needs to keep in mind as they embark their journey of Big Data. I did write some of the basics in my blog: Big Data – Big Hype yet Big Opportunity. Do let me know if these questions make sense?  Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • NUMA-aware placement of communication variables

    - by Dave
    For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss by locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonable well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N. That is, the memory underlying a page resides on just one node. But when thinking about accesses to heavily-written communication variables we normally consider what caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses and cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables has very little impact on performance. The variables are frequently accessed so likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought. Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox, and having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that was not the case and that NUMA placement of the communication variables was actually quite important. For background an X4800 system consists of 8 Intel X7560 Xeons . Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's not central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant. 
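
    For concreteness, the round-trip pattern being measured reduces to the sketch below (written in C# purely for illustration; the author's actual harness, and the thread and page placement that is the subject of the post, happen outside the language via OS-level placement controls). The client publishes a monotonically increasing request value and spins until the server's transponder reply, request + 1, appears in the response word.

        using System;
        using System.Diagnostics;
        using System.Threading;
        using System.Threading.Tasks;

        class MailboxRoundTrip   // illustrative sketch only
        {
            static long requestBox;    // written by the client, polled by the server
            static long responseBox;   // written by the server, polled by the client

            static void Server(int rounds)
            {
                long last = 0;
                for (int i = 0; i < rounds; i++)
                {
                    long req;
                    while ((req = Volatile.Read(ref requestBox)) == last) { }   // busy-wait for a new request
                    last = req;
                    Volatile.Write(ref responseBox, req + 1);                   // transponder: echo request + 1
                }
            }

            static void Main()
            {
                const int rounds = 1000000;
                Task server = Task.Run(() => Server(rounds));
                Stopwatch sw = Stopwatch.StartNew();
                for (long r = 1; r <= rounds; r++)
                {
                    Volatile.Write(ref requestBox, r);
                    while (Volatile.Read(ref responseBox) != r + 1) { }         // busy-wait on the reply
                }
                sw.Stop();
                server.Wait();
                Console.WriteLine("average round trip: {0:F0} ns",
                    sw.Elapsed.TotalMilliseconds * 1e6 / rounds);
            }
        }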
Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip-time and throughput rate between the client and server. In this benchmark the server simply acts as a simple transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or different cache lines -- while varying the node where the variables reside along with the placement of the threads. The key observation was that if the client and server threads were on different nodes, then the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies. Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their response not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts and ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needed to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer doesn't need the home to respond over the QPI link, as the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path. Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes. Those nodes send their response to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probe goes directly to the home node and are not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd party owner of the line, if any, can respond either to the home or the original requester (or even to both) according to the protocol policies. 
There are myriad variations that have been implemented, and unfortunately vendor terminology doesn't always agree between vendors or with the academic taxonomy papers. The key is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it also may require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping. While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring. (Actually there are multiple contra-rotating rings). And the last-level on-chip cache (LLC) is partitioned in banks or slices, which with each slice being associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly the magnitude we see inter-package. I've not seen such communication distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.

    Read the article

  • cisco 2911 router vs 2811 router

    - by NickToyota
    Just wondering if anyone has experience with the cisco 2911 or 2900 series routers. I understand it is newer and similar to the 2811 but more robust. The price difference is not that much more. I am trying to determine if I should go with the 29xx or 28xx series for a small-medium sized company. ISP bandwidth load balancing and fail over is required. T1 and ADSL lines already in place.

    Read the article

  • Network Logon Issues with Group Policy and Network

    - by bobloki
    I am gravely in need of your help and assistance. We have a problem with our logon and startup to our Windows 7 Enterprise system. We have more than 3000 Windows Desktops situated in roughly 20+ buildings around campus. Almost every computer on campus has the problem that I will be describing. I have spent over one month peering over etl files from Windows Performance Analyzer (A great product) and hundreds of thousands of event logs. I come to you today humbled that I could not figure this out. The problem as simply put our logon times are extremely long. An average first time logon is roughly 2-10 minutes depending on the software installed. All computers are Windows 7, the oldest computers being 5 years old. Startup times on various computers range from good (1-2 minutes) to very bad (5-60). Our second time logons range from 30 seconds to 4 minutes. We have a gigabit connection between each computer on the network. We have 5 domain controllers which also double as our DNS servers. Initial testing led us to believe that this was a software problem. So I spent a few days testing machines only to find inconsistent results from the etl files from xperfview. Each subset of computers on campus had a different subset of software issues, none seeming to interfere with logon just startup. So I started looking at our group policy and located some very interesting event ID’s. Group Policy 1129: The processing of Group Policy failed because of lack of network connectivity to a domain controller. Group Policy 1055: The processing of Group Policy failed. Windows could not resolve the computer name. This could be caused by one of more of the following: a) Name Resolution failure on the current domain controller. b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller). NETLOGON 5719 : This computer was not able to set up a secure session with a domain controller in domain OURDOMAIN due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator. E1kexpress 27: Intel®82567LM-3 Gigabit Network Connection – Network link is disconnected. NetBT 4300 – The driver could not be created. WMI 10 - Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage 99" could not be reactivated in namespace "//./root/CIMV2" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected. More or less with timestamps it becomes apparent that the network maybe the issue. 1:25:57 - Group Policy is trying to discover the domain controller information 1:25:57 - The network link has been disconnected 1:25:58 - The processing of Group Policy failed because of lack of network connectivity to a domain controller. This may be a transient condition. A success message would be generated once the machine gets connected to the domain controller and Group Policy has successfully processed. If you do not see a success message for several hours, then contact your administrator. 1:25:58 - Making LDAP calls to connect and bind to active directory. DC1.ourdomain.edu 1:25:58 - Call failed after 0 milliseconds. 1:25:58 - Forcing rediscovery of domain controller details. 
1:25:58 - Group policy failed to discover the domain controller in 1030 milliseconds 1:25:58 - Periodic policy processing failed for computer OURDOMAIN\%name%$ in 1 seconds. 1:25:59 - A network link has been established at 1Gbps at full duplex 1:26:00 - The network link has been disconnected 1:26:02 - NtpClient was unable to set a domain peer to use as a time source because of discovery error. NtpClient will try again in 3473457 minutes and DOUBLE THE REATTEMPT INTERVAL thereafter. 1:26:05 - A network link has been established at 1Gbps at full duplex 1:26:08 - Name resolution for the name %Name% timed out after none of the configured DNS servers responded. 1:26:10 – The TCP/IP NetBIOS Helper service entered the running state. 1:26:11 - The time provider NtpClient is currently receiving valid time data at dc4.ourdomain.edu 1:26:14 – User Logon Notification for Customer Experience Improvement Program 1:26:15 - Group Policy received the notification Logon from Winlogon for session 1. 1:26:15 - Making LDAP calls to connect and bind to Active Directory. dc4.ourdomain.edu 1:26:18 - The LDAP call to connect and bind to Active Directory completed. dc4. ourdomain.edu. The call completed in 2309 milliseconds. 1:26:18 - Group Policy successfully discovered the Domain Controller in 2918 milliseconds. 1:26:18 - Computer details: Computer role : 2 Network name : (Blank) 1:26:18 - The LDAP call to connect and bind to Active Directory completed. dc4.ourdomain.edu. The call completed in 2309 milliseconds. 1:26:18 - Group Policy successfully discovered the Domain Controller in 2918 milliseconds. 1:26:19 - The WinHTTP Web Proxy Auto-Discovery Service service entered the running state. 1:26:46 - The Network Connections service entered the running state. 1:27:10 – Retrieved account information 1:27:10 – The system call to get account information completed. 1:27:10 - Starting policy processing due to network state change for computer OURDOMAIN\%name%$ 1:27:10 – Network state change detected 1:27:10 - Making system call to get account information. 1:27:11 - Making LDAP calls to connect and bind to Active Directory. dc4.ourdomain.edu 1:27:13 - Computer details: Computer role : 2 Network name : ourdomain.edu (Now not blank) 1:27:13 - Group Policy successfully discovered the Domain Controller in 2886 milliseconds. 1:27:13 - The LDAP call to connect and bind to Active Directory completed. dc4.ourdomain.edu The call completed in 2371 milliseconds. 1:27:15 - Estimated network bandwidth on one of the connections: 0 kbps. 1:27:15 - Estimated network bandwidth on one of the connections: 8545 kbps. 1:27:15 - A fast link was detected. The Estimated bandwidth is 8545 kbps. The slow link threshold is 500 kbps. 1:27:17 – Powershell - Engine state is changed from Available to Stopped. 1:27:20 - Completed Group Policy Local Users and Groups Extension Processing in 4539 milliseconds. 1:27:25 - Completed Group Policy Scheduled Tasks Extension Processing in 5210 milliseconds. 1:27:27 - Completed Group Policy Registry Extension Processing in 1529 milliseconds. 1:27:27 - Completed policy processing due to network state change for computer OURDOMAIN\%name%$ in 16 seconds. 1:27:27 – The Group Policy settings for the computer were processed successfully. There were no changes detected since the last successful processing of Group Policy. Any help would be appreciated. Please ask for any relevant information and it will be provided as soon as possible.

    Read the article

  • Windows 7 extremely slow login, exchange performance, printer enumeration, etc...

    - by Jeff
    Background: I have a fresh copy of Windows 7 Professional x64 on a Dell Latitude E6500. The laptop has 8GB RAM, 250GB drive, and all Intel peripherals (net/wifi/graphics). All available Windows updates, as well as hardware drivers are installed. The IT folks where I work joined the computer to our Windows 2003-based Active Directory domain. There are no errors in any logs that we've looked at, and Group Policy templates appear to have applied properly. Problem: Every time I turn on or reboot the computer, it takes between 2 to 10 (all times are actual) minutes after successfully typing my username/password to get to my desktop. My login script does not always run. Sometimes I get a black screen, and a couple of minutes later the login script will pop up and take up to 10 minutes to complete. I can get around this by hitting cntrl-shift-esc and running explorer.exe from the Task Manager. The login script continues to hang, but I can minimize it and go on about my business. Either way, it generally throws errors prior to completing. I often get slow or failed connectivity to Exchange via Outlook. When I bring up printer dialogs, they take several minutes to populate, and block the calling app while doing so. Copies to SMB shares are very slow. On my home network, everything works fine. On both the work network and home network, I can use remote internet resources just fine. Web pages pull up, remote VPN's are fine, I can max out bandwidth on SpeakEasy Speed Test. I can get almost max bandwidth transferring FTP/HTTP over a LAN. Another symptom of the problem is that when I first log in, the work network shows as "Identifying" for a long time in the Network and Sharing Center, and will often then change to the name of the work domain, but say "Unauthenticated Network". Note that this computer previously ran Windows Vista with none of these problems. Attempts to Fix: Installed the Win7 admin pack Uninstalled/reinstalled all hardware drivers Verified Active Directory DNS settings (Vista works relatively well on the same network) Reset all TCP/IP settings on all adapters using the netsh commands to do so Disabled ipv6 on all adapters Disable wifi adapter while on work network Locked the network card to 100/Full, 1000/Full; also tried Auto Added various important addresses to hosts file (exchange, dns, ad) -- removed when didn't help My background is a jpeg (sounds unrelated but there is apparently a win7 login bug related to solid color background) More I have forgotten The IT staff at my company indicated they believe this is due to having Windows 2003 AD servers and not having any Windows 2008 R2 AD servers. Other than that, they have no advice or assistance to offer other than a rebuild (already tried that once with similar symptoms), or downgrade to Vista. Any thoughts out there?

    Read the article

  • Windows 7: Icons disappear

    - by NoCanDo
    I have a strange problem. My icons seem to disappear randomly on my Windows 7. They were there a while back, now they ain't. It happens often and quite randomly. Anyone know a fix? PS: Please stop editing out the thumbnail! I don't want to leech bandwidth from my favorite picture hoster!

    Read the article

  • GUI FTP and File Management for Linux VPS

    - by Cyrcle
    I'm interested in how I could remotely control FTP and file management on my Linux VPS with a GUI. I frequently transfer sites to my VPS for testing, and I'd much rather do it directly on the high bandwidth connection instead of my 10down, 2up Comcrap cable.

    Read the article

  • Disk performance below expectations

    - by paulH
    this is a follow-up to a previous question that I asked (Two servers with inconsistent disk speed). I have a PowerEdge R510 server with a PERC H700 integrated RAID controller (call this Server B) that was built using eight disks with 3Gb/s bandwidth, which I was comparing with an almost identical server (call this Server A) that was built using four disks with 6Gb/s bandwidth. Server A had much better I/O rates than Server B. Once I discovered the difference with the disks, I had Server A rebuilt with faster 6Gbps disks. Unfortunately this resulted in no increase in the performance of the disks. Expecting that there must be some other configuration difference between the servers, we took the 6Gbps disks out of Server A and put them in Server B. This also resulted in no increase in the performance of the disks. We now have two identical servers built, with the exception that one is built with six 6Gbps disks and the other with eight 3Gbps disks, and the I/O rates of the disks are pretty much identical. This suggests that there is some bottleneck other than the disks, but I cannot understand how Server B originally had better I/O that has subsequently been 'lost'. Comparative I/O information is below, as measured by SQLIO. The same parameters were used for each test. It's not the actual numbers that are significant but rather the variations between systems. In each case D: is a 2-disk RAID 1 volume, and E: is a 4-disk RAID 10 volume (apart from the original Server A, where E: was a 2-disk RAID 0 volume).
    Server A (original setup, 6Gbps disks): D: read 63 MB/s, write 170 MB/s; E: read 68 MB/s, write 320 MB/s
    Server B (original setup, 3Gbps disks): D: read 52 MB/s, write 88 MB/s; E: read 112 MB/s, write 130 MB/s
    Server A (new setup, 3Gbps disks): D: read 55 MB/s, write 85 MB/s; E: read 67 MB/s, write 180 MB/s
    Server B (new setup, 6Gbps disks): D: read 61 MB/s, write 95 MB/s; E: read 69 MB/s, write 180 MB/s
    Can anybody suggest any ideas as to what is going on here? The drives in use are as follows:
    Dell Seagate F617N ST3300657SS 300GB 15K RPM SAS
    Dell Hitachi HUS156030VLS600 300GB 3.5 inch 15000rpm 6GB SAS
    Hitachi Hus153030vls300 300GB Server SAS
    Dell ST3146855SS Seagate 3.5 inch 146GB 15K SAS

    Read the article

  • DI-524UP will become deadly slow when downloading torrents

    - by abolotnov
    I have a DI-524UP router that shares internet with two notebooks at home via wireless and with a desktop via a wired connection. It becomes barely available/pingable via wireless when I download torrents on the desktop computer. Even when I limit the download speed to 20% or less of the available bandwidth the problem remains, so it must be something other than the speed that causes the issue. I've tried limiting the number of active connections and the like, but that still doesn't cure it. Is there anything I can do about this?

    Read the article

  • Use personal Amazon S3 account to backup Gmail using Ubuntu

    - by lokheart
    As the question says: I have found online that Backupify offers shifting from their S3 to the user's personal S3. I have a Backupify account but I can't find this option; besides, I'd prefer not to have my email processed by somebody else. Is it possible to use my own personal Amazon S3 account to back up Gmail? Preferably as an incremental backup, so I don't have to use too much bandwidth re-uploading redundant data to S3. I am using Ubuntu, so a script is OK for me. Thanks!

    Read the article

  • Daisy-chaining network switches with multiple cables

    - by user72153
    This might just be the dumbest question you'll ever read, but I digress. Say I have two 100Mbps Ethernet switches with 2 computers on each, connected together by a single cable. This way, the two PCs on each switch share the 100Mbps bandwidth with the others. If I added another cable between the 2 switches, would there be 200Mbps throughput available between the switches? Or am I completely off my rocker? Thanks for the help.

    Read the article

  • Need a good colocation host

    - by storm
    I'm looking for a good colocation host in Los Angeles. I've looked at quite a few hosts, but found precious few reviews. Has anyone had particularly good or bad experiences that can point me in a good direction? We will be starting out with 2U's, and I'm leaning towards burstable bandwidth rather than 95th-percentile billing. Thanks for the ideas.

    Read the article

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this; I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and onto port 7000 on x.x.x.218. Here is some ASCII art:
    |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>-----|Squid on x.x.x.218|-----|www|
             3128                      6999                                  7000                      80
    When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth), because it's a satellite link. Here's the ssh tunnel I've been using: ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel. Am I going about this the wrong way? Should I create the ssh tunnel on x.x.x.218 instead? I use iptables to stop Squid on x.x.x.224 from going out over port 80 and to feed it through port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance.
    Update: Regarding Eduardo Ivanec's question, here is a tcpdump -i any port 7000 -nn dump from x.x.x.218:
    14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0
    14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0
    Update 2: When I run the second command, I get the following error in my browser: ERROR The requested URL could not be retrieved The following error was encountered while trying to retrieve the URL: http://109.123.109.205/index.php Zero Sized Reply Squid did not receive any data for this request. Your cache administrator is webmaster. Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9) remote-site is 172.16.1.224. When I do a tcpdump -i any port 7000 -nn I get the following:
    root@remote-site:~# tcpdump -i any port 7000 -nn
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused

    Read the article

  • ColdFusion Server crash after thousands of HTTP requests

    - by Jason Bristol
    We are running ColdFusion 8 on a Windows Server 2003 VPS, with an API that exposes student records to a partner API through a connector. Our API returns around 50k student records serialized as XML pretty seamlessly. My question stems from something very frightening that happened today when we tested our connector against our partner's API: our entire website and web host went down. We assumed our host was just having some issues, but after 4 hours with no resolution and no response from their customer service, we finally got a reply claiming they had an "unauthorized user" in their network. Once our server was back up, we were unable to connect to our website, as if the web service or ColdFusion itself had frozen. This is where my concern comes from, as I fear we may have overloaded the web service. As mentioned, we tried sending over 50k HTTP POST requests to our partner's API, but everything stopped after around 1.6k requests. Is this bad practice, or is there some sort of rate limiting I can relax somewhere in the server configuration? We managed to find a workaround, but it bypasses our connector, which is critical to our design. This would have been a one-time deal: the purpose of so many requests was to populate our partner's website with current data; after that, hourly syncs will keep requests down to around 100 per hour.

    UPDATE: Our partner API is owned and operated by Pardot. We are converting students to prospects by passing student data to their API, which unfortunately only seems to accept one student at a time; for that reason we have to make all 50k requests individually. Our server has 4GB of RAM and an Intel Core 2 Duo @ 2.8GHz, running Windows Server 2003 SP2. I monitored the server during a 100-student sync, a 400-student sync, and a 1.4k-student sync, with the following results:

        100 students  - 2.25GB of memory, 30-40% CPU utilization, 0.2-0.3% network bandwidth
        400 students  - 2.30GB of memory, 30-50% CPU utilization, 0.2-1.0% network bandwidth
        1.4k students - 2.30GB of memory, 30-70% CPU utilization, 0.2-1.0% network bandwidth

    I know this is a far cry from 50k students, but I don't want to risk taking down our CMS system again, assuming that was the cause.
    To give you a look at our code:

        <!--- Push each student returned by our API into Pardot, one HTTP POST per record --->
        <cfif (#getStudents.statusCode# eq "200 OK")>
            <cftry>
                <cfloop index="StudentXML" array="#XmlSearch(responseSTUD,'/students/student')#">
                    <cfset StudentXML = XmlParse(StudentXML)>
                    <cfhttp url="#PARDOT_CMS_UPSERT#" method="post" timeout="10000">
                        <cfhttpparam type="url" name="user_key" value="#PARDOT_CMS_USERKEY#">
                        <cfhttpparam type="url" name="api_key" value="#api_key#">
                        <cfhttpparam type="url" name="email" value="#StudentXML.student.email.XmlText#">
                        <cfhttpparam type="url" name="first_name" value="#StudentXML.student.first.XmlText#">
                        <cfhttpparam type="url" name="last_name" value="#StudentXML.student.last.XmlText#">
                        <cfhttpparam type="url" name="in_cms" value="#StudentXML.student.studentid.XmlText#">
                        <cfhttpparam type="url" name="company" value="#StudentXML.student.agencyname.XmlText#">
                        <cfhttpparam type="url" name="country" value="#StudentXML.student.countryname.XmlText#">
                        <cfhttpparam type="url" name="address_one" value="#StudentXML.student.address.XmlText#">
                        <cfhttpparam type="url" name="address_two" value="#StudentXML.student.address2.XmlText#">
                        <cfhttpparam type="url" name="city" value="#StudentXML.student.city.XmlText#">
                        <cfhttpparam type="url" name="state" value="#StudentXML.student.state_province.XmlText#">
                        <cfhttpparam type="url" name="zip" value="#StudentXML.student.postalcode.XmlText#">
                        <cfhttpparam type="url" name="phone" value="#StudentXML.student.phone.XmlText#">
                        <cfhttpparam type="url" name="fax" value="#StudentXML.student.fax.XmlText#">
                        <cfhttpparam type="url" name="output" value="simple">
                    </cfhttp>
                </cfloop>
                <cfcatch type="any">
                    <!--- Dump the error message if anything in the loop fails --->
                    <cfdump var="#cfcatch.Message#">
                </cfcatch>
            </cftry>
        </cfif>

    UPDATE 2: I checked the CF logs and found a couple of these:

        "Error","jrpp-8","06/06/13","16:10:18","CMS-API","Java heap space The specific sequence of files included or processed is: D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm, line: 675 "
        java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOf(Arrays.java:2882)
            at java.io.CharArrayWriter.write(CharArrayWriter.java:105)
            at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:37)
            at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:50)
            at coldfusion.runtime.NeoBodyContent.write(NeoBodyContent.java:254)
            at cfapi2ecfm292155732._factor30(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:675)
            at cfapi2ecfm292155732._factor31(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:662)
            at cfapi2ecfm292155732._factor36(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:659)
            at cfapi2ecfm292155732._factor42(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:657)
            at cfapi2ecfm292155732._factor37(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor44(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:456)
            at cfapi2ecfm292155732._factor38(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor46(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:455)
            at cfapi2ecfm292155732._factor39(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor47(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:453)
            at cfapi2ecfm292155732.runPage(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:1)
            at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:192)
            at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:366)
            at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
            at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:279)
            at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48)
            at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
            at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
            at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70)
            at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
            at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
            at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46)
            at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
            at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
            at coldfusion.CfmServlet.service(CfmServlet.java:175)
            at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
            at jrun.servlet.FilterChain.doFilter(FilterChain.java:86)

    It looks like I might have crashed the JVM inside ColdFusion. Is there a better way to do this? We are thinking of just exporting all records initially as a CSV file and importing it into Pardot, seeing as we will never have to do a request this large again.
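    One generic way to keep a long cfhttp loop like the one above from exhausting the heap is to throttle the requests and stop the page's output buffer from growing (the heap trace points at the output buffer via NeoBodyContent): suppress output, flush periodically, and pause between batches. The sketch below is a rough illustration, not a tested fix; the variable names mirror the original loop, and the batch size, sleep interval, and timeouts are arbitrary assumptions.

        <!--- Sketch: throttled version of the sync loop above (batch size and pause are assumptions) --->
        <cfsetting enablecfoutputonly="true" requesttimeout="3600">
        <cfset counter = 0>
        <cfloop index="StudentXML" array="#XmlSearch(responseSTUD,'/students/student')#">
            <cfset StudentXML = XmlParse(StudentXML)>
            <cfhttp url="#PARDOT_CMS_UPSERT#" method="post" timeout="60">
                <!--- same cfhttpparam tags as in the original loop --->
            </cfhttp>
            <cfset counter = counter + 1>
            <cfif counter MOD 100 EQ 0>
                <!--- every 100 requests: flush any buffered output and pause half a second --->
                <cfflush>
                <cfset createObject("java", "java.lang.Thread").sleep(500)>
            </cfif>
        </cfloop>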

    Read the article
