Search Results

Search found 762 results on 31 pages for 'peer allan'.

Page 23 of 31

  • Why shouldn't I always use nullable types in C#?

    - by Matthew Vines
    I've been searching for some good guidance on this since the concept was introduced in .NET 2.0. Why would I ever want to use non-nullable data types in C#? (A better question is why wouldn't I choose nullable types by default, and only use non-nullable types when that explicitly makes sense.) Is there a 'significant' performance hit to choosing a nullable data type over its non-nullable peer? I much prefer to check my values against null instead of Guid.Empty, string.Empty, DateTime.MinValue, <= 0, etc., and to work with nullable types in general. And the only reason I don't choose nullable types more often is the itchy feeling in the back of my head that makes me feel like it's more than backwards compatibility that forces that extra '?' character to explicitly allow a null value. Is there anybody out there that always (or almost always) chooses nullable types rather than non-nullable types? Thanks for your time,
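
    A minimal C# sketch of the trade-off being described (names and values are hypothetical): a non-nullable Guid has to encode "missing" as a sentinel such as Guid.Empty, while Guid? makes absence explicit at the cost of the Nullable<T> wrapper (the value plus a HasValue flag).

    using System;

    class NullableSketch
    {
        static void Main()
        {
            // Non-nullable: "missing" has to be encoded as a sentinel value.
            Guid customerId = Guid.Empty;
            if (customerId == Guid.Empty)
                Console.WriteLine("customerId not set (sentinel check)");

            // Nullable: absence is explicit; Nullable<Guid> stores the value plus a HasValue flag.
            Guid? maybeId = null;
            if (maybeId == null)
                Console.WriteLine("maybeId not set (null check)");
            else
                Console.WriteLine("maybeId = " + maybeId.Value);
        }
    }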

    Read the article

  • Trouble setting SSL certificates for virtual hosts using Apache/Phusion Passenger on localhost

    - by user502052
    I am using Ruby on Rails 3 and I would like to get HTTPS connections working on localhost. I am using Apache v2 + Phusion Passenger on Mac OS X Snow Leopard v10.6.6. My Ruby on Rails installation uses the Typhoeus gem (it is also possible to use the Ruby net/http library, but the result doesn't change) to make HTTP requests over HTTPS. I created a self-signed ca.key, pjtname.crt and pjtname.key as detailed on the Apple website. Notice: following the instructions from the Apple website, when running the openssl req -new -key server.key -out server.csr command (see the link), at the prompt Common Name (eg, YOUR name) []: (this is the important one) I entered *pjtname.com so that it is valid for all subdomains of that site. In my Apache httpd.conf I have two virtual hosts configured in this way: # Secure (SSL/TLS) connections #Include /private/etc/apache2/extra/httpd-ssl.conf # # Note: The following must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> Include /private/etc/apache2/other/*.conf # Passenger configuration LoadModule passenger_module /Users/<my_user_name>/.rvm/gems/ruby-1.9.2-p136/gems/passenger-3.0.2/ext/apache2/mod_passenger.so PassengerRoot /Users/<my_user_name>/.rvm/gems/ruby-1.9.2-p136/gems/passenger-3.0.2 PassengerRuby /Users/<my_user_name>/.rvm/wrappers/ruby-1.9.2-p136/ruby # Go ahead and accept connections for these vhosts # from non-SNI clients SSLStrictSNIVHostCheck off # Ensure that Apache listens on port 443 Listen 443 # Listen for virtual host requests on all IP addresses NameVirtualHost *:80 NameVirtualHost *:443 # # PJTNAME.COM and subdomains SETTING # <VirtualHost *:443> # Because this virtual host is defined first, it will # be used as the default if the hostname is not received # in the SSL handshake, e.g. if the browser doesn't support # SNI. 
ServerName pjtname.com:443 DocumentRoot "/Users/<my_user_name>/Sites/pjtname.com/pjtname.com/public" ServerAdmin [email protected] ErrorLog "/private/var/log/apache2/error_log" TransferLog "/private/var/log/apache2/access_log" RackEnv development <Directory "/Users/<my_user_name>/Sites/pjtname.com/pjtname.com/public"> Order allow,deny Allow from all </Directory> # SSL Configuration SSLEngine on # Self Signed certificates # Server Certificate SSLCertificateFile /private/etc/apache2/ssl/wildcard.certificate/pjtname.crt # Server Private Key SSLCertificateKeyFile /private/etc/apache2/ssl/wildcard.certificate/pjtname.key # Server Intermediate Bundle SSLCertificateChainFile /private/etc/apache2/ssl/wildcard.certificate/ca.crt </VirtualHost> # HTTP Setting <VirtualHost *:80> ServerName pjtname.com DocumentRoot "/Users/<my_user_name>/Sites/pjtname.com/pjtname.com/public" RackEnv development <Directory "/Users/<my_user_name>/Sites/pjtname.com/pjtname.com/public"> Order allow,deny Allow from all </Directory> </VirtualHost> <VirtualHost *:443> ServerName users.pjtname.com:443 DocumentRoot "/Users/<my_user_name>/Sites/pjtname.com/users.pjtname.com/public" ServerAdmin [email protected] ErrorLog "/private/var/log/apache2/error_log" TransferLog "/private/var/log/apache2/access_log" RackEnv development <Directory "/Users/<my_user_name>/Sites/pjtname.com/users.pjtname.com/public"> Order allow,deny Allow from all </Directory> # SSL Configuration SSLEngine on # Self Signed certificates # Server Certificate SSLCertificateFile /private/etc/apache2/ssl/wildcard.certificate/pjtname.crt # Server Private Key SSLCertificateKeyFile /private/etc/apache2/ssl/wildcard.certificate/pjtname.key # Server Intermediate Bundle SSLCertificateChainFile /private/etc/apache2/ssl/wildcard.certificate/ca.crt </VirtualHost> # HTTP Setting <VirtualHost *:80> ServerName users.pjtname.com DocumentRoot "/Users/<my_user_name>/Sites/pjtname.com/users.pjtname.com/public" RackEnv development <Directory "/Users/<my_user_name>/Sites/pjtname.com/users.pjtname.com/public"> Order allow,deny Allow from all </Directory> </VirtualHost> In the host file I have: ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. 
    ## 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost # PJTNAME.COM SETTING 127.0.0.1 pjtname.com 127.0.0.1 users.pjtname.com All seems to work properly because I have already set everything (I think correctly): I generated a wildcard certificate for my domains and sub-domains (in this example: *.pjtname.com); I have set name-based virtual hosts in the httpd.conf file listening on ports :443 and :80; my browser accepts the certificates even though it alerts me that they aren't safe (notice: I must accept certificates for each domain/sub-domain; that is, only the first time I access a domain or sub-domain over HTTPS do I have to go through the acceptance procedure) and I can access pages using HTTPS. After all this work, when I make a request using Typhoeus (I can also use the Ruby Net::HTTP library and the result doesn't change) from the pjtname.com RoR application: # Typhoeus request Typhoeus::Request.get("https://users.pjtname.com/") I get something like a warning about the certificate: --- &id001 !ruby/object:Typhoeus::Response app_connect_time: 0.0 body: "" code: 0 connect_time: 0.000625 # Here is the warning curl_error_message: Peer certificate cannot be authenticated with known CA certificates curl_return_code: 60 effective_url: https://users.pjtname.com/ headers: "" http_version: mock: false name_lookup_time: 0.000513 pretransfer_time: 0.0 request: !ruby/object:Typhoeus::Request after_complete: auth_method: body: ... All this means that something is wrong. So, what do I have to do to avoid the "Peer certificate cannot be authenticated with known CA certificates" warning and make the HTTPS request work? Where is/are the error(s) (I think in the Apache configuration, but where?!)? P.S.: if you need some more info, let me know.
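
    curl return code 60 means libcurl could not verify the server certificate against any CA it knows about, i.e. the client was never told to trust the self-signed CA. A hedged sketch of one way to do that from the Rails side, using the stdlib Net::HTTP the question mentions (the ca.crt path is the one from the Apache config and may differ):

    require 'net/http'
    require 'net/https'
    require 'uri'

    uri = URI.parse('https://users.pjtname.com/')
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true
    # Trust the self-signed CA explicitly instead of the system bundle.
    http.ca_file = '/private/etc/apache2/ssl/wildcard.certificate/ca.crt'
    http.verify_mode = OpenSSL::SSL::VERIFY_PEER
    response = http.get('/')
    puts response.code

    Typhoeus wraps libcurl, so the equivalent there is pointing libcurl at the same CA file (or, strictly for local testing, disabling peer verification).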

    Read the article

  • Linux HA / cluster: what are the differences between Pacemaker, Heartbeat, Corosync, wackamole?

    - by Continuation
    Can you help me understand Linux HA? Pacemaker, Heartbeat, Corosync seem to be part of a whole HA stack, but how do they fit together? How does wackamole differ from Pacemaker/Heartbeat/Corosync? I've seen opinions that wackamole is better than Heartbeat because it's peer-based. Is that valid? The last release of wackamole was 2.5 years ago. Is it still being maintained or active? What would you recommend for HA setup for web/application/database servers?

    Read the article

  • JMeter exception: response code 000, response message 'Read timed out', for a Java web service?

    - by vipin k.
    I am testing a Java web service (JAX-WS), but whenever I run the test I get response code 000 and response message 'Read timed out'. On the Tomcat server side I get the exception:

    SEVERE: caught throwable ClientAbortException: java.net.SocketException: Connection reset by peer: socket write error
        at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:358)
        at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:354)
        at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:381)
        at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:370)
        at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:89)

    I found out that the 'Read timed out' exception may occur because of a large SOAP response, but I am clueless because I can access the same web service from an application.

    Read the article

  • Errno socket error in Python

    - by Emma
    I wrote this code:

    import random
    import sys
    import urllib

    openfile = open(sys.argv[1]).readlines()
    c = random.choice(openfile)
    i = 0
    while i < 5:
        i = i + 1
        c = random.choice(openfile)
        proxies = {'http': c}
        opener = urllib.FancyURLopener(proxies).open("http://whatismyip.com.au/").read()

    I put three proxies in a txt file:

    http://211.161.159.74:8080
    http://119.70.40.101:8080
    http://124.42.10.119:8080

    but when I execute it I get this error:

    IOError: [Errno socket error] (10054, 'Connection reset by peer')

    What am I supposed to do? Please help me.
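
    Error 10054 just means the remote end (the proxy) dropped the connection; with free proxy lists that is common, so the loop needs to tolerate it. A hedged sketch along the same lines (Python 2-era urllib, as in the question), which also strips the newline that readlines() leaves on each proxy entry:

    import random
    import sys
    import urllib

    proxies_list = [line.strip() for line in open(sys.argv[1]) if line.strip()]

    for _ in range(5):
        proxy = random.choice(proxies_list)
        try:
            opener = urllib.FancyURLopener({'http': proxy})
            page = opener.open("http://whatismyip.com.au/").read()
            print proxy, "->", len(page), "bytes"
        except IOError as e:
            # 10054 / 'Connection reset by peer' usually means this proxy is dead.
            print proxy, "failed:", e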

    Read the article

  • Can not clone git repo to server

    - by Classified
    I'm running the same command on 2 different servers. One works, the other doesn't. I'm running git clone https://blah.com:8443/blah.git On server A, it works fine. I get the objects, files, etc. no problems. On server B, I get the following message. git clone https://blah.com:8443/blah.git Cloning into 'blah'... error: Peer certificate cannot be authenticated with known CA certificates while accessing https://blah.com:8443/blah.git/info/refs?service=git-upload-pack fatal: HTTP request failed Does anyone know what this means or what I need to do to get this to work? Thanks in advance for any help you can give me.
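
    The message comes from libcurl: it cannot verify the server's certificate against the CA bundle git is using, so server B is most likely missing the CA that signed blah.com's certificate. A hedged sketch of the usual fixes (paths are placeholders):

    # Tell git which CA certificate (or bundle) to trust - the preferred fix.
    git config --global http.sslCAInfo /path/to/ca-bundle.crt

    # Or, strictly as a temporary diagnostic, skip verification for one command.
    GIT_SSL_NO_VERIFY=true git clone https://blah.com:8443/blah.git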

    Read the article

  • How to implement a bidirectional "mailbox service" over tcp?

    - by igorgatis
    The idea is to allow two peer processes to exchange messages (packets) over TCP as asynchronously as possible. The way I'd like it to work is for each process to have an outbox and an inbox. The send operation is just a push onto the outbox. The receive operation is just a pop from the inbox. The underlying protocol would take care of the communication details. Is there a way to implement such a mechanism using a single TCP connection? How would that be implemented using BSD sockets and modern OO socket APIs (like the Java or C# socket API)?
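
    A hedged sketch of one way to do this over a single TCP connection with plain Java sockets (the 4-byte length-prefix framing is an assumption, not part of the question): a writer thread drains the outbox queue, a reader thread fills the inbox queue, and send/receive are just queue operations.

    import java.io.*;
    import java.net.Socket;
    import java.util.concurrent.*;

    public class Mailbox implements Closeable {
        private final BlockingQueue<byte[]> outbox = new LinkedBlockingQueue<>();
        private final BlockingQueue<byte[]> inbox  = new LinkedBlockingQueue<>();
        private final Socket socket;

        public Mailbox(Socket socket) throws IOException {
            this.socket = socket;
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            DataInputStream  in  = new DataInputStream(socket.getInputStream());

            Thread writer = new Thread(() -> {
                try {
                    while (true) {
                        byte[] msg = outbox.take();   // block until there is something to send
                        out.writeInt(msg.length);     // length prefix
                        out.write(msg);
                        out.flush();
                    }
                } catch (Exception e) { /* connection closed */ }
            });

            Thread reader = new Thread(() -> {
                try {
                    while (true) {
                        int len = in.readInt();       // read length prefix
                        byte[] msg = new byte[len];
                        in.readFully(msg);
                        inbox.put(msg);
                    }
                } catch (Exception e) { /* connection closed */ }
            });
            writer.setDaemon(true);
            reader.setDaemon(true);
            writer.start();
            reader.start();
        }

        public void send(byte[] msg) { outbox.add(msg); }                             // push on outbox
        public byte[] receive() throws InterruptedException { return inbox.take(); }  // pop from inbox
        @Override public void close() throws IOException { socket.close(); }
    }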

    Read the article

  • Machine Learning Algorithm for Parallel Nodes

    - by FreshCode
    I want to apply machine learning to a classification problem in a parallel environment. Several independent nodes, each with multiple on/off sensors, can communicate their sensor data with the goal of classifying an event defined by a heuristic, training data or both. Each peer will be measuring the same data from their unique perspective and will attempt to classify the result while taking into account that any neighbouring node (or its sensors or just the connection to the node) could be faulty. Nodes should function as equal peers and determine the most likely classification by communicating their results. Ultimately each node should make a decision based on their own sensor data and their peers' data. If it matters, false positives are OK (albeit undesirable) but false negatives are totally unacceptable. Given that each final classification will receive good or bad feedback, what would be an appropriate machine learning algorithm to approach this problem with if the nodes could communicate with each other to determine the most likely classification?

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble

    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations

    Why Columnstore?

    As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc., SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document that columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer

    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries

    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX

    Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries

    Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details

    Classic vs. columnstore before-&-after metrics are impressive.

    Scenario        Conventional Structures            Columnstore     Improvement
    SSRS via SSAS   10 - 12 seconds                    1 second        >10x
    Ad Hoc          5 - 7 minutes (300 - 420 seconds)  1 - 2 seconds   >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.
    The Wins

    Performance: Even prior to columnstore implementation, at 10 - 12 seconds canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

    Summary

    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, as well as reducing the time to results for ad hoc queries from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
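
    For reference, a hedged sketch of the kind of index behind these numbers (table and column names are hypothetical, not DevCon's schema). In SQL Server 2012 a nonclustered columnstore index makes the table read-only, which fits the nightly partition-switch refresh described above:

    -- Nonclustered columnstore index over the columns ad hoc queries tend to hit.
    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSecurityEvent
    ON dbo.FactSecurityEvent
    (
        EventDate,
        CustomerId,
        SensorId,
        EventType,
        DurationSeconds
        -- ...and the remaining reportable columns of the wide fact table
    );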

    Read the article

  • Errors when connecting to HTTPS using Net::HTTP routines (Ruby on Rails)

    - by jaycode
    Hi all, the code below explains the problem in detail:

    # this returns error Net::HTTPBadResponse
    url = URI.parse('https://sitename.com')
    response = Net::HTTP.start(url.host, url.port) {|http| http.get('/remote/register_device') }

    # this works
    url = URI.parse('http://sitename.com')
    response = Net::HTTP.start(url.host, url.port) {|http| http.get('/remote/register_device') }

    # this returns error Net::HTTPBadResponse
    response = Net::HTTP.post_form(URI.parse('https://sitename.com/remote/register_device'), {:text => 'hello world'})

    # this returns error Errno::ECONNRESET (Connection reset by peer)
    response = Net::HTTP.post_form(URI.parse('https://sandbox.itunes.apple.com/verifyReceipt'), {:text => 'hello world'})

    # this works
    response = Net::HTTP.post_form(URI.parse('http://sitename.com/remote/register_device'), {:text => 'hello world'})

    So... how do I send POST parameters to https://sitename.com or https://sandbox.itunes.apple.com/verifyReceipt in this example? For further information, I am trying to get this working in Rails: http://developer.apple.com/iphone/library/documentation/NetworkingInternet/Conceptual/StoreKitGuide/VerifyingStoreReceipts/VerifyingStoreReceipts.html#//apple_ref/doc/uid/TP40008267-CH104-SW1
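
    The pattern that usually explains exactly this mix of results is that Net::HTTP only speaks TLS when use_ssl is enabled; without it the client sends plaintext to port 443, which tends to surface as HTTPBadResponse or ECONNRESET. A hedged sketch (note that verifyReceipt actually expects a JSON body; the form data here just mirrors the question):

    require 'net/http'
    require 'net/https'
    require 'uri'

    uri = URI.parse('https://sandbox.itunes.apple.com/verifyReceipt')
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true                        # the usually missing piece
    http.verify_mode = OpenSSL::SSL::VERIFY_PEER

    request = Net::HTTP::Post.new(uri.path)
    request.set_form_data('text' => 'hello world')
    response = http.request(request)
    puts response.code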

    Read the article

  • Implement a VPN

    - by jackson
    I want to build a client (client.exe) - server application to do the following: when the clients run it, they are placed in a VPN and can communicate with each other within one application. For example: clients run client.exe and they can see each other in StarCraft's LAN-only mode. From what I have read, the right type of VPN for this situation is the Secure Socket Tunneling Protocol: "Secure socket tunneling protocol, also referred to as SSTP, is by definition an application-layer protocol. It is designed to employ a synchronous communication in a back and forth motion between two programs. It allows many application endpoints over one network connection, between peer nodes, thereby enabling efficient usage of the communication resources that are available to that network." Question: I don't have experience with network programming, so my question for those who do is: is this the right approach? PS1: I don't want something ready-made like OpenVPN; I am doing this as a learning exercise. PS2: the application targets Windows and I plan to use .NET. Thanks for reading the whole story; I am waiting for your replies.

    Read the article

  • Windows in StreamInsight: Hopping vs. Snapshot

    - by Roman Schindlauer
    Three weeks ago, we explained the basic concept of windows in StreamInsight: defining sets of events that serve as arguments for set-based operations, like aggregations. Today, we want to discuss the so-called Hopping Windows and compare them with Snapshot Windows. We will compare these two because they can serve similar purposes with different behaviors; we will discuss the remaining window type, Count Windows, another time.

    Hopping (and its syntactic-sugar-sister Tumbling) windows are probably the most straightforward windowing concept in StreamInsight. A hopping window is defined by its length, and the offset from one window to the next. They are aligned with some absolute point on the timeline (which can also be given as a parameter to the window) and create sets of events. The diagram below shows an example of a hopping window with a length of 1h and a hop size (the offset) of 15 minutes, hence creating overlapping windows.

    Two aspects in this diagram are important: Since this window is overlapping, an event can fall into more than one window. If an (interval) event spans a window boundary, its lifetime will be clipped to the window before it is passed to the set-based operation. That's the default and currently only available window input policy. (This should only concern you if you are using a time-sensitive user-defined aggregate or operator.) The set-based operation will be applied to each of these sets, yielding a result. This result is: a single scalar value in the case of built-in or user-defined aggregates; a subset of the input payloads, in the case of the TopK operator; arbitrary events, when using a user-defined operator. The timestamps of the result are almost always the ones of the windows. Only the user-defined operator can create new events with timestamps. (However, even these event lifetimes are subject to the window's output policy, which is currently always to clip to the window end.)

    Let's assume we were calculating the average over some payload field:

    var result = from window in source.HoppingWindow(
                     TimeSpan.FromHours(1),
                     TimeSpan.FromMinutes(15),
                     HoppingWindowOutputPolicy.ClipToWindowEnd)
                 select new { avg = window.Avg(e => e.Value) };

    Now each window is reflected by one result event. As you can see, the window definition defines the output frequency. No matter how many or few events we got from the input, this hopping window will produce one result every 15 minutes – except for those windows that do not contain any events at all, because StreamInsight window operations are empty-preserving (more about that another time). The "forced" output for every window can become a performance issue if you have a real-time query with many events in a wide group & apply – let me explain: imagine you have a lot of events that you group by and then aggregate within each group – a classical streaming pattern. The hopping window produces a result in each group at exactly the same point in time for all groups, since the window boundaries are aligned with the timeline, not with the event timestamps. This means that the query output will become very bursty, delivering the results of all the groups at the same point in time. This becomes especially obvious if the events are long-lasting, spanning multiple windows each, so that the produced result events do not change their value very often. In such a case, a snapshot window can be the remedy. Snapshot windows are more difficult to explain than hopping windows: they represent those periods in time when no event changes occur.
    In other words, if you mark all event start and end times on your timeline, then you are looking at all snapshot window boundaries. If your events are never overlapping, the snapshot window will not make much sense. It is commonly used together with timestamp modification, which makes it a very powerful tool. Or as Allan Mitchell expressed in a recent tweet: "I used to look at SnapshotWindow() with disdain. Now she is my mistress, the one I turn to in times of trouble and need". Let's look at a simple example: I want to compute the average of some value in my events over the last minute. I don't want this output to be produced at fixed intervals, but as soon as it changes (that's the true event-driven spirit!). The snapshot window will include all currently active events at each point in time, hence we need to extend our original events' lifetimes into the future. Applying the snapshot window on these events, it will appear to be "looking back into the past". If you look at the result produced in this diagram, you can easily prove that, at each point in time, the current event value represents the average of all original input events within the last minute. Here is the LINQ representation of that query, applying the lifetime extension before the snapshot window:

    var result = from window in source
                     .AlterEventDuration(e => TimeSpan.FromMinutes(1))
                     .SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
                 select new { avg = window.Avg(e => e.Value) };

    With more complex modifications of the event lifetimes you can achieve many more query patterns. For instance, "running totals" by keeping the event start times, but snapping their end times to some fixed time, like the end of the day. Each snapshot then "sees" all events that have happened in the respective time period so far. Regards, The StreamInsight Team

    Read the article

  • How can I do a partial update (i.e., get isolated changesets) from subversion with subclipse?

    - by Ingvald
    If a file is committed several times with various changes, how can I fetch one change at a time, i.e., one changeset at a time? I use Eclipse, Subversion, and Subclipse, and I can't change the former two for the time being (or the MS platform). In my list/overview a file seems to be listed only in the latest relevant changeset, even if all changesets are listed. So an earlier changeset doesn't necessarily show the full set of files in the original commit, nor the original diff for a file in a commit. Update: I'm thinking about using changesets for simplified peer review, so I'd like the partial update represented for all the files committed in one changeset. It's easy to get diffs and specific revisions for specific files in Eclipse, but I'd like to step through all the changes in one specific commit/changeset in a practical manner.
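
    For what it's worth, a hedged sketch of how the same thing looks with the svn command line (run in the working copy; the revision number is hypothetical), which can complement what Subclipse shows:

    svn log -v -r 1234     # every file touched by changeset (revision) 1234
    svn diff -c 1234       # the complete diff of exactly that commit
    svn update -r 1234     # move the working copy to that point in history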

    Read the article

  • Stuck with LuaSec (Lua secure socket)

    - by PeterMmm
    This example code fails:

    require("socket")
    require("ssl")

    -- TLS/SSL server parameters
    local params = {
      mode = "server",
      protocol = "sslv23",
      key = "server.key",
      certificate = "server.crt",
      cafile = "server.key",
      password = "123456",
      verify = {"peer", "fail_if_no_peer_cert"},
      options = {"all", "no_sslv2"},
      ciphers = "ALL:!ADH:@STRENGTH",
    }

    local socket = require("socket")
    local server = socket.bind("*", 8888)
    local client = server:accept()
    client:settimeout(10)

    -- TLS/SSL initialization
    local conn,emsg = ssl.wrap(client, params)
    print(emsg)
    conn:dohandshake()
    -- conn:send("one line\n")
    conn:close()

    On requesting https://localhost:8888/ the output is:

    error loading CA locations ((null))
    lua: a.lua:25: attempt to index local 'conn' (a nil value)
    stack traceback:
            a.lua:25: in main chunk
            [C]: ?

    Not very much info. Any idea how to track down the problem?
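
    A hedged reading of that output: cafile points at the private key (server.key), so LuaSec cannot load any CA certificates, ssl.wrap() returns nil plus the error message, and indexing conn then fails. Two small changes worth trying (the ca.crt path is hypothetical):

    local params = {
      mode = "server",
      protocol = "sslv23",
      key = "server.key",
      certificate = "server.crt",
      cafile = "ca.crt",   -- the CA certificate that signed server.crt, not a key
      verify = {"peer", "fail_if_no_peer_cert"},
      options = {"all", "no_sslv2"},
    }

    local conn, emsg = ssl.wrap(client, params)
    if not conn then
      error("ssl.wrap failed: " .. tostring(emsg))   -- fail loudly instead of indexing nil
    end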

    Read the article

  • Help with calling a secure (SSL) webservice in Android.

    - by mmattax
    I'm new to Android and am struggling to make a call to an SSL web service for an Android application. My code is as follows:

    Log.v("fs", "Making HTTP call...");
    HttpClient http = new DefaultHttpClient();
    HttpGet request = new HttpGet("https://example.com/api");
    try {
        String response = http.execute(request, new BasicResponseHandler());
        Log.v("fs", response);
    } catch (Exception e) {
        Log.v("fs", e.toString());
    }

    The output is:

    Making HTTP call...
    javax.net.SSLPeerUnverifiedException: No peer certificate

    Any suggestions to make this work would be great. EDIT: I should note that this is a valid cert. It is not self-signed.

    Read the article

  • What Use are Threads Outside of Parallel Problems on Multi-Core Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single core system. I can understand threading something like an MPEG2 decoder that's going to run on a multicore cpu ( which I've done ), but what can justify the significant development costs threading entails when you're talking about a single core system or even a multicore system if your task doesn't gain significant performance from a parallel implementation? Or more succinctly, what kinds of non-performance related problems justify threading? Edit Well I just ran across one instance that's not CPU limited but threads make a big difference: TCP, HTTP and the Multi-Threading Sweet Spot Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high latency network connection. Non-blocking I/O would use significantly less local CPU resources, but would be much more difficult to design and implement.

    Read the article

  • Reason for a socket not being properly closed?

    - by gc
    Here is what I am trying to do: the server sends a message to connected clients when new messages are available. The client, on the other hand, when connected, tries to send a message to the server using send() and then receive a message using recv(); right after that, the client calls close() to close the connection. Sometimes, after the client finishes, the server's attempt to receive a message from the client results in a 104 - "connection reset by peer" error. When this happens, Wireshark reveals that the last two segments sent by the client are: 1. an ACK acknowledging the receipt of the message sent by the server; 2. a RST/ACK. No FIN is sent by the client. Why is this happening and how can I close the socket "properly" at the client?
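
    One common cause, offered here as a hedged guess since the trace fits it: if close() is called while data the server sent is still unread in the client's receive buffer, or more data arrives afterwards, the TCP stack answers with a RST rather than a FIN. A minimal C sketch of a shutdown sequence that avoids that:

    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Send a request, then close cleanly: half-close the write side (sends FIN)
     * and drain the socket until the peer closes, so close() finds no unread data. */
    static void finish_cleanly(int sock, const char *msg)
    {
        char buf[4096];

        send(sock, msg, strlen(msg), 0);
        shutdown(sock, SHUT_WR);                 /* we are done writing */

        while (recv(sock, buf, sizeof buf, 0) > 0)
            ;                                    /* consume any remaining data */

        close(sock);
    }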

    Read the article

  • Is GOTO really as evil as we are led to believe?

    - by RoboShop
    I'm a young programmer, so all my working life I've been told GOTO is evil, don't use it, and if you do, your first-born son will die. Recently, I've realized that GOTO actually still exists in .NET and I was wondering: is GOTO really as bad as they say, or is it just that everyone says you shouldn't use it, so that's why you don't? I know GOTO can be used badly, but are there any legitimate situations where you might use it? The only thing I can think of is maybe using GOTO to break out of a bunch of nested loops. I reckon that might be better than having to "break" out of each of them, but because GOTO is supposedly always bad, I would never use it and it would probably never pass a peer review. What are your views? Is GOTO always bad? Can it sometimes be good? Has anyone here actually been gutsy enough to use GOTO for a real life system?
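
    For the one case mentioned above, a minimal C# sketch (the data is hypothetical) of goto leaving several nested loops in a single step, versus breaking out of each loop in turn:

    using System;

    class GotoSketch
    {
        static void Main()
        {
            int[,] grid = new int[100, 100];
            grid[42, 7] = -1;

            for (int i = 0; i < 100; i++)
            {
                for (int j = 0; j < 100; j++)
                {
                    if (grid[i, j] == -1)
                        goto found;            // leaves both loops at once
                }
            }
            Console.WriteLine("not found");
            return;

        found:
            Console.WriteLine("found the sentinel");
        }
    }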

    Read the article

  • Sending bulk notification emails without blocking

    - by FreshCode
    For my client's custom-built CRM, I want users (technicians) to be notified of changes to marked cases via email. This warrants a simple subscription mapping table between users and cases, and automated emails to be sent every time a change is made to a case from within the logging method. How do I send 10-100 emails to subscribed users without bogging down my logging method? My SMTP server is on a peer on my LAN, so sends should be quick, but ideally this should be handled by an external queuing process. I can have a cron job send any outstanding emails every 10 minutes, but for this specific client, cases are quite time-sensitive and instant notification (as instant as email can be) would be great. How can I send bulk notification emails from within ASP.NET MVC without bogging down my logging method?
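
    A hedged sketch of the in-process version (class, address, and server names are hypothetical): the logging method only enqueues, and a background thread talks to the LAN SMTP server. In a real IIS-hosted app the queue would more safely live in a database table drained by a job, since worker-process recycling can drop an in-memory queue.

    using System;
    using System.Collections.Concurrent;
    using System.Net.Mail;
    using System.Threading;

    public static class CaseNotifier
    {
        private static readonly BlockingCollection<MailMessage> Outbox =
            new BlockingCollection<MailMessage>();

        static CaseNotifier()
        {
            var worker = new Thread(() =>
            {
                using (var smtp = new SmtpClient("smtp.example.local"))   // LAN SMTP peer (hypothetical)
                {
                    foreach (var mail in Outbox.GetConsumingEnumerable())
                    {
                        try { smtp.Send(mail); }
                        catch (SmtpException) { /* log and continue; don't kill the worker */ }
                    }
                }
            });
            worker.IsBackground = true;
            worker.Start();
        }

        // Called from the case-change logging method; returns immediately.
        public static void Notify(string to, string caseId)
        {
            Outbox.Add(new MailMessage("crm@example.local", to,
                "Case " + caseId + " updated", "A case you subscribed to has changed."));
        }
    }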

    Read the article

  • tomcat 6.0.18 HTTPS not working

    - by user180152
    Hi, I am trying to configure Tomcat for HTTPS on localhost. I am using a self-signed certificate. I added the following connector to server.xml:

    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
               maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
               enableLookups="false" disableUploadTimeout="true" acceptCount="100"
               debug="0" scheme="https" secure="true"
               clientAuth="false" sslProtocol="TLS" SSLEngine="on"
               keystoreFile="path-to-keystore" keystorePass="password" />

    I am getting the following error in the browser:

    An error occurred during a connection to localhost:8443. Peer reports it experienced an internal error. (Error code: ssl_error_internal_error_alert)

    Can anybody guide me in the proper direction? Thanks.

    Read the article

  • Advice about testing an application before release?

    - by Troy
    I would like to get some tips from peer developers about how you go about testing an application you developed, prior to release to QA. Keep in mind, this is a small-scale application (requirements are verbal), so formal testing processes won't work, especially since your boss told you to develop this app quickly and push it out the door. Despite the time constraints, I would like to make sure it is bug free; however, numerous times in the past I have had the app sent back to me because clicking the "Reset" button messes up the other controls' alignment, etc. I know there are people out there who develop small-scale apps fast and send them out with minimal bugs. How can I achieve that? I researched this post, but it didn't quite answer my question: Testing your code before releasing to QA

    Read the article

  • tsql sum data and include default values for missing data

    - by markpirvine
    Hi, I would like a query that will show a sum of the value column with a default value for missing data. For example, assume I have a table as follows:

    type_lookup:

    id  name
    1   self
    2   manager
    3   peer

    And a table as follows:

    data:

    id  type_lookup_id  value
    1   1               1
    2   1               4
    3   2               9
    4   2               1
    5   2               9
    6   1               5
    7   2               6
    8   1               2
    9   1               1

    After running a query I would like a result set as follows:

    type_lookup_id  value
    1               13
    2               25
    3               0

    I would like all rows in the type_lookup table to be included in the result set - even if they don't appear in the data table. Any help would be greatly appreciated, Mark
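
    A hedged sketch of the usual pattern for this (T-SQL, matching the tables above): drive the query from type_lookup with a LEFT JOIN so every lookup row survives, and turn the NULL sums for unmatched rows into 0.

    SELECT  tl.id AS type_lookup_id,
            COALESCE(SUM(d.value), 0) AS value
    FROM    type_lookup AS tl
    LEFT JOIN data AS d
            ON d.type_lookup_id = tl.id
    GROUP BY tl.id
    ORDER BY tl.id;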

    Read the article

  • Redirecting before POST upload has been completed

    - by vartec
    I have a form with a file upload. The files to be uploaded are actually pictures and videos, so they can be quite big. I have logic which, based on the headers and the first 1KB, can determine whether the rest will be processed or immediately rejected. In the latter case I'd like to redirect the client to an error page without having to wait for the upload to finish. The problem is that just sending a response before the POST is complete doesn't seem to work: the redirect gets ignored, and if I close the connection, the browser complains with a "Connection reset by peer" error. So the question is: is it even possible to do that in pure HTTP (without client-side JavaScript), and if so, how?

    Read the article

  • Is there an open source repository for SQL code?

    - by morpheous
    I find myself writing SQL code (queries or stored procs) to solve problems that can definitely be defined as 'patterns' that occur frequently in business. Rather than having to wrack my brain each time I encounter a new problem (which must have been solved countless times by other coders/db analysts), I wondered if there was a repository where I could check out (peer-reviewed) code - and maybe add my two pence every now and then. I know different db vendors tend to write slightly variant forms of SQL - but there could still be a repository with ANSI stuff and proprietary stuff. Hopefully, such a site would encourage more people to write standardized SQL. Is there such a site? If not, why not? (Would anyone else be interested in such a site?) If such a site exists, please provide link(s), as Google is not finding anything remotely useful.

    Read the article

  • Boost ASIO read X bytes synchronously into a vector

    - by xeross
    Hey, I've been attempting to write a client/server app with Boost; so far it sends and receives, but I can't seem to just read X bytes into a vector. If I use the following code:

    vector<uint8_t> buf;
    for (;;)
    {
        buf.resize(4);
        boost::system::error_code error;
        size_t len = socket.read_some(boost::asio::buffer(buf), error);
        if (error == boost::asio::error::eof)
            break; // Connection closed cleanly by peer.
        else if (error)
            throw boost::system::system_error(error); // Some other error.
    }

    and the packet is bigger than 4 bytes, then it seems it keeps writing into those 4 bytes until the entire packet has been received. However, I want it to fetch 4 bytes, then allow me to parse them, and then get the rest of the packet. Can anyone provide me with a working example, or at least a pointer on how to make it work properly? Regards, Xeross
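
    A hedged sketch of the header-then-body approach (it assumes the 4 header bytes carry the body length, little-endian, which the question does not actually specify): boost::asio::read(), unlike read_some(), keeps reading until the supplied buffer is completely filled.

    #include <cstdint>
    #include <vector>
    #include <boost/asio.hpp>

    std::vector<uint8_t> read_packet(boost::asio::ip::tcp::socket& socket)
    {
        boost::system::error_code error;

        // 1) Read exactly 4 header bytes.
        std::vector<uint8_t> header(4);
        boost::asio::read(socket, boost::asio::buffer(header), error);
        if (error)
            throw boost::system::system_error(error);

        // Parse the header before fetching the rest (assumed: little-endian body length).
        std::uint32_t body_len = static_cast<std::uint32_t>(header[0])
                               | (static_cast<std::uint32_t>(header[1]) << 8)
                               | (static_cast<std::uint32_t>(header[2]) << 16)
                               | (static_cast<std::uint32_t>(header[3]) << 24);

        // 2) Read exactly body_len more bytes.
        std::vector<uint8_t> body(body_len);
        boost::asio::read(socket, boost::asio::buffer(body), error);
        if (error && error != boost::asio::error::eof)
            throw boost::system::system_error(error);

        return body;
    }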

    Read the article
