Search Results

Search found 2676 results on 108 pages for 'spam blocking'.

Page 76 of 108

  • "port forwarding": redirect calls to webservice at port 8081 to port 80

    - by niba
    Hi, a colleague of mine wrote a web service that runs on port 8081 of our Windows 2008 Server. He uses the ServiceHost class, which as far as I know means it is a standalone (self-hosted) service with no IIS or ASP.NET involvement. Note: I'm new to WCF. Now there are issues with clients behind firewalls that block requests to remote port 8081 on our server (where the web service runs). The easiest solution would be to run the web service host on port 80, but an Apache 2.2 web server is also running on the same Windows server, hosting some websites on port 80 by default. My idea after some research: define a virtual host (let's say http://webservice.[hostname]:80) and route requests for it to the web service host (http://[hostname]:8081). Is this a good idea? Can Apache handle forwarding to standalone web service hosts? It would be nice if someone could lead me onto the right track :) Best regards, Niels

    Read the article
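
    What the question describes is a reverse proxy: Apache answers on port 80 for a name-based virtual host and relays the traffic to the self-hosted WCF endpoint on port 8081. A minimal Apache 2.2 configuration sketch of that idea, with hypothetical host names and assuming mod_proxy/mod_proxy_http are loaded (WCF-specific concerns such as SOAP addressing are not covered here):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName webservice.example.com
            # Relay everything for this host name to the standalone service host
            ProxyPass        / http://localhost:8081/
            ProxyPassReverse / http://localhost:8081/
        </VirtualHost>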

  • "Quoted-printable line longer than 76 chars" warning when sending HTML E-Mail

    - by Chris Roberts
    Hi, I have written some code in my VB.NET application to send an HTML e-mail (in this case, a lost-password reminder). When I test the e-mail, it gets eaten by my spam filter. One of the rules it scores badly against is this: MIME_QP_LONG_LINE RAW: Quoted-printable line longer than 76 chars. I've been through the source of the e-mail and broken each line longer than 76 characters into two lines with a CR+LF in between, but that hasn't fixed the problem. Can anyone point me in the right direction? Thanks!

    Read the article
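
    The rule refers to the encoded message body, not to the HTML source: a compliant quoted-printable encoder inserts soft line breaks (a trailing =) so that no encoded line exceeds 76 characters, which is why hand-wrapping the HTML has no effect; the wrapping has to happen at the encoding stage. A small illustration of what compliant output looks like, sketched in Python purely for demonstration (the asker's code is VB.NET):

        import quopri

        # One long HTML "line", as an e-mail body might contain.
        html = ("<p>" + "Your new password reminder text. " * 20 + "</p>").encode("utf-8")

        encoded = quopri.encodestring(html)

        # Long input lines come out split with a trailing '=' (a soft line break),
        # keeping every encoded line within the 76-character limit.
        print(encoded.decode("ascii"))
        print("longest line:", max(len(line) for line in encoded.splitlines()))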

  • Serial port on Mac OS X constantly freezes/locks/disappears (USB to Arduino)

    - by Niraj D
    I have a problem with my C++ code running in Xcode, using both the AMSerial library and the generic C calls (ioctl, termios). After a fresh restart my application works well, but after I kill the program the serial port (I think) is not released. I have checked the open files under /dev and killed the connection to the serial USB device from there, but my C++ code still can't open the USB port. I have narrowed this down to a low-level Mac OS X issue: the port stays blocked indefinitely, regardless of closing it with the aforementioned libraries. Just for context, I'm trying to send numbers serially through my USB port to an Arduino Duemilanove at 9600 baud. Running the Serial Monitor in Arduino works perfectly fine; running my C++ application, however, occasionally freezes up my computer, and sometimes the mouse and keyboard lock up, requiring a hard reset. How can this problem be fixed? It seems like Mac OS X is not USB friendly!

    Read the article
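
    Not a diagnosis of the Mac-specific lockup, but the usual discipline when opening a serial device at the POSIX level is to open it non-blocking (so open() cannot hang waiting for carrier detect) and to guarantee the descriptor is closed on every exit path, so a killed run does not leave the port claimed. A minimal sketch of that pattern using Python's termios module (the device path is hypothetical; the asker's code is C++, where the same open flags apply):

        import atexit
        import os
        import termios

        PORT = "/dev/tty.usbserial-XXXX"   # hypothetical device name

        # O_NONBLOCK: do not hang in open() waiting for the modem lines.
        # O_NOCTTY: do not let the device become the controlling terminal.
        fd = os.open(PORT, os.O_RDWR | os.O_NOCTTY | os.O_NONBLOCK)
        atexit.register(os.close, fd)      # release the port on any normal exit

        attrs = termios.tcgetattr(fd)
        attrs[2] |= termios.CLOCAL | termios.CREAD   # ignore modem control lines, enable receiver
        attrs[4] = attrs[5] = termios.B9600          # input/output speed: 9600 baud
        termios.tcsetattr(fd, termios.TCSANOW, attrs)

        os.write(fd, b"42\n")              # send a number to the Arduino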

  • Deciding between Apache Commons exec or ProcessBuilder

    - by Moev4
    I am trying to decide whether to use ProcessBuilder or Commons Exec. My requirement is simply to create a daemon process whose stdout/stdin/stderr I do not care about, and to execute a kill to destroy that process when the time comes. I am using Java on Linux. I know that both have their pains and pitfalls (such as needing a separate thread to swallow the output streams, without which you can get blocking or deadlocks, and closing the streams so as not to leave open file handles hanging around), and I wanted to know if anyone had suggestions one way or the other, as well as any good resources to follow.

    Read the article
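
    Whichever Java library ends up being used, the stream-swallowing pitfall largely disappears if the child's stdio is pointed at /dev/null when the process is launched, since nothing then accumulates in pipe buffers. The pattern is OS-level rather than library-specific; here it is sketched in Python for brevity (an illustration of the pattern only, not a recommendation of a particular Java API; the command is hypothetical):

        import subprocess

        # Launch a long-running child whose output we never want to read.
        # Redirecting all three standard streams to /dev/null means no pipes
        # can fill up and block the child, and no gobbler threads are needed.
        proc = subprocess.Popen(
            ["/usr/local/bin/some-daemon", "--work"],   # hypothetical command
            stdin=subprocess.DEVNULL,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )

        # ... later, when the time comes to get rid of it:
        proc.terminate()            # polite kill (SIGTERM); proc.kill() sends SIGKILL
        proc.wait(timeout=10)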

  • $.Post with Form submit

    - by Michael
    ...Some form gets submitted...

        $("form").submit(function() {
            saveFormValues($(this), "./../..");
        });

        function saveFormValues(form, path) {
            var inputs = getFormData(form);
            var params = createAction("saveFormData", inputs);
            var url = path + "/scripts/sessions.php";
            $.post(url, params);
        }

    The weird thing is that if I add a callback, $.post(url, params, function(data) { alert(data); }), I get a blank alert. Within scripts/sessions.php I have a function that saves whatever the $_POST information is to a file, and sessions.php never saves this saveFormValues call; it never shows up in the file. But if I keep trying, about one attempt in 15 actually gets saved. This leads me to believe that the form's own POST is somehow blocking this value-saving post. Any help?

    Read the article

  • How to write an asynchronous LINQ query?

    - by Morgan Cheng
    After reading a bunch of LINQ-related material, I suddenly realized that no articles introduce how to write an asynchronous LINQ query. Suppose we use LINQ to SQL; the statement below is clear, but if the SQL database responds slowly, the thread running this block of code is held up.

        var result = from item in Products
                     where item.Price > 3
                     select item.Name;

        foreach (var name in result)
        {
            Console.WriteLine(name);
        }

    The current LINQ query spec doesn't seem to provide support for this. Is there any way to do asynchronous LINQ programming, so that a callback is notified when the results are ready to use, without any blocking delay on I/O?

    Read the article

  • LINQ to SQL over WCF times out after several calls

    - by Redeemed1
    I have a LINQ to SQL repository class which instantiates the L2S DataContext in its constructor. The repository is instantiated at run time (using Unity) in a service hosted in IIS with WCF. When I run the client MVC application, the calls to the back-end WCF service work for a while and then time out. I suspected a database issue, as I was depending on garbage collection in the IIS host to dispose of unused DataContext instances, but when I checked the characteristics of the problem I noticed the following: the client makes the call to WCF, but the WCF service does not respond; next, the client times out; some time later (several minutes) the service actually executes the request by instantiating the repository and servicing the call. I have checked both client and server trace logs, and only the client shows WCF errors (the timeout error). Where should I look? Is it something in WCF, or is L2S possibly blocking with unfreed connections or other resources? Many thanks, Brian

    Read the article

  • Installing sSMTP from SSH

    - by James
    I'm on a Web Hosting Buzz reseller account. They have some very stringent mail-sending rules, including blocking authenticated SMTP socket mail sending using PEAR. It was suggested in the WHB forum that sending mail was still possible with sSMTP. I've since gotten SSH access and googled how to install sSMTP from SSH:

        rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm
        yum install ssmtp

    However, the first command fails with:

        Retrieving http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm
        error: skipping http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm - transfer failed - Unknown or unexpected error

    It was a very old thread in the WHB forum and the thread poster could not be reached for assistance. Any help would be much appreciated!

    Read the article

  • A checklist for fixing SQL Server timeout problems and improving execution time in .NET applications

    - by avgbody
    A checklist for improving execution time between .NET code and SQL Server. Anything from basic to weird solutions is appreciated.

    Code:
    - Change the default timeout on the command and connection (by avgbody).
    - Use stored procedure calls instead of inline SQL statements (by avgbody).
    - Look for blocking/locking using Activity Monitor (by Jay Shepherd).

    SQL Server:
    - Watch out for parameter sniffing in stored procedures (by AlexCuse).
    - Beware of dynamically growing the database (by Martin Clarke).
    - Use Profiler to find any queries/stored procedures taking longer than 100 milliseconds (by BradO).
    - Increase the transaction timeout (by avgbody).
    - Convert dynamic stored procedures into static ones (by avgbody).
    - Check how busy the server is (by Jay Shepherd).

    Read the article

  • Named pipe is not flushing in Python

    - by BrainCore
    I have a named pipe created via the os.mkfifo() command. Two different Python processes access this named pipe: process A is reading, and process B is writing. Process A uses the select function to determine when there is data available in the fifo/pipe. Despite the fact that process B flushes after each write call, process A's select does not always return (it keeps blocking as if there is no new data). After looking into this issue extensively, I finally just programmed process B to add 5KB of garbage writes before and after my real call, and likewise programmed process A to ignore those 5KB. Now everything works fine, and select always returns appropriately. I came to this hackish solution after noticing that process A's select would return if process B was killed (after writing and flushing, B would sleep on a read pipe). Is there a problem with flush in Python for named pipes?

    Read the article
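
    One thing worth ruling out (an assumption, not a confirmed diagnosis of the code above) is userspace buffering: select() only sees bytes that have reached the kernel pipe buffer, and a buffered reader can likewise drain bytes into its own buffer so that a later select() blocks even though the data was already delivered. Using the low-level os interface keeps both ends of the FIFO unbuffered; a minimal sketch, with a hypothetical path:

        import os
        import select

        FIFO = "/tmp/demo_fifo"                 # hypothetical path
        if not os.path.exists(FIFO):
            os.mkfifo(FIFO)

        # Writer side (process B): os.write() goes straight to the kernel,
        # so nothing is left sitting in a Python-level buffer.
        def write_once(payload: bytes) -> None:
            fd = os.open(FIFO, os.O_WRONLY)     # blocks until a reader has the FIFO open
            os.write(fd, payload)
            os.close(fd)

        # Reader side (process A): an unbuffered, non-blocking descriptor
        # means select() reports exactly what the kernel holds for us.
        def read_once(timeout: float = 5.0) -> bytes:
            fd = os.open(FIFO, os.O_RDONLY | os.O_NONBLOCK)
            ready, _, _ = select.select([fd], [], [], timeout)
            data = os.read(fd, 65536) if ready else b""
            os.close(fd)
            return data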

  • C# How to communicate between 2 servers

    - by Chau
    I have a website running ASP.NET (C#) on server A. My website needs to access a web service on server B, but server B will only accept incoming requests if the requester is located within a certain IP range, and server A is not within this range. I have a server, server C, which is located within the IP range, and the only thing blocking server A from server C is a firewall (which I have access to). It should be possible to create a hole in the firewall between server A and server C, but my question is: how do I relay the request from server A to server B via server C? I also need the response from server B to get back to server A :) Thanks in advance.

    Read the article
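
    Architecturally, server C has to act as a relay (a reverse proxy): it accepts the request from server A, replays it against server B from its permitted IP, and hands the response back. In practice this would usually be an off-the-shelf proxy (IIS ARR, Apache mod_proxy, nginx) rather than custom code, but a minimal Python sketch of the idea, with hypothetical host names, makes the flow concrete (GET only; error handling omitted):

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib import request

        SERVER_B = "http://server-b.internal:8080"   # hypothetical address of server B

        class RelayHandler(BaseHTTPRequestHandler):
            """Runs on server C: forwards GET requests from server A to server B."""

            def do_GET(self):
                upstream = request.urlopen(SERVER_B + self.path)
                body = upstream.read()
                self.send_response(upstream.status)
                self.send_header("Content-Type",
                                 upstream.headers.get("Content-Type", "application/octet-stream"))
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)               # response travels back to server A

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), RelayHandler).serve_forever()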

  • jQuery Ajax call on Tornado handlers waits for previous Ajax call to return

    - by harshh
    Hey all. I recently started testing Tornado for a home project, which uses jQuery's getJSON function to call my Tornado handlers, and I found something strange which I'm seeking an explanation for. I fire an AJAX request for Handler1, and in some cases a request for Handler2 is initiated before Handler1 returns. It appears from the development-server logs, and from console debugging in Firebug, that the Handler2 request waits for the Handler1 request to finish, and only then returns. So basically one XHR call is waiting for earlier XHRs. They are supposed to be asynchronous/non-blocking, right? Or am I missing something? You can check the test-case environment, called testtornado, at http://github.com/harshh/Harsh-Projects/ with main.py as the server-triggering file. I would appreciate help from anyone who can throw some light on this.

    Read the article
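
    Two things commonly produce exactly this serialization (stated as assumptions, since the linked project is not reproduced here): the browser may queue concurrent requests to the same host, and Tornado itself runs handlers on a single-threaded event loop, so a handler that blocks (time.sleep, a slow database call) stalls every other request until it finishes. A minimal sketch using current Tornado APIs that makes the difference visible:

        import asyncio
        import time

        import tornado.web

        class BlockingHandler(tornado.web.RequestHandler):
            def get(self):
                time.sleep(5)               # blocks the event loop: other requests wait
                self.write({"handler": "blocking"})

        class NonBlockingHandler(tornado.web.RequestHandler):
            async def get(self):
                await asyncio.sleep(5)      # yields to the event loop: other requests proceed
                self.write({"handler": "non-blocking"})

        async def main():
            app = tornado.web.Application([
                (r"/block", BlockingHandler),
                (r"/async", NonBlockingHandler),
            ])
            app.listen(8888)
            await asyncio.Event().wait()    # keep the server running

        if __name__ == "__main__":
            asyncio.run(main())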

  • Serving a large file using select, epoll or kqueue

    - by xask
    Nginx uses epoll or other multiplexing techniques (select) to handle multiple clients; that is, it does not spawn a new thread for every request, unlike Apache. I tried to replicate the same thing in my own test program using select. I could accept connections from multiple clients by creating a non-blocking socket and using select to decide which client to serve, and my program simply echoes their data back to them. It works fine for small data transfers (a few bytes per client). The problem occurs when I need to send a large file over a connection to a client: since I have only one thread to serve all clients, until I am finished reading the file and writing it over to the socket I cannot resume serving the other clients. Is there a known solution to this problem, or is it best to create a thread for every such request?

    Read the article
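
    The usual single-threaded answer is to never write the whole file in one go: register the client socket for writability, send one modest chunk each time select/epoll reports it writable, and keep per-client progress, so a slow client only delays its own transfer. A rough Python sketch of that bookkeeping (illustrative only; accept() wiring and error handling are omitted):

        import selectors
        import socket

        CHUNK = 64 * 1024
        sel = selectors.DefaultSelector()        # epoll/kqueue/select, whichever is available

        def start_sending(conn: socket.socket, path: str) -> None:
            """Begin streaming `path` to an already-accepted client connection."""
            conn.setblocking(False)
            state = {"file": open(path, "rb"), "buf": b""}
            sel.register(conn, selectors.EVENT_WRITE, state)

        def on_writable(conn: socket.socket, state: dict) -> None:
            """Send one chunk; called only when select says the socket can take data."""
            if not state["buf"]:
                state["buf"] = state["file"].read(CHUNK)
                if not state["buf"]:             # end of file: clean up this client
                    sel.unregister(conn)
                    state["file"].close()
                    conn.close()
                    return
            sent = conn.send(state["buf"])       # may send less than the full chunk
            state["buf"] = state["buf"][sent:]

        def loop() -> None:
            while True:
                for key, _ in sel.select():
                    on_writable(key.fileobj, key.data)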

  • How to determine whether a file will be logically or physically moved

    - by Frederic Morin
    The facts: when a file is moved, there are two possibilities: either the source and destination are on the same partition and only the file system index is updated, or the source and destination are on two different file systems and the file needs to be moved byte by byte (a copy on move). The question: how can I determine whether a file will be logically or physically moved? I'm transferring large files (700+ MB) and would adopt a different behavior for each situation. Edit: I've already coded a moving-file dialog with a worker thread that performs the blocking I/O calls to copy the file a megabyte at a time; it gives the user information such as a rough estimate of the remaining time and the transfer rate. The problem is: how do I know whether the file can be moved logically before trying to move it physically?

    Read the article
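
    One common way to answer this up front is to compare the device IDs reported by stat() for the source file and the target directory: if they match, the move can be a cheap rename within one file system; if not, it will degrade to a byte-for-byte copy followed by a delete. A minimal sketch in Python (the same st_dev comparison is available from most languages; the paths are hypothetical, and edge cases such as bind mounts or junction points can still fool it):

        import os

        def move_is_logical(src: str, dst_dir: str) -> bool:
            """True if src and dst_dir live on the same device, so a rename is enough."""
            return os.stat(src).st_dev == os.stat(dst_dir).st_dev

        if move_is_logical("/data/video.mkv", "/backup"):
            print("same file system: the move is just an index update (rename)")
        else:
            print("different file systems: expect a byte-for-byte copy")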

  • Is the IP from the source or target in this System.Net.Sockets.SocketException?

    - by Jeremy Mullin
    I'm making an outbound connection using a DNS name to a server other than the localhost, and I get this exception: System.Net.WebException: Unable to connect to the remote server --- System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it 127.0.0.1:5555 The text implies that the TARGET machine refused the connection, but the IP address and port are from the localhost, which is kind of confusing. So is that IP address really the outgoing IP and port, even though the exception was caused by the target refusing the connection? Or is the exception from the local firewall blocking the outgoing connection?

    Read the article

  • Is 'trollology' a good term for studies of cyber spying, cyber stalking and cyber mobbing? [closed]

    - by e-mental
    Fighting spam while allowing free speech is necessary for productive online communication. But this field is full of problems, and there are many questions that have not yet been put together. The last problem in constituting an interdisciplinary forum and theoretical frame for such studies of cyber spying & stalking & mobbing is the best name for it. I would like to ask whether 'trollology' is a good idea, or whether you have better suggestions. Thanks a lot.

    Read the article

  • I need to block my feed completely

    - by justjoe
    I need a coding solution for how to completely hide my blog feed. I know how to use related hooks and filters such as 'the_excerpt_rss' and 'the_post_rss', and I also understand how to limit access or make my blog private. So the question is more about how to block feed access without making my blog private. I hope the solution won't be some Apache .htaccess rule, because I need to code it directly into my theme. Sorry if this is too much to ask.

    Read the article

  • Creating an independent draw thread using pthreads (C++)

    - by sagekilla
    Hi all, I'm working on a graphical application whose main loop looks something like this:

        while (Simulator.simulating)
        {
            Simulator.update();
            InputManager.processInput();
            VideoManager.draw();
        }

    I do this several times a second, and in the vast majority of cases my computation takes up 90-99% of my processing time. What I would like to do is take the processInput and draw functions out of this loop and have each one run independently. That way, the input thread can always be checking for input (at a reasonable rate), and the draw thread can attempt to redraw at a given frame rate. My issue is that I'm not sure how to do this properly. How would I properly initialize my pthread_t and the associated pthread_attr_t so that the thread runs without blocking what I'm doing? Any help, or even a link, is appreciated. Thanks!

    Read the article

  • How to distribute email delivery between 2 or more servers

    - by user181186
    We provide an email marketing service through our online app. We have about 30 customers, and each one has their own mailing list (5k to 20k addresses each). What we really want is to distribute email delivery between 2 or more servers. I was wondering what kind of approach/solutions MailChimp or Constant Contact use to provide a great service: many servers? many IPs? Our spam policy suspends ANY user/customer who gets 10% bounces.

    Read the article

  • Boost::Asio: io_service.run() vs poll(), or how do I integrate boost::asio into my main loop

    - by user300713
    Hi, I am currently trying to use boost::asio for some simple TCP networking for the first time, and I already came across something I am not really sure how to deal with. As far as I understand it, the io_service.run() method is basically a loop which runs until there is nothing more left to do, which means it will run until I release my little server object. Since I already have a main loop of sorts set up, I would rather update the networking loop manually from there, just for the sake of simplicity, and I think io_service.poll() would do what I want, sort of like this:

        void myApplication::update()
        {
            myIoService.poll();
            // do other stuff
        }

    This seems to work, but I am still wondering whether there is a drawback to this method, since it does not seem to be the common way of dealing with boost::asio's io services. Is this a valid approach, or should I rather run io_service.run() in a non-blocking extra thread?

    Read the article

  • Does the Microsoft SQL Server native client support IDBAsynchNotify?

    - by Aaron Klotz
    I'm working on some OLE DB code that runs queries on MS SQL Server via ICommand::Execute. I'm converting this code to operate asynchronously by setting the DBPROPVAL_ASYNCH_INITIALIZE property on the command before executing. I'd prefer to register a IDBAsynchNotify sink so that my code can be notified of events, as opposed to polling or blocking via ISSAsynchStatus. The documentation for ICommand::Execute does not show IConnectionPointContainer as an acceptable riid parameter, but the same document, when discussing the DB_S_ASYNCHRONOUS return code, suggests that it is possible to request an IConnectionPointContainer interface that I could use to register my event sink. When I call ICommand::Execute, passing IID_IConnectionPointContainer as the riid parameter, I receive the E_NOINTERFACE error. I also tried setting the DBPROP_IConnectionPointContainer property before Execute but I received the same results. If I have to, I'll use ISSAsynchStatus, but I'd much rather use IDBAsynchNotify. Is it possible?

    Read the article

  • OpenLayers, Layers: Tiled vs. single tile

    - by Chau
    Each time we add a new layer to our OpenLayers-based website (data provided primarily by a GeoServer server), we discuss whether to use a single-tile or a tiled approach. Some of the parameters we evaluate are the following.

    Using the tiled approach we get:
    - Slow but continuous buildup of the viewport
    - Lots of small images
    - Client-side caching possibilities
    - Blocking of the loading pipeline (6 requests at a time)
    - A jerky feeling when navigating during load

    Using the single-tile approach we get:
    - A smoother feeling when navigating during load
    - A time delay before the layer is loaded
    - One large image for each layer
    - No caching of the single tile

    We have a lot of data editing in the layers, so a tile cache might not be that efficient. Are there any best practices when it comes to tiling? Progressing towards infinitely fast hardware and unlimited data connections, the discussion becomes irrelevant, but what configuration do you perceive as the most user-pleasing?

    Read the article

  • TIBRV: Remote vs Local RVD

    - by jsw
    When connected to a local RVD, a sending application is shielded from network interruptions, and the send-message methods will only block for the time it takes for the message to reach the local RVD process. With a remote RVD, the sending application is no longer shielded from network interruptions, and the send-message methods will block for the time it takes to hop across the network to reach the remote RVD process. Is my understanding correct? The documentation is vague regarding remote daemons. I'm mostly concerned with how reliable and performant the send will be from the perspective of the sending application. Introducing unnecessary blocking on the client side when sending a message (especially a network hop) is a big no-no in this application. The speed at which messages reach the consumer is not of the utmost importance. With this in mind, is a remote RVD out of the question?

    Read the article

  • Workflow foundation 4.0 message correlation and error reporting

    - by Lygpt
    I have a workflow service that runs and performs a number of different operations (such as web service calls). If one of these operations fails, I call an error-reporting web service to notify a separate system that one of my workflow operations has failed. As the error could be something like the web service being down, I loop and retry the operation until it works. There can be times, though, when the data I'm passing to this web service is faulty and needs changing, so I need to be able to hook into this running (but delayed) workflow, change local workflow variables, and then re-run the operation. I've looked at message correlation in Workflow Foundation 4.0 to achieve this, but because the Delay activity is active in my running workflow instance, any second service call does nothing (it's as if the Delay activity is blocking any other requests). I've tried setting 'CanCreateInstance' to both true and false, but it doesn't help. Thanks!

    Read the article

  • Is it possible for competing file access to cause deadlock in Java?

    - by BlairHippo
    I'm chasing a production bug that's intermittent enough to be a real bastich to diagnose properly, but frequent enough to be a legitimate nuisance for our customers. While I'm waiting for it to happen again on a machine set to spam the logfile with trace output, I'm trying to come up with a theory on what it could be. Is there any way for competing file reads/writes to create what amounts to a deadlock condition? For instance, let's say I have thread A that occasionally writes to config.xml and thread B that occasionally reads from it. Is there a set of circumstances that would cause thread B to prevent thread A from proceeding? My thanks in advance to anybody who helps with this theoretical fishing expedition. Edit: to answer Pyrolistical's questions: the code isn't using FileLock, and it's running on a WinXP machine.

    Read the article
