Search Results

Search found 15272 results on 611 pages for 'request for enhancement'.

Page 185/611 | < Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >

  • Pushing complete notifications to client

    - by ton.yeung
    So with CQRS, we accept that consistency is eventual. However, that doesn't mean the user has to continually poll, or that "eventual" means an update has to take more than 500ms to sync. For the sake of UX, we want to at least give the illusion of consistency, or failing that, be as transparent as possible. With that in mind, I have this setup:

        angularjs web client
          -> consumes webapi restful services
          -> sends commands to nservicebus command handlers
          -> saves to neventstore
          -> dispatches events to nservicebus event handlers
          -> sends message to signalr hub
          -> sends notifications to angularjs web client

    With that setup, theoretically someone initiates a request, the server validates the request and sends out the necessary commands. In the meantime the client gets a 200 response and updates the view ("working on it"), then gets a message some time later ("done, here's the updated data").

    Here's where things get interesting: each command could spawn multiple events. I'm not sure if this is a serious no-no or not, but that's how it currently is. For example, a new customer spawns CustomerIDCreated, CustomerNameUpdated, CustomerAddressUpdated, etc. Which event handler needs to notify the client? Should all of them, in a progress-bar style update?
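    One common shape for the last hop is a thin SignalR hub that each NServiceBus event handler pushes into, keyed by a correlation id carried on every event so the client can aggregate per-event progress itself. A minimal sketch, assuming a hypothetical NotificationHub, event shape, and notifyProgress client method (none of these names come from the question):

        using System;
        using Microsoft.AspNet.SignalR;
        using NServiceBus;

        public class NotificationHub : Hub { }

        // Hypothetical event; the correlation id ties it back to the original command.
        public class CustomerNameUpdated : IEvent
        {
            public Guid CorrelationId { get; set; }
        }

        public class CustomerNameUpdatedHandler : IHandleMessages<CustomerNameUpdated>
        {
            public void Handle(CustomerNameUpdated message)
            {
                // Push one notification per event; the client decides whether to
                // render them as progress-bar steps or collapse them into one.
                var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
                hub.Clients.Group(message.CorrelationId.ToString())
                   .notifyProgress("CustomerNameUpdated");
            }
        }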

    Read the article

  • resolve access violation exception (0xC0000005) crashing IIS app pool

    - by Joseph
    IIS 7.5, Server 2008 R2, classic ASP and ASP.NET 2.0/3.5 websites on the same server, same app pool. For the past 4 weeks thousands of these 'C0000005' errors have been occurring. I know from the IIS Debug Diag tool that 'C0000005' is an access violation error. Below is the top line from my Debug Diag report:

        In w3wp__PID__6656__Date__01_08_2011__Time_01_42_46AM__281__First Chance Access Violation.dmp
        the assembly instruction at asp!CActiveScriptEngine::GetApplication+27 in
        \\?\C:\Windows\System32\inetsrv\asp.dll from Microsoft Corporation has caused an access
        violation exception (0xC0000005) when trying to read from memory location 0x00000000
        on thread 29

        Thread 29 - System ID 6736
        Entry point                0x00000000
        Create time                1/8/2011 12:46:26 AM
        Time spent in user mode    0 Days 00:00:00.140
        Time spent in kernel mode  0 Days 00:00:00.078

        Function Source
        asp!CActiveScriptEngine::GetApplication+27
        vbscript!COleScript::GetDebugApplicationCoreNoRef+2b
        vbscript!COleScript::FDebuggerEnabled+30
        vbscript!COleScript::SetScriptSite+cd
        asp!CActiveScriptEngine::Init+125
        asp!CScriptManager::GetEngine+252
        asp!AllocAndLoadEngines+28f
        asp!ExecuteGlobal+17a
        asp!Execute+b5
        asp!CHitObj::ViperAsyncCallback+3fc
        asp!CViperAsyncRequest::OnCall+6a
        comsvcs!CSTAActivityWork::STAActivityWorkHelper+32
        ole32!EnterForCallback+f4
        ole32!SwitchForCallback+1a8
        ole32!PerformCallback+a3
        ole32!CObjectContext::InternalContextCallback+15b
        ole32!CObjectContext::DoCallback+1c
        comsvcs!CSTAActivityWork::DoWork+12f
        comsvcs!CSTAThread::DoWork+18
        comsvcs!CSTAThread::ProcessQueueWork+37
        comsvcs!CSTAThread::WorkerLoop+135
        msvcrt!_endthreadex+44
        msvcrt!_endthreadex+ce
        kernel32!BaseThreadInitThunk+e
        ntdll!__RtlUserThreadStart+70
        ntdll!_RtlUserThreadStart+1b

    Below is the faulting module's ASP report:

        Executing ASP requests    0 Request(s)
        ASP templates cached      0 Template(s)
        ASP template cache size   0.00 Bytes
        Loaded ASP applications   1 Application(s)
        ASP.DLL Version           7.5.7600.16620

        ASP application report
        ASP application metabase key / Physical Path / Virtual Root
        Session Count             0 Session(s)
        Request Count             0 Request(s)
        Session Timeout           0 minute(s)
        Path to Global.asa
        Server side script debugging enabled               False
        Client side script debugging enabled               False
        Out of process COM servers allowed                 False
        Session state turned on                            False
        Write buffering turned on                          False
        Application restart enabled                        False
        Parent paths enabled                               False
        ASP Script error messages will be sent to browser  False

    Recent events: the server was being brute-forced by hackers all of December and probably earlier. They weren't able to gain access, but they did get a virus on and blasted out spam. I installed AVG and about the 17 or 22 latest patches; after that the app pool started crashing, and the server has crashed a couple of times since then. I am in no man's land, as I am a developer and not a sysadmin, but I have to assume many roles, so I'm reaching out for help. Sometimes I will see hundreds of these 'C0000005' script engine errors in the event log in a matter of seconds, and other times just a few an hour. I googled the line 'ASP!CACTIVESCRIPTENGINE::GETAPPLICATION' and got nothing; it's as if the function doesn't exist. I have spent many hours googling to no avail and am now turning to the experts on the forums. Thank you for your help.

    Read the article

  • CQRS applicability when some commands need to block the UI

    - by regularfry
    I am working on an app which I would dearly love to transition from a fairly traditional layered architecture to CQRS, for a number of reasons, not least of which is that having a robust event log will make a couple of feature requests I can see barrelling towards me trivial to accommodate.

    Now I have a conceptual problem: of the roughly 40 commands the user can initiate, there are three which the user needs to be sure have completed successfully before the UI lets them do anything else. Everything else fits the "submit a request, query for success later" model except these three commands. How is this handled in CQRS-land?

    - Do I separate the three blocking commands into effectively a third service, so I have Commands, Queries, and BlockingCommands?
    - Do I have a two-stage event processor with an in-request blocking first stage which only gets used for the blocking commands?
    - Does the existence of these three commands mean the whole idea of applying CQRS is invalid?
    - Should I just pretend they aren't blocking and poll for success in the UI?

    I'm sure this must come up on other projects; how is it usually handled?
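    The last option is more workable than it sounds: the three commands stay ordinary fire-and-forget commands on the write side, and only the caller changes, waiting on the read model before unblocking the UI. A minimal sketch, assuming hypothetical ICommandBus and IReadModel abstractions and a correlation id per command (all invented for illustration):

        using System;
        using System.Threading.Tasks;

        public interface ICommandBus { void Send(object command, Guid correlationId); }
        public interface IReadModel { Task<bool> IsProcessedAsync(Guid correlationId); }

        public class BlockingCommandSender
        {
            private readonly ICommandBus _bus;
            private readonly IReadModel _readModel;

            public BlockingCommandSender(ICommandBus bus, IReadModel readModel)
            {
                _bus = bus;
                _readModel = readModel;
            }

            // Send the command normally, then poll the read model until it
            // reflects the change or the timeout expires.
            public async Task<bool> SendAndWaitAsync(object command, TimeSpan timeout)
            {
                var correlationId = Guid.NewGuid();
                _bus.Send(command, correlationId);

                var deadline = DateTime.UtcNow + timeout;
                while (DateTime.UtcNow < deadline)
                {
                    if (await _readModel.IsProcessedAsync(correlationId))
                        return true;                // safe to unblock the UI
                    await Task.Delay(100);
                }
                return false;                       // timed out: surface an error state
            }
        }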

    Read the article

  • Source of Unexplained Requests in Server Logs

    - by Synetech inc.
    Hi, I am baffled by some entries in my server logs, specifically the web-server logs. Other than normal, expected traffic, I have noticed three types of request errors (e.g. 404, etc.):

    1. Broken links, i.e. links from old, external pages that point to pages that are no longer here
    2. Sequences of probes, i.e. some jerk trying to hack in by scanning my server for a series of exploitable admin-type pages and such
    3. What appear to be completely random requests for things that have never existed on the server or even have anything to do with the server, and which appear by themselves (i.e. not a series of requests like the probes)

    Could it somehow be a mistyped URL or IP? That's about the only thing I can think of, but still, how could I get a request on, say, foobar.dyndns.org (12.34.56.78) for something like www.wantsfly.com/prx2.php or /MNG/LIVE or http://ant.dsabuse.com/abc.php?auth=45V456b09m&strPassword=X%5BMTR__CBZ%40VA&nLoginId=43? (Those are a few actual requests from my logs.) Can someone please explain scenario three to me? Thanks.

    Read the article

  • Cannot log in to Dashboard / Unable to find the server at mykeystoneurl

    - by neo0
    I installed the Dashboard following this guide: http://wiki.openstack.org/OpenStackDashboard

    Everything went fine, but when I run the server I cannot log in with the username and password from the DATABASES config in local_settings.py. Here's my config:

        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.mysql',
                'NAME': 'dashboarddb',
                'USER': 'nova',
                'PASSWORD': 'nova',
                'HOST': 'localhost',
                'default-character-set': 'utf8'
            },
        }

    When I run the Dashboard server and enter the username and password, it returns this error in the browser:

        Unable to find the server at mykeystoneurl (HTTP 400)

    And on the command line:

        DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
        DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
        Validating models...
        0 errors found
        Django version 1.3.1, using settings 'openstack_dashboard.settings'
        Development server is running at http://0.0.0.0:8888/
        Quit the server with CONTROL-C.
        Request returned failure status.
        Traceback (most recent call last):
          File "/home/us/horizon/.venv/src/python-keystoneclient/keystoneclient/client.py", line 121, in request
            body = json.loads(body)
          File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
            return _default_decoder.decode(s)
          File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
            raise ValueError("No JSON object could be decoded")
        ValueError: No JSON object could be decoded
        [06/Mar/2012 15:20:03] "POST /auth/login/ HTTP/1.1" 200 3735

    I also tried logging in as "admin" with the password "password" or "secrete", but it didn't work. What's wrong? Thank you!
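    The error text suggests the login is failing against Keystone, not MySQL: "mykeystoneurl" reads like a placeholder that was never replaced. A hedged guess at the relevant local_settings.py lines for a Horizon install of that era (the host value is illustrative):

        OPENSTACK_HOST = "127.0.0.1"  # the machine actually running Keystone
        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

    The credentials checked at login are the Keystone user's, not the values in the DATABASES block, which only configure Django's own database backend.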

    Read the article

  • Passenger, Apache and avoiding page caching

    - by user38382
    I'm hosting a Rack application with Passenger and Apache. The application is set up to cache the content of each request to the public directory after each request, which allows Apache to serve the content directly as a static page on future requests. I would like to tell Apache, presumably through some rewrite rules, that any request with query parameters should not be served from the cache but instead passed down to the Rack application. With a Mongrel setup I would just redirect to the balancer when my rewrite conditions are met. How do you do the same with Passenger?
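    With Passenger there is no balancer to redirect to: anything Apache does not serve as a static file falls through to the application automatically. So the rewrite rules only need to decide when to serve the cached copy. A sketch, assuming the cached pages are written as .html files mirroring the request path under public/ (adjust to your cache layout):

        RewriteEngine On

        # Serve the cached page only when there is no query string
        # and a cached copy actually exists on disk.
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI}.html -f
        RewriteRule ^(.*)$ $1.html [L]

        # Everything else (including any request with query parameters)
        # falls through to the Rack app via Passenger.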

    Read the article

  • MvcReportViewer v.0.4.0 is available!

    - by Ilya Verbitskiy
    Originally posted on: http://geekswithblogs.net/ilich/archive/2014/06/04/mvcreportviewer-v.0.4.0-is-available.aspx

    Today I released a new version of MvcReportViewer. This release contains mostly bug fixes reported by library users. I am glad to see that the Open Source model works and people try to contribute to the project! Thank you everybody for your bug reports and help with the project.

    Version 0.4.0

    - Added support for ASP.NET MVC 5.
    - Removed the jQuery dependency. I have not tested it on IE8 or earlier versions; any help with testing is welcome!
    - Fixed a problem with SSRS keep-alive cookies. Keep-alive cookies are issued every time a report is opened during a browser session. Many people don't restart their browsers, and in my case Chrome doesn't get rid of the cookie session data on close - I had to manually delete them for the reports to start working again. I added a KeepSessionAlive control setting to manage SSRS keep-alive behavior. It is set to false by default to fix the "Bad Request 400: Request Too Long" issue. You can find a usage example in Fluent.cshtml.
    - Fixed the bug where ReportViewer control parameters were not parsed when the ShowParameterPrompts parameter had not been set.
    - Changed the public static MvcReportViewerIframe MvcReportViewer method to use IEnumerable<KeyValuePair<string, object>> reportParameters instead of a simple object, because users reported that they mostly use multiple values per report parameter.
    - Added support for SSRS hosted on Windows Azure. Users should set the MvcReportViewer.IsAzureSSRS property to true in Web.config to use Windows Azure authentication. I do not have a Windows Azure SSRS instance and built the code using the http://msdn.microsoft.com/en-us/library/gg552871.aspx#Authentication article. It would be nice if somebody from the community tested the change or provided me a test report on Windows Azure for testing purposes.

    Read the article

  • Determine server specs for a Rails app with a MySQL database (on AWS)

    - by Rogier
    I developed an intranet application with Rails (3.2) for one of my customers. There will be around 30-40 employees working with it. The backend is MySQL (5). What would be the best way to determine the server specs needed? Given:

    - max. load will be roughly 2400 (40 users * 60) HTTP requests (mixed GET/POST) per hour, i.e. well under one request per second on average
    - 15% of these calls are JSON calls (iOS)
    - an average request will make between 5-10 database calls
    - 500-800 SQL INSERTs per day
    - webpages are fairly simple (no images, just text)
    - an average webpage is 15 requests (css/js/etc) and 35-45 KB in total

    More specifically, since they need access from multiple geographical locations, we are thinking of running a Bitnami Ruby stack in the AWS cloud (uptime is important). Any thoughts on an AWS instance size (small/medium) and utilization (light/medium/heavy)? Thanks!

    Read the article

  • Is realtime validation of username good or bad?

    - by iamserious
    I have a simple form for the user to sign up to my site, with email, username and password fields. We are now trying to implement AJAX validation so the user doesn't have to post the form to find out whether the username is already taken. I can do this either on the keyup event or on the blur event. My question is: which of these is really the best way to go?

    Keyup: From the user's point of view, it is good if validation happens as they type (on keyup) - of course, I wait half a second to see if the user stops typing before firing off the request, and the user can make adjustments immediately. But this means I am sending far more requests than if I validated on blur.

    Blur: The number of requests is much lower when validation happens on blur. But this means the user has to actually leave the textbox, look at the validation result, and if necessary go back to it to make changes, repeating the whole process until they get it right.

    I had a quick look at Google, Tumblr and Twitter, and none of them actually validate usernames on keyup (heck, Tumblr waits for the form to be posted), but I could swear I have seen keyup validation in a lot of places too. So, coming back to the question: will keyup validation be too much for the server - an unnecessary overhead - or is it worth taking those hits to give the user a better experience?

    PS: All my regex validations etc. are already done in JavaScript, and only when input passes all the other criteria is a request sent to the server to check whether the username already exists. (The server is doing a select count(1) from user where username = '...' - nothing substantial, but still enough to occupy some resource.)

    PPS: I'm on the ASP.NET / MS SQL stack, if that matters.
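    Whichever client event fires, the server-side cost can usually be kept small enough that a debounced keyup is not a real problem. A minimal sketch of the availability endpoint, assuming ASP.NET MVC and a hypothetical repository (an EXISTS-style lookup against an indexed username column is cheaper than count(1)):

        using System.Web.Mvc;

        public interface IUserRepository
        {
            // e.g. SELECT TOP 1 1 FROM [user] WHERE username = @username
            bool UsernameExists(string username);
        }

        public class AccountController : Controller
        {
            private readonly IUserRepository _users;

            public AccountController(IUserRepository users)
            {
                _users = users;
            }

            [HttpGet]
            public JsonResult IsUsernameAvailable(string username)
            {
                var available = !string.IsNullOrWhiteSpace(username)
                                && !_users.UsernameExists(username);
                return Json(available, JsonRequestBehavior.AllowGet);
            }
        }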

    Read the article

  • Hyperic HQ metrics not working

    - by Robin Weston
    I am having a problem with Hyperic HQ. Several metrics, some in IIS 6.x (Request Execution Time, Request Wait Time) and some in .NET 2.0 (Bytes in all Heaps, Exceptions Thrown per Minute), always show 0. If I view perfmon on the server itself I can see that the counters have values greater than zero. There are some metrics that work fine, such as Total Get Requests per Minute and the other IIS defaults. I have looked in the server logs but nothing obvious shows up. Please advise; I am happy to provide more information if required.

    Read the article

  • Forward Hostname via my router to another PC for debugging

    - by Markive
    Hi, my web service runs on, for example: http://mydomain.com/mywebservice.asmx. This works great, but I have a PDA application whose synchronisation through this web service I want to debug. Currently the only way I can do this is to debug the web service running on the actual server, which is far from ideal. What I would like is for any device connecting on my wireless network that requests mywebservice.asmx to have the request forwarded to my development PC, where IIS then handles the request and lets me debug in Visual Studio. So a device on the network that requests that hostname will hit this PC. I am at a loss as to how to set this up on my router (Zoom ADSL X6); this is massively out of my scope, but any help would be much appreciated.

    Read the article

  • How to REALLY start thinking in terms of objects?

    - by Mr Grieves
    I work with a team of developers who all have several years of experience with languages such as C# and Java. Most of them are young enough to have been taught OOP as the standard way to develop software in university and are very comfortable with concepts such as inheritance, abstraction, encapsulation and polymorphism. Yet many of them, and I have to include myself, still tend to create classes which are meant to be used in a very functional fashion. The resulting software is often several smaller classes which correctly represent business objects, passed through larger classes which only supply ways to modify and use those objects (functions). Large, complex, difficult-to-maintain classes named Manager are usually the result of such behaviour.

    I can see two theoretical reasons why people might write this type of code:

    1. It's easy to start thinking of everything in terms of the database.
    2. Deep down, for me, a computer handling a web request feels more like a functional operation than an object-oriented one when you think about request handlers, threads, processes, CPU cores and CPU operations...

    I want source code which is easy to read and easy to modify, and I have seen excellent examples of OO code which meet these objectives. How can I start writing code like this? How can I really start thinking in an object-oriented fashion? How can I share such a mentality with my colleagues?
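    A small before/after sketch of the Manager smell described above, using an invented Invoice example: once the rules move next to the data they guard, there is nothing left for a Manager to do.

        using System;

        // Before: an anemic data bag plus an external "Manager" that knows the rules.
        public class InvoiceData
        {
            public decimal Amount { get; set; }
            public bool IsPaid { get; set; }
        }

        public class InvoiceManager
        {
            public void Pay(InvoiceData invoice)
            {
                if (invoice.IsPaid) throw new InvalidOperationException("Already paid.");
                invoice.IsPaid = true;
            }
        }

        // After: the invoice protects its own invariant; callers cannot put it
        // into an invalid state, and the Manager disappears.
        public class Invoice
        {
            public decimal Amount { get; }
            public bool IsPaid { get; private set; }

            public Invoice(decimal amount)
            {
                Amount = amount;
            }

            public void Pay()
            {
                if (IsPaid) throw new InvalidOperationException("Already paid.");
                IsPaid = true;
            }
        }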

    Read the article

  • nginx timeout despite ridiculous configuration

    - by Joa Ebert
    The scenario is an API server that should handle uploads. Posting to my.host.com/api/upload should do something with the body the client sends. However, the API server has been designed to block the whole request until it has fully processed the file, including some analysis which can take up to approx. 5 minutes (...!). This has to change, of course. In the meantime I wanted to set up nginx as a load balancer in front of the API servers. I quickly ran into a timeout issue, consulted Google and came up with this ridiculous test configuration:

        user www-data;
        worker_processes 4;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            access_log off;
            sendfile on;
            send_timeout 3600;
            keepalive_timeout 3600 120;
            tcp_nopush on;
            tcp_nodelay on;
            gzip off;
            client_header_timeout 3600;
            client_body_timeout 3600;
            proxy_send_timeout 3600;
            proxy_read_timeout 3600;
            proxy_connect_timeout 1800;
            proxy_next_upstream error;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    And:

        upstream test {
            server host1;
            server host2;
        }

        server {
            listen 80;
            server_name my.host.com;
            client_max_body_size 10m;

            location /api/ {
                proxy_pass http://test;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $host;
                proxy_redirect off;
            }
        }

    Still, when an upload happens, I get the following result in the error.log:

        2010/12/22 13:36:42 [error] 5256#0: *187359 upstream timed out (110: Connection timed out)
        while reading response header from upstream, client: xx.xx.xx.xx, server: my.host.com,
        request: "POST /api/upload HTTP/1.1", upstream: "http://apiserver:80/upload",
        host: "my.host.com"

    What else could I do? If I look at the log of the API server I can see that it is still processing the request and analyzing the file, but I would think 3600 seconds as a timeout should be more than enough. This happens even after only a couple of seconds. And I did a reload and force-reload of the configuration as well, of course.

    Read the article

  • Nginx server_name is set to mydomain.com, so why is www.mydomain.com getting served too?

    - by Lorenz Forvang
    I have my nginx conf set up as follows:

        server {
            listen 443 ssl;
            server_name mydomain.com;
            ...
        }

    When I load https://mydomain.com, the site loads fine. But when I load https://www.mydomain.com, the site loads as well. Why is this happening? I set up the DNS records using Amazon Route 53 as:

        A      mydomain.com      xxx.xxx.xxx.xxx (IP)
        CNAME  www.mydomain.com  mydomain.com

    So is a request to www.mydomain.com arriving at nginx as a request to mydomain.com? If so, how do I differentiate requests to www.mydomain.com and mydomain.com at my server?
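    DNS is the likely explanation: a CNAME only resolves the www name to the same IP; the browser still sends Host: www.mydomain.com, and when no server block matches that name, nginx falls back to the default server for that listen port - which, with a single block, is this one. A sketch of making the two names explicit (the redirect is one option; the second block needs the same certificate):

        server {
            listen 443 ssl;
            server_name mydomain.com;
            ...
        }

        server {
            listen 443 ssl;
            server_name www.mydomain.com;
            return 301 https://mydomain.com$request_uri;
        }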

    Read the article

  • Bundling in Visual Studio 2012 for web optimization

    - by Jalpesh P. Vadgama
    I have been writing a series of posts about the new features in Visual Studio 2012, and this post is part of that series. Web applications and sites these days provide more and more features, and as a result we include lots of JavaScript and CSS files in them. Once the site loads, the browser has to fetch all of those js and css files, and when there are many of them this consumes a lot of time. In my test page, the browser loaded a total of 25 files at more than 1 MB in total.

    Since we need our web application or site to be responsive and high-performing, this is a performance bottleneck. In situations like this, the bundling feature of Visual Studio 2012 and ASP.NET 4.5 comes in very handy; with its help we can optimize and increase the performance of our application. To enable this feature, we just set debug="false" in the web.config of our application.

    Once you enable this feature and run the application, the browser traffic shows far fewer items - only 8 in my test. After enabling bundling, all the js and css files are combined into a single request each. Isn't that a cool feature? It will surely have a great impact on performance. Hope you like it. Stay tuned for more. Till then, happy programming!!
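    For completeness, a minimal sketch of the registration side of this feature (bundle names and file paths are illustrative). Bundles are declared once at application start with the System.Web.Optimization API:

        using System.Web.Optimization;

        public class BundleConfig
        {
            // Typically called from Application_Start in Global.asax.
            public static void RegisterBundles(BundleCollection bundles)
            {
                // One virtual URL per bundle; the framework serves the
                // combined, minified content at that URL.
                bundles.Add(new ScriptBundle("~/bundles/site").Include(
                    "~/Scripts/jquery-{version}.js",
                    "~/Scripts/app.js"));

                bundles.Add(new StyleBundle("~/Content/css").Include(
                    "~/Content/site.css"));
            }
        }

    Bundling and minification then switch on when web.config has <compilation debug="false" />, or when BundleTable.EnableOptimizations is set to true explicitly.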

    Read the article

  • How to create a GUI that communicates with USB devices

    - by VINAYAK
    I am doing my project using Win32 programming. I am just learning Win32 and am able to create a UI. I want that UI to communicate with a USB device. How can I go about that? Are there predefined functions, or do I need to write the functions for communicating with the OS, getting the device list and getting the details about each device?

    My purpose is:

    1. Creating a UI that shows the basic information about the device (we want to send a control request to the device to get the descriptors).
    2. For that, I first want to communicate with the OS about device attachment. That will lead to getting information about the device; enumeration takes place, and only then can I request the device information through descriptors using standard requests.
    3. I also want to create the driver for my device, which also requires communicating with the OS (Windows).

    So, can anyone help me with this? How can I achieve or approach this? Note: I am at entry level now, so a detailed, step-by-step response would be appreciated.
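    On the Win32 side, device enumeration is usually done with the SetupDiGetClassDevs / SetupDiEnumDeviceInfo family, descriptor-level access goes through WinUSB or a device-specific driver, and writing your own driver means the Windows Driver Kit. Purely as an illustration of step 1, here is a managed sketch that lists attached USB devices via WMI (the query is illustrative and only reports what Plug and Play has already enumerated):

        using System;
        using System.Management;  // add a reference to System.Management.dll

        class UsbDeviceLister
        {
            static void Main()
            {
                // Win32_PnPEntity rows whose PNP id starts with "USB" are
                // USB-attached devices the OS has already enumerated.
                var searcher = new ManagementObjectSearcher(
                    "SELECT Name, PNPDeviceID FROM Win32_PnPEntity " +
                    "WHERE PNPDeviceID LIKE 'USB%'");

                foreach (ManagementObject device in searcher.Get())
                {
                    Console.WriteLine("{0}  [{1}]",
                        device["Name"], device["PNPDeviceID"]);
                }
            }
        }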

    Read the article

  • nginx + Jetty - thousands of connections stuck in LAST_ACK

    - by virulence
    I have a FreeBSD machine with jails - two in particular, one that runs nginx and another that runs a Java program that accepts requests via Jetty (embedded mode). Jetty receives upwards of 500 requests/sec constantly, and there has been an issue lately where I constantly have over 60,000 connections in the LAST_ACK state between nginx and Jetty.

    Distribution of all connections (includes some other services, particularly php-fpm):

        root@host:/root # netstat -an > conns.txt
        root@host:/root # cat conns.txt | awk '{print $6}' | sort | uniq -c | sort -n
              18 LISTEN
             112 CLOSING
             485 ESTABLISHED
             650 FIN_WAIT_2
            1425 FIN_WAIT_1
            3301 TIME_WAIT
           64215 LAST_ACK

    Distribution of nginx -> Jetty connections:

        root@host:/root # cat conns.txt | grep '10.10.1.57' | awk '{print $6}' | sort | uniq -c | sort -n
               1
               3 CLOSE_WAIT
               3 LISTEN
              18 FIN_WAIT_2
             125 ESTABLISHED
           64193 LAST_ACK

    I'd prefer every request to fully close the connection. Clients' requests are about 10 minutes apart from each other, so connections must be closed. Some of the connections:

        tcp4 0 0 10.10.1.50.46809 10.10.1.57.9050 LAST_ACK
        tcp4 0 0 10.10.1.50.46805 10.10.1.57.9050 LAST_ACK
        tcp4 0 0 10.10.1.50.46797 10.10.1.57.9050 LAST_ACK
        tcp4 0 0 10.10.1.50.46794 10.10.1.57.9050 LAST_ACK
        tcp4 0 0 10.10.1.50.46790 10.10.1.57.9050 LAST_ACK
        tcp4 0 0 10.10.1.50.46789 10.10.1.57.9050 LAST_ACK
        tcp4 0 0 10.10.1.50.46771 10.10.1.57.9050 LAST_ACK
        etc...

    Notes:

    - On Jetty's end I've set maxIdleTime to 2000 - before this, all connections were in ESTABLISHED, but they are now LAST_ACK.
    - On Jetty's end I've set Connection: close (i.e. response.setHeader(HttpHeaders.CONNECTION, HttpHeaderValues.CLOSE);).
    - Jetty never reports a lot of open connections - always very few.
    - PF/IPFW is not currently being used.
    - nginx reset_timedout_connection is on.

    I cannot figure out how to get nginx or Jetty to forcibly close the connection. Is this simply something that needs to be fixed in Jetty so that it fully closes the socket after the request finishes? Thanks a lot in advance.

    EDIT: forgot my nginx config for the proxy setup:

        proxy_pass http://10.10.1.57:9050;
        proxy_set_header HTTP_X_GEOIP $http_x_geoip;
        proxy_set_header GEOIP_COUNTRY_CODE $geoip_country_code;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Connection "";
        proxy_http_version 1.1;

    EDIT2: Forcing Jetty to close the connection via request.getConnection().getEndPoint().close() does nothing - it's obvious the connection IS being closed (as it's in LAST_ACK), but why isn't it getting past this state? Is nginx keeping the connection open to the backend for some reason?

    Read the article

  • Bridging two sockets

    - by Itehnological
    I wondered if it is possible to bridge two incoming TCP sockets. For example:

        Client A -----> Server <----- Client B

    The server sends its magic to both clients and then they connect to each other, bypassing the server:

        Client A ----------><---------- Client B

    UPDATE: The idea is that when those clients can't bind to ports to listen on, they should still be able to create a connection between each other with the help of the server. For example, Client A and Client B each have a TCP socket open to the server. User A decides to chat with User B and creates a new TCP connection to the server with a request to bridge it with User B. The server sends that request to Client B, which also opens a new TCP connection to the server for that chat line. Now that the server has both chat connections, from A and B, it bridges them so they can work without the server; as a result, the server won't have to process all the messages and files the two users share. That's the idea.

    Read the article

  • Why is rsync.exe [cwRsync] trying to open a port when in client mode?

    - by hemancuso
    I'm trying to use a Cygwin-compiled version of rsync (the cwRsync package) on Windows, and in seemingly every configuration I test, Windows Firewall prompts the user to allow inbound traffic. If you deny this request, everything works fine, as expected. I'm doing a vanilla push - rsync.exe localpath user@remotepath:/absolutepath - and it works just fine. I've also tried this command having deleted ssh from the path, and using rsync on local paths only - still a firewall prompt. Why is this listen() happening, and is there a way I can force the client not to attempt to listen, without recompiling and maintaining a patch?

    Read the article

  • Capturing BizTalk 2004 SQLAdapter failures

    - by DanBedassa
    I was recently working on a BizTalk 2004 project where I encountered an issue with capturing exceptions (inside my orchestration) coming from an external source - database server down, non-existent stored procedure, ... I thought I might write this up in case it helps someone.

    To reproduce the issue, I just rename the database to something different. The orchestration was failing at the point where I make a SQL request via a Request-Response port. The exception handlers were bypassed, but I could see a warning in the event log saying: "The adapter failed to transmit message going to send port".

    After scratching my head for a while (as a newbie to BTS 2004) to find a way to catch exceptions from the SQLAdapter in an orchestration, here is the solution I had:

    - Put the Send and Receive shapes inside a Scope shape
    - Set the Scope's transaction type to "Long Running"
    - Add a Catch block expecting type "System.Exception"
    - Set the "Delivery Notification" of the associated port to "Transmitted"
    - Change the "Retry Count" of the associated port to 0 (this makes sure BizTalk raises the exception, instead of a warning, so you can capture it)
    - Now capture and do whatever you want with the exception inside the Catch block

    Read the article

  • thick client migration to a web-based application

    - by user1151597
    This query is about application design and the technology I should consider during a migration.

    The scenario: I have a C#.NET WinForms application which communicates with a device. One of the main features of this application is monitoring cyclic data (at a 200ms rate) sent from the device to the application. The request to start the cyclic data is sent only once, at the beginning, and then the application keeps receiving data from the device until it sends a stop request. Now this same application needs to be deployed over the web, on an intranet. The application is composed of a business logic layer and a communication layer which talks to the device through UDP ports. I am looking at a solution which allows a single instance of the application on the server, so that the device thinks it is connected as usual, while I manage the clients from the business logic layer. I want to reuse the code of the business layer and the communication layer as much as possible. Please let me know whether web services, WCF, etc. are what I should consider for this migration. Thanks in advance.
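    Of the options named, WCF fits this shape well because a duplex contract lets the single server-side instance push the 200ms cyclic data out to intranet clients instead of having them poll. A minimal sketch, with illustrative contract and method names:

        using System.ServiceModel;

        [ServiceContract(CallbackContract = typeof(IMonitorCallback))]
        public interface IDeviceMonitorService
        {
            [OperationContract]
            void StartMonitoring();

            [OperationContract]
            void StopMonitoring();
        }

        public interface IMonitorCallback
        {
            [OperationContract(IsOneWay = true)]
            void OnCyclicData(byte[] sample);   // pushed to clients every ~200ms
        }

        // Single instance, so the device still sees exactly one peer no matter
        // how many web clients attach; the existing business and communication
        // layers are reused behind this facade.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class DeviceMonitorService : IDeviceMonitorService
        {
            public void StartMonitoring() { /* delegate to existing business layer */ }
            public void StopMonitoring()  { /* delegate to existing business layer */ }
        }

    Duplex callbacks require a session-capable binding such as NetTcpBinding or WSDualHttpBinding; a plain browser UI would instead need polling or a comet-style bridge.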

    Read the article

  • wget has a 4 second delay

    - by guisius
    Hello. I have tried to wget a page with Windows and Mac, and the response is instant, while the Linux version waits 4 seconds before it shows the response. I just hope this can be solved. More information added below.

    In Ubuntu:

        wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
        --2011-03-04 14:21:17--  xxx://192.168.0.135/test.cgi?cmd=
        Connecting to 192.168.0.135:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: `test.txt'

            [ <=> ] 17 --.-K/s in 0s

        2011-03-04 14:21:22 (1.88 MB/s) - `test.txt' saved [17]

    While in Mac OS:

        wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
        --2011-03-04 14:22:33--  xxx://192.168.0.135/test.cgi?cmd=
        Connecting to 192.168.0.135:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: `test.txt'

            [ <=> ] 17 --.-K/s in 0s

        2011-03-04 14:22:33 (755 KB/s) - `test.txt' saved [17]

    On Ubuntu it delays 4 seconds, while Windows and Mac do not. I believe it may be related to some setting in the network config, such as packet size or window frame, but I have no idea how to set this. PS: because the post limit does not allow posting the URL, I have marked the scheme as xxx.

    Read the article

  • Spreading incoming batched data into a real-time stream

    - by pr1001
    I would like to display some events in 'real time'. However, I must fetch the data from another source. I can request the last X minutes, though the source is updated approximately every 5 minutes. This means there will be a delay between the most recent data retrieved and the point in time at which I make the request.

    Second, because I will be receiving a batch of data, I don't want to just fire all the events down a socket once my fetcher has retrieved them: I would like to spread out the events so that they are both accurately spaced amongst each other and in sync with their original occurrences (e.g. an event is always displayed 6 minutes after it actually happened).

    My thought is to fetch the data every 5 minutes from the source, knowing that I won't get the very latest data. The original data would then be queued to be sent down the socket 7.5 minutes from its original timestamp - that is, at least ~2.5 minutes from when its batch was fetched and at most 7.5 minutes after that.

    My question is this: is this the best way to approach the problem? Does this problem have any standard approaches or associated literature on implementation best practices and edge cases? I am a bit worried that the frequency of my fetches and the frequency at which the source is updated will get out of sync, leading to points where no data is retrieved from the source. However, since my socket delay is greater than my fetch frequency, a subsequent fetch should retrieve newer data before the socket queue is empty. Is that correct? Am I missing something? Thanks!
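    The fixed-delay replay idea is simple to express in code: stamp every fetched event with an emit time of original timestamp plus the delay, and let a scheduler fire them. As long as the delay exceeds the worst-case fetch lag, the original spacing is preserved exactly. A minimal sketch (names are illustrative):

        using System;
        using System.Threading.Tasks;

        public class DelayedReplayer
        {
            public class Event
            {
                public DateTime OccurredAtUtc;   // original timestamp
                public string Payload;
            }

            private static readonly TimeSpan Delay = TimeSpan.FromMinutes(7.5);
            private readonly Action<Event> _emit;  // e.g. write to the socket

            public DelayedReplayer(Action<Event> emit) { _emit = emit; }

            // Call for each event in a fetched batch; batches may arrive with
            // anything up to the polling interval of lag.
            public void Enqueue(Event e)
            {
                var due = e.OccurredAtUtc + Delay - DateTime.UtcNow;
                if (due < TimeSpan.Zero)
                    due = TimeSpan.Zero;   // delay budget blown: emit immediately

                Task.Delay(due).ContinueWith(_ => _emit(e));
            }
        }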

    Read the article

  • Android From Local DB (DAO) to Server sync (JSON) - Design issue

    - by Taiko
    I sync data between my local DB and a server, and I'm looking for the cleanest way to model all of this.

    I have a com.something.db package that contains a data helper and a couple of DAO classes representing objects stored in the DB (I didn't write that part):

        com.something.db
          - public DataHelper
          - public Employee
              @DatabaseField   // e.g. "name" is an actual column name in the DB
              name
              @DatabaseField
              salary
              ... (all in all 50 fields)

    I have a com.something.sync package that contains all the implementation detail of how to send data to the server. It boils down to a ConnectionManager fed by different classes that implement a 'Request' interface:

        com.something.sync
          - public interface ConnectionManager
          - package ConnectionManagerImpl
          - public interface Request
          - package LoginRequest
          - package GetEmployeesRequest

    My issue is that at some point in the sync process, I have to JSONise and de-JSONise my data (e.g. the Employee class). But I really don't feel like having the same Employee class be responsible for both its JSONisation and its actual representation inside the local database. It doesn't feel right, because I carefully decoupled the rest; I am only stuck on this JSON thing.

    What should I do? Should I write three Employee classes?

        EmployeeDB
          @DatabaseField   // e.g. "name" is an actual column name in the DB
          name
          @DatabaseField
          salary
          ... 50 fields

        EmployeeInterface
          getName
          getSalary
          ... 50 fields

        EmployeeJSON
          JSON_KEY_NAME = "name"   // the JSON key happens to match the column name, but that isn't a requirement
          name
          JSON_KEY_SALARY = "salary"
          salary
          ... 50 fields

    It feels like a lot of duplication. Is there a common pattern I can use here?

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good-quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings in some decent directories (Qype etc.). The site was rebuilt in Drupal 7 two months ago, with all the basics done - URL rewriting, an XML sitemap submitted to Google, and most importantly, good-quality, structured content.

    I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low-quality links from link farms and directories.

    My question is what the correct way to deal with this is. Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have them removed. I've already submitted a request to Google asking to have the penalty removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily and have built some further decent-quality, relevant links. Is there anything further I can do?

    Read the article
