Search Results

Search found 10391 results on 416 pages for 'sys dm exec requests'.

Page 57/416 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • How can I disable keep-alive on ASP.NET Web Service client requests?

    - by Matthew Brindley
    I have a few web servers behind an Amazon EC2 load balancer. I'm using TCP balancing on port 80 (rather than HTTP balancing). I have a client polling a Web Service (running on all web servers) for new items every few seconds. However, the client seems to stay connected to one server and polls that same server each time. I've tried using ServicePointManager to disable KeepAlive, but that didn't change anything. The outgoing connection still had its "connection: keep-alive" HTTP header, and the server kept the TCP connection open. I've also tried adding an override of GetWebRequest to the proxy class created by VS, which inherits from SoapHttpClientProtocol, but I still see the keep-alive header. If I kill the client's process and restart, it'll connect to a new server via the load balancer, but it'll continue polling that new server forever. Is there a way to force it to connect to a random server each time? I want the load from the one client to be spread across all of the web servers. The client is written in C# (as is the server) and uses a Web Reference (not a Service Reference), which points to the load balancer.
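
    For reference, a minimal sketch of the GetWebRequest override, with the extra step that is often the missing piece: dropping to HTTP/1.0 so the framework closes the connection even when an intermediary ignores the Connection header. ItemService here is a stand-in for the VS-generated proxy name (the generated classes are partial, so this can live in your own file):

        using System;
        using System.Net;

        // ItemService is a stand-in for the VS-generated Web Reference proxy name;
        // the generated classes are partial, so this override can live in its own file.
        public partial class ItemService : System.Web.Services.Protocols.SoapHttpClientProtocol
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                var request = (HttpWebRequest)base.GetWebRequest(uri);
                request.KeepAlive = false;                        // ask for "Connection: close"
                request.ProtocolVersion = HttpVersion.Version10;  // HTTP/1.0 connections are non-persistent by default
                return request;
            }
        }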

    Read the article

  • Performing a clean database creation using msbuild

    - by Robert May
    So I’m taking a break from writing about other Agile stuff for a post. :) I’ll get back to those subjects, but this is fun too. Something I’ve done quite a bit of is MSBuild and CI work, and I’m experimenting with ways to improve on what I’ve done in the past, particularly around database CI.

    Today I developed a mechanism for starting from scratch with your database. By scratch, I mean blowing away the existing database and creating it again from a single command-line call. I’m a firm believer that developers should be able to get to a known clean state at the database level with a single command, and that they should be operating off their own isolated databases to improve productivity. These scripts will help with that.

    Here’s how I did it. First, we have to disconnect users. I did so using the help of a script from SQL Server Central. Note that I’m using sqlcmd variable replacement.

        -- kills all the users in a particular database
        -- dlhatheway/3M, 11-Jun-2000
        declare @arg_dbname sysname
        declare @a_spid smallint
        declare @msg varchar(255)
        declare @a_dbid int

        set @arg_dbname = '$(DatabaseName)'

        select @a_dbid = sdb.dbid
        from master..sysdatabases sdb
        where sdb.name = @arg_dbname

        declare db_users insensitive cursor for
            select sp.spid
            from master..sysprocesses sp
            where sp.dbid = @a_dbid

        open db_users
        fetch next from db_users into @a_spid
        while @@fetch_status = 0
        begin
            select @msg = 'kill ' + convert(char(5), @a_spid)
            print @msg
            execute (@msg)
            fetch next from db_users into @a_spid
        end
        close db_users
        deallocate db_users
        GO

    Once all users are booted from the database, we can commence with recreating it. I generated the create-database script from SQL Server Management Studio, so I’m only going to show the bits that weren’t generated and that matter; there are a bunch of ALTER DATABASE statements that aren’t shown.

    First, I had to find the default location of the database files in the install, since they can be in many different locations. I used Method 1 from a TechNet blog and then modified it a bit to do what I needed. I ended up using dynamic SQL because, for the life of me, I couldn’t get the “Filename” property to stop returning an error when I used anything besides a string literal. I’m dropping the database first, if it exists. Here’s the code:

        IF EXISTS (SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)')
        BEGIN
            DROP DATABASE $(DatabaseName)
        END;
        GO

        IF EXISTS (SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath')
        BEGIN
            DROP DATABASE zzTempDBForDefaultPath
        END;

        -- Create a temp database. Because no options are given, the default data
        -- and log path locations are used.
        CREATE DATABASE zzTempDBForDefaultPath;

        DECLARE @Default_Data_Path VARCHAR(512),
                @Default_Log_Path VARCHAR(512);

        -- Get the default data path
        SELECT @Default_Data_Path =
        (
            SELECT LEFT(physical_name, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)) + 1)
            FROM sys.master_files mf
            INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id]
            WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 0
        );

        -- Get the default log path
        SELECT @Default_Log_Path =
        (
            SELECT LEFT(physical_name, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)) + 1)
            FROM sys.master_files mf
            INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id]
            WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 1
        );

        -- Clean up.
        IF EXISTS (SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath')
        BEGIN
            DROP DATABASE zzTempDBForDefaultPath
        END;

        DECLARE @SQL nvarchar(max)
        SET @SQL =
            'CREATE DATABASE $(DatabaseName) ON PRIMARY
            ( NAME = N''$(DatabaseName)'',
              FILENAME = N''' + @Default_Data_Path + N'$(DatabaseName)' + '.mdf' + ''',
              SIZE = 2048KB , FILEGROWTH = 1024KB )
            LOG ON
            ( NAME = N''$(DatabaseName)Log'',
              FILENAME = N''' + @Default_Log_Path + N'$(DatabaseName)' + '.ldf' + ''',
              SIZE = 1024KB , FILEGROWTH = 10% )'
        exec (@SQL)
        GO

    And with that, your database is created. You can run these scripts on any server and with any database name. To do that, I created an MSBuild script that looks like this:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
          <PropertyGroup>
            <DatabaseName>MyDatabase</DatabaseName>
            <Server>localhost</Server>
            <SqlCmd>sqlcmd -v DatabaseName=$(DatabaseName) -S $(Server) -i </SqlCmd>
            <ScriptDirectory>.\Scripts</ScriptDirectory>
          </PropertyGroup>
          <Target Name="Rebuild">
            <ItemGroup>
              <ScriptFiles Include="$(ScriptDirectory)\*.sql"/>
            </ItemGroup>
            <Exec Command="$(SqlCmd) &quot;%(ScriptFiles.Identity)&quot;" ContinueOnError="false"/>
          </Target>
        </Project>

    Note that the Scripts directory is underneath the directory where I’m running the msbuild command and is relative to that directory. Note also that the target uses batching to run each script in the Scripts subdirectory, one after the other. Each script is passed to the sqlcmd command line using the .Identity property on the item group that is created. This target file is saved as “Database.target”.

    To make this work, you’ll need msbuild in your path, and then run the following command:

        msbuild database.target /target:Rebuild

    Once you’ve got your virgin database set up, you’d then need a tool like dbdeploy.net to determine that it is a virgin database, build a change script based on the change scripts, and then another sqlcmd call to update the database with the appropriate scripts. I’m doing that next, so I’ll post a blog update when I’ve got it working.

    Technorati Tags: MSBuild, Agile, CI, Database

    Read the article

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct. Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written. In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service gets hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential. We could use ASP.NET output caching with WCF to solve the repeated-processing problem, but the server would still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF call and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache. I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery, which includes a diagram of the architecture. The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body of a 200 response, which includes the current ETag in the header.
    If the proxy does not have a local cached file for the service response, it calls the service, writes the WCF response to the local cache file, and writes it to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is the use of the ETag cache, which lets the proxy serve cached requests without any communication with the underlying service, and which it does completely generically - the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

        Key                                                 Value
        /WcfSampleService/PersonService.svc/rest/fetch/3    "28cd4796-76b8-451b-adfd-75cb50a50fa6"
        "28cd4796-76b8-451b-adfd-75cb50a50fa6"              /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key, learn that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag, and look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (plus the corresponding .header file, which contains the original service response headers, so the proxy response is exactly the same as the underlying service's). When the data is updated, we need to invalidate the ETag cache - which is why we need the reverse lookup. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with a concurrency of 100, and not sending any ETag headers in the requests: with the Node proxy I get 102 requests handled per second and an average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second and a mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison; but for similar scenarios, where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") - which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
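    To make the update-side bookkeeping concrete, here is a hedged C# sketch of the ETag rotation described above. The ITagCache interface and key layout are assumptions for illustration, not the sample's actual memcached API:

        using System;

        // Minimal stand-in for the shared tag cache (memcached in the sample); the
        // interface and key layout here are assumptions, not the sample's actual API.
        public interface ITagCache
        {
            string Get(string key);
            void Store(string key, string value);
            void Remove(string key);
        }

        public class ETagRotation
        {
            private readonly ITagCache _cache;

            public ETagRotation(ITagCache cache)
            {
                _cache = cache;
            }

            // After persisting an updated entity, rotate the forward and reverse entries.
            public void RotateETag(string oldETag)
            {
                string newETag = "\"" + Guid.NewGuid() + "\"";   // WCF-style quoted-GUID ETag

                string readUrl = _cache.Get(oldETag);            // reverse lookup: old ETag -> read URL
                _cache.Store(readUrl, newETag);                  // forward: URL -> current ETag
                _cache.Store(newETag, readUrl);                  // reverse: ETag -> URL
                _cache.Remove(oldETag);                          // retire the stale reverse entry
            }
        }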

    Read the article

  • Jersey non blocking client

    - by Pavel Bucek
    Although Jersey already has support for making asynchronous requests, it is implemented in the standard blocking way - every asynchronous request is handled by one thread, and that thread is released only after the request is completely processed. That is OK for lots of cases, but imagine how it works when you need to make lots of parallel requests. Of course you can limit the number of threads used for asynchronous requests (and that is a really wise thing to do - you do want to control your resources), but then you get another, perhaps unwelcome, consequence: processing time will obviously increase. There are a few projects trying to deal with that problem, commonly known as async HTTP clients. I didn't want to "reinvent the wheel", so I decided to use AHC - the Async Http Client made by Jeanfrancois Arcand. There is also an interesting implementation from Apache - HttpAsyncClient - but it is still in "very early stages of development", and the others weren't in similar or better shape than AHC.

    How does this work? Non-blocking clients let users make the same asynchronous requests as the standard approach, but the implementation is different - threads are better utilized and don't spend most of their time idle. Simply described: when you make a request (send it over the network), you are waiting for the reply from the other side. And there lies the main advantage of the non-blocking approach - it uses those threads for further work, like making other requests or processing responses. Idle time is minimized and your resources (threads) are far better used.

    Who should consider using this? Everyone who is making lots of asynchronous requests. I haven't done a proper benchmark yet, but some simple dumb tests are showing a huge improvement in cases where lots of concurrent asynchronous requests are made in a short period.

    Last but not least - this module is still experimental, so if you don't like something, or if you have ideas for improvements or any other feedback, feel free to comment on this blog post, send mail to [email protected], or contact me personally. All feedback is greatly appreciated!

    Maven dependency (will be present in the java.net maven 2 repo by the end of the day): http://download.java.net/maven/2/com/sun/jersey/experimental/jersey-non-blocking-client

        <dependency>
            <groupId>com.sun.jersey.experimental</groupId>
            <artifactId>jersey-non-blocking-client</artifactId>
            <version>1.9-SNAPSHOT</version>
        </dependency>

    Code snippet:

        ClientConfig cc = new DefaultNonBlockingClientConfig();
        cc.getProperties().put(NonBlockingClientConfig.PROPERTY_THREADPOOL_SIZE, 10); // default value, feel free to change
        Client c = NonBlockingClient.create(cc);
        AsyncWebResource awr = c.asyncResource("http://oracle.com");
        Future<ClientResponse> responseFuture = awr.get(ClientResponse.class);

        // or

        awr.get(new TypeListener<ClientResponse>(ClientResponse.class) {
            @Override
            public void onComplete(Future<ClientResponse> f) throws InterruptedException {
                ...
            }
        });

    Javadoc (temporary location, won't be updated): http://anise.cz/~paja/jersey-non-blocking-client/

    Read the article

  • [Ruby] Why do I have to URI.encode even safe characters for Net::HTTP requests?

    - by Matthias
    I was trying to send a GET request to Twitter (user ID replaced for privacy reasons) using Net::HTTP:

        url = URI.parse("http://api.twitter.com/1/friends/ids.json?user_id=12345")
        resp = Net::HTTP.get_response(url)

    This throws an exception in Net::HTTP:

        NoMethodError: undefined method `empty?' for #<URI::HTTP:0x59f5c04>
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:1470:in `initialize'

    Just by coincidence, I stumbled upon a similar code snippet which used URI.encode prior to URI.parse, so I copied that and tried again:

        url = URI.parse(URI.encode("http://api.twitter.com/1/friends/ids.json?user_id=12345"))
        resp = Net::HTTP.get_response(url)

    Now it works fine, but why? There are no reserved characters that need escaping in the URL I mentioned, so why do I have to call URI.encode for get_response to succeed?

    Read the article

  • Accessing py2exe program over network in Windows 98 throws ImportErrors

    - by darvids0n
    I'm running a py2exe-compiled python program from one server machine on a number of client machines (mapped to a network drive on every machine, say W:). For Windows XP and later machines, I have so far had zero problems with Python picking up W:\python23.dll (yes, I'm using Python 2.3.5 for W98 compatibility and all that). It will then use W:\zlib.pyd to decompress W:\library.zip containing all the .pyc files like os and such, which are then imported and the program runs with no problems.

    The issue I'm getting is on some Windows 98 SE machines (note: SOME Windows 98 SE machines; others seem to work with no apparent issues). What happens is, the program runs from W:, the W:\python23.dll is, I assume, found (since I'm getting Python ImportErrors, we'd need to be able to execute a Python import statement), but a couple of things don't work:

    1) If W:\library.zip contains the only copy of the .pyc files, I get "ZipImportError: can't decompress data; zlib not available" (nonsense, considering W:\zlib.pyd IS available and works fine with the XP and higher machines on the same network).

    2) If the .pyc files are actually bundled INSIDE the python exe by py2exe, OR put in the same directory as the .exe, OR put into a named subdirectory which is then set as part of the PYTHONPATH variable (e.g. W:\pylib), I get "ImportError: no module named os" (os is the first module imported, before sys and anything else). Come to think of it, sys.path wouldn't be available to search if os were imported before it, maybe? I'll try switching the order of those imports, but my question still stands: why is this a sporadic issue, working on some networks but not on others? And how would I force Python to find the files that are bundled inside the very executable I run?

    I have immediate access to the working Windows 98 SE machine, but I only get access to the non-working one (a customer of mine) every morning before their store opens. Thanks in advance!

    EDIT: Okay, big step forward. After debugging with PY2EXE_VERBOSE, the problem occurring on the specific W98SE machine is that it's not using the right path syntax when looking for imports. Firstly, it doesn't seem to read the PYTHONPATH environment variable (there may be a py2exe-specific one I'm not aware of, like PY2EXE_VERBOSE). Secondly, it only looks in one place before giving up (if the files are bundled inside the EXE, it looks there; if not, it looks in library.zip).

    EDIT 2: In fact, according to this, there is a difference between the sys.path in the Python interpreter and that of py2exe executables. Specifically, sys.path contains only a single entry: the full pathname of the shared code archive. Blah. No fallbacks? Not even the current working directory? I'd try adding W:\ to PATH, but py2exe doesn't conform to any sort of standard for locating system libraries, so it won't work.

    Now for the interesting bit. The path it tries to load atexit, os, etc. from is:

        W:\\library.zip\<module>.<ext>

    Note the single slash after library.zip, but the double slash after the drive letter (someone correct me if this is intended and should work). It looks like if this is a string literal, then since the slash isn't doubled it's read as an (invalid) escape sequence and the raw character is printed (giving W:\library.zipos.pyd, W:\library.zipos.dll, ... instead of with a slash); if it is NOT a string literal, the double slash might not be normpath'd automatically (as it should be), and so the double slash confuses the module loader.

    Like I said, I can't just set PYTHONPATH=W:\\library.zip\\ because it ignores that variable. It may be worth using sys.path.append at the start of my program, but hard-coding module paths is an absolute LAST resort, especially since the problem occurs in ONE configuration of an outdated OS. Any ideas? I have one, which is to normpath the sys.path... pity I need os for that. Another is to just append os.getenv('PATH') or os.getenv('PYTHONPATH') to sys.path... again, needing the os module. The site module also fails to initialise, so I can't use a .pth file.

    I also recently tried the following code at the start of the program:

        for pth in sys.path:
            fErr.write(pth)
            fErr.write(' to ')
            pth.replace('\\\\', '\\')  # Fix Windows 98 pathing issues
            fErr.write(pth)
            fErr.write('\n')

    But it can't load linecache.pyc, or anything else for that matter; it can't actually execute those commands from the looks of things. Is there any way to use built-in functionality which doesn't need linecache to modify sys.path dynamically? Or am I reduced to hard-coding the correct sys.path?

    Read the article

  • Would a Socket Connection Outperform an Interval-Based Database Sweep and Requests?

    - by Jascha
    I'm building a small chat application to add to an existing framework. There will only be 20-50 users MAX at any one time. I was wondering if I could get away with updating a cache file containing (semi) live chat data for whichever users happen to be chatting just by performing timed queries and regular AJAX refreshes for new data as opposed to learning how to open and maintain a socket connection. I'm sure there are existing chat plug-ins out there. But I just had a hell of a time installing one and I could see building the whole damn thing taking just as much time as plugging one in. Am I off to a bad start? Thanks in advance -J (p.s. this is a semi closed network behind a php login so security isn't a great concern)

    Read the article

  • Jquery Live Function

    - by marharépa
    Hi! I want to make this script work as a live() function. Please help me!

        $(".img img").each(function () {
            $(this).cjObjectScaler({
                destElem: $(this).parent(),
                method: "fit"
            });
        });

    The cjObjectScaler script (called in the HTML header) is this (thanks to Doug Jones):

        (function ($) {
            jQuery.fn.imagesLoaded = function (callback) {
                var elems = this.filter('img'),
                    len = elems.length;
                elems.bind('load', function () {
                    if (--len <= 0) {
                        callback.call(elems, this);
                    }
                }).each(function () {
                    // cached images don't fire load sometimes, so we reset src.
                    if (this.complete || this.complete === undefined) {
                        var src = this.src;
                        // webkit hack from http://groups.google.com/group/jquery-dev/browse_thread/thread/eee6ab7b2da50e1f
                        this.src = '#';
                        this.src = src;
                    }
                });
            };
        })(jQuery);

        /* CJ Object Scaler */
        (function ($) {
            jQuery.fn.cjObjectScaler = function (options) {

                /* user variables (settings)
                ***************************************/
                var settings = {
                    method: "fill",   // fit|fill
                    destElem: null,   // must be a jQuery object - the parent object to scale our object into
                    fade: 0           // if positive value, do hide/fadeIn
                };

                /* system variables
                ***************************************/
                var sys = {
                    // function parameters
                    version: '2.1.1',
                    elem: null
                };

                /* scale the image
                ***************************************/
                function scaleObj(obj) {
                    // declare some local variables
                    var destW = jQuery(settings.destElem).width(),
                        destH = jQuery(settings.destElem).height(),
                        ratioX, ratioY, scale, newWidth, newHeight,
                        borderW = parseInt(jQuery(obj).css("borderLeftWidth"), 10) + parseInt(jQuery(obj).css("borderRightWidth"), 10),
                        borderH = parseInt(jQuery(obj).css("borderTopWidth"), 10) + parseInt(jQuery(obj).css("borderBottomWidth"), 10),
                        objW = jQuery(obj).width(),
                        objH = jQuery(obj).height();

                    // check for valid border values. IE takes border size into account
                    // when calculating width/height, so just set to 0
                    borderW = isNaN(borderW) ? 0 : borderW;
                    borderH = isNaN(borderH) ? 0 : borderH;

                    // calculate scale ratios
                    ratioX = destW / jQuery(obj).width();
                    ratioY = destH / jQuery(obj).height();

                    // determine which algorithm to use
                    // (note: as written, both branches compute the same value)
                    if (!jQuery(obj).hasClass("cf_image_scaler_fill") && (jQuery(obj).hasClass("cf_image_scaler_fit") || settings.method === "fit")) {
                        scale = ratioX < ratioY ? ratioX : ratioY;
                    } else if (!jQuery(obj).hasClass("cf_image_scaler_fit") && (jQuery(obj).hasClass("cf_image_scaler_fill") || settings.method === "fill")) {
                        scale = ratioX < ratioY ? ratioX : ratioY;
                    }

                    // calculate our new image dimensions
                    newWidth = parseInt(jQuery(obj).width() * scale, 10) - borderW;
                    newHeight = parseInt(jQuery(obj).height() * scale, 10) - borderH;

                    // set new dimensions & offset
                    jQuery(obj).css({
                        "width": newWidth + "px",
                        "height": newHeight + "px"//,
                        // "position": "absolute",
                        // "top": (parseInt((destH - newHeight) / 2, 10) - parseInt(borderH / 2, 10)) + "px",
                        // "left": (parseInt((destW - newWidth) / 2, 10) - parseInt(borderW / 2, 10)) + "px"
                    }).attr({
                        "width": newWidth,
                        "height": newHeight
                    });

                    // do our fancy fade in, if user supplied a fade amount
                    if (settings.fade > 0) {
                        jQuery(obj).fadeIn(settings.fade);
                    }
                }

                /* set up any user passed variables
                ***************************************/
                if (options) {
                    jQuery.extend(settings, options);
                }

                /* main
                ***************************************/
                return this.each(function () {
                    sys.elem = this;

                    // if they don't provide a destObject, use parent
                    if (settings.destElem === null) {
                        settings.destElem = jQuery(sys.elem).parent();
                    }

                    // need to make sure the user set the parent's position.
                    // Things go bonkers if not set. valid values: absolute|relative|fixed
                    if (jQuery(settings.destElem).css("position") === "static") {
                        jQuery(settings.destElem).css({ "position": "relative" });
                    }

                    // if our object to scale is an image, we need to make sure it's loaded before we continue.
                    if (typeof sys.elem === "object" && typeof settings.destElem === "object" && typeof settings.method === "string") {
                        // if the user supplied a fade amount, hide our image
                        if (settings.fade > 0) {
                            jQuery(sys.elem).hide();
                        }
                        if (sys.elem.nodeName === "IMG") {
                            // to fix the weird width/height caching issue we set the image dimensions to be auto
                            jQuery(sys.elem).width("auto");
                            jQuery(sys.elem).height("auto");
                            // wait until the image is loaded before scaling
                            jQuery(sys.elem).imagesLoaded(function () {
                                scaleObj(this);
                            });
                        } else {
                            scaleObj(jQuery(sys.elem));
                        }
                    } else {
                        console.debug("CJ Object Scaler could not initialize.");
                        return;
                    }
                });
            };
        })(jQuery);

    Read the article

  • Simplest Way to Process Basic HTTPS GET File Requests?

    - by stormin986
    All I need to do is download some basic text-based and image files from a web server that has a self-signed SSL certificate. I have been trying to figure out how to use HttpClient to do this, but getting the SSL to work is a nightmare that seems to be way too much trouble for such a simple task. Is there a better way to perform these file downloads? Perhaps through a WebView or Browser feature? Reinventing the wheel of making a simple HTTPS GET request is a major pain, and is significantly holding up my development schedule.
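    For comparison, one common pattern is to trust the one known self-signed certificate by pinning its hash, rather than disabling validation wholesale. A hedged sketch in C#/.NET terms (the thumbprint and URL are placeholders; on Android the analogous move is wiring a custom TrustManager/SSLSocketFactory into HttpClient):

        using System;
        using System.Net;
        using System.Net.Security;

        class PinnedDownload
        {
            // Placeholder: the hex thumbprint of the server's self-signed certificate.
            const string PinnedThumbprint = "0000000000000000000000000000000000000000";

            static void Main()
            {
                // Accept normally-valid certs, plus the one pinned self-signed cert.
                ServicePointManager.ServerCertificateValidationCallback =
                    (sender, cert, chain, errors) =>
                        errors == SslPolicyErrors.None ||
                        cert.GetCertHashString() == PinnedThumbprint;

                using (var client = new WebClient())
                {
                    Console.WriteLine(client.DownloadString("https://example.com/data.txt"));
                }
            }
        }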

    Read the article

  • How do I ensure that SOAP requests from a flash client to my ASP server are coming from the flash client?

    - by Gary Benade
    I have a Flash-based game that has a high-score system implemented with a SOAP service. There are prizes involved, and I want to prevent someone from using Firebug or similar to discover the web service path and submit fake scores. I considered using some kind of encryption on the data, but I am aware that someone could decompile the swf and work out how I did it. I also considered using an IP whitelist, but since the incoming data will come from the user's IP and not the server's, that won't work. (I'm sure I'm missing something obvious here...) I know that there is a tried and tested solution for this, but I don't seem to be asking Google the right questions to get to it. Any help and suggestions will be appreciated, thank you.
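    For context, the tried-and-tested shape of the answer is usually a signed request: the server issues a short-lived secret when the game session starts, the client signs each score submission, and the server verifies the signature and rejects replays. A hedged C# sketch of the server-side check (all names are illustrative; this doesn't by itself defeat swf decompilation, which is exactly why the secret should be per-session rather than baked into the swf):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class ScoreSignature
        {
            // Verify that the posted score was signed with the short-lived session
            // secret the server issued when the game session started (names illustrative).
            public static bool IsValid(string sessionId, int score, string nonce,
                                       string clientSignature, string sessionSecret)
            {
                string payload = sessionId + "|" + score + "|" + nonce;
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sessionSecret)))
                {
                    byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
                    string expected = BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();

                    // Also check server-side: nonce unused so far, session started recently,
                    // and the score is plausible for the session's duration.
                    return expected == clientSignature;
                }
            }
        }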

    Read the article

  • how to do iis 7 url rewrite to avoid processing static html file requests for an aspnet mvc application?

    - by ROHITH
    Currently I am developing an ASP.NET MVC application on IIS 5.5, so I have set up wildcard mapping. I know that with wildcard mapping, all requests go through the ASP.NET execution pipeline (correct me if I am wrong), and in IIS 7 or above we have a built-in URL rewrite engine. What I want to know is: in IIS 7 or above, will a request for a static HTML page still go through the ASP.NET MVC execution pipeline?
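    For what it's worth, even when everything is mapped through ASP.NET, MVC's routing can be told to leave static files alone; a hedged sketch (the ignore pattern and file extensions are illustrative):

        using System.Web.Mvc;
        using System.Web.Routing;

        public static class RouteConfigSketch
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                // Let IIS serve static files directly instead of running them
                // through the MVC handler (constraint pattern is illustrative).
                routes.IgnoreRoute("{*staticfile}",
                    new { staticfile = @".*\.(html|htm|css|js|gif|jpg)(/.*)?" });

                routes.MapRoute(
                    "Default",
                    "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional });
            }
        }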

    Read the article

  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server: moving from old to new hardware, or moving from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is a Microsoft Exchange (email) server upgrade in progress. It is being handled by a different team, so I'm not wholly sure of the details, but we are in a situation where there are currently two Exchange email servers - the old one and the new one. Users' mailboxes are being transferred in a planned process, but as we approach the old server being turned off we also have to make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages, etc.

    My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places where the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help, in my opinion. It's a nightmare of back-and-forth settings changes and it stinks. I wasn't looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL instances for all of the accounts I have set up. So I did what any Englishman with a shed would do: I decided to take it apart and see if I could fix it another way.

    I took a guess that we would be working in MSDB, and Books OnLine was remarkably helpful - amongst a lot of information it told me about a couple of procedures that can be used to interrogate DBMail settings.

        USE [msdb] -- It's where all the good stuff is kept
        GO
        EXEC dbo.sysmail_help_profile_sp;
        EXEC dbo.sysmail_help_account_sp;

    Both of these procedures take optional parameters with the same names - ID and Name. If you provide an ID or a name, the results you get back are for that specific Profile or Account; otherwise you get details of all Profiles and Accounts on the server you are connected to. As you can see, the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com.

    Now, it appears that the procedure we are looking at gets its data from the sysmail_account and sysmail_server tables; you can get the same results the stored procedure provides if you run the code below.

        SELECT [account_id] ,
               [name] ,
               [description] ,
               [email_address] ,
               [display_name] ,
               [replyto_address] ,
               [last_mod_datetime] ,
               [last_mod_user]
        FROM dbo.sysmail_account AS sa;

        SELECT [account_id] ,
               [servertype] ,
               [servername] ,
               [port] ,
               [username] ,
               [credential_id] ,
               [use_default_credentials] ,
               [enable_ssl] ,
               [flags] ,
               [last_mod_datetime] ,
               [last_mod_user] ,
               [timeout]
        FROM dbo.sysmail_server AS sms;

    Now, we have no real idea how these tables are linked, and whether making an update directly to one or other of them will do what we want or will entirely cripple our ability to send email from SQL Server, so we won't touch those tables with any UPDATE TSQL. So, back to Books OnLine, where we find sysmail_update_account_sp. It's exactly what we need. The examples in BOL take the form (as below) of having every parameter explicitly defined.

    Not wanting to totally obliterate the existing values by failing to pass values for all of the parameters, I set to writing some code to gather the existing data from the tables, rewrite the SMTP server name, and then execute the resulting TSQL.

        IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL
            DROP TABLE #sysmailprofiles
        GO
        CREATE TABLE #sysmailprofiles
            (
              account_id INT ,
              [name] VARCHAR(50) ,
              [description] VARCHAR(500) ,
              email_address VARCHAR(500) ,
              display_name VARCHAR(500) ,
              replyto_address VARCHAR(500) ,
              servertype VARCHAR(10) ,
              servername VARCHAR(100) ,
              port INT ,
              username VARCHAR(100) ,
              use_default_credentials VARCHAR(1) ,
              ENABLE_ssl VARCHAR(1)
            )

        INSERT [#sysmailprofiles]
               ( [account_id] ,
                 [name] ,
                 [description] ,
                 [email_address] ,
                 [display_name] ,
                 [replyto_address] ,
                 [servertype] ,
                 [servername] ,
                 [port] ,
                 [username] ,
                 [use_default_credentials] ,
                 [ENABLE_ssl]
               )
               EXEC [dbo].[sysmail_help_account_sp]

        DECLARE @TSQL NVARCHAR(1000)

        SELECT TOP 1
               @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = '
               + CAST([s].[account_id] AS VARCHAR(20))
               + ', @account_name = ''' + [s].[name] + ''''
               + ', @email_address = N''' + [s].[email_address] + ''''
               + ', @display_name = N''' + [s].[display_name] + ''''
               + ', @replyto_address = N''' + s.replyto_address + ''''
               + ', @description = N''' + [s].[description] + ''''
               + ', @mailserver_name = ''NEWSMTP.contoso.com'''
               + ', @mailserver_type = ' + [s].[servertype]
               + ', @port = ' + CAST([s].[port] AS VARCHAR(20))
               + ', @username = ' + COALESCE([s].[username], '''''')
               + ', @use_default_credentials =' + CAST(s.[use_default_credentials] AS VARCHAR(1))
               + ', @enable_ssl =' + [s].[ENABLE_ssl]
        FROM [#sysmailprofiles] AS s
        WHERE [s].[servername] = 'SMTP.Contoso.com'

        SELECT @tsql

        EXEC [sys].[sp_executesql] @tsql

    This worked well for me, and testing the email function (EXEC dbo.sp_send_dbmail) afterwards showed that the settings were indeed using our new Exchange server. It was only later, in writing this blog, that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books OnLine might intimate, you can do this, and only the values for the parameters specified get changed: if a parameter is not specified in the execution of the procedure, its value remains unchanged. This renders most of the above script unnecessary, as I could have simply specified the account_id that I want to amend and the new value for the parameter I want to update.

        EXEC sysmail_update_account_sp @account_id = 1, @mailserver_name = 'NEWSMTP.Contoso.com'

    This wasn't going to be the main reason for this post - it was meant to describe how to capture values from a stored procedure and use them in dynamic TSQL - but instead we are here, (re)learning the fact that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server, but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on examples to fully understand the code they are working with. I think the author(s) of this part of Books OnLine missed an opportunity to include a third example with fewer than all parameters specified, to give a lead to this method existing.

    Read the article

  • Python: How do I create a reference to a reference?

    - by KCArpe
    Hi, I am traditionally a Perl and C++ programmer, so apologies in advance if I am misunderstanding something trivial about Python! I would like to create a reference to a reference. Huh? OK. All objects in Python are actually references to the real object. So, how do I create a reference to this reference? Why do I need/want this? I am overriding sys.stdout and sys.stderr to create a logging library, and I would like a (second-level) reference to sys.stdout. If I could create a reference to a reference, then I could create a generic logger class whose init function receives a reference to a file-handle reference that will be overridden, e.g. sys.stdout or sys.stderr. Currently, I must hard-code both values. Cheers, Kevin
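    For what it's worth, the usual workaround is another level of indirection you supply yourself: give the logger something that resolves the stream at use time rather than at construction time. A sketch of the idea in C# (illustrative only; the Python equivalent would be passing a lambda, or looking sys.stdout up by attribute inside the write method):

        using System;
        using System.IO;

        class Logger
        {
            // Hold a resolver for the stream rather than the stream itself, so the
            // logger always sees the current value even after reassignment.
            private readonly Func<TextWriter> _resolve;

            public Logger(Func<TextWriter> resolve)
            {
                _resolve = resolve;
            }

            public void Log(string message)
            {
                _resolve().WriteLine("[log] " + message);
            }
        }

        class Program
        {
            static void Main()
            {
                var logger = new Logger(() => Console.Out); // re-reads Console.Out on every call

                logger.Log("goes to the original console writer");
                Console.SetOut(new StringWriter());         // "override" the stream, like sys.stdout
                logger.Log("now goes to the replacement writer");
            }
        }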

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately, none of these resources are unlimited, and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times, and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it can affect the overall performance of the entire system.

    In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool, and its size (thread count) is automatically decreased or increased (self-tuned). The new “self-tuning” system simplifies getting the proper number of threads and utilizing them.

    A work manager allows your applications to run concurrently in multiple threads. It is a mechanism that lets you manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or prioritize a request with a work manager, and then associate this work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. A default work manager is provided, and it should be sufficient for most applications; however, there can be cases where you want to change this default configuration. Your application(s) may be providing services that need a mixture of fast response times and long-running processes like batch updates. Beware, though: a wrong work manager configuration can bring a performance penalty where you expected an improvement.

    We can define/configure work managers at:

    •    Domain level: config.xml
    •    Application level: weblogic-application.xml
    •    Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application, use weblogic.xml)

    We can use the following predefined rules/constraints to manage the work:

    •    Fair Share Request Class: Specifies the average thread-use time required to process requests. The default is 50.
    •    Response Time Request Class: Specifies a response time goal in milliseconds.
    •    Context Request Class: Assigns request classes to requests based on context information.
    •    Min Threads Constraint: Guarantees the number of threads the server will allocate to requests.
    •    Max Threads Constraint: Limits the number of concurrent threads executing requests.
    •    Capacity Constraint: Causes the server to reject requests only when it has reached its capacity.

    Let's create a work manager for our application's long-running work. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager, and click Next. Enter a name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (wherever your long-running application is deployed) and click Finish. Click on the new work manager (MyWorkManager, say), open the Configuration tab, check Ignore Stuck Threads, and save. This will prevent WebLogic from treating long-running processes (those taking more than a specified time) as stuck, and will let them run to completion.

    Read the article

  • How do I sign requests reliably for the Last.fm api in C#?

    - by Arda Xi
    I'm trying to implement authorization through Last.fm. I'm submitting my arguments as a Dictionary to make the signing easier. This is the code I'm using to sign my calls:

        public static string SignCall(Dictionary<string, string> args)
        {
            IOrderedEnumerable<KeyValuePair<string, string>> sortedArgs = args.OrderBy(arg => arg.Key);
            string signature = sortedArgs.Select(pair => pair.Key + pair.Value)
                                         .Aggregate((first, second) => first + second);
            return MD5(signature + SecretKey);
        }

    I've checked the output in the debugger and it's exactly how it should be, but I'm still getting WebExceptions every time I try. Here's the code I use to generate the URL, in case it helps:

        public static string GetSignedURI(Dictionary<string, string> args, bool get)
        {
            var stringBuilder = new StringBuilder();
            if (get)
                stringBuilder.Append("http://ws.audioscrobbler.com/2.0/?");
            foreach (var kvp in args)
                stringBuilder.AppendFormat("{0}={1}&", kvp.Key, kvp.Value);
            stringBuilder.Append("api_sig=" + SignCall(args));
            return stringBuilder.ToString();
        }

    And sample usage to get a SessionKey:

        var args = new Dictionary<string, string>
        {
            { "method", "auth.getSession" },
            { "api_key", ApiKey },
            { "token", token }
        };
        string url = GetSignedURI(args, true);

    EDIT: Oh, and the code references an MD5 function implemented like this:

        public static string MD5(string toHash)
        {
            byte[] textBytes = Encoding.UTF8.GetBytes(toHash);
            var cryptHandler = new System.Security.Cryptography.MD5CryptoServiceProvider();
            byte[] hash = cryptHandler.ComputeHash(textBytes);
            return hash.Aggregate("", (current, a) => current + a.ToString("x2"));
        }
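    A hedged debugging suggestion rather than a fix: Last.fm reports the actual failure reason (invalid signature, bad token, etc.) in the body of the error response, and WebException hides that body unless you read it explicitly:

        using System;
        using System.IO;
        using System.Net;

        static class LastFmDebug
        {
            // Sketch: surface Last.fm's error body instead of just the WebException message.
            public static string GetResponseOrError(string url)
            {
                try
                {
                    using (var client = new WebClient())
                    {
                        return client.DownloadString(url);
                    }
                }
                catch (WebException ex)
                {
                    if (ex.Response == null) throw;
                    using (var reader = new StreamReader(ex.Response.GetResponseStream()))
                    {
                        // e.g. <lfm status="failed"><error code="13">Invalid method signature supplied</error></lfm>
                        return reader.ReadToEnd();
                    }
                }
            }
        }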

    Read the article

  • How to store or share live data between PHP Requests?

    - by Devyn
    Hi, I want to start a project for Facebook; the application will be a real-time multiplayer chess game. The problem I'm having is that I have no idea how to store the data when a player moves a piece, and then update the new position in player 2's browser. I'm going to use PHP and MySQL on the server side and jQuery for client rendering. The simplest idea is to store the data in XML or MySQL and regenerate the result in player 2's browser, but I know that when thousands of players are playing, that will not be an efficient way. Since I don't have time to study a new language for this project, I'm going to have to stick with PHP. I'm not going to use Flash either, because I want my client side lightweight and Flash-free. So is there any way that will solve my problems?

    Read the article

  • [Symfony 1.2: ckWebServicePlugin 3.0.0] Module name in SOAP requests, how to get rid of them?

    - by Henri
    When I generate a WSDL file with ./symfony webservice:generate-wsdl <application> <module> <url> (where <application> is 'frontend', <module> is 'soap' and <url> is 'http://localhost'), I get a nice soap.wsdl file which works like it should. Except the methods are not named 'justAMethod' but 'soapService_justAMethod' (where soapService is the module which holds the SOAP methods). How do I omit the module name in the SOAP method names? I know this is possible, since the previous release of the software had no module name in the SOAP method names.

    Read the article

  • How do you determine an acceptable response time for DB requests?

    - by qiq
    According to this discussion of Google App Engine on Hacker News, A DB (read) request takes over 100ms on the datastore. That's insane and unusable for about 90% of applications. How do you determine what is an acceptable response time for a DB read request? I have been using App Engine without noticing any issues with DB responsiveness. But, on the other hand, I'm not sure I would even know what to look for in that regard :)

    Read the article

  • List of all index & index columns in SQL Server DB

    - by Anton Gogolev
    How do I get a list of all indexes & index columns in SQL Server 2005+? The closest I could get is:

        select s.name, t.name, i.name, c.name
        from sys.tables t
        inner join sys.schemas s on t.schema_id = s.schema_id
        inner join sys.indexes i on i.object_id = t.object_id
        inner join sys.index_columns ic on ic.object_id = t.object_id
        inner join sys.columns c on c.object_id = t.object_id and ic.column_id = c.column_id
        where i.index_id > 0
          and i.type in (1, 2)            -- clustered & nonclustered only
          and i.is_primary_key = 0        -- do not include PK indexes
          and i.is_unique_constraint = 0  -- do not include UQ
          and i.is_disabled = 0
          and i.is_hypothetical = 0
          and ic.key_ordinal > 0
        order by ic.key_ordinal

    which is not exactly what I want. What I want is to list all user-defined indexes (which means no indexes that support unique constraints & primary keys) with all columns (ordered by how they appear in the index definition), plus as much metadata as possible.
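    For what it's worth, the same metadata can also be walked from managed code through SMO, which exposes considerably more per-index metadata as object properties. A hedged C# sketch (server and database names are placeholders, and it assumes the SMO assemblies installed with the SQL Server client tools):

        using System;
        using Microsoft.SqlServer.Management.Smo;

        class ListIndexes
        {
            static void Main()
            {
                var server = new Server("localhost");          // placeholder server name
                Database db = server.Databases["MyDatabase"];  // placeholder database name

                foreach (Table table in db.Tables)
                {
                    if (table.IsSystemObject) continue;

                    foreach (Index index in table.Indexes)
                    {
                        // Skip indexes backing primary keys and unique constraints,
                        // mirroring the is_primary_key/is_unique_constraint filters above.
                        if (index.IndexKeyType != IndexKeyType.None) continue;

                        foreach (IndexedColumn col in index.IndexedColumns)
                        {
                            Console.WriteLine("{0}.{1}  {2}  {3}{4}",
                                table.Schema, table.Name, index.Name, col.Name,
                                col.Descending ? " DESC" : "");
                        }
                    }
                }
            }
        }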

    Read the article

  • ASP.NET MVC - How to save some data between requests ?

    - by Tony
    Hi, I'm trying to solve this problem: when a user logs into a website (via a user control .ascx stored on the master page), the user's name is stored in the Page.User.Identity.Name property. OK, but how do I retrieve that user name in the controller? Is it possible without referencing the System.Security.Principal namespace in the controller? In other words, the controller must know which user wants to perform an action (e.g. change account data). I could store the name in an Html.Hidden control on each view, but I don't want a mess in my views.
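    For reference, the Controller base class already exposes the same principal the page sees, so nothing needs to be smuggled through the view; a minimal sketch:

        using System.Web.Mvc;

        public class AccountController : Controller
        {
            [Authorize] // guarantees an authenticated user, so Identity.Name is populated
            public ActionResult ChangeAccountData()
            {
                // Controller.User is the same IPrincipal the page sees as Page.User,
                // so no extra using directive for System.Security.Principal is needed.
                string userName = User.Identity.Name;

                ViewData["UserName"] = userName; // illustrative use only
                return View();
            }
        }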

    Read the article

  • How do you determine an acceptable response time for App Engine DB requests?

    - by qiq
    According to this discussion of Google App Engine on Hacker News, A DB (read) request takes over 100ms on the datastore. That's insane and unusable for about 90% of applications. How do you determine what is an acceptable response time for a DB read request? I have been using App Engine without noticing any issues with DB responsiveness. But, on the other hand, I'm not sure I would even know what to look for in that regard :)

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >