Search Results

Search found 10693 results on 428 pages for 'max requests'.

Page 395/428

  • ELMAH - Using custom error pages to collect user feedback

    - by vdh_ant
    Hey guys, I'm looking at using ELMAH for the first time, but have a requirement that needs to be met that I'm not sure how to go about achieving...

    Basically, I am going to configure ELMAH to work under ASP.NET MVC and get it to log errors to the database when they occur. On top of this I'll be using customErrors to direct the user to a friendly message page when an error occurs. Fairly standard stuff... The requirement is that on this custom error page I have a form which enables the user to provide extra information if they wish. Now the problem arises from the fact that at this point the error is already logged, and I need to associate the logged error with the user's feedback. Normally, if I were using my own custom implementation, after logging the error I would pass the ID of the error through to the custom error page so that an association can be made. But because of the way that ELMAH works, I don't think the same is quite possible. Hence I was wondering how people thought that one might go about doing this... Cheers

    UPDATE: My solution to the problem is as follows:

        public class UserCurrentConextUsingWebContext : IUserCurrentConext
        {
            private const string _StoredExceptionName = "System.StoredException.";
            private const string _StoredExceptionIdName = "System.StoredExceptionId.";

            public virtual string UniqueAddress
            {
                get { return HttpContext.Current.Request.UserHostAddress; }
            }

            public Exception StoredException
            {
                get { return HttpContext.Current.Application[_StoredExceptionName + this.UniqueAddress] as Exception; }
                set { HttpContext.Current.Application[_StoredExceptionName + this.UniqueAddress] = value; }
            }

            public string StoredExceptionId
            {
                get { return HttpContext.Current.Application[_StoredExceptionIdName + this.UniqueAddress] as string; }
                set { HttpContext.Current.Application[_StoredExceptionIdName + this.UniqueAddress] = value; }
            }
        }

    Then when the error occurs, I have something like this in my Global.asax:

        public void ErrorLog_Logged(object sender, ErrorLoggedEventArgs args)
        {
            var item = new UserCurrentConextUsingWebContext();
            item.StoredException = args.Entry.Error.Exception;
            item.StoredExceptionId = args.Entry.Id;
        }

    Then wherever you are later, you can pull the details out by:

        var item = new UserCurrentConextUsingWebContext();
        var error = item.StoredException;
        var errorId = item.StoredExceptionId;
        item.StoredException = null;
        item.StoredExceptionId = null;

    Note this isn't 100% perfect, as it's possible for the same IP to have multiple requests erroring at the same time. But the likelihood of that happening is remote. And this solution is independent of the session, which in our case is important; also some errors can cause sessions to be terminated, etc. Hence why this approach has worked nicely for us.
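
    To round out the flow, a minimal sketch of how the friendly error page might pick the stored id back up (the controller action, view wiring and repository are hypothetical names, not from the post):

        // Hypothetical MVC controller for the customErrors target page:
        public ActionResult ServerError()
        {
            var ctx = new UserCurrentConextUsingWebContext();
            ViewData["ErrorId"] = ctx.StoredExceptionId;  // render as a hidden field on the feedback form
            return View();
        }

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult ServerError(string errorId, string feedback)
        {
            // Associate the feedback with the ELMAH entry id in your own table.
            _feedbackRepository.Save(errorId, feedback);  // hypothetical repository
            return RedirectToAction("ThankYou");
        }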


  • Possible to do rounded corners in custom Progressbar progressDrawable?

    - by b-ryce
    I have a progress bar that is supposed to look like the attached image. I've made it a long way and I'm very close; the only part that isn't working is the rounded corners for the progressDrawable. In my version (notice, circled in red in my screenshot) the fill inside the white outline does not have rounded corners.

    I've found a couple of ways to make this work when the progress bar is filled with a shape, gradient, or color. BUT, I can't get it to work with an image as the progressDrawable. Here is my class that extends ProgressBar:

        public class RoundedProgressBar extends ProgressBar {
            private Paint paint;

            public RoundedProgressBar(Context context) {
                super(context);
                setup();
            }

            public RoundedProgressBar(Context context, AttributeSet attrs) {
                super(context, attrs);
                setup();
            }

            public RoundedProgressBar(Context context, AttributeSet attrs, int defStyle) {
                super(context, attrs, defStyle);
                setup();
            }

            protected void setup() {
                paint = new Paint();
            }

            @Override
            protected synchronized void onDraw(Canvas canvas) {
                // First draw the regular progress bar, then custom draw our text
                super.onDraw(canvas);
                paint.setColor(Color.WHITE);
                paint.setStyle(Paint.Style.STROKE);
                RectF r = new RectF(0, 0, getWidth() - 1, getHeight() - 1);
                canvas.drawRoundRect(r, getHeight() / 2, getHeight() / 2, paint);
            }
        }

    Here is my selector:

        <layer-list xmlns:android="http://schemas.android.com/apk/res/android" >
            <item android:id="@android:id/background" android:drawable="@drawable/slider_track" />
            <item android:id="@android:id/secondaryProgress" android:drawable="@drawable/slider_track" />
            <item android:id="@android:id/progress" android:drawable="@drawable/slider_track_progress" />
        </layer-list>

    The images used in the selector are slider_track and slider_track_progress. Here is where I embed my progress bar in the layout for my activity:

        <com.android.component.RoundedProgressBar
            android:id="@+id/player_hp_bar"
            android:layout_width="fill_parent"
            android:layout_height="36dip"
            android:layout_marginLeft="30dip"
            android:layout_marginRight="30dip"
            android:max="100"
            style="?android:attr/progressBarStyleHorizontal"
            android:progressDrawable="@drawable/slider_layer_list"
            android:progress="20"
            android:maxHeight="12dip"
            android:minHeight="12dip" />

    Anyone know how to make this work?
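
    For comparison: if the bitmap fill can be replaced by a drawn fill, the structure the platform's own progress_horizontal.xml uses does give rounded corners. A sketch of such a progress item (colors and radius are assumptions):

        <item android:id="@android:id/progress">
            <clip>
                <shape android:shape="rectangle">
                    <corners android:radius="6dip" />
                    <gradient android:startColor="#FF9900" android:endColor="#FFCC66" android:angle="90" />
                </shape>
            </clip>
        </item>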


  • Boggling Direct3D9 dynamic vertex buffer Lock crash/post-lock failure on Intel GMA X3100.

    - by nj
    Hi, for starters I'm a fairly seasoned graphics programmer, but as we all know, everyone makes mistakes. Unfortunately the codebase is a bit too large to start throwing sensible snippets here, and re-creating the whole situation in an isolated CPP/codebase is too tall an order -- for which I am sorry, I do not have the time. I'll do my best to explain. B.t.w., I will of course supply specific pieces of code if someone wonders how I'm handling this-or-that!

    As with all resources in the D3DPOOL_DEFAULT pool, when the device context is taken away from you, you'll sooner or later have to reset your resources. I've built a mechanism to handle this for all relevant resources that's been working for years; but that fact notwithstanding, I've of course checked, asserted and doubted every assumption since this bug came to light.

    What happens is as follows: I have a rather large dynamic vertex buffer, exact size 18874368 bytes. This buffer is locked (and discarded fully, using the D3DLOCK_DISCARD flag) each frame prior to generating dynamic geometry (isosurface-related, f.y.i.) into it. This works fine until, of course, I start to reset. It might take 1, 2 or 5 resets to set off a bug that causes an access violation either on the pointer returned by the Lock() operation on the renewed resource, or a plain crash inside the D3D9 DLL's Lock() call -- at a somewhat similar address, but without the offset tacked on in the first case, because in that case we're somewhere halfway through writing.

    I've tested this on other hardware and upgraded my GMA X3100 drivers (using a MacBook with Boot Camp) to the latest ones, but I can't reproduce it on any other machine and I'm at a loss about what's wrong here. I have tried to reproduce a similar situation with a similar buffer (I've got a large scratch pad of the same type I filled with quads) and beyond a certain number of bytes it started to behave likewise.

    I'm not asking for a solution here, but I'm very interested whether there are other developers here who have battled with the same foe, or maybe some who can point me in some insightful direction, maybe ask some questions that might shed a light on what I may or may not be overlooking.

    Another interesting artifact is that the vertex buffer starts to bug if I supply both D3DLOCK_DISCARD and D3DLOCK_NOOVERWRITE together, which, even though not very logical (you're not going to overwrite if you've just discarded all), gives graphics glitches.

    Thanks, and any corrections are more than welcome. Niels

    p.s. - A friend of mine raised the valid point that it is a huge buffer for onboard video RAM and it's being at least double- or triple-buffered internally due to its dynamic nature. On the other hand, the debug output (D3D9 debug DLL + max. warning output) remains silent.

    p.s. 2 - Had it tested on more machines and it still works -- it's probably a matter of circumstance: the huge dynamic, internally double/triple-buffered buffer, not a lot of memory, and drivers that don't complain when they should... Unless someone has a better suggestion; I'd still love to hear it :)
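
    For readers following along, the canonical dynamic-VB locking pattern this code should reduce to looks like the sketch below (not the poster's code): DISCARD once at the start of a frame, NOOVERWRITE for appends within the frame, never both flags together.

        void* pData = NULL;

        // Start of frame: throw the old contents away and get a fresh buffer.
        if (SUCCEEDED(pVB->Lock(0, 0, &pData, D3DLOCK_DISCARD)))
        {
            memcpy(pData, vertices, vertexBytes);
            pVB->Unlock();
        }

        // Later in the same frame: promise not to touch bytes already in flight.
        if (SUCCEEDED(pVB->Lock(offsetBytes, moreBytes, &pData, D3DLOCK_NOOVERWRITE)))
        {
            memcpy(pData, moreVertices, moreBytes);
            pVB->Unlock();
        }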


  • jQuery autocomplete is not working in ASP.NET

    - by Abu Hamzah
    Is there something wrong in the below code? It's not firing at all.

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="HostPage.aspx.cs" Inherits="HostPage" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Untitled Page</title>
            <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script>
            <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js"></script>
            <link rel="stylesheet" href="http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.css" type="text/css" />
            <script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.js"></script>
        </head>
        <body>
            <form id="form1" runat="server">
            <script type="text/javascript">
                $(document).ready(function() {
                    $("#<%=txtHost.UniqueID %>").autocomplete("HostService.asmx/GetHosts", {
                        dataType: 'json',
                        contentType: "application/json; charset=utf-8",
                        parse: function(data) {
                            var rows = Array();
                            debugger
                            for (var i = 0; i < data.length; i++) {
                                rows[i] = { data: data[i], value: data[i].LName, result: data[i].LName };
                            }
                            return rows;
                        },
                        formatItem: function(row, i, max) {
                            return data.LName + ", " + data.FName;
                        }
                    });
                });
            </script>
            <div>
                <asp:Label runat="server" ID='Label4'>Host Name:</asp:Label>
                <asp:TextBox ID="txtHost" runat='server'></asp:TextBox>
                <p>
            </div>
            </form>
        </body>
        </html>
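
    One concrete thing worth checking, separate from why the request never fires: formatItem references "data", which is only the parse() callback's parameter and is out of scope there. Assuming the plugin passes each row's data object as the first argument, a sketch of the corrected option:

        $("#<%=txtHost.UniqueID %>").autocomplete("HostService.asmx/GetHosts", {
            // ... dataType, contentType and parse as above ...
            formatItem: function(row, i, max) {
                return row.LName + ", " + row.FName;
            }
        });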


  • HttpSendRequest not getting latest file from server

    - by Doug Kavendek
    I am having an issue with the HTTP requests in my app: if the remote file is the same size as the local file (even though its modified time is different, as its contents have been changed), attempts to download it return quickly and the newer file is not downloaded.

    In short, the process I am following is: setting up an HTTP connection with the INTERNET_FLAG_RESYNCHRONIZE flag and calling HttpSendRequest(); then checking the HTTP status code and finding it to be "200".

    If the remote file is updated but remains the same size as the local copy: the local file is unchanged after running the app. If I call HttpQueryInfo() with HTTP_QUERY_LAST_MODIFIED after sending the request, it gives me the actual last-modified time of the server's file, which I can see is different from the local file I am trying to have it overwrite.

    If the remote file is updated and the file size becomes different from the local copy: it is downloaded and overwrites the local copy as expected.

    Here's a fairly abridged version of the code, with helpers and error checking cut out:

        // szAppName = our app name
        HINTERNET hInternetHandle = InternetOpen( szAppName,
            INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0 );

        // szServerName = our server name
        hInternetHandle = InternetConnect( hInternetHandle, szServerName,
            INTERNET_DEFAULT_HTTP_PORT, NULL, NULL, INTERNET_SERVICE_HTTP, NULL, 0 );

        // szPath = the file to download
        LPCSTR aszDefault[2] = { "*/*", NULL };
        DWORD dwFlags = 0
            | INTERNET_FLAG_IGNORE_REDIRECT_TO_HTTP
            | INTERNET_FLAG_IGNORE_REDIRECT_TO_HTTPS
            | INTERNET_FLAG_KEEP_CONNECTION
            | INTERNET_FLAG_NO_AUTH
            | INTERNET_FLAG_NO_AUTO_REDIRECT
            | INTERNET_FLAG_NO_COOKIES
            | INTERNET_FLAG_NO_UI
            | INTERNET_FLAG_RESYNCHRONIZE;

        HINTERNET hHandle = HttpOpenRequest( hInternetHandle, "GET", szPath,
            NULL, NULL, aszDefault, dwFlags, 0 );

        DWORD dwTimeOut = 10 * 1000; // In milliseconds
        InternetSetOption( hInternetHandle, INTERNET_OPTION_CONNECT_TIMEOUT,
            &dwTimeOut, sizeof( dwTimeOut ) );
        InternetSetOption( hInternetHandle, INTERNET_OPTION_RECEIVE_TIMEOUT,
            &dwTimeOut, sizeof( dwTimeOut ) );
        InternetSetOption( hInternetHandle, INTERNET_OPTION_SEND_TIMEOUT,
            &dwTimeOut, sizeof( dwTimeOut ) );

        DWORD dwRetries = 5;
        InternetSetOption( hInternetHandle, INTERNET_OPTION_CONNECT_RETRIES,
            &dwRetries, sizeof( dwRetries ) );

        HttpSendRequest( hInternetHandle, NULL, 0, NULL, 0 );

    Since I have found I can query the remote file's last-modified time, and find it to be accurate, I know it's actually getting to the server. I thought that specifying INTERNET_FLAG_RESYNCHRONIZE would force the file to resync if it's out of date. Do I have it all wrong? Is this just how it's supposed to work?
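
    If a conditional GET is being answered from a cache somewhere along the way, one hedge worth trying is to bypass caching entirely for this request. A sketch using two documented WinInet flags (INTERNET_FLAG_RELOAD always fetches from the origin server; INTERNET_FLAG_NO_CACHE_WRITE keeps the response out of the local cache):

        DWORD dwFlags = 0
            | INTERNET_FLAG_RELOAD
            | INTERNET_FLAG_NO_CACHE_WRITE
            | INTERNET_FLAG_IGNORE_REDIRECT_TO_HTTP
            | INTERNET_FLAG_IGNORE_REDIRECT_TO_HTTPS
            | INTERNET_FLAG_KEEP_CONNECTION
            | INTERNET_FLAG_NO_AUTH
            | INTERNET_FLAG_NO_AUTO_REDIRECT
            | INTERNET_FLAG_NO_COOKIES
            | INTERNET_FLAG_NO_UI;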


  • .htaccess mod_rewrite issue

    - by Orhan Toy
    In almost any project I work on, some issue with .htaccess occurs. I usually just find the easiest solution and leave it, because I don't have much knowledge or understanding of Apache, servers etc. But this time I thought I would ask you guys. These are the files and folders in my (simplified) setup:

        /modrewrite-test
            .htaccess
            /config
            /inc
            /lib
            /public_html
                .htaccess
                /cms
                    /navigation
                        index.php
                        edit.php
                    /pages
                        index.php
                        edit.php
                login.php
                page.php

    The "config", "inc" and "lib" folders are meant to be "hidden" from the root of the website. I try to accomplish this by making a .htaccess file in the root that redirects the user to "public_html". That .htaccess file contains this:

        RewriteEngine On
        RewriteRule (.*) public_html/$1

    This works perfectly. If I type "http://localhost/modrewrite-test/login.php" in my browser, I end up at public_html/login.php, which is my intention. So this works fine. The .htaccess file in "public_html" contains this:

        RewriteEngine On

        # Root
        RewriteRule ^$ page.php [L]

        # Login
        RewriteRule ^(admin)|(login)\/?$ login.php [L]

        # Page (if not a file/directory)
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ page.php?url=$1 [L]

    The first rewrite just sends me to public_html/page.php if I try to reach "http://localhost/modrewrite-test/". The next rewrite is just for the convenience of users trying to log in -- if they try to reach "http://localhost/modrewrite-test/admin" or "http://localhost/modrewrite-test/login" they will end up at the login.php file. The third and last rewrite handles the rest of the requests: if I try to reach "http://localhost/modrewrite-test/bla/bla/bla" it will just send me to public_html/page.php (with the 'url' GET variable set) instead of looking for the nested folders.

    All of these things work perfectly, but a minor issue occurs when I, for instance, try to reach "http://localhost/modrewrite-test/cms/navigation" without a slash at the end of the URL. When I try to reach that page, the browser is somehow redirected to "http://localhost/modrewrite-test/public_html/cms/navigation/". The correct page is shown, but why does it get redirected, and why does it add the "public_html" part to the URL? The desired behavior is that the URL stays intact and that the page public_html/cms/navigation/index.php is shown. The files and folders of the (simplified) setup can be found at http://highbars.com/modrewrite-test.zip
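
    What leaks the internal path here is usually mod_dir: after the internal rewrite lands on a directory, Apache issues its own external redirect to add the trailing slash, using the rewritten (public_html) path. A sketch of the usual workaround for the root .htaccess, adding the slash ourselves on the original URL first (the /modrewrite-test path is taken from the example URLs and is an assumption about the real layout):

        RewriteEngine On

        # If the target inside public_html is a directory, add the trailing slash
        # with our own external redirect before rewriting, so mod_dir never sees
        # the internal public_html/... path:
        RewriteCond %{DOCUMENT_ROOT}/modrewrite-test/public_html/$1 -d
        RewriteRule ^(.*[^/])$ $1/ [R=301,L]

        RewriteRule (.*) public_html/$1 [L]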


  • Server/configuration problem: a PHP script just dies with no error log & no reason

    - by Roberto
    Hi (first of all, thanks for your attention & sorry for my bad English, haha; also this is not a programming error, or that's what I think -- I think this is an error in some configuration of the server or something else, but I don't know what).

    I have a PHP script (running as a Linux process, not in the web browser) that sends SMS via SMPP on port 2055 (using sockets in PHP) & then inserts around 10,000 rows into a MySQL database; the script gets its data from an XML file. First it was running on a shared server (Hostgator is our hosting provider) & at the beginning it worked fine, with no trouble, but 5 months later an error appeared: the process just dies for no reason. The script had only sent & inserted 700 rows into the table of the database & the process didn't show any warning or error; nothing appears in the error logs, & I didn't make any change to the script.

    Hostgator never helped us, haha, so we decided to move the script from the shared server to a dedicated server. I thought it was a memory problem or something like that, but when we moved the script to the dedicated server the problem just got worse: the script dies when it has sent & inserted only 40 to 50 rows into the database.

    Information about this error:

    - The shared server is on Red Hat 4.1.2-46 & the dedicated server is on CentOS 5.4.
    - I have commented out the line that sends the SMS, & the problem remains. On the shared server the script at first died after inserting approx. 700 rows, & now it dies after inserting 2500 rows -- better, but we didn't change anything. On the dedicated server the script dies after inserting around 40 rows.
    - The script, before it dies, changes to a zombie process & we don't know why.
    - The memory usage appears to be 0.3%, and the CPU usage 0.7% to 1%.
    - I have changed the max memory limit of PHP to 128Mb, and even to -1 (so PHP won't have any limit), but the problem remains.
    - We have a limit of 50 simultaneous MySQL connections, so I don't think that's the problem.
    - I'm using mysqli to connect from PHP to MySQL.
    - Hostgator reports that they haven't made any change or update to the servers.

    What could be the problem? What should I do? What should I search for? Is there something in the logic I'm missing? What steps do I have to follow when managing & debugging problems with processes on Linux? Thank you very much; I think this is not a programming problem, but you have more experience than me, you can tell me. Thanks!!! Bye!!! :)
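
    Since nothing reaches the error logs, a first diagnostic step is to make sure nothing fatal can die silently. A sketch of a preamble for the script (PHP 5.3+ for the closure; the log path is an assumption):

        <?php
        // Surface every error and record memory state at shutdown.
        error_reporting(E_ALL);
        ini_set('display_errors', '1');            // CLI process, so printing is fine
        ini_set('log_errors', '1');
        ini_set('error_log', '/tmp/sms_script.log');

        register_shutdown_function(function () {
            $err = error_get_last();               // catches fatals that bypass handlers
            error_log('shutdown: ' . var_export($err, true)
                . ' | peak memory: ' . memory_get_peak_usage(true));
        });
        ?>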


  • Ajax doesn't trigger a change event on a WebKit-based browser

    - by user319464
    I have adapted a jQuery plugin to fulfill my need to send GET requests to servers as a way of "pinging" them. I've also added some JavaScript code for fancy features: depending on the changed value in a span that the jQuery plugin updates, it changes the icon accordingly. To make it all work, essentially, when Ajax gets a "complete" event, I force an "onChange" event on the span, triggering the JavaScript validation function that changes the status icons.

    Here is the code of my slightly modified jQuery plugin:

        /**
         * ping for jQuery
         *
         * Adapted by Carroarmato0 (to actually work instead of randomly "pinging" nowhere instead of faking)
         *
         * @auth Jessica
         * @link http://www.skiyo.cn/demo/jquery.ping/
         */
        (function($) {
            $.fn.ping = function(options) {
                var opts = $.extend({}, $.fn.ping.defaults, options);
                return this.each(function() {
                    var ping, requestTime, responseTime;
                    var target = $(this);
                    var server = target.html();
                    target.html('<img src="img/loading.gif" alt="loading" />');

                    function ping() {
                        $.ajax({
                            url: 'http://' + server,
                            type: 'GET',
                            dataType: 'html',
                            timeout: 30000,
                            beforeSend: function() {
                                requestTime = new Date().getTime();
                            },
                            complete: function() {
                                responseTime = new Date().getTime();
                                ping = Math.abs(requestTime - responseTime);
                                if (ping > 2000) {
                                    target.text('niet bereikbaar');
                                } else {
                                    target.text(ping + opts.unit);
                                }
                                target.change();
                            }
                        });
                    }

                    ping();
                    opts.interval != 0 && setInterval(ping, opts.interval * 1000);
                });
            };

            $.fn.ping.defaults = {
                interval: 3,
                unit: 'ms'
            };
        })(jQuery);

    target.change(); is the call that triggers the "onchange" event on the span:

        echo "    <td class=\"center\"><span id=\"ping$pingNb\" onChange=\"checkServerIcon(this)\" >" . $server['IP'] . "</span></td>";

    In Firefox this works: checkServerIcon(this) gets executed and receives the span object.

        function checkServerIcon(object) {
            var delayText = object.innerHTML;
            var delay = delayText.substring(0, delayText.length - 2);
            if ( isInteger(delay) ) {
                object.parentNode.previousSibling.parentNode.getElementsByTagName('img')[0].src = 'img/servers/enable_server.png';
            } else {
                if (delay == "bezig.") {
                    object.parentNode.previousSibling.parentNode.getElementsByTagName('img')[0].src = 'img/servers/search_server.png';
                } else {
                    object.parentNode.previousSibling.parentNode.getElementsByTagName('img')[0].src = 'img/servers/desable_server.png';
                }
            }
        }

    My guess would be that there's something different in the way object.parentNode.previousSibling.parentNode... works in WebKit browsers.
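
    One thing worth ruling out before blaming the DOM traversal: bind the handler through jQuery instead of the inline onChange attribute, so that target.change() is guaranteed to reach it regardless of how each engine treats change events on a non-form element like a span. A sketch:

        $(document).ready(function() {
            $('span[id^="ping"]').bind('change', function() {
                checkServerIcon(this);
            });
        });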


  • Image upload problem

    - by Bluemagica
    I wrote a function to resize and upload images... it works, but only for one image. So if I call the function three times, I end up with 3 copies of the last image.

        function uploadImage($name, $width, $height, $size, $path = 'content/user_avatars/')
        {
            // Handle image upload
            $upload_error = 0;

            // Picture
            $img = $_FILES[$name]['name'];
            if ($img) {
                $file = stripslashes($_FILES[$name]['name']);
                $ext = strtolower(getExt($file));
                if ($ext != 'jpg' && $ext != 'jpeg' && $ext != 'png' && $ext != 'gif') {
                    $error_msg = "Unknown extension";
                    $upload_error = 1;
                    return array($upload_error, $error_msg);
                }
                if (filesize($_FILES[$name]['tmp_name']) > $size * 1024) {
                    $error_msg = "Max file size of " . ($size * 1024) . "kb exceeded";
                    $upload_error = 2;
                    return array($upload_error, $error_msg);
                }
                $newFile = time() . '.' . $ext;
                resizeImg($_FILES[$name]['tmp_name'], $ext, $width, $height);
                $store = copy($_FILES[$name]['tmp_name'], $path . $newFile);
                if (!$store) {
                    $error_msg = "Uploading failed";
                    $upload_error = 3;
                    return array($upload_error, $error_msg);
                } else {
                    return array($upload_error, $newFile);
                }
            }
        }

        // Helper functions
        function getExt($str)
        {
            $i = strpos($str, ".");
            if (!$i) {
                return "";
            }
            $l = strlen($str) - $i;
            $ext = substr($str, $i + 1, $l);
            return $ext;
        }

        function resizeImg($file, $ext, $width, $height)
        {
            list($aw, $ah) = getimagesize($file);
            $scaleX = $aw / $width;
            $scaleY = $ah / $height;
            if ($scaleX > $scaleY) {
                $nw = round($aw * (1 / $scaleX));
                $nh = round($ah * (1 / $scaleX));
            } else {
                $nw = round($aw * (1 / $scaleY));
                $nh = round($ah * (1 / $scaleY));
            }
            $new_image = imagecreatetruecolor($nw, $nh);
            imagefill($new_image, 0, 0, imagecolorallocatealpha($new_image, 255, 255, 255, 127));
            if ($ext == 'jpg' || $ext == 'jpeg') {
                $src_image = imagecreatefromjpeg($file);
            } else if ($ext == 'gif') {
                $src_image = imagecreatefromgif($file);
            } else if ($ext == 'png') {
                $src_image = imagecreatefrompng($file);
            }
            imagecopyresampled($new_image, $src_image, 0, 0, 0, 0, $nw, $nh, $aw, $ah);
            if ($ext == 'jpg' || $ext == 'jpeg') {
                imagejpeg($new_image, $file, 100);
            } else if ($ext == 'gif') {
                imagegif($new_image, $file);
            } else if ($ext == 'png') {
                imagepng($new_image, $file, 9);
            }
            imagedestroy($src_image);
            imagedestroy($new_image);
        }

    I have a form with two upload fields, 'face_pic' and 'body_pic', and I want to upload these two to the server and resize them before storing. Any ideas?
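
    A likely cause of the "3 copies of the last image" symptom: time() has one-second resolution, so several uploads processed within the same request all get the same target name and overwrite each other. A sketch of a fix that makes the name unique per field:

        // uniqid() adds sub-second entropy; prefixing with the field name keeps
        // face_pic and body_pic from ever colliding.
        $newFile = uniqid($name . '_') . '.' . $ext;   // e.g. "face_pic_4f1a2b3c4d5e6.jpg"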


  • What is the best way to archive data in a relational database?

    - by GenericTypeTea
    I have a bit of an issue with a particular aspect of a program I'm working on. I need the ability to archive (fix) a table so that a change anywhere in the system will not affect the results it returns. This is the basic structure of what I need to fix:

        Recipe --> Recipe (as sub recipe)
        Recipe --> Ingredients

    So, if I fix a recipe, I need to ensure all the sub recipes (including all the sub recipes' sub recipes and so forth) are fixed, and all its ingredients are fixed. The problem is that the sub recipes and ingredients still need to be modifiable, as they are used by other recipes that are not fixed.

    I came up with a solution whereby I serialize (with protobuf-net) a master object that deals with the recipe and all the sub recipes and ingredients, and save the archive data to a table like the following:

        Archive {
            ReferenceId,     (i.e. RecipeId)
            ReferenceTypeId, (i.e. Recipe)
            ArchiveData varbinary(max)
        }

    Now, this works great and is almost perfect... however I totally forgot (I'd love to blame the agile development mentality, however this was just short-sightedness) that this information needs to be reported on. As far as I'm aware, I can't think how I could inflate the serialized data back into my Recipe object and use it in a report. I'm using the standard SQL 2005 Reporting Services at the moment.

    Alternatively, I guess I could do the following:

    1. Duplicate every table and tag the word "Archive" onto the end of the table name. This would give me an area of specific archive data... but ignoring my simplified example, there'd actually be about 15 tables duplicated.
    2. Add a nullable, non-foreign-key column called "CopiedFromId" to every table that contains fixed data, and duplicate every record that the recipe (and all its sub recipes and all their sub recipes) touches (sketched below).
    3. Create some sort of denormalised structure that could be restored from at a later date to the original, unfixed recipe. Although I think this would be like option 1 and involve a lot of extra tables.

    Anyway, I'm at a total loss and don't particularly like any of these ideas. Can anyone please advise the best course of action?

    EDIT: Or 4) create tables specific to what the report requires and populate them with the data when the user clicks the report button? This would mean about 4 extra tables for the report in question.
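
    If reporting pushes away from the blob, a rough sketch of what option 2 could look like (column names are illustrative, not from the original schema): fixed rows become frozen copies that remember their live source, so live rows stay editable and reports query real tables.

        ALTER TABLE Recipe ADD CopiedFromId INT NULL;   -- NULL = live, editable row

        -- "Fixing" a recipe copies it; sub recipes and ingredients get the same treatment.
        INSERT INTO Recipe (Name, Instructions, CopiedFromId)
        SELECT Name, Instructions, RecipeId
        FROM Recipe
        WHERE RecipeId = @recipeToFix;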


  • How can I test caching and cache busting?

    - by Nathan Long
    In PHP, I'm trying to steal a page from the Rails playbook (see 'Using Asset Timestamps' here):

        By default, Rails appends assets' timestamps to all asset paths. This allows you to set a
        cache-expiration date for the asset far into the future, but still be able to instantly
        invalidate it by simply updating the file (and hence updating the timestamp, which then
        updates the URL as the timestamp is part of that, which in turn busts the cache).

        It's the responsibility of the web server you use to set the far-future expiration date on
        cache assets that you need to take advantage of this feature. Here's an example for Apache:

            # Asset Expiration
            ExpiresActive On
            <FilesMatch "\.(ico|gif|jpe?g|png|js|css)$">
                ExpiresDefault "access plus 1 year"
            </FilesMatch>

    If you look at the source for a Rails page, you'll see what they mean: the path to a stylesheet might be "/stylesheets/scaffold.css?1268228124", where the numbers at the end are the timestamp when the file was last updated.

    So it should work like this:

    1. The browser says 'give me this page'.
    2. The server says 'here, and by the way, this stylesheet called scaffold.css?1268228124 can be cached for a year - it's not gonna change.'
    3. On reloads, the browser says 'I'm not asking for that css file, because my local copy is still good.'
    4. A month later, you edit and save the file, which changes the timestamp, which means that the file is no longer called scaffold.css?1268228124, because the numbers change.
    5. When the browser sees that, it says 'I've never seen that file! Give me a copy, please.' The cache is 'busted.'

    I think that's brilliant. So I wrote a function that spits out stylesheet and javascript tags with timestamps appended to the file names (see the sketch after this list), and I configured Apache with the statement above.

    Now: how do I tell if the caching and cache busting are working? I'm checking my pages with two plugins for Firebug: YSlow and Google Page Speed. Both seem to say that my files are caching: "Add expires headers" in YSlow and "Leverage browser caching" in Page Speed are both checked. But when I look at the Page Speed Activity, I see a lot of requests and waiting and no 'cache hits'. If I change my stylesheet and reload, I do see the change immediately. But I don't know if that's because the browser never cached it in the first place or because the cache is busted. How can I tell?
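
    For concreteness, a minimal sketch of what such a timestamping tag function can look like in PHP (the /stylesheets path is an assumption):

        <?php
        function stylesheet_tag($file) {
            $path = $_SERVER['DOCUMENT_ROOT'] . '/stylesheets/' . $file;
            $ts   = file_exists($path) ? filemtime($path) : 0;  // last-modified time as the cache-buster
            return '<link rel="stylesheet" type="text/css" href="/stylesheets/'
                 . $file . '?' . $ts . '" />';
        }

        echo stylesheet_tag('scaffold.css');  // e.g. .../scaffold.css?1268228124
        ?>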


  • HttpClient response handler always returns closed stream

    - by Alex Ciminian
    I'm new to Java development, so please bear with me. Also, I hope I'm not the champion of tl;dr :).

    I'm using HttpClient to make requests over HTTP (duh!) and I'd gotten it to work for a simple servlet that receives a URL as a query string parameter. I realized that my code could use some refactoring, so I decided to make my own HttpResponseHandler, to clean up the code, make it reusable and improve exception handling. I currently have something like this:

        public class HttpResponseHandler implements ResponseHandler<InputStream> {
            public InputStream handleResponse(HttpResponse response)
                    throws ClientProtocolException, IOException {
                int statusCode = response.getStatusLine().getStatusCode();
                InputStream in = null;
                if (statusCode != HttpStatus.SC_OK) {
                    throw new HttpResponseException(statusCode, null);
                } else {
                    HttpEntity entity = response.getEntity();
                    if (entity != null) {
                        in = entity.getContent();
                        // This works
                        // for (int i; (i = in.read()) >= 0;) System.out.print((char) i);
                    }
                }
                return in;
            }
        }

    And in the method where I make the actual request:

        HttpClient httpclient = new DefaultHttpClient();
        HttpGet httpget = new HttpGet(target);
        ResponseHandler<InputStream> httpResponseHandler = new HttpResponseHandler();
        try {
            InputStream in = httpclient.execute(httpget, httpResponseHandler);
            // This doesn't work
            // for (int i; (i = in.read()) >= 0;) System.out.print((char) i);
            return in;
        } catch (HttpResponseException e) {
            throw new HttpResponseException(e.getStatusCode(), null);
        }

    The problem is that the input stream returned from the handler is closed. I don't have any idea why, but I've checked it with the prints in my code (and no, I haven't used them both at the same time :). While the first print works, the other one gives a closed-stream error. I need InputStreams, because all my other methods expect an InputStream and not a String. Also, I want to be able to retrieve images (or maybe other types of files), not just text files.

    I can work around this pretty easily by giving up on the response handler (I have a working implementation that doesn't use it), but I'm pretty curious about the following:

    1. Why does it do what it does?
    2. How do I open the stream, if something closes it?
    3. What's the right way to do this, anyway :)?

    I've checked the docs and couldn't find anything useful regarding this issue. To save you a bit of Googling, here's the Javadoc and here's the HttpClient tutorial (Section 1.1.8 - Response handlers).

    Thanks, Alex
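
    A sketch of one way around this: execute() releases the connection (and with it the entity's live stream) as soon as the handler returns, so the handler has to copy the bytes out while the stream is still open and hand back a detached stream.

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.HttpStatus;
        import org.apache.http.client.ClientProtocolException;
        import org.apache.http.client.HttpResponseException;
        import org.apache.http.client.ResponseHandler;
        import org.apache.http.util.EntityUtils;

        public class HttpResponseHandler implements ResponseHandler<InputStream> {
            public InputStream handleResponse(HttpResponse response)
                    throws ClientProtocolException, IOException {
                int statusCode = response.getStatusLine().getStatusCode();
                if (statusCode != HttpStatus.SC_OK) {
                    throw new HttpResponseException(statusCode, null);
                }
                HttpEntity entity = response.getEntity();
                if (entity == null) {
                    return null;
                }
                // Consume the live stream into memory (works for images too)...
                byte[] bytes = EntityUtils.toByteArray(entity);
                // ...and return a stream that stays readable after the connection is gone.
                return new ByteArrayInputStream(bytes);
            }
        }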


  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to learn more detailed logic about how Informix IDS and SE's optimizers decide on the best route for processing a query, other than SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks 1st. What are the rest of them?

    If I'm not mistaken, Informix's ROWID is a SERIAL(INT), which allows for a max of 2G nrows, or maybe it uses INT8 for TBs of nrows? However, I think Oracle uses HEX values for ROWID. Too bad ROWID can't be used often, since a row's ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps it could be used for implementing the query-progress idea I mentioned in my "Begin viewing query results before query completes" question? For some reason, I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet a query's criteria, display its progress every 100, 200, 500 or 1,000 rows, give users the ability to cancel it at any time, and start displaying the qualifying rows as they are being put into the current list while it continues searching.

    This is just one example; perhaps we could think of other neat/useful features - the ingredients are more or less there. Perhaps we could fine-tune each query with more granularity than currently available? OLTP queries tend to be mostly static and pre-defined. The "what-ifs" are more OLAP, so let's try to add more control and intelligence there. Therefore, being able to precisely control a query, rather than merely "hint-influence" it, is what's needed, and for that it would be necessary to know how the optimizer's logic is programmed. We could then have dynamic SELECT and other statements for specific situations! Maybe even tell IDS to read blocks of index nodes at a time instead of one by one, etc.


  • How to display part of an image at a specific width and height?

    - by Brady Chu
    Recently I participated in a web project which has a huge number of images to handle and display on web pages. We know that the width and height of user-uploaded images can't easily be controlled, which makes them hard to display. At first I attempted to zoom the images in/out to reach an appropriate presentation, and I made it work, but my boss is still not satisfied with the result. The following is my approach:

        var autoResizeImage = function(maxWidth, maxHeight, objImg) {
            var img = new Image();
            img.src = objImg.src;
            img.onload = function() {
                var hRatio;
                var wRatio;
                var Ratio = 1;
                var w = img.width;
                var h = img.height;
                wRatio = maxWidth / w;
                hRatio = maxHeight / h;
                if (maxWidth == 0 && maxHeight == 0) {
                    Ratio = 1;
                } else if (maxWidth == 0) {
                    if (hRatio < 1) {
                        Ratio = hRatio;
                    }
                } else if (maxHeight == 0) {
                    if (wRatio < 1) {
                        Ratio = wRatio;
                    }
                } else if (wRatio < 1 || hRatio < 1) {
                    Ratio = (wRatio <= hRatio ? wRatio : hRatio);
                }
                if (Ratio < 1) {
                    w = w * Ratio;
                    h = h * Ratio;
                }
                w = w <= 0 ? 250 : w;
                h = h <= 0 ? 370 : h;
                objImg.height = h;
                objImg.width = w;
            };
        };

    This approach only limits the max width and height of the image, so every image in the album still has a different width and height, which still looks ugly. Right at this minute, I know we could create a DIV and use the image as its background image, but that way seems too complicated and indirect, so I don't want to take it. I'm wondering whether there is a better way to display images with a fixed width and height without distortion? Thanks.
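
    For reference, a sketch of the fixed-size center-crop technique the question alludes to, done with a wrapper element rather than a background image (this assumes the image has already loaded, as in the onload handler above):

        var cropImage = function(boxWidth, boxHeight, objImg) {
            // Fixed-size box that hides whatever overflows it.
            var div = document.createElement('div');
            div.style.width = boxWidth + 'px';
            div.style.height = boxHeight + 'px';
            div.style.overflow = 'hidden';
            div.style.position = 'relative';
            objImg.parentNode.replaceChild(div, objImg);

            // Scale so the image covers the box, then center the overflow.
            var ratio = Math.max(boxWidth / objImg.width, boxHeight / objImg.height);
            objImg.width = Math.round(objImg.width * ratio);
            objImg.height = Math.round(objImg.height * ratio);
            objImg.style.position = 'absolute';
            objImg.style.left = Math.round((boxWidth - objImg.width) / 2) + 'px';
            objImg.style.top = Math.round((boxHeight - objImg.height) / 2) + 'px';
            div.appendChild(objImg);
        };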


  • problem with custom NSProtocol and caching on iPhone

    - by TomSwift
    My iPhone app embeds a UIWebView which loads HTML via a custom NSProtocol handler I have registered. My problem is that resources referenced in the returned HTML, which are also loaded via my custom protocol handler, are cached and never reloaded. In particular, my stylesheet is cached:

        <link rel="stylesheet" type="text/css" href="./styles.css" />

    The initial request to load the HTML in the UIWebView looks like this:

        NSString* strUrl = [NSMutableString stringWithFormat: @"myprotocol:///entry?id=%d", entryID];
        NSURL* url = [NSURL URLWithString: strUrl];
        [_pCurrentWebView loadRequest: [NSURLRequest requestWithURL: url
                                                        cachePolicy: NSURLRequestReloadIgnoringLocalCacheData
                                                    timeoutInterval: 60]];

    (Note the cache policy is set to ignore, and I've verified this cache policy carries through to subsequent requests for page resources on the initial load.)

    The protocol handler loads the HTML from a database and returns it to the client using code like this:

        // create the response record
        NSURLResponse *response = [[NSURLResponse alloc] initWithURL: [request URL]
                                                            MIMEType: mimeType
                                               expectedContentLength: -1
                                                    textEncodingName: textEncodingName];

        // get a reference to the client so we can hand off the data
        id client = [self client];

        // turn off caching for this response data
        [client URLProtocol: self
         didReceiveResponse: response
         cacheStoragePolicy: NSURLCacheStorageNotAllowed];

        // set the data in the response to our jfif data
        [client URLProtocol: self didLoadData: data];
        [data release];

    (Note the response cache policy is "not allowed".) Any ideas how I can make it NOT cache my styles.css resource? I need to be able to dynamically alter the content of this resource on subsequent loads of HTML that reference this file. I thought clearing the shared URL cache would work, but it doesn't:

        [[NSURLCache sharedURLCache] removeAllCachedResponses];

    One thing that does work, but is terribly inefficient, is to dynamically cache-bust the URL for the stylesheet by adding a timestamp parameter:

        <link rel="stylesheet" type="text/css" href="./styles.css?ts=1234567890" />

    To make this work I have to load my HTML from the db and search-and-replace the URL for the stylesheet with a cache-busting parameter that changes on each request. I'd rather not do this.

    My presumption is that there would be no problem if I loaded my content via the built-in HTTP protocol. In that case, I'm guessing that the UIWebView looks at any Cache-Control flags in the NSHTTPURLResponse object's HTTP headers and abides by them. Since my NSURLResponse object has no HTTP headers (it's not HTTP...), perhaps UIWebView just decides to cache the resource (ignoring the NSURLRequest caching directive?). Ideas???


  • stop and split generated sequence at repeats - clojure

    - by fitzsnaggle
    I am trying to make a sequence that will only generate values until it finds the following conditions, and return the listed results:

        case head
          0              - return {:origin [all generated except 0] :pattern [0]}
          1              - return {:origin nil :pattern [all-generated-values]}
          repeated-value - return {:origin [values-before-repeat] :pattern [values-after-repeat]}

        ; n = int
        ; x = int
        ; hist - all generated values

        ; Keeps the head below x
        (defn trim-head [head x]
          (loop [head head]
            (if (> head x)
              (recur (- head x))
              head)))

        ; Generates the next head
        (defn next-head [head x n]
          (trim-head (* head n) x))

        ; Generates a whole row - rows are a max of x - 1.
        (defn row [x n]
          (iterate #(next-head % x n) n))

        (take (- x 1) (row 11 3))

    Examples of cases to stop before reaching the end of a row:

        [9 8 4 5 6 7 4] - '4' is repeated, so STOP. Return preceding as origin and rest as pattern:
                          {:origin [9 8] :pattern [4 5 6 7]}
        [4 5 6 1]       - found a '1', so STOP, and return everything as pattern:
                          {:origin nil :pattern [4 5 6 1]}
        [3 0]           - found a '0', so STOP:
                          {:origin [3] :pattern [0]}

    Otherwise, if the sequence reaches a length of x - 1: {:origin [all values generated] :pattern nil}

    The problem: I have used partition-by with some success to split the groups at the point where a repeated value is found, but I would like to do this lazily. Is there some way I can use take-while, or condp, or the :while clause of the for loop to make a condition that partitions when it finds repeats?

    Some attempts:

        (take 2 (partition-by #(= 1 %) (row 11 4)))

        (for [p (partition-by #(stop-match? %) head)
              (iterate #(next-head % x n) n)
              :while (or (not= (last p) (or 1 0 n) (nil? (rest p))]
          {:origin (first p) :pattern (concat (second p) (last p))})

    Update: What I really want to be able to do is find out if a value has repeated, and partition the seq without using the index. Is that possible? Something like this:

        (defn row [x n]
          (loop [hist [n]
                 head (gen-next-head (first hist) x n)
                 steps 1]
            (if (>= (- x 1) steps)
              (case head
                0 {:origin [hist] :pattern [0]}
                1 {:origin nil :pattern (conj hist head)}
                ;; Speculative from here on out
                (let [p (partition-by #(apply distinct? %) (conj hist head))]
                  (if-not (nil? (next p))
                    ;; One partition if no repeats.
                    {:origin (first p) :pattern (concat (second p) (nth 3 p))}
                    (recur (conj hist head) (gen-next-head head x n) (inc steps)))))
              {:origin hist :pattern nil})))
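
    A sketch (not from the original post) of detecting the stop conditions without indices, by threading a set of already-seen values through a loop over a finite prefix such as (take (- x 1) (row x n)):

        (defn split-at-stop [xs]
          (loop [seen #{} before [] [h & t] (seq xs)]
            (cond
              (nil? h) {:origin before :pattern nil}          ; ran out: whole row is origin
              (= h 0)  {:origin before :pattern [0]}
              (= h 1)  {:origin nil :pattern (conj before 1)}
              (seen h) {:origin  (vec (take-while #(not= % h) before))
                        :pattern (vec (drop-while #(not= % h) before))}
              :else    (recur (conj seen h) (conj before h) t))))

        ;; (split-at-stop [9 8 4 5 6 7 4]) => {:origin [9 8] :pattern [4 5 6 7]}
        ;; (split-at-stop [4 5 6 1])       => {:origin nil :pattern [4 5 6 1]}
        ;; (split-at-stop [3 0])           => {:origin [3] :pattern [0]}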


  • G++, compiler warnings, c++ templates

    - by Ian
    During compilation of a C++ program, these warnings appeared:

        c:/MinGW/bin/../lib/gcc/mingw32/3.4.5/../../../../include/c++/3.4.5/bits/stl_algo.h:2317:
          instantiated from `void std::partial_sort(_RandomAccessIterator, _RandomAccessIterator,
          _RandomAccessIterator, _Compare) [with _RandomAccessIterator =
          __gnu_cxx::__normal_iterator<Object<double>**, std::vector<Object<double>*,
          std::allocator<Object<double>*> > >, _Compare = sortObjects<double>]'
        c:/MinGW/bin/../lib/gcc/mingw32/3.4.5/../../../../include/c++/3.4.5/bits/stl_algo.h:2506:
          instantiated from `void std::__introsort_loop(_RandomAccessIterator, _RandomAccessIterator,
          _Size, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<Object<double>**,
          std::vector<Object<double>*, std::allocator<Object<double>*> > >, _Size = int,
          _Compare = sortObjects<double>]'
        c:/MinGW/bin/../lib/gcc/mingw32/3.4.5/../../../../include/c++/3.4.5/bits/stl_algo.h:2589:
          instantiated from `void std::sort(_RandomAccessIterator, _RandomAccessIterator, _Compare)
          [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<Object<double>**,
          std::vector<Object<double>*, std::allocator<Object<double>*> > >,
          _Compare = sortObjects<double>]'
        io/../structures/objects/../../algorithm/analysis/../../structures/list/ObjectsList.hpp:141:
          instantiated from `void ObjectsList<T>::sortObjects(unsigned int, T, T, T, T, unsigned int)
          [with T = double]'

    I do not know why, because all objects have only the template parameter T, and their local variables are also of type T. The only place where I use double is main. There are objects of type double created and added into the ObjectsList...

        Object<double> o1;
        ObjectsList<double> olist;
        olist.push_back(o1);
        ....
        T xmin = ..., ymin = ..., xmax = ..., ymax = ...;
        unsigned int n = ...;
        olist.sortAllObjects(xmin, ymin, xmax, ymax, n);

    and the comparator:

        template <class T>
        class sortObjects {
        private:
            unsigned int n;
            T xmin, ymin, xmax, ymax;

        public:
            sortObjects(const T xmin_, const T ymin_, const T xmax_, const T ymax_, const int n_)
                : xmin(xmin_), ymin(ymin_), xmax(xmax_), ymax(ymax_), n(n_) {}

            bool operator()(const Object<T> *o1, const Object<T> *o2) const {
                T dmax = (std::max)(xmax - xmin, ymax - ymin);
                T x_max = (xmax - xmin) / dmax;
                T y_max = (ymax - ymin) / dmax;
                ...
                return ....;
            }
        };

    and the ObjectsList method:

        template <class T>
        void ObjectsList<T>::sortAllObjects(const T xmin, const T ymin, const T xmax, const T ymax,
                                            const unsigned int n) {
            std::sort(objects.begin(), objects.end(), sortObjects<T>(xmin, ymin, xmax, ymax, n));
        }


  • How do you make a custom data generator for the SQL XML datatype?

    - by Keith Sirmons
    Howdy, I am using Visual Studio 2010 and am playing around with the Database Projects. I am creating a DataGenerationPlan to insert data into a simple table, in which one of the column datatypes is XML. Out of the box, the generation plan uses the Regular Expression generator and generates something like this:

        HGcSv9wa7yM44T9x5oFT4pmBkEmv62lJ7OyAmCnL6yqXC2X..........

    I am looking at creating a custom data generator for this data type and have followed this site for the basics: http://msdn.microsoft.com/en-us/library/aa833244.aspx

    This example works if I am creating a string datatype and using it for an nvarchar column. What do I need to change to hook this generator to the XML datatype? Below are my code files. The string property works for nvarchar. The XElement property does not work for the xml datatype, and RecordXMLDataGenerator is not listed as an option in the Generator column for the generation plan.

    CustomDataGenerators:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Data.Schema.Tools.DataGenerator;
        using Microsoft.Data.Schema.Extensibility;
        using Microsoft.Data.Schema;
        using Microsoft.Data.Schema.Sql;
        using System.Xml.Linq;

        namespace CustomDataGenerators
        {
            [DatabaseSchemaProviderCompatibility(typeof(SqlDatabaseSchemaProvider))]
            public class RecordXMLDataGenerator : Generator
            {
                private XElement _RecordData;

                [Output(Description = "Generates string of XML Data for the Record.", Name = "RecordDataString")]
                public string RecordDataString
                {
                    get { return _RecordData.ToString(SaveOptions.None); }
                }

                [Output(Description = "Generates XML Data for the Record.", Name = "RecordData")]
                public XElement RecordData
                {
                    get { return _RecordData; }
                }

                protected override void OnGenerateNextValues()
                {
                    base.OnGenerateNextValues();
                    XElement element = new XElement("Root",
                        new XElement("Children1", 1),
                        new XElement("Children6", 6)
                    );
                    _RecordData = element;
                }
            }
        }

    XML extensions file:

        <?xml version="1.0" encoding="utf-8" ?>
        <extensions assembly="" version="1" xmlns="urn:Microsoft.Data.Schema.Extensions"
                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xsi:schemaLocation="urn:Microsoft.Data.Schema.Extensions Microsoft.Data.Schema.Extensions.xsd">
            <extension type="CustomDataGenerators.RecordXMLDataGenerator"
                       assembly="CustomDataGenerators, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxx"
                       enabled="true"/>
        </extensions>

    Table.sql:

        CREATE TABLE [dbo].[Record]
        (
            id int IDENTITY (1,1) NOT NULL,
            recordData xml NULL,
            userId int NULL,
            test nvarchar(max) NULL,
            rowver rowversion NULL,
            CONSTRAINT pk_RecordID PRIMARY KEY (id)
        )


  • How can I tell groovy/grails not to try to "re-encode" binary data? (Revised title)

    - by ?????
    I have a groovy/grails application that needs to serve images. It works fine on my dev box; the image is returned properly. Here's the start of the returned JPEG, as seen by od -cx:

        0000000 377 330 377 340  \0 020   J   F   I   F  \0 001 001 001 001   ,
                   d8ff    e0ff    1000    464a    4649    0100    0101    2c01

    But on the production box, there's some garbage in front, and the d8ff e0ff before the 1000 is missing:

        0000000   ?  **  **   ?  **  **   ?  **  **   ?  **  **  \0 020   J   F
                   bfef    efbd    bdbf    bfef    efbd    bdbf    1000    464a
        0000020   I   F  \0 001 001 001  \0   H  \0   H  \0  \0   ?  **  **   ?
                   4649    0100    0101    4800    4800    0000    bfef    efbd

    It's the exact same code; I just moved the .war over and ran it on a different machine. (Isn't Java supposed to be write once, run everywhere?) Any ideas? An "encoding" problem?

    The code sends the image to the response like this:

        response.contentType = "image/jpeg";
        response.outputStream << out;

    Here's the code that locates the image on an internal application server and re-serves it. I've pared the code down a bit to remove the error handling, etc., to make it easier to read:

        def show = {
            def address = "http://internal.application.server:9899/img?photoid=${params.id}"
            def out = new ByteArrayOutputStream()
            out << new URL(address).openStream()
            response.contentLength = out.size();

            // XXX If you don't do this hack, "head" requests won't work!
            if (request.method == 'HEAD') {
                render( text : "", contentType : "image/jpeg" );
            } else {
                response.contentType = "image/jpeg";
                response.outputStream << out;
            }
        }

    Update: I tried setting the character encoding:

        response.setCharacterEncoding("ISO-8859-1");
        if (request.method == 'HEAD') {
            render( text : "", contentType : "image/jpeg" );
        } else {
            response.contentType = "image/jpeg;charset=ISO-8859-1";
            response.outputStream << out;
        }

    but it made no difference in the output. On my production machine, the binary bytes in the image are re-encoded/escaped as if they were UTF-8 (see Michael's explanation below). It works fine on my development machine.
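
    A sketch of a way to keep the bytes away from any String conversion: if the left-shift on the output stream ends up going through toString(), the platform default charset gets involved (a UTF-8 default on the production box would explain the efbfbd replacement-character bytes in the dump above). ByteArrayOutputStream.writeTo() copies the raw buffer directly:

        response.contentType = "image/jpeg"
        response.contentLength = out.size()
        out.writeTo(response.outputStream)   // raw byte copy, no charset in the path
        response.outputStream.flush()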


  • How to view ASMX SOAP using Fiddler2?

    - by outer join
    Does anyone know if Fiddler can display the raw SOAP messages for ASMX web services? I'm testing a simple web service using both Fiddler2 and Storm, and the results vary (Fiddler shows plain XML while Storm shows the SOAP messages). See sample request/responses below:

    Fiddler2 request:

        POST /webservice1.asmx/Test HTTP/1.1
        Accept: */*
        Referer: http://localhost.:4164/webservice1.asmx?op=Test
        Accept-Language: en-us
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; MS-RTC LM 8)
        Content-Type: application/x-www-form-urlencoded
        Accept-Encoding: gzip, deflate
        Host: localhost.:4164
        Content-Length: 0
        Connection: Keep-Alive
        Pragma: no-cache

    Fiddler2 response:

        HTTP/1.1 200 OK
        Server: ASP.NET Development Server/9.0.0.0
        Date: Thu, 21 Jan 2010 14:21:50 GMT
        X-AspNet-Version: 2.0.50727
        Cache-Control: private, max-age=0
        Content-Type: text/xml; charset=utf-8
        Content-Length: 96
        Connection: Close

        <?xml version="1.0" encoding="utf-8"?>
        <string xmlns="http://tempuri.org/">Hello World</string>

    Storm request (body only):

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                       xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                       xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <Test xmlns="http://tempuri.org/" />
          </soap:Body>
        </soap:Envelope>

    Storm response:

        Status Code: 200
        Content Length: 339
        Content Type: text/xml; charset=utf-8
        Server: ASP.NET Development Server/9.0.0.0
        Status Description: OK

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
                       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                       xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soap:Body>
            <TestResponse xmlns="http://tempuri.org/">
              <TestResult>Hello World</TestResult>
            </TestResponse>
          </soap:Body>
        </soap:Envelope>

    Thanks for any help.
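
    Worth noting when comparing the two captures: the Fiddler trace above is the ASMX browser test form, not a SOAP call - the URL /webservice1.asmx/Test and the form-urlencoded content type give it away, and Fiddler simply shows whatever the client actually sent. A SOAP 1.1 request to the same service would look roughly like this on the wire (Content-Length illustrative):

        POST /webservice1.asmx HTTP/1.1
        Host: localhost:4164
        Content-Type: text/xml; charset=utf-8
        SOAPAction: "http://tempuri.org/Test"
        Content-Length: 287

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <Test xmlns="http://tempuri.org/" />
          </soap:Body>
        </soap:Envelope>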


  • How do I print out objects in an array in python?

    - by Jonathan
    I'm writing code which performs k-means clustering on a set of data. I'm actually using the code from the book Programming Collective Intelligence (O'Reilly). Everything works, but in the book he uses the command line, and I want to write everything in Notepad++. For reference, his lines are:

        >>> kclust = clusters.kcluster(data, k=10)
        >>> [rownames[r] for r in kclust[0]]

    Here is my code:

        import random
        from math import sqrt
        from PIL import Image, ImageDraw

        def readfile(filename):
            lines = [line for line in file(filename)]
            # First line is the column titles
            colnames = lines[0].strip().split('\t')[1:]
            rownames = []
            data = []
            for line in lines[1:]:
                p = line.strip().split('\t')
                # First column in each row is the rowname
                rownames.append(p[0])
                # The data for this row is the remainder of the row
                data.append([float(x) for x in p[1:]])
            return rownames, colnames, data

        def pearson(v1, v2):
            # Simple sums
            sum1 = sum(v1)
            sum2 = sum(v2)
            # Sums of the squares
            sum1Sq = sum([pow(v, 2) for v in v1])
            sum2Sq = sum([pow(v, 2) for v in v2])
            # Sum of the products
            pSum = sum([v1[i] * v2[i] for i in range(len(v1))])
            # Calculate r (Pearson score)
            num = pSum - (sum1 * sum2 / len(v1))
            den = sqrt((sum1Sq - pow(sum1, 2) / len(v1)) * (sum2Sq - pow(sum2, 2) / len(v1)))
            if den == 0: return 0
            return 1.0 - num / den

        class bicluster:
            def __init__(self, vec, left=None, right=None, distance=0.0, id=None):
                self.left = left
                self.right = right
                self.vec = vec
                self.id = id
                self.distance = distance

        def hcluster(rows, distance=pearson):
            distances = {}
            currentclustid = -1
            # Clusters are initially just the rows
            clust = [bicluster(rows[i], id=i) for i in range(len(rows))]
            while len(clust) > 1:
                lowestpair = (0, 1)
                closest = distance(clust[0].vec, clust[1].vec)
                # loop through every pair looking for the smallest distance
                for i in range(len(clust)):
                    for j in range(i + 1, len(clust)):
                        # distances is the cache of distance calculations
                        if (clust[i].id, clust[j].id) not in distances:
                            distances[(clust[i].id, clust[j].id)] = distance(clust[i].vec, clust[j].vec)
                        d = distances[(clust[i].id, clust[j].id)]
                        if d < closest:
                            closest = d
                            lowestpair = (i, j)
                # calculate the average of the two clusters
                mergevec = [(clust[lowestpair[0]].vec[i] + clust[lowestpair[1]].vec[i]) / 2.0
                            for i in range(len(clust[0].vec))]
                # create the new cluster
                newcluster = bicluster(mergevec, left=clust[lowestpair[0]],
                                       right=clust[lowestpair[1]],
                                       distance=closest, id=currentclustid)
                # cluster ids that weren't in the original set are negative
                currentclustid -= 1
                del clust[lowestpair[1]]
                del clust[lowestpair[0]]
                clust.append(newcluster)
            return clust[0]

        def kcluster(rows, distance=pearson, k=4):
            # Determine the minimum and maximum values for each point
            ranges = [(min([row[i] for row in rows]), max([row[i] for row in rows]))
                      for i in range(len(rows[0]))]
            # Create k randomly placed centroids
            clusters = [[random.random() * (ranges[i][1] - ranges[i][0]) + ranges[i][0]
                         for i in range(len(rows[0]))] for j in range(k)]
            lastmatches = None
            for t in range(100):
                print 'Iteration %d' % t
                bestmatches = [[] for i in range(k)]
                # Find which centroid is the closest for each row
                for j in range(len(rows)):
                    row = rows[j]
                    bestmatch = 0
                    for i in range(k):
                        d = distance(clusters[i], row)
                        if d < distance(clusters[bestmatch], row):
                            bestmatch = i
                    bestmatches[bestmatch].append(j)
                # If the results are the same as last time, this is complete
                if bestmatches == lastmatches: break
                lastmatches = bestmatches
                # Move the centroids to the average of their members
                for i in range(k):
                    avgs = [0.0] * len(rows[0])
                    if len(bestmatches[i]) > 0:
                        for rowid in bestmatches[i]:
                            for m in range(len(rows[rowid])):
                                avgs[m] += rows[rowid][m]
                        for j in range(len(avgs)):
                            avgs[j] /= len(bestmatches[i])
                        clusters[i] = avgs
            return bestmatches
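
    A sketch of the same two REPL lines as a standalone script (assuming the code above is saved as clusters.py, and using the book's blogdata.txt as the data file):

        # Python 2, matching the book's code
        import clusters

        rownames, colnames, data = clusters.readfile('blogdata.txt')
        kclust = clusters.kcluster(data, k=10)

        # kcluster returns one list of row indices per centroid; map them to names:
        for i, cluster in enumerate(kclust):
            print 'cluster %d: %s' % (i, [rownames[r] for r in cluster])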


  • Using multiple aggregate functions in an algebraic expression in (ANSI) SQL statement

    - by morpheous
    I have the following aggregate functions (AGG FUNCs): foo(), foobar(), fredstats(), barneystats(). I want to know if I can use multiple AGG FUNCs in an algebraic expression. This may seem a strange/simplistic question for seasoned SQL developers - however, the reason I ask is that so far, all AGG FUNC examples I have seen are of the simplistic variety, e.g. max(salary) < 100, rather than expressions involving multiple AGG FUNCs (like agg_func1() agg_func2()). The information below should help clarify further.

    Given tables with the following schemas:

        CREATE TABLE item (id int, length float, weight float);
        CREATE TABLE item_info (item_id int, name varchar(32));

    Is it legal (ANSI) SQL to write queries of this format?

        SELECT id, name, foo, foobar, fredstats
        FROM (SELECT id,
                     foo(123) AS foo,
                     foobar('red') AS foobar,
                     fredstats('weight') AS fredstats
              FROM item
              GROUP BY id
              HAVING [ALGEBRAIC EXPRESSION]
              ORDER BY id) AS A,
             item_info AS B
        WHERE A.id = B.item_id

    Where ALGEBRAIC EXPRESSION is the type of expression that can be used in a WHERE clause - for example:

        ((foo(x) < foobar(y)) AND foobar(y) IN (1,2,3)) OR (fredstats(x) <> 0)

    I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible. Assuming it is legal to include AGG FUNCs in the way I have done above, I'd like to know:

    1. Is there a more efficient way to write the above query?
    2. Is there any way I can speed up the query with a judicious choice of indexes on the tables item and item_info?
    3. Is there a performance hit from using AGG FUNCs in an algebraic expression like I am (i.e. an expression involving the output of aggregate functions rather than constants)?
    4. Can the expression also include 'scaled' AGG FUNCs (for example: 2*foo(123) < -3*foobar(456))? Will scaling (i.e. multiplying an AGG FUNC by a number) have an effect on performance?
    5. How can I write the query above using INNER JOINs instead? (See the sketch after this list.)
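
    Since the question is about legality and form, a concrete illustration with standard aggregates may help (foo/foobar/fredstats are placeholders in the original): scaled aggregates combined algebraically in HAVING, with the join written as an INNER JOIN as asked in point 5.

        SELECT i.id, ii.name,
               AVG(i.length) AS avg_length,
               MAX(i.weight) AS max_weight
        FROM item AS i
        INNER JOIN item_info AS ii ON ii.item_id = i.id
        GROUP BY i.id, ii.name
        HAVING (2 * AVG(i.length) < -3 * MIN(i.weight) AND MAX(i.weight) IN (1, 2, 3))
            OR MIN(i.weight) <> 0
        ORDER BY i.id;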


  • Problem in loading cities using JSON

    - by Saravanan I M
    I am using GeoNames for loading cities; I imported the GeoNames data into my database (MS SQL Server 2008). I display a dropdown for country selection, and once the user selects a country, I load all of its cities into an autocomplete textbox.

    I am seeing a delay in the getJSON method, and it also executes asynchronously, so my autocomplete gets filled before all the data has arrived. Below is my complete script. I think I have a problem in the loop. Please advise me on what I am doing wrong in my code.

        $(document).ready(function () {
            $("#ShowLoad").hide();

            // Hook onto the country list's onchange event
            $("#Country").change(function () {
                findcities = [];
                $("#ShowLoad").show();
                $("#HomeTown").unautocomplete();

                var url = '<%= Url.Content("~/") %>' + "Location/GetCitiesCountByCountry/" + $("#Country").val();
                $.getJSON(url, null, function (data) {
                    var total = data;
                    if (total > 0) {
                        var pageTotal = Math.ceil(total / 1000);
                        var isFilled = false;
                        for (var i = 0; i < pageTotal; i++) {
                            var skip = i == 1 ? 0 : (i * 1000) - 1000;
                            var url = '<%= Url.Content("~/") %>' + "Location/GetCitiesByCountry/" + $("#Country").val() + "?skip=" + skip;
                            $.getJSON(url, null, function (data) {
                                $.each(data.Cities, function (index, optionData) {
                                    if ($("#Country").val() == "US") {
                                        findcities.push(optionData.asciiname + "," + optionData.admin1_code);
                                    } else {
                                        findcities.push(optionData.asciiname);
                                    }
                                });
                                if (i == pageTotal) {
                                    $("#ShowLoad").hide();
                                    $("#HomeTown").focus().autocomplete(findcities);
                                }
                            });
                        }
                        $("#HomeTown").setOptions({ max: 100000 });
                    }
                });
            }).change();
        });
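
    A sketch of one likely fix: the inner $.getJSON callbacks all run after the for-loop has finished, so inside them "i" already equals pageTotal (a classic closure pitfall), which means the "last page" test never behaves as intended. Counting completed responses avoids testing the loop variable ("baseUrl" stands in for the Url.Content expression):

        var completed = 0;
        for (var i = 0; i < pageTotal; i++) {
            var url = baseUrl + "Location/GetCitiesByCountry/" + $("#Country").val() + "?skip=" + (i * 1000);
            $.getJSON(url, null, function (data) {
                $.each(data.Cities, function (index, optionData) {
                    findcities.push(optionData.asciiname);
                });
                completed++;
                if (completed === pageTotal) {   // every page has arrived
                    $("#ShowLoad").hide();
                    $("#HomeTown").focus().autocomplete(findcities);
                }
            });
        }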


  • CUDA, more threads for same work = Longer run time despite better occupancy, Why?

    - by zenna
    I encountered a strange problem where increasing my occupancy by increasing the number of threads reduced performance. I created the following program to illustrate the problem:

        #include <stdio.h>
        #include <stdlib.h>
        #include <cuda_runtime.h>

        __global__ void less_threads(float *d_out) {
            int num_inliers;
            for (int j = 0; j < 800; ++j) {
                // Do 12 computations
                num_inliers += threadIdx.x * 1;
                num_inliers += threadIdx.x * 2;
                num_inliers += threadIdx.x * 3;
                num_inliers += threadIdx.x * 4;
                num_inliers += threadIdx.x * 5;
                num_inliers += threadIdx.x * 6;
                num_inliers += threadIdx.x * 7;
                num_inliers += threadIdx.x * 8;
                num_inliers += threadIdx.x * 9;
                num_inliers += threadIdx.x * 10;
                num_inliers += threadIdx.x * 11;
                num_inliers += threadIdx.x * 12;
            }

            if (threadIdx.x == -1)
                d_out[blockIdx.x * blockDim.x + threadIdx.x] = num_inliers;
        }

        __global__ void more_threads(float *d_out) {
            int num_inliers;
            for (int j = 0; j < 800; ++j) {
                // Do 4 computations
                num_inliers += threadIdx.x * 1;
                num_inliers += threadIdx.x * 2;
                num_inliers += threadIdx.x * 3;
                num_inliers += threadIdx.x * 4;
            }

            if (threadIdx.x == -1)
                d_out[blockIdx.x * blockDim.x + threadIdx.x] = num_inliers;
        }

        int main(int argc, char* argv[]) {
            float *d_out = NULL;
            cudaMalloc((void**)&d_out, sizeof(float) * 25000);
            more_threads<<<780, 128>>>(d_out);
            less_threads<<<780, 32>>>(d_out);
            return 0;
        }

    Note both kernels should do the same amount of work in total (the `if (threadIdx.x == -1)` is a trick to stop the compiler optimising everything out and leaving an empty kernel). The work should be the same, as more_threads is using 4 times as many threads but with each thread doing 4 times less work.

    Significant results from the profiler are as follows:

        more_threads: GPU runtime = 1474 us, reg per thread = 6,  occupancy = 1,    branch = 83746, divergent_branch = 26, instructions = 584065, gst request = 1084552
        less_threads: GPU runtime = 921 us,  reg per thread = 14, occupancy = 0.25, branch = 20956, divergent_branch = 26, instructions = 312663, gst request = 677381

    As I said previously, the runtime of the kernel using more threads is longer; this could be due to the increased number of instructions. Why are there more instructions? Why is there any branching, let alone divergent branching, considering there is no conditional code? Why are there any gst requests when there is no global memory access? What is going on here! Thanks


  • Saving image to database as varbinary, arraylength (part 2)

    - by Thomas Schoof
    This is a followup to my previous question, which got solved (thank you for that), but now I am stuck on another error. I'm trying to save an image in my database, in a table called 'Afbeelding', which consists of:

        id: int
        source: varbinary(max)

    I then created a WCF service to save an 'Afbeelding' to the database:

        private static DataClassesDataContext dc = new DataClassesDataContext();

        [OperationContract]
        public void setAfbeelding(Afbeelding a)
        {
            //Afbeelding a = new Afbeelding();
            //a.id = 1;
            //a.source = new Binary(bytes);
            dc.Afbeeldings.InsertOnSubmit(a);
            dc.SubmitChanges();
        }

    I then added a reference to the service in my project, and when I press the button I try to save the image to the database:

        private void btnUpload_Click(object sender, RoutedEventArgs e)
        {
            Afbeelding a = new Afbeelding();

            OpenFileDialog openFileDialog = new OpenFileDialog();
            openFileDialog.Filter = "JPEG files|*.jpg";
            if (openFileDialog.ShowDialog() == true)
            {
                //string imagePath = openFileDialog.File.Name;
                //FileStream fileStream = new FileStream(imagePath, FileMode.Open, FileAccess.Read);
                //byte[] buffer = new byte[fileStream.Length];
                //fileStream.Read(buffer, 0, (int)fileStream.Length);
                //fileStream.Close();

                Stream stream = (Stream)openFileDialog.File.OpenRead();
                Byte[] bytes = new Byte[stream.Length];
                stream.Read(bytes, 0, (int)stream.Length);
                string fileName = openFileDialog.File.Name;

                a.id = 1;
                a.source = new Binary { Bytes = bytes };
            }

            EditAfbeeldingServiceClient client = new EditAfbeeldingServiceClient();
            client.setAfbeeldingCompleted += new EventHandler<System.ComponentModel.AsyncCompletedEventArgs>(client_setAfbeeldingCompleted);
            client.setAfbeeldingAsync(a);
        }

        void client_setAfbeeldingCompleted(object sender, System.ComponentModel.AsyncCompletedEventArgs e)
        {
            if (e.Error != null)
                txtEmail.Text = e.Error.ToString();
            else
                MessageBox.Show("WIN");
        }

    However, when I do this, I get the following error:

        System.ServiceModel.FaultException: The formatter threw an exception while trying to
        deserialize the message: There was an error while trying to deserialize parameter :a.
        The InnerException message was 'There was an error deserializing the object of type
        OndernemersAward.Web.Afbeelding. The maximum array length quota (16384) has been exceeded
        while reading XML data. This quota may be increased by changing the MaxArrayLength property
        on the XmlDictionaryReaderQuotas object used when creating the XML reader.'.
        Please see InnerException for more details.
           at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc)
           at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result)
           at System.ServiceModel.ClientBase`1.ChannelBase`1.EndInvoke(String methodName, Object[] args, IAsyncResult result)
           at OndernemersAward.EditAfbeeldingServiceReference.EditAfbeeldingServiceClient.EditAfbeeldingServiceClientChannel.EndsetAfbeelding(IAsyncResult result)
           at OndernemersAward.EditAfbeeldingServiceReference.EditAfbeeldingServiceClient.OndernemersAward.EditAfbeeldingServiceReference.EditAfbeeldingService.EndsetAfbeelding(IAsyncResult result)
           at OndernemersAward.EditAfbeeldingServiceReference.EditAfbeeldingServiceClient.OnEndsetAfbeelding(IAsyncResult result)
           at System.ServiceModel.ClientBase`1.OnAsyncCallCompleted(IAsyncResult result)

    I'm not sure what's causing this, but I think it has something to do with the way I write the image to the database? (The array length is too big, and I don't really know how to change it.)

    Thank you for your help,
    Thomas
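
    For reference, the quota named in the error (MaxArrayLength) lives under the binding's readerQuotas. A sketch of the relevant binding change (the binding name and sizes are assumptions; the same limits must be raised on both the service's web.config binding and the client's ServiceReferences.ClientConfig):

        <basicHttpBinding>
          <binding name="EditAfbeeldingBinding"
                   maxReceivedMessageSize="4194304"
                   maxBufferSize="4194304">
            <readerQuotas maxArrayLength="4194304" />
          </binding>
        </basicHttpBinding>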

