Search Results

Search found 20990 results on 840 pages for 'static block'.


  • My kernel only works in block (0,0)

    - by ZeroDivide
    I am trying to write a simple matrixMultiplication application that multiplies two square matrices using CUDA. I am having a problem where my kernel is only computing correctly in block (0,0) of the grid. This is my invocation code: dim3 dimBlock(4,4,1); dim3 dimGrid(4,4,1); //Launch the kernel; MatrixMulKernel<<<dimGrid,dimBlock>>>(Md,Nd,Pd,Width); This is my Kernel function __global__ void MatrixMulKernel(int* Md, int* Nd, int* Pd, int Width) { const int tx = threadIdx.x; const int ty = threadIdx.y; const int bx = blockIdx.x; const int by = blockIdx.y; const int row = (by * blockDim.y + ty); const int col = (bx * blockDim.x + tx); //Pvalue stores the Pd element that is computed by the thread int Pvalue = 0; for (int k = 0; k < Width; k++) { Pvalue += Md[row * Width + k] * Nd[k * Width + col]; } __syncthreads(); //Write the matrix to device memory each thread writes one element Pd[row * Width + col] = Pvalue; } I think the problem may have something to do with memory but I'm a bit lost. What should I do to make this code work across several blocks?
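
    For reference, a hedged sketch (not the poster's full program) of how global indices map across the grid: with dimBlock(4,4) and dimGrid(4,4) the kernel addresses a 16x16 index space, so Width has to cover blockDim * gridDim in each dimension, and a bounds guard keeps any extra threads from writing outside the matrices. The host-side sizes below are illustrative assumptions.

        // Sketch only: same multiplication as above, with explicit global indices and a
        // guard so threads outside [0, Width) x [0, Width) do nothing.
        __global__ void MatrixMulKernelGuarded(const int* Md, const int* Nd, int* Pd, int Width)
        {
            int row = blockIdx.y * blockDim.y + threadIdx.y;   // global row across all blocks
            int col = blockIdx.x * blockDim.x + threadIdx.x;   // global column across all blocks
            if (row >= Width || col >= Width) return;

            int Pvalue = 0;
            for (int k = 0; k < Width; ++k)
                Pvalue += Md[row * Width + k] * Nd[k * Width + col];
            Pd[row * Width + col] = Pvalue;
        }

        // Matching launch for a 16x16 result (assumed sizes):
        //   dim3 dimBlock(4,4); dim3 dimGrid(4,4);
        //   MatrixMulKernelGuarded<<<dimGrid, dimBlock>>>(Md, Nd, Pd, 16);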

    Read the article

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0, VS 2010} I have a server written in Qt and a 3rd-party client executable. The Qt-based server uses QTcpServer and QTcpSocket facilities (non-blocking). Going through the articles on TCP, I understand the following: the original TCP specification defined the negotiable window size as a 16-bit value, so the maximum is 65535 bytes, but implementations often use the RFC window-scale extension, which allows the sliding window to be scaled by bit-shifting to yield a maximum of 1 gigabyte; this is implementation defined. This could result in very different window sizes on the receiver and sender ends, as the server uses Qt facilities without hard-coding any window size limit. The client first asks for all the information it can, based on the previous messages from the server, before handling the new (accumulating) incoming messages. So at some point the server receives a lot of messages, each asking for several MB of data. The server processes these and puts the results into its send buffer. The client, however, is unable to handle the messages at the same pace, and it seems the client's receive buffer is far smaller (65535 bytes, maybe) than the sender's transmit window. The messages thus accumulate at the sender's end until the sender's buffer is full too, after which TCP writes on the sender would block. This, however, does not happen, as the sender's buffer is much larger; hence this manifests as increasing memory consumption on the sender's end. To prevent this from happening, I used Qt's socket waitForBytesWritten() with the timeout set to -1 for an infinite waiting period. As far as I can see from the behaviour, this blocks the thread writing TCP data until the data has actually been accepted by the receiver's window (which happens once earlier messages have been processed by the client at the application level). This has made memory consumption at the server end almost negligible. Is there a better alternative to this (in Qt) if I want to restrict the memory consumption at the server end to, say, x MB? Also, please point out if any of my understanding is incorrect.
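
    For comparison, a minimal sketch of one way to put a hard cap on queued data using the same Qt 4 API mentioned above; the 512 KB cap and the function name are illustrative assumptions, not part of the original post.

        // Sketch: block the writing thread only while more than maxPending bytes are
        // still queued on the socket, instead of waiting for every write to drain.
        #include <QTcpSocket>

        void writeWithBackpressure(QTcpSocket *socket, const QByteArray &payload,
                                   qint64 maxPending = 512 * 1024)
        {
            socket->write(payload);
            while (socket->bytesToWrite() > maxPending) {
                // waitForBytesWritten(-1) blocks until some queued data has been written out;
                // it returns false on error/disconnect, so bail out rather than spin.
                if (!socket->waitForBytesWritten(-1))
                    break;
            }
        }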

    Read the article

  • How To Block The UserName After 3 Invalid Password Attempts IN ASP.NET

    - by shihab
    I used the following code for checking the user name and password, and I want to block the user name after 3 invalid password attempts. What should I add to my code? MD5CryptoServiceProvider md5hasher = new MD5CryptoServiceProvider(); Byte[] hashedDataBytes; UTF8Encoding encoder = new UTF8Encoding(); hashedDataBytes = md5hasher.ComputeHash(encoder.GetBytes(TextBox3.Text)); StringBuilder hex = new StringBuilder(hashedDataBytes.Length * 2); foreach (Byte b in hashedDataBytes) { hex.AppendFormat("{0:x2}", b); } string hash = hex.ToString(); SqlConnection con = new SqlConnection("Data Source=Shihab-PC;Initial Catalog=test;User ID=SOMETHING;Password=SOMETHINGELSE"); SqlDataAdapter ad = new SqlDataAdapter("select password from Users where UserId='" + TextBox4.Text + "'", con); DataSet ds = new DataSet(); ad.Fill(ds, "Users"); SqlDataAdapter ad2 = new SqlDataAdapter("select UserId from Users ", con); DataSet ds2 = new DataSet(); ad2.Fill(ds2, "Users"); Session["id"] = TextBox4.Text.ToString(); if ((string.Compare((ds.Tables["Users"].Rows[0][0].ToString()), hash)) == 0) { if (string.Compare(TextBox4.Text, (ds2.Tables["Users"].Rows[0][0].ToString())) == 0) { Response.Redirect("actioncust.aspx"); } else { Response.Redirect("actioncust.aspx"); } } else { Label2.Text = "Invalid Login"; } con.Close(); }
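
    One hedged way to do the counting, sketched below: the FailedAttempts and IsLocked column names are assumptions (they are not in the poster's schema), and the command is parameterized, which also avoids the string-concatenated SQL above. Check IsLocked before validating the password, and reset FailedAttempts to 0 on a successful login.

        // Sketch: bump a per-user failure counter on a bad password and lock after 3 attempts.
        // connectionString is assumed to be defined elsewhere.
        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            @"UPDATE Users
                 SET FailedAttempts = FailedAttempts + 1,
                     IsLocked = CASE WHEN FailedAttempts + 1 >= 3 THEN 1 ELSE IsLocked END
               WHERE UserId = @id", con))
        {
            cmd.Parameters.AddWithValue("@id", TextBox4.Text);
            con.Open();
            cmd.ExecuteNonQuery();
        }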

    Read the article

  • Large free block of english non-pronoun text

    - by Tom
    As part of teaching myself Python I've written a script which allows a user to play hangman. At the moment, the hangman word to be guessed is simply entered manually at the start of the script's code. I want instead for the script to choose randomly from a large list of English words. This I know how to do - my problem is finding that list of words to work from in the first place. Does anyone know of a source on the net for, say, 1000 common English words that can be downloaded as a block of text or something similar that I can work with? (My initial thought was grabbing a chunk of a novel from Project Gutenberg [this project is only for my own amusement and won't be available anywhere else so copyright etc doesn't matter hugely to me btw], but anything like that is likely to contain too many names or non-standard words that wouldn't be suitable for hangman. I need text that only has words legal for use in Scrabble, basically). It's a slightly odd question for here I suppose, but actually I thought the answer might be of use not just to me but to anyone else working on a word game or similar that needs a large seed list of words to work from. Many thanks for any links or suggestions :)
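
    Whatever source ends up being used, a hedged sketch of filtering a downloaded list down to hangman-friendly words; the filename is a placeholder and the list is assumed to be one word per line.

        # Sketch: load a one-word-per-line list, keep only plain lowercase alphabetic
        # words (drops names, abbreviations and hyphenated entries), and pick one.
        import random

        def pick_hangman_word(path="wordlist.txt", min_len=4):
            with open(path) as f:
                words = (line.strip() for line in f)
                candidates = [w for w in words
                              if w.isalpha() and w.islower() and len(w) >= min_len]
            return random.choice(candidates)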

    Read the article

  • This block of code goes straight to break in Java

    - by user2914851
    I have this block in a switch case statement that when selected, just breaks and presents me with the main menu again. System.out.println("Choose a competitor surname"); String competitorChoice2 = input.nextLine(); int lowestSpeed = Integer.MAX_VALUE; int highestSpeed = 0; for(int j = 0; j < clipArray.length; j++) { if(clipArray[j] != null) { if(competitorChoice2.equals(clipArray[j].getSurname())) { if(clipArray[j].getSpeed() > clipArray[highestSpeed].getSpeed()) { highestSpeed = j; } } } } for(int i = 0; i < clipArray.length; i++) { if(clipArray[i] != null) { if(competitorChoice2.equals(clipArray[i].getSurname())) { if(clipArray[i].getSpeed() < clipArray[lowestSpeed].getSpeed()) { lowestSpeed = i; } } } } for(int h = lowestSpeed; h < highestSpeed; h++ ) { System.out.println(""+clipArray[h].getLength()); } I have an array of objects and each object has a surname and a speed. I want the user to choose a surname and display the speeds of all of their clips from lowest to highest. when I select this option it just breaks and brings me back to the main menu
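
    One likely culprit, shown as a hedged sketch rather than a certain diagnosis: lowestSpeed and highestSpeed are used both as sentinel values and as array indices, so the second loop indexes clipArray with Integer.MAX_VALUE as soon as a surname matches, and the resulting exception is probably what bounces control back to the menu. Tracking the best indices directly avoids that.

        // Sketch: find the indices of the slowest and fastest matching clips without
        // ever using a speed value as an index.
        int lowestIdx = -1, highestIdx = -1;
        for (int j = 0; j < clipArray.length; j++) {
            if (clipArray[j] == null || !competitorChoice2.equals(clipArray[j].getSurname())) {
                continue;
            }
            if (lowestIdx == -1 || clipArray[j].getSpeed() < clipArray[lowestIdx].getSpeed()) {
                lowestIdx = j;
            }
            if (highestIdx == -1 || clipArray[j].getSpeed() > clipArray[highestIdx].getSpeed()) {
                highestIdx = j;
            }
        }
        if (lowestIdx != -1) {
            System.out.println(clipArray[lowestIdx].getLength() + " .. " + clipArray[highestIdx].getLength());
        }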

    Read the article

  • WatiN screenshot saver

    - by Brian Schroer
    In addition to my automated unit, system and integration tests for ASP.NET projects, I like to give my customers something pretty that they can look at and visually see that the web site is behaving properly. I use the Gallio test runner to produce a pretty HTML report, and WatiN (Web Application Testing In .NET) to test the UI and create screenshots. I have a couple of issues with WatiN’s “CaptureWebPageToFile” method, though: It blew up the first (and only) time I tried it, possibly because… It scrolls down to capture the entire web page (I tried it on a very long page), and I usually don’t need that Also, sometimes I don’t need a picture of the whole browser window - I just want a picture of the element that I'm testing (for example, proving that a button has the correct caption). I wrote a WatiN screenshot saver helper class with these methods: SaveBrowserWindowScreenshot(Watin.Core.IE ie)  / SaveBrowserWindowScreenshot(Watin.Core.Element element) saves a screenshot of the browser window SaveBrowserWindowScreenshotWithHighlight(Watin.Core.Element element) saves a screenshot of the browser window, with the specified element scrolled into view and highlighted SaveElementScreenshot(Watin.Core.Element element) saves a picture of only the specified element The element highlighting improves on the built-in WatiN method (which just gives the element a yellow background, and makes the element pretty much unreadable when you have a light foreground color) by adding the ability to specify a HighlightCssClassName that points to a style in your site’s stylesheet. This code is specifically for testing with Internet Explorer (‘cause that’s what I have to test with at work), but you’re welcome to take it and do with it what you want… using System; using System.Drawing; using System.Drawing.Imaging; using System.IO; using System.Reflection; using System.Runtime.InteropServices; using System.Text; using System.Threading; using SHDocVw; using WatiN.Core; using mshtml; namespace BrianSchroer.TestHelpers { public static class WatinScreenshotSaver { public static void SaveBrowserWindowScreenshotWithHighlight (Element element, string screenshotName) { HighlightElement(element, true); SaveBrowserWindowScreenshot(element, screenshotName); HighlightElement(element, false); } public static void SaveBrowserWindowScreenshotWithHighlight(Element element) { HighlightElement(element, true); SaveBrowserWindowScreenshot(element); HighlightElement(element, false); } public static void SaveBrowserWindowScreenshot(Element element, string screenshotName) { SaveScreenshot(GetIe(element), screenshotName, SaveBitmapForCallbackArgs); } public static void SaveBrowserWindowScreenshot(Element element) { SaveScreenshot(GetIe(element), null, SaveBitmapForCallbackArgs); } public static void SaveBrowserWindowScreenshot(IE ie, string screenshotName) { SaveScreenshot(ie, screenshotName, SaveBitmapForCallbackArgs); } public static void SaveBrowserWindowScreenshot(IE ie) { SaveScreenshot(ie, null, SaveBitmapForCallbackArgs); } public static void SaveElementScreenshot(Element element, string screenshotName) { // TODO: Figure out how to get browser window "chrome" size and not have to go to full screen: var iex = (InternetExplorerClass) GetIe(element).InternetExplorer; bool fullScreen = iex.FullScreen; if (!fullScreen) iex.FullScreen = true; ScrollIntoView(element); SaveScreenshot(GetIe(element), screenshotName, args => SaveElementBitmapForCallbackArgs(element, args)); iex.FullScreen = fullScreen; } public static void 
SaveElementScreenshot(Element element) { SaveElementScreenshot(element, null); } private static void SaveScreenshot(IE browser, string screenshotName, Action<ScreenshotCallbackArgs> screenshotCallback) { string fileName = string.Format("{0:000}{1}{2}.jpg", ++_screenshotCount, (string.IsNullOrEmpty(screenshotName)) ? "" : " ", screenshotName); string path = Path.Combine(ScreenshotDirectoryName, fileName); Console.WriteLine(); // Gallio HTML-encodes the following display, but I have a utility program to // remove the "HTML===" and "===HTML" and un-encode the rest to show images in the Gallio report: Console.WriteLine("HTML===<div><b>{0}:</br></b><img src=\"{1}\" /></div>===HTML", screenshotName, new Uri(path).AbsoluteUri); MakeBrowserWindowTopmost(browser); try { var args = new ScreenshotCallbackArgs { InternetExplorerClass = (InternetExplorerClass)browser.InternetExplorer, ScreenshotPath = path }; Thread.Sleep(100); screenshotCallback(args); } catch (Exception ex) { Console.WriteLine(ex.Message); } } public static void HighlightElement(Element element, bool doHighlight) { if (!element.Exists) return; if (string.IsNullOrEmpty(HighlightCssClassName)) { element.Highlight(doHighlight); return; } string jsRef = element.GetJavascriptElementReference(); if (string.IsNullOrEmpty(jsRef)) return; var sb = new StringBuilder("try { "); sb.AppendFormat(" {0}.scrollIntoView(false);", jsRef); string format = (doHighlight) ? "{0}.className += ' {1}'" : "{0}.className = {0}.className.replace(' {1}', '')"; sb.AppendFormat(" " + format + ";", jsRef, HighlightCssClassName); sb.Append("} catch(e) {}"); string script = sb.ToString(); GetIe(element).RunScript(script); } public static void ScrollIntoView(Element element) { string jsRef = element.GetJavascriptElementReference(); if (string.IsNullOrEmpty(jsRef)) return; var sb = new StringBuilder("try { "); sb.AppendFormat(" {0}.scrollIntoView(false);", jsRef); sb.Append("} catch(e) {}"); string script = sb.ToString(); GetIe(element).RunScript(script); } public static void MakeBrowserWindowTopmost(IE ie) { ie.BringToFront(); SetWindowPos(ie.hWnd, HWND_TOPMOST, 0, 0, 0, 0, TOPMOST_FLAGS); } public static string HighlightCssClassName { get; set; } private static int _screenshotCount; private static string _screenshotDirectoryName; public static string ScreenshotDirectoryName { get { if (_screenshotDirectoryName == null) { var asm = Assembly.GetAssembly(typeof(WatinScreenshotSaver)); var uri = new Uri(asm.CodeBase); var fileInfo = new FileInfo(uri.LocalPath); string directoryName = fileInfo.DirectoryName; _screenshotDirectoryName = Path.Combine( directoryName, string.Format("Screenshots_{0:yyyyMMddHHmm}", DateTime.Now)); Console.WriteLine("Screenshot folder: {0}", _screenshotDirectoryName); Directory.CreateDirectory(_screenshotDirectoryName); } return _screenshotDirectoryName; } set { _screenshotDirectoryName = value; _screenshotCount = 0; } } [DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] private static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter, int X, int Y, int cx, int cy, uint uFlags); private static readonly IntPtr HWND_TOPMOST = new IntPtr(-1); private const UInt32 SWP_NOSIZE = 0x0001; private const UInt32 SWP_NOMOVE = 0x0002; private const UInt32 TOPMOST_FLAGS = SWP_NOMOVE | SWP_NOSIZE; private static IE GetIe(Element element) { if (element == null) return null; var container = element.DomContainer; while (container as IE == null) container = container.DomContainer; return (IE)container; } private static void 
SaveBitmapForCallbackArgs(ScreenshotCallbackArgs args) { InternetExplorerClass iex = args.InternetExplorerClass; SaveBitmap(args.ScreenshotPath, iex.Left, iex.Top, iex.Width, iex.Height); } private static void SaveElementBitmapForCallbackArgs(Element element, ScreenshotCallbackArgs args) { InternetExplorerClass iex = args.InternetExplorerClass; Rectangle bounds = GetElementBounds(element); SaveBitmap(args.ScreenshotPath, iex.Left + bounds.Left, iex.Top + bounds.Top, bounds.Width, bounds.Height); } /// <summary> /// This method is used instead of element.NativeElement.GetElementBounds because that /// method has a bug (http://sourceforge.net/tracker/?func=detail&aid=2994660&group_id=167632&atid=843727). /// </summary> private static Rectangle GetElementBounds(Element element) { var ieElem = element.NativeElement as WatiN.Core.Native.InternetExplorer.IEElement; IHTMLElement elem = ieElem.AsHtmlElement; int left = elem.offsetLeft; int top = elem.offsetTop; for (IHTMLElement parent = elem.offsetParent; parent != null; parent = parent.offsetParent) { left += parent.offsetLeft; top += parent.offsetTop; } return new Rectangle(left, top, elem.offsetWidth, elem.offsetHeight); } private static void SaveBitmap(string path, int left, int top, int width, int height) { using (var bitmap = new Bitmap(width, height)) { using (Graphics g = Graphics.FromImage(bitmap)) { g.CopyFromScreen( new Point(left, top), Point.Empty, new Size(width, height) ); } bitmap.Save(path, ImageFormat.Jpeg); } } private class ScreenshotCallbackArgs { public InternetExplorerClass InternetExplorerClass { get; set; } public string ScreenshotPath { get; set; } } } }

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as is supported in the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size). For some of the moderately-sized source files, small blocks (256KB) are best. As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant. The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file. (click chart for full size image) The above is another view of the same data as the prior chart, just with the axis changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes.
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files. Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions. Related Resources: Source code for upload test application Source code for random file generator OData feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
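
    As a reference point, a hedged sketch of the blocked/parallel upload the post describes, written against the 1.x-era Microsoft.WindowsAzure.StorageClient API from memory; the linked test harness above is the authoritative version, and error handling and retries are omitted here.

        // Sketch: split a local file into fixed-size blocks, PutBlock() them with
        // Parallel.For, then PutBlockList() to commit the blob.
        using System;
        using System.IO;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.StorageClient;

        static void UploadInBlocks(CloudBlockBlob blob, string path, int blockSize)
        {
            byte[] data = File.ReadAllBytes(path);               // fine for a test harness
            int blockCount = (data.Length + blockSize - 1) / blockSize;
            var blockIds = new string[blockCount];

            Parallel.For(0, blockCount, i =>
            {
                int offset = i * blockSize;
                int length = Math.Min(blockSize, data.Length - offset);
                string blockId = Convert.ToBase64String(BitConverter.GetBytes(i)); // equal-length ids
                using (var ms = new MemoryStream(data, offset, length))
                {
                    blob.PutBlock(blockId, ms, null);             // third argument: optional content MD5
                }
                blockIds[i] = blockId;
            });

            blob.PutBlockList(blockIds);                          // commit the uploaded blocks as one blob
        }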

    Read the article

  • cpptask ordering of static libraries in gcc command line

    - by AC
    How do I force cpptask to move the static libraries to the end on arg list issued to the compiler? Here is the clause I am using <cpptasks:cc description="appname" subsystem="console" objdir="obj" outfile="dist/app_test"> <compiler refid="testsslcc" /> <linkerarg value="-L${libdir}" /> <linkerarg value="-L/usr/local/devl/lib" /> <linkerarg value="-Wl,-rpath,../lib" /> <libset libs="unittest ${libs} dsg readline ncurses gcov" /> <fileset dir="test/obj" includes="main.o" /> <fileset dir="." includes="${TCFILES}" /> <fileset dir="../lib" includes="libboost_thread.a libboost_date_time.a" /> </cpptasks:cc> when this executes, libboost_thread.a libboost_date_time.a are first files in the argument list passed the compiler, gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k ../../lib/libboost_date_time.a ../../lib/libboost_thread.a x.cpp ... which causes compiler error. By manually moving them to the end of the argument list, the application compiles without error. gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k x.cpp ... ../../lib/libboost_date_time.a ../../lib/libboost_thread.a And yes I have tried changing the order in the xml, and that of course didn't work. For now I am using an exec task to call gcc with the files in the correct order but this of course is a hack.

    Read the article

  • can't figure out serving static images in django dev environment

    - by photographer
    I've read the article (and few others on the subject), but still can't figure out how to show an image unless a link to a file existing on a web-service is hard-coded into the html template. I've got in urls.py: ... (r'^galleries/(landscapes)/(?P<path>.jpg)$', 'django.views.static.serve', {'document_root': settings.MEDIA_URL}), ... where 'landscapes' is one of the albums I'm trying to show images from. (There are several more of them.) In views.py it calls the template with code like that: ... <li><img src=160.jpg alt='' title='' /></li> ... which resolves the image link in html into: http://127.0.0.1:8000/galleries/landscapes/160.jpg In settings.py I have: MEDIA_ROOT = 'C:/siteURL/galleries/' MEDIA_URL = 'http://some-good-URL/galleries/' In file system there is a file C:/siteURL/galleries/landscapes/160.jpg and I do have the same file at http://some-good-URL/galleries/landscapes/160.jpg No matter what I use in urls.py — MEDIA_ROOT or MEDIA_URL (with expectation to have either local images served or from the web-server) — I get following in the source code in the browser: <li><img src=160.jpg /></li> There is no image shown in the browser. What am I doing wrong?
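
    For comparison, a hedged sketch of the usual development-only pattern (Django 1.x style, not the poster's exact project layout): serve everything under the URL prefix from MEDIA_ROOT via a single catch-all path group, and build the image src from MEDIA_URL rather than a bare relative filename.

        # urls.py (development only) -- sketch
        from django.conf import settings
        from django.conf.urls.defaults import patterns

        urlpatterns = patterns('',
            # (?P<path>.*) captures "landscapes/160.jpg" and serve() joins it onto document_root
            (r'^galleries/(?P<path>.*)$', 'django.views.static.serve',
             {'document_root': settings.MEDIA_ROOT}),
        )

        # In the template, prefix with MEDIA_URL instead of emitting a bare filename:
        #   <li><img src="{{ MEDIA_URL }}landscapes/160.jpg" alt="" /></li>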

    Read the article

  • nginx serving Django static media | 502 bad gateway

    - by MMRUser
    I'm trying to serve Django static media through nginx. Here's my nginx.conf: server { listen 7777; listen localhost:7777; server_name example.com; location / { proxy_pass http://localhost:7777; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } location /test-app-media/ { root /sites/mysite/staticmedia/; expires max; } } but it gives a 502 bad gateway error. The path /sites/mysite/staticmedia/ is in the nginx root/ - is that the problem? Django is running on Apache 2.2 + mod_wsgi, nginx 0.7.65. Thanks.
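
    For what it's worth, a hedged sketch of a config without the loop: as written above, location / proxies back to the same port nginx listens on (7777), so requests re-enter nginx instead of reaching the backend, and the static location uses root, which appends the full request URI to the path. The backend port below is an assumption.

        server {
            listen 7777;
            server_name example.com;

            # Static media: alias replaces the matched prefix, so
            # /test-app-media/foo.png -> /sites/mysite/staticmedia/foo.png
            location /test-app-media/ {
                alias /sites/mysite/staticmedia/;
                expires max;
            }

            # Everything else goes to the Apache/mod_wsgi backend (port assumed)
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }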

    Read the article

  • Get UiBinder widget to display inline instead of block

    - by Steve Armstrong
    I'm trying to get my UiBinder-defined widget to display inline, but I can't. My current code is: <ui:UiBinder xmlns:ui='urn:ui:com.google.gwt.uibinder' xmlns:g='urn:import:com.google.gwt.user.client.ui'> <ui:style> .section { border: 1px solid #000000; width: 330px; padding: 5px; display: run-in; } </ui:style> <g:HTMLPanel> <div class="{style.section}"> <div ui:field="titleSpan" class="{style.title}" /> <div class="{style.contents}"> <g:VerticalPanel ui:field="messagesPanel" /> </div> </div> </g:HTMLPanel> </ui:UiBinder> This works fine in terms of how the widget looks internally, but I want to throw a bunch of these widgets into a FlowPanel and have them flow when the window is resized. The HTMLPanel is a div, but I can't get the display attribute to assign. I can't force the style name, since the following throws an error: <g:HTMLPanel styleNames="{style.section}"> And I can assign an additional style, but it doesn't apply the display setting. <g:HTMLPanel addStyleNames="{style.section}"> This displays the border and sets the size, as expected, but it doesn't flow. Firebug shows the styles on the div are border, width, and padding, but no display. Is there a way to make a widget in UiBinder so that it'll display inline instead of block? And if so, can I make it compatible with having a VerticalPanel inside (can I do it without making the entire widget pure HTML without any GWT widgets)? PS: I saw question 2257924 but it hasn't had any answers lately, and he seems to be focused on getting a tag, not specifically getting inline layout. I don't care directly about , if I can just get the top-level tag for my widget to flow inline, I'm happy.
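
    A hedged sketch of one way this is commonly handled (not verified against the poster's exact widget): apply the style through the singular styleName attribute and use display: inline-block, which flows inside a FlowPanel, instead of run-in.

        <!-- Sketch: the wrapping HTMLPanel renders as a div; inline-block lets those
             divs flow and wrap inside a FlowPanel when the window is resized. -->
        <ui:style>
          .section {
            border: 1px solid #000000;
            width: 330px;
            padding: 5px;
            display: inline-block;
          }
        </ui:style>

        <g:HTMLPanel styleName="{style.section}">
          <g:VerticalPanel ui:field="messagesPanel" />
        </g:HTMLPanel>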

    Read the article

  • Using dynamic parameters in email publisher subjectSettings block with CruiseControl.Net

    - by Joe
    I am trying to get dynamic parameters to be used in the email publisher's subjectSettings block. For example, <project> ... <parameters> <textParameter> <name>version</name> <display>Version to install</display> <description>The version to install.</description> <required>true</required> </textParameter> </parameters> <tasks> ... </tasks> <publishers> .... <email includeDetails="TRUE"> <from>buildmaster</from> <mailhost>localhost</mailhost> <users> <user name="Joe" group="buildmaster" address="jdavies" /> </users> <groups> <group name="buildmaster"> <notifications> <notificationType>Always</notificationType> </notifications> </group> <group name="users"> <notifications> <notificationType>Success</notificationType> <notificationType>Fixed</notificationType> </notifications> </group> </groups> <subjectSettings> <subject buildResult="Success" value="Version ${version} installed." /> <subject buildResult="Fixed" value="Version ${version} fixed and installed." /> </subjectSettings> <modifierNotificationTypes> <notificationType>Success</notificationType> </modifierNotificationTypes> </email> </project> I have tried using ${version} and $[version]. When I use $[version], the entire subject line is empty! Are dynamic parameters supported in this case, and if so, what am I doing wrong?

    Read the article

  • Difference between dynamic and static 2D arrays in C++

    - by snorlaks
    Hello, I'm using an open-source library called wxFreeChart to draw some XY charts. In the example there is code which uses a static array as a series: double data1[][2] = { { 10, 20, }, { 13, 16, }, { 7, 30, }, { 15, 34, }, { 25, 4, }, }; dataset->AddSerie((double *) data1, WXSIZEOF(data1)); WXSIZEOF is a macro defined as sizeof(array)/sizeof(array[0]). In this case everything works great, but in my program I'm using dynamic arrays (sized according to the user's input). I made a test and wrote code like the following: double **dynamicArray = NULL; dynamicArray = new double *[5] ; for( int i = 0 ; i < 5 ; i++ ) dynamicArray[i] = new double[2]; dynamicArray [0][0] = 10; dynamicArray [0][1] = 20; dynamicArray [1][0] = 13; dynamicArray [1][1] = 16; dynamicArray [2][0] = 7; dynamicArray [2][1] = 30; dynamicArray [3][0] = 15; dynamicArray [3][1] = 34; dynamicArray [4][0] = 25; dynamicArray [4][1] = 4; dataset->AddSerie((double *) *dynamicArray, WXSIZEOF(dynamicArray)); But it doesn't work correctly - I mean, the points aren't drawn. I wonder if there is any way I can "cheat" that method and give it a dynamic array in a form it understands, so that it reads the data from the correct place. Thanks for help
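
    A hedged sketch of the contiguous-allocation workaround; AddSerie is assumed to read the points from one flat block of interleaved x,y pairs, which is what the (double *) cast over a double[N][2] in the example suggests.

        // Sketch: allocate all points as one contiguous block of interleaved x,y pairs,
        // matching the memory layout of the static double[N][2] in the example.
        const size_t pointCount = 5;                  // in practice: taken from user input
        double* flat = new double[pointCount * 2];

        for (size_t i = 0; i < pointCount; ++i) {
            flat[i * 2]     = xValues[i];             // xValues/yValues: wherever the input lives
            flat[i * 2 + 1] = yValues[i];
        }

        // WXSIZEOF only works on real arrays, so pass the point count directly.
        dataset->AddSerie(flat, pointCount);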

    Read the article

  • Need advice on OOP philosophy

    - by David Jenings
    I'm trying to get the wheels turning on a large project in C#. My previous experience is in Delphi, where by default every form was created at applicaton startup and form references where held in (gasp) global variables. So I'm trying to adapt my thinking to a 100% object oriented environment, and my head is spinning just a little. My app will have a large collection of classes Most of these classes will only really need one instance. So I was thinking: static classes. I'm not really sure why, but much of what I've read here says that if my class is going to hold a state, which I take to mean any property values at all, I should use a singleton structure instead. Okay. But there are people out there who for reasons that escape me, think that singletons are evil too. None of these classes is in danger of being used anywhere except in this program. So they could certainly work fine as regular objects (vs singletons or static classes) Then there's the issue of interaction between objects. I'm tempted to create a Global class full of public static properties referencing the single instances of many of these classes. I've also considered just making them properties (static or instance, not sure which) of the MainForm. Then I'd have each of my classes be aware of the MainForm as Owner. Then the various objects could refer to each other as Owner.Object1, Owner.Object2, etc. I fear I'm running out of electronic ink, or at least taxing the patience of anyone kind enough to have stuck with me this long. I hope I have clearly explained my state of utter confusion. I'm just looking for some advice on best practices in my situation. All input is welcome and appreciated. Thanks in advance, David Jennings

    Read the article

  • Implementing A Static NSOutlineView

    - by spamguy
    I'm having a hard time scraping together enough snippets of knowledge to implement an NSOutlineView with a static, never-changing structure defined in an NSArray. This link has been great, but it's not helping me grasp submenus. I'm thinking they're just nested NSArrays, but I have no clear idea. Let's say we have an NSArray inside an NSArray, defined as NSArray *subarray = [[NSArray alloc] initWithObjects:@"2.1", @"2.2", @"2.3", @"2.4", @"2.5", nil]; NSArray *ovStructure = [[NSArray alloc] initWithObjects:@"1", subarray, @"3", nil]; The text is defined in outlineView:objectValueForTableColumn:byItem:. - (id)outlineView:(NSOutlineView *)ov objectValueForTableColumn:(NSTableColumn *)tableColumn byItem:(id)ovItem { if ([[[tableColumn headerCell] stringValue] compare:@"Key"] == NSOrderedSame) { // Return the key for this item. First, get the parent array or dictionary. // If the parent is nil, then that must be root, so we'll get the root // dictionary. id parentObject = [ov parentForItem:ovItem] ? [ov parentForItem:ovItem] : ovStructure; if ([parentObject isKindOfClass:[NSArray class]]) { // Arrays don't have keys (usually), so we have to use a name // based on the index of the object. NSLog([NSString stringWithFormat:@"%@", ovItem]); //return [NSString stringWithFormat:@"Item %d", [parentObject indexOfObject:ovItem]]; return (NSString *) [ovStructure objectAtIndex:[ovStructure indexOfObject:ovItem]]; } } else { // Return the value for the key. If this is a string, just return that. if ([ovItem isKindOfClass:[NSString class]]) { return ovItem; } else if ([ovItem isKindOfClass:[NSDictionary class]]) { return [NSString stringWithFormat:@"%d items", [ovItem count]]; } else if ([ovItem isKindOfClass:[NSArray class]]) { return [NSString stringWithFormat:@"%d items", [ovItem count]]; } } return nil; } The result is '1', '(' (expandable), and '3'. NSLog shows the array starting with '(', hence the second item. Expanding it causes a crash due to going 'beyond bounds.' I tried using parentForItem: but couldn't figure out what to compare the result to. What am I missing?
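
    For reference, a hedged sketch of the two structural data-source methods for a nested-NSArray model; the out-of-bounds crash above appears to come from looking child strings up in ovStructure (the top-level array) instead of in their own parent. With these in place, objectValueForTableColumn only has to render the item it is handed.

        // Sketch: treat any NSArray as an expandable node and everything else as a leaf.
        - (NSInteger)outlineView:(NSOutlineView *)ov numberOfChildrenOfItem:(id)item {
            if (item == nil) return [ovStructure count];          // nil means the invisible root
            return [item isKindOfClass:[NSArray class]] ? [item count] : 0;
        }

        - (id)outlineView:(NSOutlineView *)ov child:(NSInteger)index ofItem:(id)item {
            NSArray *parent = (item == nil) ? ovStructure : item;
            return [parent objectAtIndex:index];
        }

        - (BOOL)outlineView:(NSOutlineView *)ov isItemExpandable:(id)item {
            return [item isKindOfClass:[NSArray class]];
        }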

    Read the article

  • Writing a JMS Publisher without "public static void main"

    - by The Elite Gentleman
    Hi guys, Every example I've seen on the web, e.g. http://www.codeproject.com/KB/docview/jms_to_jms_bridge_activem.aspx, creates a publisher and subscriber with a public static void main method. I don't think that'll work for my web application. I'm learning JMS and I've set up Apache ActiveMQ to run on JBoss 5 and Tomcat 6 (with no glitches). I'm writing a messaging JMS service that needs to send email asynchronously. I've already written a JMS subscriber that receives the message (the class implements MessageListener). My question is simple: How do I write a publisher so that my web applications can call it? Does it have to be published somewhere? My thought is to create a publisher with a no-argument constructor and get the queue connection factory, etc. from the JNDI pool (in the constructor). Is my idea correct? How do I subscribe my subscriber to the queue receiver? (So far, the subscriber has no constructor, and if I write a constructor, do I always subscribe myself to the queue receiver?) Thanks for your help; sorry if my terminology is not up to scratch - there are too many Java terms and I get lost sometimes (maybe a Java GPS will do! :-) ) PS Is there a tutorial out there that explains how to write a "better" (better can mean anything, but in my case it's all about performance under high-demand requests) JMS publisher and subscriber that I can run on an application server such as JBoss or GlassFish? Don't forget that the JMS application will need "guaranteed" uptime as many applications will use this.
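
    A hedged sketch of the no-main-method shape being described: a plain class whose constructor looks the connection factory and queue up in JNDI, so any servlet or service class in the web app can instantiate it and call send(). The JNDI names are assumptions; they depend on how ActiveMQ is bound in JBoss.

        import javax.jms.*;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        // Sketch: a web-app-friendly publisher with no main() method.
        public class EmailPublisher {
            private final ConnectionFactory factory;
            private final Queue queue;

            public EmailPublisher() throws NamingException {
                Context ctx = new InitialContext();
                factory = (ConnectionFactory) ctx.lookup("ConnectionFactory"); // assumed JNDI name
                queue = (Queue) ctx.lookup("queue/emailQueue");                // assumed JNDI name
            }

            public void send(String body) throws JMSException {
                Connection connection = factory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    producer.send(session.createTextMessage(body));
                } finally {
                    connection.close(); // closing the connection also closes its sessions/producers
                }
            }
        }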

    Read the article

  • Usage of static analysis tools - with Clear Case/Quest

    - by boyd4715
    We are in the process of defining our software development process and wanted to get some feedback from the group about this topic. Our team is spread out - US, Canada and India - and I would like to put in place some simple standard rules that all teams will apply to their code. We make use of ClearCase/ClearQuest and RAD. I have been looking at PMD, CPP, Checkstyle and FindBugs as a start. My thought is to just put these into Ant and have the developers run them manually. I realize that doing this requires some trust that each developer will actually run them. The other thought is to add some builders to the IDE which would run a subset of the rules (to keep the build process light) and then add another, heavier set when they check in the code. Another idea is to make use of something like CruiseControl and have it set up to run these static analysis tools along with the unit tests whenever ClearCase/ClearQuest is idle. Wondering if others have done this and whether it was successful, or can provide lessons learned.

    Read the article

  • setting up 301 redirects: dynamic urls to static urls

    - by MS
    We are currently using a template-based website and are hoping to move to a site with static urls. Our domain will stay the same. I understand that using 301 redirects in a .htaccess file is the preferred method -- and the one that has the highest chance of preserving our google rankings. I am still new at all this and am having a hard time figuring out the proper way to code it all. Over a hundred of our pages are indexed. They all have a similar URL but with different pageIDs: http://www.realestate-bigbear.com/Nav.aspx/Page=%2fPageManager%2fDefault.aspx%2fPageID%3d2020765 Some link out to provided content, ex. /RealEstateNews/Default.aspx Then there are many that flow from the main featured listings page: /ListNow/Default.aspx Down to all the specific properties.. where the PropertyId changes /ListNow/Property.aspx?PropertyID=2048098 would a simple set of codes work... like the following.... redirect 301 /Nav.aspx/Page=%2fPageManager%2fDefault.aspx%2fPageID%3d2020765 www.realestate.bigbear.com/SearchBigBearMLS.htm or do I need to do something entirely different?
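
    A hedged sketch of what such rules tend to look like for URLs of this shape; note that the PropertyID lives in the query string, which a plain Redirect 301 cannot match, so mod_rewrite is used instead. The target static filenames below are made-up placeholders.

        # .htaccess sketch -- one rule per old URL / new static page
        RewriteEngine On

        # Plain path-based pages can be mapped directly
        RewriteRule ^RealEstateNews/Default\.aspx$ /news.htm [R=301,L]

        # Query-string pages need a RewriteCond; the trailing "?" drops the old query string
        RewriteCond %{QUERY_STRING} ^PropertyID=2048098$
        RewriteRule ^ListNow/Property\.aspx$ /property-2048098.htm? [R=301,L]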

    Read the article

  • need near-100%-uptime third-party web-accessible hosting for static web resources

    - by Jared Henderson
    I hope this makes sense: my business sells a website template, we currently have about 10,000 users. For various reasons that are unimportant to this question, I try to keep the file size of the zipped template we give them as small as possible. Because of this, I have taken a bunch of images and a couple of static files used by the template and moved them to external hosting. They are referenced by absolute URL in the css and markup, instead of shipping all of those images and files with every template. So, basically 10,000+ and growing users are requesting images and files from a third-party host. I don't use my own webhosting for this because I still kind of use a medium-cheap shared hosting for my website, and if it goes down, 10,000+ users are potentially effected. Currently I'm having the template directly access files inside of an open-source google-code project that I created for just this purpose. But, that seems like a bastardization of what a google-code repository is for, and plus, google code (i've found out) often spews 502 bad gateway errors for hours at a time. So, anyway, my question is: where is the right kind of place to host these? Obviously I'm willing to pay. My main needs are speed and uptime, since the images and files are being requested from thousands of different websites every day. Is this something that I should use Amazon S3 for? I'm guessing there's some kind of service exactly for this kind of need, but I'm at a loss to figure out what it is.

    Read the article

  • iPhone cocos2d sprites disappearing

    - by jer
    I've been working on a game and implementing the physics stuff with chipmunk. All was going fine on the cocos2d part until the integration with chipmunk. A bit of background: The game is a game with blocks. Levels are defined in a property list, where positions, size of the blocks, gravitational forces, etc., are all defined for each block to be shown in the level. The problem is with the blocks showing up. I have a method on my BlockLayer class which is part of my game's main scene. Upon creation of the layer, the property list is read, and all the blocks are created. The following method is called to create the blocks: - (void)createBlock:(Block*)block withAssets:(NSBundle*)assets { Sprite* sprite; switch(block.blockColour) { case kBlockColourGreen: sprite = [Sprite spriteWithFile:[assets pathForResource:@"green" ofType:@"png" inDirectory:@"Blocks"]]; break; case kBlockColourOrange: sprite = [Sprite spriteWithFile:[assets pathForResource:@"orange" ofType:@"png" inDirectory:@"Blocks"]]; break; case kBlockColourRed: sprite = [Sprite spriteWithFile:[assets pathForResource:@"red" ofType:@"png" inDirectory:@"Blocks"]]; break; case kBlockColourBlue: sprite = [Sprite spriteWithFile:[assets pathForResource:@"blue" ofType:@"png" inDirectory:@"Blocks"]]; break; } sprite.position = block.bounds.origin; [self addChild:sprite]; if(block.blockColour == kBlockColourGreen || block.blockColour == kBlockColourRed) space->gravity = cpvmult(cpv(0, 10), 1000); cpVect verts[] = { cpv(-block.bounds.size.width, -block.bounds.size.height), cpv(-block.bounds.size.width, block.bounds.size.height), cpv(block.bounds.size.width, block.bounds.size.height), cpv(block.bounds.size.width, -block.bounds.size.height) }; cpBody* blockBody = cpBodyNew([block.mass floatValue], INFINITY); blockBody->p = cpv(block.bounds.origin.x, block.bounds.origin.y); blockBody->v = cpvzero; cpSpaceAddBody(space, blockBody); cpShape* blockShape = cpPolyShapeNew(blockBody, 4, verts, cpvzero); blockShape->e = 0.9f; blockShape->u = 0.9f; blockShape->data = sprite; cpSpaceAddShape(space, blockShape); } With the above code, the sprites never show up. However, if I comment out the "cpSpaceAddBody(space, blockBody);" line, the sprites show up. The position and size of the blocks are stored in the "bounds" property of instances of the Block class, which is a CGRect. Not sure if it's important, but the orientation of the app is in landscape left, and all the coordinates are based on that orientation. Any help would be greatly appreciated.
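
    One thing worth checking, shown as a hedged sketch rather than a diagnosis: chipmunk only moves the cpBody structs when the space is stepped, and nothing above moves the sprites to follow their bodies (and with gravity of cpv(0,10) times 1000 the bodies leave the screen very quickly once simulation starts). A typical scheduled tick looks roughly like this, assuming the shapes are kept in an array when they are created.

        // Sketch: step the space, then move each sprite to its body's position.
        // blockShapes is an assumed NSMutableArray of cpShape* wrapped in NSValue.
        - (void)tick:(ccTime)dt {
            cpSpaceStep(space, dt);
            for (NSValue *wrapped in blockShapes) {
                cpShape *shape = (cpShape *)[wrapped pointerValue];
                Sprite *sprite = (Sprite *)shape->data;   // the sprite stored in shape->data above
                sprite.position = ccp(shape->body->p.x, shape->body->p.y);
            }
        }
        // Scheduled once from init with: [self schedule:@selector(tick:)];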

    Read the article

  • JS: capture a static snapshot of an object at a point in time with a method

    - by Barney
    I have a JS object I use to store DOM info for easy reference in an elaborate GUI. It starts like this: var dom = { m:{ old:{}, page:{x:0,y:0}, view:{x:0,y:0}, update:function(){ this.old = this; this.page.x = $(window).width(); this.page.y = $(window).height(); this.view.x = $(document).width(); this.view.y = window.innerHeight || $(window).height(); } I call the function on window resize: $(window).resize(function(){dom.m.update()}); The problem is with dom.m.old. I would have thought that by calling it in the dom.m.update() method before the new values for the other properties are assigned, at any point in time dom.m.old would contain a snapshot of the dom.m object as of the last update – but instead, it's always identical to dom.m. I've just got a pointless recursion method. Why isn't this working? How can I get a static snapshot of the object that won't update without being specifically told to? Comments explaining how I shouldn't even want to be doing anything remotely like this in the first place are very welcome :)
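
    A hedged sketch of the fix: this.old = this stores a reference to the very object being mutated, so old can never lag behind; copying the values before overwriting them gives a real snapshot.

        update: function () {
            // Snapshot the current values first (plain copies, not references)
            this.old = {
                page: { x: this.page.x, y: this.page.y },
                view: { x: this.view.x, y: this.view.y }
            };
            // ...then record the new state
            this.page.x = $(window).width();
            this.page.y = $(window).height();
            this.view.x = $(document).width();
            this.view.y = window.innerHeight || $(window).height();
        }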

    Read the article

  • GlassFish docroot internationalization

    - by Mr.J4mes
    At the moment, I am using the Apache web server to redirect all HTTP requests to port 8080 to be served by the GlassFish app server. Just like Apache, GlassFish has a docroot folder to store static pages. I've tried googling for a while but I could not figure out whether there's a way to set up internationalization for GlassFish's docroot. I'd be very grateful if you could give me a hint or a link to a tutorial regarding this matter.

    Read the article

  • How to maximise the largest contiguous block of memory in the Large Object Heap

    - by Unsliced
    The situation is that I am making a WCF call to a remote server which is returns an XML document as a string. Most of the time this return value is a few K, sometimes a few dozen K, very occasionally a few hundred K, but very rarely it could be several megabytes (first problem is that there is no way for me to know). It's these rare occasions that are causing grief. I get a stack trace that starts: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown. at System.Xml.BufferBuilder.AddBuffer() at System.Xml.BufferBuilder.AppendHelper(Char* pSource, Int32 count) at System.Xml.BufferBuilder.Append(Char[] value, Int32 start, Int32 count) at System.Xml.XmlTextReaderImpl.ParseText() at System.Xml.XmlTextReaderImpl.ParseElementContent() at System.Xml.XmlTextReaderImpl.Read() at System.Xml.XmlTextReader.Read() at System.Xml.XmlReader.ReadElementString() at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMDRQuery.Read2_getMarketDataResponse() at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer2.Deserialize(XmlSerializationReader reader) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle) at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) I've read around and it is because the Large Object Heap is just getting too fragmented, so even preceding the call with a quick check to StringBuilder.EnsureCapacity just causes the OutOfMemoryException to be thrown earlier (and because I'm guessing at what's needed, it might not actually need that much so my check is causing more problems than it is solving). Some opinions are that there's not much I can do about it. Some of the questions I've asked myself: Use less memory - have you checked for leaks? Yes. The memory usage goes up and down, but there's no fundamental growth that guarantees this to happen. Some of the times it fails, it succeeded at that stage previously. Transfer smaller amounts Not an option, this is a third party web service over which I have no control (or at least it would take a long time to resolve, in the meantime I still have a problem) Can you do something to the LOH to make it less likely to fail? ... now this is most fruitful course. It's a 32-bit process (it has to be for various political, technical and boring reasons) but there's normally hundreds of meg free (multiples of the largest amount for which we've seen failures). Can we monitor the LOH? Using perfmon I can track the size of the heaps, but I don't think there's a way to monitor the largest available contiguous block of memory. Question is: any advice or suggestions for things to try?

    Read the article

  • Cannot access response.body inside an after filter block in Sinatra 1.0

    - by Petr Vostrel
    I'm struggling with a strange issue. According to http://github.com/sinatra/sinatra (section Filters), a response object is available in after filter blocks in Sinatra 1.0. However, while response.status is correctly accessible, I cannot see a non-empty response.body from my routes inside the after filter. I have this rackup file: config.ru require 'app' run TestApp Then the Sinatra 1.0.b gem is installed using: gem install --pre sinatra And this is my tiny app with a single route: app.rb require 'rubygems' require 'sinatra/base' class TestApp < Sinatra::Base set :root, File.dirname(__FILE__) get '/test' do 'Some response' end after do halt 500 if response.empty? # used 500 just for illustration end end Now, I would like to access the response inside the after filter. When I run this app and access the /test URL, I get a 500 response as if the response were empty, but the response clearly is 'Some response'. Along with my request to /test, a separate request to /favicon.ico is issued by the browser, and that returns 404 as there is neither a route nor a static file for it - for that request I would expect the 500 status to be returned, as the response should be empty. In the console, I can see that within the after filter the response to /favicon.ico is something like 'Not found' and the response to /test really is empty, even though there is a response returned by the route. What am I missing?

    Read the article
