Search Results

Search found 9266 results on 371 pages for 'alex block'.


  • javascript onmouseover hide a div block

    - by Loki
    So this is my code so far: JS: <script type="text/javascript"> function Hide(srcField) { var x = srcField.getAttribute('name'); var string = new RegExp("hide_ID",'gi'); switch (x) { case "1": var dataRows= document.getElementsByID("obrazovanje"); alert (dataRows[0].innerHTML); dataRows[0].className.replace('',string); break; case "2": var dataRows= document.getElementsByID("rad_iskustvo"); dataRows[0].className.replace('',string); break; case "3": var dataRows= document.getElementsByID("strani_jezici"); dataRows[0].className.replace('',string); break; case "4": var dataRows= document.getElementsByID("znanja_vjestine"); dataRows[0].className.replace('',string); break; case "5": var dataRows= document.getElementsByID("osobine_interesi"); dataRows[0].className.replace('',string); break; } } </script> CSS: .hide_ID, { display:none } HTML: <a name="1"><h4><span name="1" onmouseover="Hide(this)">OBRAZOVANJE:</span></h4></a> <div ID="obrazovanje"> <ul> <li>2001.-2005. elektrotehnicar</li> <li>2009.-2012. racunarstvo</li> </ul> </div> The idea is that I want to hide the div block when I hover over the title that's in the h4, but it doesn't seem to hide it. Any ideas? I started using replace but it still didn't work; before that it was just 'dataRows[0].className = "hide_ID"', but that didn't work either.
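
    A hedged sketch of how the hover-hide is usually done (an illustration, not the poster's corrected file): document.getElementsByID does not exist (the method is getElementById and it returns a single element, not a collection), String.replace returns a new string instead of changing className, and the stray comma in ".hide_ID," invalidates the CSS rule. Reusing the ids from the question:

      <style type="text/css">
        .hide_ID { display: none; }
      </style>
      <script type="text/javascript">
        // Map the span's name attribute to the id of the div it should hide.
        var sections = { "1": "obrazovanje", "2": "rad_iskustvo", "3": "strani_jezici",
                         "4": "znanja_vjestine", "5": "osobine_interesi" };
        function Hide(srcField) {
          var id = sections[srcField.getAttribute('name')];
          if (!id) return;
          var div = document.getElementById(id);   // one element, not a collection
          div.className = "hide_ID";               // assign, don't call replace()
        }
      </script>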

    Read the article

  • alternative to #include within namespace { } block

    - by Jeff
    Edit: I know that method 1 is essentially invalid and will probably use method 2, but I'm looking for the best hack or a better solution to mitigate rampant, mutable namespace proliferation. I have multiple class or method definitions in one namespace that have different dependencies, and would like to use the fewest namespace blocks or explicit scopings possible but while grouping #include directives with the definitions that require them as best as possible. I've never seen any indication that any preprocessor could be told to exclude namespace {} scoping from #include contents, but I'm here to ask if something similar to this is possible: (see bottom for explanation of why I want something dead simple) // NOTE: apple.h, etc., contents are *NOT* intended to be in namespace Foo! // would prefer something most this: namespace Foo { #include "apple.h" B *A::blah(B const *x) { /* ... */ } #include "banana.h" int B::whatever(C const &var) { /* ... */ } #include "blueberry.h" void B::something() { /* ... */ } } // namespace Foo ... // over this: #include "apple.h" #include "banana.h" #include "blueberry.h" namespace Foo { B *A::blah(B const *x) { /* ... */ } int B::whatever(C const &var) { /* ... */ } void B::something() { /* ... */ } } // namespace Foo ... // or over this: #include "apple.h" namespace Foo { B *A::blah(B const *x) { /* ... */ } } // namespace Foo #include "banana.h" namespace Foo { int B::whatever(C const &var) { /* ... */ } } // namespace Foo #include "blueberry.h" namespace Foo { void B::something() { /* ... */ } } // namespace Foo My real problem is that I have projects where a module may need to be branched but have coexisting components from the branches in the same program. I have classes like FooA, etc., that I've called Foo::A in the hopes being able to branch less painfully as Foo::v1_2::A, where some program may need both a Foo::A and a Foo::v1_2::A. I'd like "Foo" or "Foo::v1_2" to show up only really once per file, as a single namespace block, if possible. Moreover, I tend to prefer to locate blocks of #include directives immediately above the first definition in the file that requires them. What's my best choice, or alternatively, what should I be doing instead of hijacking the namespaces?
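
    On the versioning concern in the last paragraph, a hedged alternative to namespace hijacking is C++11 inline namespaces, which let Foo::A and Foo::v1_2::A name the same current class while older branches stay addressable. A toy, self-contained sketch (an assumption that a C++11 compiler is acceptable; stand-in classes replace the real apple.h/banana.h contents):

      #include <iostream>

      namespace Foo {
        namespace v1_0 {
          class A { public: int value() const { return 1; } };
        }
        inline namespace v1_2 {          // "inline" makes Foo::A mean Foo::v1_2::A
          class A { public: int value() const { return 2; } };
        }
      }

      int main() {
        Foo::A current;                  // resolves to the inline (current) branch
        Foo::v1_0::A old;                // the older branch stays reachable by full name
        std::cout << current.value() << " " << old.value() << "\n";   // prints "2 1"
        return 0;
      }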

    Read the article

  • iOS: Assignment to iVar in Block (ARC)

    - by manmal
    I have a readonly property isFinished in my interface file: typedef void (^MyFinishedBlock)(BOOL success, NSError *e); @interface TMSyncBase : NSObject { BOOL isFinished_; } @property (nonatomic, readonly) BOOL isFinished; and I want to set it to YES in a block at some point later, without creating a retain cycle to self: - (void)doSomethingWithFinishedBlock:(MyFinishedBlock)theFinishedBlock { __weak MyClass *weakSelf = self; MyFinishedBlock finishedBlockWrapper = ^(BOOL success, NSError *e) { [weakSelf willChangeValueForKey:@"isFinished"]; weakSelf -> isFinished_ = YES; [weakSelf didChangeValueForKey:@"isFinished"]; theFinishedBlock(success, e); }; self.finishedBlock = finishedBlockWrapper; // finishedBlock is a class ext. property } I'm unsure that this is the right way to do it (I hope I'm not embarrassing myself here ^^). Will this code leak, or break, or is it fine? Perhaps there is an easier way I have overlooked? SOLUTION Thanks to the answers below (especially Krzysztof Zablocki), I was shown the way to go here: Define isFinished as readwrite property in the class extension (somehow I missed that one) so no direct ivar assignment is needed, and change code to: - (void)doSomethingWithFinishedBlock:(MyFinishedBlock)theFinishedBlock { __weak MyClass *weakSelf = self; MyFinishedBlock finishedBlockWrapper = ^(BOOL success, NSError *e) { MyClass *strongSelf = weakSelf; strongSelf.isFinished = YES; theFinishedBlock(success, e); }; self.finishedBlock = finishedBlockWrapper; // finishedBlock is a class ext. property }

    Read the article

  • Block facebook from my website

    - by Joseph Szymborski
    I have a secure link direction service I'm running (expiringlinks.co). If I change the headers in php to redirect my visitors, then facebook is able to show a preview of the website I'm redirecting to when users send links to one another via facebook. I wish to avoid this. Right now, I'm using an AJAX call to get the URL and javascript to redirect, but it's causing problems for users who don't use javascript. Here are a number of ways I'd like to block facebook, but I can't seem to get working: I've tried blocking the facebook bot (facebookexternalhit/1.0 and facebookexternalhit/1.1) but it's not working, I don't think they're using them for this functionality. I'm thinking of blocking the facebook IP addresses, but I can't find all of them, and I don't think it'll work unless I get all of them. I've thought of using a CAPTCHA or even a button, but I can't bring myself to do that to my visitors. Not to mention I don't think anyone would use the site. I've searched the facebook docs for meta tags that would "opt-me out", but haven't found one, and doubt that I would trust it if I had. Any creative ideas or any idea how to implement the ones above? Thank you so much in advance!
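
    For reference, a hedged sketch of the user-agent approach described above, assuming the redirect endpoint is plain PHP and that the preview crawler still identifies itself as facebookexternalhit; the $destinationUrl variable is a stand-in for however the service looks up the target:

      <?php
      // Sketch only: serve a placeholder to Facebook's link-preview crawler and
      // redirect everyone else server-side.
      $agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
      if (stripos($agent, 'facebookexternalhit') !== false) {
          // Crawler: show a generic page instead of revealing the destination.
          header('HTTP/1.1 403 Forbidden');
          echo 'Link previews are not available for this URL.';
          exit;
      }
      // Normal visitor: do the real redirect (no JavaScript needed).
      header('Location: ' . $destinationUrl);   // $destinationUrl looked up earlier (illustrative)
      exit;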

    Read the article

  • Unable to step into interface implementation configured by unity application block

    - by Rahul
    I have configured a set of interface implementations with EntLib. unity block. The constructor of implementation classes work fine as soon as I run the application: 1. The interface to implement when I run the application the cctor runs fine, which shows that unity resolution was successful: But when I try to call a method of this class, the code just passes through without actually invoking the function of the implemented class: Edit: Added on June 11, 2012 Following is the Unity Configuration I have. (This is all the unity configuration I am doing) public class UnityControllerFactory : DefaultControllerFactory { private static readonly IUnityContainer container; private static UnityControllerFactory factory = null; static UnityControllerFactory() { container = new UnityContainer(); UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity"); section.Configure(container); factory = new UnityControllerFactory(); } public static UnityControllerFactory GetControllerFactory() { return factory; } protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType) { return container.Resolve(controllerType) as IController; } } I am unable to step into this code and the implementation simply skips out without executing anything. What is wrong here?

    Read the article

  • SPAN vs DIV (inline-block)

    - by blackjid
    Hi, is there any reason to use a <div style="display:inline-block"> instead of a <span> to lay out a webpage? Can I put content nested inside the span? What is valid and what isn't? Thanks! Is it OK to use this to make a 3x2 table-like layout? <div> <span> content1(divs,p, spans, etc) </span> <span> content2(divs,p, spans, etc) </span> <span> content3(divs,p, spans, etc) </span> </div> <div> <span> content4(divs,p, spans, etc) </span> <span> content5(divs,p, spans, etc) </span> <span> content6(divs,p, spans, etc) </span> </div>
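
    For what it's worth, a span is an inline element and (under the HTML rules of the time) should contain only inline content, so block-level children such as div or p inside a span are technically invalid; the usual way to get a wrapping grid is inline-block divs. A hedged sketch, with illustrative class names and widths:

      <style type="text/css">
        /* inline-block cells flow like words and wrap on resize */
        .cell { display: inline-block; width: 150px; vertical-align: top; }
      </style>
      <div>
        <div class="cell"><p>content1</p></div>
        <div class="cell"><p>content2</p></div>
        <div class="cell"><p>content3</p></div>
      </div>
      <div>
        <div class="cell"><p>content4</p></div>
        <div class="cell"><p>content5</p></div>
        <div class="cell"><p>content6</p></div>
      </div>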

    Read the article

  • How to: Access a second related model in a nested attribute builder block

    - by Joe Cairns
    I have a basic has_many through relationship: class Foo < ActiveRecord::Base has_many :bars, :dependent => :destroy has_many :wtfs :through => :bars accepts_nested_attributes_for :bars, :wtfs end On my crud forms I have a builder block for the wtf, but I need the label to come from the bar (an attribute called label for instance). What's the proper method to do this? Here's the most simple scaffold: <h1>New foo</h1> <% form_for(@foo) do |f| %> <%= f.error_messages %> <p> <%= f.label :name %><br /> <%= f.text_field :name %> </p> <h2>Bars</h2> <% f.fields_for :wtfs do |builder| %> <%= builder.hidden_field :bar_id %> <p> <%= builder.text_field :wtf_data_i_need_to_set %> </p> <% end %> <p> <%= f.submit 'Create' %> </p> <% end %> <%= link_to 'Back', foos_path %>
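
    A hedged sketch of one way to reach the bar from inside the wtf fields: builder.object is the record the nested builder is bound to, so the label text can come from its bar (an assumption that Wtf belongs_to :bar and that Bar has a label attribute, which the question implies but does not show; rows whose bar_id is not yet set would have no bar):

      <% f.fields_for :wtfs do |builder| %>
        <%= builder.hidden_field :bar_id %>
        <p>
          <%# builder.object is this Wtf record; its bar supplies the label text %>
          <%= builder.label :wtf_data_i_need_to_set, builder.object.bar.label %><br />
          <%= builder.text_field :wtf_data_i_need_to_set %>
        </p>
      <% end %>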

    Read the article

  • image display in block <a> - CSS

    - by blasteralfred
    I have a page like below; <style type="text/css"> #div1 { height: 100px; background-color: #CCCCCC; } #div2 { display: inline; height: 48px; margin: 0; padding: 0; position: relative; white-space: nowrap; } #div2 a { display: block; background-color: #FF9900; height: 51px; width: 150px; padding-right: 50px; text-decoration: none; word-wrap: break-word; white-space: normal; } #div2 img { border:0; float: right; } </style> <div id="div1"> <div id="div2"> <a href="">text1 text2 text3 text4 text5 text6 text7 text8<img src="image.jpg"></a> </div> </div> What I am getting is something like this; and I want this; Here is the fiddle. Thanks in advance...:)

    Read the article

  • Block Cascade Json Serialize?

    - by CrazyJoe
    I have this class: public class User { public string id{ get; set; } public string name{ get; set; } public string password { get; set; } public string email { get; set; } public bool is_broker { get; set; } public string branch_id { get; set; } public string created_at{get; set;} public string updated_at{get; set;} public UserGroup UserGroup {get;set;} public UserAddress UserAddress { get; set; } public List<UserContact> UserContact {get; set;} public User() { UserGroup = new UserGroup(); UserAddress = new UserAddress(); UserContact = new List<UserContact>(); } } I would like to serialize only the plain properties; how do I block serialization of UserGroup, UserAddress, and UserContact? This is my serialization function: public static string Serealize<T>(T obj) { System.Runtime.Serialization.Json.DataContractJsonSerializer serializer = new System.Runtime.Serialization.Json.DataContractJsonSerializer(obj.GetType()); MemoryStream ms = new MemoryStream(); serializer.WriteObject(ms, obj); return Encoding.UTF8.GetString(ms.ToArray(), 0,(int)ms.Length); }
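
    A hedged sketch of the usual way with DataContractJsonSerializer: mark the members to skip with [IgnoreDataMember] from System.Runtime.Serialization. This works for plain classes like the one above; if the class were marked [DataContract], you would instead simply not decorate those members with [DataMember].

      using System.Collections.Generic;
      using System.Runtime.Serialization;

      public class User
      {
          public string id { get; set; }
          public string name { get; set; }
          // ... the other simple properties stay as they are ...

          [IgnoreDataMember]               // skipped by DataContractJsonSerializer
          public UserGroup UserGroup { get; set; }

          [IgnoreDataMember]
          public UserAddress UserAddress { get; set; }

          [IgnoreDataMember]
          public List<UserContact> UserContact { get; set; }
      }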

    Read the article

  • Execute a block of database querys

    - by Nightmare
    I have the following task to complete: in my program I have a block of database queries, or "questions". I want to execute these questions and wait for the results of all of them, or catch an error if any one question fails. My Question object looks like this (simplified): public class DbQuestion(String sql) { [...] } [...] //The answer is just a holder for custom data... public void SetAnswer(DbAnswer answer) { //Store the answer in the question and fire a event to the listeners this.OnAnswered(EventArgs.Empty); } [...] public void SetError() { //Signal an Error in this query! this.OnError(EventArgs.Empty); } So every question fired at the database has a listener that waits for the parsed result. Now I want to fire some questions asynchronously at the database (max. 5 or so) and fire an event with the data from all questions, or an error if even one question throws one. What is the best, or at least a good, way to accomplish this task? Can I really execute more than one question in parallel and stop all my work when one question throws an error? I think I need some inspiration on this... Just a note: I'm working with .NET Framework 2.0.
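
    A hedged sketch of one .NET 2.0-friendly way (no Task Parallel Library): queue each question on the ThreadPool, count completions with Interlocked, and signal a single event once everything has finished or anything has failed. The Execute() call below is a hypothetical stand-in for however a DbQuestion actually runs, and the event-raising methods are placeholders:

      using System;
      using System.Threading;

      public class QuestionBatch
      {
          private readonly object sync = new object();
          private int pending;
          private bool failed;

          public void Run(DbQuestion[] questions)
          {
              if (questions.Length == 0) { OnAllAnswered(EventArgs.Empty); return; }
              pending = questions.Length;
              failed = false;
              ManualResetEvent done = new ManualResetEvent(false);
              foreach (DbQuestion q in questions)
              {
                  DbQuestion question = q;                     // capture per iteration
                  ThreadPool.QueueUserWorkItem(delegate
                  {
                      try { question.Execute(); }              // hypothetical blocking call
                      catch { lock (sync) { failed = true; } }
                      finally { if (Interlocked.Decrement(ref pending) == 0) done.Set(); }
                  });
              }
              done.WaitOne();                                  // block until all are finished
              if (failed) OnError(EventArgs.Empty);
              else OnAllAnswered(EventArgs.Empty);
          }

          protected virtual void OnError(EventArgs e) { /* raise the error event here */ }
          protected virtual void OnAllAnswered(EventArgs e) { /* raise the all-answered event */ }
      }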

    Read the article

  • Pulling name value pair from a structured adsense code block contained in a txt file

    - by Scott B
    I have a txt file which contains a google adsense code block and I'm trying to pull in the file via file_get_contents to extract the values of the google_ad_client and google_ad_slot variables. In the examples below, I want to return to my calling function: $google_ad_client = 'pub-1234567890987654'; $google_ad_slot = '1234567890' The file may contain one of either of these two formats and I wont know which the user has chosen: Newer Ad Unit Style <script type="text/javascript"><!-- google_ad_client = "pub-1234567890987654"; google_ad_slot = "1234567890"; google_ad_width = 336; google_ad_height = 280; //--> </script> <script type="text/javascript" src="path-to-google-script"></script> Classic Style <script type="text/javascript"><!-- google_ad_client = "pub-1234567890987654"; /* 336x280, created 8/6/09 */ google_ad_slot = "1234567890"; google_ad_width = 336; google_ad_height = 280; google_ad_format="336x280_as"; google_ad_type="text_image"; google_color_border="FFFFFF"; google_color_bg="FFFFFF"; google_color_link="2200CC"; google_color_url="000000"; google_color_text="777777"; //--> </script>
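
    A hedged sketch of the extraction itself (the function name is illustrative): since both formats keep the google_ad_client and google_ad_slot assignments in the same name = "value"; shape, one regular expression per value is enough:

      <?php
      // Sketch: works for both ad-unit formats shown above.
      function get_adsense_settings($path)
      {
          $code = file_get_contents($path);
          $settings = array('google_ad_client' => null, 'google_ad_slot' => null);
          if (preg_match('/google_ad_client\s*=\s*"([^"]+)"/', $code, $m)) {
              $settings['google_ad_client'] = $m[1];   // e.g. pub-1234567890987654
          }
          if (preg_match('/google_ad_slot\s*=\s*"([^"]+)"/', $code, $m)) {
              $settings['google_ad_slot'] = $m[1];     // e.g. 1234567890
          }
          return $settings;
      }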

    Read the article

  • My kernel only works in block (0,0)

    - by ZeroDivide
    I am trying to write a simple matrixMultiplication application that multiplies two square matrices using CUDA. I am having a problem where my kernel is only computing correctly in block (0,0) of the grid. This is my invocation code: dim3 dimBlock(4,4,1); dim3 dimGrid(4,4,1); //Launch the kernel; MatrixMulKernel<<<dimGrid,dimBlock>>>(Md,Nd,Pd,Width); This is my Kernel function __global__ void MatrixMulKernel(int* Md, int* Nd, int* Pd, int Width) { const int tx = threadIdx.x; const int ty = threadIdx.y; const int bx = blockIdx.x; const int by = blockIdx.y; const int row = (by * blockDim.y + ty); const int col = (bx * blockDim.x + tx); //Pvalue stores the Pd element that is computed by the thread int Pvalue = 0; for (int k = 0; k < Width; k++) { Pvalue += Md[row * Width + k] * Nd[k * Width + col]; } __syncthreads(); //Write the matrix to device memory each thread writes one element Pd[row * Width + col] = Pvalue; } I think the problem may have something to do with memory but I'm a bit lost. What should I do to make this code work across several blocks?
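
    A hedged guess at the cause, with a sketch: the launch creates a 16x16 grid of threads, so unless Width is exactly 16, threads outside the matrix read and write out of bounds (and if Width is larger, the fixed 4x4 grid is too small). Guarding the kernel and sizing the grid from Width is the usual pattern:

      __global__ void MatrixMulKernel(int* Md, int* Nd, int* Pd, int Width)
      {
          const int row = blockIdx.y * blockDim.y + threadIdx.y;
          const int col = blockIdx.x * blockDim.x + threadIdx.x;

          // Guard: threads whose row/col fall outside the Width x Width matrix
          // must not touch memory at all.
          if (row < Width && col < Width) {
              int Pvalue = 0;
              for (int k = 0; k < Width; ++k)
                  Pvalue += Md[row * Width + k] * Nd[k * Width + col];
              Pd[row * Width + col] = Pvalue;
          }
      }

      // Host side: size the grid so it covers the matrix, no more.
      // dim3 dimBlock(4, 4, 1);
      // dim3 dimGrid((Width + 3) / 4, (Width + 3) / 4, 1);
      // MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);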

    Read the article

  • Using block around a static/singleton resource reference

    - by byte
    This is interesting (to me anyway), and I'd like to see if anyone has a good answer and explanation for this behavior. Say you have a singleton database object (or static database object), and you have it stored in a class Foo. public class Foo { public static SqlConnection DBConn = new SqlConnection(ConfigurationManager.ConnectionStrings["BAR"].ConnectionString); } Then, lets say that you are cognizant of the usefulness of calling and disposing your connection (pretend for this example that its a one-time use for purposes of illustration). So you decide to use a 'using' block to take care of the Dispose() call. using (SqlConnection conn = Foo.DBConn) { conn.Open(); using (SqlCommand cmd = new SqlCommand()) { cmd.Connection = conn; cmd.CommandType = System.Data.CommandType.StoredProcedure; cmd.CommandText = "SP_YOUR_PROC"; cmd.ExecuteNonQuery(); } conn.Close(); } This fails, with an error stating that the "ConnectionString property is not initialized". It's not an issue with pulling the connection string from the app.config/web.config. When you investigate in a debug session you see that Foo.DBConn is not null, but contains empty properties. Why is this?
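
    For what it's worth, the behaviour follows from Dispose: the using block disposes the single shared SqlConnection the first time through, and disposing a SqlConnection resets its ConnectionString, so the next caller sees an empty one. A minimal sketch of the usual alternative, sharing the connection string rather than the connection and letting ADO.NET pooling absorb the cost (class and procedure names reuse the question's):

      public static class Foo
      {
          // Share configuration, not the connection object itself.
          public static readonly string ConnectionString =
              ConfigurationManager.ConnectionStrings["BAR"].ConnectionString;
      }

      // Each caller opens (and disposes) its own connection; pooling keeps this cheap.
      using (SqlConnection conn = new SqlConnection(Foo.ConnectionString))
      using (SqlCommand cmd = new SqlCommand("SP_YOUR_PROC", conn))
      {
          cmd.CommandType = System.Data.CommandType.StoredProcedure;
          conn.Open();
          cmd.ExecuteNonQuery();
      }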

    Read the article

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0 , VS 2010} I have a Server written in Qt and a 3rd party client executable. Qt based server uses QTcpServer and QTcpSocket facilities (non-blocking). Going through the articles on TCP I understand the following: the original implementation of TCP mentioned the negotiable window size to be a 16-bit value, thus maximum being 65535 bytes. But implementations often used the RFC window-scale-extension that allows the sliding window size to be scalable by bit-shifting to yield a maximum of 1 gigabyte. This is implementation defined. This could have resulted in majorly different window sizes on receiver and sender end as the server uses Qt facilities without hardcoding any window size limit. Client 1st asks for all information it can based on the previous messages from the server before handling the new (accumulating) incoming messages. So at some point Server receives a lot of messages each asking for data of several MB's. This the server processes and puts it into the sender buffer. Client however is unable to handle the messages at the same pace and it seems that client’s receiver buffer is far smaller (65535 bytes maybe) than sender’s transmit window size. The messages thus get accumulated at sender’s end until the sender’s buffer is full too after which the TCP writes on sender would block. This however does not happen as sender buffer is much larger. Hence this manifests as increase in memory consumption on the sender’s end. To prevent this from happening, I used Qt’s socket’s waitForBytesWritten() with timeout set to -1 for infinite waiting period. This as I see from the behaviour blocks the thread writing TCP data until the data has actually been sensed by the receiver’s window (which will happen when earlier messages have been processed by the client at application level). This has caused memory consumption at Server end to be almost negligible. is there a better alternative to this (in Qt) if i want to restrict the memory consumption at server end to say x MB's? Also please point out if any of my understandings is incorrect.
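
    A hedged sketch of a bounded variant of the same idea (the 4 MB cap and names are illustrative): rather than waiting forever with waitForBytesWritten(-1), check QTcpSocket::bytesToWrite() before queueing more data and wait only while the backlog is above a chosen cap, which limits server-side memory to roughly that cap per connection:

      #include <QTcpSocket>
      #include <QByteArray>

      // waitForBytesWritten() blocks the calling thread, so this fits a dedicated
      // sender thread rather than the GUI thread.
      static const qint64 kMaxPendingBytes = 4 * 1024 * 1024;   // e.g. cap at 4 MB

      void sendReply(QTcpSocket *socket, const QByteArray &reply)
      {
          while (socket->state() == QAbstractSocket::ConnectedState &&
                 socket->bytesToWrite() > kMaxPendingBytes) {
              // Wait (bounded) for the OS/peer to drain part of the backlog, then re-check.
              socket->waitForBytesWritten(1000);
          }
          if (socket->state() == QAbstractSocket::ConnectedState)
              socket->write(reply);
      }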

    Read the article

  • Large free block of english non-pronoun text

    - by Tom
    As part of teaching myself python I've written a script which allows a user to play hangman. At the moment, the hangman word to be guessed is simply entered manually at the start of the script's code. I want instead for the script to choose randomly from a large list of english words. This I know how to do - my problem is finding that list of words to work from in the first place. Does anyone know of a source on the net for, say, 1000 common english words where they can be downloaded as a block of text or something similar that I can work with? (My initial thought was grabbing a chunk of a novel from project gutenburg [this project is only for my own amusement and won't be available anywhere else so copyright etc doesn't matter hugely to me btw], but anything like that is likely to contain too many names or non-standard words that wouldn't be suitable for hangman. I need text that only has words legal for use in scrabble, basically). It's a slightly odd question for here I suppose, but actually I thought the answer might be of use not just to me but anyone else working on a project for a wordgame or similar that needs a large seed list of words to work from. Many thanks for any links or suggestions :)
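
    For the loading side, a hedged sketch (the filename is illustrative; any plain one-word-per-line list will do): filter to purely alphabetic words of a sensible minimum length so names, abbreviations, and punctuation from a scraped source drop out, then pick with random.choice:

      import random

      def load_words(path="words.txt", min_len=4):
          """Load a one-word-per-line file; keep only lowercase alphabetic words."""
          with open(path) as f:
              words = [w.strip().lower() for w in f]
          return [w for w in words if w.isalpha() and len(w) >= min_len]

      words = load_words()           # e.g. from any public word list saved locally
      secret = random.choice(words)  # the hangman word for this round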

    Read the article

  • How To Block The UserName After 3 Invalid Password Attempts IN ASP.NET

    - by shihab
    I used the following code for checking the user name and password, and I want to block the user name after 3 invalid password attempts. What should I add to my code? MD5CryptoServiceProvider md5hasher = new MD5CryptoServiceProvider(); Byte[] hashedDataBytes; UTF8Encoding encoder = new UTF8Encoding(); hashedDataBytes = md5hasher.ComputeHash(encoder.GetBytes(TextBox3.Text)); StringBuilder hex = new StringBuilder(hashedDataBytes.Length * 2); foreach (Byte b in hashedDataBytes) { hex.AppendFormat("{0:x2}", b); } string hash = hex.ToString(); SqlConnection con = new SqlConnection("Data Source=Shihab-PC;Initial Catalog=test;User ID=SOMETHING;Password=SOMETHINGELSE"); SqlDataAdapter ad = new SqlDataAdapter("select password from Users where UserId='" + TextBox4.Text + "'", con); DataSet ds = new DataSet(); ad.Fill(ds, "Users"); SqlDataAdapter ad2 = new SqlDataAdapter("select UserId from Users ", con); DataSet ds2 = new DataSet(); ad2.Fill(ds2, "Users"); Session["id"] = TextBox4.Text.ToString(); if ((string.Compare((ds.Tables["Users"].Rows[0][0].ToString()), hash)) == 0) { if (string.Compare(TextBox4.Text, (ds2.Tables["Users"].Rows[0][0].ToString())) == 0) { Response.Redirect("actioncust.aspx"); } else { Response.Redirect("actioncust.aspx"); } } else { Label2.Text = "Invalid Login"; } con.Close(); }
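
    A hedged sketch of the counting part (names such as attemptsKey are illustrative): keep a failed-attempt counter per user name and refuse further logins once it reaches 3. For brevity this version stores the counter in Session; a real lockout should persist it against the user's row in the Users table (e.g. a FailedAttempts column, which is an assumption, not something from the question) so it survives new sessions, and the SQL above should also use parameters rather than string concatenation:

      const int MaxAttempts = 3;
      string attemptsKey = "failed_" + TextBox4.Text;
      int failed = (Session[attemptsKey] == null) ? 0 : (int)Session[attemptsKey];

      if (failed >= MaxAttempts)
      {
          Label2.Text = "This user name is blocked after " + MaxAttempts + " invalid attempts.";
          return;
      }

      // reuse the hash comparison already built above
      bool passwordMatches = string.Compare(ds.Tables["Users"].Rows[0][0].ToString(), hash) == 0;

      if (passwordMatches)
      {
          Session[attemptsKey] = 0;                   // reset on success
          Response.Redirect("actioncust.aspx");
      }
      else
      {
          Session[attemptsKey] = failed + 1;          // count the failure
          Label2.Text = "Invalid Login";
      }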

    Read the article

  • This block of code goes straight to break in Java

    - by user2914851
    I have this block in a switch case statement that when selected, just breaks and presents me with the main menu again. System.out.println("Choose a competitor surname"); String competitorChoice2 = input.nextLine(); int lowestSpeed = Integer.MAX_VALUE; int highestSpeed = 0; for(int j = 0; j < clipArray.length; j++) { if(clipArray[j] != null) { if(competitorChoice2.equals(clipArray[j].getSurname())) { if(clipArray[j].getSpeed() > clipArray[highestSpeed].getSpeed()) { highestSpeed = j; } } } } for(int i = 0; i < clipArray.length; i++) { if(clipArray[i] != null) { if(competitorChoice2.equals(clipArray[i].getSurname())) { if(clipArray[i].getSpeed() < clipArray[lowestSpeed].getSpeed()) { lowestSpeed = i; } } } } for(int h = lowestSpeed; h < highestSpeed; h++ ) { System.out.println(""+clipArray[h].getLength()); } I have an array of objects and each object has a surname and a speed. I want the user to choose a surname and display the speeds of all of their clips from lowest to highest. when I select this option it just breaks and brings me back to the main menu
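
    Two hedged observations, plus a sketch. First, if the menu choice was read with input.nextInt(), the leftover newline makes the next input.nextLine() return an empty string, so no surname ever matches and the case falls straight through. Second, lowestSpeed and highestSpeed are used both as sentinels and as array indexes (clipArray[Integer.MAX_VALUE] is never valid), and the final loop prints by index range rather than by speed. A sketch that collects the matching clips and sorts them by speed (assumes the element type is a class called Clip with the getSurname/getSpeed/getLength accessors used above, and java.util imports at the top of the file):

      // Collect the chosen competitor's clips, then sort them by speed.
      List<Clip> matches = new ArrayList<Clip>();
      for (Clip clip : clipArray) {
          if (clip != null && competitorChoice2.equals(clip.getSurname())) {
              matches.add(clip);
          }
      }
      if (matches.isEmpty()) {
          System.out.println("No clips found for " + competitorChoice2);
      } else {
          Collections.sort(matches, new Comparator<Clip>() {
              public int compare(Clip a, Clip b) { return a.getSpeed() - b.getSpeed(); }
          });
          for (Clip clip : matches) {
              System.out.println(clip.getSpeed() + " -> " + clip.getLength());
          }
      }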

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent ensuring that the bits that arrived matched was was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the affects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and is supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size) For some of the moderately-sized source files, small blocks (256KB) are best As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant The 1MB block size gives the best average improvement (~16x) but the optimal approach would be to vary the block size based on the size of the source file.    (click chart for full size image) The above is another view of the same data as the prior chart just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes. 
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative affects of poorly choosing a block size for smaller files.   Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) make short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.   Related Resources Source code for upload test application Source code for random file generator ODatas feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
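
    For reference, a condensed, hedged sketch of the blocked-and-parallel pattern described above (not the author's linked source): split the file into 1 MB blocks, upload them with Parallel.For and CloudBlockBlob.PutBlock, then commit with PutBlockList. It assumes the 1.1-era Microsoft.WindowsAzure.StorageClient API plus .NET 4's Parallel Extensions; the block size and names are illustrative.

      using System;
      using System.IO;
      using System.Threading.Tasks;
      using Microsoft.WindowsAzure.StorageClient;   // the 1.1-era client library

      public static class BlockedUploader
      {
          const int BlockSize = 1024 * 1024;   // 1 MB: the best average size found above

          public static void Upload(CloudBlobContainer container, string path)
          {
              CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(path));
              byte[] data = File.ReadAllBytes(path);    // fine for a sketch; stream for very large files
              int blockCount = (data.Length + BlockSize - 1) / BlockSize;
              string[] blockIds = new string[blockCount];

              Parallel.For(0, blockCount, i =>
              {
                  int offset = i * BlockSize;
                  int size = Math.Min(BlockSize, data.Length - offset);
                  blockIds[i] = Convert.ToBase64String(BitConverter.GetBytes(i));   // equal-length ids
                  using (MemoryStream block = new MemoryStream(data, offset, size))
                  {
                      blob.PutBlock(blockIds[i], block, null);   // null: skip the MD5 check in this sketch
                  }
              });

              blob.PutBlockList(blockIds);   // commit the uploaded blocks as one blob
          }
      }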

    Read the article

  • Simple python oo issue

    - by Alex K
    Hello, have a look at this simple example. I don't quite understand why o1 prints "Hello Alex" twice. I would have thought that, because of the default, self.a is always reset to an empty list. Could someone explain the rationale here? Thank you so much. class A(object): def __init__(self, a=[]): self.a = a o = A() o.a.append('Hello') o.a.append('Alex') print ' '.join(o.a) # >> prints Hello Alex o1 = A() o1.a.append('Hello') o1.a.append('Alex') print ' '.join(o1.a) # >> prints Hello Alex Hello Alex
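
    The rationale: the default value a=[] is evaluated once, when the def statement runs, so every instance constructed without an argument shares that single list object; o and o1 are appending to the same list. The usual fix is a None sentinel, sketched here:

      class A(object):
          def __init__(self, a=None):
              # A fresh list is created on each call when no argument is given,
              # so instances no longer share one list.
              self.a = [] if a is None else a

      o = A(); o.a.append('Hello'); o.a.append('Alex')
      o1 = A(); o1.a.append('Hello'); o1.a.append('Alex')
      print ' '.join(o.a)    # Hello Alex
      print ' '.join(o1.a)   # Hello Alex  (no longer doubled)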

    Read the article

  • Get UiBinder widget to display inline instead of block

    - by Steve Armstrong
    I'm trying to get my UiBinder-defined widget to display inline, but I can't. My current code is: <ui:UiBinder xmlns:ui='urn:ui:com.google.gwt.uibinder' xmlns:g='urn:import:com.google.gwt.user.client.ui'> <ui:style> .section { border: 1px solid #000000; width: 330px; padding: 5px; display: run-in; } </ui:style> <g:HTMLPanel> <div class="{style.section}"> <div ui:field="titleSpan" class="{style.title}" /> <div class="{style.contents}"> <g:VerticalPanel ui:field="messagesPanel" /> </div> </div> </g:HTMLPanel> </ui:UiBinder> This works fine in terms of how the widget looks internally, but I want to throw a bunch of these widgets into a FlowPanel and have them flow when the window is resized. The HTMLPanel is a div, but I can't get the display attribute to assign. I can't force the style name, since the following throws an error: <g:HTMLPanel styleNames="{style.section}"> And I can assign an additional style, but it doesn't apply the display setting. <g:HTMLPanel addStyleNames="{style.section}"> This displays the border and sets the size, as expected, but it doesn't flow. Firebug shows the styles on the div are border, width, and padding, but no display. Is there a way to make a widget in UiBinder so that it'll display inline instead of block? And if so, can I make it compatible with having a VerticalPanel inside (can I do it without making the entire widget pure HTML without any GWT widgets)? PS: I saw question 2257924 but it hasn't had any answers lately, and he seems to be focused on getting a tag, not specifically getting inline layout. I don't care directly about , if I can just get the top-level tag for my widget to flow inline, I'm happy.

    Read the article

  • Using dynamic parameters in email publisher subjectSettings block with CruiseControl.Net

    - by Joe
    I am trying to get dynamic parameters to be used in the email publisher's subjectSettings block. For example, <project> ... <parameters> <textParameter> <name>version</name> <display>Version to install</display> <description>The version to install.</description> <required>true</required> </textParameter> </parameters> <tasks> ... </tasks> <publishers> .... <email includeDetails="TRUE"> <from>buildmaster</from> <mailhost>localhost</mailhost> <users> <user name="Joe" group="buildmaster" address="jdavies" /> </users> <groups> <group name="buildmaster"> <notifications> <notificationType>Always</notificationType> </notifications> </group> <group name="users"> <notifications> <notificationType>Success</notificationType> <notificationType>Fixed</notificationType> </notifications> </group> </groups> <subjectSettings> <subject buildResult="Success" value="Version ${version} installed." /> <subject buildResult="Fixed" value="Version ${version} fixed and installed." /> </subjectSettings> <modifierNotificationTypes> <notificationType>Success</notificationType> </modifierNotificationTypes> </email> </project> I have tried using ${version} and $[version]. When I use $[version], the entire subject line is empty! Are dynamic parameters supported in this case, and if so, what am I doing wrong?

    Read the article

  • iphone cocos2d sprites disappearing

    - by jer
    I've been working on a game and implementing the physics stuff with chipmunk. All was going fine on the cocos2d part until the integration with chipmunk. A bit of background: The game is a game with blocks. Levels are defined in a property list, where positions, size of the blocks, gravitational forces, etc., are all defined for each block to be shown in the level. The problem is with the blocks showing up. I have a method on my BlockLayer class which is part of my game's main scene. Upon creation of the layer, the property list is read, and all the blocks are created. The following method is called to create the blocks: - (void)createBlock:(Block*)block withAssets:(NSBundle*)assets { Sprite* sprite; switch(block.blockColour) { case kBlockColourGreen: sprite = [Sprite spriteWithFile:[assets pathForResource:@"green" ofType:@"png" inDirectory:@"Blocks"]]; break; case kBlockColourOrange: sprite = [Sprite spriteWithFile:[assets pathForResource:@"orange" ofType:@"png" inDirectory:@"Blocks"]]; break; case kBlockColourRed: sprite = [Sprite spriteWithFile:[assets pathForResource:@"red" ofType:@"png" inDirectory:@"Blocks"]]; break; case kBlockColourBlue: sprite = [Sprite spriteWithFile:[assets pathForResource:@"blue" ofType:@"png" inDirectory:@"Blocks"]]; break; } sprite.position = block.bounds.origin; [self addChild:sprite]; if(block.blockColour == kBlockColourGreen || block.blockColour == kBlockColourRed) space->gravity = cpvmult(cpv(0, 10), 1000); cpVect verts[] = { cpv(-block.bounds.size.width, -block.bounds.size.height), cpv(-block.bounds.size.width, block.bounds.size.height), cpv(block.bounds.size.width, block.bounds.size.height), cpv(block.bounds.size.width, -block.bounds.size.height) }; cpBody* blockBody = cpBodyNew([block.mass floatValue], INFINITY); blockBody->p = cpv(block.bounds.origin.x, block.bounds.origin.y); blockBody->v = cpvzero; cpSpaceAddBody(space, blockBody); cpShape* blockShape = cpPolyShapeNew(blockBody, 4, verts, cpvzero); blockShape->e = 0.9f; blockShape->u = 0.9f; blockShape->data = sprite; cpSpaceAddShape(space, blockShape); } With the above code, the sprites never show up. However, if I comment out the "cpSpaceAddBody(space, blockBody);" line, the sprites show up. The position and size of the blocks are stored in the "bounds" property of instances of the Block class, which is a CGRect. Not sure if it's important, but the orientation of the app is in landscape left, and all the coordinates are based on that orientation. Any help would be greatly appreciated.

    Read the article

  • How to maximise the largest contiguous block of memory in the Large Object Heap

    - by Unsliced
    The situation is that I am making a WCF call to a remote server which is returns an XML document as a string. Most of the time this return value is a few K, sometimes a few dozen K, very occasionally a few hundred K, but very rarely it could be several megabytes (first problem is that there is no way for me to know). It's these rare occasions that are causing grief. I get a stack trace that starts: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown. at System.Xml.BufferBuilder.AddBuffer() at System.Xml.BufferBuilder.AppendHelper(Char* pSource, Int32 count) at System.Xml.BufferBuilder.Append(Char[] value, Int32 start, Int32 count) at System.Xml.XmlTextReaderImpl.ParseText() at System.Xml.XmlTextReaderImpl.ParseElementContent() at System.Xml.XmlTextReaderImpl.Read() at System.Xml.XmlTextReader.Read() at System.Xml.XmlReader.ReadElementString() at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMDRQuery.Read2_getMarketDataResponse() at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer2.Deserialize(XmlSerializationReader reader) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle) at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) I've read around and it is because the Large Object Heap is just getting too fragmented, so even preceding the call with a quick check to StringBuilder.EnsureCapacity just causes the OutOfMemoryException to be thrown earlier (and because I'm guessing at what's needed, it might not actually need that much so my check is causing more problems than it is solving). Some opinions are that there's not much I can do about it. Some of the questions I've asked myself: Use less memory - have you checked for leaks? Yes. The memory usage goes up and down, but there's no fundamental growth that guarantees this to happen. Some of the times it fails, it succeeded at that stage previously. Transfer smaller amounts Not an option, this is a third party web service over which I have no control (or at least it would take a long time to resolve, in the meantime I still have a problem) Can you do something to the LOH to make it less likely to fail? ... now this is most fruitful course. It's a 32-bit process (it has to be for various political, technical and boring reasons) but there's normally hundreds of meg free (multiples of the largest amount for which we've seen failures). Can we monitor the LOH? Using perfmon I can track the size of the heaps, but I don't think there's a way to monitor the largest available contiguous block of memory. Question is: any advice or suggestions for things to try?

    Read the article

  • Block copy PDF document

    - by Wiliam Witter
    Hello gentlemen, I would like to block copying (Ctrl+C / Ctrl+V) from a PDF document using Java. I have code that builds a PDF document with JasperReports... //set the paths of the jasper files String pathLote = ScopeSupport.getServletContext().getRealPath("priv/sgc/relatorios/AtaPregaoLotePageReport.jasper"); String pathCabecalho = ScopeSupport.getServletContext().getRealPath("priv/sgc/relatorios/CabecalhoPageReport.jasper"); String pathRodape = ScopeSupport.getServletContext().getRealPath("priv/sgc/relatorios/rodapePageReport.jasper"); String imagemDir = ScopeSupport.getServletContext().getRealPath("/priv/comum/img"); //the parametros HashMap passes the parameter used in the query and the image path HashMap<String,Object> parametros = new HashMap<String,Object>(); parametros.put("idPregao", idPregao); parametros.put("idLote", idLote); parametros.put("IMAGEM_DIR", imagemDir + "/"); parametros.put("USUARIO", "NomeUsuario" ); parametros.put("texto", texto); parametros.put("numeroAta", numAta); if(numAta != null && numAta > 0) parametros.put("relatorio", "Ata "+numAta); HashMap<String,Object> parametrosSub = new HashMap<String,Object>(); parametrosSub.put("CabecalhoPageReport", pathCabecalho); parametrosSub.put("rodapePageReport", pathRodape); parametrosSub.put("AtaPregaoPorLotePageReport", pathLote); for(String element : parametrosSub.keySet()){ parametros.put(element, (JasperReport) JRLoader.loadObject((String) (parametrosSub.get(element)))); } JasperReport report = (JasperReport) JRLoader.loadObject( pathLote ); JasperPrint printRel = JasperFillManager.fillReport( report, parametros, getJDBCConnection() ); byte[] bytes = JasperExportManager.exportReportToPdf(printRel); httpResponse.setHeader("Content-Disposition","attachment; filename=\""+ report.getName() + ".pdf" +"\";"); httpResponse.setContentLength(bytes.length); httpResponse.setContentType("application/pdf"); ServletOutputStream ouputStream = httpResponse.getOutputStream(); ouputStream.write(bytes, 0, bytes.length); ouputStream.flush(); ouputStream.close(); Can anyone help me with this?
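
    A PDF cannot entirely prevent copying, but the format's permission flags can disallow content extraction. With JasperReports of that era, a hedged sketch would swap JasperExportManager for JRPdfExporter and set the encryption parameters; the parameter and constant names below are from memory of JasperReports 3.x and its bundled iText, so treat them as assumptions and verify against the installed version.

      // Replaces the JasperExportManager line above. Needs:
      //   net.sf.jasperreports.engine.JRExporterParameter,
      //   net.sf.jasperreports.engine.export.JRPdfExporter / JRPdfExporterParameter,
      //   com.lowagie.text.pdf.PdfWriter, java.io.ByteArrayOutputStream.
      JRPdfExporter exporter = new JRPdfExporter();
      exporter.setParameter(JRExporterParameter.JASPER_PRINT, printRel);
      ByteArrayOutputStream pdfStream = new ByteArrayOutputStream();
      exporter.setParameter(JRExporterParameter.OUTPUT_STREAM, pdfStream);

      exporter.setParameter(JRPdfExporterParameter.IS_ENCRYPTED, Boolean.TRUE);
      exporter.setParameter(JRPdfExporterParameter.IS_128_BIT_KEY, Boolean.TRUE);
      exporter.setParameter(JRPdfExporterParameter.OWNER_PASSWORD, "ownerPassword");   // illustrative
      // Grant printing only; leaving out PdfWriter.ALLOW_COPY is what blocks Ctrl+C.
      exporter.setParameter(JRPdfExporterParameter.PERMISSIONS,
              Integer.valueOf(PdfWriter.ALLOW_PRINTING));

      exporter.exportReport();                   // declares JRException
      byte[] bytes = pdfStream.toByteArray();    // then stream to httpResponse as before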

    Read the article

  • While within a switch block

    - by rursw1
    Hi, I've seen the following code, taken from the libb64 project. I'm trying to understand what is the purpose of the while loop within the switch block - switch (state_in->step) { while (1) { case step_a: do { if (codechar == code_in+length_in) { state_in->step = step_a; state_in->plainchar = *plainchar; return plainchar - plaintext_out; } fragment = (char)base64_decode_value(*codechar++); } while (fragment < 0); *plainchar = (fragment & 0x03f) << 2; case step_b: do { if (codechar == code_in+length_in) { state_in->step = step_b; state_in->plainchar = *plainchar; return plainchar - plaintext_out; } fragment = (char)base64_decode_value(*codechar++); } while (fragment < 0); *plainchar++ |= (fragment & 0x030) >> 4; *plainchar = (fragment & 0x00f) << 4; case step_c: do { if (codechar == code_in+length_in) { state_in->step = step_c; state_in->plainchar = *plainchar; return plainchar - plaintext_out; } fragment = (char)base64_decode_value(*codechar++); } while (fragment < 0); *plainchar++ |= (fragment & 0x03c) >> 2; *plainchar = (fragment & 0x003) << 6; case step_d: do { if (codechar == code_in+length_in) { state_in->step = step_d; state_in->plainchar = *plainchar; return plainchar - plaintext_out; } fragment = (char)base64_decode_value(*codechar++); } while (fragment < 0); *plainchar++ |= (fragment & 0x03f); } } What can give the while? It seems that anyway, always the switch will perform only one of the cases. Did I miss something? Thanks.
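
    What the while buys: in C, case labels may be attached to any statement inside the block the switch controls, including statements inside a nested loop (the same trick as Duff's device). The switch is only a jump to the saved step, and once inside, the while (1) lets decoding keep running case after case as plain fall-through. A stripped-down illustration of the same resume-in-the-middle pattern (not the libb64 code):

      /* Each call picks up where the previous call left off. */
      #include <stdio.h>

      enum step { STEP_A, STEP_B, STEP_C };

      void resume(enum step *saved)
      {
          switch (*saved) {
          while (1) {               /* only reached by jumping to a case label inside it */
          case STEP_A:
              puts("a"); *saved = STEP_B; return;   /* "yield": remember where to resume */
          case STEP_B:
              puts("b"); *saved = STEP_C; return;
          case STEP_C:
              puts("c"); *saved = STEP_A;           /* fall back to the top of the loop */
          }
          }
      }

      int main(void)
      {
          enum step s = STEP_A;
          resume(&s);   /* prints "a" */
          resume(&s);   /* prints "b" */
          resume(&s);   /* prints "c" then "a", leaving s at STEP_B */
          return 0;
      }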

    Read the article
