Search Results

Search found 4931 results on 198 pages for 'burnt hand'.


  • Flush kernel's TCP buffer with `MSG_MORE`-flagged packets

    - by timn
    send()'s man page reveals the MSG_MORE flag, which is asserted to act like TCP_CORK. I have a wrapper function around send():

        int SocketConnection_Write(SocketConnection *this, void *buf, int len) {
            errno = 0;
            int sent = send(this->fd, buf, len, MSG_NOSIGNAL);

            if (errno == EPIPE || errno == ENOTCONN) {
                throw(exc, &SocketConnection_NotConnectedException);
            } else if (errno == ECONNRESET) {
                throw(exc, &SocketConnection_ConnectionResetException);
            } else if (sent != len) {
                throw(exc, &SocketConnection_LengthMismatchException);
            }

            return sent;
        }

    Assuming I want to use the kernel buffer, I could go with TCP_CORK: enable it whenever necessary, then disable it to flush the buffer. But on the other hand, that costs an additional system call, so using MSG_MORE seems more appropriate to me. I'd simply change the send() line above to:

        int sent = send(this->fd, buf, len, MSG_NOSIGNAL | MSG_MORE);

    According to lwn.net, packets are flushed automatically once they are large enough: "If an application sets that option on a socket, the kernel will not send out short packets. Instead, it will wait until enough data has shown up to fill a maximum-size packet, then send it. When TCP_CORK is turned off, any remaining data will go out on the wire." But that passage only refers to TCP_CORK. So what is the proper way to flush MSG_MORE packets? I can only think of two possibilities:

    1. Call send() with an empty buffer and without MSG_MORE set
    2. Re-apply the TCP_CORK option as described on this page

    Unfortunately the whole topic is very poorly documented and I couldn't find much on the Internet. I am also wondering how to check that everything works as expected: obviously running the server through `strace` is not an option, so the simplest way would be to use `netcat` and then look at its `strace` output? Or will the kernel handle traffic transmitted over a loopback interface differently?
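
    A sketch of the first possibility (an assumption about how MSG_MORE is meant to be used, not something the man page spells out): pass MSG_MORE on every chunk except the final one, so the last un-flagged send() doubles as the flush, mirroring what happens when TCP_CORK is turned off. SocketConnection_WriteMore is a hypothetical variant of the wrapper above:

        #include <sys/socket.h>
        #include <errno.h>

        /* Minimal stand-in for the question's type, just enough to compile. */
        typedef struct SocketConnection { int fd; } SocketConnection;

        /* Hypothetical variant of SocketConnection_Write: callers pass more = 0
         * for the final chunk of a logical message. Assumption: the last send
         * without MSG_MORE acts as the flush. */
        int SocketConnection_WriteMore(SocketConnection *conn, void *buf, int len, int more) {
            int flags = MSG_NOSIGNAL | (more ? MSG_MORE : 0);
            errno = 0;
            int sent = send(conn->fd, buf, len, flags);
            /* ... same errno checks as in SocketConnection_Write above ... */
            return sent;
        }

    One caveat on the loopback question: the loopback interface typically has a much larger MTU than a real NIC, so "maximum-size packet" behavior can look different there even if the corking logic itself is identical.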


  • What is the proper way to handle non-tracking self tracking entities?

    - by Will
    Self tracking entities. Awesome. Except when you do something like return Db.Users; none of the self-tracking entities are tracking (until, possibly, they are deserialized). Fine. So we have to recognize the possibility that an entity returned to us does not have tracking enabled. Now what?

    Things I have tried, for the given method body:

        using (var db = new Database())
        {
            if (update.ChangeTracker.ChangeTrackingEnabled)
                db.Configurations.ApplyChanges(update);
            else
                FigureItOut(update, db);

            db.SaveChanges();
            update.AcceptChanges();
        }

    The following implementations of FigureItOut all fail. Neither this:

        db.Configurations.Attach(update);
        db.DetectChanges();

    nor this:

        db.Configurations.Attach(update);
        db.Configurations.ApplyCurrentValues(update);

    nor this:

        db.Configurations.Attach(update);
        db.Configurations.ApplyOriginalValues(update);

    nor this:

        db.Configurations.Attach(update);
        db.Configurations.ApplyChanges(update);

    nor just about anything else I can figure to throw at it, other than:

    - getting the original entity from the database,
    - comparing each property by hand, and
    - updating properties as needed.

    What, exactly, am I supposed to do with self-tracking entities that aren't tracking themselves?


  • Problem with response.redirect sending incorrect HTTPMethod

    - by Andy Macnaughton-Jones
    Hi, I've got a strange problem with a Response.Redirect. I'm using VB.NET with the .NET 2 framework (so VS2005 & SP1).

    I've got a page that I submit a form on (a proper form with method="POST" hard-coded onto the page), and it properly posts back the page data, which is then processed; so at that point Request.HttpMethod = "POST". As part of that processing, the system determines whether we need to be sent to another URL once processing is complete. If the "GotoPage" parameter has a URL specified, we then do a Response.Redirect(URL, False) - False because we want page processing to complete in order to write some timing logs, etc.

    The page correctly redirects, but instead of the response having "GET" as the Request.HttpMethod, it has "POST" instead! Now, we're using our own custom framework, in which we use the HttpMethod to determine whether a page has been posted back or is being "GETted", since the "IsPagePostBack" property only works when you're using the normal .NET controls and form submissions. In all other instances our code works happily, but what might be causing the Request.HttpMethod to not be set correctly? I've tried doing a Response.Clear before the redirect in case headers are being written out beforehand, but to no avail. Any clues?!

    thanks, Andy


  • Is there a more useful explanation for UITableViewStylePlain?

    - by mystify
    From the docs: "In the plain style, section headers and footers float above the content if the part of a complete section is visible. A table view can have an index that appears as a bar on the right hand side of the table (for example, "a" through "z"). You can touch a particular label to jump to the target section."

    I find that very hard to grasp. First, this part: "if the part of a complete section is visible". What do they mean by this? It is a paradox. Which one is it?

    A) The table must be exactly the height of that section. If I have 5 rows and each row is 50px high, I must make the table 5*50 high; the full section must be visible on the screen. Otherwise, if I have 100 rows but my table view is only 400 high, this will not apply and nothing will float above my content. Sounds wrong.

    B) It doesn't matter how high my table view actually is. The header and footer float above the content and I can scroll the section. Makes more sense, but is completely at odds with that nonsensical phrase "if the part of a complete section is visible".

    Can anyone explain it better than they did?


  • To (monkey)patch or not to (monkey)patch, that is the question

    - by gsakkis
    I was talking to a colleague about one rather unexpected/undesired behavior of some package we use. Although there is an easy fix (or at least a workaround) on our end without any apparent side effect, he strongly suggested extending the relevant code by hard patching it and posting the patch upstream, hopefully to be accepted at some point in the future. In fact we maintain patches against specific versions of several packages that are applied automatically on each new build. His main argument is that this is the right thing to do, as opposed to an "ugly" workaround or a fragile monkey patch.

    On the other hand, I favor practicality over purity, and my general rule of thumb is to prefer "no patch" over "monkey patch" over "hard patch", at least for anything other than a (critical) bug fix. So I'm wondering whether there is a consensus on when it's better to (hard) patch, monkey patch, or just try to work around a third party package that doesn't do exactly what one would like. Does it mainly have to do with the reason for the patch (e.g. fixing a bug, modifying behavior, adding a missing feature), with the given package (size, complexity, maturity, developer responsiveness), with something else, or are there no general rules, so one should decide on a case-by-case basis?


  • Correct use of WSDL-generated sources

    - by John K
    How can I easily convert between manually written classes and their WSDL-generated equivalents?

    I have a Java SE 6 thick client that calls a web service to get and store data. The client has a DAO that works with my entity classes, calls Entity.toDto() to convert them to DTOs, and sends/receives that data with the web service. My issue stems from the fact that the entity classes live on both sides of the service interface: client and server. Each entity has a constructor from the DTO and a toDto function:

        public class EntityClass {
            public EntityClass(EntityClassDto dto);
            public EntityClassDto toDto();
            ...
        }

    This means I have a handwritten DTO class that the client and server both use. However, the service interface expects the WSDL-generated classes. I have tried writing conversion code between the hand-written DTO and the WSDL-generated DTO, and it is tedious and error-prone. What is a reasonable alternative to this?

    Some back-story: the thick client should be able to have a configurable backend, either direct to the DB or through this web service. The aforementioned DAO is the web-service-based implementation; another implementation, which is JPA-based, also exists.


  • Entity Framework - problems merging 2 physical tables into one "virtual" table...

    - by Keith Barrows
    I have been reading up on porting the ASP.NET Membership Provider into .NET 3.5 using LINQ & Entities. However, every single sample shows the newer DB model, while I've inherited a rather old one. Differences:

    - The user table is split into a pair of User & Membership tables.
    - All of the tables in the DB are prepended with aspnet_
    - I have Lowered versions of some columns (UserName, Email, etc.)

    To work with this I have copied the properties from the Membership table into the User table (in the DB this is a 1<-1 relationship, not a 1<-0,1), and renamed aspnet_Applications to Application, aspnet_Profiles to Profile, aspnet_Users to User and aspnet_Roles to Role (see the image of the model).

    (Link to full size image of model)

    Now, I am running into one of two problems when I try to compile. Using the model in the image I get this error:

        Problem in Mapping Fragment starting at line 464: EntitySets 'UserSet' and 'aspnet_Membership' are both mapped to table 'aspnet_Membership'. Their Primary Keys may collide.

    If I delete the aspnet_Membership table from my model (to handle the above error) I then get:

        Problem in Mapping Fragment starting at line 384: Column aspnet_Membership.ApplicationId in table aspnet_Membership must be mapped: It has no default value and is not nullable.

    My ability to hand edit the backing stores is not the best, and I don't want to just hack something in that may break other things, so I am looking for suggestions, best practices, etc. to handle this.

    Note: moving the data tables themselves is not an option, as I cannot replace all the logic in the existing apps. I am building this EF provider for a new app; over the next 6 months the old app(s) will migrate bit-by-bit to the new structures.


  • How to estimate the contribution of an individual to a software project?

    - by Amit Kumar
    I work on a software project and would like to estimate the percentage of the total contribution that I have put into the development of the software. Is there some tool for doing this? Such a tool could be useful for appraisals or negotiations, for example. After all, we work for money (yes, not only money, but the point remains). I think there is enough hand-waving around these most important things.

    The estimation is very subjective (at least to me now), but I do not know of any tool that provides even a subjective estimate. I know of SLOCCount, which spells out the total effort using the lines of code, but not on a per-developer basis. My idea of an ideal tool for this purpose would:

    - measure the complexity of the code (more complex means more effort, but more effort is not necessarily more contribution)
    - measure the decomposability/flexibility of the software (more decomposable is better)
    - account for how much library code is used - using library code speeds up the development process, increases the associated risk, and requires the developer to already know or learn about the library
    - be intelligent enough to differentiate between "who wrote the code", "who copied the code" and "who indented the code"

    It is difficult to differentiate between the complexity of the implementation and the intrinsic complexity of the problem. Perhaps a comparison could be made with an equivalent open source counterpart, if one exists, or for each submodule separately.

    If there is no such tool, is there no merit in having one? Or do you believe in "I do work, I do not measure"? It takes time, after all. Perhaps the project manager should do this estimation continuously, say, weekly. Are there any standards? Yes, standardization is difficult because every project has a different goal, but difficult does not mean it is not useful.


  • jQuery not working in Chrome?

    - by Pete Herbert Penito
    Hi everyone! I have some code here which works perfectly in Firefox but not in Chrome or IE. My JavaScript is as follows:

        $(document).ready(function() {
            $("#clientLoginPop").show();
            $("#clientLoginPop").animate({"left": "-=400px"}, "fast");
        });
        $("#clientLoginCloseLink").click(function () {
            $("#clientLoginPop").animate({"left": "+=400px"}, "fast");
        });
        $("#contactUsPopLink").click(function () {
            $("#contactUsPop").show();
            $("#contactUsPop").animate({"left": "-=437px"}, "fast");
        });
        $("#contactUsClose").click(function () {
            $("#contactUsPop").animate({"left": "+=474px"}, "fast");
        });
        });

    And finally the CSS of the div looks like this - I think rather importantly, it's aligned to the right of the browser (the client login div looks similar, just a different height):

        #contactUsPop {
            width:437px;
            right:-437px;
            margin-top:220px;
            position:fixed;
            height:217px;
            background-color:white;
            z-index:2;
        }

    So what happens in Firefox is that the div animates to the left, and then when it closes it moves back to the right. In Chrome the div doesn't seem to pop up at all. The URL of the site is this: http://clearcreativegroup.com/devcorner/clear3/ - the tabs are on the right hand side of the browser. Any advice would help tons, thank you!


  • .NET WF4: Should it be in the middle of everything?

    - by stimpy77
    I am aware that WF4 (Windows Workflow 4.0, part of .NET 4.0) is a significant rework and redesign of WF3, and that much of what made WF3 a poor technology choice has been cleaned up in WF4. For example, as far as I can tell, WF4 activities are more or less testable with [TestMethod] and mocking. This, among other things like improved performance, has grabbed my attention about the technology again, whereas I had previously pooh-poohed WF3.

    I'm working on a new architecture for what is essentially an n-tier collaborative application (not enterprise-class, just a smallish project with the potential to grow significantly), where I'm already trying to discipline myself to use IoC and, to some extent, TDD. I'm wondering, in general terms, whether it is wiser to just hand-code the workflow logic, or whether I should delve into learning and integrating WF4 so that WF becomes literally the controller of the entire application, i.e. the practical C in "MVC" (not ASP.NET MVC, but rather the pattern).

    So should workflow activities in WF4 be the primary controller for a highly expandable/growable web-based collaborative application? Or am I asking entirely the wrong question? This is a vague question, I'm sure, so abstract answers are as welcome as specific ones.


  • jQuery: Multiple animations with a delay on a set of divs

    - by Waleed
    Hey there, I have a group of 4 divs and I'm looking to use jQuery to animate them once, then have a delay using delay(), and then run another set of animations on them, putting the divs back to their original configuration. Here's the code that I have:

        // only selectors called 'option1' are affected by delay, and not 'option1 img' or 'option2'
        $("#option1").showOption1().delay(3000).hideOption1();

        // if i don't attach to #option1, delay doesn't work, but the animations that need to happen simultaneously work
        $(document).showOption1().delay(3000).hideOption1();

        jQuery.fn.showOption1 = function(){
            $("#option2, #option3, #leftColumn").fadeOut(1000);
            $("#option1").css("zIndex", "5");
            $("#option1").animate({ left: "370px", }, 1500);
            $("#option1 img").animate({ width: "500px", }, 1500, function(){
                $("p.optionText").text('hello');
            });
            return this;
        }

        jQuery.fn.hideOption1 = function(){
            $("#option1 img").animate({ width: "175px", }, 1500);
            $("#option1").animate({ left: "743px", }, 1500);
            $("#option2, #option3, #leftColumn").fadeIn(2000);
            $("#option1").css("zIndex", "1");
            return this;
        }

    I've tried the two ways of running these functions shown at the top, on lines 2 and 5. In the case of line 2, showOption1() runs fine, but then only

        $("#option1").animate({ left: "743px", }, 1500);

    works from hideOption1() after the delay; the rest of hideOption1() fires immediately after showOption1() finishes, ignoring the delay. On the other hand, if I run line 5, all the code in hideOption1() runs simultaneously as desired, but ignores the delay entirely, again running immediately after showOption1() finishes.

    How can I have all the code in hideOption1() run simultaneously after the delay? Thanks much in advance!


  • Migrating to web application with AjaxControlToolkit

    - by Chris
    Hi All, we are currently migrating our ASP.NET website to a web application in Visual Studio 2008. Most of the process has been fairly straightforward, but I have hit one block that is driving me a bit nuts.

    We are using the AjaxControlToolkit for some functionality, specifically an AutoCompleteExtender. When the site is run locally through VS's development server, the extender (dropdown) does not render after the service returns the result set. However, if I deploy the migrated solution to our UAT server, the extender functions correctly. I have ensured the Ajax Control Toolkit is properly installed locally on my dev machine (and the dll is available in the bin directory), and using debugging have ensured the service is called correctly and runs through without error (which it does). The web application was taken from a server running IIS7.

    Can anyone confirm whether the Visual Studio 2008 development server requires a different configuration to IIS7 (as I believe IIS6 requires a different configuration to IIS7), and is there a resource that provides more info? My own searches have turned up very little in this area. On the other hand, if I am looking in the wrong area, any other tips would be appreciated. Thanks, Chris


  • Boost Mersenne Twister: how to seed with more than one value?

    - by Eamon Nerbonne
    I'm using the boost mt19937 implementation for a simulation. The simulation needs to be reproducible, and that means storing and potentially reusing the RNG seeds later. I'm using the Windows crypto API to generate the seed values, because I need an external source for the seeds, not because of any particular guarantees of randomness.

    The output of any simulation run will have a note including the RNG seed, so the seed needs to be reasonably short. On the other hand, as part of the analysis of the simulation, I'll be comparing several runs - but to be sure that these runs are actually different, I'll need to use different seeds - so the seed needs to be long enough to avoid accidental collisions.

    I've determined that 64 bits of seeding should suffice: the chance of a collision reaches 50% only after about 2^32 runs, and that probability is low enough that the average error it causes is negligible to me. Using just 32 bits of seed is tricky: the chance of a collision reaches 50% already after 2^16 runs, and that's a little too likely for my tastes.

    Unfortunately, the boost implementation either seeds with a full state vector - which is far, far too long - or a single 32-bit unsigned long - which isn't ideal. How can I seed the generator with more than 32 bits but less than a full state vector? I tried just padding the vector or repeating the seeds to fill the state vector, but even a cursory glance at the results shows that that generates poor results.
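
    One workable middle ground (a sketch, not from the question; it assumes the iterator-based seed(first, last) overload of boost's mersenne_twister, which consumes a full state's worth of 32-bit values): deterministically expand the 64-bit seed into the 624 words mt19937 needs, so equal 64-bit seeds always reproduce the same stream:

        #include <boost/random/mersenne_twister.hpp>
        #include <boost/cstdint.hpp>

        // Expand a 64-bit seed into a full mt19937 state vector using a
        // 64-bit LCG (Knuth's MMIX constants), then seed from the array.
        // The iterator overload needs enough values to fill the whole
        // state (624 words for mt19937), which is exactly what we provide.
        boost::mt19937 make_rng(boost::uint64_t seed64)
        {
            boost::uint32_t state[624];
            boost::uint64_t x = seed64;
            for (int i = 0; i < 624; ++i) {
                x = 6364136223846793005ULL * x + 1442695040888963407ULL;
                state[i] = static_cast<boost::uint32_t>(x >> 32); // high word mixes best
            }
            boost::mt19937 rng;
            boost::uint32_t* first = state;   // seed(It&, It) advances 'first'
            rng.seed(first, state + 624);
            return rng;
        }

    The note in the simulation output then only needs to record the original 64-bit value; reproducing a run is just a matter of calling make_rng with the same value again.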


  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance...

    [Data] has a predictable structure and a simple clustered index on the primary key:

        ALTER TABLE [dbo].[Data]
        ADD PRIMARY KEY CLUSTERED ( [ID] )

    My query joins the table on itself, looking for a certain kind of "overlapping" records:

        SELECT DISTINCT [Data].ID AS [ID]
        FROM dbo.[Data] AS [Data]
        JOIN dbo.[Data] AS [Compared]
            ON  [Data].[A] = [Compared].[A]
            AND [Data].[B] = [Compared].[B]
            AND [Data].[C] = [Compared].[C]
            AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E])
            AND [Data].[F] <> [Compared].[F]
        WHERE 1=1
            AND [Data].[A] = @A
            AND @CS <= [Data].[C] AND [Data].[C] < @CE  -- between a range

    [Data] has about a quarter-million records so far, and 10% to 50% of the data satisfies the WHERE clause depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan indicates two parallel Clustered Index Scans - but the clustered index is only on ID, which isn't part of the conditions of the query, only the output. ??

    If I add this hand-crafted [IDX_A_B_C_D_E_F] index, which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% and 4 minutes for 50%):

        CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F]
        ON [dbo].[Data] ([A], [B], [C], [D], [E], [F])
        INCLUDE ([ID], [X], [Y], [Z]);

    The estimated/actual execution plans now show an Index Seek, which seems like the right thing to be doing - but why so slow?? The Database Engine Tuning Advisor suggests a similar index with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the WHERE clause makes no difference in performance either.

    I need these and other indexes for other queries. I'm sure I could hint that the query should use the clustered index, since that's currently winning - but we all know it is not as optimized as it could be, and without a proper index I can expect the performance to get much worse with additional data. What gives?


  • What is the fastest way to pull a few element values out of XML files in Perl?

    - by Anon Guy
    I have a bunch of XML files that are about 1-2 megabytes in size. Actually, more than a bunch - there are millions. They're all well-formed, and many are even validated against their schema (confirmed with libxml2). All were created by the same app, so they're in a consistent format (though this could theoretically change in the future).

    I want to check the value of one element in each file from within a Perl script. Speed is important (I'd like to take less than a second per file) and, as noted, I already know the files are well-formed.

    I am sorely tempted to simply open the files in Perl, scan through until I see the element I am looking for, grab the value (which is near the start of the file), and close the file. On the other hand, I could use an XML parser (which might protect me from future changes to the XML formatting), but I suspect it will be slower than I'd like. Can anyone recommend an appropriate approach and/or parser? Thanks in advance.


  • JavaScript: Controlling the order that event handlers / listeners are executed in

    - by LRE
    Once again the IE Monster has hit me with an odd problem. I'm writing some changes into an ASP.NET site I inherited a while back. One of the problems is that some pages contain several controls that add JavaScript functions as handlers to the onload event (using YUI, if that matters), and some of those event handlers assume certain other functions have already been executed.

    This is all well and good in Firefox and IE7, as the handlers seem to execute in order of registration. IE8, on the other hand, does this backwards. I could go with some kind of double-checking approach, but given that the controls are present in several pages, I feel that would create even more dependencies.

    So I've started cooking up my own queue class that I push the functions onto so that I can control their execution order; I then register a single onload handler that instructs the queue to execute in my preferred order. I'm part way through that and have started wondering two things:

    1. Am I going OTT?
    2. Am I reinventing the wheel?

    Anyone have any insights? Any clean solutions that allow me to easily enforce execution order?


  • Versioning friendly, extendible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizable data structure to disk (edit: think dozens of MBs). Being an optimist, I thought that there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements:

    1. .NET 2.0 support, preferably with a FOSS implementation
    2. Version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields)
    3. Ability to do some form of random access, where part of the data can be extended after initial creation (think of this as extending intermediate results)
    4. Space and time efficient (XML has been excluded as an option given this requirement)

    Options considered so far:

    - Protocol Buffers: turned down by the verdict of the documentation about Large Data Sets - since that comment suggests adding another layer on top, this would call for additional complexity which I wish to have handled by the file format itself
    - HDF5, EXI: do not seem to have .NET implementations
    - SQLite / SQL Server Compact Edition: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use
    - BSON: does not appear to support requirement 3
    - Fast Infoset: only seems to have paid .NET implementations

    Any recommendations or pointers are greatly appreciated. Furthermore, if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.


  • What are alternatives to Win32 PulseEvent() function?

    - by Bill
    The documentation for the Win32 API PulseEvent() function (kernel32.dll) states that this function is "... unreliable and should not be used by new applications. Instead, use condition variables". However, condition variables cannot be used across process boundaries like (named) events can.

    I have a scenario that is cross-process and cross-runtime (native and managed code), in which a single producer occasionally has something interesting to make known to zero or more consumers. Right now, a well-known named event is used (and set to signaled state) by the producer, via this PulseEvent function, whenever it needs to make something known. Zero or more consumers wait on that event (WaitForSingleObject()) and perform an action in response. There is no need for two-way communication in my scenario, and the producer does not need to know whether the event has any listeners, nor whether the event was successfully acted upon. On the other hand, I do not want any consumers to ever miss any events. In other words, the system needs to be perfectly reliable - but the producer does not need to know whether that is the case or not. The scenario can be thought of as a "clock ticker": the producer provides a semi-regular signal for zero or more consumers to count, and all consumers must have the correct count over any given period of time. No polling by consumers is allowed (for performance reasons). The tick interval is just a few milliseconds (20 or so, but not perfectly regular).

    Raymond Chen (The Old New Thing) has a blog post pointing out the "fundamentally flawed" nature of the PulseEvent() function, but I do not see an alternative for my scenario from Chen or the posted comments. Can anyone please suggest one?

    Please keep in mind that the IPC signal must cross process boundaries on the machine, not simply threads. And the solution needs to have high performance, in that consumers must be able to act within 10ms of each event.
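
    One cross-process alternative (a sketch built on an assumption: that consumers can register themselves somewhere the producer can see, e.g. a small list in shared memory, which is not shown here): give each consumer its own named semaphore and have the producer release one count into every registered semaphore per tick. Semaphore counts accumulate in the kernel object, so a consumer that is momentarily busy misses nothing:

        #include <windows.h>
        #include <climits>
        #include <vector>

        // Producer side: one ReleaseSemaphore per registered consumer per tick.
        // Unlike PulseEvent, a release is never lost - it just raises the count.
        void Tick(const std::vector<HANDLE>& consumerSems)
        {
            for (size_t i = 0; i < consumerSems.size(); ++i)
                ReleaseSemaphore(consumerSems[i], 1, NULL);
        }

        // Consumer side: a uniquely named semaphore (the name scheme here is a
        // placeholder); each satisfied wait corresponds to exactly one tick, so
        // the count stays correct even if the consumer falls behind briefly.
        void ConsumeTicks()
        {
            HANDLE sem = CreateSemaphoreW(NULL, 0, LONG_MAX, L"Global\\TickSem-consumer1");
            for (;;) {
                WaitForSingleObject(sem, INFINITE);
                // ... count/handle one tick ...
            }
        }

    The trade-off against the current design is that the producer now has to learn about consumers at registration time, even though it still never needs to know whether any given tick was acted upon.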


  • custom C++ boost::lambda expression help

    - by aaa
    hello. A little bit of background: I have some strange multiply nested loops which I converted to a flat work queue (basically collapsing several single-index loops into one multi-index loop). Right now each flattened loop is hand coded, and I am trying to generalize the approach to work with any bounds, using lambda expressions. For example:

        // RANGE(i, I, N) is basically a macro to generate `int i = I; i < N; ++i`
        // for (RANGE(lb, N)) {
        //     for (RANGE(jb, N)) {
        //         for (RANGE(kb, max(lb, jb), N)) {
        //             for (RANGE(ib, jb, kb+1)) {

        // is equivalent to something like (overloading `,` to produce a range)
        flat<1, 3, 2, 4>((_2, _3+1), (max(_4,_3), N), N, N)

    The internals of flat are something like:

        template<size_t I1, size_t I2, ..., class L1_, class L2_, ...>
        boost::array<int,4> flat(L1_ L1, L2_ L2, ...) {
            // boost::array<int,4> current; // class variable
            bool advance;
            L2_ l2 = L2.bind(current); // bind current value to lambda
            {
                L1_ l1 = L1.bind(current); // bind current value to innermost lambda
                l1.next();
                advance = !(l1 < l1.upper()); // some internal logic
                if (advance) {
                    l2.next();
                    current[0] = l1.lower();
                }
            }
            // ...
        }

    My question is: can you give me some ideas on how to write a lambda (derived from boost) which can be bound to a reference to the index array, so as to return upper and lower bounds according to the lambda expression? Thank you much.
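
    One way to sidestep the expression-template machinery entirely (a sketch of an alternative design, not boost::lambda usage; the type and functor names here are illustrative): represent each bound as a plain callable over the current multi-index, and let the flattened advance step recompute the inner bounds whenever it carries:

        #include <boost/array.hpp>
        #include <boost/function.hpp>

        typedef boost::array<int, 4> Index;               // 0=lb, 1=jb, 2=kb, 3=ib
        typedef boost::function<int(const Index&)> Bound; // maps current index -> bound

        struct Range { Bound lower, upper; };

        // Example bounds for the innermost loop "for (ib = jb; ib < kb+1; ++ib)":
        struct LowerJb  { int operator()(const Index& i) const { return i[1]; } };
        struct UpperKb1 { int operator()(const Index& i) const { return i[2] + 1; } };

        // One flattened step: increment the innermost dimension; on overflow,
        // carry outward and re-enter the inner loops at their recomputed lower
        // bounds. Assumes every range is non-empty.
        bool next(Index& current, const Range ranges[4])
        {
            for (int d = 3; d >= 0; --d) {
                ++current[d];
                if (current[d] < ranges[d].upper(current)) {
                    for (int k = d + 1; k < 4; ++k)
                        current[k] = ranges[k].lower(current);
                    return true;
                }
                // dimension d exhausted: fall through and carry into d-1
            }
            return false; // all dimensions exhausted
        }

    This trades boost::lambda's terse placeholders for ordinary functors, but it gives exactly the bind-to-the-index-array behavior the .bind(current) calls above are reaching for, and it stays easy to debug.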


  • Excel 2003 VBA - Method to duplicate this code that selects and colors rows

    - by Justin
    So this is a fragment of a procedure that exports a dataset from Access to Excel:

        Dim rs As Recordset
        Dim intMaxCol As Integer
        Dim intMaxRow As Integer
        Dim objxls As Excel.Application
        Dim objWkb As Excel.Workbook
        Dim objSht As Excel.Worksheet

        Set rs = CurrentDb.OpenRecordset("qryOutput", dbOpenSnapshot)
        intMaxCol = rs.Fields.Count
        If rs.RecordCount > 0 Then
            rs.MoveLast: rs.MoveFirst
            intMaxRow = rs.RecordCount
            Set objxls = New Excel.Application
            objxls.Visible = True
            With objxls
                Set objWkb = .Workbooks.Add
                Set objSht = objWkb.Worksheets(1)
                With objSht
                    On Error Resume Next
                    .Range(.Cells(1, 1), .Cells(intMaxRow, intMaxCol)).CopyFromRecordset rs
                    .Name = conSHT_NAME
                    .Cells.WrapText = False
                    .Cells.EntireColumn.AutoFit
                    .Cells.RowHeight = 17
                    .Cells.Select
                    With Selection.Font
                        .Name = "Calibri"
                        .Size = 10
                    End With
                    .Rows("1:1").Select
                    With Selection
                        .Insert Shift:=xlDown
                    End With
                    .Rows("1:1").Interior.ColorIndex = 15
                    .Rows("1:1").RowHeight = 30
                    .Rows("2:2").Select
                    With Selection.Interior
                        .ColorIndex = 40
                        .Pattern = xlSolid
                    End With
                    .Rows("4:4").Select
                    With Selection.Interior
                        .ColorIndex = 40
                        .Pattern = xlSolid
                    End With
                    .Rows("6:6").Select
                    With Selection.Interior
                        .ColorIndex = 40
                        .Pattern = xlSolid
                    End With
                    .Rows("1:1").Select
                    With Selection.Borders(xlEdgeBottom)
                        .LineStyle = xlContinuous
                        .Weight = xlMedium
                        .ColorIndex = xlAutomatic
                    End With
                End With
            End With
        End If

        Set objSht = Nothing
        Set objWkb = Nothing
        Set objxls = Nothing
        Set rs = Nothing
        Set DB = Nothing
        End Sub

    See where I am coloring the rows: I want to select and fill (with any color) every other row, kinda like some of those Access reports. I could do it manually, coding each and every row, but there are two problems: 1) it's a pain, and 2) I don't know beforehand what the record count is. How can I make the code more efficient in this respect, while using the record count to know how many rows to loop through?

    EDIT: Another question I have is about the selection methods I am using in the module. Is there better Excel syntax, instead of these With Selection blocks?

        .Cells.Select
        With Selection.Font
            .Name = "Calibri"
            .Size = 10
        End With

    This is the only way I've figured out how to accomplish this piece, but literally every other time I run this code, it fails: it says there is no object and points to the .Font. Every other time? Is this because the code is poor, or because I am not closing the Excel app in the code? If so, how do I do that? Thanks as always!


  • desktop shortcut icon not showing in web setup project

    - by davsan
    I've created a web setup project and I wanted it to create a desktop shortcut to the web application (e.g. http://localhost/xx/yy.aspx). Up to this point it was pretty easy: I created a shortcut (it doesn't matter where), gave it the URL I wanted, added it to the User's Desktop special folder of my web setup project, and it was placed on the desktop after the installation.

    But then I wanted to display my custom shortcut icon. I set the icon of the shortcut I'd created on my file system, then re-included it in the setup project. However, after the installation the shortcut kept showing the default IE icon again. (I tried this on Windows Server 2003; on Windows XP the shortcut showed up iconless.)

    After some trials I found another way: I recreated an iconless shortcut on my file system, opened my web setup project, included this shortcut and my icon under Web Application Folder (under File System on Target Machine), then clicked on User's Desktop, right-clicked on the blank area on the right-hand side, selected Create New Shortcut, and chose the shortcut I'd just added. Then, under User's Desktop, I clicked on the newly created shortcut, opened the Properties window, and set its Icon property to my included icon. These steps solved it all, both on Server 2003 and Windows XP.

    Though this wasn't really a question, I wanted to share it anyway because it was quite annoying.


  • How to avoid using this in a constructor

    - by Paralife
    I have this situation:

        interface MessageListener {
            void onMessageReceipt(Message message);
        }

        class MessageReceiver {
            MessageListener listener;

            public MessageReceiver(MessageListener listener, other arguments...) {
                this.listener = listener;
            }

            loop() {
                Message message = nextMessage();
                listener.onMessageReceipt(message);
            }
        }

    and I want to avoid the following pattern (using this in the Client constructor):

        class Client implements MessageListener {
            MessageReceiver receiver;
            MessageSender sender;

            public Client(...) {
                receiver = new MessageReceiver(this, other arguments...);
                sender = new Sender(...);
            }

            ...

            @Override
            public void onMessageReceipt(Message message) {
                if (message.isGood())
                    sender.send("Congratulations");
                else
                    sender.send("Boooooooo");
            }
        }

    The reason I need the above functionality is that I want to call the sender inside the onMessageReceipt() function, for example to send a reply. But I don't want to pass the sender into a listener, so the only way I can think of is containing the sender in a class that implements the listener - hence the above Client implementation.

    Is there a way to achieve this without the use of 'this' in the constructor? It feels bizarre, and I don't like it, since I am passing myself to an object (MessageReceiver) before I am fully constructed. On the other hand, the MessageReceiver is not passed in from outside; it is constructed inside - but does this 'purify' the bizarre pattern? I am seeking an alternative, or an assurance of some kind that this is safe, or situations in which it might backfire on me.


  • Axis2 Class Generation

    - by Jack
    I have an instance of a derived class (called Child) that I would like to send between the client and server of my web service. However, the method that might return this instance is declared as returning an instance of the parent class (called Parent). For example:

        public class Service {
            public Parent createInstanceOfParentOrChildObject();
        }

    While Child is not a parameter anywhere in the service, nor is it ever specifically named as a return type (only Parent is ever named), it is nonetheless instantiated and returned inside certain methods (and then cast to Parent).

    I generated the WSDL file using Axis2 1.4.1 java2wsdl, specifying that it include this class (using the -xc parameter). I did not use Axis2 1.5.1 because it was not honoring the -xc parameter, though it looks like that bug is supposedly fixed in Axis2 1.6. I even did a quick check of the generated .wsdl file to ensure that it did indeed include a definition for Child (and, of course, Parent). However, when I used wsdl2java to generate the server-side (and client-side) code, Child was not generated.

    How can I get wsdl2java to generate Child? I realize that I could write the class by hand, but I don't want to have to do that for both the client and server. I was also hoping to make this as easy as possible for people who use my WSDL to generate their own clients.


  • Alternative to jQuery's .toggle() method that supports eventData?

    - by Bungle
    The jQuery documentation for the .toggle() method states: "The .toggle() method is provided for convenience. It is relatively straightforward to implement the same behavior by hand, and this can be necessary if the assumptions built into .toggle() prove limiting."

    The assumptions built into .toggle() have proven limiting for my current task, but the documentation doesn't elaborate on how to implement the same behavior. I need to pass eventData to the handler functions provided to .toggle(), but it appears that only .bind() will support this, not .toggle().

    My first inclination is to use a flag global to a single handler function to store the click state. In other words, rather than:

        $('a').toggle(function() {
            alert('odd number of clicks');
        }, function() {
            alert('even number of clicks');
        });

    do this:

        var clicks = true;
        $('a').click(function() {
            if (clicks) {
                alert('odd number of clicks');
                clicks = false;
            } else {
                alert('even number of clicks');
                clicks = true;
            }
        });

    I haven't tested the latter, but I suspect it would work. Is this the best way to do something like this, or is there a better way that I'm missing? Thanks!


  • php/mongodb: how do references work in PHP?

    - by harald
    Hello, I asked this in the MongoDB user group but was not satisfied with the answer, so maybe someone at Stack Overflow can enlighten me. The question was:

        $b = array('x' => 1);
        $ref = &$b;
        $collection->insert($ref);
        var_dump($ref);

    $ref does not contain '_id', because it's a reference to $b, the manual states. (The code snippet is taken from the PHP Mongo documentation.) I should add that with:

        $b = array('x' => 1);
        $ref = $b;
        $collection->insert($ref);
        var_dump($ref);

    $ref does contain the _id, because $ref is passed by reference to insert (note the $b with and without the referencing '&'). For those who do not know what the insert method of the MongoDB PHP driver does: it adds an _id field to the array passed in. On the other hand:

        function test(&$data) {
            $data['_id'] = time();
        }

        $b = array('x' => 1);
        $ref =& $b;
        test($ref);
        var_dump($ref);

    Here $ref does contain _id when I call a userland function. My question is: how do the references in these cases differ? The question is probably not MongoDB specific - I thought I knew how references in PHP work, but apparently I do not. The answer in the MongoDB user group was that this is simply the way references in PHP work. So... how do they work, explained with these code snippets? Thanks in advance!!!

