Search Results

Search found 1200 results on 48 pages for 'ed taylor'.

Page 44 of 48

  • Socket Read In Multi-Threaded Application Returns Zero Bytes or EINTR (104)

    - by user309670
    Hi. I've been a C coder for a while now - neither a newbie nor an expert. I have a daemonized application written in C on PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll to multiplex connections over a Unix socket. A user-submitted string is parsed for certain characters/words using strstr(), and if they are found, the daemon spawns 4 joinable threads that contact different websites simultaneously. In each thread I use socket, connect, write and read to talk to those webservers over TCP on port 80.

    All connections and writes seem successful. The reads from the webserver sockets fail, however, in one of two ways: (A) three threads appear to hang, and only one thread returns -1 with errno set to 104; that thread takes something like 10 minutes to respond - an eternity :-(. (I read that errno 104 is ECONNRESET, 'connection reset by peer' - not EINTR, which is a different errno.) Or (B) three threads read 0 bytes, and only 1 of the 4 threads actually returns some data.

    Isn't socket read/write thread-safe? I already use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. I also doubt that the said webhosts are actually resetting the connection, because when I run a single-threaded standalone version (everything else equal) everything works perfectly - although in series, not in parallel, of course.

    There's a second problem too (oops): I can't write back to the client that connected to my epoll-ed Unix socket. My daemon application hangs and hogs 100% CPU forever, yet nothing is written to the client's end. I am sure the client (a very typical PHP socket application) hasn't closed the connection whenever this happens - no errors are detected either. Any ideas? I cannot figure out what is wrong even with Valgrind, GDB or extensive logging. Kindly help where you can.
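
    A minimal sketch of a per-thread read loop for this situation (plain POSIX calls compiled as C++; the function name and buffer size are illustrative, not taken from the post): it retries on EINTR, treats 0 bytes as an orderly close by the peer, and reports ECONNRESET separately, which keeps the two failure modes described above distinguishable.

        #include <cerrno>
        #include <cstddef>
        #include <cstdio>
        #include <cstring>
        #include <string>
        #include <unistd.h>

        // Read until EOF or a hard error. Retries interrupted reads (EINTR)
        // and reports a peer reset (ECONNRESET, errno 104 on Linux) explicitly.
        static bool read_all(int fd, std::string& out) {
            char buf[4096];
            for (;;) {
                ssize_t n = read(fd, buf, sizeof buf);
                if (n > 0) { out.append(buf, static_cast<std::size_t>(n)); continue; }
                if (n == 0) return true;               // orderly shutdown by the peer
                if (errno == EINTR) continue;          // interrupted by a signal: retry
                if (errno == ECONNRESET)
                    std::fprintf(stderr, "connection reset by peer\n");
                else
                    std::fprintf(stderr, "read failed: %s\n", std::strerror(errno));
                return false;
            }
        }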

    Read the article

  • Error while trying to install Community Engine: NameError - "Undefined local variable or method 'map'"

    - by floatingfrisbee
    I'm trying to install Community Engine using the instructions here: http://github.com/bborn/communityengine
    At first I thought it might be because I had Rails 2.3.5 and desert 0.5.3, which were higher versions than those mentioned on the installation site. However, moving to Rails 2.3.4 and desert 0.5.2 did not help. Any ideas as to what might be going on?

        $ script/generate plugin_migration
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        /cygdrive/c/users/me/jesse/projects/ceng1/config/routes.rb:2: undefined local variable or method `map' for main:Object (NameError)
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:147:in `load_without_new_constant_marking'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:147:in `load_without_desert'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:18:in `load'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:32:in `__each_matching_file'
            from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:17:in `load'
            from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `load_routes!'
            from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `each'
            from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `load_routes!'
            from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:266:in `reload!'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:537:in `initialize_routing'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:188:in `process'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:113:in `send'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:113:in `run'
            from /cygdrive/c/users/me/jesse/projects/ceng1/config/environment.rb:6
            from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/generate.rb:1
            from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from script/generate:3

    Read the article

  • Moq, a translator and an expression

    - by jeriley
    I'm working with an expression within a moq-ed "Get Service" and ran into a rather annoying issue. In order to get this test to run correctly and the get service to return what it should, there's a translator in between that takes what you've asked for, sends it off and gets what you -really- want. So, thinking this was easy, I attempt this... fakeList holds the TEntity objects (translated, used by the UI) and TEnterpriseObject is the actual persistence type.

        mockGet.Setup(mock => mock.Get(It.IsAny<Expression<Func<TEnterpriseObject, bool>>>())).Returns(
            (Expression<Func<TEnterpriseObject, bool>> expression) =>
            {
                var items = new List<TEnterpriseObject>();
                var translator = (IEntityTranslator<TEntity, TEnterpriseObject>)
                    ObjectFactory.GetInstance(typeof(IEntityTranslator<TEntity, TEnterpriseObject>));
                fakeList.ForEach(fake => items.Add(translator.ToEnterpriseObject(fake)));
                items = items.Where(expression);
                var result = new List<TEnterpriseObject>(items);
                fakeList.Clear();
                result.ForEach(item => translator.ToEntity(item));
                return items;
            });

    I'm getting the red squiggly under items.Where(expression) - it says the type arguments cannot be inferred from the usage (it is confused between Func<TEnterpriseObject, bool> and Func<TEnterpriseObject, int, bool>). A far simpler version works great:

        mockGet.Setup(mock => mock.Get(It.IsAny<Expression<Func<TEntity, bool>>>())).Returns(
            (Expression<Func<TEntity, bool>> expression) => fakeList.AsQueryable().Where(expression));

    So I'm not sure what I'm missing... ideas?

    Read the article

  • DSA signature verification input

    - by calccrypto
    What is the data inputted into DSA when PGP signs a message? From RFC4880, i found A Signature packet describes a binding between some public key and some data. The most common signatures are a signature of a file or a block of text, and a signature that is a certification of a User ID. im not sure if it is the entire public key, just the public key packet, or some other derivative of a pgp key packet. whatever it is, i cannot get the DSA signature to verify here is a sample im testing my program on: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 abcd -----BEGIN PGP SIGNATURE----- Version: BCPG v1.39 iFkEARECABkFAk0z65ESHGFiYyAodGVzdCBrZXkpIDw+AAoJEC3Jkh8+bnkusO0A oKG+HPF2Qrsth2zS9pK+eSCBSypOAKDBgC2Z0vf2EgLiiNMk8Bxpq68NkQ== =gq0e -----END PGP SIGNATURE----- Dumped from pgpdump.net Old: Signature Packet(tag 2)(89 bytes) Ver 4 - new Sig type - Signature of a canonical text document(0x01). Pub alg - DSA Digital Signature Algorithm(pub 17) Hash alg - SHA1(hash 2) Hashed Sub: signature creation time(sub 2)(4 bytes) Time - Mon Jan 17 07:11:13 UTC 2011 Hashed Sub: signer's User ID(sub 28)(17 bytes) User ID - abc (test key) <> Sub: issuer key ID(sub 16)(8 bytes) Key ID - 0x2DC9921F3E6E792E Hash left 2 bytes - b0 ed DSA r(160 bits) - a1 be 1c f1 76 42 bb 2d 87 6c d2 f6 92 be 79 20 81 4b 2a 4e DSA s(160 bits) - c1 80 2d 99 d2 f7 f6 12 02 e2 88 d3 24 f0 1c 69 ab af 0d 91 -> hash(DSA q bits) and the public key for it is: -----BEGIN PGP PUBLIC KEY BLOCK----- Version: BCPG v1.39 mOIETTPqeBECALx+i9PIc4MB2DYXeqsWUav2cUtMU1N0inmFHSF/2x0d9IWEpVzE kRc30PvmEHI1faQit7NepnHkkphrXLAoZukAoNP3PB8NRQ6lRF6/6e8siUgJtmPL Af9IZOv4PI51gg6ICLKzNO9i3bcUx4yeG2vjMOUAvsLkhSTWob0RxWppo6Pn6MOg dMQHIM5sDH0xGN0dOezzt/imAf9St2B0HQXVfAAbveXBeRoO7jj/qcGx6hWmsKUr BVzdQhBk7Sku6C2KlMtkbtzd1fj8DtnrT8XOPKGp7/Y7ASzRtBFhYmMgKHRlc3Qg a2V5KSA8PohGBBMRAgAGBQJNM+p5AAoJEC3Jkh8+bnkuNEoAnj2QnqGtdlTgUXCQ Fyvwk5wiLGPfAJ4jTGTL62nWzsgrCDIMIfEG2shm8bjMBE0z6ngQAgCUlP7AlfO4 XuKGVCs4NvyBpd0KA0m0wjndOHRNSIz44x24vLfTO0GrueWjPMqRRLHO8zLJS/BX O/BHo6ypjN87Af0VPV1hcq20MEW2iujh3hBwthNwBWhtKdPXOndJGZaB7lshLJuW v9z6WyDNXj/SBEiV1gnPm0ELeg8Syhy5pCjMAgCFEc+NkCzcUOJkVpgLpk+VLwrJ /Wi9q+yCihaJ4EEFt/7vzqmrooXWz2vMugD1C+llN6HkCHTnuMH07/E/2dzciEYE GBECAAYFAk0z6nkACgkQLcmSHz5ueS7NTwCdED1P9NhgR2LqwyS+AEyqlQ0d5joA oK9xPUzjg4FlB+1QTHoOhuokxxyN =CTgL -----END PGP PUBLIC KEY BLOCK----- the public key packet of the key is mOIETTPqeBECALx+i9PIc4MB2DYXeqsWUav2cUtMU1N0inmFHSF/2x0d9IWEpVzEkRc30PvmEHI1faQi t7NepnHkkphrXLAoZukAoNP3PB8NRQ6lRF6/6e8siUgJtmPLAf9IZOv4PI51gg6ICLKzNO9i3bcUx4ye G2vjMOUAvsLkhSTWob0RxWppo6Pn6MOgdMQHIM5sDH0xGN0dOezzt/imAf9St2B0HQXVfAAbveXBeRoO 7jj/qcGx6hWmsKUrBVzdQhBk7Sku6C2KlMtkbtzd1fj8DtnrT8XOPKGp7/Y7ASzR in radix 64 i have tried many different combinations of sha1(< some data + 'abcd'),but the calculated value v never equals r, of the signature i know that the pgp implementation i used to create the key and signature is correct. i also know that my DSA implementation and PGP key data extraction program are correct. thus, the only thing left is the data to hash. what is the correct data to be hashed?
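
    For reference, RFC 4880 (section 5.2.4) defines what OpenPGP feeds to DSA for a signature over a document: the document itself (canonicalized to CRLF line endings for a text signature, type 0x01), followed by the signature packet's own hashed fields (version, signature type, public-key and hash algorithm octets, and the hashed subpacket area), followed by a six-octet trailer (0x04, 0xFF and the four-octet big-endian length of that hashed signature data). The signer's public key is not hashed for a document signature - that happens only for certification and key signatures. The resulting digest, read as a big-endian integer, is the H(m) in the standard DSA verification equations (FIPS 186):

        \[
        \begin{aligned}
        w   &= s^{-1} \bmod q \\
        u_1 &= H(m)\,w \bmod q \\
        u_2 &= r\,w \bmod q \\
        v   &= \left(g^{u_1} y^{u_2} \bmod p\right) \bmod q
        \end{aligned}
        \]

    The signature is valid if and only if \(v = r\).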

    Read the article

  • I need to implement C# deep copy constructors with inheritance. What patterns are there to choose from?

    - by Tony Lambert
    I wish to implement a deep copy of my class hierarchy in C#:

        public class ParentObj : ICloneable
        {
            protected int myA;
            public virtual Object Clone()
            {
                ParentObj newObj = new ParentObj();
                newObj.myA = this.myA;
                return newObj;
            }
        }

        public class ChildObj : ParentObj
        {
            protected int myB;
            public override Object Clone()
            {
                Parent newObj = this.base.Clone();
                newObj.myB = this.myB;
                return newObj;
            }
        }

    This will not work: when cloning the Child, only a Parent is new-ed. In my code some classes have large hierarchies. What is the recommended way of doing this? Cloning everything at each level without calling the base class seems wrong. There must be some neat solutions to this problem - what are they?

    Can I thank everyone for their answers. It was really interesting to see some of the approaches. I think it would be good if someone gave an example of a reflection answer for completeness. +1 awaiting!

    Read the article

  • Can LINQ expression classes implement the observer pattern instead of deferred execution?

    - by Tormod
    Hi. We have issues within an application using a state machine. The application is implemented as a Windows service and is iteration based (it "foreaches" itself through everything), and there are myriads of instances being processed by the state machine. As I'm reading the MEAP version of Jon Skeet's book "C# in Depth, 2nd ed.", I'm wondering if I can change the whole thing to use LINQ expression instances so that guards and conditions are represented as expression trees. We are building many applications on this state machine engine and would probably benefit greatly from the new expression tree visualizer in VS 2010.

    Now, a simple example: if I have an expression tree where there is an OR expression condition with two sub-nodes, is there any way that these can implement the observer pattern so that the expression tree becomes event driven? If a condition changes, it should notify its parent node (the OR node). Since the OR node then changes from "false" to "true", it should notify ITS parent, and so on.

    I love the declarative model of expression trees, but the deferred execution model works in the opposite direction of the control flow if you want event-based "live" conditions. Am I off on a wild goose chase here? Or is there some concept in the BCL that may help me achieve this?

    Read the article

  • checking and replacing a value in an array jquery

    - by liz
    I have a table of data:

        <table id="disparities" class="datatable">
          <thead>
            <tr>
              <th scope="col">Events</th>
              <th scope="col">White</th>
              <th scope="col">Black</th>
              <th scope="col">Hispanic</th>
              <th scope="col">Asian/Pacific Islands</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <th scope="row">Hospitalizations</th>
              <td>0.00</td>
              <td>20</td>
              <td>10</td>
              <td>5</td>
            </tr>
            <tr>
              <th scope="row">ED Visits</th>
              <td>19</td>
              <td>90</td>
              <td>40</td>
              <td>18</td>
            </tr>
          </tbody>
        </table>

    I have a function that retrieves the values from the above table into an array, like so: (0.00, 19)

        var points1 = $('#disparities td:nth-child(2)').map(function() {
            return $(this).text().match(/\S+/)[0];
        }).get();

    I want to check whether there is a 0.00 value (or it could be just 0) and change that value to NA, so that the resulting array is (NA, 19). I'm not really sure how to go about this - whether in the initial match or as a separate action.

    Read the article

  • What for, Visual Studio?! ml, cl, and link executables would suffice

    - by AntonIO
    It says in /library article /9s7c9wdw: "You can start this tool [cl.exe] only from the Visual Studio command prompt. You cannot start it from a system command prompt or from Windows Explorer." The corresponding (v=VS.80) page geared towards Visual Studio 2005 makes no such mention. Moreover, there is this Q&A.

    Thing is: why should anybody spend anything on VS? ml is provided free of charge - necessarily so, since it poses no value addition. The combined size of the other two is 895 KB, uncompressed. The GUI is a disservice; I myself have found half a dozen bugs. However, if the above is true, you'd need the IDE. MSFT fanboys, please step up.

    Background is that I have the 2008 Pro edition. The official Firefox builds use VS 2005, which I have on another system. To me no redundancy is acceptable. That's when I started pondering about boiling down VS and merely copying over the essential binaries, and then extended the thought to synthetically updating V$.

    Read the article

  • git-svn: reset tracking for master

    - by digitala
    I'm using git-svn to work with an SVN repository. My working copies have been created using git svn clone -s http://foo.bar/myproject so that my working copy follows the default directory scheme for SVN (trunk, tags, branches).

    Recently I've been working on a branch which was created using git-svn branch myremotebranch and checked out using git checkout --track -b mybranch myremotebranch. I needed to work from multiple locations, so from the branch I git-svn dcommit-ed files to the SVN repository quite regularly. After finishing my changes, I switched back to the master, executed a merge, committed the merge, and tried to dcommit the successful merge to the remote trunk. It seems as though after the merge the remote tracking for the master has switched to the branch I was working on:

        # git checkout master
        # git merge mybranch
        ... (successful)
        # git add .
        # git commit -m '...'
        # git svn dcommit
        Committing to http://foo.bar/myproject/branches/myremotebranch ...
        #

    Is there a way I can update the master so that it's following remotes/trunk as before the merge? I'm using git 1.7.0.5, if that's any help.

    Read the article

  • How to optimize Conway's game of life for CUDA?

    - by nlight
    I've written this CUDA kernel for Conway's game of life:

        __global__ void gameOfLife(float* returnBuffer, int width, int height) {
            unsigned int x = blockIdx.x*blockDim.x + threadIdx.x;
            unsigned int y = blockIdx.y*blockDim.y + threadIdx.y;
            float p = tex2D(inputTex, x, y);
            float neighbors = 0;
            neighbors += tex2D(inputTex, x+1, y);
            neighbors += tex2D(inputTex, x-1, y);
            neighbors += tex2D(inputTex, x, y+1);
            neighbors += tex2D(inputTex, x, y-1);
            neighbors += tex2D(inputTex, x+1, y+1);
            neighbors += tex2D(inputTex, x-1, y-1);
            neighbors += tex2D(inputTex, x-1, y+1);
            neighbors += tex2D(inputTex, x+1, y-1);
            __syncthreads();
            float final = 0;
            if(neighbors < 2) final = 0;
            else if(neighbors > 3) final = 0;
            else if(p != 0) final = 1;
            else if(neighbors == 3) final = 1;
            __syncthreads();
            returnBuffer[x + y*width] = final;
        }

    I am looking for errors/optimizations. Parallel programming is quite new to me and I am not sure I am doing it right. The rest of the app is: the input array is memcpy-ed to a 2D texture inputTex stored in a CUDA array; the output is memcpy-ed from global memory to the host and then dealt with.

    As you can see, a thread deals with a single pixel. I am unsure whether that is the fastest way, as some sources suggest doing a row or more per thread. If I understand correctly, NVIDIA themselves say that the more threads, the better. I would love advice on this from someone with practical experience.

    Read the article

  • Programming an IPTV application- Client/Server

    - by Sumit Ghosh
    I am part of a team which has been given the task of deploying an IPTV solution for a company. The system has been architected like this: there is a video capture card which receives satellite signals from a satellite receiver. This video capture card is part of a Windows 7 machine. The signals need to be transcoded there and passed to a streaming server, from which they will be received by the end users. The end users will be desktop users with a C#.NET application installed to view the channels.

    I am confused by the choice of server software, as I have multiple options - Windows Media Server, VideoLAN (the VLC project), or Flash Media Server, which also supports MPEG-2 HD. My main aim is to be able to stream MPEG-2 channels in HD quality and to encrypt the channels at the server end so that the streams can be protected. I know reversing is possible, but it won't be easy for every naive user with Wireshark snooping my streams.

    If any of you has ever done such an implementation, please suggest the best technologies I should go for. I am open to C#, C++ and other similar languages. Any help shall be deeply appreciated.

    Edit: the end users will be on the Internet and not necessarily on a LAN. The reason for this question is that the Internet doesn't support multicast like a LAN does, so I need some suggestions.

    Read the article

  • What happens to class members when malloc is used instead of new?

    - by Felix
    I'm studying for a final exam and I stumbled upon a curious question that was part of the exam our teacher gave last year to some poor souls. The question goes something like this:

    Is the following program correct, or not? If it is, write down what the program outputs. If it's not, write down why.

    The program:

        #include <iostream.h>

        class cls
        {
            int x;
        public:
            cls() { x = 23; }
            int get_x() { return x; }
        };

        int main()
        {
            cls *p1, *p2;
            p1 = new cls;
            p2 = (cls*)malloc(sizeof(cls));
            int x = p1->get_x() + p2->get_x();
            cout << x;
            return 0;
        }

    My first instinct was to answer with "the program is not correct, as new should be used instead of malloc". However, after compiling the program and seeing it output 23 I realize that that answer might not be correct. The problem is that I was expecting p2->get_x() to return some arbitrary number (whatever happened to be in that spot of the memory when malloc was called). However, it returned 0. I'm not sure whether this is a coincidence or if class members are initialized with 0 when the object is malloc-ed.

    Is this behavior (p2->x being 0 after malloc) the default? Should I have expected this? What would your answer to my teacher's question be? (Besides forgetting to #include <stdlib.h> for malloc :P)
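
    The behaviour the question is probing can be reproduced in a small sketch (illustrative names, not the exam program): malloc only hands back raw storage, so no constructor runs and the member is left indeterminate - the 0 observed above is luck, not a guarantee - whereas new, or placement new on the malloc-ed storage, does run the constructor.

        #include <cstdlib>
        #include <iostream>
        #include <new>

        struct Widget {            // stand-in for the exam's `cls`
            int x;
            Widget() : x(23) {}
        };

        int main() {
            // new allocates AND runs the constructor.
            Widget* a = new Widget;

            // malloc allocates raw bytes only: no constructor runs, so b->x holds
            // an indeterminate value and reading it would be undefined behaviour.
            Widget* b = static_cast<Widget*>(std::malloc(sizeof(Widget)));

            // Placement new constructs a real object in the malloc-ed storage.
            Widget* c = new (b) Widget;

            std::cout << a->x + c->x << '\n';   // well-defined: prints 46

            c->~Widget();   // destroy the placement-new'd object...
            std::free(b);   // ...then release the raw storage
            delete a;
            return 0;
        }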

    Read the article

  • Java - Reading a csv file line by line - stuck with weird non-existent characters being read!

    - by rockit
    Hello fellow Java developers. I'm having a very strange issue. I'm trying to read a CSV file line by line; I'm at the point where I'm just testing out the reading of the lines. Only, each time I read a line, the line contains square characters between each character of text. I even saved the file as a txt file in WordPad and Notepad with no change. Thus I must be doing something stupid... I have a CSV file, a standard CSV file - yes, a text file with commas in it. I try to read a line of text, but the text is all f-ed up and I cannot find the phrase within the text. Any advice? Code below.

        // open csv
        File filReadMe = new File(strRoot + "data2.csv");
        BufferedReader brReadMe = new BufferedReader(new InputStreamReader(new FileInputStream(filReadMe)));
        String strLine = brReadMe.readLine();
        // for all lines
        while (strLine != null) {
            // if line contains "(see also"
            if (strLine.toLowerCase().contains("(see also")) {
                // write line from "(see also" to ")"
                int iBegin = strLine.toLowerCase().indexOf("(see also");
                String strTemp = strLine.substring(iBegin);
                int iLittleEnd = strTemp.indexOf(")");
                System.out.println(strLine.substring(iBegin, iBegin + iLittleEnd));
            }
            // update line
            strLine = brReadMe.readLine();
        }
        // end for
        brReadMe.close();

    Read the article

  • java.net.BindException - How can I clear the sockets or whatever is causing it?

    - by user2266067
    I need some help with, I guess, a simple networking-related problem I'm having. It will also help me better understand how all this works by knowing what isn't being .close()'ed. I'm sure this is pretty simple, but for me it's all very new. This is the server program; I can most likely adapt the client afterwards, if I can figure this out. Thanks.

        public class Server {

            public static void main(String[] args) {
                start();
            }

            static int start = 0;

            public static void start() {
                try {
                    ServerSocket serverSocket = new ServerSocket(4567);
                    Socket socket = serverSocket.accept();
                    // 1) Take and echo input (in this case a message)
                    BufferedReader bf = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                    String message = bf.readLine();
                    System.out.println("Message recieved from Client:" + message);
                    // 2) Response of client message
                    PrintWriter printWriter = new PrintWriter(socket.getOutputStream(), true);
                    printWriter.println("Server echoing back the message ' " + message + " ' from Client");
                } catch (IOException e) {
                    System.out.println("e " + e);
                    System.exit(-1);
                }
                start++;
                clearUp();
                if (start < 5) {
                    System.out.println("Closing binds and Restarting" + start);
                    start();
                }
            }

            public void clearUp() {
                // How would I clear the stuff that is left bound so I can restart via start()
                // and avoid the java.net.BindException: Address already in use: JVM_Bind ?
            }
        }

    How would I clear the stuff that is left bound so I can restart via start() and avoid java.net.BindException: Address already in use: JVM_Bind?

    Read the article

  • Run Reporting Service in local mode and generate columns automatically?

    - by grady
    Hi, I have a SQL query which I want to use with MS Reporting Services in my ASP.NET application. So I created a report in local mode (rdlc) and attached it to a ReportViewer. Since my query uses parameters, I created a stored procedure which has exactly those parameters. In addition I have some textboxes which are used for entering the parameters for the query, and a button to call the stored procedure and fill the dataset which is bound to the ReportViewer. This works: I press the button and, according to what I entered, the correct data is shown.

    Now my question: in the future I plan to have multiple reports (which will be selected in a dropdown), and I wonder whether I can somehow just call the correct stored procedure and have the report show whichever columns are SELECTed in that procedure.

    Example: I select report1 from the dropdown (the procedure for report 1 is called) and 5 columns are shown in the ReportViewer. I select report2 from the dropdown (the procedure for report 2 is called) and 8 columns are shown. Is that possible somehow? Thanks :-)

    Read the article

  • Sequence Point and Evaluation Order (Pre-increment)

    - by Josh
    There was a debate today among some of my colleagues and I wanted to clarify it. It is about evaluation order and sequence points in an expression. The standard clearly states that C/C++ does not have left-to-right evaluation in an expression, unlike languages such as Java, which is guaranteed to have a sequential left-to-right order. So, in Java, in the expression below the evaluation of the leftmost operand (B) of the binary operation is sequenced before the evaluation of the rightmost operand (C):

        A = B B_OP C

    The following expression, according to cppreference (under the subsection "Sequenced-before rules (Undefined behaviour)") and Bjarne's TC++PL 3rd ed., is UB:

        x = x++ + 1;

    It can be interpreted however the compiler likes. BUT the expression below is said to be clearly well-defined behaviour in C++11:

        x = ++x + 1;

    So, if the above expression is well defined, what is the "fate" of this?

        array[x] = ++x;

    It seems the evaluation of post-increment and post-decrement is not defined, but pre-increment and pre-decrement are. NOTE: this is not used in real-life code. Clang 3.4 and GCC 4.8 clearly warn about the sequence point for both the pre- and post-increment versions.
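
    To keep the cases straight, here is a small sketch of which of these forms are well-defined, following the C++11 and C++17 sequencing rules as summarized on cppreference (the statements that would be undefined are left as comments so the program itself stays well-defined):

        #include <iostream>

        int main() {
            int x = 0;
            // Well-defined since C++11: the side effect of ++x is sequenced before
            // the value computation of the right-hand side, which is sequenced
            // before the assignment's own modification of x.
            x = ++x + 1;        // x == 2

            // x = x++ + 1;     // undefined in C++11/14; only C++17 made it well-defined

            int a[8] = {0};
            int i = 0;
            // a[i] = ++i;      // undefined before C++17 (unsequenced read and write of i);
            //                  // in C++17 the right operand of = is sequenced before the left
            a[++i] = 1;         // an unambiguous way to write it under any standard

            std::cout << x << ' ' << a[1] << '\n';   // prints "2 1"
            return 0;
        }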

    Read the article

  • Merging two arrays in PHP

    - by Industrial
    Hi everyone, I am trying to create a new array from two existing arrays. I tried array_merge, but it will not give me what I want. $array1 is a list of keys that I pass to a function. $array2 holds the results from that function, but it doesn't contain entries for keys with no available result. So, I want to make sure that all requested keys come out with null'ed values, as in the $result array shown below. It goes a little something like this:

        $array1 = array('item1', 'item2', 'item3', 'item4');
        $array2 = array(
            'item1' => 'value1',
            'item2' => 'value2',
            'item3' => 'value3'
        );

    Here's the result I want:

        $result = array(
            'item1' => 'value1',
            'item2' => 'value2',
            'item3' => 'value3',
            'item4' => ''
        );

    It can be done this way, but I don't think that it's a good solution - I really don't like to take the easy way out and suppress PHP errors by adding @:s in the code. This sample would obviously throw errors, since 'item4' is not in $array2:

        foreach ($keys as $k => $v){
            @$array[$v] = $items[$v];
        }

    So, what's the fastest (performance-wise) way to accomplish the same result?

    Read the article

  • Swing Menu dimensions

    - by ikurtz
    Greetings. I am trying to learn Java and Swing (today is my first day). I have been able to set up a menu in my test application, but the items occupy very little space (they are narrow). How do I go about extending the amount of space they use? I am studying Teach Yourself Java 6 in 21 Days, 5th Ed.; Java Swing, 2nd Edition (2002); and Teach Yourself Programming With Java in 24 Hours, 4th Edition (2005), but none of them sheds any light on this issue.

    EDIT: Menu code:

        JMenu _Game = new JMenu("Game");
        JMenuItem _New = new JMenuItem("New");
        JMenuItem _Exit = new JMenuItem("Exit");
        JMenu _Turn = new JMenu("Turn");
        JMenuItem _Red = new JMenuItem("Red");
        JMenuItem _Yellow = new JMenuItem("Yellow");
        _Turn.add(_Red);
        _Turn.add(_Yellow);
        _Game.add(_New);
        _Game.addSeparator();
        _Game.add(_Turn);
        _Game.addSeparator();
        _Game.add(_Exit);
        JMenu _Help = new JMenu("Help");
        JMenuItem _About = new JMenuItem("About");
        _Help.add(_About);
        JMenuBar _MenuBar = new JMenuBar();
        _MenuBar.add(_Game);
        _MenuBar.add(_Help);
        setJMenuBar(_MenuBar);

    EDIT: Solved!

        JMenuItem _New = new JMenuItem("New ");

    Just add spaces as needed! Simple.

    Read the article

  • Misaligned Pointer Performance

    - by Elite Mx
    Aren't misaligned pointers (in the BEST possible case) supposed to slow down performance, and in the worst case crash your program (assuming the compiler was nice enough to compile your invalid C program)? Well, the following code doesn't seem to show any performance difference between the aligned and misaligned versions. Why is that?

        /* brutality.c */
        #ifdef BRUTALITY
            xs = (unsigned long *) ((unsigned char *) xs + 1);
        #endif

        ...

        /* main.c */
        #include <stdio.h>
        #include <stdlib.h>

        #define size_t_max ((size_t)-1)
        #define max_count(var) (size_t_max / (sizeof var))

        int main(int argc, char *argv[]) {
            unsigned long sum, *xs, *itr, *xs_end;
            size_t element_count = max_count(*xs) >> 4;
            xs = malloc(element_count * (sizeof *xs));
            if(!xs) exit(1);
            xs_end = xs + element_count - 1;
            sum = 0;
            for(itr = xs; itr < xs_end; itr++)
                *itr = 0;
        #include "brutality.c"
            itr = xs;
            while(itr < xs_end)
                sum += *itr++;
            printf("%lu\n", sum);
            /* we could free the malloc-ed memory here */
            /* but we are almost done */
            exit(0);
        }

    Compiled and tested on two separate machines using:

        gcc -pedantic -Wall -O0 -std=c99 main.c
        for i in {0..9}; do time ./a.out; done
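
    One likely reason there is no visible difference: on x86 the hardware services most unaligned loads at little or no extra cost (only accesses that straddle a cache line or page get noticeably slower), and at -O0 this loop is dominated by memory traffic anyway; other architectures may fault instead. The strictly portable way to read through a misaligned address is to memcpy into a properly aligned object, as in this sketch (the names are illustrative):

        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        // Portable unaligned read: copy from an arbitrarily aligned address into an
        // aligned local. Compilers lower this to a single load on targets that allow
        // unaligned access, and to byte loads elsewhere.
        static std::uint64_t load_u64(const unsigned char* p) {
            std::uint64_t v;
            std::memcpy(&v, p, sizeof v);
            return v;
        }

        int main() {
            alignas(8) unsigned char buf[64] = {};
            buf[1] = 42;

            const unsigned char* misaligned = buf + 1;   // deliberately off by one byte
            std::printf("aligned for uint64_t? %s\n",
                        reinterpret_cast<std::uintptr_t>(misaligned) % alignof(std::uint64_t) == 0
                            ? "yes" : "no");
            std::printf("value: %llu\n",                 // prints 42 on little-endian targets
                        static_cast<unsigned long long>(load_u64(misaligned)));
            return 0;
        }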

    Read the article

  • Finding k elements of length-n list that sum to less than t in O(nlogk) time

    - by tresbot
    This is from Programming Pearls, 2nd ed., Column 2, Problem 8:

    Given a set of n real numbers, a real number t, and an integer k, how quickly can you determine whether there exists a k-element subset of the set that sums to at most t?

    One easy solution is to sort and sum the first k elements, which is our best hope of finding such a sum. However, in the solutions section Bentley alludes to a solution that takes n log(k) time, though he gives no hints on how to find it. I've been struggling with this; one thought I had was to go through the list and add all the elements less than t/k (in O(n) time); say there are m1 < k such elements, and they sum to s1 < t. Then we are left needing k - m1 elements, so we can scan the list again in O(n) time looking for all elements less than (t - s1)/(k - m1). Add them in again to get s2 and m2, then if m2 < k, look for all elements less than (t - s2)/(k - m2). So:

        def kSubsetSumUnderT(inList, k, t):
            outList = []
            s = 0
            m = 0
            while len(outList) < k:
                toJoin = [i for i in inList if i < (t - s) / (k - m)]
                if len(toJoin):
                    if len(toJoin) >= k - m:
                        toJoin.sort()
                        if s + sum(toJoin[0:(k - m - 1)]) < t:
                            return True
                        return False
                    outList = outList + toJoin
                    s += sum(toJoin)
                    m += len(toJoin)
                else:
                    return False

    My intuition is that this might be the O(n log(k)) algorithm, but I am having a hard time proving it to myself. Thoughts?
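
    For comparison, the O(n log k) bound Bentley hints at is usually reached with a bounded max-heap holding the k smallest elements seen so far: each of the n elements costs at most one O(log k) heap operation, and at the end the heap's sum (the minimum possible sum of any k-element subset) is compared with t. A sketch of that idea (my own illustration, not Bentley's solution text):

        #include <cstddef>
        #include <iostream>
        #include <queue>
        #include <vector>

        // True if some k-element subset of xs sums to at most t.
        // Keeps the k smallest elements seen so far in a max-heap:
        // n pushes/replacements, each O(log k), hence O(n log k) overall.
        bool kSmallestSumAtMost(const std::vector<double>& xs, std::size_t k, double t) {
            if (k == 0) return 0 <= t;
            if (k > xs.size()) return false;
            std::priority_queue<double> heap;   // max-heap of the k smallest so far
            double sum = 0.0;
            for (double x : xs) {
                if (heap.size() < k) {
                    heap.push(x);
                    sum += x;
                } else if (x < heap.top()) {
                    sum += x - heap.top();
                    heap.pop();
                    heap.push(x);
                }
            }
            return sum <= t;
        }

        int main() {
            std::vector<double> xs = {5.0, -1.0, 3.5, 2.0, 4.0};
            // The three smallest are -1 + 2 + 3.5 = 4.5, so this prints "true".
            std::cout << std::boolalpha << kSmallestSumAtMost(xs, 3, 5.0) << '\n';
            return 0;
        }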

    Read the article

  • SQLAuthority News – Speaking Sessions at TechEd India – 3 Sessions – 1 Panel Discussion

    - by pinaldave
    Microsoft Tech-Ed India 2010 is considered as the major Technology event of the year for various IT professionals and developers. This event will feature a comprehensive forum in order   to learn, connect, explore, and evolve the current technologies we have today. I would recommend this event to you since here you will learn about today’s cutting-edge trends, thereby enhancing your work profile and getting ahead of the rest. But, the most important benefit of all might be the networking opportunity that that you can attain by attending the forum. You can build personal connections with various Microsoft experts and peers that will last even far beyond this event! It also feels good to let you know that I will be speaking at this year’s event! So, here are the sessions that await you in this mega-forum. Session 1: True Lies of SQL Server – SQL Myth Buster Date: April 12, 2010  Time: 11:15pm – 11:45pm In this 30-minute demo session, I am going to briefly demonstrate few SQL Server Myth and their resolution backing up with some demo. This demo session is a must-attend for all developers and administrators who would come to the event. This is going to be a very quick yet  fun session. Session 2: Master Data Services in Microsoft SQL Server 2008 R2 Date: April 12, 2010  Time: 2:30pm-3:30pm SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. This session provides an in depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage and propagate changes from single master view of your business entities. Also with MDS – Master Data-hub which is the vital component helps ensure reporting consistency across systems and deliver faster more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing Date: April 14, 2010 Time: 5:00pm-6:00pm Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications and much more. Panel Discussion: Harness the power of Web – SEO and Technical Blogging Date: April 12, 2010 Time: 5:00pm-6:00pm Here you will learn lots of tricks and tips about SEO and Technical Blogging from various Industry Technical Blogging Experts. This event will surely be one of the most important Tech conventions of 2010. TechEd is going to be a very busy time for Tech developers and enthusiasts, since every evening there will be a fun session to attend. 
    If you are interested in any of the above topics, I suggest that you attend those sessions, as you will learn a great deal about each one. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Boot From a USB Drive Even if your BIOS Won’t Let You

    - by Trevor Bekolay
    You’ve always got a trusty bootable USB flash drive with you to solve computer problems, but what if a PC’s BIOS won’t let you boot from USB? We’ll show you how to make a CD or floppy disk that will let you boot from your USB drive. This boot menu, like many created before USB drives became cheap and commonplace, does not include an option to boot from a USB drive. A piece of freeware called PLoP Boot Manager solves this problem, offering an image that can burned to a CD or put on a floppy disk, and enables you to boot to a variety of devices, including USB drives. Put PLoP on a CD PLoP comes as a zip file, which includes a variety of files. To put PLoP on a CD, you will need either plpbt.iso or plpbtnoemul.iso from that zip file. Either disc image should work on most computers, though if in doubt plpbtnoemul.iso should work “everywhere,” according to the readme included with PLoP Boot Manager. Burn plpbtnoemul.iso or plpbt.iso to a CD and then skip to the “booting PLoP Boot Manager” section. Put PLoP on a Floppy Disk If your computer is old enough to still have a floppy drive, then you will need to put the contents of the plpbt.img image file found in PLoP’s zip file on a floppy disk. To do this, we’ll use a freeware utility called RawWrite for Windows. We aren’t fortunate enough to have a floppy drive installed, but if you do it should be listed in the Floppy drive drop-down box. Select your floppy drive, then click on the “…” button and browse to plpbt.img. Press the Write button to write PLoP boot manager to your floppy disk. Booting PLoP Boot Manager To boot PLoP, you will need to have your CD or floppy drive boot with higher precedence than your hard drive. In many cases, especially with floppy disks, this is done by default. If the CD or floppy drive is not set to boot first, then you will need to access your BIOS’s boot menu, or the setup menu. The exact steps to do this vary depending on your BIOS – to get a detailed description of the process, search for your motherboard’s manual (or your laptop’s manual if you’re working with a laptop). In general, however, as the computer boots up, some important keyboard strokes are noted somewhere prominent on the screen. In our case, they are at the bottom of the screen. Press Escape to bring up the Boot Menu. Previously, we burned a CD with PLoP Boot Manager on it, so we will select the CD-ROM Drive option and hit Enter. If your BIOS does not have a Boot Menu, then you will need to access the Setup menu and change the boot order to give the floppy disk or CD-ROM Drive higher precedence than the hard drive. Usually this setting is found in the “Boot” or “Advanced” section of the Setup menu. If done correctly, PLoP Boot Manager will load up, giving a number of boot options. Highlight USB and press Enter. PLoP begins loading from the USB drive. Despite our BIOS not having the option, we’re now booting using the USB drive, which in our case holds an Ubuntu Live CD! This is a pretty geeky way to get your PC to boot from a USB…provided your computer still has a floppy drive. Of course if your BIOS won’t boot from a USB it probably has one…or you really need to update it. 
    Download PLoP Boot Manager
    Download RawWrite for Windows

    Read the article

  • The Beginner’s Guide to Greasemonkey User Scripts in Firefox

    - by Asian Angel
    Everybody knows that Firefox has add-ons for virtually everything, but if you don’t want to bloat your installation you’ve always got the option of Greasemonkey scripts instead. Here’s a quick primer on how to use them. Getting Started with User Scripts Once you have Greasemonkey installed, managing the extension is really easy. Left click on the status bar icon to turn the extension on/off and right click to access the context menu shown here. Whether you use the Options button in the Add-ons Manager Window or the context menu shown above, both will bring up the Manage User Scripts dialog. At the moment you have a nice clean slate to work with… time to get some scripts added in. The majority of user scripts can be found at two different sites, the first being appropriately named userscripts.org, and you can either browse by tag or search for a script. As you can see here your search for a particular type of script can be quickly narrowed down based on category. There is definitely a lot to choose from. For our example we focused on the “textarea” tag. There were 62 scripts available but we quickly found what we were looking for on the first page. Installing, Managing, & Using Your Scripts When you find a script that you want to install visit the script’s homepage and click on the “Install” button. Note: Link for this script provided below. Once you have clicked on the Install button, Greasemonkey will open up the following installation window. You will be able to view: A summary of what the script does A list of websites that the script is supposed to function on (our example is set for all) View the script source if desired Make a final decision on whether to install the script or cancel the process Right-clicking on our status bar icon shows our new script listed and active. Reopening the Manage User Scripts window shows: Our new script listed in the column on the left The websites/pages included An option to disable the script (can also be done in the context menu) The ability to edit the script The ability to uninstall the script If you choose to edit the script you will be asked to browse for and select a default text editor of your choice (first time only). Once you have selected a text editor you can make any changes desired to the script. We decided to test our new user script on the site. Going to the comment box at the bottom we could easily resize the window as desired. The Comment box definitely got a lot bigger. Conclusion If you prefer to keep the number of extensions to a minimum in your Firefox installation then Greasemonkey and the Userscripts website can easily provide that extra functionality without the bloat. For added auto website script detection goodness see our article on Greasefire. Note: See our article here for specialized How-To Geek User Style Scripts that can be added to Greasemonkey. 
    Links:
    Download the Greasemonkey Extension (Mozilla Add-ons)
    Install the Textarea & Input Resize User Script
    Visit the Userscripts.org Website
    Visit the Userstyles.org Website

    Read the article

  • SQL SERVER – A Picture is Worth a Thousand Words – A Collection of Inspiring and Funny Posts by Vinod Kumar

    - by pinaldave
    One of the most popular quotes is: A picture is worth a thousand words. Working on this concept I started a series over my blog called the “Picture Post”. Rather than rambling over tons of material over text, we are trying to give you a capsule mode of the blog in a quick glance. Some of the picture posts already available over my blog are: Correlation of Ego and Work: Ego and Pride most of the times become a hindrance when we work inside a team. Take this cue, the first ever Picture post was published. Simple and easy to understand concept. Would want to say, Ego is the biggest enemy to humans. Read Original Post. Success (Perception Vs Reality): Personally, have always thought success is not something the talented achieve with the opportunity presented to them, but success is developed using the opportunity in hand now. In this fast paced world where success is pre-defined and convoluted by metrics it is hard to understand how complex it can sometimes be. So I took a stab at this concept in a simple way. Read Original Post. Doing Vs Saying: As Einstein would describe, Insanity is doing the same thing over and over again and expecting different results. Given the amount of information we get, it is difficult to keep track, learn and implement the same. If you were ever reminded of your college days, there will always be 5-6 people doing different things and we naturally try to emulate what they are doing. This could be from competitive exams GMAT, GRE, CAT, Higher-Ed, B-School hunting etc. Rather than saying you are going to do, it is best to do and then say!!! Read Original Picture Post. Your View Vs Management View: Being in the corporate world can be really demanding and we keep asking this question – “Why me?” when the performance appraisal process ends. In this post I just want to ask you one frank opinion – “Are you really self-critical in your assessments?”. If that is the case there shouldn’t be any heartburns or surprises. If you had just one thing to take back, well forget what others are getting but invest time in making yourself better because that is going to take you longer and further in your career. Read Picture Post. Blogging lifecycle for majority: I am happy and fortunate to be in this blog post because this picture post surely doesn’t apply to SQLAuthority where consistency and persistence have been the hallmark of the blog. For the majority others, who have a tendency to start a blog, get into slumber for a while and write saying they want to get back to blogging, the picture post was specifically done for them. Paradox of being someone else: It is always a dream that we want to become somebody and in this process of doing so, we become nobody. In this constant tussle of lost identity we forget to enjoy the moment that is in front of us. I just depicted this using a simple analogy of our constant struggle to get to the other side, just to realize we missed the wonderful moments. Grass is not greener on the other side, but grass is greener where we water the surface. Read Picture Post. And on the lighter side… Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
