Search Results

Search found 293 results on 12 pages for 'unpack'.

Page 10/12

  • Is the Internet Making us Smarter or Not?

    - by BuckWoody
    I’ve been reading recently about an exchange among some very bright folks, some who posit that the Internet with its instant-on, sometimes-right, big-statement-wins mentality is making people think in a more shallow way, teaching us to rely on others as experts and diluting our logical thought process. Others state that it broadens our perspective and extends our mental reach. Whenever I see this kind of exchange on two ends of a spectrum, I begin to wonder if both sides might be correct. I can certainly say that I have changed my way of learning, reading, and social interactions because of the Internet. And my tolerance for reading long missives has indeed gone down. I tend to (mentally and literally) “bookmark” things I never seem to have time to get back to. But I also agree that I’ve been exposed to thoughts, ideas and people I never would have encountered any other way. So how to deal with this dichotomy? Well, I’m going to go off and think about it. No, I’m really going to go off for a full week to a cabin I’ve rented in a National Forest in the Midwest. It has no indoor plumbing, phones, Internet connections or anything else – only a bed to sleep in and a place to cook a little. I’m taking one book, some paper, and a guitar with me and that’s it. I plan to spend my days walking, reading a little, playing a little on the guitar, but mostly just thinking. Those of you who know me might find this unusual. I’m an always-on, hyper-caffeinated, overly-busy, connected person. I haven’t taken a vacation in five years, at least for more than two or three days at a time. Even then, I keep us on the move constantly – our vacations aren’t cruises or anything like that. I check e-mail, post and all that. When I’m not on vacation, I live with and leverage lots of technology, and work with those who do the same. This, however, is a really “unplugged” event, and I’m hoping that it will let me unpack the things I’ve been stuffing in my head. I plan to spend a lot of time on a single subject, writing notes, thinking, and writing more notes. So after I post tomorrow's “quote of the day” I’ll be “going dark” for a week. No Twitter, Facebook, LinkedIn, e-mail, or chat; none of my five blogs will get updated, and I’ll have to turn in my two articles for InformIT.com early. I won’t have access to my college class portal, so my students will be without me for a week. I will really be offline. I’ll see you in a week – hopefully a little more educated. See you then.

    Read the article


  • Package management system corrupted. Cannot install or remove packages. U12.04LTS

    - by user271490
    Having read other posts, I believe that this may be less about samba than about the update system. Below is the log file of the failed installation of Samba. I have been trying without success to install/uninstall samba so that I could install anything else ... I cannot either install or remove samba using either update-manager or apt-get (nor indeed Software Centre). One of the errors that I have had to correct is the presence after "removal" (failed) of the /usr/share/system-config-samba directory, which finally allowed itself to be deleted. That, however, was then ... I have U12.04LTS, running on release 63, because I allowed the upgrade to 64 this morning, which fell over - no output to monitor - obviously even less support for my graphic chip than I am suffering already (see other posts in this forum). According to my interpretation of the dpkg returned errors there may be some problem with the package files, but if this is the case then it is on servers 'main', 'nantes uni fr' and 'best fr' at the very least if not everywhere. The suggestions offered at Package operation failed and elsewhere have not worked for me. This linked post suggests that a similar error is present in other packages, or that the error is in the 'update system'. I have tried ... sudo apt-get remove samba ... autoremove ... install samba ... clean ... update -f all of the above In update-manager I have tried the "reload packages list" which fails to terminate because of the error. I have tried to install and remove samba from the software centre ... :( I am at a loss ... I need help, please! Firstly to recover my apt-get/update-manager/Software Centre so that I can at least carry on with my continuing installation - up to communicating with the home network, hence the need for samba - which brings me to my second requirement ... samba. PS is the issue about "MaxReports" associated or apart? UPDATE! Being heartily sick of restarting FF every 5 seconds I thought I'd try again with Chromium ... and got the same errors from dpkg about a corrupt compressed package - coincidence? Of course this was no longer in the clipboard when I got here because apport has just errored ... AAARRRGGGH!!! Why does every error clear the clipboard? Thanks for any and all help!! installArchives() failed: Preconfiguring packages ... ... snip (Reading database ... ... snip (Reading database ... 184858 files and directories currently installed.) Unpacking samba (from .../samba_2%3a3.6.3-2ubuntu2.10_i386.deb) ... dpkg-deb (subprocess): data: internal gzip read error: ': data error' dpkg-deb: error: subprocess returned error exit status 2 dpkg: error processing /var/cache/apt/archives/samba_2%3a3.6.3-2ubuntu2.10_i386.deb (--unpack): subprocess dpkg-deb --fsys-tarfile returned error exit status 2 No apport report written because MaxReports is reached already Selecting previously unselected package system-config-samba. Unpacking system-config-samba (from .../system-config-samba_1.2.63-0ubuntu5_all.deb) ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Processing triggers for ufw ... Processing triggers for man-db ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... 
Errors were encountered while processing: /var/cache/apt/archives/samba_2%3a3.6.3-2ubuntu2.10_i386.deb Error in function: dpkg: dependency problems prevent configuration of system-config-samba: system-config-samba depends on samba; however: Package samba is not installed. dpkg: error processing system-config-samba (--configure): dependency problems - leaving unconfigured

    Read the article

  • Converting .docx to pdf (or .doc to pdf, or .doc to odt, etc.) with libreoffice on a webserver on the fly using php

    - by robertphyatt
    Ok, so I needed to convert .docx files to .pdf files on the fly, but none of the free php libraries that were available let me do it on my server (a webservice was not good enough). Basically either I needed to pay for a library (and have it maybe suck) or just deal with the free ones that didn't convert the formatting well enough. Not good enough! I found that LibreOffice (OpenOffice's successor) allows command line conversion using the LibreOffice conversion engine (which DID preserve the formatting like I wanted and generally worked great). I loaded the latest version of Ubuntu (http://www.ubuntu.com/download/ubuntu/download) onto my Virtual Box (https://www.virtualbox.org/wiki/Downloads) on my computer and found that I was able to easily convert files using the commandline like this: libreoffice --headless -convert-to pdf fileToConvert.docx -outdir output/path/for/pdf I thought: sweet...but I don't have admin rights on my host's web server. I tried to use a "portable" version of LibreOffice that I obtained from http://portablelinuxapps.org/ but I was unable to get it to work on my host's webserver, because my host's webserver didn't have all the dependencies (Dependency Hell! http://en.wikipedia.org/wiki/Dependency_hell) I was at a loss of how to make it work, until I ran across a cool project made by a Ph.D. student (Philip J. Guo) at Stanford called CDE: http://www.stanford.edu/~pgbovine/cde.html I will let you look at his explanations of how it works (I followed what he did in http://www.youtube.com/watch?feature=player_embedded&v=6XdwHo1BWwY, starting at about 32:00 as well as the directions on his site), but in short, it allows one to avoid dependency hell by copying all the files used when you run certain commands, recreating the linux environment where the command worked. I was able to use this to run LibreOffice without having to resort to someone's portable version of it, and it worked just like it did when I did it on Ubuntu with the command above, with a tweak: I needed to run the wrapper of LibreOffice the CDE generated. So, below is my PHP code that calls it. In this code snippet, the filename to be copied is passed in as $_POST["filename"]. I copy the file to the same spot where I originally converted the file, convert it, copy it back and then delete all the files (so that it doesn't start growing exponentially). I did it this way because I wasn't able to make it work otherwise on the webserver. If there is a linux + webserver ninja out there that can figure out how to make it work without doing this, I would be interested to know what you did. Please post a comment or something if you did that. <?php //first copy the file to the magic place where we can convert it to a pdf on the fly copy($time.$_POST["filename"], "../LibreOffice/cde-package/cde-root/home/robert/Desktop/".$_POST["filename"]); //change to that directory chdir('../LibreOffice/cde-package/cde-root/home/robert'); //the magic command that does the conversion $myCommand = "./libreoffice.cde --headless -convert-to pdf Desktop/".$_POST["filename"]." -outdir Desktop/"; exec ($myCommand); //copy the file back copy("Desktop/".str_replace(".docx", ".pdf", $_POST["filename"]), "../../../../../documents/".str_replace(".docx", ".pdf", $_POST["filename"])); //delete all the files out of the magic place where we can convert it to a pdf on the fly $files1 = scandir('Desktop'); //my files that I generated all happened to start with a number. 
$pattern = '/^[0-9]/'; foreach ($files1 as $value) { preg_match($pattern, $value, $matches); if(count($matches) > 0) { unlink("Desktop/".$value); } } //changing the header to the location of the file makes it work well on androids header( 'Location: '.str_replace(".docx", ".pdf", $_POST["filename"]) ); ?> And here is the tar.gz file I generated with CDE. To duplicate what I did exactly, put the tar.gz file in a folder somewhere. I will call that folder the "root". Make a new folder called "documents" in the "root" folder. Unpack the tar.gz and run the php script above from the "documents" folder. Success! I made a truly portable version of LibreOffice that can convert files on the fly on a webserver using 100% free, open source software!
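
    The same headless conversion is easy to script outside PHP as well. Below is a minimal sketch in Python, reusing the exact command line from the post; the file names are placeholders, and it assumes the libreoffice binary (or the CDE-wrapped libreoffice.cde) is reachable on the PATH:

        import os
        import subprocess

        def docx_to_pdf(src_path, out_dir):
            # Same flags as in the post: headless mode, convert to PDF,
            # write the result into out_dir.
            subprocess.check_call([
                "libreoffice", "--headless",
                "-convert-to", "pdf",
                src_path,
                "-outdir", out_dir,
            ])
            base = os.path.splitext(os.path.basename(src_path))[0]
            return os.path.join(out_dir, base + ".pdf")

        # Hypothetical usage: pdf_path = docx_to_pdf("report.docx", "output")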

    Read the article

  • Can't finish upgrade from 11.10 to 12 on VPS based on Parallels Virtuozzo Containers, due to libc6

    - by Carmageddon
    I was stuck with this problem near the end of an upgrade: WARNING: this version of the GNU libc requires kernel version 2.6.24 or later. Please upgrade your kernel before installing glibc. The installation of a 2.6 kernel could ask you to install a new libc first, this is NOT a bug, and should NOT be reported. In that case, please add lenny sources to your /etc/apt/sources.list and run: apt-get install -t lenny linux-image-2.6 Their suggested steps don't work on a VPS, and after googling, I came across this: Why did my upgrade to 12.04 fail with "glibc not found" or "libc6" or "requires kernel 2.6.24" error? There is a comment by izx which explains my problem and proposes a workaround (might take a while to convince the guys to upgrade the kernel..). However, when I follow his instructions, I get an error: # apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libc-dev-bin libc6 libc6-dev libnih1 Suggested packages: glibc-doc The following packages will be upgraded: libc-dev-bin libc6 libc6-dev libnih1 4 upgraded, 0 newly installed, 0 to remove and 394 not upgraded. 1 not fully installed or removed. Need to get 0 B/7737 kB of archives. After this operation, 233 kB disk space will be freed. Do you want to continue [Y/n]? y locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale) locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale) Preconfiguring packages ... (Reading database ... 35175 files and directories currently installed.) Preparing to replace libc6-dev 2.13-20ubuntu5.2 (using .../libc6-dev_2.15-0ubuntu10.3_amd64.deb) ... Unpacking replacement libc6-dev ... Preparing to replace libc-dev-bin 2.13-20ubuntu5.2 (using .../libc-dev-bin_2.15-0ubuntu10.3_amd64.deb) ... Unpacking replacement libc-dev-bin ... Preparing to replace libc6 2.13-20ubuntu5.2 (using .../libc6_2.15-0ubuntu10.3_amd64.deb) ... locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale) locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale) Checking for services that may need to be restarted... Checking init scripts... runlevel:/var/run/utmp: No such file or directory Checking for services that may need to be restarted... Checking init scripts... runlevel:/var/run/utmp: No such file or directory WARNING: init script for samba not found. Stopping some services possibly affected by the upgrade (will be restarted later): cron: stopping...done. WARNING: this version of the GNU libc requires kernel version 2.6.24 or later. Please upgrade your kernel before installing glibc. The installation of a 2.6 kernel _could_ ask you to install a new libc first, this is NOT a bug, and should *NOT* be reported. In that case, please add lenny sources to your /etc/apt/sources.list and run: apt-get install -t lenny linux-image-2.6 Then reboot into this new kernel, and proceed with your upgrade dpkg: error processing /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 Processing triggers for man-db ... 
locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale) locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale) Errors were encountered while processing: /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) I also attempted to manually grab the .deb package and install it using dpkg -i, but I get: locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale) Even though the file is: libc-bin_2.15-0ubuntu10+openvz0_amd64.deb

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light type shading in my WebGL application. I know that there are a lot more techniques (like using Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. Till now, I have adapted a simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture; screenshot: glitchy shadow mapping). So for now, this is how I understand the usage of cubemaps in shadow mapping: Set up a framebuffer (in the case of cubemaps, 6 framebuffers; 6 instead of 1 because every usage of framebufferTexture2D slows down execution, which is nicely described here) and a cubemap texture. Also, depth components are not well supported in WebGL, so I need to render to RGBA first. this.texture = gl.createTexture(); gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR); for (var face = 0; face < 6; face++) gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null); gl.bindTexture(gl.TEXTURE_CUBE_MAP, null); this.framebuffer = []; for (face = 0; face < 6; face++) { this.framebuffer[face] = gl.createFramebuffer(); gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0); gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer); var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors if (e !== gl.FRAMEBUFFER_COMPLETE) throw "Cubemap framebuffer object is incomplete: " + e.toString(); } Set up the light and the camera (I'm not sure if I should store all 6 view matrices and send them to the shaders later, or if there is a way to do it with just one view matrix). Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z) for (var face = 0; face < 6; face++) { gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]); gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); camera.lookAt( light.position.add( cubeMapDirections[face] ) ); scene.draw(shadow.program); } In a second pass, calculate the projection of the current vertex using the light's projection and view matrices. Now I don't know if I should calculate 6 of them, one for each face of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 region. vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex; In the fragment shader, calculate the distance between the current vertex and the light position and check if it's deeper than the depth information read from the earlier rendered shadow map. I know how to do it with a 2D texture, but I have no idea how I should use a cubemap texture here. I have read that texture lookups into cubemaps are performed with a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex? 
For now, my code for this part looks like this (not working yet): float shadow = 1.0; vec3 depth = vDepthPosition.xyz / vDepthPosition.w; depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant; float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz)); if (depth.z > shadowDepth) shadow = 0.5; Could you give me some hints or examples (preferably in WebGL code) on how I should build it?

    Read the article

  • Problem with cucumber

    - by sev
    I want to make a rails app which will require a minimum of gems. I froze the gems into the app, tried to run cucumber's tests, and got an error. Below is the sequence of my actions. What am I doing wrong? rails cucumber && cd cucumber rake rails:freeze:gems add at the end of config/environments/test.rb: config.gem 'gherkin' config.gem 'cucumber-rails' config.gem 'database_cleaner' config.gem 'webrat' rake gems:unpack:dependencies RAILS_ENV=test rake gems:build RAILS_ENV=test rake gems RAILS_ENV=test [F] gherkin [F] trollop = 1.16.2 [F] cucumber-rails [F] cucumber = 0.8.0 [F] gherkin = 1.0.30 [F] trollop = 1.16.2 [F] term-ansicolor = 1.0.4 [F] builder = 2.1.2 [F] diff-lcs = 1.1.2 [F] json_pure = 1.4.3 [F] database_cleaner [F] webrat [F] nokogiri = 1.2.0 [F] rack = 1.0 [F] rack-test = 0.5.3 [F] rack = 1.0 script/generate cucumber rake db:migrate gem uninstall builder cucumber cucumber-rails diff-lcs gherkin json_pure nokogiri rack-test term-ansicolor trollop webrat rake cucumber /usr/bin/ruby1.8 -I "cucumber/vendor/gems/cucumber-0.8.0/lib:lib" "cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber" --profile default /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- gherkin (LoadError) from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require' from cucumber/vendor/gems/cucumber-0.8.0/bin/../lib/cucumber/cli/main.rb:5 from cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber:5:in `require' from cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber:5 rake aborted! Command failed with status (1): [/usr/bin/ruby1.8 -I "cucumbe...] (See full trace by running task with --trace)

    Read the article

  • Encrypting with Perl CBC and decrypting with PHP mcrypt

    - by Ed
    I have an encrypted string that was encrypted with Perl Crypt::CBC (Rijndael, cbc). The original plaintext was encrypted with the encrypt_hex() method of Crypt::CBC. $encrypted_string = '52616e646f6d49567b2c89810ceddbe8d182c23ba5f6562a418e318b803a370ea25a6a8cbfe82bc6362f790821dce8441a790a7d25d3d9ea29f86e6685d0796d'; I have the 32 character key that was used. mcrypt is successfully compiled into PHP, but I'm having a very hard time trying to decrypt the string in PHP. I keep getting gibberish back. If I unpack('H*', $encrypted_string), I see 'RandomIV' followed by what looks like binary. I can't seem to correctly extract the IV and separate the actual encrypted message. I know I'm not providing much information, but I'm not sure where else to start. $cipher = 'rijndael-256'; $cipher_mode = 'cbc'; $td = mcrypt_module_open($cipher, '', $cipher_mode, ''); $key = '32 characters'; // Does this need to be converted to something else before being passed? $iv = ?? // Not sure how to extract this from $encrypted_string. $token = ?? // Should be a sub-string of $encrypted_string, correct? mcrypt_generic_init($td, $key, $iv); $clear = rtrim(mdecrypt_generic($td, $token), ''); mcrypt_generic_deinit($td); mcrypt_module_close($td); echo $clear; Any help, pointers in the right direction, would be greatly appreciated. Let me know if I need to provide more information.

    Read the article

  • Jetty startup delay

    - by Tauren
    I'm trying to figure out what would be causing a 1 minute delay in the startup of Jetty. Is it a configuration problem, my application, or something else? I have Jetty 7 (jetty-7.0.1.v20091125 25 November 2009) installed on a server and I deploy a 45MB ROOT.war file into the webapps directory. This is the only webapp configured in Jetty. I then start Jetty with the command: java -DSTOP.PORT=8079 -DSTOP.KEY=mystopkey -Denv=stage -jar start.jar etc/jetty-logging.xml etc/jetty.xml & I get two lines of output right after doing this: 2010-03-07 14:20:06.642:INFO::Logging to StdErrLog::DEBUG=false via org.eclipse.jetty.util.log.StdErrLog 2010-03-07 14:20:06.710:INFO::Redirecting stderr/stdout to /home/zing/jetty-distribution-7.0.1.v20091125/logs/2010_03_07.stderrout.log When I press the enter key, I get my command prompt back. Looking at the log file (logs/2010_03_07.stderrout.log), I see the following at the beginning: 2010-03-07 14:08:50.396:INFO::jetty-7.0.1.v20091125 2010-03-07 14:08:50.495:INFO::Extract jar:file:/home/zing/jetty-distribution-7.0.1.v20091125/webapps/ROOT.war!/ to /tmp/Jetty_0_0_0_0_8080_ROOT.war___.8te0nm/webapp 2010-03-07 14:08:52.599:INFO::NO JSP Support for , did not find org.apache.jasper.servlet.JspServlet 2010-03-07 14:09:51.379:INFO::Set web app root system property: 'webapp.root' = [/tmp/Jetty_0_0_0_0_8080_ROOT.war___.8te0nm/webapp] 2010-03-07 14:09:51.585:INFO::Initializing Spring root WebApplicationContext INFO - ContextLoader - Root WebApplicationContext: initialization started INFO - XmlWebApplicationContext - Refreshing Root WebApplicationContext: startup date [Sun Mar 07 14:09:51 PST 2010]; root of context hierarchy ... Notice the 1 minute long pause between the 3rd and 4th lines. What is Jetty doing at this point? What other things could be going on? It doesn't even look like it has started my Spring initialization yet. Note that I checked my /tmp directory to see if it was simply the time to unpack my war file, but the file had been completely unpacked even at the start of this 1 minute delay.

    Read the article

  • Rails deployment strategies with Bundler and JRuby

    - by brad
    I have a jruby rails app and I've just started using bundler for gem dependency management. I'm interested in hearing peoples' opinions on deployment strategies. The docs say that bundle package will package your gems locally so you don't have to fetch them on the server (and I believe warbler does this by default), but I personally think (for us) this is not the way to go as our deployed code (in our case a WAR file) becomes much larger. My preference would be to mimic our MVN setup which fetches all dependencies directly on the server AFTER the code has been copied there. Here's what I'm thinking, all comments are appreciated: Step1: Build war file, copy to server Step2: Unpack war on server, fetch java dependencies with mvn Step3: use Bundler to fetch Gem deps (Where should these be placed??) * Step 3 is the step I'm a bit unclear on. Do I run bundle install with a particular target in mind?? Step4: Restart Tomcat Again my reasoning here is that I'd like to keep the dependencies separate from the code at deploy time. I'd also like to place all gem dependencies in the app itself so they are contained, rather than installing them in the app user's home directory (as, again, I believe is the default for Bundler)

    Read the article

  • How to suppress quotes in Powershell commands to executables

    - by David Gladfelter
    Is there any way to suppress the enclosing quotation marks that PowerShell likes to generate and then pass to external executables for command-line arguments that have spaces in them? Here's the situation: One way to unpack many installers is a command of the form: msiexec /a <packagename> /qn TARGETDIR="<path to folder with spaces>" Trying to execute this from PowerShell has proven quite difficult. PowerShell likes to enclose parameters with spaces in double-quotes. The following lines: msiexec /a somepackage.msi /qn 'TARGETDIR="c:\some path"' msiexec /a somepackage.msi /qn $('TARGETDIR="c:\some path"') $td = '"c:\some path"' msiexec /a somepackage.msi /qn TARGETDIR=$td All result in the following command line (as reported by the Win32 GetCommandLine() api): "msiexec" /a somepackage.msi /qn "TARGETDIR="c:\some path"" This command line: msiexec /a somepackage.msi TARGETDIR="c:\some path" /qn results in "msiexec" /a fooinstaller.msi "TARGETDIR=c:\some path" /qn It seems that PowerShell likes to enclose the results of expressions meant to represent one argument in quotation marks when passing them to external executables. This works fine for most executables. However, MsiExec is very specific about the quoting rules it wants and won't accept any of the command lines PowerShell generates for paths that have spaces in them. Is there any way to suppress this behavior?

    Read the article

  • Gems install fine but don't show as installed under rake gems

    - by Josh Pinter
    I'll show you my output here: rake gems (in /Users/jp/Sites/central/trunk) - [F] authlogic - [R] activesupport - [F] builder - [F] formtastic - [R] activesupport >= 2.3.0 - [R] actionpack >= 2.3.0 - [ ] fastercsv I = Installed F = Frozen R = Framework (loaded before rails starts) Making sure fastercsv is installed: gem which fastercsv /usr/local/lib/ruby/gems/1.8/gems/fastercsv-1.5.3/lib/fastercsv.rb After installing through a variety of methods (only one is shown here): sudo rake gems:install (in /Users/jp/central/trunk) gem install fastercsv Successfully installed fastercsv-1.5.3 1 gem installed Installing ri documentation for fastercsv-1.5.3... Installing RDoc documentation for fastercsv-1.5.3... And trying again: rake gems (in /Users/jp/Sites/central/trunk) - [F] authlogic - [R] activesupport - [F] builder - [F] formtastic - [R] activesupport >= 2.3.0 - [R] actionpack >= 2.3.0 - [ ] fastercsv I = Installed F = Frozen R = Framework (loaded before rails starts) One thing to know is that I tried unpacking the gems, but if it doesn't think a gem is installed it can't unpack it. Another thing is that I really tried to figure this out. There's a bunch of people saying clean up local gems in your user account, always install with sudo, etc. But I've tried all that. What would you guys do to fix this? Thanks many times over, Josh

    Read the article

  • Take most significant 8 bytes of the MD5 hash of a string as a long (in Ruby)

    - by Nate Murray
    Hey Friends, I'm trying to implement a java "hash" function in ruby. Here's the java side: import java.nio.charset.Charset; import java.security.MessageDigest; /** * @return most significant 8 bytes of the MD5 hash of the string, as a long */ protected long hash(String value) { byte[] md5hash; md5hash = md5Digest.digest(value.getBytes(Charset.forName("UTF8"))); long hash = 0L; for (int i = 0; i < 8; i++) { hash = hash << 8 | md5hash[i] & 0x00000000000000FFL; } return hash; } So far, my best guess in ruby is: # WRONG - doesn't work properly. #!/usr/bin/env ruby -wKU require 'digest/md5' require 'pp' md5hash = Digest::MD5.hexdigest("0").unpack("U*") pp md5hash hash = 0 0.upto(7) do |i| hash = hash << 8 | md5hash[i] & 0x00000000000000FF end pp hash Problem is, this ruby code doesn't match the java output. For reference, the above java code given these strings returns the corresponding long: "00038c53790ecedfeb2f83102e9115a522475d73" => -2059313900129568948 "0" => -3473083983811222033 "001211e8befc8ac22dd265ecaa77f8c227d0007f" => 3234260774580957018 Thoughts: I'm having problems getting the UTF8 bytes from the ruby string In ruby I'm using hexdigest, I suspect I should be using just digest instead The java code is taking the md5 of the UTF8 bytes whereas my ruby code is taking the bytes of the md5 (as hex) Any suggestions on how to get the exact same output in ruby?
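
    (For anyone cross-checking: the asker's second thought is on the money. Using Digest::MD5.digest, the raw bytes rather than hexdigest, and unpacking the first 8 bytes as a big-endian signed 64-bit value, e.g. something like Digest::MD5.digest(value).unpack("q>").first on Ruby 1.9.3+, should reproduce the Java numbers. A minimal sketch of the same computation in Python:)

        import hashlib

        def md5_hash_long(value):
            # Most significant 8 bytes of the MD5 digest, read as a
            # big-endian signed 64-bit integer, same as the Java loop.
            digest = hashlib.md5(value.encode("utf-8")).digest()
            return int.from_bytes(digest[:8], "big", signed=True)

        print(md5_hash_long("0"))  # -3473083983811222033, matching the Java output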

    Read the article

  • Most efficient way to send images across processes

    - by Heinrich Ulbricht
    Goal Pass images generated by one process efficiently and at very high speed to another process. The two processes run on the same machine and on the same desktop. The operating system may be WinXP, Vista or Win7. Detailed description The first process is solely for controlling the communication with a device which produces the images. These images are about 500x300px in size and may be updated up to several hundred times per second. The second process needs these images to display them. The first process uses a third party API to paint the images from the device to a HDC. This HDC has to be provided by me. Note: There is already a connection open between the two processes. They are communicating via anonymous pipes and share memory mapped file views. Thoughts How would I achieve this goal with as little work as possible? And I mean both work for me and for the computer. I am using Delphi, so maybe there is some component available for doing this? I think I could always paint to any image component's HDC, save the content to a memory stream, copy the contents via the memory mapped file, unpack it on the other side and paint it there to the destination HDC. I also read about an IPicture interface which can be used to marshall images. What are your ideas? I appreciate every thought on this!

    Read the article

  • Does a JAXWS client make a difference between an empty collection and a null collection value as returned by the webservice?

    - by snowflake
    Since JAX-WS relies on JAXB, and since I have observed the code that unpacks the XML bean in the JAXB Reference Implementation, I guess the difference is not made and that a JAXWS client always returns an empty collection, even if the webservice result was a null element: public T startPacking(BeanT bean, Accessor<BeanT, T> acc) throws AccessorException { T collection = acc.get(bean); if(collection==null) { collection = ClassFactory.create(implClass); if(!acc.isAdapted()) acc.set(bean,collection); } collection.clear(); return collection; } I agree that for best interoperability service contracts should be unambiguous and avoid such differences, but it seems that the JAX-WS service I'm invoking (hosted on a JBoss server with the JBossWS implementation) is returning, as expected, either null or an empty collection (tested with SoapUI). For my test I used code generated by wsimport. The return element is defined as: @XmlElement(name = "return", nillable = true) protected List<String> _return; I even tried changing the Response class's getReturn method from: public List<String> getReturn() { if (_return == null) { _return = new ArrayList<String>(); } return this._return; } to public List<String> getReturn() { return this._return; } but without success. Any helpful information/comment regarding this problem is welcome!

    Read the article

  • Casting to a struct from LPVOID - C

    - by Jamie Keeling
    Hello, I am writing a simple console application which will allow me to create a number of threads from a set of parameters passed through the arguments I provide. DWORD WINAPI ThreadFunc(LPVOID threadData) { } I am packing them into a struct and passing them as a parameter into the CreateThread method and trying to unpack them by casting them to the same type as my struct from the LPVOID. I'm not sure how to cast it to the struct after getting it through so I can use it in the method itself; I've tried various combinations (example attached) but it won't compile. Struct: #define numThreads 1 struct Data { int threads; int delay; int messages; }; Call to method: HANDLE hThread; DWORD threadId; struct Data *tData; tData->threads = numThreads; tData->messages = 3; tData->delay = 1000; // Create child thread hThread = CreateThread( NULL, // lpThreadAttributes (default) 0, // dwStackSize (default) ThreadFunc, // lpStartAddress &tData, // lpParameter 0, // dwCreationFlags &threadId // lpThreadId (returned by function) ); My attempt: DWORD WINAPI ThreadFunc(LPVOID threadData) { struct Data tData = (struct Data)threadData; int msg; for(msg = 0; msg<5; msg++) { printf("Message %d from child\n", msg); } return 0; } Compiler error: error C2440: 'type cast' : cannot convert from 'LPVOID' to 'Data' As you can see I have implemented a way to loop through a number of messages already; I'm trying to make things slightly more advanced and add some further functionality.

    Read the article

  • Multiple Unpacking Assignment in Python when you don't know the sequence length

    - by doug
    The textbook examples of multiple unpacking assignment are something like: import numpy as NP M = NP.arange(5) a, b, c, d, e = M # so of course, a = 0, b = 1, etc. M = NP.arange(20).reshape(5, 4) # numpy 5x4 array a, b, c, d, e = M # here, a = M[0,:], b = M[1,:], etc. (ie, a single row of M is assigned each to a through e) (My Q is not numpy specific; indeed, I would prefer a pure python solution.) W/r/t the piece of code I'm looking at now, I see two complications on that straightforward scenario: I usually won't know the shape of M; and I want to unpack a certain number of items (definitely less than all items) and I want to put the remainder into a single container. So back to the 5x4 array above, what I would very much like to be able to do is, for instance, assign the first three rows of M to a, b, and c respectively (exactly as above) and the rest of the rows (I have no idea how many there will be, just some positive integer) to a single container, all_the_rest = []. I'm not sure if I have explained this clearly; in any event, if I get feedback I'll promptly edit my Question.
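
    For reference, Python 3's extended iterable unpacking (PEP 3132) covers exactly this case with a starred target; a minimal sketch with plain lists:

        M = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11],
             [12, 13, 14, 15], [16, 17, 18, 19]]

        # First three rows go to a, b, c; the starred name soaks up
        # however many rows remain (possibly none).
        a, b, c, *all_the_rest = M

        print(a)             # [0, 1, 2, 3]
        print(all_the_rest)  # [[12, 13, 14, 15], [16, 17, 18, 19]]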

    Read the article

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either on EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning: typedef union { int intval; float floatval; } IntFloat; unsigned int float_as_int(float fval) { IntFloat intf; intf.floatval = fval; return intf.intval; } // Stores an int of data in whatever storage we're using void StoreInt(unsigned int data, unsigned int address); void StoreFPVal(float data, unsigned int address) { StoreInt(float_as_int(data), address); } I also include default values as an array of compile time constants. For (unsigned) integer values this is trivial, I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array: import struct hex(struct.unpack("I", struct.pack("f", float_value))[0]) ...and so my array of defaults has these indecipherable values like: const unsigned int DEFAULTS[] = { 0x00000001, // Some default integer value, 1 0x3C83126F, // Some default float value, 0.005 } (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It'd be great to be able to do something like: const unsigned int DEFAULTS[] = { 0x00000001, // Some default integer value, 1 COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005 } ...but I'm completely at a loss, and I don't even know if such a thing is possible. Notes Obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation defined behaviour is fine, undefined behaviour is not (I have the IDB appendix sitting in front of me). As far as I'm aware, this needs to be a compile time conversion, since DEFAULTS is in the global scope. Please correct me if I'm wrong about this.
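
    Building on the Python snippet in the question, the whole DEFAULTS table can at least be generated mechanically, comments included; a sketch, assuming a little-endian target layout (the names and values below are just the ones from the example):

        import struct

        # Hypothetical (comment, value) pairs; floats get punned to
        # their IEEE 754 single-precision bit patterns.
        defaults = [
            ("Some default integer value", 1),
            ("Some default float value", 0.005),
        ]

        for comment, value in defaults:
            if isinstance(value, float):
                word = struct.unpack("<I", struct.pack("<f", value))[0]
            else:
                word = value
            print(f"    0x{word:08X}, // {comment}, {value}")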

    Read the article

  • C++ std::stringstream seemingly causes thread to hang or die under SunOS

    - by stretch
    I have an application developed under Linux with GCC 4.2 which makes quite heavy use of stringstreams to wrap and unwrap data being sent over the wire. (Because the Grid API I'm using demands it.) Under Linux everything is fine, but when I deploy to SunOS (v5.10 running SPARC) and compile with GCC 3.4.6, the app hangs when it reaches the point at which stringstreams are used. In more detail: The main thread accepts requests from clients and starts a new pthread to handle each request. The child thread uses stringstreams to pack data. When the child thread gets to that point it seems to hang for a second and then die. The main thread is unaffected. Are there any known issues with stringstream and GCC 3.4.6 or SunOS or SPARCs? I didn't find anything yet... Can anyone suggest a better way to pack and unpack large amounts of data as strings or byte streams? Apologies for not posting code, but this to me seems more involved than a simple syntax error. All the same, the thread crashes: std::stringstream mystringstream; //not here mystringstream << "some data: "; //but here That is, I can declare the stringstream, but when I try to use it something goes wrong.

    Read the article

  • JBoss 6 unpacks jars from WEB-INF/lib of war

    - by Maxym
    When I start JBoss 6 I see that it unpacks all jar files from WEB-INF/lib into the tmp/vfs/automountXXX folder. E.g. jackrabbit-server.war contains the library asm-3.1.jar; in the tmp folder I then see the following folders with files: asm-3.1.jar-83dc35ead0d41d41/asm-3.1.jar asm-3.1.jar-2a48f1c13ec7f25d/contents/"unpacked asm-3.1.jar" It does not take files from my.ear/lib, only WEB-INF/lib... Why is that? And is there any way to prevent it from doing so? It just slows down application server startup (and shutdown), which is not that comfortable during development... In case it is somehow related to the JavaEE 6 specification and ejb-jars, which can now be located in WEB-INF/lib: I don't have such libraries in my war files... UPDATE: actually when I repack jackrabbit-server.war to jackrabbit-server.ear which contains jackrabbit-server.war and move all its libraries to jackrabbit-server.ear/lib, then I still see two folders in tmp: asm-3.1.jar-215a36131ebb088e/asm-3.1.jar asm-3.1.jar-14695f157664f00/contents/ but in this case the last folder is empty. So it still creates two folders, but does not unpack my library. Also, I use exploded deployment, so the question is only about jar files, not unpacking ear/war.

    Read the article

  • How to use R's ellipsis feature when writing your own function?

    - by Ryan Thompson
    The R language has a nifty feature for defining functions that can take a variable number of arguments. For example, the function data.frame takes any number of arguments, and each argument becomes the data for a column in the resulting data table. Example usage: > data.frame(letters=c("a", "b", "c"), numbers=c(1,2,3), notes=c("do", "re", "mi")) letters numbers notes 1 a 1 do 2 b 2 re 3 c 3 mi The function's signature includes an ellipsis, like this: function (..., row.names = NULL, check.rows = FALSE, check.names = TRUE, stringsAsFactors = default.stringsAsFactors()) { [FUNCTION DEFINITION HERE] } I would like to write a function that does something similar, taking multiple values and consolidating them into a single return value (as well as doing some other processing). In order to do this, I need to figure out how to "unpack" the ... from the function's arguments within the function. I don't know how to do this. The relevant line in the function definition of data.frame is object <- as.list(substitute(list(...)))[-1L], which I can't make any sense of. So how can I convert the ellipsis from the function's signature into, for example, a list? To be more specific, how can I write get_list_from_ellipsis in the code below? my_ellipsis_function <- function(...) { input_list <- get_list_from_ellipsis(...) output_list <- lapply(X=input_list, FUN=do_something_interesting) return(output_list) } my_ellipsis_function(a=1:10,b=11:20,c=21:30)
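
    (In R itself, list(...) inside the function body collects the dots into a named list, so get_list_from_ellipsis can be exactly that. For comparison, a sketch of the analogous variadic pattern in Python, with a hypothetical stand-in for do_something_interesting:)

        def my_ellipsis_function(**kwargs):
            # Named variadic arguments arrive as a dict, the Python
            # counterpart of collecting R's ... with list(...).
            def do_something_interesting(values):   # stand-in for illustration
                return [x * 10 for x in values]
            return {name: do_something_interesting(v) for name, v in kwargs.items()}

        print(my_ellipsis_function(a=[1, 2, 3], b=[11, 12, 13]))
        # {'a': [10, 20, 30], 'b': [110, 120, 130]}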

    Read the article

  • Python: Created nested dictionary from list of paths

    - by sberry2A
    I have a list of tuples that looks similar to this (simplified here, there are over 14,000 of these tuples with more complicated paths than Obj.part) [ (Obj1.part1, {<SPEC>}), (Obj1.partN, {<SPEC>}), (ObjK.partN, {<SPEC>}) ] Where Obj goes from 1 - 1000, part from 0 - 2000. These "keys" all have a dictionary of specs associated with them which acts as a lookup reference for inspecting another binary file. The specs dict contains information such as the bit offset, bit size, and C type of the data pointed to by the path ObjK.partN. For example: Obj4.part500 might have this spec, {'size':32, 'offset':128, 'type':'int'} which would let me know that to access Obj4.part500 in the binary file I must unpack 32 bits from offset 128. So, now I want to take my list of strings and create a nested dictionary which in the simplified case will look like this data = { 'Obj1' : {'part1':{spec}, 'partN':{spec} }, 'ObjK' : {'part1':{spec}, 'partN':{spec} } } To do this I am currently doing two things, 1. I am using a dotdict class to be able to use dot notation for dictionary get / set. That class looks like this: class dotdict(dict): def __getattr__(self, attr): return self.get(attr, None) __setattr__ = dict.__setitem__ __delattr__ = dict.__delitem__ The method for creating the nested "dotdict"s looks like this: def addPath(self, spec, parts, base): if len(parts) > 1: item = base.setdefault(parts[0], dotdict()) self.addPath(spec, parts[1:], item) else: item = base.setdefault(parts[0], spec) return base Then I just do something like: self.lookup = dotdict() for path, spec in paths: self.addPath(spec, path.split("."), self.lookup) So, in the end self.lookup.Obj4.part500 points to the spec. Is there a better (more pythonic) way to do this?
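
    One possible alternative, sketched here with plain dicts (swap in dotdict where attribute access is wanted): walk each dotted path with functools.reduce and setdefault instead of explicit recursion.

        from functools import reduce

        def build_lookup(paths):
            lookup = {}
            for path, spec in paths:
                *parents, leaf = path.split(".")
                # Descend, creating intermediate dicts as needed,
                # then attach the spec at the leaf.
                node = reduce(lambda d, k: d.setdefault(k, {}), parents, lookup)
                node[leaf] = spec
            return lookup

        paths = [("Obj4.part500", {"size": 32, "offset": 128, "type": "int"})]
        lookup = build_lookup(paths)
        print(lookup["Obj4"]["part500"]["offset"])  # 128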

    Read the article

  • Getting the data inside the C# web service from Jsonified string

    - by gnomixa
    In my JS I use jQuery's $.ajax function to call the web service and send the JSONified data to it. The data is in the following format: var countries = { "1A": { id: "1A", name: "Andorra" }, "2B": { id: "2B", name: "Belgium" }, ..etc }; var jsonData = JSON.stringify(countries); //then $.ajax is called and it makes the call to the C# web service On the C# side the web service needs to unpack this data; currently it comes in as a string[][] type. How do I convert it to a format where I can refer to the properties such as .id and .name, assuming I have a class called Sample with these properties? Thanks! EDIT: Here is my JS code: var jsonData = JSON.stringify(countries); $.ajax({ type: 'POST', url: 'http://localhost/MyService.asmx/Foo', contentType: 'application/json; charset=utf-8', data: jsonData, success: function (msg) { alert(msg.d); }, error: function (xhr, status) { switch (status) { case 404: alert('File not found'); break; case 500: alert('Server error'); break; case 0: alert('Request aborted'); break; default: alert('Unknown error ' + status); } } }); inside the C# web service I have: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Services; using System.Data; using System.Collections; using System.IO; using System.Web.Script.Services; [WebMethod] [ScriptMethod] public string Foo(IDictionary<string, Country> countries) { return "success"; }

    Read the article

  • Convert a raw string to an array of big-endian words with Ruby

    - by Zag zag..
    Hello, I would like to convert a raw string to an array of big-endian words. As an example, here is a JavaScript function that does it well (by Paul Johnston): /* * Convert a raw string to an array of big-endian words * Characters >255 have their high-byte silently ignored. */ function rstr2binb(input) { var output = Array(input.length >> 2); for(var i = 0; i < output.length; i++) output[i] = 0; for(var i = 0; i < input.length * 8; i += 8) output[i>>5] |= (input.charCodeAt(i / 8) & 0xFF) << (24 - i % 32); return output; } I believe the Ruby equivalent can be String#unpack(format). However, I don't know what the correct format parameter should be. Thank you for any help. Regards
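
    (The Ruby directive for 32-bit big-endian unsigned words is "N", so str.unpack("N*") is the likely answer when the string length is a multiple of 4. A sketch of the same unpacking in Python, NUL-padding the tail the way the JS function effectively does:)

        import struct

        def rstr2binb(data):
            # Pad with NULs to a 4-byte boundary, then read the bytes
            # as 32-bit big-endian unsigned words.
            padded = data + b"\x00" * (-len(data) % 4)
            return list(struct.unpack(">%dI" % (len(padded) // 4), padded))

        print(rstr2binb(b"abcd"))  # [1633837924] == [0x61626364]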

    Read the article

  • Haskell lazy I/O and closing files

    - by Jesse
    I've written a small Haskell program to print the MD5 checksums of all files in the current directory (searched recursively). Basically a Haskell version of md5deep. All is fine and dandy except if the current directory has a very large number of files, in which case I get an error like: <program>: <currentFile>: openBinaryFile: resource exhausted (Too many open files) It seems Haskell's laziness is causing it not to close files, even after its corresponding line of output has been completed. The relevant code is below. The function of interest is getList. import qualified Data.ByteString.Lazy as BS main :: IO () main = putStr . unlines =<< getList "." getList :: FilePath -> IO [String] getList p = let getFileLine path = liftM (\c -> (hex $ hash $ BS.unpack c) ++ " " ++ path) (BS.readFile path) in mapM getFileLine =<< getRecursiveContents p hex :: [Word8] -> String hex = concatMap (\x -> printf "%0.2x" (toInteger x)) getRecursiveContents :: FilePath -> IO [FilePath] -- ^ Just gets the paths to all the files in the given directory. Are there any ideas on how I could solve this problem? The entire program is available here: http://haskell.pastebin.com/PAZm0Dcb

    Read the article
