Search Results

Search found 1098 results on 44 pages for 'jan stolarek'.


  • Linux configurations that would affect Java memory usage?

    - by wmacura
    Hi. Background: I have a set of Java background workers that I start as part of my webapp. I develop locally on Ubuntu 10.10 and deploy to an Ubuntu 10.04 LTS server (a Media Temple (ve) instance). Both run the same JVM: Sun JVM 1.6.0_22-b04. As part of the initialization script, each worker is started with explicit Xmx, Xms, and XX:MaxPermGen settings. Yet somehow all 10 workers use 250MB locally, while on the server they use more than 2.7GB. I don't know how to begin to track this down. I thought the Ubuntu (and thus kernel) version might make a difference, but I tried an old 10.04 VM and it behaves as expected. I've noticed that the machine never seems to use memory for buffers or cache (according to htop), which seems a bit strange, but perhaps normal for a server? Some info:
      (server) root@devel:/app/axir/target# uname -a
      Linux devel 2.6.18-028stab069.5 #1 SMP Tue May 18 17:26:16 MSD 2010 x86_64 GNU/Linux
      (local) wiktor@beastie:~$ uname -a
      Linux beastie 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux
    Comparing ps output (ps -eo "ppid,pid,cmd,rss,sz,vsz"):
      PPID  PID   CMD                         RSS     SZ      VSZ
      (local)
      1588  1615  java -cp axir-distribution. 25484   234382  937528
      1615  1631  java -cp /home/wiktor/Code/ 83472   163059  652236
      1615  1657  java -cp /home/wiktor/Code/ 70624   89135   356540
      1615  1658  java -cp /home/wiktor/Code/ 37652   77625   310500
      1615  1669  java -cp /home/wiktor/Code/ 38096   77733   310932
      1615  1675  java -cp /home/wiktor/Code/ 37420   61395   245580
      1615  1684  java -cp /home/wiktor/Code/ 38000   77736   310944
      1615  1703  java -cp /home/wiktor/Code/ 39180   78060   312240
      1615  1712  java -cp /home/wiktor/Code/ 38488   93882   375528
      1615  1719  java -cp /home/wiktor/Code/ 38312   77874   311496
      1615  1726  java -cp /home/wiktor/Code/ 38656   77958   311832
      1615  1727  java -cp /home/wiktor/Code/ 78016   89429   357716
      (server)
      22522 23560 java -cp axir-distribution. 24860   285196  1140784
      23560 23585 java -cp /app/axir/target/a 100764  161629  646516
      23560 23667 java -cp /app/axir/target/a 72408   92682   370728
      23560 23670 java -cp /app/axir/target/a 39948   97671   390684
      23560 23674 java -cp /app/axir/target/a 40140   81586   326344
      23560 23739 java -cp /app/axir/target/a 39688   81542   326168
    They look very similar. In fact, the question now is why the virtual memory usage on the server (3.2GB when added up) more closely reflects the 2.4GB of memory in use (according to free), yet locally the virtual memory adds up to a much more substantial 4.7GB while only ~250MB is actually used. It seems that perhaps memory isn't being shared as aggressively (if that's even possible). Thank you for your help, Wiktor
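
    Worth noting when comparing these numbers: summing per-process RSS over-counts pages shared between processes (such as the JVM's mapped libraries), and VSZ includes reserved but untouched address space, so neither total will match free exactly. As a minimal diagnostic sketch (not from the original post, and assuming a Linux ps), the manual comparison above can be automated like this:

      # Sketch: total RSS/VSZ across all java processes, mirroring the manual
      # "ps -eo ppid,pid,cmd,rss,sz,vsz" comparison above. ps reports kilobytes.
      import subprocess

      def java_memory_totals():
          lines = subprocess.check_output(
              ["ps", "-eo", "ppid,pid,rss,vsz,comm"], text=True
          ).splitlines()[1:]  # skip the header row
          rss_total = vsz_total = 0
          for line in lines:
              ppid, pid, rss, vsz, comm = line.split(None, 4)
              if comm == "java":
                  rss_total += int(rss)
                  vsz_total += int(vsz)
          return rss_total, vsz_total

      rss, vsz = java_memory_totals()
      print("java RSS: %.1f MB, VSZ: %.1f MB" % (rss / 1024.0, vsz / 1024.0))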

    Read the article

  • Passing a panel widget to a function

    - by user2939801
    I have written a quite complex script which calls a server handler to refresh elements in a grid at the press of a button. For code re-use and consistent behaviour, I want to call that server handler directly during the initial painting of the grid. When the server handler is called by clicking on the button, all expected widgets are available and can be queried with e.parameter.widget etc. When I call the function directly and pass it the panel variable, the value of e is just 'AbsolutePanel'. Is there some way I can emulate the addCallbackElement way of passing the entire panel and all the widgets it contains to the function? Or a way of automatically firing a server handler on script start? Please forgive any syntax errors below; I have pruned 500 lines of code down to the pertinent bits! Thanks, Tony
      function doGet() {
        var app = UiApp.createApplication();
        var mainPanel = app.createAbsolutePanel();
        var monthsAbbr = ['Jan.', 'Feb.', 'Mar.', 'Apr.', 'May.', 'Jun.', 'Jul.', 'Aug.', 'Sep.', 'Oct.', 'Nov.', 'Dec.'];
        var Dates = Array();
        var period = 5;
        var dateHidden = Array();
        var dayOfMonth = new Date(((period * 28) + 15887) * 86400000);
        var dateString = '';
        var dayOfWeek = 0;
        for (var i = 0; i < 84; i++) {
          dateString = dayOfMonth.getDate() + ' ' + monthsAbbr[dayOfMonth.getMonth()] + ' ' + (dayOfMonth.getFullYear() - 2000);
          Dates[i] = dateString;
          dateHidden[i] = app.createHidden('dates' + i, dateString).setId('dates' + i);
          mainPanel.add(dateHidden[i]);
          dayOfMonth = new Date(dayOfMonth.getTime() + 86400000);
        }
        var buttonReset = app.createButton('Reset').setId('buttonReset');
        var handlerChange = app.createServerHandler('myHandlerChange');
        handlerChange.addCallbackElement(mainPanel);
        mainPanel.add(buttonReset.addChangeHandler(handlerChange));
        app.add(mainPanel);
        myHandlerChange(mainPanel);
        return app;
      }
      function myHandlerChange(e) {
        var app = UiApp.getActiveApplication();
        Logger.log('Here are the widgets passed into the function: ' + Utilities.jsonStringify(e));
        return app;
      }

    Read the article

  • How to install FFmpeg in cPanel

    - by Ajay Chthri
    I'm using a dedicated Linux server, so I need to install FFmpeg through cPanel. In cPanel, FFmpeg is listed under Main >> Software >> Install a Perl Module, but I am writing my script in PHP, so how can I install FFmpeg for PHP rather than Perl? When I try to install FFmpeg as a Perl module I get this response:
      Checking C compiler....C compiler (/usr/bin/cc) OK (cached Tue Jan 17 19:16:31 2012)....Done
      CPAN fallback is disabled since /var/cpanel/conserve_memory exists, and cpanm is available.
      Method: Using Perl Expect, Installer: cpanm
      You have make /usr/bin/make
      Falling back to HTTP::Tiny 0.009
      You have /bin/tar: tar (GNU tar) 1.15.1
      You have /usr/bin/unzip
      You have Cpanel::HttpRequest 2.1
      Testing connection speed...(using fast method)...Done
      Ping:2 (ticks) Testing connection speed to cpan.knowledgematters.net using pureperl...(28800.00 bytes/s)...Done
      Ping:2 (ticks) Testing connection speed to cpan.develooper.com using pureperl...(22233.33 bytes/s)...Done
      Ping:2 (ticks) Testing connection speed to cpan.schatt.com using pureperl...(32750.00 bytes/s)...Done
      Ping:3 (ticks) Testing connection speed to cpan.mirror.facebook.net using pureperl...(14050.00 bytes/s)...Done
      Ping:2 (ticks) Testing connection speed to cpan.mirrors.hoobly.com using pureperl...(5150.00 bytes/s)...Done
      Five usable mirrors located
      Ping:0 (ticks) Testing connection speed to 208.109.109.239 using pureperl...(28950.00 bytes/s)...Done
      Ping:2 (ticks) Testing connection speed to 208.82.118.100 using pureperl...(19300.00 bytes/s)...Done
      Ping:1 (ticks) Testing connection speed to 69.50.192.73 using pureperl...(19300.00 bytes/s)...Done
      Three usable fallback mirrors located
      Mirror Check passed for cpan.schatt.com (/index.html)
      Searching on cpanmetadb ...
      Fetching http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release (connected:0).......(request attempt 1/12)...Using dns cache file /root/.HttpRequest/cpanmetadb.cpanel.net......searching for mirrors (mirror search attempt 1/3)......5 usable mirrors located. (less then expected)......mirror search success......connecting to 208.74.123.82...@208.74.123.82......connected......receiving...100%......request success......Done
      Searching Video::FFmpeg on cpanmetadb (http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release) ...
      Fetching http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release (connected:1).......(request attempt 1/12)[email protected]%......request success......Done
      Source: fastest CPAN mirror ...
      --> Working on Video::FFmpeg
      Fetching http://cpan.schatt.com//authors/id/R/RA/RANDOMMAN/Video-FFmpeg-0.47.tar.gz ...
      Fetching http://cpan.schatt.com/authors/id/R/RA/RANDOMMAN/Video-FFmpeg-0.47.tar.gz (connected:1).......(request attempt 1/12)...Resolving cpan.schatt.com...(resolve attempt 1/65)......connecting to 66.249.128.125...@66.249.128.125......connected......receiving...25%...50%...75%...100%......request success......Done
      OK
      Unpacking Video-FFmpeg-0.47.tar.gz
      Video-FFmpeg-0.47/
      Video-FFmpeg-0.47/Changes
      Video-FFmpeg-0.47/FFmpeg.xs
      Video-FFmpeg-0.47/MANIFEST
      Video-FFmpeg-0.47/META.yml
      Video-FFmpeg-0.47/Makefile.PL
      Video-FFmpeg-0.47/README
      Video-FFmpeg-0.47/lib/
      Video-FFmpeg-0.47/lib/Video/
      Video-FFmpeg-0.47/lib/Video/FFmpeg/
      Video-FFmpeg-0.47/lib/Video/FFmpeg/AVFormat.pm
      Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/
      Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Audio.pm
      Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Subtitle.pm
      Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Video.pm
      Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream.pm
      Video-FFmpeg-0.47/lib/Video/FFmpeg.pm
      Video-FFmpeg-0.47/ppport.h
      Video-FFmpeg-0.47/t/
      Video-FFmpeg-0.47/t/Video-FFmpeg.t
      Video-FFmpeg-0.47/test
      Video-FFmpeg-0.47/test.mp4
      Video-FFmpeg-0.47/typemap
      Entering Video-FFmpeg-0.47
      Checking configure dependencies from META.yml
      META.yml not found or unparsable. Fetching META.yml from search.cpan.org
      Fetching http://search.cpan.org/meta/Video-FFmpeg-0.47/META.yml (connected:1).......(request attempt 1/12)...Resolving search.cpan.org...(resolve attempt 1/65)......connecting to 199.15.176.161...@199.15.176.161......connected......receiving...100%......request success......Done
      Configuring Video-FFmpeg-0.47 ... Running Makefile.PL
      Perl v5.10.0 required--this is only v5.8.8, stopped at Makefile.PL line 1.
      BEGIN failed--compilation aborted at Makefile.PL line 1. N/A
      ! Configure failed for Video-FFmpeg-0.47. See /home/.cpanm/build.log for details.
      Perl Expect failed with non-zero exit status: 256
      All available perl module install methods have failed
    Can anyone guide me on how to install FFmpeg for use from PHP in cPanel? Thanks in advance.

    Read the article

  • FFmpeg extract clip - stream frame rate differs from container frame rate (x264, aac)

    - by fideli
    Summary: H.264 video seems to have a really high frame rate that requires a scaling factor to be applied to the duration of video that I'm trying to extract (900x lower). Body: I'm trying to extract a clip from a movie that I have in MP4 format (created using HandBrake). After trying mencoder and VLC, I decided to give FFmpeg a shot since it was the least troublesome when it came to copying the codecs. That is, compared to mencoder and VLC, the resulting file was still playable in QuickTime (I know about Perian, etc.; I'm just trying to learn how all this works). Anyway, my command was as follows:
      ffmpeg -ss 01:15:51 -t 00:05:59 -i outofsight.mp4 \
        -acodec copy -vcodec copy clip.mp4
    During the copy, the following comes up:
      Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)
      Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'outofsight.mp4':
        Duration: 01:57:42.10, start: 0.000000, bitrate: 830 kb/s
        Stream #0.0(und): Video: h264, yuv420p, 720x384, 25 tbr, 22500 tbn, 45k tbc
        Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16
      Output #0, mp4, to 'out.mp4':
        Stream #0.0(und): Video: libx264, yuv420p, 720x384, q=2-31, 90k tbn, 22500 tbc
        Stream #0.1(eng): Audio: libfaac, 48000 Hz, stereo, s16
      Stream mapping:
        Stream #0.0 -> #0.0
        Stream #0.1 -> #0.1
      Press [q] to stop encoding
      frame= 2591 fps=2349 q=-1.0 size= 8144kB time=101.60 bitrate= 656.7kbits/s …
    Instead of a 5:59 duration clip, I get the entire rest of the movie. So, to test this, I ran the ffmpeg command with -t 00:00:01. What I got was exactly a 15:00 minute clip. So I did some black-box engineering and decided to scale my -t option by calculating what value to enter, given that 1 second was interpreted as 900 s. For my desired 359 s clip, I calculated 0.399 s, and so my ffmpeg command became:
      ffmpeg -ss 01:15:51 -t 00:00:00.399 -i outofsight.mp4 \
        -acodec copy -vcodec copy clip.mp4
    This works, but I have no idea why the duration is scaled by 900. Investigating further, each ffmpeg run has the line:
      Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)
    45000/25 = 1800. There must be a relation somewhere. Somehow, the obscenely high frame rate is causing issues with the timing. How is that frame rate so high? The best part about this is that the resulting clip.mp4 has the exact same feature (due to the copied video codec), and taking further clips from it needs the same scaling for the -t duration option. Therefore, I've made it available for anyone willing to check this out. Appendix: the preamble for ffmpeg on my system (built using the MacPorts ffmpeg port):
      FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
        configuration: --prefix=/opt/local --disable-vhook --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-avfilter-lavf --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libdirac --enable-libschroedinger --enable-libfaac --enable-libfaad --enable-libxvid --enable-libx264 --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/gcc-4.2 --arch=x86_64
        libavutil   49.15. 0 / 49.15. 0
        libavcodec  52.20. 0 / 52.20. 0
        libavformat 52.31. 0 / 52.31. 0
        libavdevice 52. 1. 0 / 52. 1. 0
        libavfilter  1. 4. 0 /  1. 4. 0
        libswscale   1. 7. 1 /  1. 7. 1
        libpostproc 51. 2. 0 / 51. 2. 0
        built on Jan 4 2010 21:51:51, gcc: 4.2.1 (Apple Inc. build 5646) (dot 1)
    EDIT: Not sure whether it was a bug or not, but it seems to be fixed now in my current version of ffmpeg, at least for this video (version 0.6.1 from MacPorts).
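
    The 900x factor is specific to this file, but the workaround above is easy to miscalculate by hand. A small helper sketch (mine, not from the post; the SCALE constant is the empirical 900 observed here, not a general ffmpeg rule):

      # Scale a desired clip length down by the observed factor before passing
      # it to -t, then run the same stream-copy command as in the post.
      import subprocess

      SCALE = 900.0  # observed: -t 00:00:01 produced a 15-minute (900 s) clip

      def extract_clip(src, dst, start, desired_seconds):
          scaled = desired_seconds / SCALE  # e.g. 359 s -> 0.399 s
          subprocess.check_call([
              "ffmpeg", "-ss", start, "-t", "%.3f" % scaled,
              "-i", src, "-acodec", "copy", "-vcodec", "copy", dst,
          ])

      extract_clip("outofsight.mp4", "clip.mp4", "01:15:51", 359)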

    Read the article

  • Robotic Arm – Hardware

    - by Szymon Kobalczyk
    This is the first in a series of articles about a project I've been building in my spare time since last summer. It all began when I was researching the topic of modeling human motion kinematics in order to create a gesture recognition library for Kinect. This ties heavily into the motion theory of robotic manipulators, so I also glanced at some designs of robotic arms. Somehow I stumbled upon a cool-looking open source robotic arm: it was featured on Thingiverse and published by user jjshortcut (Jan-Jaap). Since I had been hooked for some time on toying with microcontrollers, robots and other electronics, I decided to give it a try and build it myself. In this post I will describe the hardware build of the arm, and in later posts I will be writing about the software to control it. Another reason to build the arm myself was the cost factor. Even small commercial robotic arms are quite expensive – products from Lynxmotion and Dagu look great, but both cost around USD $300 (actually there is one cheap arm available, but it looks more like a toy to me). In comparison this design is quite cheap. It uses seven hobby-grade servos, and even the cheapest ones should work fine. The structure is built from a set of laser-cut parts connected with a few metal spacers (15mm and 47mm) and lots of M3 screws. Other than that you'd only need a microcontroller board to drive the servos. So in total it comes out a lot cheaper to build it yourself than to buy an off-the-shelf robotic arm. Oh, and if you don't like this one, there are a few more robotic arm projects at Thingiverse (including one by oomlout).
    Laser-cut parts
    Some time ago I built another robot using laser-cut parts, so I knew the process already. You can grab the design files in both DXF and EPS format from Thingiverse, and there are also 3D models of each part in STL. Actually, the design is split into a second project for the mini servo gripper (there is also a standard-servo version available, but it won't fit this arm). I wanted to make some small adjustments to the layout and add measurements to the parts before sending them for cutting. I looked at some free 2D CAD programs and finally did all this work using QCad 3 Beta, which worked great for me (I also tried LibreCAD but it didn't work that well). All parts are cut from 4 mm thick material. Because I was worried that acrylic is too fragile and might break, I also ordered another set cut from plywood. In the end I built it from plywood because it was easier to glue (I was told acrylic requires a special glue). By the way, I found a great laser cutter service in Kraków and highly recommend it (www.ebbox.com.pl). It cost me only USD $26 for both sets ($16 acrylic + $10 plywood).
    Metal parts
    I bought all the M3 screws and nuts at a local hardware store. Make sure to look for nylon lock (nyloc) nuts for the gripper, because otherwise it unscrews and comes apart quickly. I couldn't find a local store with metal spacers and had to order them online (you'd need 11 x 47mm and 3 x 15mm). I think I paid less than USD $10 for all the metal parts.
    Servos
    This arm uses five standard-size servos to drive the arm itself, and two micro servos on the gripper. The author of the project used the Modelcraft RS-2 servo and the Modelcraft ES-05 HT servo. I had two Futaba S3001 servos lying around, and ordered additional TowerPro SG-5010 standard-size servos and TowerPro SG90 micro servos. However, it turned out that the SG90 wouldn't fit in the gripper, so I had to replace it with a slightly smaller E-Sky EK2-0508 micro servo. Later it also turned out that the Futaba servos make some strange noise while working, so I swapped one with a TowerPro SG-5010, which has higher torque (8 kg/cm). I also bought three servo extension cables. All the servos cost me USD $45.
    Assembly
    The build process is not difficult, but you need to think carefully about the order of assembly. You can do the base and upper arm first. Because the two servos in the base are close together, you need to put the first one in with one piece of the lower arm already connected before you put in the second servo. Then you connect the upper arm and finally add the second piece of the lower arm to hold it together. The gripper and base require some gluing, so think those through too. Make sure to look closely at all the photos on Thingiverse (including other people's copies) and read the additional posts on jjshortcut's blog: "My mini servo grippers and completed robotic arm" and "Multiply the robotic arm and electronics". There is also Rob's copy cut from aluminum. My assembled arm looks like this – I think it turned out really nice:
    Servo controller board
    The last piece of hardware I needed was an electronic board that would take commands from a PC and drive all seven servos. I could probably have used an Arduino for this task, and in fact there are several Arduino servo shields available (for example from Adafruit or Renbotics). However, one problem is that most support only up to six servos, and another is that their accuracy is limited by the Arduino's timer frequency. So instead I looked for a dedicated servo controller and found the series of Maestro boards from Pololu. I picked the Pololu Mini Maestro 12-Channel USB Servo Controller. It has many nice features, including a native USB connection, high-resolution pulses (0.25µs) with no jitter, built-in speed and acceleration control, and even scripting capability. Another cool feature is that besides servo control, each channel can be configured as either a general input or output. So far I'm using seven channels, so I still have five available to connect some sensors (for example, a distance sensor mounted on the gripper might be useful). And the last but important factor was that they have an SDK in .NET – what more could I wish for! The board itself is very small – half the size of a Tic-Tac box. I picked one up for about USD $35 in this store. Perhaps another good alternative would be the Phidgets Advanced Servo 8-Motor – but it is significantly more expensive at USD $87.30. The Maestro Controller Driver and Software package includes the Maestro Control Center program, which lets you immediately configure the board. For each servo I first figured out its range of motion and set the min/max limits. I played with setting the speed and acceleration values as well. A big issue for me was that there are two servos that control the position of the lower arm (shoulder joint), and both have to be moved at the same time. This is where the scripting feature of the Pololu board turned out to be very helpful. I wrote a script that synchronizes the position of the second servo with the first one – so now I only need to move one servo and the other follows automatically. This turned out to be tricky because I couldn't find a simple offset mapping of the move range for each servo – I had to divide it into several sub-ranges and map each individually. The scripting language is a bit assembler-like but gets the job done. And there is even runtime debugging and a stack view available. Altogether I'm very happy with the Pololu Mini Maestro Servo Controller, and with this final piece I completed the build and was able to move my arm from the Maestro Control Center program.
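
    As a rough illustration of the shoulder-sync idea, the same mirroring could also be done from the PC over the Maestro's serial "compact protocol" instead of an on-board script. This is my sketch, not the author's code: the port name and channel numbers are assumptions, and the single linear mapping stands in for the piecewise sub-ranges described above.

      # Mirror one shoulder servo onto the other using the Maestro compact
      # protocol "set target" command: 0x84, channel, low 7 bits, high 7 bits.
      # Targets are in quarter-microseconds (6000 = 1500 us, centered).
      import serial

      PORT = "/dev/ttyACM0"    # Maestro command port (assumed)
      MASTER, FOLLOWER = 0, 1  # the two shoulder channels (assumed)
      CENTER = 6000

      def set_target(link, channel, quarter_us):
          link.write(bytes([0x84, channel, quarter_us & 0x7F, (quarter_us >> 7) & 0x7F]))

      def move_shoulder(link, quarter_us):
          set_target(link, MASTER, quarter_us)
          # The opposed servo travels the other way around the center position.
          set_target(link, FOLLOWER, 2 * CENTER - quarter_us)

      with serial.Serial(PORT, 9600) as link:
          move_shoulder(link, 7000)
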
    The total cost of my robotic arm was:
      $10  laser-cut parts
      $10  metal parts
      $45  servos
      $35  servo controller
      -----------------------
      $100 total
    So here you have all the information about the hardware. In the next post I'll start talking about the software, which I wrote in Microsoft Robotics Developer Studio 4. Stay tuned!

    Read the article

  • Crash in audio resampler with some audio rates - FFmpeg PHP (Solved!)

    - by Olaf Erlandsen
    I have a problem with this command (FFmpeg PHP):
      ffmpeg -i 62f76f050494f0ed6a5997967c00c0c0.wmv -ss 0 -t 99 -y -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 -f flv 62f76f050494f0ed6a5997967c00c0c0.flv
    Output:
      FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers
        built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6)
        configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
        libavutil   50.15. 1 / 50.15. 1
        libavcodec  52.72. 2 / 52.72. 2
        libavformat 52.64. 2 / 52.64. 2
        libavdevice 52. 2. 0 / 52. 2. 0
        libavfilter  1.19. 0 /  1.19. 0
        libswscale   0.11. 0 /  0.11. 0
        libpostproc 51. 2. 0 / 51. 2. 0
      [asf @ 0xe81670]max_analyze_duration reached
      Input #0, asf, from '/var/www/resources/tmp/62f76f050494f0ed6a5997967c00c0c0.wmv':
        Metadata:
          WMFSDKVersion : 12.0.7601.17514
          WMFSDKNeeded : 0.0.0.0000
          IsVBR : 0
        Duration: 00:00:50.87, bitrate: 2467 kb/s
        Stream #0.0: Audio: wmapro, 44100 Hz, stereo, flt, 256 kb/s
        Stream #0.1: Video: vc1, yuv420p, 950x460 [PAR 1:1 DAR 95:46], 25 fps, 25 tbr, 1k tbn, 25 tbc
      Output #0, flv, to '/var/www/resources/media/62f76f050494f0ed6a5997967c00c0c0.flv':
        Metadata:
          encoder : Lavf52.64.2
        Stream #0.0: Video: flv, yuv420p, 950x460 [PAR 1:1 DAR 95:46], q=2-31, 200 kb/s, 1k tbn, 29.97 tbc
        Stream #0.1: Audio: libmp3lame, 11025 Hz, stereo, s16, 64 kb/s
      Stream mapping:
        Stream #0.1 -> #0.0
        Stream #0.0 -> #0.1
      Press [q] to stop encoding
      frame= 72 fps= 0 q=5.0 size= 0kB time=10.91 bitrate= 0.0kbits/s
      Multiple frames in a packet from stream 0
      Warning, using s16 intermediate sample format for resampling
      frame= 141 fps=139 q=5.0 size= 103kB time=8.15 bitrate= 103.2kbits/s
      frame= 220 fps=144 q=5.0 size= 875kB time=10.92 bitrate= 656.6kbits/s
      frame= 290 fps=143 q=5.0 size= 1525kB time=13.74 bitrate= 909.1kbits/s
      frame= 356 fps=141 q=5.0 size= 2153kB time=15.99 bitrate=1103.1kbits/s
      frame= 427 fps=141 q=5.0 size= 2847kB time=18.70 bitrate=1247.0kbits/s
      frame= 497 fps=141 q=5.0 size= 3771kB time=21.16 bitrate=1460.0kbits/s
      frame= 575 fps=142 q=5.0 size= 4695kB time=24.61 bitrate=1563.0kbits/s
      frame= 639 fps=141 q=5.0 size= 5301kB time=26.80 bitrate=1620.2kbits/s
      frame= 703 fps=139 q=5.0 size= 5829kB time=29.36 bitrate=1626.2kbits/s
      frame= 774 fps=139 q=5.0 size= 6659kB time=32.39 bitrate=1684.0kbits/s
      frame= 842 fps=139 q=5.0 size= 7915kB time=35.27 bitrate=1838.6kbits/s
      frame= 911 fps=139 q=5.0 size= 9011kB time=37.98 bitrate=1943.4kbits/s
      frame= 975 fps=138 q=5.0 size= 9788kB time=40.59 bitrate=1975.3kbits/s
      frame= 1041 fps=138 q=5.0 size= 10904kB time=43.83 bitrate=2037.9kbits/s
      frame= 1115 fps=138 q=5.0 size= 11795kB time=46.24 bitrate=2089.8kbits/s
      frame= 1183 fps=138 q=5.0 size= 12678kB time=48.74 bitrate=2130.7kbits/s
      frame= 1247 fps=137 q=5.0 size= 13964kB time=51.36 bitrate=2227.5kbits/s
      frame= 1271 fps=136 q=5.0 Lsize= 15865kB time=58.86 bitrate=2208.1kbits/s
      video:15366kB audio:462kB global headers:0kB muxing overhead 0.238956%
    Problem:
      Warning, using s16 intermediate sample format for resampling
    I've also tried changing the parameter from -ar 44100 to -ar 11025. Thanks!
    Solution: read this link: http://en.wikipedia.org/wiki/MP3#Bit_rate
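
    For context on the linked fix: MP3 only supports a fixed set of sample rates, and each bitrate is only valid for certain rate families, which is why -ar values have to be chosen from that set; the s16 warning itself just reports that this old resampler converts the float (flt) WMA audio through a 16-bit intermediate format. A tiny sketch (mine, not from the post) that clamps a requested rate to the nearest MP3-legal one:

      # MP3 sample rates: MPEG-1 (32/44.1/48 kHz), MPEG-2 (16/22.05/24 kHz),
      # MPEG-2.5 (8/11.025/12 kHz). Clamp a requested -ar to the nearest one.
      MP3_RATES = [8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000]

      def nearest_mp3_rate(requested):
          return min(MP3_RATES, key=lambda r: abs(r - requested))

      print(nearest_mp3_rate(44100))  # -> 44100 (already legal)
      print(nearest_mp3_rate(11000))  # -> 11025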

    Read the article

  • Using JQuery tabs in an HTML 5 page

    - by nikolaosk
    In this post I will show you how to create a simple tabbed interface using jQuery, HTML 5 and CSS. Make sure you have downloaded the latest version of jQuery (the minified version) from http://jquery.com/download. Please find here all my posts regarding jQuery. Also have a look at my posts regarding HTML 5. To be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools. Another excellent resource is HTML5 Doctor. Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if a browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr. In this hands-on example I will be using Expression Web 4.0. This application is not free. You can use any HTML editor you like, for example Visual Studio 2012 Express edition, which you can download here. Let me move on to the actual example. This is the sample HTML 5 page:
      <!DOCTYPE html>
      <html lang="en">
      <head>
        <title>Liverpool Legends</title>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
        <link rel="stylesheet" type="text/css" href="style.css">
        <script type="text/javascript" src="jquery-1.8.2.min.js"></script>
        <script type="text/javascript" src="tabs.js"></script>
      </head>
      <body>
        <header>
          <h1>Liverpool Legends</h1>
        </header>
        <section id="tabs">
          <ul>
            <li><a href="#first-tab">Defenders</a></li>
            <li><a href="#second-tab">Midfielders</a></li>
            <li><a href="#third-tab">Strikers</a></li>
          </ul>
          <div id="first-tab">
            <h3>Liverpool Defenders</h3>
            <p>The best defenders that played for Liverpool are Jamie Carragher, Sami Hyypia, Ron Yeats and Alan Hansen.</p>
          </div>
          <div id="second-tab">
            <h3>Liverpool Midfielders</h3>
            <p>The best midfielders that played for Liverpool are Kenny Dalglish, John Barnes, Ian Callaghan, Steven Gerrard and Jan Molby.</p>
          </div>
          <div id="third-tab">
            <h3>Liverpool Strikers</h3>
            <p>The best strikers that played for Liverpool are Ian Rush, Roger Hunt, Robbie Fowler and Fernando Torres.<br/></p>
          </div>
        </section>
        <footer>
          <p>All Rights Reserved</p>
        </footer>
      </body>
      </html>
    This is very simple HTML markup. I have styled this markup using CSS. The contents of the style.css file follow:
      * { margin: 0; padding: 0; }
      header { font-family: Tahoma; font-size: 1.3em; color: #505050; text-align: center; }
      #tabs { font-size: 0.9em; margin: 20px 0; }
      #tabs ul { float: left; background: #777; width: 260px; padding-top: 24px; }
      #tabs li { margin-left: 8px; list-style: none; }
      * html #tabs li { display: inline; }
      #tabs li, #tabs li a { float: left; }
      #tabs ul li.active { border-top: 2px red solid; background: #15ADFF; }
      #tabs ul li.active a { color: #333333; }
      #tabs div { background: #15ADFF; clear: both; padding: 15px; min-height: 200px; }
      #tabs div h3 { margin-bottom: 12px; }
      #tabs div p { line-height: 26px; }
      #tabs ul li a { text-decoration: none; padding: 8px; color: #0b2f20; font-weight: bold; }
      footer { background-color: #999; width: 100%; text-align: center; font-size: 1.1em; color: #002233; }
    These are straightforward rules that style the various elements in the HTML 5 file. The jQuery code lives inside the tabs.js file:
      $(document).ready(function() {
        $('#tabs div').hide();
        $('#tabs div:first').show();
        $('#tabs ul li:first').addClass('active');
        $('#tabs ul li a').click(function() {
          $('#tabs ul li').removeClass('active');
          $(this).parent().addClass('active');
          var currentTab = $(this).attr('href');
          $('#tabs div').hide();
          $(currentTab).show();
          return false;
        });
      });
    I am using some of the most commonly used jQuery functions, like hide, show, addClass and removeClass. I hide and show the tab panels as a tab becomes the active tab. When I view my page I get the expected tabbed result. Hope it helps!!!!!

    Read the article

  • Are Chromebooks the New Netbooks, and What Does That Mean?

    - by Chris Hoffman
    Netbooks — small, cheap, slow laptops — were once very popular. They fell out of favor — people bought them because they seemed cheap and portable, but the actual experience was lackluster. Most netbooks now sit unused. Windows netbooks have vanished from stores today, but there’s a new super-cheap laptop — the Chromebook. Chromebook sales numbers are impressive, but their usage statistics tell a different story. Are Chromebooks just the new netbook?
    The Problem With Netbooks
    Netbooks seemed appealing, especially in an age before tablets and lightweight ultrabooks. You could buy a netbook for $200 or so and have a portable device that let you get on the Internet. The name “netbook” spelled that out — it was a portable device for getting on the ‘net. They weren’t really that great. The original netbook was a lightweight Asus Eee PC that ran Linux alone and had a small amount of fast flash storage. Netbooks eventually ran heavier Windows XP operating systems — Windows Vista was out, but it was just too bloated to run on netbooks. Manufacturers added slow magnetic hard drives, bloatware, and even DVD drives! They couldn’t run most Windows software very well. The build quality was poor and their keyboards were tiny and cramped. People liked the idea of a lightweight device that let them get on the Internet and loved the cheap price, but the actual experience wasn’t great.
    Chromebook Sales
    Chromebook sales numbers seem surprisingly high. NPD reported that Chromebooks were 21% of all notebooks sold in the US in 2013. If you combine laptop and tablet sales into a single statistic, Chromebooks were 9.6% of all those devices sold. That’s 2/3 as many Chromebooks sold as iPads in the US! Of Amazon’s best-selling laptop computers, two of the top three are Chromebooks. These definitely look like successful products. Unlike netbooks, Chromebooks are taking off in a big way in the education market. Many schools are buying Chromebooks for their students instead of more expensive Windows laptops. They’re easier to manage and lock down than Windows laptops, but — more importantly for cash-strapped schools — they’re very cheap. Netbooks never had this sort of momentum in schools.
    Chromebook Usage Statistics
    Here’s where the rosy picture of Chromebooks starts to become more realistic. StatCounter’s browser usage statistics show how widely used different operating systems are. For example, Windows 7 has the highest share with 35.71% of web activity in April, 2014. The chart doesn’t even show Chrome OS at all, although there is an “Other” number near the bottom. Click the Download Data link to download a CSV file and we can view more detailed information. Chrome OS only accounted for 0.38% of web usage in April, 2014. Desktop Linux, which people often shrug at, accounted for 1.52% in the same month. To its credit, Chrome OS usage has increased. Chromebooks were widely mocked back in November, 2013 when the sales numbers came out. After all, they only accounted for 0.11% of web usage globally in November, 2013! But Chrome OS numbers have been improving:
      Nov, 2013: 0.11%
      Dec, 2013: 0.22%
      Jan, 2014: 0.31%
      Feb, 2014: 0.35%
      Mar, 2014: 0.36%
      Apr, 2014: 0.38%
    Chrome OS is climbing, but it’s definitely still in the “Other” category. It isn’t as high as we’d expect to see it with those types of sales numbers.
    Chromebooks vs. Netbooks
    Chromebooks are more limited devices than traditional PCs. You can do quite a few things, but you have to do it all using Chrome or Chrome apps. Most people won’t be enabling developer mode and installing a Linux desktop. You don’t have access to the powerful desktop software available for Windows and even Mac OS X. On the other hand, these Chromebooks are less compromised than netbooks in many ways. They come with a lightweight operating system designed for portable, mobile devices. They don’t come packed with any bloatware, like the bloatware you’ll find on competing Windows PCs and the original netbooks. They’re cheaper because the manufacturer doesn’t have to pay for a Windows license. There’s no need for antivirus software weighing the operating system down. They’re larger than the original netbooks, with many of them being 11.6-inches instead of the original 8-inch bodies many older netbooks came with. They have larger, more comfortable keyboards and fast solid-state storage. Really, Chromebooks are what netbooks wanted to be. People didn’t buy netbooks to use typical Windows software — they just wanted a lightweight PC. Of course, for many people, the real successor to netbooks is tablets. If all you want is a portable device to throw in a bag so you can get online, maybe a tablet is better.
    Where Does This Leave Chromebooks?
    So, are Chromebooks the new netbooks? It’s a bit early to answer that question. Chromebooks are definitely not out of the competition — their sales look good and their usage share is increasing. On the other hand, Chrome OS is still pretty far behind. They’re not catching fire like tablets did. Maybe netbooks were just before their time and Chromebooks were what they were always meant to be. Just as Microsoft’s Windows XP tablets failed, Windows XP netbooks also failed. Tablets took off with a more refined operating system on better hardware years later. “Netbooks” — or Chromebooks — are now taking off with a more purpose-built operating system on better hardware, too. It’s hard to count Chromebooks out because they provide a much better experience than netbooks ever did. If you’re one of the people who wants to use old Windows desktop apps on your portable laptop, you may think netbooks were better — but most people don’t want that. But maybe people either want a full desktop PC experience or a full mobile tablet experience. Is there a place for a laptop with a keyboard that can only view websites? We’ll have to wait and see.
    Image Credit: Kevin Jarret on Flickr, Clive Darra on Flickr, Sean Freese on Flickr

    Read the article

  • Do email forms need to be sanitized before sending?

    - by levi
    I have a client that keeps getting reports from GoDaddy's "websiteprotection.com" stating that the website is insecure:
      Your website contains pages that do not properly sanitize visitor-provided input to make sure it contains no malicious content or scripts. Cross-site scripting vulnerabilities let malicious users execute arbitrary HTML or script code in another visitor's browser.
      Output: The request string used to detect this flaw was: /cross_site_scripting.nasl.asp
      The output was:
      HTTP/1.1 404 Not Found\r
      Date: Wed, 21 Mar 2012 08:12:02 GMT\r
      Server: Apache\r
      X-Pingback: http://CLIENTSWEBSITE.com/xmlrpc.php\r
      Expires: Wed, 11 Jan 1984 05:00:00 GMT\r
      Cache-Control: no-cache, must-revalidate, max-age=0\r
      Pragma: no-cache\r
      Set-Cookie: PHPSESSID=1jsnhuflvd59nb4trtquston50; path=/\r
      Last-Modified: Wed, 21 Mar 2012 08:12:02 GMT\r
      Keep-Alive: timeout=15, max=100\r
      Connection: Keep-Alive\r
      Transfer-Encoding: chunked\r
      Content-Type: text/html; charset=UTF-8\r
      \r
      <div id="contact-form" class="widget"><form action="http://CLIENTSWEBSITE.com/<script>cross_site_scripting.nasl</script>.asp" id="contactForm" method="post">
    It looks like it has an issue with the contact form. All the contact form does is post an ajax request to the same page, and then a PHP script mails the data (no database involved). Are there any security issues here? Any ideas on how I can satisfy the security scanner? Here are the form and the script:
      <form action="<?php echo $this->getCurrentUrl(); ?>" id="contactForm" method="post">
        <input type="text" name="Name" id="Name" value="" class="txt requiredField name" />
        //Some more text inputs
        <input type="hidden" name="sendadd" id="sendadd" value="<?php echo $emailadd ; ?>" />
        <input type="hidden" name="submitted" id="submitted" value="true" />
        <input class="submit" type="submit" value="Send" />
      </form>
      // Some initial JS validation; if that passes, an ajax post is made to the script below
      //If the form is submitted
      if (isset($_POST['submitted'])) {
        //Check captcha
        if (isset($_POST["captchaPrefix"])) {
          $capt = new ReallySimpleCaptcha();
          $correct = $capt->check( $_POST["captchaPrefix"], $_POST["Captcha"] );
          if( ! $correct ) {
            echo false;
            die();
          } else {
            $capt->remove( $_POST["captchaPrefix"] );
          }
        }
        $dateon = $_POST["dateon"];
        $ToEmail = $_POST["sendadd"];
        $EmailSubject = 'Contact Form Submission from ' . get_bloginfo('title');
        $mailheader = "From: ".$_POST["Email"]."\r\n";
        $mailheader .= "Reply-To: ".$_POST["Email"]."\r\n";
        $mailheader .= "Content-type: text/html; charset=iso-8859-1\r\n";
        $MESSAGE_BODY = "Name: ".$_POST["Name"]."<br>";
        $MESSAGE_BODY .= "Email Address: ".$_POST["Email"]."<br>";
        $MESSAGE_BODY .= "Phone: ".$_POST["Phone"]."<br>";
        if ($dateon == "on") { $MESSAGE_BODY .= "Date: ".$_POST["Date"]."<br>"; }
        $MESSAGE_BODY .= "Message: ".$_POST["Comments"]."<br>";
        mail($ToEmail, $EmailSubject, $MESSAGE_BODY, $mailheader) or die ("Failure");
        echo true;
        die();
      }
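
    One concrete risk in the script above, separate from the scanner's XSS complaint, is email header injection: $_POST["Email"] is concatenated straight into the From: and Reply-To: headers, so a value containing CR/LF can append arbitrary headers. A minimal sketch of the usual defense (shown in Python, since the examples on this page span several languages; the same check is easy to write in PHP):

      # Reject any user-supplied value bound for a mail header if it contains
      # CR or LF, which is what lets an attacker smuggle in extra headers.
      def safe_header_value(value):
          if "\r" in value or "\n" in value:
              raise ValueError("possible header injection attempt")
          return value.strip()

      email = "visitor@example.com"  # would come from the form field
      mailheader = "From: %s\r\nReply-To: %s\r\n" % (
          safe_header_value(email), safe_header_value(email))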

    Read the article

  • Why does my laptop send ARP requests to itself?

    - by user58859
    I have just started to learn about protocols. While studying packets in Wireshark, I came across an ARP request sent by my machine to my own IP. Here are the details of the packet:
      No. Time     Source            Destination Protocol Info
      15  1.463563 IntelCor_aa:aa:aa Broadcast   ARP      Who has 192.168.1.34? Tell 0.0.0.0
      Frame 15: 42 bytes on wire (336 bits), 42 bytes captured (336 bits)
        Arrival Time: Jan 7, 2011 18:51:43.886089000 India Standard Time
        Epoch Time: 1294406503.886089000 seconds
        [Time delta from previous captured frame: 0.123389000 seconds]
        [Time delta from previous displayed frame: 0.123389000 seconds]
        [Time since reference or first frame: 1.463563000 seconds]
        Frame Number: 15
        Frame Length: 42 bytes (336 bits)
        Capture Length: 42 bytes (336 bits)
        [Frame is marked: False]
        [Frame is ignored: False]
        [Protocols in frame: eth:arp]
        [Coloring Rule Name: ARP]
        [Coloring Rule String: arp]
      Ethernet II, Src: IntelCor_aa:aa:aa (aa:aa:aa:aa:aa:aa), Dst: Broadcast (ff:ff:ff:ff:ff:ff)
        Destination: Broadcast (ff:ff:ff:ff:ff:ff)
          Address: Broadcast (ff:ff:ff:ff:ff:ff)
          .... ...1 .... .... .... .... = IG bit: Group address (multicast/broadcast)
          .... ..1. .... .... .... .... = LG bit: Locally administered address (this is NOT the factory default)
        Source: IntelCor_aa:aa:aa (aa:aa:aa:aa:aa:aa)
          Address: IntelCor_aa:aa:aa (aa:aa:aa:aa:aa:aa)
          .... ...0 .... .... .... .... = IG bit: Individual address (unicast)
          .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
        Type: ARP (0x0806)
      Address Resolution Protocol (request)
        Hardware type: Ethernet (0x0001)
        Protocol type: IP (0x0800)
        Hardware size: 6
        Protocol size: 4
        Opcode: request (0x0001)
        [Is gratuitous: False]
        Sender MAC address: IntelCor_aa:aa:aa (aa:aa:aa:aa:aa:aa)
        Sender IP address: 0.0.0.0 (0.0.0.0)
        Target MAC address: 00:00:00_00:00:00 (00:00:00:00:00:00)
        Target IP address: 192.168.1.34 (192.168.1.34)
    Here the sender's MAC address is mine (I have hidden my real MAC address), and the target IP is mine. Why is my machine sending an ARP request to itself? I found three packets of this type, and there was no ARP reply to any of them. Can anybody explain why? (My operating system is Windows 7. I am directly connected to a Wi-Fi modem. I got these packets as soon as I started my connection.) I also want one suggestion: in many places I have read that RFCs are enough to study protocols. I studied RFC 826 on ARP, and I personally feel it is not enough at all. Any suggestions regarding this? Is there more than one RFC for a protocol? I want to study protocols in great detail. Can anybody guide me? Thanks in advance.
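
    For reference, a packet of this exact shape (ARP request, sender IP 0.0.0.0, broadcast destination) is what RFC 5227 calls an ARP probe — a host checking whether an address is already in use before claiming it. A hedged sketch of how such a packet could be generated with scapy for comparison against the capture (the interface name and addresses are placeholders):

      # Build and send an ARP probe: opcode 1 (request), sender IP left as
      # 0.0.0.0, target hardware address zeroed, broadcast at layer 2.
      from scapy.all import ARP, Ether, sendp

      probe = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(
          op=1,                       # request
          psrc="0.0.0.0",             # sender IP unset, as in the capture
          pdst="192.168.1.34",        # the address being probed
          hwdst="00:00:00:00:00:00",
      )
      sendp(probe, iface="eth0")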

    Read the article

  • Different Not Automatically Implies Better

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/11/05/154556.aspx
    Recently I was digging deeper into why a WCF-hosted workflow application consumed quite a lot of memory, although it basically only loaded a xaml workflow. The first tool of choice is Process Explorer, or even better Process Hacker (it has more options, and its best feature, copy & paste, actually works). The three most important numbers of a process with regard to memory are Working Set, Private Working Set and Private Bytes:
      Working Set is the currently consumed physical memory (parts can be shared between processes, e.g. loaded dlls which are read-only).
      Private Working Set is the physical memory needed by this process which is not shareable.
      Private Bytes is the non-shareable memory which is only visible in the current process (e.g. all new, malloc and VirtualAlloc calls create private bytes).
    When you have a bigger workflow, it can easily consume 500MB under 64 bit for a 1-2 MB xaml file. This does not look very scalable. Under 64 bit the issue is excessive private bytes consumption and not the managed heap. The picture is quite different for 32 bit, which looks a bit strange, but it seems that the hosted VB compiler is a lot less memory hungry under 32 bit. I tried to repro the issue with a medium-sized xaml file (400KB) which contains 1000 variables and 1000 ifs, which can be represented by C# code like this:
      string Var1;
      string Var2;
      ...
      string Var1000;
      if (!String.IsNullOrEmpty(Var1)) { Console.WriteLine("Var1"); }
      if (!String.IsNullOrEmpty(Var2)) { Console.WriteLine("Var2"); }
      ...
    Since WF is based on VB.NET expressions, you are bound to the hosted VB.NET compiler, which results in (x64) 140 MB of private bytes, which is ca. 140 KB for each if clause — quite a lot if you think about the actually present functionality. But there is hope. .NET 4.5 now allows C# expressions for WF, which is a major step forward for all C# lovers. I created a simple patcher to "cross compile" my xaml to C# expressions. Let's look at the result (memory comparison of C# expressions vs. VB expressions, x86):
    On my home machine I have only 32 bit, which gives you quite exactly half of the memory consumption under 64 bit. C# expressions are 10 times more memory hungry than VB.NET expressions! I wanted to do more with less memory, but instead it consumed a magnitude more memory. That is surprising, to say the least. The workflow initializes in about the same time under x64 and x86, where the VB code does it in 2s whereas the C# version needs 18s — also nearly ten times slower. That is too high a price to pay for any bigger-sized xaml workflow to convert from VB.NET to C# expressions. If I reduce the number of expressions to 500, it needs 400MB, which is about half of the memory. It seems that the cost per if rises linearly with the total number of expressions in a xaml workflow.
      Expression Language  Cost per IF  Startup Time
      C# 1000 Ifs x64      1.5 MB       18s
      C# 500 Ifs x64       750 KB       9s
      VB 1000 Ifs x64      140 KB       2s
      VB 500 Ifs x64       70 KB        1s
    Now we can directly compare two MS implementations. It is clear that the VB.NET compiler uses the same underlying structure, but it has a much higher offset compared to the highly inefficient C# expression compiler. I have filed a Connect bug here with harsher wording about recent advances in memory consumption. The funniest thing is that one MS employee gave an Azure AppFabric demo around early 2011 which was so slow that he needed to investigate with xperf. He was after startup time, and the call stacks with regard to VB.NET expression compilation were remarkably similar. In fact I only found this post by googling for parts of my call stacks. … "C# expressions will be coming soon to WF, and that will have different performance characteristics than VB" … What did he know in Jan 2011 that I did not know until today? ;-) He knew that C# expressions would come, and that they would not automatically have a better footprint. It is about time to fix that. In its current state, C# expressions are not usable for bigger workflows. That also explains the headline for today. You can cheat startup time by pre-starting workflows so that the demo looks nice and snappy, but it hurts scalability a lot, since you need much more memory than necessary. I found the stacks by enabling virtual allocation tracking within xperf, which is still the best tool out there. But first you need to look at your process to check where the memory is hiding. For the C# expression compiler you do not need xperf: you can directly dump the managed heap and check with a profiler of your choice. But if the allocations are happening on Private Data (VirtualAlloc), you can find them with xperf. There is a nice video on Channel 9 explaining VirtualAlloc tracking in greater detail. If your data allocations are on the Heap, it means that the C/C++ runtime created a heap for you from which all malloc and new calls allocate. You can enable heap tracing with xperf with full call stack support as well, which is also shown on Channel 9. Or you can use WPRUI directly. To make "Heap Usage" work you need to set the tracing flags for your executable (before you start it). For example, for devenv.exe:
      HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe
      DWORD TracingFlags = 1
    Do not forget to disable it after you have completed profiling the process, or it will impact startup time quite a lot. With xperf you can attach directly to a running process and collect heap allocation information from a process gone wild — very handy if you need to find out what a process was doing once it has arrived in a funny state. "VirtualAlloc usage" works without explicitly enabling anything for a specific process and is always on machine-wide. I had issues on my Windows 7 machines with call stack collection and the latest Windows 8.1 Performance Toolkit. I was told that WPA from Windows 8.0 should work fine, but I do not want to downgrade.
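
    Since the synthetic workload is easy to regenerate, here is a throwaway sketch of a generator for the benchmark body described above (my own illustration; the original patcher worked on xaml, and the variable count, names and output handling here are arbitrary):

      # Emit N string declarations plus N guarded Console.WriteLine checks,
      # matching the synthetic 1000-variable / 1000-if test case above.
      N = 1000

      decls = "\n".join("string Var%d;" % i for i in range(1, N + 1))
      checks = "\n".join(
          'if (!String.IsNullOrEmpty(Var%d)) { Console.WriteLine("Var%d"); }' % (i, i)
          for i in range(1, N + 1)
      )
      print(decls)
      print(checks)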

    Read the article

  • Installing multiple PHP versions plus extensions on FreeBSD

    - by jgtumusiime
    I'm currently learning how to work with FreeBSD. Lately I have been trying to run multiple PHP versions along with their respective packages, but I keep running into issues during installation. The default location for my PHP installation is /usr/local/etc/, but I want to install PHP 5.2, 5.3 and 5.4 in /usr/local/etc/php52, /usr/local/etc/php53 and /usr/local/etc/php54 respectively. Using ports, I achieved this simply by doing:
      cd /usr/ports/lang/php5x && make PREFIX="/usr/local/etc/php5x" install clean
    The problem now is: how do I do the same for the extensions of all my PHP versions? When I try installing php-extensions like so:
      cd /usr/ports/lang/php5x-extensions && make PREFIX="/usr/local/etc/php5x/lib/php" install clean
    I get this error:
      ===> PHPizing for php53-bcmath-5.3.17
      env: /usr/local/bin/phpize: No such file or directory
      *** Error code 127
      Stop in /usr/ports/math/php53-bcmath.
      *** Error code 1
      Stop in /usr/ports/lang/php53-extensions.
    My phpize is located in /usr/local/etc/php5x/bin/phpize, so how do I get make (or whatever) to look for phpize in the right path? Is there a cleaner, maybe simpler, way of maintaining multiple PHP installations? I need to achieve this because of compatibility issues in some legacy code that runs on 5.2 and breaks on 5.3. Thank you.
    =================
    So I successfully installed and configured a FreeBSD jail, and I would like to install software within it, but I cannot connect to the network. Here is my rc.conf:
      jail_enable="YES"                           # Set to NO to disable starting of any jails
      jail_list="mambo2"                          # Space separated list of names of jails
      jail_mambo2_rootdir="/usr/jails/j01"        # jail's root directory
      jail_mambo2_hostname="mambo2.ug"            # jail's hostname
      jail_mambo2_ip="192.168.100.174"            # jail's IP address
      jail_mambo2_devfs_enable="YES"              # mount devfs in the jail
      jail_mambo2_devfs_ruleset="mambo2_ruleset"  # devfs ruleset to apply to jail
    Here is my jail's ifconfig output:
      mambo2# ifconfig
      rl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8<VLAN_MTU>
        ether 00:c1:28:00:48:db
        media: Ethernet autoselect (100baseTX <full-duplex>)
        status: active
      plip0: flags=108810<POINTOPOINT,SIMPLEX,MULTICAST,NEEDSGIANT> metric 0 mtu 1500
      lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
      mambo2#
    I created an /etc/resolv.conf for nameservers:
      mambo2# cat /etc/resolv.conf
      nameserver 192.168.100.251
      nameserver 8.8.8.8
      mambo2#
    Here is a list of running jails:
      [root@mambo /usr/home/jtumusiime]# jls
      JID IP Address      Hostname  Path
      5   192.168.100.174 mambo2.ug /usr/jails/j01
    My host has 4 IP addresses, 3 public and one private: 192.168.100.173. I tried creating a jail using ezjail and this did not work out:
      [root@mambo /usr/home/jtumusiime]# ezjail-admin update -p -i
      Error: Cannot find your copy of the FreeBSD source tree in .
      Consider using 'ezjail-admin install' to create the base jail from an ftp server.
    I have an updated copy of the FreeBSD 7.1 source in /usr/src/, and I did make buildworld while building the first jail, mambo2. Here is an excerpt of the output of ezjail-admin install:
      ...
      221 Goodbye.
      Trying 193.162.146.4...
      Connected to ftp.freebsd.org.
      220 ftp.beastie.tdk.net FTP server (Version 6.00LS) ready.
      331 Guest login ok, send your email address as password.
      230 Guest login ok, access restrictions apply.
      Remote system type is UNIX.
      Using binary mode to transfer files.
      200 Type set to I.
      550 pub/FreeBSD-Archive/old-releases/i386/7.1-RELEASE/base: No such file or directory.
      221 Goodbye.
      Could not fetch base from ftp.freebsd.org. Maybe your release (7.1-RELEASE) is specified incorrectly or the host ftp.freebsd.org does not provide that release build. Use the -r option to specify an existing release or the -h option to specify an alternative ftp server.
      Querying your ftp-server... The ftp server you specified (ftp.freebsd.org) seems to provide the following builds:
      Trying 193.162.146.4...
      total 10
      drwxrwxr-x 13 1006 1006  512 Feb 20  2011 8.2-RELEASE
      drwxrwxr-x 13 1006 1006  512 Apr 10  2012 8.3-RELEASE
      lrwxr-xr-x  1 1006 1006   16 Jan  7  2012 9.0-RELEASE -> i386/9.0-RELEASE
      drwxrwxr-x  7 1006 1006 1024 Feb 19  2012 ISO-IMAGES
      -rw-rw-r--  1 1006 1006  637 Nov 23  2005 README.TXT
      drwxrwxr-x  5 1006 1006  512 Nov  2 02:59 i386
    I do not want to upgrade my FreeBSD installation. I have googled around, but all in vain.

    Read the article

  • Windows DNS Server 2008 R2 fallaciously returns SERVFAIL

    - by Easter Sunshine
    I have a Windows 2008 R2 domain controller which is also a DNS server. When resolving certain TLDs, it returns a SERVFAIL:
      $ dig bogus.
      ; <<>> DiG 9.8.1 <<>> bogus.
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31919
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
      ;; QUESTION SECTION:
      ;bogus. IN A
    I get the same result for a real TLD like com. when querying the DC as shown above. Compare to a BIND server that is working as expected:
      $ dig bogus. @128.59.59.70
      ; <<>> DiG 9.8.1 <<>> bogus. @128.59.59.70
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 30141
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
      ;; QUESTION SECTION:
      ;bogus. IN A
      ;; AUTHORITY SECTION:
      . 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012012501 1800 900 604800 86400
      ;; Query time: 18 msec
      ;; SERVER: 128.59.59.70#53(128.59.59.70)
      ;; WHEN: Wed Jan 25 14:09:14 2012
      ;; MSG SIZE rcvd: 98
    Similarly, when I query my Windows DNS server with dig . any, I get a SERVFAIL, but the BIND servers return the root zone as expected. This sounds similar to the issue described in http://support.microsoft.com/kb/968372, except I am using two forwarders (128.59.59.70 from above as well as 128.59.62.10) and falling back to root hints, so the preconditions to expose the issue are not the same. Nevertheless, I also applied the MaxCacheTTL registry fix as described and restarted DNS and then the whole server, but the problem persists. The problem occurs on all domain controllers in this domain and has been occurring for half a year, even though the servers get automatic Windows updates.
    EDIT: Here is a debug log. The client is 160.39.114.110, which is my workstation.
      1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Rcv 160.39.114.110 2e94 Q [0001 D NOERROR] A (5)bogus(0)
      UDP question info at 000000001EA6BFD0
      Socket = 508
      Remote addr 160.39.114.110, port 49710
      Time Query=1077016, Queued=0, Expire=0
      Buf length = 0x0fa0 (4000)
      Msg length = 0x0017 (23)
      Message:
        XID 0x2e94
        Flags 0x0100
          QR 0 (QUESTION)
          OPCODE 0 (QUERY)
          AA 0
          TC 0
          RD 1
          RA 0
          Z 0
          CD 0
          AD 0
          RCODE 0 (NOERROR)
        QCOUNT 1
        ACOUNT 0
        NSCOUNT 0
        ARCOUNT 0
        QUESTION SECTION:
          Offset = 0x000c, RR count = 0
          Name "(5)bogus(0)"
          QTYPE A (1)
          QCLASS 1
        ANSWER SECTION: empty
        AUTHORITY SECTION: empty
        ADDITIONAL SECTION: empty
      1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Snd 160.39.114.110 2e94 R Q [8281 DR SERVFAIL] A (5)bogus(0)
      UDP response info at 000000001EA6BFD0
      Socket = 508
      Remote addr 160.39.114.110, port 49710
      Time Query=1077016, Queued=0, Expire=0
      Buf length = 0x0fa0 (4000)
      Msg length = 0x0017 (23)
      Message:
        XID 0x2e94
        Flags 0x8182
          QR 1 (RESPONSE)
          OPCODE 0 (QUERY)
          AA 0
          TC 0
          RD 1
          RA 1
          Z 0
          CD 0
          AD 0
          RCODE 2 (SERVFAIL)
        QCOUNT 1
        ACOUNT 0
        NSCOUNT 0
        ARCOUNT 0
        QUESTION SECTION:
          Offset = 0x000c, RR count = 0
          Name "(5)bogus(0)"
          QTYPE A (1)
          QCLASS 1
        ANSWER SECTION: empty
        AUTHORITY SECTION: empty
        ADDITIONAL SECTION: empty
    Every option in the debug log box was checked except "filter by IP". By contrast, when I query, say, accounts.google.com, I can see the DNS server go out to its forwarder (128.59.59.70, for example). In this case, I didn't see any packets going out from my DNS server even though bogus. was not in the cache (the debug log was already running, and this was the first time I queried this server for bogus. or any TLD). It just returned SERVFAIL without consulting any other DNS server, as in the Microsoft KB article linked above.
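
    A scripted version of the same cross-check (my sketch, using dnspython rather than dig; the DC address below is a placeholder): ask each server for a bogus TLD and report whether it answers NXDOMAIN, as the BIND servers do, or SERVFAIL, which dnspython surfaces as NoNameservers once every listed server has failed.

      # Compare how two resolvers answer a query for a nonexistent TLD.
      import dns.resolver

      for server in ["192.0.2.1", "128.59.59.70"]:  # DC placeholder, then BIND
          r = dns.resolver.Resolver(configure=False)
          r.nameservers = [server]
          try:
              r.resolve("bogus.", "A")
          except dns.resolver.NXDOMAIN:
              print(server, "-> NXDOMAIN (expected)")
          except dns.resolver.NoNameservers:
              print(server, "-> SERVFAIL (the broken behaviour)")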

    Read the article

  • SSH is not working .. Password prompt is not coming

    - by Sumanth Lingappa
    I am not able to SSH into my ubuntu server since yesterday. I am not using any keyless or public key method.. Its simple SSH with username and password everytime.. However I can do a VNC session running on my ubuntu server.. But I am afraid that if the vnc session goes out, I wont be having any way to login to the server.. My ssh-vvv output is as below.. sumanth@sumanth:~$ ssh -vvv user@serverIP OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 172.16.2.156 [172.16.2.156] port 22. debug1: Connection established. debug1: identity file /home/sumanth/.ssh/id_rsa type -1 debug1: identity file /home/sumanth/.ssh/id_rsa-cert type -1 debug1: identity file /home/sumanth/.ssh/id_dsa type -1 debug1: identity file /home/sumanth/.ssh/id_dsa-cert type -1 debug1: identity file /home/sumanth/.ssh/id_ecdsa type -1 debug1: identity file /home/sumanth/.ssh/id_ecdsa-cert type -1 debug1: identity file /home/sumanth/.ssh/id_ed25519 type -1 debug1: identity file /home/sumanth/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH_5* compat 0x0c000000 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "172.16.2.156" from file "/home/sumanth/.ssh/known_hosts" debug3: load_hostkeys: found key type ECDSA in file /home/sumanth/.ssh/known_hosts:5 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected],[email protected],[email protected],ssh-ed25519,ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: setup hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA ea:4e:15:52:15:dd:6b:09:d4:36:cb:14:2d:c3:1b:7a debug3: load_hostkeys: loading entries for host "172.16.2.156" from file "/home/sumanth/.ssh/known_hosts" debug3: load_hostkeys: found key type ECDSA in file /home/sumanth/.ssh/known_hosts:5 debug3: load_hostkeys: loaded 1 keys debug1: Host '172.16.2.156' is known and matches the ECDSA host key. debug1: Found key in /home/sumanth/.ssh/known_hosts:5 debug1: ssh_ecdsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/sumanth/.ssh/id_rsa ((nil)), debug2: key: /home/sumanth/.ssh/id_dsa ((nil)), debug2: key: /home/sumanth/.ssh/id_ecdsa ((nil)), debug2: key: /home/sumanth/.ssh/id_ed25519 ((nil)),
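    The pasted log is truncated just before the authentication phase, so it does not show why the password prompt never appears. A hedged checklist to run from the VNC session while it still works (these are suggestions, assuming stock Ubuntu paths):

        # Is sshd running and actually listening?
        sudo service ssh status
        sudo netstat -tlnp | grep :22

        # Does the server still permit password authentication?
        grep -Ei 'passwordauthentication|usepam|maxstartups' /etc/ssh/sshd_config

        # Watch the server's view of a login attempt
        sudo tail -f /var/log/auth.log

        # Or run a second sshd in debug mode on a spare port and connect to that
        sudo /usr/sbin/sshd -d -p 2222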

    Read the article

  • Linux Kernel crash mutex_lock_slowpath "blocked for more than 120 seconds". What to do?

    - by Roddick
    I have an out-of-the-box Debian Lenny with the non-custom kernel 2.6.26-2-amd64. It is a brand new server running at maybe 5% of its potential, CPU- and disk-wise, meaning it is probably not crashing because of overload. Every few days it freezes with hundreds of these messages in the console log:

        : [284847.828428] INFO: task apache2:12473 blocked for more than 120 seconds.
        : [284847.868468] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        : [284847.912759] apache2 D ffff8101bc6b7ab0 0 12473 14358
        : [284847.912763] ffff810160d5bc50 0000000000000082 ffff8101c0002e40 0000000000000000
        : [284847.912766] ffff8101a7c42950 ffff810327d92810 ffff8101a7c42bd8 0000000400000044
        : [284847.912770] ffff8101c0002e40 00000000000612d0 0000000000000000 00000040000612d0
        : [284847.912773] Call Trace:
        : [284847.912786] [<ffffffff80429b0d>] __mutex_lock_slowpath+0x64/0x9b
        : [284847.912790] [<ffffffff80429972>] mutex_lock+0xa/0xb
        : [284847.912794] [<ffffffff802a20b9>] do_lookup+0x82/0x1c1
        : [284847.912800] [<ffffffff802a4271>] __link_path_walk+0x87a/0xd19
        : [284847.912805] [<ffffffff80295844>] kmem_getpages+0x96/0x15f
        : [284847.912808] [<ffffffff80295fb7>] ____cache_alloc_node+0x6d/0x106
        : [284847.912814] [<ffffffff802a4756>] path_walk+0x46/0x8b
        : [284847.912819] [<ffffffff802a4a82>] do_path_lookup+0x158/0x1cf
        : [284847.912822] [<ffffffff802a3879>] getname+0x140/0x1a7
        : [284847.912827] [<ffffffff802a53f1>] __user_walk_fd+0x37/0x4c
        : [284847.912831] [<ffffffff8029e381>] vfs_lstat_fd+0x18/0x47
        : [284847.912840] [<ffffffff8029e3c9>] sys_newlstat+0x19/0x31
        : [284847.912848] [<ffffffff8020beda>] system_call_after_swapgs+0x8a/0x8f

    Almost all traces have __mutex_lock_slowpath at the top level. Only some show a different trace:

        : [284847.737386] INFO: task apache2:12472 blocked for more than 120 seconds.
        : [284847.777551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        : [284847.824881] apache2 D ffff8101bc6b7ab0 0 12472 14358
        : [284847.824886] ffff8101b9cc1c50 0000000000000086 ffffffffa0131e0a 0000000000000002
        : [284847.824889] ffff8102e7454300 ffff810324c6cad0 ffff8102e7454588 0000000000000000
        : [284847.824893] 0000000000000001 0000000000000296 0000000000000003 ffff8101b9cc1c58
        : [284847.824896] Call Trace:
        : [284847.828403] [<ffffffffa0131e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        : [284847.828412] [<ffffffff80429b0d>] __mutex_lock_slowpath+0x64/0x9b
        : [284847.828418] [<ffffffff80429972>] mutex_lock+0xa/0xb
        : [284847.828421] [<ffffffff802a20b9>] do_lookup+0x82/0x1c1
        : [284847.828427] [<ffffffff802a4271>] __link_path_walk+0x87a/0xd19
        : [284847.828428] [<ffffffff80271296>] find_lock_page+0x1f/0x8a
        : [284847.828428] [<ffffffff80273182>] filemap_fault+0x1c2/0x33c
        : [284847.828428] [<ffffffff802a4756>] path_walk+0x46/0x8b
        : [284847.828428] [<ffffffff802a4a82>] do_path_lookup+0x158/0x1cf
        : [284847.828428] [<ffffffff802a3879>] getname+0x140/0x1a7
        : [284847.828428] [<ffffffff802a53f1>] __user_walk_fd+0x37/0x4c
        : [284847.828428] [<ffffffff8029e381>] vfs_lstat_fd+0x18/0x47
        : [284847.828428] [<ffffffff8029e3c9>] sys_newlstat+0x19/0x31
        : [284847.828428] [<ffffffff8020beda>] system_call_after_swapgs+0x8a/0x8f

        kernel: [1912668.466347] INFO: task apache2:17984 blocked for more than 120 seconds.
        [1912668.507035] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        : [1912668.555165] apache2 D ffff8101c5637ba0 0 17984 17282
        : [1912668.596752] ffff810166a7dd30 0000000000000086 0000000000000000 ffff810166a7dcd8
        : [1912668.643341] ffff8101c563c880 ffff81024505f000 0000000000000002 ffff810166a7dd68
        : [1912668.699566] 0000000000000086 00000000000cb1a0 0000000000000000 ffff81017f344d60
        : [1912668.744773] Call Trace:
        : [1912668.761754] [<ffffffff8022a3ed>] pick_next_task_fair+0x6e/0x7a
        : [1912668.829311] [<ffffffff802be0e2>] bio_alloc_bioset+0x89/0xd9
        : [1912668.861930] [<ffffffff8024ac3a>] getnstimeofday+0x39/0x98
        : [1912668.897005] [<ffffffff802710f6>] sync_page+0x0/0x41
        : [1912668.927868] [<ffffffff80429487>] io_schedule+0x5c/0x9e
        : [1912668.960286] [<ffffffff80271132>] sync_page+0x3c/0x41
        : [1912668.991756] [<ffffffff804295fa>] __wait_on_bit_lock+0x36/0x66
        : [1912669.031757] [<ffffffff802710e3>] __lock_page+0x5e/0x64
        : [1912669.064191] [<ffffffff802461d3>] wake_bit_function+0x0/0x23
        : [1912669.100100] [<ffffffff80281bc5>] handle_mm_fault+0x5e4/0x8de
        : [1912669.134531] [<ffffffff802461a5>] autoremove_wake_function+0x0/0x2e
        : [1912669.174623] [<ffffffff802aa108>] fcntl_setlk+0x1cf/0x291
        : [1912669.210623] [<ffffffff802461a5>] autoremove_wake_function+0x0/0x2e
        : [1912669.246923] [<ffffffff802a677f>] sys_fcntl+0x280/0x2f7

    After googling for "mutex_lock_slowpath" I can only find kernel mailing list discussions saying this issue was introduced in some commit, without reference to a version; the discussions are as recent as Jan 25, 2011. The kernel I am using is from Debian Lenny, a year old. What should I do? Is this bug even fixed in the kernel? If it is such an obvious bug, why does it happen so rarely? Should I download the latest kernel from kernel.org and upgrade? Should I use Debian backports to install a newer "approved" kernel? Am I missing something?
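    Not part of the original question, but a hedged next step: these hung-task messages are warnings rather than panics, so the usual advice is to capture the blocked-task state when a freeze happens and to test a newer distribution kernel before anything else. The backports repository line below is an assumption about the standard Lenny setup of early 2011.

        # Show the hung-task detector setting currently in force
        sysctl kernel.hung_task_timeout_secs

        # When the freeze happens, dump all blocked tasks (requires sysrq to be enabled)
        echo w > /proc/sysrq-trigger
        dmesg | tail -n 200

        # Test a newer, supported kernel via lenny-backports instead of a raw kernel.org build
        echo 'deb http://backports.debian.org/debian-backports lenny-backports main' >> /etc/apt/sources.list
        apt-get update
        apt-get -t lenny-backports install linux-image-2.6-amd64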

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era Netware 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes:

        RAID 1: 2 x 17GB SCSI disks (Seagate ST318417W)
        RAID 5: 3 x 4GB SCSI disks (2 x Seagate ST34573W and 1 x ST34572W)

    We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that, so I need to keep this server running until at least November 2011.

    This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close timing of these failures I have serious doubts that I'll be able to avoid catastrophic failure through the November target as is, without restoring the RAID redundancy: it will only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact-match "spares" lying around for both drives, but the spares are in unknown condition. I tried just swapping them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable.

    As for the RAID controller itself, there is a utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot into Netware, at which point I can use CI/O Array Management Software Version 2.0 to look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone.

    Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good.

    As for finding reliable spares, I have no clue where to even begin looking for a new 4GB SCSI drive, or even which exact SCSI system I'm looking for, as it's gone through a few different iterations over time. Another option is to migrate this to a virtual machine (Hyper-V), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, so it requires lower-level knowledge of Netware and DOS than I ever developed or, if I did, have since forgotten (I'm not exactly a DOS neophyte, either). Part of my problem is that this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well.

    As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a Hyper-V VM from an old Netware server, a line on a floppy with better software for the RAID controller, a recommendation for a good Novell consultant in Nebraska who would be able to put things right, a whole other option I haven't considered yet, etc.

    Update: For backups, we have good (recently verified via restore) backups of the data only -- nothing for the software that actually runs things.

    Update 2: Just a progress report: I currently have a working Netware 3.12 install in VMware Virtual Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty Netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and Netware volumes on my existing server, figuring out from that information what modules need to be added to Netware, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then.

    Final Update (Jan 5, 2011): I was able to get spares working in both RAID arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMware Server 2.0. The spare can run and use our ERP software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • New Skool Crosstabbing

    - by Tim Dexter
    A while back I spoke about having to go back to BIP's original crosstabbing solution to achieve a certain layout. Hok Min has provided a 'man' page for the new crosstab/pivot builder for 10.1.3.4.1 users. This will make the documentation drop, but for now, get it here! The old, hand method is still available, but this new approach is more efficient and flexible. That said, you may need to get into the crosstab code to tweak it where the crosstab dialog can not help. I had to do this just this week, but more on that later. The following explains how the crosstab wizard builds the crosstab and what the fields inside the resulting template structure are there for.

    To create the crosstab, a new XDO command "<?crosstab:...?>" has been created:

        XDO Command: <?crosstab: ctvarname; data-element; rows; columns; measures; aggregation?>

    ctvarname
        Crosstab variable name. This is automatically generated by the Add-in.
        Example: C123

    data-element
        The XML data element that contains the data.
        Example: "//ROW"

    rows
        A list of XML elements for row headers. The ordering information is specified within "{" and "}".
        The first attribute is the sort element; leaving it blank means the sort element is the same as the
        row header element. The attribute "o" means order; its value can be "a" for ascending or "d" for
        descending. The attribute "t" means type; its value can be "t" for text or "n" for numeric. There
        can be more than one sort element, e.g. "emp-full-name {emp-lastname,o=a,t=n}{emp-firstname,o=a,t=n}",
        which sorts employees by last name and then first name.
        Example: "Region{,o=a,t=t}, District{,o=a,t=t}" - the first row header is "Region", sorted by
        "Region", ascending, text; the second row header is "District", sorted by "District", ascending, text.

    columns
        A list of XML elements for column headers, with the same ordering syntax as rows.
        Example: "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}" - the first column header is
        "ProductsBrand", sorted by "ProductsBrand", ascending, text; the second column header is
        "PeriodYear", sorted by "PeriodYear", ascending, text.

    measures
        A list of XML elements for measures.
        Example: "Revenue, PrevRevenue"

    aggregation
        The aggregation function name. Currently only "sum" is supported.
        Example: "sum"

    Using the Oracle BI Publisher Template Builder for Word add-in, we are able to construct the following Pivot Table. The generated XDO command for this Pivot Table is as follows:

        <?crosstab:c547; "//ROW";"Region{,o=a,t=t}, District{,o=a,t=t}"; "ProductsBrand{,o=a,t=t},PeriodYear{,o=a,t=t}"; "Revenue, PrevRevenue";"sum"?>

    Running the command on the given XML data file generates the XML file "cttree.xml". Each XPath in "cttree.xml" is described in the following table:

        Element | XPath                           | Count | Description
        C0      | /cttree/C0                      | 1     | Contains elements related to columns.
        C1      | /cttree/C0/C1                   | 4     | The first-level column "ProductsBrand". There are four distinct values; they are shown in the label (H) element.
        CS      | /cttree/C0/C1/CS                | 4     | The column-span value, used to format the crosstab table.
        H       | /cttree/C0/C1/H                 | 4     | The column header label. There are four distinct values: "Enterprise", "Magicolor", "McCloskey" and "Valspar".
        T1      | /cttree/C0/C1/T1                | 4     | The sum for measure 1, which is Revenue.
        T2      | /cttree/C0/C1/T2                | 4     | The sum for measure 2, which is PrevRevenue.
        C2      | /cttree/C0/C1/C2                | 8     | The second-level column "PeriodYear", the second group-by key. There are two distinct values, "2001" and "2002".
        H       | /cttree/C0/C1/C2/H              | 8     | The column header label. There are two distinct values, "2001" and "2002"; since it is under C1, the total number of entries is 4 x 2 => 8.
        T1      | /cttree/C0/C1/C2/T1             | 8     | The sum for measure 1, "Revenue".
        T2      | /cttree/C0/C1/C2/T2             | 8     | The sum for measure 2, "PrevRevenue".
        M0      | /cttree/M0                      | 1     | Contains elements related to measures.
        M1      | /cttree/M0/M1                   | 1     | Contains the summary for measure 1.
        H       | /cttree/M0/M1/H                 | 1     | The measure 1 label, which is "Revenue".
        T       | /cttree/M0/M1/T                 | 1     | The sum of measure 1 for the entire XPath from "//ROW".
        M2      | /cttree/M0/M2                   | 1     | Contains the summary for measure 2.
        H       | /cttree/M0/M2/H                 | 1     | The measure 2 label, which is "PrevRevenue".
        T       | /cttree/M0/M2/T                 | 1     | The sum of measure 2 for the entire XPath from "//ROW".
        R0      | /cttree/R0                      | 1     | Contains elements related to rows.
        R1      | /cttree/R0/R1                   | 4     | The first-level row "Region". There are four distinct values; they are shown in the label (H) element.
        H       | /cttree/R0/R1/H                 | 4     | The row header label for "Region". There are four distinct values: "CENTRAL REGION", "EASTERN REGION", "SOUTHERN REGION" and "WESTERN REGION".
        RS      | /cttree/R0/R1/RS                | 4     | The row-span value, used to format the crosstab table.
        T1      | /cttree/R0/R1/T1                | 4     | The sum of measure 1 "Revenue" for each distinct "Region" value.
        T2      | /cttree/R0/R1/T2                | 4     | The sum of measure 2 "PrevRevenue" for each distinct "Region" value.
        R1C1    | /cttree/R0/R1/R1C1              | 16    | Elements from combining R1 and C1: four distinct "Region" values times four distinct "ProductsBrand" values, so 4 x 4 => 16.
        T1      | /cttree/R0/R1/R1C1/T1           | 16    | The sum of measure 1 "Revenue" for each combination of "Region" and "ProductsBrand".
        T2      | /cttree/R0/R1/R1C1/T2           | 16    | The sum of measure 2 "PrevRevenue" for each combination of "Region" and "ProductsBrand".
        R1C2    | /cttree/R0/R1/R1C1/R1C2         | 32    | Elements from combining R1, C1 and C2: 4 x 4 x 2 => 32.
        T1      | /cttree/R0/R1/R1C1/R1C2/T1      | 32    | The sum of measure 1 "Revenue" for each combination of "Region", "ProductsBrand" and "PeriodYear".
        T2      | /cttree/R0/R1/R1C1/R1C2/T2      | 32    | The sum of measure 2 "PrevRevenue" for each combination of "Region", "ProductsBrand" and "PeriodYear".
        R2      | /cttree/R0/R1/R2                | 18    | Elements from combining R1 "Region" and R2 "District". Since the list of values in R2 depends on R1, the number of entries is not a simple multiplication.
        H       | /cttree/R0/R1/R2/H              | 18    | The row header label for R2 "District".
        R1N     | /cttree/R0/R1/R2/R1N            | 18    | The R2 position number within R1, used to check whether it is the last row and draw the table border accordingly.
        T1      | /cttree/R0/R1/R2/T1             | 18    | The sum of measure 1 "Revenue" for each combination of "Region" and "District".
        T2      | /cttree/R0/R1/R2/T2             | 18    | The sum of measure 2 "PrevRevenue" for each combination of "Region" and "District".
        R2C1    | /cttree/R0/R1/R2/R2C1           | 72    | Elements from combining R1, R2 and C1.
        T1      | /cttree/R0/R1/R2/R2C1/T1        | 72    | The sum of measure 1 "Revenue" for each combination of "Region", "District" and "ProductsBrand".
        T2      | /cttree/R0/R1/R2/R2C1/T2        | 72    | The sum of measure 2 "PrevRevenue" for each combination of "Region", "District" and "ProductsBrand".
        R2C2    | /cttree/R0/R1/R2/R2C1/R2C2      | 144   | Elements from combining R1, R2, C1 and C2, which gives the finest level of detail.
        M1      | /cttree/R0/R1/R2/R2C1/R2C2/M1   | 144   | The sum of measure 1 "Revenue".
        M2      | /cttree/R0/R1/R2/R2C1/R2C2/M2   | 144   | The sum of measure 2 "PrevRevenue".

    Lots to read and digest, I know!

    Customization

    One new feature I discovered this week is the ability to show one column and sort by another. I had a data set that was extracting month abbreviations; we wanted to show the months across the top and some row headers to the side. As you may know, XSL is not great with dates, especially recognising month names. It just wants to sort them alphabetically, so Apr comes before Jan, etc. A way around this is to generate a month number alongside the month and use that to sort. We can do that in the crosstab; sadly it's not exposed in the UI yet, but it's doable. Go back up and take a look at the initial crosstab command, especially the Rows and Columns entries. In there you will find the sort criteria:

        "ProductsBrand{,o=a,t=t}, PeriodYear{,o=a,t=t}"

    Notice those leading commas inside the curly braces? Because there is no field preceding them, the crosstab sorts on the column before the brace, i.e. PeriodYear. But you can insert another column from the data set to sort by. To get my sort working how I needed:

        <?crosstab:c794;"current-group()";"_Fund_Type_._Fund_Type_Display_{_Fund_Type_._Fund_Type_Sort_,o=a,t=n}";"_Fiscal_Period__Amount__._Amt_Fm_Disp_Abbr_{_Fiscal_Period__Amount__._Amt_Fiscal_Month_Sort_,o=a,t=n}";"_Execution_Facts_._Amt_";"sum"?>

    Excuse the horribly verbose XML tags, good ol' BIEE :0) The emboldened columns are not in the crosstab but are in the data set. I just opened up the field, dropped them in, and changed the type (t) value to 'n', for number, instead of the default 't', and my crosstab started sorting how I wanted it. If you find other tips and tricks, please share in the comments.
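    To make the sort-override tweak concrete, here is a minimal sketch of the same trick with invented element names (MonthName, MonthNum, Region and Revenue are illustrative, not from the article): MonthName is displayed across the top, while the hidden MonthNum element inside the braces drives a numeric sort, so Jan lands before Apr.

        <?crosstab:c100;"//ROW";"Region{,o=a,t=t}";"MonthName{MonthNum,o=a,t=n}";"Revenue";"sum"?>

    The only change from a wizard-generated command is the extra sort element before the comma and the type flag switched to 'n'.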

    Read the article

  • Entity Framework v1 … Brief Synopsis and Tips – Part 2

    - by Rohit Gupta
    Using Entity Framework with ASMX Web Services and WCF Web Services:

    If you use an ASMX web service to expose Entity objects from Entity Framework, the ASMX web service does not include object graphs; one workaround is to use the Facade pattern, or to use a WCF service instead. The other important aspect of using ASMX web services along with Entity Framework is that the ASMX client is not aware of the existence of EF v1, since the client deals solely with C# objects (not EntityObjects or an ObjectContext). Because the client is not aware of the ObjectContext, it cannot participate in change tracking: the client only receives the current values, not the original values, when the service sends the Entity objects to it.

    Thus there are two drawbacks to using Entity Framework with an ASMX web service:
    1. Object state is not maintained. To overcome this limitation we need to insert/update a single entity at a time, and retrieve the original values for the entity being updated on the server/service end before calling SaveChanges.
    2. ASMX does not maintain object graphs, i.e. Customer.Reservations or Customer.Reservations.Trip relationships are not maintained; you need to send these relationships separately from service to client.

    A WCF web service overcomes the object-graph limitation of ASMX, but we need to ensure that we populate all the non-null scalar properties of all the objects in the object graph before calling Update. A WCF web service still cannot overcome the second limitation, tracking changes to entities at the client end. Also note that the "Customer" class on the client is very different from the "Customer" class in the Entity Framework model entities; they are incompatible with each other, hence we cannot cast one to the other. However, the .NET Framework translates the client "Customer" entity to the EF v1 model "Customer" entity once the entity is serialized back on the ASMX server end. If you need change tracking enabled on the client, you need to use WCF Data Services, which is available with VS 2010.

    ======================================================================

    In WCF, when adding an object that has relationships, the framework assumes that every object in the object graph needs to be added to the store. For example, in a Customer.Reservations.Trip object graph, when a Customer entity is added to the store, EF v1 assumes that it needs to add a Reservations collection and also a Trip for each Reservation. Thus, if we need to use existing Trips for reservations, we need to ensure that we null out the Trip object reference from Reservations and set the TripReference to the EntityKey of the desired Trip instead.

    ======================================================================

    Understanding Relationships and Associations in EF v1

    The golden rule of EF is that it does not load entities/relationships unless you explicitly ask it to do so. However, there is one exception to this rule. The exception happens when you attach/detach entities from the ObjectContext. If you detach an entity in an object graph from the ObjectContext, the ObjectContext removes the ObjectStateEntry for that entity and all the relationship objects associated with it. For example, in a Customer.Order.OrderDetails graph, if the Customer entity is detached from the ObjectContext, you cannot traverse to the Order and OrderDetails entities (which still exist in the ObjectContext) from the Customer entity (which does not exist in the ObjectContext). Conversely, if you JOIN an entity that is not in the ObjectContext with an entity that is in the ObjectContext, the first entity will automatically be added to the ObjectContext, since relationships for the two entities need to exist in the ObjectContext.

    ======================================================================

    You cannot attach an EntityCollection to an entity through its navigation property; for example, you cannot code myContact.Addresses = myAddressEntityCollection.

    ======================================================================

    Cascade Deletes in EDM: The designer does not support specifying cascade deletes for an entity. To enable cascade deletes on an entity in EDM, use the Association definition in CSDL for the entity. For example, SalesOrderDetail (SOD) has a foreign-key relationship with SalesOrderHeader (SalesOrderHeader 1 : SalesOrderDetail *). If you specify a cascade delete on the SalesOrderHeader entity, calling DeleteObject on a SalesOrderHeader (SOH) entity will send delete commands for the SOH record and all the SOD records that reference it.

    ======================================================================

    As a good design practice, if you use cascade deletes, ensure that the cascade delete facet is set both in the EDM and in the database. It is not absolutely mandatory to have cascade deletes in both places (as you can see, just the cascade delete spec on the SOH entity in the EDM will ensure that the SOH record and all related SOD records are deleted from the database, even if you don't have cascade delete configured on the SOD table in the database), but it keeps the two in agreement.

    ======================================================================

    Maintaining relationships in code

    When setting a navigation property of an entity (for example, setting the Contact navigation property of an Address entity), the following rules apply:
    - If both objects are detached, no relationship object will be created. You are simply setting a property the CLR way.
    - If both objects are attached, a relationship object will be created.
    - If only one of the objects is attached, the other will become attached and a relationship object will be created. If that detached object is new, when it is attached to the context its EntityState will be Added.

    One important rule to remember regarding synchronizing the EntityReference.Value and EntityReference.EntityKey properties: when attaching an entity which has an EntityReference (e.g. an Address entity with a ContactReference), the Value property takes precedence, and if the Value and EntityKey are out of sync, the EntityKey is updated to match the Value.

    ======================================================================

    If you call the .Load() method on a detached entity, the .Load() operation will throw an exception. There is one exception to this rule: if you load entities using MergeOption.NoTracking, you will be able to call .Load() on them, since those entities are still accessible by the ObjectContext. So the bottom line is that we need the ObjectContext in order to call .Load() for deferred loading on an EntityReference or EntityCollection. Another rule to remember is that you cannot call .Load() on entities in the EntityState.Added state, since the ObjectContext uses the EntityKey of the primary (parent) entity when loading the related (child) entity, not the EntityKey of the child (even if the EntityKey of the child is present before calling .Load()).

    ======================================================================

    You can use ObjectContext.Add() to add an entity to the ObjectContext; this sets the EntityState of the new entity to EntityState.Added, and no relationships are added or updated. You can also use the EntityCollection.Add() method to add an entity to another entity's related EntityCollection; for example, a Contact has an Addresses EntityCollection, so to add a new address use contact.Addresses.Add(newAddress). Note that if the entity does not already exist in the ObjectContext, calling contact.Addresses.Add(newAddress) will cause a new Address entity to be added to the ObjectContext with EntityState.Added, and it will also add a RelationshipEntry (a relationship object) with EntityState.Added which connects the contact with the new address. If the entity already exists in the ObjectContext (say, as part of the theOtherContact.Addresses collection), calling contact.Addresses.Add(existingAddress) will add two RelationshipEntry objects to the ObjectStateEntry collection: one with EntityState.Deleted and the other with EntityState.Added. This implies that the existingAddress entity is removed from the theOtherContact.Addresses collection and added to the contact.Addresses collection, effectively reassigning the address from theOtherContact to contact. This is called moving an existing entity to a new object graph.

    ======================================================================

    You usually use the ObjectContext.Attach() and EntityCollection.Attach() methods when you need to reconstruct the object graph after deserializing the objects received from an ASMX web service client. Attach is used to connect entities that already exist in the ObjectContext. When EntityCollection.Attach() is called, the EntityState of the RelationshipEntry (the relationship object) remains EntityState.Unchanged, whereas when EntityCollection.Add() is called, the EntityState of the relationship object changes to EntityState.Added or EntityState.Deleted as the situation demands.

    ======================================================================

    LINQ to Entities tips:

    SelectMany does an inner join by default. For example:

        from c in Contact
        from a in c.Address
        select c

    This will do an inner join between the Contacts and Addresses tables and return only those contacts that have an address.

    ======================================================================

    Group joins do a LEFT join by default. For example:

        from a in Address
        join c in Contact on a.Contact.ContactID equals c.ContactID into g
        where a.CountryRegion == "US"
        select g;

    This query will do a left join on the Contact table and return contacts that have an address in the "US" region. The following query:

        from c in Contact
        join a in Address.Where(a1 => a1.CountryRegion == "US")
            on c.ContactID equals a.Contact.ContactID into addresses
        select new { c, addresses }

    will do a left join on the Address table and return all contacts. Of these contacts, only those with an address in the "US" region will have their Addresses EntityCollection populated; the other contacts will have 0 addresses in the collection (even if addresses for those contacts exist in the database, but in a different region).

    ======================================================================

    LINQ to Entities does not support DefaultIfEmpty(); instead, use the .Include("Address") query-builder method to do a left join, or use group joins if you need more control, such as filtering on the Address EntityCollection of the Contact entity.

    ======================================================================

    Use CreateSourceQuery() on an EntityReference or EntityCollection if you need to add filters during deferred loading of entities (deferred loading in EF v1 happens when you call the Load() method on the EntityReference or EntityCollection). For example:

        var cust = context.Contacts.OfType<Customer>().First();
        var sq = cust.Reservations.CreateSourceQuery()
                     .Where(r => r.ReservationDate > new DateTime(2008, 1, 1));
        cust.Reservations.Attach(sq);

    This populates only those reservations made after Jan 1, 2008. This is the only way (in EF v1) to attach a range of entities to an EntityCollection using the Attach() method.

    ======================================================================

    If you need to get the foreign-key value for an entity (e.g. to get the ContactID value from an Address entity), use:

        address.ContactReference.EntityKey.EntityKeyValues.Where(k => k.Key == "ContactID")
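    As a footnote to drawback 1 above (ASMX clients losing original values), here is a minimal sketch of the "fetch originals, then apply client values" pattern on the service side, using EF v1's ObjectContext.ApplyPropertyChanges. The context and set names (MyEntities, Customers) are placeholders, not from the article.

        // Sketch only: assumes .NET 3.5 SP1 / EF v1 and placeholder type names.
        public void UpdateCustomer(Customer fromClient)
        {
            using (var ctx = new MyEntities())
            {
                // Query the original entity so the context holds its original values...
                var original = ctx.Customers.First(c => c.CustomerID == fromClient.CustomerID);

                // ...then overlay the detached client copy's current values onto it,
                // so change tracking sees both original and current values.
                ctx.ApplyPropertyChanges(original.EntityKey.EntitySetName, fromClient);
                ctx.SaveChanges();
            }
        }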

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. It’s job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, is Weekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘the Year = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_Idx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 22, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 21. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365 row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate. 
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi
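    One small addendum, not from the post itself: on a test instance, trace flag 9130 can also be scoped to a single statement with QUERYTRACEON (sysadmin only, and the flag remains undocumented), which avoids enabling it session- or server-wide while you inspect residual predicates.

        -- Test systems only: TF 9130 materializes residual predicates
        -- as visible Filter operators in the execution plan.
        SELECT Days = COUNT_BIG(*)
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
            AND isWeekday = 0
        OPTION (QUERYTRACEON 9130);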

    Read the article

  • Missing audio and problems playing FLV video converted from 720p .mov file with FFMPEG

    - by undefined
    I have some .mov video files recorded from a JVC GC-FM1 HD video camera in 720p mode. I have FFMPEG running on a Linux box that I upload files to and have them encoded into FLV format. The video appears to be encoding ok but there is no audio in the resulting FLV file and when I play it back in Flash Player in a browser or on Adobe Media Player, the video pauses at the start. It appears that Adobe Media Player waits for the progress bar to reach the end of the video before starting the playback - i.e. the video will load, the picture pauses, the progress bar seeks to the end as if the video was playing then when it reaches the end the video picture starts. There is no audio on the video. I am noticing this in the video player I have built with Flash 8 using an FLVPlayback component and attached seekBar. The seek bar will start moving as if the video is playing but the picture remains paused. Here are some outputs from my FFMPEG log and the command I am using to encode the video - my FFMPEG command called from PHP - $cmd = 'ffmpeg -i ' . $sourcelocation.$filename.".".$fileext . ' -ab 96k -b 700k -ar 44100 -s ' . $target['width'] . 'x' . $target['height'] . ' -ac 1 -acodec libfaac ' . $destlocation.$filename.$ext_trans .' 2>&1'; and here is the output from my error log - FFmpeg version UNKNOWN, Copyright (c) 2000-2010 Fabrice Bellard, et al. built on Jan 22 2010 11:31:03 with gcc 4.1.2 20070925 (Red Hat 4.1.2-33) configuration: --prefix=/usr --enable-static --enable-shared --enable-gpl --enable-nonfree --enable-postproc --enable-avfilter --enable-avfilter-lavf --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libvorbis --enable-libx264 libavutil 50. 7. 0 / 50. 7. 0 libavcodec 52.48. 0 / 52.48. 0 libavformat 52.47. 0 / 52.47. 0 libavdevice 52. 2. 0 / 52. 2. 0 libavfilter 1.17. 0 / 1.17. 0 libswscale 0. 9. 0 / 0. 9. 0 libpostproc 51. 2. 0 / 51. 2. 0 Seems stream 0 codec frame rate differs from container frame rate: 119.88 (120000/1001) -> 59.94 (60000/1001) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'uploads/video/60974_v1.mov': Metadata: major_brand : qt minor_version : 0 compatible_brands: qt comment : JVC GC-FM1 comment-eng : JVC GC-FM1 Duration: 00:00:30.41, start: 0.000000, bitrate: 4158 kb/s Stream #0.0(eng): Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 4017 kb/s, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s Output #0, rawvideo, to 'uploads/video/60974_v1.jpg': Stream #0.0(eng): Video: mjpeg, yuvj420p, 320x240 [PAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 90k tbn, 59.94 tbc Stream mapping: Stream #0.0 -> #0.0 Press [q] to stop encoding [h264 @ 0x8e67930]B picture before any references, skipping [h264 @ 0x8e67930]decode_slice_header error [h264 @ 0x8e67930]no frame! Error while decoding stream #0.0 [h264 @ 0x8e67930]B picture before any references, skipping [h264 @ 0x8e67930]decode_slice_header error [h264 @ 0x8e67930]no frame! Error while decoding stream #0.0 frame= 1 fps= 0 q=3.8 Lsize= 15kB time=0.02 bitrate=7271.4kbits/s dup=482 drop=0 video:15kB audio:0kB global headers:0kB muxing overhead 0.000000% Which are the important errors here - B picture before any references, skipping? decode_slice_header error? no frame? or Seems stream 0 codec frame rate differs from container frame rate: 119.88 (120000/1001) - 59.94 (60000/1001) Any advice welcome, thanks
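    Two hedged observations, not from the original post. First, the log shown is the thumbnail pass (it writes a .jpg and maps only the video stream), so the FLV pass needs to be inspected separately before concluding the audio was dropped there. Second, "progress bar runs to the end before the picture starts" is the classic signature of an FLV missing its duration/keyframe metadata, which a post-processor can inject. Paths and bitrates below are placeholders.

        # Re-encode with the audio stream explicitly configured (MP3 is FLV's safest audio codec)
        ffmpeg -i input.mov -acodec libmp3lame -ab 96k -ar 44100 -ac 2 \
               -vcodec flv -b 700k -s 640x480 output.flv

        # Inject FLV duration/keyframe metadata so players can seek and start promptly
        flvtool2 -U output.flv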

    Read the article

  • PHP curl post to login to wordpress

    - by Sadi
    I followed http://stackoverflow.com/questions/724107 to log in to WordPress using php_curl, and it works fine as long as I use WAMP (Apache/PHP). But when it comes to IIS on the dedicated server, it returns nothing. I have written the following function, which works fine on my local WAMP but not when deployed to the client's dedicated Windows Server 2K3 box. Please help me.

        function post_url($url, array $query_string)
        {
            //$url = http://myhost.com/wptc/sys/wp/wp-login.php
            /*
            $query_string = array(
                'log' => 'admin',
                'pwd' => 'test',
                'redirect_to' => 'http://google.com',
                'wp-submit' => 'Log%20In',
                'testcookie' => 1
            );
            */
            //temp_dir is defined as folder = path/to/a/folder
            $cookie = temp_dir . "cookie.txt";
            $c = curl_init($url);
            if (count($query_string)) {
                curl_setopt($c, CURLOPT_POSTFIELDS, http_build_query($query_string));
            }
            curl_setopt($c, CURLOPT_POST, 1);
            curl_setopt($c, CURLOPT_COOKIEFILE, $cookie);
            //curl_setopt($c, CURLOPT_SSL_VERIFYPEER, 1);
            //curl_setopt($c, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6");
            curl_setopt($c, CURLOPT_TIMEOUT, 60);
            curl_setopt($c, CURLOPT_FOLLOWLOCATION, 1);
            curl_setopt($c, CURLOPT_RETURNTRANSFER, 1); //return the content
            curl_setopt($c, CURLOPT_COOKIEJAR, $cookie);
            //curl_setopt($c, CURLOPT_AUTOREFERER, 1);
            //curl_setopt($c, CURLOPT_REFERER, wp_admin_url);
            //curl_setopt($c, CURLOPT_MAXREDIRS, 10);
            curl_setopt($c, CURLOPT_HEADER, 0);
            //curl_setopt($c, CURLOPT_CRLF, 1);
            try {
                $result = curl_exec($c);
            } catch (Exception $e) {
                $result = 'error';
            }
            curl_close($c);
            return $result; //it returns nothing (empty)
        }

    Other facts: curl_error($c) returns nothing. When CURLOPT_HEADER is set to on, it returns this header:

        HTTP/1.1 200 OK
        Cache-Control: no-cache, must-revalidate, max-age=0
        Pragma: no-cache
        Content-Type: text/html; charset=UTF-8
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Last-Modified: Thu, 06 May 2010 21:06:30 GMT
        Server: Microsoft-IIS/7.0
        X-Powered-By: PHP/5.2.13
        Set-Cookie: wordpress_test_cookie=WP+Cookie+check; path=/wptc/sys/wp/
        Set-Cookie: wordpress_b13661ceb5c3eba8b42d383be885d372=admin%7C1273352790%7C7d8ddfb6b1c0875c37c1805ab98f1e7b; path=/wptc/sys/wp/wp-content/plugins; httponly
        Set-Cookie: wordpress_b13661ceb5c3eba8b42d383be885d372=admin%7C1273352790%7C7d8ddfb6b1c0875c37c1805ab98f1e7b; path=/wptc/sys/wp/wp-admin; httponly
        Set-Cookie: wordpress_logged_in_b13661ceb5c3eba8b42d383be885d372=admin%7C1273352790%7Cb90825fb4a7d5da9b5dc4d99b4e06049; path=/wptc/sys/wp/; httponly
        Refresh: 0;url=http://myhost.com/wptc/sys/wp/wp-admin/
        X-Powered-By: ASP.NET
        Date: Thu, 06 May 2010 21:06:30 GMT
        Content-Length: 0

    CURL version info:

        Array
        (
            [version_number] => 463872
            [age] => 3
            [features] => 2717
            [ssl_version_number] => 0
            [version] => 7.20.0
            [host] => i386-pc-win32
            [ssl_version] => OpenSSL/0.9.8k
            [libz_version] => 1.2.3
            [protocols] => Array
                (
                    [0] => dict
                    [1] => file
                    [2] => ftp
                    [3] => ftps
                    [4] => http
                    [5] => https
                    [6] => imap
                    [7] => imaps
                    [8] => ldap
                    [9] => pop3
                    [10] => pop3s
                    [11] => rtsp
                    [12] => smtp
                    [13] => smtps
                    [14] => telnet
                    [15] => tftp
                )
        )

    PHP Version 5.2.13. Windows Server 2K3, IIS 7. Working fine on Apache, PHP 3.0 on my localhost (Windows).
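    A hedged first debugging step, not from the question: curl_exec() returns false on failure rather than throwing, so the try/catch above never fires, and on IIS the cookie path is a frequent culprit because the worker process often cannot write to it. Logging cURL's verbose trace makes both problems visible; the log path below is a placeholder.

        // Sketch: surface cURL's own diagnostics before suspecting WordPress.
        $c = curl_init('http://myhost.com/wptc/sys/wp/wp-login.php');
        $trace = fopen('C:\\temp\\curl_trace.txt', 'w'); // placeholder; must be writable by the IIS worker
        curl_setopt($c, CURLOPT_VERBOSE, 1);
        curl_setopt($c, CURLOPT_STDERR, $trace);
        curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
        $result = curl_exec($c);
        if ($result === false) {
            echo 'cURL error: ' . curl_error($c); // curl_exec() returns false, it never throws
        }
        curl_close($c);
        fclose($trace);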

    Read the article

  • Import Twitter XML into Visual Basic

    - by ct2k7
    Hello, I'm trying to import the XML provided by Twitter into a readable format in Visual Basic. The XML looks like this:

        <?xml version="1.0" encoding="UTF-8" ?>
        <statuses type="array">
          <status>
            <created_at>Mon Jan 18 20:41:19 +0000 2010</created_at>
            <id>111111111</id>
            <text>thattext</text>
            <source><a href="http://www.seesmic.com/" rel="nofollow">Seesmic</a></source>
            <truncated>false</truncated>
            <in_reply_to_status_id>7916479948</in_reply_to_status_id>
            <in_reply_to_user_id>90978206</in_reply_to_user_id>
            <favorited>false</favorited>
            <in_reply_to_screen_name>radonsystems</in_reply_to_screen_name>
            <user>
              <id>20193170</id>
              <name>personname</name>
              <screen_name>screenname</screen_name>
              <location>loc</location>
              <description>desc</description>
              <profile_image_url>http://a3.twimg.com/profile_images/747012343/twitter_normal.png</profile_image_url>
              <url>myurl</url>
              <protected>false</protected>
              <followers_count>97</followers_count>
              <profile_background_color>ffffff</profile_background_color>
              <profile_text_color>333333</profile_text_color>
              <profile_link_color>0084B4</profile_link_color>
              <profile_sidebar_fill_color>ffffff</profile_sidebar_fill_color>
              <profile_sidebar_border_color>ababab</profile_sidebar_border_color>
              <friends_count>76</friends_count>
              <created_at>Thu Feb 05 21:54:24 +0000 2009</created_at>
              <favourites_count>1</favourites_count>
              <utc_offset>0</utc_offset>
              <time_zone>London</time_zone>
              <profile_background_image_url>http://a3.twimg.com/profile_background_images/76723999/754686.png</profile_background_image_url>
              <profile_background_tile>true</profile_background_tile>
              <notifications>false</notifications>
              <geo_enabled>true</geo_enabled>
              <verified>false</verified>
              <following>false</following>
              <statuses_count>782</statuses_count>
              <lang>en</lang>
              <contributors_enabled>false</contributors_enabled>
            </user>
            <geo />
            <coordinates />
            <place />
            <contributors />
          </status>
        </statuses>

    Now, I want to display it in a panel that refreshes automatically after a certain period; however, I only want to pick out certain bits of info from the XML, such as profile_image_url, text, and created_at. You can guess how the data will be formatted - much like in TweetDeck and other Twitter clients. I'm quite new to Visual Basic, so how could I do this? Thanks
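
    For what it's worth, here is a short sketch of pulling out just created_at, text, and profile_image_url from that document. It is written in PHP for consistency with the other code examples on this page rather than in Visual Basic, so treat it as a map of which elements to read ($twitterResponse is an assumed variable holding the XML); the same status/user element paths apply directly to a VB XmlDocument or XDocument query.

        <?php
        // Sketch: extract three fields from the Twitter statuses XML above.
        // $twitterResponse is assumed to hold the raw XML string.
        $xml = simplexml_load_string($twitterResponse);
        if ($xml !== false) {
            foreach ($xml->status as $status) {          // <statuses><status>...
                $created = (string) $status->created_at; // "Mon Jan 18 20:41:19 +0000 2010"
                $text    = (string) $status->text;
                $avatar  = (string) $status->user->profile_image_url;
                echo "$created | $avatar | $text\n";
            }
        }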

    Read the article

  • Rendering a view to a string in MVC, then redirecting -- workarounds?

    - by James S
    Hi -- I can't render a view to a string and then redirect, despite this answer from Feb (after version 1.0, I think) that claims it's possible. I thought I was doing something wrong, and then I read this answer from Haack in July that claims it's not possible. If somebody has it working and can help me get it working, that's great (and I'll post code and errors). However, I'm now at the point of needing workarounds. There are a few, but nothing ideal. Has anybody solved this, or have any comments on my ideas?

    This is to render email. My ideas so far:

    1. Send the email outside of the web request (store the info in a db and send it later). But there are many types of emails, and I don't want to store the template data (a user object, a few other LINQ objects) in a db just to let it get rendered later. I could create a simpler, serializable POCO and save that in the db, but why? ... I just want rendered text!
    2. Create a new RedirectToAction object that checks whether the headers have been sent (can't figure out how to do this -- try/catch?) and, if so, builds out a simple page with a meta redirect, a javascript redirect, and also a "click here" link.
    3. Within my controller, remember whether I've rendered an email and, if so, manually do #2 by displaying a view.
    4. Manually send the redirect headers before any potential email rendering; then, rather than using the MVC infrastructure's RedirectToAction, just call result.end. This seems easiest, but really messy.

    EDIT: I've tried Dan's code (very similar to the code from Jan/Feb that I've already tried) and I'm still getting the same error. The only substantial difference I can see is that his example uses a view while I use a partial view. I'll try testing this later with a view. Here's what I've got:

    Controller:

        public ActionResult Certifications(string email_intro)
        {
            //a lot of stuff
            ViewData["users"] = users;
            if (isPost())
            {
                //create the viewmodel
                var view_model = new ViewModels.Emails.Certifications.Open(userContext)
                {
                    emailIntro = email_intro
                };
                //i've tried stopping this after just one iteration, in case the problem is due to calling it multiple times
                foreach (var user in users)
                {
                    if (user.Email_Address.IsValidEmailAddress())
                    {
                        //add more stuff to the view model specific to this user
                        view_model.user = user;
                        view_model.certification302Summary.subProcessesOwner = new SubProcess_Certifications(RecordUpdating.Role.Owner, null, null, user.User_ID, repository);
                        //more here....
                        //if i comment out the next line, everything works ok
                        SendEmail(view_model, this.ControllerContext);
                    }
                }
                return RedirectToAction("Certifications");
            }
            return View();
        }

    SendEmail():

        public static void SendEmail(ViewModels.Emails.Certifications.Open model, ControllerContext context)
        {
            var vd = context.Controller.ViewData;
            vd["model"] = model;
            var renderer = new CustomRenderers();
            //i fixed an error in your code here
            var text = renderer.RenderViewToString3(context, "~/Views/Emails/Certifications/Open.ascx", "", vd, null);
            var a = text;
        }

    CustomRenderers:

        public class CustomRenderers
        {
            public virtual string RenderViewToString3(ControllerContext controllerContext, string viewPath, string masterPath, ViewDataDictionary viewData, TempDataDictionary tempData)
            {
                //copy/paste of dan's code
            }
        }

    Error:

        [HttpException (0x80004005): Cannot redirect after HTTP headers have been sent.]
        System.Web.HttpResponse.Redirect(String url, Boolean endResponse) +8707691

    Thanks, James

    Read the article

  • PHP: Strange Date Problem

    - by Me-and-Coding
    Hi, I have two users in my database whose birth date is set to 1985-01-26. I also have a function which, given a user's birth date, tells how many days are left until their birthday. Here is the function:

        function retage($iy, $im, $id)
        {
            if (!empty($iy)>0 && intval($im)>0 && intval($id)>0)
            {
                $tdo = $iy.'-'.$im.'-'.$id;
                $tdc = date('Y').'-'.$im.'-'.$id;
                /*echo "<br/>";*/
                $cd = date('Y-n-j');
                /*echo "<br/>";*/
                if (strtotime($tdc) > strtotime($cd)) //coming
                {
                    $ty = floor((strtotime($tdc)-strtotime($tdo))/(3600*24*365));
                    $td = floor((strtotime($tdc)-strtotime($cd))/(24*3600));
                    if ($td == 1) {
                        $td = round((strtotime($tdc)-strtotime($cd))/(24*3600)).' day to go';
                    } else {
                        $td = round((strtotime($tdc)-strtotime($cd))/(24*3600)).' days to go';
                    }
                    $ty = '<font color="#C7C5C5">is turning '.$ty.' on <br>'.date('M jS Y', strtotime($tdc)).'</font>';
                    //return 'is turning '.$ty.' on '.$tdc;
                }
                elseif (strtotime($tdc) < strtotime($cd)) //past
                {
                    $ty = floor((strtotime($tdc)-strtotime($tdo))/(3600*24*365));
                    if ($ty > 0)
                    {
                        //$td='gone '.floor((strtotime($cd)-strtotime($tdc))/(24*3600)).' days ago';
                        $ndays = floor((strtotime($cd)-strtotime($tdc))/(24*3600));
                        if ($ndays == 1)
                            $td = ' gone '.round((strtotime($cd)-strtotime($tdc))/(24*3600)).' day ago';
                        else
                            $td = ' gone '.round((strtotime($cd)-strtotime($tdc))/(24*3600)).' days ago';
                        $ty = '<font color="#C7C5C5">had turned '.$ty.' on <br>'.date('M jS Y', strtotime($tdc)).'</font>';
                        //return 'had turned '.$ty.' on '.$tdc;
                    }
                    else
                    {
                        $tdc = (date('Y')+1).'-'.$im.'-'.$id;
                        $ty = floor((strtotime($tdc)-strtotime($tdo))/(3600*24*365));
                        //$td=floor((strtotime($tdc)-strtotime($cd))/(24*3600)).' days to go';
                        $td = floor((strtotime($tdc)-strtotime($cd))/(24*3600));
                        if ($td == 1) {
                            $td = round((strtotime($tdc)-strtotime($cd))/(24*3600)).' day to go';
                        } else {
                            $td = round((strtotime($tdc)-strtotime($cd))/(24*3600)).' days to go';
                        }
                        $ty = '<font color="#C7C5C5">is turning '.$ty.' on <br>'.date('M jS Y', strtotime($tdc)).'</font>';
                        //return 'is turning '.$ty.' on '.$tdc;
                    }
                }
                else //today
                {
                    $ty = floor((strtotime($tdc)-strtotime($tdo))/(3600*24*365));
                    if ($ty > 0)
                    {
                        $td = 'today';
                        $ty = '<font color="#C7C5C5">has turned <br>'.$ty.' on today </font>';
                        //return 'has turned '.$ty.' on today';
                    }
                    else
                    {
                        $ty = '<font color="#C7C5C5">today</font>';
                        $td = '';
                        //return '';
                    }
                }
            }
            else
            {
                $ty = '';
                $td = '';
                //return '';
            }
            $ta[0] = $ty;
            $ta[1] = $td;
            return $ta;
        }

    I use the code below to show the days remaining:

        while ($rs = mysql_fetch_array($result)) {
            if (isset($rs['byear'], $rs['bmonth'], $rs['bdate'])) {
                $tmptxt = retage($rs['byear'], $rs['bmonth'], $rs['bdate']);
                echo $tmptxt[1];
            }
        }

    The strange thing is that for one user the days remaining are shown correctly, e.g. "gone 120 days ago", while for the other user, who has the same birth date, this is shown: "Jan 1st 1970 -14755 days to go". Stranger still: when I use the same function outside of the loop and test with the date 1985-01-26, the correct result is shown. Note: you can check out the function for yourself. Could you please tell me what could be wrong here? Your help will be highly appreciated. Thanks.
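
    A hedged observation before any rewrite: date('M jS Y', strtotime($x)) prints "Jan 1st 1970" exactly when strtotime() fails and returns false, which points at one of the three column values being empty or non-numeric for the failing user. Note also that !empty($iy)>0 compares the boolean result of !empty() against 0, so the guard passes for any non-empty value whatsoever. The sketch below validates the row first so bad data fails loudly instead of collapsing to the epoch; days_to_birthday is an illustrative name, and byear/bmonth/bdate are the column names from the question's loop.

        <?php
        // Minimal sketch of the days-to-next-birthday calculation with
        // explicit validation. Returns false for rows with bad date parts.
        function days_to_birthday($byear, $bmonth, $bdate)
        {
            // checkdate() rejects empty strings, zeros, and impossible dates -
            // the inputs that make strtotime() collapse to the Unix epoch.
            if (!checkdate((int)$bmonth, (int)$bdate, (int)$byear)) {
                return false;
            }

            $today = mktime(0, 0, 0, (int)date('n'), (int)date('j'), (int)date('Y'));
            $next  = mktime(0, 0, 0, (int)$bmonth, (int)$bdate, (int)date('Y'));
            if ($next < $today) {
                // Birthday already passed this year; use next year's date.
                $next = mktime(0, 0, 0, (int)$bmonth, (int)$bdate, (int)date('Y') + 1);
            }
            return (int) round(($next - $today) / 86400); // 0 means today
        }

        // Usage inside the loop from the question:
        // $days = days_to_birthday($rs['byear'], $rs['bmonth'], $rs['bdate']);
        // echo ($days === false) ? 'invalid date in row' : $days . ' days to go';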

    Read the article

< Previous Page | 38 39 40 41 42 43 44  | Next Page >