Search Results

Search found 10285 results on 412 pages for 'cpu architecture'.

  • Need help identifying a nasty rootkit in Windows

    - by goofrider
    I have a nasty rootkit that no tools seem to be able to identify. I know for sure it's a rootkit, but I can't figure out which rootkit it is. Here's what I've gathered so far: It creates multiple copies of itself in %HOME%\Local Settings\Temp with names like Q.EXE, IAJARZ.exe, etc., and installs them as hidden services. These EXEs have SysInternals identifiers in them so they're definitely rootkits. It hooks very deep into the system, including file read/write, security policies, registry read/write, and possibly WinSock/TCP/IP. When going to Sophos.com to download their software, the rootkit injected something called Microsoft Ajax Toolkit into the page, which injects code into the email submission form in order to redirect it. (EDIT: I might have panicked. It looks like Sophos does use an AJAX email form; their form is just broken on Chrome, so it looked like a mail-form injection attack. The link is http://www.sophos.com/en-us/products/free-tools/virus-removal-tool/download.aspx ) Super-Antispyware found a lot of spyware cookies in the name of .kaspersky.2o7.net, etc. (just check 2o7.net, it looks like a legit ad company). I tried comparing DNS lookups from the infected system and from systems in other physical locations; there seem to be no DNS redirections. I used dd to copy the MBR and compared it with the MBR provided by the ms-sys package; no differences, so it's not infecting the MBR. No antivirus or rootkit scanner is able to identify it. Most of them can't even find it. I tried scanning in situ (normal mode), in safe mode, and after booting a Linux live CD. Scanners used: Avast, Sophos Anti-Rootkit, Kaspersky TDSSKiller, GMER, RootkitRevealer, and many others. Kaspersky reported some unsigned system files that ought to be signed (e.g. tcpip.sys) and reported a number of MD5 mismatches, but otherwise couldn't identify anything based on signatures. When running Sysinternals RootkitRevealer and Sophos Anti-Rootkit, CPU usage goes up to 100% and they get stuck. The rootkit is blocking them. When trying to run or install HiJackThis, RootkitRevealer and some other scanners, it tells me a system security policy prevents running/installing them. The list of malicious activities goes on and on. Here's a sample of logs from all my scans. In particular, aswSnx.SYS, apnenfno.sys and PROCMON20.SYS have a huge number of hooks. It's hard to tell whether the rootkit replaced legit program files like aswSnx.SYS (from Avast) and PROCMON20.SYS (from Sysinternals Process Monitor). I can't find out whether apnenfno.sys is from a legit program. Help identifying it is appreciated. Trend Micro RootkitBuster ------ [HIDDEN_REGISTRY][Hidden Reg Value]: KeyPath : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\sptd\Cfg Root : 586bfc0 SubKey : Cfg ValueName : g0 Data : 38 23 E8 D0 BF F2 2D 6F ... 
ValueType : 3 AccessType: 0 FullLength: 61 DataSize : 32 [HOOKED_SERVICE_API]: Service API : ZwCreateMutant Image Path : C:\WINDOWS\System32\Drivers\aswSnx.SYS OriginalHandler : 0x8061758e CurrentHandler : 0xaa66cce8 ServiceNumber : 0x2b ModuleName : aswSnx.SYS SDTType : 0x0 [HOOKED_SERVICE_API]: Service API : ZwCreateThread Image Path : c:\windows\system32\drivers\apnenfno.sys OriginalHandler : 0x805d1038 CurrentHandler : 0xaa5f118c ServiceNumber : 0x35 ModuleName : apnenfno.sys SDTType : 0x0 [HOOKED_SERVICE_API]: Service API : ZwDeleteKey Image Path : C:\WINDOWS\system32\Drivers\PROCMON20.SYS OriginalHandler : 0x80624472 CurrentHandler : 0xa709b0f8 ServiceNumber : 0x3f ModuleName : PROCMON20.SYS SDTType : 0x0 HiJackThis ------ O23 - Service: JWAHQAGZ - Sysinternals - www.sysinternals.com - C:\DOCUME~1\jeff\LOCALS~1\Temp\JWAHQAGZ.exe O23 - Service: LHIJ - Sysinternals - www.sysinternals.com - C:\DOCUME~1\jeff\LOCALS~1\Temp\LHIJ.exe Kaspersky TDSSKiller ------ 21:05:58.0375 3936 C:\WINDOWS\system32\ati2sgag.exe - copied to quarantine 21:05:59.0217 3936 ATI Smart ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:05:59.0342 3936 C:\WINDOWS\system32\BUFADPT.SYS - copied to quarantine 21:05:59.0856 3936 BUFADPT ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:05:59.0965 3936 C:\Program Files\CrashPlan\CrashPlanService.exe - copied to quarantine 21:06:00.0152 3936 CrashPlanService ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0246 3936 C:\WINDOWS\system32\epmntdrv.sys - copied to quarantine 21:06:00.0433 3936 epmntdrv ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0464 3936 C:\WINDOWS\system32\EuGdiDrv.sys - copied to quarantine 21:06:00.0526 3936 EuGdiDrv ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0604 3936 C:\Program Files\Common Files\Macrovision Shared\FLEXnet Publisher\FNPLicensingService.exe - copied to quarantine 21:06:01.0181 3936 FLEXnet Licensing Service ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0321 3936 C:\Program Files\AddinForUNCFAT\UNCFATDMS.exe - copied to quarantine 21:06:01.0430 3936 OTFSDMS ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0492 3936 C:\WINDOWS\system32\DRIVERS\tcpip.sys - copied to quarantine 21:06:01.0539 3936 Tcpip ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0601 3936 C:\DOCUME~1\jeff\LOCALS~1\Temp\TULPUWOX.exe - copied to quarantine 21:06:01.0664 3936 HKLM\SYSTEM\ControlSet003\services\TULPUWOX - will be deleted on reboot 21:06:01.0664 3936 C:\DOCUME~1\jeff\LOCALS~1\Temp\TULPUWOX.exe - will be deleted on reboot 21:06:01.0664 3936 TULPUWOX ( UnsignedFile.Multi.Generic ) - User select action: Delete 21:06:01.0757 3936 C:\WINDOWS\system32\Drivers\usbaapl.sys - copied to quarantine 21:06:01.0866 3936 USBAAPL ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0913 3936 C:\Program Files\VMware\VMware Player\vmware-authd.exe - copied to quarantine 21:06:02.0443 3936 VMAuthdService ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:02.0443 3936 vmount2 ( UnsignedFile.Multi.Generic ) - skipped by user 21:06:02.0443 3936 vmount2 ( UnsignedFile.Multi.Generic ) - User select action: Skip 21:06:02.0459 3936 vstor2 ( UnsignedFile.Multi.Generic ) - skipped by user 21:06:02.0459 3936 vstor2 ( UnsignedFile.Multi.Generic ) - User select action: Skip

    Read the article

  • Appropriate programming design questions.

    - by Edward
    I have a few questions on good programming design. I'm going to first describe the project I'm building so you are better equipped to help me out. I am coding a Remote Assistance Tool similar to TeamViewer, Microsoft Remote Desktop, or CrossLoop. It will incorporate concepts like UDP networking (using the Lidgren networking library), NAT traversal (since many computers are invisible behind routers nowadays), and Mirror Drivers (using DFMirage's Mirror Driver (http://www.demoforge.com/dfmirage.htm) for real-time screen grabbing on the remote computer). That being said, this program is conceptually a client-server architecture, but I made only one program with the functionality of both client and server. That way, when the user runs my program, they can switch between giving assistance and receiving assistance without having to download a separate client or server module. I have a Windows Form that allows the user to choose between giving assistance and receiving assistance. I have another Windows Form for a file explorer module. I have another Windows Form for a chat module. I have another Windows Form for a registry editor module. I have another Windows Form for the live control module. So I've got a Form for each module, which raises the first question: 1. Should I process module-specific commands inside the code of the respective Windows Form? Meaning, let's say I get a command with some data that enumerates the remote user's files for a specific directory. Obviously, I would have to update this on the File Explorer Windows Form and add the entries to the ListView. Should I be processing this code inside the Windows Form, though? Or should I be handling this in another class (although I have to eventually pass the data to the Form to draw, of course)? Or is it like a hybrid in which I process most of the data in another class and pass the final result to the Form to draw? So I've got like 5-6 forms, one for each module. The user starts up my program, enters the remote machine's ID (not IP, ID, because we are registering with an intermediary server to enable NAT traversal), their password, and connects. Now let's suppose the connection is successful. Then the user is presented with a form with all the different modules. So he can open up a File Explorer, or he can mess with the Registry Editor, or he can choose to Chat with his buddy. So now the program is sort of idle, just waiting for the user to do something. If the user opens up Live Control, then the program will be spending most of its time receiving packets from the remote machine and drawing them to the form to provide a 'live' view. 2. Second design question, a spin-off of question #1. How would I pass module-specific commands to their respective Windows Forms? What I mean is, I have a class like "NetworkHandler.cs" that checks for messages from the remote machine. NetworkHandler.cs is a static class, globally accessible. So let's say I get a command that enumerates the remote user's files for a specific directory. How would I "give" that command to the File Explorer Form? I was thinking of making an OnCommandReceivedEvent inside NetworkHandler and having each form register to that event. When the NetworkHandler received a command, it would raise the event, all forms would check it to see if it was relevant, and the appropriate form would take action. Is this an appropriate/the best solution available? 3. The networking library I'm using, Lidgren, provides two options for checking networking messages. 
One can either poll ReadMessage() to return null or a message, or one can use an AutoResetEvent OnMessageReceived (I'm guessing this is like an event). Which one is more appropriate?
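
    For question 2, below is a minimal sketch of the shared-event idea, assuming C#/.NET Windows Forms. Every name in it (NetworkHandler, Command, ModuleType, the helper methods) is a hypothetical placeholder rather than Lidgren or framework API; it only illustrates forms subscribing to one static event, filtering on a module identifier, and marshalling the update back onto the UI thread.

        using System;
        using System.Windows.Forms;

        // Hypothetical command envelope; the Module field lets each form filter cheaply.
        public enum ModuleType { FileExplorer, Chat, RegistryEditor, LiveControl }

        public class Command
        {
            public ModuleType Module;
            public string Name;
            public byte[] Payload;
        }

        public static class NetworkHandler
        {
            // Forms subscribe to this; the receive loop raises it for every decoded command.
            public static event Action<Command> CommandReceived;

            public static void Dispatch(Command command)
            {
                Action<Command> handler = CommandReceived;
                if (handler != null) handler(command);
            }
        }

        public class FileExplorerForm : Form
        {
            public FileExplorerForm()
            {
                NetworkHandler.CommandReceived += OnCommand;
            }

            private void OnCommand(Command command)
            {
                if (command.Module != ModuleType.FileExplorer) return; // not ours, ignore
                if (!IsHandleCreated) return;                          // form not shown yet

                // The receive loop runs off the UI thread, so marshal before touching controls.
                BeginInvoke(new Action(delegate { PopulateListView(command); }));
            }

            private void PopulateListView(Command command)
            {
                // Parse command.Payload here and add the entries to the ListView.
            }
        }

    A variation that avoids every form inspecting every command is a Dictionary<ModuleType, Action<Command>> inside NetworkHandler, so only the interested form's handler is ever invoked.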

    Read the article

  • Making nginx withstand flood attacks

    - by Tiffany Walker
    How can I make it stand up against attacks better? Are there plugins? I'm looking for a way to RATE LIMIT and remain up without slowing down. My setup: user nobody; # no need for more workers in the proxy mode worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; worker_priority -2; error_log /var/log/nginx/error.log info; worker_rlimit_nofile 40480; events { worker_connections 5120; # increase for busier servers use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; disable_symlinks if_not_owner; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 5; gzip on; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.1; gzip_min_length 1000; gzip_comp_level 9; gzip_buffers 16 8k; # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; reset_timedout_connection on; connection_pool_size 256; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 200M; client_body_buffer_size 128k; request_pool_size 32k; output_buffers 4 32k; postpone_output 1460; proxy_temp_path /tmp/nginx_proxy/; client_body_in_file_only on; log_format bytes_log "$msec $bytes_sent ."; include "/etc/nginx/vhosts/*"; } vhost file: server { error_log /var/log/nginx/vhost-error_log warn; listen 194.145.208.19:80; server_name ipxnow.in www.ipxnow.in; access_log /usr/local/apache/domlogs/ipxnow.in-bytes_log bytes_log; access_log /usr/local/apache/domlogs/ipxnow.in combined; root /home/ipxnowin/public_html; location / { location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ { expires 7d; try_files $uri @backend; } error_page 405 = @backend; add_header X-Cache "HIT from Backend"; proxy_pass http://194.145.208.19:8081; include proxy.inc; } location @backend { internal; proxy_pass http://194.145.208.19:8081; include proxy.inc; } location ~ .*\.(php|jsp|cgi|pl|py)?$ { proxy_pass http://194.145.208.19:8081; include proxy.inc; } location ~ /\.ht { deny all; } } and proxy.inc: proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_pass_header Set-Cookie; proxy_redirect off; proxy_hide_header Vary; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
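
    One approach worth trying is nginx's own request and connection limiting, sketched below. limit_req_zone/limit_req have been in nginx for a long time; limit_conn_zone needs nginx 1.1.8 or newer (older builds use limit_zone instead). The zone sizes, rates and burst values here are placeholder numbers to tune for this site, not recommendations.

        # in the http {} block -- track clients by IP
        limit_req_zone  $binary_remote_addr  zone=req_per_ip:10m  rate=10r/s;
        limit_conn_zone $binary_remote_addr  zone=conn_per_ip:10m;

        # in the server {} or a location {} block of the vhost
        limit_req  zone=req_per_ip  burst=20 nodelay;   # queue up to 20 extra requests, then answer 503
        limit_conn conn_per_ip 20;                      # at most 20 concurrent connections per client IP

    Limiting in nginx at least keeps excess requests away from the Apache backend on :8081; for volumetric floods, packet-level limits (iptables/CSF connection tracking) in front of nginx are usually needed as well.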

    Read the article

  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address, that will dispatch HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen to HTTP requests and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run on Ubuntu and are also running a nginx instance. I'm having troubles with caching on a particular website that is fully static: nginx keeps on serving me stale content after files updates, until I: Clear /var/cache/nginx/ and restart nginx or set proxy_cache off for this server and reload the config Here's the detail of my configuration: On the server (proxmox): /etc/nginx/nginx.conf: user www-data; worker_processes 8; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; use epoll; } http { ## # Basic Settings ## sendfile on; #tcp_nopush on; tcp_nodelay on; #keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; client_body_buffer_size 1k; client_max_body_size 8m; large_client_header_buffers 1 1K; ignore_invalid_headers on; client_body_timeout 5; client_header_timeout 5; keepalive_timeout 5 5; send_timeout 5; server_name_in_redirect off; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; # gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; limit_conn_zone $binary_remote_addr zone=gulag:1m; limit_conn gulag 50; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/conf.d/proxy.conf: proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_hide_header X-Powered-By; proxy_intercept_errors on; proxy_buffering on; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m; /etc/nginx/sites-available/my-domain.conf: server { listen 80; server_name .my-domain.com; access_log off; location / { proxy_pass http://my-domain.local:80/; proxy_cache cache; proxy_cache_valid 12h; expires 30d; proxy_cache_use_stale error timeout invalid_header updating; } } On the container (my-domain.local): nginx.conf: (everything is inside the main config file -- it's been done quickly...) user www-data; worker_processes 1; error_log logs/error.log; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; #tcp_nopush on; keepalive_timeout 65; gzip off; server { listen 80; server_name .my-domain.com; root /var/www; access_log logs/host.access.log; } } I've read many blog posts and answers before resolving to posting my own questions... most answers I can see suggest setting sendfile off; but that didn't work for me. I have tried many other things, double checked my settings and all seems fine. So I'm wondering whether I am not expecting nginx's cache to do something it's not meant to...? 
    Basically, I thought that if one of the static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file when it requested it... But I now have the feeling I misunderstood many things. Above all, I now wonder how nginx on the server can even know that a file in the container has changed. I have seen a directive proxy_header_pass (or something like it); should I use this to let the nginx instance in the container somehow inform the one on Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?
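
    As far as I know, the proxy cache cannot be pushed to by the upstream: nginx only goes back to the container when a cached entry expires, so the usual options are a shorter proxy_cache_valid, an explicit bypass condition, or deleting the cached files as you already do. A sketch of the bypass approach is below; X-Refresh is a made-up header name and the TTLs are examples.

        location / {
            proxy_pass http://my-domain.local:80/;
            proxy_cache        cache;
            proxy_cache_valid  200 301 302 10m;   # shorter TTL so edits show up within minutes
            proxy_cache_bypass $http_x_refresh;   # a request sent with "X-Refresh: 1" skips the cached copy,
                                                  # fetches from the container and stores the fresh response
            proxy_cache_use_stale error timeout invalid_header updating;
        }

    Note also that expires 30d in the current block tells the browser itself to cache files for 30 days, which can make content look stale even after the proxy cache has been cleared.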

    Read the article

  • FFmpeg creates empty (black) frames

    - by resamsel
    I have a set of images from a timelapse shot (172 JPG files) that I want to convert into a movie. I tried several parameters with FFmpeg, but all I get is a video with black frames (though it has the expected length). ffmpeg -f image2 -vcodec mjpeg -y -i img_%03d.jpg timelapse2.mpg The command above creates this video: http://sdm-net.org/data/timelapse2.mpg What I'm expecting is something like this (created with Time Lapse Assembler.app): https://vimeo.com/39038362 - This is my fallback option, but I'd really like to create timelapse movies from a script. I'm on OSX Lion (10.7.3) with FFmpeg version (0.10) installed via Homebrew. I also tried to find a proper version of mencoder for OSX, but this doesn't seem to be an easy task. Also, ImageMagick's convert doesn't seem to work nicely, it creates really bad output and it seems there's not much I can do about it... Edit: With libx264 and an mp4 container: ffmpeg -f image2 -y -i img_%03d.jpg -vcodec libx264 timelapse4.mp4 Output: ffmpeg version 0.10 Copyright (c) 2000-2012 the FFmpeg developers built on Mar 26 2012 13:47:02 with clang 3.0 (tags/Apple/clang-211.12) configuration: --prefix=/usr/local/Cellar/ffmpeg/0.10 --enable-shared --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-libfreetype --cc=/usr/bin/clang --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libass --disable-ffplay libavutil 51. 34.101 / 51. 34.101 libavcodec 53. 60.100 / 53. 60.100 libavformat 53. 31.100 / 53. 31.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 60.100 / 2. 60.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 52. 0.100 / 52. 0.100 Input #0, image2, from 'img_%03d.jpg': Duration: 00:00:06.88, start: 0.000000, bitrate: N/A Stream #0:0: Video: mjpeg, yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc [buffer @ 0x7f8ec9415f20] w:3888 h:2592 pixfmt:yuvj420p tb:1/1000000 sar:72/72 sws_param: [libx264 @ 0x7f8ec981d800] using SAR=1/1 [libx264 @ 0x7f8ec981d800] frame MB size (243x162) > level limit (36864) [libx264 @ 0x7f8ec981d800] MB rate (984150) > level limit (983040) [libx264 @ 0x7f8ec981d800] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 AVX [libx264 @ 0x7f8ec981d800] profile High, level 5.1 [libx264 @ 0x7f8ec981d800] 264 - core 120 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'timelapse4.mp4': Metadata: encoder : Lavf53.31.100 Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], q=-1--1, 25 tbn, 25 tbc Stream mapping: Stream #0:0 -> #0:0 (mjpeg -> libx264) Press [q] to stop, [?] 
for help frame= 172 fps= 18 q=-1.0 Lsize= 259kB time=00:00:06.80 bitrate= 312.3kbits/s video:256kB audio:0kB global headers:0kB muxing overhead 1.089647% [libx264 @ 0x7f8ec981d800] frame I:1 Avg QP: 9.60 size:212820 [libx264 @ 0x7f8ec981d800] frame P:43 Avg QP:30.50 size: 291 [libx264 @ 0x7f8ec981d800] frame B:128 Avg QP:31.00 size: 285 [libx264 @ 0x7f8ec981d800] consecutive B-frames: 0.6% 0.0% 1.7% 97.7% [libx264 @ 0x7f8ec981d800] mb I I16..4: 22.5% 77.2% 0.3% [libx264 @ 0x7f8ec981d800] mb P I16..4: 0.0% 0.0% 0.0% P16..4: 0.0% 0.0% 0.0% 0.0% 0.0% skip:100.0% [libx264 @ 0x7f8ec981d800] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 0.0% 0.0% 0.0% direct: 0.0% skip:100.0% L0: 1.2% L1:98.8% BI: 0.0% [libx264 @ 0x7f8ec981d800] 8x8 transform intra:77.2% inter:100.0% [libx264 @ 0x7f8ec981d800] coded y,uvDC,uvAC intra: 41.2% 23.4% 0.6% inter: 0.0% 0.0% 0.0% [libx264 @ 0x7f8ec981d800] i16 v,h,dc,p: 40% 25% 35% 1% [libx264 @ 0x7f8ec981d800] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 36% 32% 30% 1% 0% 0% 0% 0% 0% [libx264 @ 0x7f8ec981d800] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 51% 40% 6% 1% 1% 0% 1% 0% 1% [libx264 @ 0x7f8ec981d800] i8c dc,h,v,p: 60% 21% 19% 0% [libx264 @ 0x7f8ec981d800] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0x7f8ec981d800] ref P L0: 92.3% 0.0% 0.0% 7.7% [libx264 @ 0x7f8ec981d800] ref B L0: 50.0% 0.0% 50.0% [libx264 @ 0x7f8ec981d800] ref B L1: 99.4% 0.6% [libx264 @ 0x7f8ec981d800] kb/s:304.49 Output timelapse4.mp4 (beacause of spam protection I can only post two links with my reputation): http sdm-net.org/data/timelapse4.mp4
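
    Two hints in the log above: libx264 warns that the 3888x2592 frames exceed its level limits ("frame MB size ... > level limit"), and every frame after the first is encoded as a near-empty P/B frame. Below is a sketch that often works for JPEG sequences; it assumes scaling down to 1920x1280 (same 3:2 aspect) is acceptable, sets the input frame rate before -i, and forces yuv420p for player compatibility. Exact option behaviour can vary slightly between ffmpeg releases.

        # 24 source images per second of output, scaled down, H.264 in an MP4 container
        ffmpeg -r 24 -i img_%03d.jpg -vf scale=1920:1280 -c:v libx264 -pix_fmt yuv420p -crf 18 timelapse.mp4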

    Read the article

  • setting up Ubuntu 10.10 as paravirtualized guest in Xen on RHEL5 host - what kernel?

    - by kostmo
    I've discovered the tool ubuntu-vm-builder, which I've installed and then invoked on an Ubuntu workstation as: sudo vmbuilder xen ubuntu --suite maverick --flavour virtual --arch amd64 --mem=512 --rootsize 8192 This workstation is not the intended target host of the virtual machine, however; I would like to host the guest on a Red Hat Enterprise Linux 5 machine that is running Xen 3.0.3. The output of this command appears to be a folder named ubuntu-xen containing three files: tmpXXXXXX, a very large file which I assume is the root partition image tmpYYYYYY, a somewhat large file which I assume is the swap partition image xen.conf, a text file I have copied the xen.conf file to the RHEL server's /etc/xen directory under the new name newvm, adjusting the paths of tempXXXXXX and tempYYYYYYin the file after also copying them from my local workstation to the RHEL server. When I launch the Virtual Machine Manager virt-manager, I can see the newvm virtual machine listed underneath the Dom0 machine. When I try to start newvm, I get the error: Error starting domain: virDomainCreate() failed POST operation failed: (xend.err 'Error creating domain: Kernel image does not exist: None') Indeed, there exists an entry kernel = 'None' in the xen.conf file. How do I find out what the path of the kernel should be? Is this path supposed to be to a kernel stored on the local filesystem of the RHEL5 host, or is it supposed to be a path inside the guest image? I see that the vmbuilder command provides for a --xen-kernel option, along with a --xen-ramdisk option, but I'm not sure what to use for either. I think I should be able to get this to work, since Ubuntu is said to be supported as a Xen guest, even though the Xen 4.0.1 docs state support for only a limited set of distributions, Ubuntu excluded. Update 1 When running vmbuilder on my local workstation, I did observe an output line saying: Calling hook: install_kernel and later, output lines saying: update-initramfs: Generating /boot/initrd.img-2.6.35-23-virtual [...] run-parts: executing /etc/kernel/postinst.d/initramfs-tools 2.6.35-23-virtual /boot/vmlinuz-2.6.35-23-virtual So in the xen.conf file, I tried setting the lines: kernel = '/boot/vmlinuz-2.6.35-23-virtual' ramdisk = '/boot/initrd.img-2.6.35-23-virtual' When trying to start the VM, I got an error similar to last time: Error starting domain: virDomainCreate() failed POST operation failed: (xend.err 'Error creating domain: Kernel image does not exist: /boot/vmlinuz-2.6.35-23-virtual') This makes me think that the RHEL5 machine is looking for local files, rather than a file within the binary guest disk image. After running sudo updatedb on my workstation, neither of those files were found. If the vmbuilder tool had tried to install them, it must have failed. Update 2 I was able to extract the kernel and initrd images from the guest disk binary by mounting it: mkdir mnt_tmp sudo mount ubuntu-xen/tmpXXXXXX mnt_tmp/ -o loop cp mnt_tmp/boot/vmlinuz-2.6.35-23-virtual virtual_kernel_ubuntu cp mnt_tmp/boot/initrd.img-2.6.35-23-virtual virtual_initrd_ubuntu These two files I copied to the RHEL5 server, and edited the xen.conf file to point to them as kernel and ramdisk. With this done, I could "run" the newvm virtual machine from within virt-manager, but was met with the message Console Not Configured For Guest when I double clicked the entry to open the Virtual Machine Console. 
As suggested by a forum, I then added the line vfb = [ 'type=vnc' ] to the configuration file, recreated the virtual machine (a ~10 min process), and this time got the message: Connecting to console for guest This remained indefinitely; after selecting View - Serial Console, I found a kernel panic: [5442621.272173] Kernel panic - not syncing: Attempted to kill the idle task! [5442621.272179] Pid: 0, comm: swapper Tainted: G D 2.6.35-23-virtual #41-Ubuntu [5442621.272184] Call Trace: [5442621.272191] [<ffffffff815a1b81>] panic+0x90/0x111 [5442621.272199] [<ffffffff810652ee>] do_exit+0x3be/0x3f0 [5442621.272204] [<ffffffff815a5e20>] oops_end+0xb0/0xf0 [5442621.272211] [<ffffffff8100ddeb>] die+0x5b/0x90 [5442621.272216] [<ffffffff815a56c4>] do_trap+0xc4/0x170 [5442621.272221] [<ffffffff8100ba35>] do_invalid_op+0x95/0xb0 [5442621.272227] [<ffffffff8130851c>] ? intel_idle+0xac/0x180 [5442621.272232] [<ffffffff810072bf>] ? xen_restore_fl_direct_end+0x0/0x1 [5442621.272239] [<ffffffff815a48fe>] ? _raw_spin_unlock_irqrestore+0x1e/0x30 [5442621.272247] [<ffffffff8108dfb7>] ? tick_broadcast_oneshot_control+0xc7/0x120 [5442621.272253] [<ffffffff8100ad5b>] invalid_op+0x1b/0x20 [5442621.272259] [<ffffffff8130851c>] ? intel_idle+0xac/0x180 [5442621.272264] [<ffffffff813084e0>] ? intel_idle+0x70/0x180 [5442621.272269] [<ffffffff810072bf>] ? xen_restore_fl_direct_end+0x0/0x1 [5442621.272275] [<ffffffff8148a147>] cpuidle_idle_call+0xa7/0x140 [5442621.272281] [<ffffffff81008d93>] cpu_idle+0xb3/0x110 [5442621.272286] [<ffffffff815873aa>] rest_init+0x8a/0x90 [5442621.272291] [<ffffffff81b04c9d>] start_kernel+0x387/0x390 [5442621.272297] [<ffffffff81b04341>] x86_64_start_reservations+0x12c/0x130 [5442621.272303] [<ffffffff81b08002>] xen_start_kernel+0x55d/0x561 Update 3 I tried an i386 architecture instead of amd64, but got the same kernel panic. Also, it seems the Virtual Machine Manager pays attention to the format of the filename of the kernel; for the same kernel binary, I tried simply naming it vmlinuz-virtual, which threw out an error box about an invalid kernel. When I named it vmlinuz-2.6.35-23-virtual, it did not throw the error, but it did still result in the kernel panic shortly thereafter.
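
    For reference, a hand-written xm-style config for a paravirtualised guest on a RHEL5 host usually looks roughly like the sketch below; every path and device name is a placeholder for your own files. It does not by itself address the kernel panic; a 2.6.35 pvops guest kernel may simply be too new for the Xen 3.0.3 hypervisor on early RHEL5, so a newer host Xen or an older guest kernel may be needed.

        # /etc/xen/newvm -- xm-format guest config (all paths are placeholders)
        name    = "newvm"
        memory  = 512
        kernel  = "/var/lib/xen/images/vmlinuz-2.6.35-23-virtual"     # extracted from the guest image
        ramdisk = "/var/lib/xen/images/initrd.img-2.6.35-23-virtual"
        disk    = [ "file:/var/lib/xen/images/tmpXXXXXX,xvda,w",
                    "file:/var/lib/xen/images/tmpYYYYYY,xvdb,w" ]
        root    = "/dev/xvda ro"
        extra   = "console=hvc0"
        vif     = [ "bridge=xenbr0" ]
        vfb     = [ "type=vnc" ]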

    Read the article

  • ESXi v5.5 is having random crashes

    - by Darkmage
    HW: Type: HP Proliant ML350 G5 RAM 22GB CPU 1 x Intel Xenon E5405 2.00GHz OP: ESXi 5.5 just updated from 5.1 to try and fix the crashes occurring on ESXi 5.1 on same hardware. I'm trying to find the error on why one of our servers is crashing, it has had two lock ups in 24 hours now. The internal error light on the front is blinking red, on the inside only "#5 and #6 page 76 manual" the "Processor 2" light "amber" and the "Power" light "green" is shining. in the logs the only errors i can see in the relevant time frame is in log under. Is this the reason? or is there anything else i can do to try and log/locate the error. from zcat syslog.6.gz | less 2014-05-26T11:55:47Z sfcbd[35064]: Error opening socket pair for getProviderContext: Too many open files 2014-05-26T11:55:47Z sfcbd[35064]: Failed to set recv timeout (30) for socket -1. Errno = 9 2014-05-26T11:55:47Z sfcbd[35064]: Failed to set timeout for local socket (e.g. provider) 2014-05-26T11:55:47Z sfcbd[35064]: spGetMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:55:47Z sfcbd[35064]: rcvMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:47Z sfcbd[35064]: Error opening socket pair for getProviderContext: Too many open files 2014-05-26T11:55:47Z sfcbd[35064]: Failed to set recv timeout (30) for socket -1. Errno = 9 2014-05-26T11:55:47Z sfcbd[35064]: Failed to set timeout for local socket (e.g. provider) 2014-05-26T11:55:47Z sfcbd[35064]: spGetMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:55:47Z sfcbd[35064]: rcvMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:47Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:53Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:55:57Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:01Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:04Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:15Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcbd[35064]: Error opening socket pair for getProviderContext: Too many open files 2014-05-26T11:56:17Z sfcbd[35064]: Failed to set recv timeout (30) for socket -1. Errno = 9 2014-05-26T11:56:17Z sfcbd[35064]: Failed to set timeout for local socket (e.g. 
provider) 2014-05-26T11:56:17Z sfcbd[35064]: spGetMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:56:17Z sfcbd[35064]: rcvMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcbd[35064]: Error opening socket pair for getProviderContext: Too many open files 2014-05-26T11:56:17Z sfcbd[35064]: Failed to set recv timeout (30) for socket -1. Errno = 9 2014-05-26T11:56:17Z sfcbd[35064]: Failed to set timeout for local socket (e.g. provider) 2014-05-26T11:56:17Z sfcbd[35064]: spGetMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:56:17Z sfcbd[35064]: rcvMsg receiving from -1 35064-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:17Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:23Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:27Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:31Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:34Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:34Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:34Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:34Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:34Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:44Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:44Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:44Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:44Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:46Z sfcb-ProviderManager[34828]: SendMsg sending to 1 34828-9 Bad file descriptor 2014-05-26T11:56:48Z sfcbd[35064]: Error opening socket pair for getProviderContext: Too many open files -- manual
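
    The repeated "Too many open files" / "Bad file descriptor" lines come from sfcbd, the CIM broker ESXi uses for hardware monitoring, which is known to leak file descriptors on some 5.x builds. Restarting it from the ESXi shell (sketch below, using the standard 5.x init script) usually quiets these messages, but it is unlikely to explain a hard lock-up with the internal health and Processor 2 LEDs lit; that pattern points more toward a CPU/VRM/board fault, so the HP Integrated Management Log (via iLO, if fitted) and a memory/CPU diagnostic are worth checking too.

        # from the ESXi shell (SSH or local console)
        /etc/init.d/sfcbd-watchdog restart    # restart the CIM broker that is flooding the log
        /etc/init.d/sfcbd-watchdog stop       # or stop it entirely if CIM hardware monitoring is not needed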

    Read the article

  • Windows 8, NVIDIA graphics recognition fails

    - by Roy Grubb
    I just installed Windows 8 Pro OEM 64-bit (clean install) and it won't properly recognize my graphics adapter. When I installed Win8, it automatically installed the BasicDisplay.sys driver dated 6/21/2006. 6.2.9200.16384 (win8_rtm.120725-1247). Hardware - Mobo:MSi G41M-P33 Combo CPU:Intel CoreDuo 6600 Graphics:NVIDIA GeForce 9400GT *OS* - Windows 8 Pro 64-bit OEM The graphics adapter worked fine in Windows XP. The PC is a generic box, bought locally and its mobo failed recently, so I replaced it with the G41M. Microsoft wouldn't let me re-activate Windows XP with a different mobo, so I installed Win8, which appears to work except as described next. Win8 only partially recognizes the graphics adapter and won't allow NVIDIA latest driver installer to see that it's an NVIDIA card. As a result, OpenGL doesn't work, and this is needed by the software I most use. Other than that the graphics look OK. When I say 'partially recognizes', I mean that via the Control Panel, I can see that the adapter is described as NVIDIA, but the driver remains stuck at Microsoft Basic Display Adapter no matter what I try, including "Update driver..." in adapter properties. Display Screen Resolution Advanced Settings Adapter shows: Adapter Type: Microsoft Basic Display Adapter Chip Type: NVIDIA DAC Type: NVIDIA Corporation Bios Information: G27 Board - p381n17 Don't know what this means ... no mention of 9400GT Total Available Graphics Memory: 256 MB Dedicated Video Memory: 0 MB In fact the adapter has 512MB on-board video memory. System Video Memory: 0 MB Shared System Memory: 256 MB And Control Panel Device Manager Display adapters just shows Microsoft Basic Display Adapter. No other graphics adapter, and no unknown device or yellow question mark. What I have tried so far: 1. Cleared CMOS and reset. Updated BIOS and all mobo drivers as follows: 1st I used Driver Reviver to see if any driver updates were required. It found some but I didn't use that to get the drivers. Then I switched to MSi's own mobo driver utility Live Update 5. This also showed the board needed to update several so I used it to fetch the new drivers. After that it showed that everything was up to date and I checked with Driver Reviver again, which also reported no drivers now needed updating. Rebooted. Went to the NVIDIA site to get the latest graphics adapter driver. Their auto-detect "Option 2: Automatically find drivers for my NVIDIA products" said "The NVIDIA Smart Scan was unable to evaluate your system hardware. Please use Option 1 to manually find drivers for your NVIDIA products." So I downloaded 310.70-desktop-win8-win7-winvista-64bit-international-whql.exe, which lists 9400 GT under supported products, but when I run it, it says: "NVIDIA Installer cannot continue This graphics driver could not find compatible graphics hardware." Connected the display to the on-board Intel graphics (G41 Intel Express), removed the NVIDIA card and rebooted, changed to internal graphics in CMOS. Again it installs the MS Basic Display Adapter, and can't properly run my s/w that needs OpenGL. It runs on other machines with Intel Express graphics (WinXP and 7) Shut down and pulled out the power cord. Held start button to discharge all capacitors. Removed and re-inserted NVIDIA adapter in PCI-E slot and made sure properly seated. Connected the monitor to the card, screwed plug to socket. Reconnected power cord. Started and checked in BIOS that Primary Graphics Adapter was set to PCI-E. Started Windows. 
Uninstalled MS Basic Display Adapter in Device Manager. Screen blanks briefly, reappears. No Graphics adapter entry was then visible in Device Manager. Restarted PC. MS Basic Display Adapter Visible again in Device Manager. Clicked in Device Manager View Show hidden devices. No other graphics adapter appears, no unknown devices. Rebooted. Tried Scan for Hardware changes. None detected. Tried right-click on MS Basic Display Adapter Properties Driver Update Driver... Search automatically. It replied that it had determined driver was up to date. I checked that there were no graphic driver-related entries in Programs and Features that I could delete (none). Searched for any other drivers with nvidia in their name and deleted them, just keeping the 306.97 installer exe file. Did a Windows Update. Ran GPU-Z which shows (main items): Microsoft Basic Display Adapter GPU G72 BIOS 5.72.22.76.88 Device ID 10DE - 01D5 DDR2 Bus Width 32 Bit Memory size 64MB Driver Version nvlddmkm 6.2.9200.16384 (ForceWare 0.00) / Win8 64 NVIDIA SLI Unknown in the drop-down at the foot, "Microsoft Basic Display Adapter" is the only option If I swap hard disks in that machine to one with a Ubuntu 10.4 installation (originally installed on the same PC), lspci shows "VGA compatible controller as NVIDIA Corporation Device 01d5 (rev a1) (prog-if 00 [VGA controller])" and "kernel driver in use: nvidia" I'm out of ideas for new things to try and would be really grateful of suggestions. Thanks!

    Read the article

  • Windows XP - Repairing Corrupt System32\Config\System File

    - by SimonTewsi
    My apologies for this long post. I would like to describe the mess I'm in then ask some questions about how to fix it: Starting up my Windows XP SP1 machine I got the following message: Windows could not start because the following file is missing or corrupt: \WINDOWS\SYSTEM32\CONFIG\SYSTEM Tried restarting several times with same results then Googled the problem. Tried the fix described here: http://icrontic.com/articles/repair%5Fwindows%5Fxp (since my CPU does not have XD buffer overflow protection I did not set /NOEXECUTE=OPTIN as OS Load Option). This did not work. I then found another fix for the problem on hardwareanalysis.com: Basically, boot to dos prompt (or recovery console if available) and make backups of the following files:- c:\windows\system32\config\system (to c:\windows\tmp\system.bak) c:\windows\system32\config\software (to c:\windows\tmp\software.bak) c:\windows\system32\config\sam (to c:\windows\tmp\sam.bak) c:\windows\system32\config\security (to c:\windows\tmp\security.bak) c:\windows\system32\config\default (to c:\windows\tmp\default.bak) then delete the above files (not the backups!) then copy the above files in c:\windows\repair to the c:\windows\system32\config directory restart your computer This did work (and I wish I'd done it first, since it was completely reversible, unlike the first method). However, afterwards I found that all the user accounts on the PC were gone. I resurrected them by copying the backed up security file back into the system32\config folder (I may have copied the SAM file from backup as well, I cannot remember clearly now). Now the PC boots up and I can log in. However things are still not right. I tried to alter one of the user accounts and found I could not access the User Accounts in the Control Panel. Microsoft KB 919292 had a fix for the problem. However, the fix failed with a Windows Installer error: The Windows Installer Service could not be accessed. This can occur if you are running Windows in safe mode, or if Windows Installer is not correctly installed. Contact your support personnel for assistance. Windows Installer 3.1 was already installed. I reinstalled it but continued to get the Windows Installer error whenever I tried to run the fix in KB 919292. I have since noticed another three problems: 1) Several applications on the PC no longer run, eg Microsoft Word. Shortcuts no longer seem to do anything and if I run the executables directly (eg for Word by running C:\Program Files\Microsoft Office\Office10\Winword.exe) I get a message similar to: "Microsoft Word has not been installed for the current user. Please run setup to install the application." even though the executable is clearly visible in Windows Explorer (and even though Word actually opens - the error dialog appears after Word has opened. Clicking OK to the error dialog closes Word). 2) One or the other of the two fixes I tried for the original problem caused new user profiles to be created. eg My old user profile under the Documents and Settings folder was Simon. The old one still exists but there is now a new one called Simon.DBQ2515. Obviously the new one is being used because Opera (my browser that still works) no longer sees the bookmarks file under my old profile. 3) Probably as a result of fooling around with the Security file, when I try to boot off the Windows XP CD and run the Recovery Console I am now asked for the administrator password. The only problem is there is no administrator account on the PC. 
There is one account, LocalAdmin, that has administrative rights but when I entered the password for that account it did not work. It is so long since I originally set up the PC that I cannot remember if the original administrator account ever had a password and, if so, what it was. So, my question is: How can I fix this mess? In particular: 1) Having tried the two fixes linked to above, have I irrepairably damaged the Windows instance, requiring a clean reinstallation of Windows + all applications, or should it be possible to get the machine working correctly again without such drastic measures? 2) Is there any way to get around the administrator password so I can use the Recovery Console again, given that there is no account called "administrator" and the password for the one account with admin privileges does not work (and that, before I started the second fix, I was not asked for an administrator password)? 3) Is there any easy way to fix the problem with the applications that think they are not installed? 4) Is there any easy way to fix the problem of the Windows Installer that does not work, even if reinstalled? Cheers Simon

    Read the article

  • Nginx reverse proxy IP issue

    - by Tiffany Walker
    For some reason Apache is still seeing my SERVERS ip. Is this an nginx problem? /etc/nginx.conf user nobody; # no need for more workers in the proxy mode worker_processes 4; error_log /var/log/nginx/error.log info; worker_rlimit_nofile 20480; events { worker_connections 5120; # increase for busier servers use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; disable_symlinks if_not_owner; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 5; gzip on; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.1; gzip_min_length 1000; gzip_comp_level 6; gzip_buffers 16 8k; # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; reset_timedout_connection on; connection_pool_size 256; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 200M; client_body_buffer_size 128k; request_pool_size 32k; output_buffers 4 32k; postpone_output 1460; proxy_temp_path /tmp/nginx_proxy/; client_body_in_file_only on; log_format bytes_log "$msec $bytes_sent ."; include "/etc/nginx/vhosts/*"; } proxy.inc proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_pass_header Set-Cookie; proxy_redirect off; proxy_hide_header Vary; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; vhost file: server { error_log /var/log/nginx/vhost-error_log warn; listen 63.6.1.12:80; server_name photo-rolldomain.com www.domain.com; access_log /usr/local/apache/domlogs/domain.com-bytes_log bytes_log; access_log /usr/local/apache/domlogs/domain.com combined; root /home/mtech/public_html; location / { location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ { expires 7d; try_files $uri @backend; } error_page 405 = @backend; add_header X-Cache "HIT from Backend"; proxy_pass http://63.6.1.12:8081; include proxy.inc; } location @backend { internal; proxy_pass http://63.6.1.12:8081; include proxy.inc; } location ~ .*\.(php|jsp|cgi|pl|py)?$ { proxy_pass http://63.6.1.12:8081; include proxy.inc; } location ~ /\.ht { deny all; } }
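
    nginx is already sending the client address in X-Real-IP and X-Forwarded-For (see proxy.inc), so the remaining step is on the Apache side: it has to be told to trust those headers when they arrive from the proxy. A sketch for Apache 2.4's bundled mod_remoteip is below; for Apache 2.2 (common on cPanel builds under /usr/local/apache) the third-party mod_rpaf module plays the same role. The module path and proxy address are assumptions to adapt.

        # httpd.conf (Apache 2.4) -- trust X-Forwarded-For only when it comes from the nginx proxy
        LoadModule remoteip_module modules/mod_remoteip.so
        RemoteIPHeader X-Forwarded-For
        RemoteIPInternalProxy 63.6.1.12

    If the access logs still show the proxy address afterwards, the LogFormat may need %a (client IP) rather than %h.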

    Read the article

  • New cast exception with VS2010/.Net 4

    - by Trevor
    [ Updated 25 May 2010 ] I've recently upgraded from VS2008 to VS2010, and at the same time upgraded to .Net 4. I've recompiled an existing solution of mine and I'm encountering a Cast exception I did not have before. The structure of the code is simple (although the actual implementation somewhat more complicated). Basically I have: public class SomeClass : ISomeClass { // Stuff } public static class ClassFactory { public static IInterface GetClassInstance<IInterface>(Type classType) { return (IInterface)Activator.CreateInstance(classType); // This throws a cast exception } } // Call the factory with: ISomeClass anInstance = ClassFactory.GetClassInstance<ISomeClass>(typeof(SomeClass)); Ignore the 'sensibleness' of the above - its provides just a representation of the issue rather than the specifics of what I'm doing (e.g. constructor parameters have been removed). The marked line throws the exception: Unable to cast object of type 'Namespace.SomeClass' to type 'Namespace.ISomeClass'. I suspect it may have something to do with the additional DotNet security (and in particular, explicit loading of assemblies, as this is something my app does). The reason I suspect this is that I have had to add to the config file the setting: <runtime> <loadFromRemoteSources enabled="true" /> </runtime> .. but I'm unsure if this is related. Update I see (from comments) that my basic code does not reproduce the issue by itself. Not surprising I suppose. It's going to be tricky to identify which part of a largish 3-tier CQS system is relevant to this problem. One issue might be that there are multiple assemblies involved. My static class is actually a factory provider, and the 'SomeClass' is a class factory (relevant in that the factories are 'registered' within the app via explicit assembly/type loading - see below) . Upfront I use reflection to 'register' all factories (i.e. classes that implement a particular interface) and that I do this when the app starts by identifying the relevant assemblies, loading them and adding them to a cache using (in essence): Loop over (file in files) { Assembly assembly = Assembly.LoadFile(file); baseAssemblyList.Add(assembly); } Then I cache the available types in these assemblies with: foreach (Assembly assembly in _loadedAssemblyList) { Type[] assemblyTypes = assembly.GetTypes(); _loadedTypesCache.AddRange(assemblyTypes); } And then I use this cache to do a variety of reflection operations, including 'registering' of factories, which involves looping through all loaded (cached) types and finding those that implement the (base) Factory interface. I've experienced what may be a similar problem in the past (.Net 3.5, so not exactly the same) with an architecture that involved dynamically creating classes on the server and streaming the compiled binary of those classes to the client app. The problem came when trying to deserialize an instance of the dynamic class on the client from a remote call: the exception said the class type was not know, even though the source and destination types were exactly the same name (including namespace). Basically the cross boundry versions of the class were not recognised as being the same. I solved that by intercepting the deserialization process and explicitly defining the deseriazation class type in the context of the local assemblies. 
    This experience is what makes me think the types are considered mismatched because (somehow) the interface of the actual SomeClass object and the interface passed into the generic method are not considered the same type. So (possibly) my question for those more knowledgeable about C#/DotNet is: how does the class loading work such that my app thinks there are two versions/types of the interface type, and how can I fix that? [ whew ... anyone who got here is quite patient .. thanks ]
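
    If the interface assembly really has been loaded twice (once in the default load context and once via Assembly.LoadFile), the two Type objects compare unequal even though their full names match. Below is a small diagnostic sketch along those lines; it uses only the names from the question plus standard reflection members, and is meant to confirm the duplicate-load theory rather than fix it. The usual cures are loading plugins with Assembly.LoadFrom instead of LoadFile, or handling AppDomain.CurrentDomain.AssemblyResolve to hand back the already-loaded copy.

        using System;

        public static class CastDiagnostics
        {
            // classType is the concrete type being activated; IInterface is what the factory expects.
            public static void Report<IInterface>(Type classType)
            {
                Type expected = typeof(IInterface);
                Type actual   = classType.GetInterface(expected.FullName);  // the interface as classType's assembly sees it

                Console.WriteLine("expected: {0}", expected.AssemblyQualifiedName);
                Console.WriteLine("   from:  {0}", expected.Assembly.Location);
                Console.WriteLine("actual:   {0}", actual.AssemblyQualifiedName);
                Console.WriteLine("   from:  {0}", actual.Assembly.Location);

                // False here means the same interface assembly is loaded twice
                // (typically the LoadFile/LoadFrom context vs. the default load context).
                Console.WriteLine("same Type object: {0}", expected == actual);
            }
        }

    Calling CastDiagnostics.Report<ISomeClass>(typeof(SomeClass)) just before the failing cast should show whether the two sides resolve ISomeClass from different assembly locations.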

    Read the article

  • Strange performance behaviour for 64 bit modulo operation

    - by codymanix
    The last three of these method calls take approximately twice as long as the first four. The only difference is that their arguments don't fit in an int anymore. But should this matter? The parameter is declared as long, so it should use long for the calculation anyway. Does the modulo operation use another algorithm for numbers > int.MaxValue? I am using an AMD Athlon 64 3200+, WinXP SP3 and VS2008. Stopwatch sw = new Stopwatch(); TestLong(sw, int.MaxValue - 3l); TestLong(sw, int.MaxValue - 2l); TestLong(sw, int.MaxValue - 1l); TestLong(sw, int.MaxValue); TestLong(sw, int.MaxValue + 1l); TestLong(sw, int.MaxValue + 2l); TestLong(sw, int.MaxValue + 3l); Console.ReadLine(); static void TestLong(Stopwatch sw, long num) { long n = 0; sw.Reset(); sw.Start(); for (long i = 3; i < 20000000; i++) { n += num % i; } sw.Stop(); Console.WriteLine(sw.Elapsed); } EDIT: I now tried the same with C and the issue does not occur there; all modulo operations take the same time, in release and in debug mode, with and without optimizations turned on: #include "stdafx.h" #include "time.h" #include "limits.h" static void TestLong(long long num) { long long n = 0; clock_t t = clock(); for (long long i = 3; i < 20000000LL*100; i++) { n += num % i; } printf("%d - %lld\n", clock()-t, n); } int main() { printf("%i %i %i %i\n\n", sizeof (int), sizeof(long), sizeof(long long), sizeof(void*)); TestLong(3); TestLong(10); TestLong(131); TestLong(INT_MAX - 1L); TestLong(UINT_MAX +1LL); TestLong(INT_MAX + 1LL); TestLong(LLONG_MAX-1LL); getchar(); return 0; } EDIT2: Thanks for the great suggestions. I found that both .NET and C (in debug as well as in release mode) do not use a single CPU instruction to calculate the remainder; they call a function that does it. In the C program I could get its name, which is "_allrem". It also displayed full source comments for this file, so I found the information that this algorithm special-cases 32-bit divisors, rather than dividends as was the case in the .NET application. I also found out that the performance of the C program really is only affected by the value of the divisor but not the dividend. Another test showed that the performance of the remainder function in the .NET program depends on both the dividend and the divisor. BTW: Even simple additions of long long values are calculated by consecutive add and adc instructions. So even if my processor calls itself 64-bit, it really isn't :( EDIT3: I now ran the C app on a Windows 7 x64 edition, compiled with Visual Studio 2010. The funny thing is, the performance behavior stays the same, although now (I checked the assembly source) true 64-bit instructions are used.

    Read the article

  • Screen Casting using ffmpeg (too fast)

    - by rowman
    I can use ffmpeg to make screen casts: ffmpeg -f x11grab -s 1280x800 -i :0.0 -c:v libx264 -framerate 30 -r 30 -crf 18 out.mkv However the output comes out to be too fast paced. It also happens with GTK RecordMyDesktop if I enable the encode on the fly. So, the questions is how to get a normal video pace. Also in order to capture the sound with ffmpeg what option should be used? FFmpeg Output: ffmpeg -f x11grab -s 1280x800 -r 30 -i :0.0 -c:v libx264 -framerate 30 -r 30 -crf 18 out.mkv ffmpeg version N-35162-g87244c8 Copyright (c) 2000-2012 the FFmpeg developers built on Oct 7 2012 15:56:19 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5) configuration: --enable-gpl --enable-libfaac --enable-libfdk-aac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-x11grab --enable-libx264 --enable-nonfree --enable-version3 libavutil 51. 73.102 / 51. 73.102 libavcodec 54. 64.100 / 54. 64.100 libavformat 54. 29.105 / 54. 29.105 libavdevice 54. 3.100 / 54. 3.100 libavfilter 3. 19.102 / 3. 19.102 libswscale 2. 1.101 / 2. 1.101 libswresample 0. 16.100 / 0. 16.100 libpostproc 52. 1.100 / 52. 1.100 [x11grab @ 0xab896a0] device: :0.0 -> display: :0.0 x: 0 y: 0 width: 1280 height: 800 [x11grab @ 0xab896a0] shared memory extension found [x11grab @ 0xab896a0] Estimating duration from bitrate, this may be inaccurate Input #0, x11grab, from ':0.0': Duration: N/A, start: 1350136942.608988, bitrate: 983040 kb/s Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1280x800, 983040 kb/s, 30 tbr, 1000k tbn, 30 tbc [libx264 @ 0xab87320] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64 SlowCTZ SlowAtom [libx264 @ 0xab87320] profile High 4:4:4 Predictive, level 3.2, 4:4:4 8-bit [libx264 @ 0xab87320] 264 - core 128 r2 198a7ea - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=18.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, matroska, to 'out.mkv': Metadata: encoder : Lavf54.29.105 Stream #0:0: Video: h264, yuv444p, 1280x800, q=-1--1, 1k tbn, 30 tbc Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Press [q] to stop, [?] 
for help frame= 10 fps=0.0 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 19 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 28 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 37 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 45 fps= 16 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 47 fps= 14 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 52 fps= 13 q=24.0 size= 257kB time=00:00:00.00 bitrate=2101632.0kbiframe= 55 fps= 12 q=24.0 size= 257kB time=00:00:00.10 bitrate=20808.2kbitsframe= 59 fps= 11 q=24.0 size= 289kB time=00:00:00.23 bitrate=10145.0kbitsframe= 64 fps= 11 q=24.0 size= 289kB time=00:00:00.40 bitrate=5894.7kbits/frame= 70 fps= 11 q=24.0 size= 289kB time=00:00:00.60 bitrate=3933.1kbits/frame= 72 fps= 10 q=24.0 size= 289kB time=00:00:00.66 bitrate=3549.2kbits/frame= 77 fps=9.8 q=24.0 size= 289kB time=00:00:00.83 bitrate=2837.7kbits/frame= 80 fps=9.6 q=24.0 size= 289kB time=00:00:00.93 bitrate=2533.5kbits/frame= 85 fps=9.3 q=24.0 size= 289kB time=00:00:01.10 bitrate=2146.9kbits/frame= 89 fps=9.3 q=24.0 size= 289kB time=00:00:01.23 bitrate=1917.1kbits/frame= 92 fps=9.1 q=24.0 size= 289kB time=00:00:01.33 bitrate=1773.3kbits/frame= 96 fps=9.0 q=24.0 size= 289kB time=00:00:01.46 bitrate=1612.4kbits/frame= 99 fps=8.8 q=24.0 size= 321kB time=00:00:01.56 bitrate=1676.8kbits/frame= 104 fps=8.7 q=24.0 size= 321kB time=00:00:01.73 bitrate=1515.2kbits/frame= 109 fps=5.3 q=24.0 Lsize= 1093kB time=00:00:03.56 bitrate=2511.5kbits/s video:1092kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.120198% [libx264 @ 0xab87320] frame I:3 Avg QP:18.93 size:142610 [libx264 @ 0xab87320] frame P:43 Avg QP:20.79 size: 15751 [libx264 @ 0xab87320] frame B:63 Avg QP:23.75 size: 195 [libx264 @ 0xab87320] consecutive B-frames: 21.1% 1.8% 11.0% 66.1% [libx264 @ 0xab87320] mb I I16..4: 50.0% 21.1% 28.9% [libx264 @ 0xab87320] mb P I16..4: 6.1% 0.9% 3.2% P16..4: 5.5% 1.2% 0.6% 0.0% 0.0% skip:82.5% [libx264 @ 0xab87320] mb B I16..4: 0.4% 0.1% 0.0% B16..8: 2.9% 0.1% 0.0% direct: 0.0% skip:96.5% L0:40.7% L1:57.0% BI: 2.3% [libx264 @ 0xab87320] 8x8 transform intra:14.5% inter:46.1% [libx264 @ 0xab87320] coded y,u,v intra: 33.5% 24.1% 25.4% inter: 0.9% 0.4% 0.4% [libx264 @ 0xab87320] i16 v,h,dc,p: 70% 26% 1% 3% [libx264 @ 0xab87320] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 11% 21% 30% 5% 7% 5% 7% 4% 10% [libx264 @ 0xab87320] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 35% 12% 2% 4% 3% 4% 3% 5% [libx264 @ 0xab87320] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0xab87320] ref P L0: 57.0% 5.6% 26.8% 10.6% [libx264 @ 0xab87320] ref B L0: 69.4% 22.6% 8.0% [libx264 @ 0xab87320] ref B L1: 93.7% 6.3% [libx264 @ 0xab87320] kb/s:2460.40

    Read the article

  • Server HTTP Load times slow?

    - by cdog5000
    Hello, My server @ codemeh.com (HTTP Server) seems to be randomly loading slowly, I cannot tell if it just my forums (http://www.codemeh.com/forums/) that are loading slowly or if the WHOLE site is just loading slowly since my forums are the largest thing on the site right now. load average: 0.02, 0.17, 0.20 That is super low to my knowledge. I have tried Google Page Analytic plug-in for FireFox to solve the problem but nothing comes up that is VERY bad. If someone could investigate this for me since I am very new at apache and server configurations. Thanks! (top): PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 7493 www-data 15 0 98.2m 16m 9092 S 3 0.8 0:27.24 apache2 26429 www-data 15 0 98.2m 15m 7392 S 3 0.7 0:03.45 apache2 26477 www-data 17 0 98.2m 15m 7396 S 3 0.7 0:03.16 apache2 1 root 15 0 2468 1384 1156 S 0 0.1 0:00.49 init 1367 root 25 0 2564 816 660 S 0 0.0 0:00.00 xinetd 1526 root 15 0 29576 5420 1976 S 0 0.3 1:02.69 fail2ban-server 3703 root 15 0 13512 9312 1696 S 0 0.4 0:11.59 miniserv.pl 3915 postfix 15 0 6056 1652 1320 S 0 0.1 0:00.00 pickup 4010 root 15 0 4548 1296 972 S 0 0.1 0:37.36 ntpd 7448 root 15 0 98528 26m 20m S 0 1.3 0:00.27 apache2 7454 www-data 18 0 33580 2616 368 S 0 0.1 0:00.04 apache2 7528 www-data 18 0 108m 24m 15m S 0 1.2 0:27.60 apache2 7974 root 16 0 8700 2728 2164 S 0 0.1 0:00.08 sshd 8123 cdog5000 15 0 8832 1596 896 S 0 0.1 0:00.00 sshd 8126 cdog5000 18 0 4484 1716 1384 S 0 0.1 0:00.00 bash 8141 cdog5000 15 0 2344 980 796 R 0 0.0 0:00.11 top 13461 root 15 0 8700 2728 2164 S 0 0.1 0:00.07 sshd 13567 cdog5000 18 0 8832 1492 896 S 0 0.1 0:00.33 sshd 13569 cdog5000 18 0 4484 1728 1388 S 0 0.1 0:00.09 bash 17983 root 15 0 4392 1268 988 S 0 0.1 0:00.00 su 17987 root 15 0 4516 1752 1380 S 0 0.1 0:00.09 bash 18081 www-data 15 0 98.2m 14m 6588 S 0 0.7 0:04.91 apache2 20000 www-data 15 0 98.3m 15m 8040 S 0 0.8 0:02.45 apache2 20019 www-data 15 0 98.2m 14m 6808 S 0 0.7 0:04.97 apache2 30343 root 15 0 3964 1012 764 S 0 0.0 0:00.03 vsftpd 30382 root 15 0 2304 908 716 S 0 0.0 0:00.62 cron 30401 mysql 17 0 141m 17m 5416 S 0 0.9 1:02.20 mysqld 30424 root 15 0 5472 912 504 S 0 0.0 0:00.04 sshd 30473 syslog 15 0 1916 676 536 S 0 0.0 0:01.02 syslogd 30611 amavis 15 0 33872 25m 2292 S 0 1.2 0:03.11 amavisd-new 31890 amavis 18 0 34888 24m 1792 S 0 1.2 0:00.00 amavisd-new 31891 amavis 18 0 34888 24m 1784 S 0 1.2 0:00.00 amavisd-new 32397 clamav 18 0 104m 84m 1272 S 0 4.1 1:06.46 clamd 32563 clamav 15 0 12832 5716 4440 S 0 0.3 0:01.29 freshclam 32573 root 23 0 1892 456 372 S 0 0.0 0:00.00 courierlogger 32575 root 18 0 2096 684 544 S 0 0.0 0:00.01 authdaemond 32583 root 23 0 1892 360 284 S 0 0.0 0:00.00 courierlogger 32584 root 24 0 2000 612 516 S 0 0.0 0:00.00 couriertcpd 32598 root 23 0 1892 360 284 S 0 0.0 0:00.00 courierlogger 32599 root 25 0 2000 612 516 S 0 0.0 0:00.00 couriertcpd 32604 root 18 0 1892 460 372 S 0 0.0 0:00.00 courierlogger 32605 root 18 0 2000 624 532 S 0 0.0 0:00.00 couriertcpd 32607 root 18 0 2308 404 256 S 0 0.0 0:00.02 authdaemond 32608 root 18 0 2096 260 116 S 0 0.0 0:00.03 authdaemond 32609 root 15 0 2308 404 256 S 0 0.0 0:00.03 authdaemond 32610 root 18 0 2096 260 116 S 0 0.0 0:00.02 authdaemond 32612 root 18 0 2308 404 256 S 0 0.0 0:00.02 authdaemond 32621 root 24 0 1892 364 284 S 0 0.0 0:00.00 courierlogger 32622 root 25 0 2000 608 516 S 0 0.0 0:00.00 couriertcpd 32633 root 15 0 105m 936 716 S 0 0.0 0:02.26 nscd 32719 root 16 0 6252 1680 1344 S 0 0.1 0:01.24 master 32738 postfix 15 0 6188 1776 1400 S 0 0.1 0:00.44 qmgr 32758 postfix 15 0 
6492 2564 1788 S 0 0.1 0:00.14 tlsmgr (/etc/apache2/sites-available/default): NameVirtualHost * <VirtualHost *> ServerAdmin webmaster@localhost DocumentRoot /var/www/web1/web/ <Directory /var/www/web1/web/> Options Indexes MultiViews AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> I have fail2ban server and I dont have any firewall at this point and time that I know of. SMF is 2.0 RC4 and apache version is 2.2.14. I run a MySQL server on another box in the same DC (Persistent Connection). I installed eAccelerator today and it didnt help.

    Read the article

  • What's a clean way to have the server return a JavaScript function which would then be invoked?

    - by Khnle
    My application is architected as a host of plug-ins that have not yet been written. There's a long reason for this, but with each new year, the business logic will be different and we don't know what it will be like (think of TurboTax if that helps). The plug-ins consist of both server and client components. The server components deal with business logic and persisting the data into database tables which will also be created at a later time. The JavaScript manipulates the DOM for the browsers to render afterward. Each plugin lives in a separate assembly, so that they won't disturb the main application, i.e., we don't want to recompile the main application. Long story short, I am looking for a way to return JavaScript functions to the client from an Ajax get request, and execute those JavaScript functions once they are returned. Invoking a function in JavaScript is easy. The hard part is how to organize or structure this so that I won't have to deal with maintenance problems. So concatenating with a StringBuilder and ending up with JavaScript code by calling ToString() on the string builder object is out of the question. I want there to be no difference between writing JavaScript code normally and writing JavaScript code for this dynamic purpose. An alternative is to manipulate the DOM on the server side, but I doubt that it would be as elegant as using jQuery on the client side. I am open to a C# library that supports chainable calls like jQuery and also manipulates the DOM. Do you have any ideas, or is it too much to ask, or are you too confused? Edit1: The point is to avoid recompiling, hence the plug-in architecture. In some other parts of the program, I already use the concept of dynamically loading JavaScript files. That works fine. What I am looking at here is somewhere in the middle of the program, when an Ajax request is sent to the server. Edit 2: To illustrate my question: normally, you would see the following code. An Ajax request is sent to the server, and a JSON result is returned to the client, which then uses jQuery to manipulate the DOM (creating a tag and adding it to the container in this case). $.ajax({ type: 'get', url: someUrl, data: {'': ''}, success: function(data) { var ul = $('<ul>').appendTo(container); var decoded = $.parseJSON(data); $.each(decoded, function(i, e) { var li = $('<li>').text(e.FIELD1 + ',' + e.FIELD2 + ',' + e.FIELD3); ul.append(li); }); } }); The above is extremely simple. But next year, what the server returns will be totally different, and how the data is to be rendered will also be different. In a way, this is what I want: var container = $('#some-existing-element-on-the-page'); $.ajax({ type: 'get', url: someUrl, data: {'': ''}, success: function(data) { var decoded = $.parseJSON(data); var fx = decoded.fx; var data = decoded.data; //fx is the dynamic function that creates the DOM from the data and appends it to the existing container fx(container, data); } }); I don't need to know, at this time, what the data will look like, but in the future I will, and I can then write fx accordingly.
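    One way to keep the fx idea maintainable without ever concatenating JavaScript on the server: each plug-in ships its render function in an ordinary .js file (loaded dynamically, as already done elsewhere in the app) and the Ajax endpoint returns only the function's name plus its data. A hedged sketch of the server half in C# follows; the interface, handler and member names are invented purely for illustration:

        using System.Collections.Generic;
        using System.Web;
        using System.Web.Script.Serialization;

        // Hypothetical plug-in contract: the plug-in names a client-side function
        // (shipped in its own .js file) and supplies the data that function needs.
        public interface IPresentationPlugin
        {
            string ClientFunctionName { get; }          // e.g. "renderSurvey2011"
            object GetData(HttpContext context);        // whatever the function consumes
        }

        public class PluginAjaxHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Plug-in lookup is elided; assume it comes from the plug-in registry.
                IPresentationPlugin plugin = ResolvePlugin(context.Request["plugin"]);

                var payload = new Dictionary<string, object>
                {
                    { "fx", plugin.ClientFunctionName },
                    { "data", plugin.GetData(context) }
                };

                context.Response.ContentType = "application/json";
                context.Response.Write(new JavaScriptSerializer().Serialize(payload));
            }

            private IPresentationPlugin ResolvePlugin(string name)
            {
                // Placeholder: the real app would consult its plug-in registry here.
                throw new System.NotImplementedException();
            }
        }

    On the client, the success callback would then call window[decoded.fx](container, decoded.data), so only data and a function name cross the wire - never function bodies built with a StringBuilder.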

    Read the article

  • Sporadic unspecific kernel panic

    - by koma
    I'm experiencing seldom (so far about once a month) hard crashes on our ubuntu server 10.04 LTS box. The box itself is quite old (Dell PowerEdge 750 from 2004, Pentium4 2.8 GHz). I set up netconsole after it crashed twice last thursday and was able to extract the following output: [ 9354.062473] invalid opcode: 0000 [#1] SMP [ 9354.062516] last sysfs file: /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/uevent [ 9354.062555] Modules linked in: ppdev adm1026 hwmon_vid i2c_i801 bridge stp dcdbas psmouse serio_raw netconsole configfs shpchp lp parport usbhid hid e1000 [ 9354.062685] [ 9354.062704] Pid: 3988, comm: rsync Not tainted 2.6.38-12-generic-pae #51~lucid1-Ubuntu Dell Computer Corporation PowerEdge 750 /0R1479 [ 9354.062773] EIP: 0060:[<c104fef1>] EFLAGS: 00010046 CPU: 1 [ 9354.062802] EIP is at check_preempt_wakeup+0x181/0x250 [ 9354.062826] EAX: 00000002 EBX: f2a10ccc ECX: 00000000 EDX: 00000002 [ 9354.062850] ESI: f1db71cc EDI: f1db71a0 EBP: f1dbdea8 ESP: f1dbde8c [ 9354.062875] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 [ 9354.062900] Process rsync (pid: 3988, ti=f1dbc000 task=f1db71a0 task.ti=f1dbc000) [ 9354.062933] Stack: [ 9354.062951] 0053ea60 f7907680 f28da840 f2a10ca0 c153ea60 f7907680 c153ea60 f1dbdebc [ 9354.063019] c103f98a f2a10ca0 f7907680 00000001 f1dbdef8 c104f97f 00000000 f2f0bacc [ 9354.063088] f7904338 00000001 00000003 00000000 f2f0bacc 00000001 00000001 00000086 [ 9354.063157] Call Trace: [ 9354.063183] [<c103f98a>] check_preempt_curr+0x6a/0x80 [ 9354.063210] [<c104f97f>] try_to_wake_up+0x5f/0x3f0 [ 9354.063236] [<c1077a00>] ? hrtimer_wakeup+0x0/0x30 [ 9354.063261] [<c104fd64>] wake_up_process+0x14/0x20 [ 9354.063286] [<c1077a1d>] hrtimer_wakeup+0x1d/0x30 [ 9354.063310] [<c1077f4a>] __run_hrtimer+0x7a/0x1c0 [ 9354.063336] [<c107dbad>] ? ktime_get+0x6d/0x110 [ 9354.063360] [<c1078310>] hrtimer_interrupt+0x120/0x2b0 [ 9354.063390] [<c1535c36>] smp_apic_timer_interrupt+0x56/0x8a [ 9354.063418] [<c152f459>] apic_timer_interrupt+0x31/0x38 [ 9354.063446] [<c1520000>] ? mca_attach_bus+0x5/0xc0 [ 9354.063469] Code: 8b 9b 20 01 00 00 8b 86 24 01 00 00 3b 83 24 01 00 00 75 e6 85 db 0f 84 a3 00 00 00 89 da 89 f0 e8 75 f6 fe ff 83 f8 01 0f 85 00 <fe> ff ff 89 f8 e8 95 f9 fe ff 8b 5e 1c 85 db 0f 84 e4 fe ff ff [ 9354.063804] EIP: [<c104fef1>] check_preempt_wakeup+0x181/0x250 SS:ESP 0068:f1dbde8c [ 9354.064231] ---[ end trace 290689cea65aea7f ]--- [ 9354.064290] Kernel panic - not syncing: Fatal exception in interrupt [ 9354.064352] Pid: 3988, comm: rsync Tainted: G D 2.6.38-12-generic-pae #51~lucid1-Ubuntu [ 9354.064424] Call Trace: [ 9354.064481] [<c152c057>] ? panic+0x5c/0x15b [ 9354.064539] [<c15302bd>] ? oops_end+0xcd/0xd0 [ 9354.064539] [<c100d9e4>] ? die+0x54/0x80 [ 9354.064539] [<c152f926>] ? do_trap+0x96/0xc0 [ 9354.064539] [<c100ba00>] ? do_invalid_op+0x0/0xa0 [ 9354.064539] [<c100ba8b>] ? do_invalid_op+0x8b/0xa0 [ 9354.064539] [<c104fef1>] ? check_preempt_wakeup+0x181/0x250 [ 9354.064539] [<c144884d>] ? __kfree_skb+0x3d/0x90 [ 9354.064539] [<c1042ae7>] ? update_curr+0x247/0x2a0 [ 9354.064539] [<c10447bb>] ? update_cfs_load+0x11b/0x2d0 [ 9354.064539] [<c1042a25>] ? update_curr+0x185/0x2a0 [ 9354.064539] [<c152f6bf>] ? error_code+0x67/0x6c [ 9354.064539] [<c104fef1>] ? check_preempt_wakeup+0x181/0x250 [ 9354.064539] [<c103f98a>] ? check_preempt_curr+0x6a/0x80 [ 9354.064539] [<c104f97f>] ? try_to_wake_up+0x5f/0x3f0 [ 9354.064539] [<c1077a00>] ? hrtimer_wakeup+0x0/0x30 [ 9354.064539] [<c104fd64>] ? wake_up_process+0x14/0x20 [ 9354.064539] [<c1077a1d>] ? 
hrtimer_wakeup+0x1d/0x30 [ 9354.064539] [<c1077f4a>] ? __run_hrtimer+0x7a/0x1c0 [ 9354.064539] [<c107dbad>] ? ktime_get+0x6d/0x110 [ 9354.064539] [<c1078310>] ? hrtimer_interrupt+0x120/0x2b0 [ 9354.064539] [<c1535c36>] ? smp_apic_timer_interrupt+0x56/0x8a [ 9354.064539] [<c152f459>] ? apic_timer_interrupt+0x31/0x38 [ 9354.064539] [<c1520000>] ? mca_attach_bus+0x5/0xc0 Googling for this issue didn't really turn up anything useful (most stuff I found was related to btrfs, but I don't use that, although the module exists and is sometimes loaded). From experience it might have to do with relatively heavy I/O, as two of the panics happened during a backup procedure. Kernel is 2.6.38-12-generic-pae, but I'm pretty sure I also saw panics on 2.6.32. I meanwhile upgraded to 3.0.0-17-generic-pae and am waiting for the next crash ;-) I'm at a loss here, so any pointers where to look for the cause or what it could be would be great :-) Thanks !

    Read the article

  • how to use ajax with json in ruby on rails

    - by fenec
    I am implemeting a facebook application in rails using facebooker plugin, therefore it is very important to use this architecture if i want to update multiple DOM in my page. if my code works in a regular rails application it would work in my facebook application. i am trying to use ajax to let the user know that the comment was sent, and update the comments bloc. migration: class CreateComments < ActiveRecord::Migration def self.up create_table :comments do |t| t.string :body t.timestamps end end def self.down drop_table :comments end end controller: class CommentsController < ApplicationController def index @comments=Comment.all end def create @comment=Comment.create(params[:comment]) if request.xhr? @comments=Comment.all render :json=>{:ids_to_update=>[:all_comments,:form_message], :all_comments=>render_to_string(:partial=>"comments" ), :form_message=>"Your comment has been added." } else redirect_to comments_url end end end view: <script> function update_count(str,message_id) { len=str.length; if (len < 200) { $(message_id).innerHTML="<span style='color: green'>"+ (200-len)+" remaining</span>"; } else { $(message_id).innerHTML="<span style='color: red'>"+ "Comment too long. Only 200 characters allowed.</span>"; } } function update_multiple(json) { for( var i=0; i<json["ids_to_update"].length; i++ ) { id=json["ids_to_update"][i]; $(id).innerHTML=json[id]; } } </script> <div id="all_comments" > <%= render :partial=>"comments/comments" %> </div> Talk some trash: <br /> <% remote_form_for Comment.new, :url=>comments_url, :success=>"update_multiple(request)" do |f|%> <%= f.text_area :body, :onchange=>"update_count(this.getValue(),'remaining');" , :onkeyup=>"update_count(this.getValue(),'remaining');" %> <br /> <%= f.submit 'Post'%> <% end %> <p id="remaining" >&nbsp;</p> <p id="form_message" >&nbsp;</p> <br><br> <br> if i try to do alert(json) in the first line of the update_multiple function , i got an [object Object]. if i try to do alert(json["ids_to_update"][0]) in the first line of the update_multiple function , there is no dialog box displayed. however the comment got saved but nothing is updated. questions: 1.how can javascript and rails know that i am dealing with json objects? 2.how can i debug this problem? 3.how can i get it to work?

    Read the article

  • Clearcase - selective merge.

    - by Keshav
    Hi, I have a peculiar ClearCase question. I cannot fully describe why I'm doing such a confusing architecture, but I need to do it (thanks to a mistake made by someone long back). Ok, here's a bit of detail: B1 is a contaminated branch where both my group's changes and another group's changes got mixed together so badly that there is no way of finding which code is whose. So the solution proposed is to create a new branch called B2 (at the same level as B1) and put all the unmodified code of the other group on it (the way to do that would be to merge B1 into B2 and then go about removing all changes from it till it becomes original). Then create a CR branch on B1 and keep only my group's newly added or modified files on that branch. Finally create an integration branch out of B2 and merge the changes from the CR branch of B1 to the integration branch of B2. So here is what I did: (the use case is where I have dir D containing files a, b and c. My group ended up modifying file a, while b and c were not modified at all). There is a branch B1 on which there are files a, b and c. There is another branch B2. A merge is done from B1 to B2. Now B2 also has a, b and c. At this point both branches B1 and B2 are the same. Now I delete file a from branch B2 (rmname). Now B2 has b and c only. I put a label on this branch called Label1. This makes the code with label Label1 the unmodified code from the other group. Now I create a sub-branch called CR1 from B1 and delete all the files that are on the B2 branch (i.e. b and c) so that it contains only the modified code. In my case that is file a. At this point branch B2 with label Label1 has files b and c (the unmodified code) and branch CR1 coming off B1 has only a (which was modified by us). Now I create another branch, called the integration branch, that comes off B2 Label1. And then I do a merge of the CR branch onto that, expecting that it will have all three files a, b and c for me. All I'd need to do then is a version tree view to see who modified what. But the problem I face is that, since I had done an rmname of file a on branch B2 before putting on the label, the merge does not really take file a from the CR branch. How do I get around that problem? I want to merge selectively. Is it possible? Sorry if it is a bad design. I'm not really conversant with ClearCase and have limited options and time to clean up someone else's mess.

    Read the article

  • LXC Container Networking

    - by digitaladdictions
    I just started to experiment with LXC containers. I was able to create a container and start it up but I cannot get dhcp to assign the container an IP address. If I assign a static address the container can ping the host IP but not outside the host IP. The host is CentOS 6.5 and the guest is Ubuntu 14.04LTS. I used the template downloaded by lxc-create -t download -n cn-01 command. If I am trying to get an IP address on the same subnet as the host I don't believe I should need the IP tables rule for masquerading but I added it anyways. Same with IP forwarding. I compiled LXC by hand from the following source https://linuxcontainers.org/downloads/lxc-1.0.4.tar.gz Host Operating System Version #> cat /etc/redhat-release CentOS release 6.5 (Final) #> uname -a Linux localhost.localdomain 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Container Config #> cat /usr/local/var/lib/lxc/cn-01/config # Template used to create this container: /usr/local/share/lxc/templates/lxc-download # Parameters passed to the template: # For additional config options, please look at lxc.container.conf(5) # Distribution configuration lxc.include = /usr/local/share/lxc/config/ubuntu.common.conf lxc.arch = x86_64 # Container specific configuration lxc.rootfs = /usr/local/var/lib/lxc/cn-01/rootfs lxc.utsname = cn-01 # Network configuration lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 LXC default.confu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:f #> cat /usr/local/etc/lxc/default.conf lxc.network.type = veth lxc.network.link = br0 lxc.network.flags = up #> lxc-checkconfig Kernel configuration not found at /proc/config.gz; searching... Kernel configuration found at /boot/config-2.6.32-431.20.3.el6.x86_64 --- Namespaces --- Namespaces: enabled Utsname namespace: enabled Ipc namespace: enabled Pid namespace: enabled User namespace: enabled Network namespace: enabled Multiple /dev/pts instances: enabled --- Control groups --- Cgroup: enabled Cgroup namespace: enabled Cgroup device: enabled Cgroup sched: enabled Cgroup cpu account: enabled Cgroup memory controller: /usr/local/bin/lxc-checkconfig: line 103: [: too many arguments enabled Cgroup cpuset: enabled --- Misc --- Veth pair device: enabled Macvlan: enabled Vlan: enabled File capabilities: /usr/local/bin/lxc-checkconfig: line 118: [: -gt: unary operator expected Note : Before booting a new kernel, you can check its configuration usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig Network Config (HOST) #> cat /etc/sysconfig/network-scripts/ifcfg-br0 DEVICE=br0 TYPE=Bridge BOOTPROTO=dhcp ONBOOT=yes #> cat /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 ONBOOT=yes TYPE=Ethernet IPV6INIT=no USERCTL=no BRIDGE=br0 #> cat /etc/networks default 0.0.0.0 loopback 127.0.0.0 link-local 169.254.0.0 #> ip a s 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever 3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 42:7e:43:b3:61:c5 brd ff:ff:ff:ff:ff:ff 4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 
00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff inet 10.60.70.121/24 brd 10.60.70.255 scope global br0 inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever 12: vethT6BGL2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether fe:a1:69:af:50:17 brd ff:ff:ff:ff:ff:ff inet6 fe80::fca1:69ff:feaf:5017/64 scope link valid_lft forever preferred_lft forever #> brctl show bridge name bridge id STP enabled interfaces br0 8000.000c291230f2 no eth0 vethT6BGL2 pan0 8000.000000000000 no #> cat /proc/sys/net/ipv4/ip_forward 1 # Generated by iptables-save v1.4.7 on Fri Jul 11 15:11:36 2014 *nat :PREROUTING ACCEPT [34:6287] :POSTROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A POSTROUTING -o eth0 -j MASQUERADE COMMIT # Completed on Fri Jul 11 15:11:36 2014 Network Config (Container) #> cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp #> ip a s 11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 02:69:fb:42:ee:d7 brd ff:ff:ff:ff:ff:ff inet6 fe80::69:fbff:fe42:eed7/64 scope link valid_lft forever preferred_lft forever 13: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever

    Read the article

  • Linux/Apache performance very slow even on local network

    - by klausch
    I have an Ubuntu server machine running Apache and MYSQL. System and version info is as follows: Linux kernel 3.0.0.-12 Apache/2.2.20 MySQL Ver 14.14.Distrib 5.1.58 I am running a few websites on this server, some HTML only, some PHP/MySQL. THe [problem is that response time is very slow, both on static as well as the dynamic sites. Sometimes it takes more than 10 seconds before a response is given, this makes the sites very slow and almost unusable. The problem occurs even when requesting from the local network. I have added the involved subdomains to my /etc/hosts file, and abolve all the problem is not solved by using IP numbers instead of URL's. So there is no DNS lookup issue. I have modified the log format by showing the response times and sometimes a files takes 12 seconds to be served, see the jquery~.js file in the example screenshot. I have no explanation for this extremely long response time, but is is not even the only issue here, some other files takes a long time to be served too, but do not show a long response time in the log file. So probably different tissues are involved here. I cannot find a solution until now, any suggestions??? THanx in advance, Klaas link to screenshot picture from access logfile Some extra configuration info: apache2.conf (comment is removed) LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 300 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_event_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType text/plain HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn Include mods-enabled/*.load Include mods-enabled/*.conf Include httpd.conf Include ports.conf LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %T/%D" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent Include conf.d/ Include sites-enabled/ And the virtual hostfile for one of the slow sites, in fact it is pretty straightforward... <VirtualHost *:80> ServerAdmin [email protected] ServerSignature EMail ServerName toenjoy.drsklaus.nl DocumentRoot /var/www/toenjoy.drsklaus.nl <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/toenjoy.drsklaus.nl/> Options Indexes FollowSymLinks MultiViews AllowOverride AuthConfig AuthType Basic AuthName "To Enjoy" AuthUserFile /etc/.htpasswd Require user petraaa Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. 
LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> And the output of free -m: klaas@ubuntu-server:/etc/apache2$ free -m total used free shared buffers cached Mem: 1997 1401 595 0 144 1017 -/+ buffers/cache: 238 1758 Swap: 2035 0 2035 and I have no indication that swapping occurs on the moments the site is slow. I have runned top and it does not appear to be a CPU issue. I have the impression that the spawning of a apache thread could maybe be the bottleneck but it is just a suggestion. Maybe this gives some extra information! EDIT: The problem seemed to be gone for some time but occurs again! And not only with Apache, also connecting using SSH takes a tremendous time, sometimes it takes up to 15 seconds before the keyphrase is asked for. Also scp works very slowly. The behavious is really unpredoctable and makes the server very hard to use. Any ideas...?

    Read the article

  • iOS: Interpreted code - where do they draw the line?

    - by d7samurai
    Apple's iOS developer guidelines state: 3.3.2 — An Application may not itself install or launch other executable code by any means, including without limitation through the use of a plug-in architecture, calling other frameworks, other APIs or otherwise. No interpreted code may be downloaded or used in an Application except for code that is interpreted and run by Apple’s Documented APIs and built-in interpreter(s). Assuming that downloading data - like XML and images (or a game level description), for example - at run-time is allowed (as is my impression), I am wondering where they draw the line between "data" and "code". Picture the scenario of an app that delivers interactive "presentations" to users (like a survey, for instance). Presentations are added continuously to the server and different presentations are made available to different users, so they cannot be part of the initial app download (this is the whole point). They are described in XML format, but being interactive, they might contain conditional branching of this sort (shown in pseudo form to exemplify): <options id="Gender"> <option value="1">Male</option> <option value="2">Female</option> </options> <branches id="Gender"> <branch value="1"> <image src="Man" /> </branch> <branch value="2"> <image src="Woman" /> </branch> </branches> When the presentation is "played" within the app, the above would be presented in two steps. First a selection screen where the user can click on either of the two choices presented ("Male" or "Female"). Next, an image will be [downloaded dynamically] and displayed based on the choice made in the previous step. Now, it's easy to imagine additional tags describing further logic still. For example, a containing tag could be added: <loop count="3"> <options... /> <branches... /> </loop> The result here being that the selection screen / image screen pair would be sequentially presented three times over, of course. Or imagine some description of a level in a game. It's easy to view that as passive "data", but if it includes, say, several doorways that the user can go through and with various triggers, traps and points attached to them etc - isn't that the same as using a script - or, indeed, interpreted code - to describe options and their conditional responses? Assuming that the interpretation engine for this XML data is already present in the app and that such presentations can only be consumed (not created or edited) in the app, how would this fare against Apple's iOS guidelines? Doesn't XML basically constitute a scripting language (couldn't any interpreted programming language simply be described by XML) in this sense? Would it be OK if the proprietary scripting language (ref the XML used above) was strictly sandboxed (how can they tell?) and not given access to the operating system in any way (but able to download content dynamically - and upload results to the authoring server)? Where does the line go?
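    For what it's worth, the "data, not code" argument usually rests on the fact that the interpreter itself ships with the app and the XML only selects among behaviours the app already contains. A rough sketch of such an engine for the pseudo-XML above, written in C# purely as a neutral illustration (an iOS app would do the equivalent in its own language); the element names follow the question's example and the delegates stand in for the app's own UI code:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        // A toy interpreter for the pseudo-XML above: fixed, compiled code inside
        // the app, with the XML merely selecting among behaviours it already has.
        static class PresentationPlayer
        {
            // 'choose' shows the selection screen for an <options> element and
            // returns the value of the picked option; 'showImage' displays an image.
            public static void Play(XElement root,
                                    Func<string, XElement, string> choose,
                                    Action<string> showImage)
            {
                foreach (XElement options in root.Elements("options"))
                {
                    string id = (string)options.Attribute("id");

                    // Step 1: the selection screen.
                    string chosen = choose(id, options);

                    // Step 2: follow the <branch> whose value matches the choice.
                    XElement branch = root.Elements("branches")
                                          .First(b => (string)b.Attribute("id") == id)
                                          .Elements("branch")
                                          .First(b => (string)b.Attribute("value") == chosen);

                    foreach (XElement img in branch.Elements("image"))
                        showImage((string)img.Attribute("src"));
                }
            }
        }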

    Read the article

  • C# performance varying due to memory

    - by user1107474
    Hope this is a valid post here; it's a combination of C# issues and hardware. I am benchmarking our server because we have found problems with the performance of our quant library (written in C#). I have simulated the same performance issues with some simple C# code performing very heavy memory usage. The code below is in a function which is spawned from a threadpool, up to a maximum of 32 threads (because our server has 4 CPUs x 8 cores each). This is all on .Net 3.5. The problem is that we are getting wildly differing performance. I run the below function 1000 times. The average time taken for the code to run could be, say, 3.5s, but the fastest will only be 1.2s and the slowest will be 7s - for the exact same function! I have graphed the memory usage against the timings and there doesn't appear to be any correlation with the GC kicking in. One thing I did notice is that when running in a single thread the timings are identical and there is no wild deviation. I have also tested CPU-bound algorithms and the timings are identical too. This has made us wonder if the memory bus just cannot cope. I was wondering: could this be another .Net or C# problem, or is it something related to our hardware? Would this be the same experience if I had used C++ or Java? We are using 4x Intel X7550 with 32GB RAM. Is there any way around this problem in general? Stopwatch watch = new Stopwatch(); watch.Start(); List<byte> list1 = new List<byte>(); List<byte> list2 = new List<byte>(); List<byte> list3 = new List<byte>(); int Size1 = 10000000; int Size2 = 2 * Size1; int Size3 = Size1; for (int i = 0; i < Size1; i++) { list1.Add(57); } for (int i = 0; i < Size2; i = i + 2) { list2.Add(56); } for (int i = 0; i < Size3; i++) { byte temp = list1.ElementAt(i); byte temp2 = list2.ElementAt(i); list3.Add(temp); list2[i] = temp; list1[i] = temp2; } watch.Stop(); (the code is just meant to stress the memory) I would include the threadpool code, but we used a non-standard threadpool library. EDIT: I have reduced "size1" to 100000, which basically doesn't use much memory, and I still get a lot of jitter. This suggests it's not the amount of memory being transferred, but the frequency of memory grabs?
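    One thing worth ruling out when timings jitter like this is allocation churn: a List<byte> created with no capacity reallocates and copies repeatedly as it grows, so every iteration hammers the allocator and the memory bus as well as doing the intended reads and writes. Below is a hedged variant of the benchmark that pre-sizes the lists so that cost is separated from the raw memory traffic - just a sketch, not a claim that it explains the variance on a four-socket box:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        static class MemoryStress
        {
            static TimeSpan Run()
            {
                var watch = Stopwatch.StartNew();

                const int Size1 = 10000000;
                const int Size2 = 2 * Size1;

                // Pre-sizing avoids the repeated grow-and-copy of an empty List<byte>.
                var list1 = new List<byte>(Size1);
                var list2 = new List<byte>(Size2 / 2);
                var list3 = new List<byte>(Size1);

                for (int i = 0; i < Size1; i++) list1.Add(57);
                for (int i = 0; i < Size2; i += 2) list2.Add(56);

                for (int i = 0; i < Size1; i++)
                {
                    // Indexers instead of ElementAt(): ElementAt is fine on List<T>,
                    // but the indexer makes the intent (pure memory traffic) clearer.
                    byte temp = list1[i];
                    byte temp2 = list2[i];
                    list3.Add(temp);
                    list2[i] = temp;
                    list1[i] = temp2;
                }

                watch.Stop();
                return watch.Elapsed;
            }

            static void Main()
            {
                for (int run = 0; run < 5; run++)
                    Console.WriteLine(Run());
            }
        }

    If the jitter persists even with pre-sized lists and a small Size1, that points away from GC and allocation and more towards contention for the shared memory controllers across the four sockets.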

    Read the article

  • Effective optimization strategies on modern C++ compilers

    - by user168715
    I'm working on scientific code that is very performance-critical. An initial version of the code has been written and tested, and now, with profiler in hand, it's time to start shaving cycles from the hot spots. It's well-known that some optimizations, e.g. loop unrolling, are handled these days much more effectively by the compiler than by a programmer meddling by hand. Which techniques are still worthwhile? Obviously, I'll run everything I try through a profiler, but if there's conventional wisdom as to what tends to work and what doesn't, it would save me significant time. I know that optimization is very compiler- and architecture-dependent. I'm using Intel's C++ compiler targeting the Core 2 Duo, but I'm also interested in what works well for gcc, or for "any modern compiler." Here are some concrete ideas I'm considering: Is there any benefit to replacing STL containers/algorithms with hand-rolled ones? In particular, my program includes a very large priority queue (currently a std::priority_queue) whose manipulation is taking a lot of total time. Is this something worth looking into, or is the STL implementation already likely the fastest possible? Along similar lines, for std::vectors whose needed sizes are unknown but have a reasonably small upper bound, is it profitable to replace them with statically-allocated arrays? I've found that dynamic memory allocation is often a severe bottleneck, and that eliminating it can lead to significant speedups. As a consequence I'm interested in the performance tradeoffs of returning large temporary data structures by value vs. returning by pointer vs. passing the result in by reference. Is there a way to reliably determine whether or not the compiler will use RVO for a given method (assuming the caller doesn't need to modify the result, of course)? How cache-aware do compilers tend to be? For example, is it worth looking into reordering nested loops? Given the scientific nature of the program, floating-point numbers are used everywhere. A significant bottleneck in my code used to be conversions from floating point to integers: the compiler would emit code to save the current rounding mode, change it, perform the conversion, then restore the old rounding mode --- even though nothing in the program ever changed the rounding mode! Disabling this behavior significantly sped up my code. Are there any similar floating-point-related gotchas I should be aware of? One consequence of C++ being compiled and linked separately is that the compiler is unable to do what would seem to be very simple optimizations, such as moving calls like strlen() out of the termination condition of a loop. Are there any optimizations like this one that I should look out for because they can't be done by the compiler and must be done by hand? On the flip side, are there any techniques I should avoid because they are likely to interfere with the compiler's ability to automatically optimize code? Lastly, to nip certain kinds of answers in the bud: I understand that optimization has a cost in terms of complexity, reliability, and maintainability. For this particular application, increased performance is worth these costs. I understand that the best optimizations are often to improve the high-level algorithms, and this has already been done.
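    On the cache-awareness question, the biggest win from reordering nested loops is usually just matching the traversal order to the memory layout. The idea is language-neutral; a tiny sketch is shown here in C# (rather than C++) purely to illustrate the access pattern that this kind of reordering targets:

        using System;
        using System.Diagnostics;

        static class CacheDemo
        {
            const int N = 2048;
            static readonly double[,] A = new double[N, N];

            // Row-major traversal: the inner index is the one contiguous in memory,
            // so each cache line is fully consumed before moving on.
            static double SumRowMajor()
            {
                double s = 0;
                for (int i = 0; i < N; i++)
                    for (int j = 0; j < N; j++)
                        s += A[i, j];
                return s;
            }

            // Same data, loops interchanged: successive accesses land on different
            // cache lines, which is the pattern that makes loop reordering pay off.
            static double SumColMajor()
            {
                double s = 0;
                for (int j = 0; j < N; j++)
                    for (int i = 0; i < N; i++)
                        s += A[i, j];
                return s;
            }

            static void Main()
            {
                var sw = Stopwatch.StartNew();
                SumRowMajor();
                Console.WriteLine("row-major:    " + sw.Elapsed);

                sw.Restart();
                SumColMajor();
                Console.WriteLine("column-major: " + sw.Elapsed);
            }
        }

    On typical hardware the interchanged loop is several times slower, which is the sort of gap worth checking in the profiler before micro-tuning anything else.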

    Read the article

  • How to make MySQL utilize available system resources, or find "the real problem"?

    - by anonymous coward
    This is a MySQL 5.0.26 server, running on SuSE Enterprise 10. This may be a Serverfault question. The web user interface that uses these particular queries (below) is showing sometimes 30+, even up to 120+ seconds at the worst, to generate the pages involved. On development, when the queries are run alone, they take up to 20 seconds on the first run (with no query cache enabled) but anywhere from 2 to 7 seconds after that - I assume because the tables and indexes involved have been placed into ram. From what I can tell, the longest load times are caused by Read/Update Locking. These are MyISAM tables. So it looks like a long update comes in, followed by a couple 7 second queries, and they're just adding up. And I'm fine with that explanation. What I'm not fine with is that MySQL doesn't appear to be utilizing the hardware it's on, and while the bottleneck seems to be the database, I can't understand why. I would say "throw more hardware at it", but we did and it doesn't appear to have changed the situation. Viewing a 'top' during the slowest times never shows much cpu or memory utilization by mysqld, as if the server is having no trouble at all - but then, why are the queries taking so long? How can I make MySQL use the crap out of this hardware, or find out what I'm doing wrong? Extra Details: On the "Memory Health" tab in the MySQL Administrator (for Windows), the Key Buffer is less than 1/8th used - so all the indexes should be in RAM. I can provide a screen shot of any graphs that might help. So desperate to fix this issue. Suffice it to say, there is legacy code "generating" these queries, and they're pretty much stuck the way they are. I have tried every combination of Indexes on the tables involved, but any suggestions are welcome. Here's the current Create Table statement from development (the 'experimental' key I have added, seems to help a little, for the example query only): CREATE TABLE `registration_task` ( `id` varchar(36) NOT NULL default '', `date_entered` datetime NOT NULL default '0000-00-00 00:00:00', `date_modified` datetime NOT NULL default '0000-00-00 00:00:00', `assigned_user_id` varchar(36) default NULL, `modified_user_id` varchar(36) default NULL, `created_by` varchar(36) default NULL, `name` varchar(80) NOT NULL default '', `status` varchar(255) default NULL, `date_due` date default NULL, `time_due` time default NULL, `date_start` date default NULL, `time_start` time default NULL, `parent_id` varchar(36) NOT NULL default '', `priority` varchar(255) NOT NULL default '9', `description` text, `order_number` int(11) default '1', `task_number` int(11) default NULL, `depends_on_id` varchar(36) default NULL, `milestone_flag` varchar(255) default NULL, `estimated_effort` int(11) default NULL, `actual_effort` int(11) default NULL, `utilization` int(11) default '100', `percent_complete` int(11) default '0', `deleted` tinyint(1) NOT NULL default '0', `wf_task_id` varchar(36) default '0', `reg_field` varchar(8) default '', `date_offset` int(11) default '0', `date_source` varchar(10) default '', `date_completed` date default '0000-00-00', `completed_id` varchar(36) default NULL, `original_name` varchar(80) default NULL, PRIMARY KEY (`id`), KEY `idx_reg_task_p` (`deleted`,`parent_id`), KEY `By_Assignee` (`assigned_user_id`,`deleted`), KEY `status_assignee` (`status`,`deleted`), KEY `experimental` (`deleted`,`status`,`assigned_user_id`,`parent_id`,`date_due`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 And one of the ridiculous queries in question: SELECT users.user_name 
assigned_user_name, registration.FIELD001 parent_name, registration_task.status status, registration_task.date_modified date_modified, registration_task.date_due date_due, registration.FIELD240 assigned_wf, if(LENGTH(registration_task.description)>0,1,0) has_description, registration_task.* FROM registration_task LEFT JOIN users ON registration_task.assigned_user_id=users.id LEFT JOIN registration ON registration_task.parent_id=registration.id where (registration_task.status != 'Completed' AND registration.FIELD001 LIKE '%' AND registration_task.name LIKE '%' AND registration.FIELD060 LIKE 'GN001472%') AND registration_task.deleted=0 ORDER BY date_due asc LIMIT 0,20; my.cnf - '[mysqld]' section. [mysqld] port = 3306 socket = /var/lib/mysql/mysql.sock skip-locking key_buffer = 384M max_allowed_packet = 100M table_cache = 2048 sort_buffer_size = 2M net_buffer_length = 100M read_buffer_size = 2M read_rnd_buffer_size = 160M myisam_sort_buffer_size = 128M query_cache_size = 16M query_cache_limit = 1M EXPLAIN above query, without additional index: +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ | 1 | SIMPLE | registration_task | ref | idx_reg_task_p,status_assignee | idx_reg_task_p | 1 | const | 1067354 | Using where; Using filesort | | 1 | SIMPLE | registration | eq_ref | PRIMARY,gbl | PRIMARY | 8 | sugarcrm401.registration_task.parent_id | 1 | Using where | | 1 | SIMPLE | users | ref | PRIMARY | PRIMARY | 38 | sugarcrm401.registration_task.assigned_user_id | 1 | | +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ EXPLAIN above query, with 'experimental' index: +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+ | 1 | SIMPLE | registration_task | range | idx_reg_task_p,status_assignee,NewIndex1,tcg_experimental | tcg_experimental | 259 | NULL | 103345 | Using where; Using filesort | | 1 | SIMPLE | registration | eq_ref | PRIMARY,gbl | PRIMARY | 8 | sugarcrm401.registration_task.parent_id | 1 | Using where | | 1 | SIMPLE | users | ref | PRIMARY | PRIMARY | 38 | sugarcrm401.registration_task.assigned_user_id | 1 | | +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+

    Read the article

< Previous Page | 395 396 397 398 399 400 401 402 403 404 405 406  | Next Page >