Search Results

Search found 5991 results on 240 pages for 'iwork numbers'.


  • Unable to remove limit on memory usage for PHP script.

    - by Jess Telford
    The Situation I am having an issue with a PHP script getting the following error message: Fatal error: Out of memory (allocated 359923712) (tried to allocate 72 bytes) in /path/to/piwik/core/DataTable.php on line 969 The script I'm running is: /path/to/piwik/misc/cron/archive.sh I am assuming the numbers are Bytes, which means that total is approximately 360MB. For all intents and purposes, I have increased the memory limits on the server well above 360MB, yet this is the number (give or take a byte) it consistently errors out at. Please note: This question is not about fixing a memory leak in the script, nor about why the script itself is using so much memory. The script is part of the Piwik archiving process, so I cannot just fix any memory leaks, etc. For more info on this script and why I am increasing the memory limit, see "How to setup auto archiving" The question Given that the script is attempting to use over 360MB of memory, which I cannot change, why does it not seem possible for me to increase the amount of memory available to php on my server? What I've tried Increasing PHP's memory_limit Given the php.ini file: php -i | grep php.ini Configuration File (php.ini) Path => /usr/local/lib Loaded Configuration File => /usr/local/lib/php.ini I have edited that file, so the memory_limit directive reads; memory_limit = -1 Restart Apache, and check the new value has stuck; $ php -i | grep memory_limit memory_limit => -1 => -1 Run the script, and get the same error. I've also tried 1G, 768M, etc, all to the same result (ie; no change). Update 22nd June: Based on Vangel's help, I have attempted to set post_max_size to 20M in combination with setting memory_limit. Again, this has no effect. Removing the memory limit on child processes of Apache I have found and edited the httpd.conf file to make sure there is no RLimitMEM directive. I then used WHM's Apache Configuration Memory Usage Restrictions to generate a restriction, which it claimed was at 1000M (and confirmed by checking httpd.conf). Both of these resulted in no change to the script erroring at 360MB. Increasing the per process memory limits of Linux The current limits set on the system: $ ulimit -m 524288 $ ulimit -v 524288 I have attempted to set both of these to unlimited: $ ulimit -m unlimited $ ulimit -v unlimited $ ulimit -m unlimited $ ulimit -v unlimited Once again, this has resulted in absolutely no improvement in my problem. My setup $ cat /etc/redhat-release CentOS release 5.5 (Final) $ uname -a Linux example.com 2.6.18-164.15.1.el5 #1 SMP Wed Mar 17 11:30:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux $ php -i | grep "PHP Version" PHP Version => 5.2.9 $ httpd -V Server version: Apache/2.0.63 Server built: Feb 2 2011 01:25:12 Cpanel::Easy::Apache v3.2.0 rev5291 Server's Module Magic Number: 20020903:13 Server loaded: APR 0.9.17, APR-UTIL 0.9.15 Compiled using: APR 0.9.17, APR-UTIL 0.9.15 Architecture: 64-bit Server compiled with.... 
-D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D HTTPD_ROOT="/usr/local/apache" -D SUEXEC_BIN="/usr/local/apache/bin/suexec" -D DEFAULT_PIDLOG="logs/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="logs/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="conf/mime.types" -D SERVER_CONFIG_FILE="conf/httpd.conf" Output of $ php -i: http://pastebin.com/EiRut6Nm
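    Worth separating when reading the above: PHP's memory_limit is an application-level setting, while ulimit/rlimits are kernel-enforced per-process caps that apply regardless of what php.ini says. The sketch below is only a language-neutral illustration of that second mechanism (Python on a Unix host; the 360 MB figure is simply borrowed from the error message), not a diagnosis of the Piwik case.

    ```python
    # Illustration only: a kernel-enforced address-space cap (RLIMIT_AS)
    # makes large allocations fail no matter what the application's own
    # memory setting is. 360 MB is an arbitrary value mirroring the error.
    import resource

    soft = 360 * 1024 * 1024                        # bytes
    _, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (soft, hard))

    try:
        block = bytearray(500 * 1024 * 1024)        # ask for ~500 MB
    except MemoryError:
        print("Refused by the kernel limit, not by the application config")
    else:
        print("No effective address-space cap on this host")
    ```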

    Read the article

  • What is the best server or Ip address to use for prolonged testing?

    - by eldorel
    I usually run uptime/latency tests against (and from) two servers that we own at different sites and until recently I've used the google dns servers as a control group. However, I've realized there is a potential problem with monitoring latency over extended periods of time. Almost all of the major service providers are using ANYCAST. For short tests this doesn't matter, but I need to run a set of tests for at least a week to try and catch an intermittent problem, and a change in the anycast priority while trying to test latency will cause the latency values for that server to change accordingly. Since I'm submitting graphs of this data to the ISP, I need to avoid/account for as many variables as possible. Spikes in the data for only one of the tested servers will only cause headaches. So can anyone recommend servers that: are not using anycast are owned by an entity that has a good uptime reputation (so they can't claim that the problem is server-side) will respond to ICMP requests Have an available service that runs on TCP/UDP (http or dns preferably) Wont consider an automated request every 10 minutes to be abuse Are accessible from anywhere in the world Are not local to the isp ( consider this an investigation of a hostile party ) Thanks in advance. Edit: added #6 and #7 above. More info: I am attempting to demonstrate a network problem for an entire node of our local ISP's network. They are actively blaming the issue on the equipment installed at the customer sites (our backup site is one of these), and refuse to escalate the problem. (even though 2 of these businesses have ISP provided modems, and all of us have completely different routers/services running) I am already quite familiar with the need to test an isp controlled IP, but they are actively dropping all packets targeted at gateway ip addresses and are only passing traffic addressed beyond the gateways. So to demonstrate the issue, I am sending packets to other systems in the same node, systems one hop away from the affected node, and systems completely outside the network. Unfortunately, all of the systems I have currently are either administered directly by myself, or by people who are biased enough to assist me. I need to have several systems included in the trace/log/graphs that are 100% not in the control of either myself or the isp so that the graphs have a stable/unbiased control group. These requirements are straight from legal, I'm just trying to make sure that everything that could be argued to invalidate the data is already covered. In Summary: I need to be able to show tcp/udp/icmp as 3 separate data points, and I need to be able to show the connections inside the local node, from local node to another nearby node, from those 2 nodes to the internet, and through the internet to both verifiable servers and a control group that I have no control over whatsoever. Again, Google/opendns/yahoo/msn/facebook/etc all use anycast, which throws the numbers off every time the anycast caches expire, so I need suggestions of an IP or server that is available for this type of testing. I was hoping someone knew of a system run by someone such as ISC or ICANN, or perhaps even a .gov server (fcc or nsa maybe?) setup for this type of testing. Thanks again.
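    A minimal sketch of the kind of probe being described (one ICMP ping plus one timed TCP connect per target, logged every 10 minutes), assuming a Unix host with the standard ping utility; the target below is a placeholder, not a recommendation for a non-anycast control host.

    ```python
    # Periodic latency probe: ICMP via the system ping, TCP via a timed
    # socket connect, appended to a CSV with a UTC timestamp.
    import csv, socket, subprocess, time
    from datetime import datetime, timezone

    TARGETS = [("example.org", 80)]   # placeholder control hosts
    INTERVAL = 600                    # seconds (every 10 minutes)

    def tcp_latency(host, port, timeout=5.0):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return None

    def icmp_latency(host):
        # Whole-command runtime as a rough proxy for RTT (Linux ping flags).
        start = time.monotonic()
        ok = subprocess.run(["ping", "-c", "1", "-W", "5", host],
                            capture_output=True).returncode == 0
        return time.monotonic() - start if ok else None

    with open("latency_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            for host, port in TARGETS:
                writer.writerow([datetime.now(timezone.utc).isoformat(),
                                 host, icmp_latency(host),
                                 tcp_latency(host, port)])
            f.flush()
            time.sleep(INTERVAL)
    ```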

    Read the article

  • What's the difference between find and findstr commands in Windows?

    - by Prashant Bhate
    In Windows, what are the differences between find and findstr commands? Both seems to search text in files: find c:\>find /? Searches for a text string in a file or files. FIND [/V] [/C] [/N] [/I] [/OFF[LINE]] "string" [[drive:][path]filename[ ...]] /V Displays all lines NOT containing the specified string. /C Displays only the count of lines containing the string. /N Displays line numbers with the displayed lines. /I Ignores the case of characters when searching for the string. /OFF[LINE] Do not skip files with offline attribute set. "string" Specifies the text string to find. [drive:][path]filename Specifies a file or files to search. If a path is not specified, FIND searches the text typed at the prompt or piped from another command. findstr c:\>findstr /? Searches for strings in files. FINDSTR [/B] [/E] [/L] [/R] [/S] [/I] [/X] [/V] [/N] [/M] [/O] [/P] [/F:file] [/C:string] [/G:file] [/D:dir list] [/A:color attributes] [/OFF[LINE]] strings [[drive:][path]filename[ ...]] /B Matches pattern if at the beginning of a line. /E Matches pattern if at the end of a line. /L Uses search strings literally. /R Uses search strings as regular expressions. /S Searches for matching files in the current directory and all subdirectories. /I Specifies that the search is not to be case-sensitive. /X Prints lines that match exactly. /V Prints only lines that do not contain a match. /N Prints the line number before each line that matches. /M Prints only the filename if a file contains a match. /O Prints character offset before each matching line. /P Skip files with non-printable characters. /OFF[LINE] Do not skip files with offline attribute set. /A:attr Specifies color attribute with two hex digits. See "color /?" /F:file Reads file list from the specified file(/ stands for console). /C:string Uses specified string as a literal search string. /G:file Gets search strings from the specified file(/ stands for console). /D:dir Search a semicolon delimited list of directories strings Text to be searched for. [drive:][path]filename Specifies a file or files to search. Use spaces to separate multiple search strings unless the argument is prefixed with /C. For example, 'FINDSTR "hello there" x.y' searches for "hello" or "there" in file x.y. 'FINDSTR /C:"hello there" x.y' searches for "hello there" in file x.y. Regular expression quick reference: . Wildcard: any character * Repeat: zero or more occurances of previous character or class ^ Line position: beginning of line $ Line position: end of line [class] Character class: any one character in set [^class] Inverse class: any one character not in set [x-y] Range: any characters within the specified range \x Escape: literal use of metacharacter x \<xyz Word position: beginning of word xyz\> Word position: end of word For full information on FINDSTR regular expressions refer to the online Command Reference.
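    The practical difference buried in those two help texts: find matches a literal substring only, while findstr treats the pattern as a regular expression and, unless /C is used, splits a quoted multi-word pattern into alternatives. A conceptual analogue in Python (not cmd.exe syntax), just to make that behaviour concrete:

    ```python
    # Conceptual analogue of find vs. findstr on a single line of text.
    import re

    line = "well hello there"

    # find "hello there"        -> literal substring match
    print("hello there" in line)                            # True

    # findstr "hello there"     -> "hello" OR "there", as regexes
    print(bool(re.search("hello|there", line)))             # True

    # findstr /C:"hello there"  -> back to one literal string
    print(bool(re.search(re.escape("hello there"), line)))  # True
    ```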

    Read the article

  • ISP 5 Device Limit ... again

    - by Tommo
    Sorry for the delay in responding to the suggestions that were posted in my first question (ISP 5 Device Limit - double NAT the solution?). I've been travelling and have not been able to try anything. Below is what I've tried and where I have not been successful. Any more help gratefully appreciated. I figure I need to give a more comprehensive overview of what I've got and how it's set up. First of all - I am using all Apple products here. I am iMac, iPad, iPhone, Apple TV, Airport Express and Time Capsule. I used to like the way that it 'just worked'. Now I find that it requires a bit of encouragement before it 'just works'. So, as I stated in my original question; my ISP has a router in my building that is limiting me to 5 devices. I am hard wired into this router and I can neither access it physically nor logically (they won't let me access it). Also, I only appear to be able to connect to it through the LAN ports on my Time Capsule. Any device I connect appears to be on a rolling IP list with the following settings: Router 91.72.80.1 Devices then get assigned IPv4 addresses in the range (as far as I can see) from 91.72.80.2 onwards. SubNet Mask 255.255.255.0 DNS Servers 213.132.63.25, 80.227.2.4 I have my Time Capsule / Router in Bridge-Mode which means I am limited to the 5 devices and cannot use Guest Networks etc. What I've tried today. Static IPs: On all devices, I went from DHCP to Static and put in the same information when they had connected using DHCP. Somewhat surprisingly this did not work. None of the devices enjoyed any connection to the router and certainly no internet connection. Intentional Double-NAT - Time Capsule to 'DHCP and NAT': By selecting DHCP and NAT on my Router I was able to connect devices to my Time Capsule in the range 10.0.1.2 to 10.0.1.200. This offered no internet connectivity and didn't really help the situation. In this mode, however, I was able to force the devices - individually and laboriously - to look for the Router and previously listed DNSs by inputting the numbers from 'Bridge-mode' into the STATIC settings and then resetting the connection. The Router then appeared to assign a distinct IP address to the device and it worked on the network. I had this working for more than 5 devices. However, this is not a great solution because as soon as one of the mobile devices left the building it needed repointing to the Router. The connections were also not very stable. Especially when trying to hold onto a VPN. Spoofing a few MAC addresses: I'm afraid I don't really know what this would achieve, nor how to do it on an Apple device… So … I'm almost back at Square One. I have had to withdraw to the Bridge-Mode position again with the 5 device limit to see if there's a better course of action to follow. ANY help would be much appreciated. I am positive that I cannot be the only one suffering under this 5 device limit!

    Read the article

  • WNDR3700 Router + Cisco SG200-08 + LACP + Dual Uplink

    - by kobaltz
    Background I have a storage server that has several virtual machine images stored on them. I would store them locally, but I have limited space on my desktop (using SSD storage). I would like to increase the bandwidth between the desktop and the storage server by using two NICs on each computer. My original configuration allowed about 55MBps between the desktop and storage server. This storage server also has several TBs of documents, pictures, movies, vms, and ISO/programs. The storage server has 8 1.5TB hard drives in a RAID 10 configuration with a hardware RAID controller. The benchmarks on the RAID 10 are about 300MBps. Configuration In short, I am trying to bridge my switch and router. The switch is a small 8 port Cisco smart switch that supports 802.3ad LACP. I have two computers plugged into the switch, each with 2 Intel Gigabit NICs. The first computer is a Windows 7 machine that has the Intel ANS software installed. I have LACP configured with the computer and now show 3 NICs (2 Physical + 1 TEAM Virtual @ 2Gbps). It looks like this computer is configured correctly. I trunked the two ports that this computer is plugged into with the switch's web interface. The second computer is a homebrew storage box running debian. I also have the bonding enabled on this machine and the switch configured with LACP. Without having the WNDR3700 router in the picture yet, I am able to communicate between the Windows 7 machine and the debian box since they both have static IP addresses. With LACP enabled on both machines I am getting about 106-108MBps speeds. Issue I plug in a network cable from the switch into the router and enable DHCP on the desktop. I saw no need to have a static address on the desktop. My transfer rates are still from 106MBps-108MBps. While this is still a boost, I am trying to figure out how to get about 140-180MBps. I am thinking that I need to increase the bandwidth from the router to the switch. My switch allows 4 groups for port trunking. I plugged in a second network cable from the router to the switch. My question is, what is the proper way to fix this issue. Should I port trunk the two ports that are going from the switch to the router? Keep in mind that the router is a WNDR3700 and is unsure whether or not it supports LACP. I do have OpenWRT installed on the router, but it still wasn't clear in any documentation that I found if it supported 802.3ad LACP standards. I am also wondering if there needs to be anything changed within the Cisco settings. [Edit] - Corrected some numbers, wasn't really paying attention. It looks like the speeds though at least two NICs are bonded with LACP is still reaching the max bandwidth of one port. Is there a way to configure the switch so that I can increase this bandwidth? Also, on the storage server, I had a couple of extra NICs laying around and threw them on there as well. Another EDIT and More Findings I happened to look at the traffic of each individual NIC and think that I see the problem. I tested with a simple transfer for a 4GB file. I noticed that only one of the NICs was taking the load of the traffic. I then copied the file back to the Storage Server and noticed that the other NIC was sending out the traffic. I have 802.3ad LACP enabled on the two NICs and I see that it gets enabled dynamically on the switch's interface. Should I be using Static Link Aggregation?
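    The pattern in the last edit (each transfer saturating only one NIC) is what 802.3ad is expected to do: the transmit hash maps each flow to a single member link, so one TCP stream never exceeds one port's bandwidth and aggregation only shows up with multiple concurrent flows. A simplified, layer-2-style illustration of that selection (real bonding drivers offer several xmit hash policies):

    ```python
    # Simplified 802.3ad-style link selection: a per-flow hash of the MAC
    # addresses picks one member link, so a single flow stays on one NIC.
    def member_link(src_mac: str, dst_mac: str, links: int = 2) -> int:
        def last_byte(mac: str) -> int:
            return int(mac.split(":")[-1], 16)
        return (last_byte(src_mac) ^ last_byte(dst_mac)) % links

    # The desktop-to-storage flow hashes to the same link every time ...
    print(member_link("00:1b:21:aa:aa:01", "00:1b:21:bb:bb:02"))
    # ... so a lone file copy tops out at one port's bandwidth.
    ```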

    Read the article

  • Windows 8.1 Update 1 Disk Usage 100%

    - by Gookjin Jeong
    Background Information / Computer Specs I have a 14-inch Samsung Series 5 Ultra. Core i5 CPU, 750GB HDD, 8GB RAM, Intel HD Graphics 4000. I've had the computer for about 1.5 years with no major problems. Problem The issue appeared at the beginning of April this year, when I updated the OS to Windows 8.1 Update 1 (not from 8 to 8.1). After being on continually (except for at night, when I put it on sleep mode) for about 48 hours, the disk usage as seen by Task Manager hits 100%. When this happens, everything from opening/closing applications to typing and even bringing up the start screen by pressing the Windows key becomes extremely slow. The only way to make the disk usage decrease is to restart the computer. Then the problem repeats. I've used my current laptop (as well as my previous laptops) this way -- putting it on sleep mode at night and restarting it only when Windows needs to install updates -- for a long time. So I know the 100% disk usage is not due to the way I use the computer. The thing that causes the spike varies. Sometimes it's System, sometimes it's one of the various applications I installed (e.g. Chrome, Evernote, Spotify, Wunderlist, iTunes, etc.), and sometimes it's Antimalware Service Executable, etc. Tried Solutions I think I tried almost every solution out there for this problem: Running the check disk command (chkdsk /b /f /v /scan c:) from Admin Command Prompt Running Windows Memory Diagnostic Disabling Superfetch and Windows Search from services.msc Running "Fix problems with Windows Update" from Control Panel -- Troubleshooting Updating and rolling back the graphics driver (Intel HD 4000) Disabling "Use hardware acceleration when available" from Chrome settings Disabling Intel Rapid Storage Technology Running the SFC /SCANNOW command as recommended here Running a quick scan & a full scan from Windows Defender (no threats found) Taking the hard drive out and putting it back Refreshing the computer, from the Update and recovery -- Recovery option in Windows settings NONE of the above worked for me. I was about to give up but then noticed that one of the main culprits of the disk usage spike, as shown in the "Disk Activity" section of the Resource Monitor, was C:\System (pagefile.sys). I googled around and found that one of the recommended solutions was to disable pagefile. I then went to **Control Panel -- System and Security -- System -- Advanced system settings -- Advanced tab -- Performance settings -- Advanced tab -- "Change" under Virtual memory and discovered that the number for "Currently allocated" at the bottom was 1280MB, although the number for "Recommended" was 4533MB. I immediately changed it to 4533MB and checked my family members' computers to see what the numbers were like. All of theirs had a currently allocated space that was only slightly smaller than the recommended space. See screenshot below: This might fix the problem. I'll have to wait a couple more days.But if it doesn't, what in the world should I do next? I'm guessing the hard drive isn't failing because This computer is less than 2 years old; and Speccy says that the status of the HDD is good. Update 5/27/2014 The "4533MB" solution did not work. I had to reboot the computer about 30 minutes ago because the disk usage again hit 100%. When I opened Resource Monitor the C:\System (pagefile.sys) again was shown to be the culprit. I have now disabled pagefile entirely via the same window shown above in the screenshot. The number for "currently allocated" is now 0MB. 
Will update again in a couple days, or if the problem occurs again, whichever comes sooner.
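    For what it's worth, one way to see which processes are actually generating the I/O, independently of Task Manager, is to sample the per-process I/O counters twice and diff them. A rough sketch, assuming Python with the third-party psutil package installed (run it while the disk sits at 100%):

    ```python
    # Sample cumulative per-process I/O twice, 10 s apart, and print the
    # processes that moved the most bytes in between.
    import time
    import psutil

    def io_snapshot():
        snap = {}
        for p in psutil.process_iter(["name"]):
            try:
                io = p.io_counters()
                snap[p.pid] = (p.info["name"], io.read_bytes + io.write_bytes)
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass
        return snap

    before = io_snapshot()
    time.sleep(10)
    after = io_snapshot()

    deltas = [(total - before[pid][1], name)
              for pid, (name, total) in after.items() if pid in before]
    for delta, name in sorted(deltas, reverse=True)[:10]:
        print(f"{name:30s} {delta / 1_048_576:8.1f} MB in 10 s")
    ```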

    Read the article

  • why jQuery.parseJSON is not a function?

    - by Pandiya Chendur
    I use the following jquery statements and i am getting the error, jQuery.parseJSON is not a function my function is, function Iteratejsondata() {var HfJsonValue = { "Table": [{ "Emp_Id": "3", "Identity_No": "", "Emp_Name": "Jerome", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Supervisior", "Desig_Description": "Supervisior of the Construction", "SalaryBasis": "Monthly", "FixedSalary": "25000.00" }, { "Emp_Id": "4", "Identity_No": "", "Emp_Name": "Mohan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Acc ", "Desig_Description": "Accountant", "SalaryBasis": "Monthly", "FixedSalary": "200.00" }, { "Emp_Id": "5", "Identity_No": "", "Emp_Name": "Murugan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "150.00" }, { "Emp_Id": "6", "Identity_No": "", "Emp_Name": "Ram", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "120.00" }, { "Emp_Id": "7", "Identity_No": "", "Emp_Name": "Raja", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "135.00" }, { "Emp_Id": "8", "Identity_No": "", "Emp_Name": "Raja kumar", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason Helper", "Desig_Description": "Mason Helper", "SalaryBasis": "Weekly", "FixedSalary": "105.00" }, { "Emp_Id": "9", "Identity_No": "", "Emp_Name": "Lakshmi", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason Helper", "Desig_Description": "Mason Helper", "SalaryBasis": "Weekly", "FixedSalary": "100.00" }, { "Emp_Id": "10", "Identity_No": "", "Emp_Name": "Palani", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Carpenter", "Desig_Description": "Carpenter", "SalaryBasis": "Weekly", "FixedSalary": "200.00" }, { "Emp_Id": "11", "Identity_No": "", "Emp_Name": "Annamalai", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Carpenter", "Desig_Description": "Carpenter", "SalaryBasis": "Weekly", "FixedSalary": "220.00" }, { "Emp_Id": "12", "Identity_No": "", "Emp_Name": "David", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Fixer", "Desig_Description": "Steel Fixer", "SalaryBasis": "Weekly", "FixedSalary": "220.00" }, { "Emp_Id": "13", "Identity_No": "", "Emp_Name": "Chandru", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Fixer", "Desig_Description": "Steel Fixer", "SalaryBasis": "Weekly", "FixedSalary": "220.00" }, { "Emp_Id": "14", "Identity_No": "", "Emp_Name": "Mani", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Helper", "Desig_Description": "Steel Helper", "SalaryBasis": "Weekly", "FixedSalary": "175.00" }, { "Emp_Id": "15", "Identity_No": "", "Emp_Name": "Karthik", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Fixer", "Desig_Description": "Wood Fixer", "SalaryBasis": "Weekly", "FixedSalary": "195.00" }, { "Emp_Id": "16", "Identity_No": "", "Emp_Name": "Bala", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Fixer", "Desig_Description": "Wood Fixer", "SalaryBasis": "Weekly", "FixedSalary": "185.00" }, { "Emp_Id": "17", "Identity_No": "", "Emp_Name": "Tamil arasi", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Helper", "Desig_Description": "Wood Helper", "SalaryBasis": "Weekly", "FixedSalary": "185.00" }, { "Emp_Id": "18", "Identity_No": "", "Emp_Name": "Perumal", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": 
"Cook", "Desig_Description": "Cook", "SalaryBasis": "Weekly", "FixedSalary": "105.00" }, { "Emp_Id": "19", "Identity_No": "", "Emp_Name": "Andiappan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Watchman", "Desig_Description": "Watchman", "SalaryBasis": "Weekly", "FixedSalary": "150.00"}] }; //var jsonObj = eval('(' + HfJsonValue + ')'); var jsonObj = jQuery.parseJSON(HfJsonValue); and my page looks like this <div id="Pagination" class="page-numbers"></div> <br style="clear:both;" /> <div id="Searchresult"></div> <div id="hiddenresult" style="display:none;"> </div> <script type="text/javascript"> var pagination_options = { num_edge_entries: 2, num_display_entries: 8, callback: pageselectCallback, items_per_page: 3 } function pageselectCallback(page_index, jq) { var items_per_page = pagination_options.items_per_page; var offset = page_index * items_per_page; var new_content = $('#hiddenresult div.resultsdiv').slice(offset, offset + items_per_page).clone(); $('#Searchresult').empty().append(new_content); return false; } function initPagination() { var num_entries = $('#hiddenresult div.resultsdiv').length; // Create pagination element $("#Pagination").pagination(num_entries, pagination_options); } $(document).ready(function() { Iteratejsondata(); initPagination(); }); </script> I ve inspected through firebug and saw all jquery files have been downloaded but why this is hapenning? Any suggestion....

    Read the article

  • Simple POST random number issue in PHP

    - by MrEnder
    Ok I am trying to make something to ask you random multiplication questions. Now it asks the questions fine. Generates the random questions fine. But when it reloads the page the random numbers are different... how can I fix this? <?php $rndnum1 = rand(1, 12); $rndnum2 = rand(1, 12); echo "<h3>". $rndnum1 . " x "; echo $rndnum2 . "</h3>"; if($_SERVER["REQUEST_METHOD"] == "GET") { $answer=0; } else if($_SERVER["REQUEST_METHOD"] == "POST") { $answer=trim($_POST["answerInput"]); $check=$rndnum1*$rndnum2; if($answer==$check) { echo "Correct!"; } else { echo "Wrong!"; } } ?> <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post" > <table> <tr> <td> First Name:&nbsp; </td> <td> <input type="text" name="answerInput" value="<?php echo $answer; ?>" size="20"/> </td> <td> <?php echo $answerError; ?> </td> </tr> <tr> <td class="signupTd" colspan="2"> <input type="submit" name="submit" value="Submit"/> </td> </tr> </table> </form>

    Read the article

  • WPF Animation FPS vs. CPU usage - Am I expecting too much?

    - by Cory Charlton
    Working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower end machines (switch from Path to StreamGeometry, use DrawingVisual instead of UserControl, etc) the CPU usage still seems very high. Here's some numbers I ran from a few 5 minute sampling periods: ~60FPS 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB ram, NVIDIA Quadro NVS 140M (128MB), Vista [My dev laptop] ~40FPS 50% average CPU on Pentium D @ 3.4GHz, 1.5GB ram, Standard VGA Graphics Adapter (unknown), 2003 Server [A crappy desktop] I can understand the lower frame rate and higher CPU usage on the crappy desktop but it still seems pretty high and 35% on my dev laptop seems high as well. I'd really like to analyze the application to get more details but I'm having issues there as well so I'm wondering if I'm doing something wrong (never profiled WPF before). WPF Performance Suite: Process Launch Error Unable to attach to process: CCHearts.exe Do you want to kill it? This error message occurs when I click cancel after attempting launch. If I don't click cancel it sits there idle, I guess waiting to attach. Performance Explorer: Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe. Previous attempt to profile the application finished unsuccessfully. Please restart the application. Output Window from Performance: Profiling started. Profiling process ID 5360 (CCHearts). Process ID 5360 has exited. Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp. Profiling finished. PRF0025: No data was collected. Profiling complete. So I'm stuck wanting to improve performance but have no concrete way to determine where the bottleneck is. Have been relatively successful throwing darts at this point but I'm beyond that now :) PS: Screensaver is hosted at CodePlex if you want to look at the source and missed the link above. Edit: My RenderOptions darts... // NOTE: Grasping at straws here ;-) RenderOptions.SetBitmapScalingMode(newHeart, BitmapScalingMode.LowQuality); RenderOptions.SetCachingHint(newHeart, CachingHint.Cache); RenderOptions.SetEdgeMode(newHeart, EdgeMode.Aliased); I threw those a little while back and didn't see much difference (not sure if the bitmap scaling even comes into play). Really wish I could get profiling working to know where I should try to optimize. For now I assume there is some overhead in creating a new HeartVisual and the DrawingVisual contained inside. Maybe if I reset and reused the hearts (tossed them in a queue once they completed or something) I'd see an improvement. Shrug Throwing darts while blindfolder is always fun.

    Read the article

  • MinGW/G++/g95 link problem - libf95 undefined reference to `MAIN_'

    - by pivakaka
    Hi folks, Summing up, my problem consists on compiling g95 objects inside a C++ application. Actually, I'm constructing an interface for an old fortran program. For this task, I'm using the wxWidgets GUI library, and calling fortran subroutines when necessary. At the beginning, I was developing the entire project compiling my fortran files with gfortran (which comes with GCC) and linking them with my app by the g++ -o... command. Everything was working fine but some numbers values calculated by my fotran subroutines returned NAN values. Doing some research, I realized that compiling my fortran files with gfortran with the -m32 flag, generates this NAN values problem. Although compiling with -m64 flag, my code works properly well. The only trouble here is that my App should be 32bits and then I tryed another compiller. Here I found g95 Fortran compiler, which compiles my fortran code and gives the right output on a 32bits environment. But when I'm trying to link these g95 objects into my program I see this following error: g++ -oCyclonTechTower.exe src\fortran\Modulo_Global.o src\fortran\Prop_Fisicas.o src\fortran\inversa.o src\fortran\tower.o src\fortran\PredadeCarga.o src\fortran\gota.o src\fortran\tadi.o src\view\SimulationORSATCustomDialog.o src\view\SimulationChildGUIFrame.o src\view\ParentGUIFrame.o src\view\ClientGUIFrame.o src\model\TowerData.o src\controller\SimulationController.o src\THEIACyclonTechTower.o ..\Resource\resource.o -Lc:\wxWidgets-2.6.4\lib -Lc:\MinGW\lib\gcc-lib\i686-pc-mingw32\4.1.2 -Bdynamic -Wl,--subsystem,windows -mwindows c:\wxWidgets-2.6.4\lib\libwx_mswu-2.6.a -lwxregexu-2.6 -lwxexpat-2.6 -lwxtiff-2.6 -lwxjpeg-2.6 -lwxpng-2.6 -lwxzlib-2.6 -lrpcrt4 -loleaut32 -lole32 -luuid -lwinspool -lwinmm -lshell32 -lcomctl32 -lcomdlg32 -lctl3d32 -ladvapi32 -lwsock32 -lgdi32 -lgcc -lf95 c:\MinGW\lib\gcc-lib\i686-pc-mingw32\4.1.2/libf95.a(main.o):(.text+0x32): undefined reference to `MAIN_' collect2: ld returned 1 exit status Build error occurred, build is stopped Time consumed: 1531 ms. I have already read the g95 Manual for integration with C++ and I'm actually calling these functions bellow for controlling the fortran environment: void g95_runtime_start(int argc, char *argv[]); void g95_runtime_stop(); I also included the g95 Fortran Runtime Library libf95.a and the libgcc.a into my linker command. Finishing, I don't have a main method implemented because this is managed by wxWidgets, and my fortran subroutines can not have a main function because this is a C++ program calling fortran functions. Can some of you guys help me with this problem? How can I fix this undefined reference to MAIN__ problem? Any idea will be much appreciated. Thanks in advance, George

    Read the article

  • MKMapView memory usage grows out of control with setRegion: calls

    - by Kurt
    Hi, I have a single MKMapView instance that I have programmatically added to a UIView. As part of the UI, the user can cycle through a list of addresses and the map view is updated to show the correct map for each address as the user goes through them. I create the map view once, and simply change what it displays with setRegion:animated:. The problem is that each time the map is changed to show a new address, the memory usage of my program increases by 200K-500K (as reported by Memory Monitor in Instruments). According to Object Allocations, it appears that a lot of 1.0K Mallocs are happening each time, and the Extended Detail pane for these 1.0K allocations shows that the Responsible Caller is convert_image_data and the Extended Detail pane shows that this is the result of [MKMapTileView drawLayer:inContext:]. So, seems likely to me that the memory usage is due to MKMapView not freeing memory it uses to redraw the map each time. In fact, when I don't display the map at all (by not even adding it as a subview of my main UIView) but still cycle through the addresses (which changes various UILabels and other displayed info) the memory usage for the app does NOT increase. If I add the map view but never update it with setRegion:, the memory also does NOT increase when changing to a new address. One more bit of info: if I go to a new address (and therefore ask the map to display the new address) the memory jumps as described above. However, if I go back to an address that was already displayed, the memory does not jump when the map redraws with the old address. Also, this happens on iPad (real device) with 3.2 and on iPhone (again, real device) with 3.1.2. Here's how I initialize the MKMapView (I only do this once): CGRect mapFrame; mapFrame.origin.y = 460; // yes, magic numbers. just for testing. mapFrame.origin.x = 0; mapFrame.size.height = 500; mapFrame.size.width = 768; mapView = [[MKMapView alloc] initWithFrame:mapFrame]; mapView.delegate = self; [self.view insertSubview:mapView atIndex:0]; And in response to the user selecting an address, I set the map like so: MKCoordinateRegion region; MKCoordinateSpan span; span.latitudeDelta=kStreetMapSpan; // 0.003 span.longitudeDelta=kStreetMapSpan; // 0.003 region.center = address.coords; // coords is CLLocationCoordinate2D region.span = span; mapView.region.span = span; [mapView setRegion:region animated:NO]; Any thoughts? I've scoured the net but haven't seen mention of this problem, and I've reached the limits of my Instruments knowledge. Thanks for any ideas.

    Read the article

  • Validate a string in a table in SQL Server - CLR function or T-SQL

    - by Ashish Gupta
    I need to check if a column value (string) in a SQL Server table starts with a small letter and can only contain '_', '-', numbers and letters. I know I can use a SQL Server CLR function for that. However, I am trying to implement that validation using a scalar UDF and have made very little headway... I can use 'NOT LIKE', but I am not sure how to validate the string irrespective of the order of characters, or in other words how to write a pattern in SQL for this. Am I better off using a SQL CLR function? Any help will be appreciated. Thanks in advance. Thank you everyone for your comments. This morning, I chose to go the CLR function way. For the purpose of what I was trying to achieve, I created one CLR function which does the validation of an input string and have it called from a SQL UDF, and it works well. Just to measure the performance of a T-SQL UDF that uses a SQL CLR function vs. a plain T-SQL UDF, I created a SQL CLR function which just checks whether the input string contains only small letters, returning true or false, and have it called from a UDF (IsLowerCaseCLR). After that I also created a regular T-SQL UDF (IsLowerCaseTSQL) which does the same thing using 'NOT LIKE'. Then I created a table (Person) with Name (varchar) and IsValid (bit) columns and populated it with names to test. Data: 1000 records with 'Ashish' as the value for the Name column, 1000 records with 'ashish' as the value for the Name column. Then I ran the following: UPDATE Person SET IsValid=1 WHERE dbo.IsLowerCaseTSQL(Name) The above updated 1000 records (with IsValid=1) and took less than a second. I deleted all the data in the table and repopulated it with the same data. Then I updated the same table using the SQL CLR UDF (with IsValid=1) and this took 3 seconds! If the update happens for 5000 records, the regular UDF takes 0 seconds compared to the CLR UDF which takes 16 seconds! I am not very knowledgeable about T-SQL pattern matching, or I would have tested my actual, more complex validation criteria. But I just wanted to know: even if I could have written that, would it have been faster than the SQL CLR function, considering the example above? Are we using SQL CLR because we can implement much richer logic that would be difficult to write in regular SQL? Sorry for this long post. I just want to know from the experts. Please feel free to ask if you could not understand anything here. Thank you again for your time.
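    For reference, the rule being validated ("starts with a small letter, then only letters, digits, '_' and '-'") is a single anchored pattern. It is sketched below in Python purely to pin the pattern down before porting it to a CLR Regex or a T-SQL NOT LIKE double negative; the sample values are made up, and whether uppercase letters are allowed after the first character depends on the actual requirement.

    ```python
    # The validation rule as one anchored regex, with hypothetical samples.
    import re

    PATTERN = re.compile(r"^[a-z][A-Za-z0-9_-]*$")

    for value in ["ashish", "Ashish", "ashish_01", "ashish 01", ""]:
        print(repr(value), bool(PATTERN.match(value)))
    ```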

    Read the article

  • Linq To Sql Concat() dropping fields in created TSQL

    - by user191468
    This is strange. I am moving a stored proc to a service. The TSQL unions multiple selects. To replicate this I created multiple queries resulting in a common new concrete type. Then I issue a return result.ToString(); and the resulting SQL selects have varying numbers of columns specified thus causing an MSSQL Msg 205... using (var db = GetDb()) { var fundInv = from f in db.funds select new Investments { Company = f.company, FullName = f.fullname, Admin = f.admin, Fund = f.fund1, FundCode = f.fundcode, Source = STR_FUNDS, IsPortfolio = false, IsActive = f.active, Strategy = f.strategy, SubStrategy = f.substrategy, AltStrategy = f.altstrategy, AltSubStrategy = f.altsubstrategy, Region = f.region, AltRegion = f.altregion, UseAlternate = f.usealt, ClassesAllowed = f.classallowed }; var stocksInv = from s in db.stocks where !fundInv.Select(f => f.Company).Contains(s.vehcode) select new Investments { Company = s.company, FullName = s.issuer, Admin = STR_PRS, Fund = s.shortname, FundCode = s.vehcode, Source = STR_STOCK, IsPortfolio = false, IsActive = (s.inactive == null), Strategy = s.style, SubStrategy = s.substyle, AltStrategy = s.altstyle, AltSubStrategy = s.altsubsty, Region = s.geography, AltRegion = s.altgeo, UseAlternate = s.usealt, ClassesAllowed = STR_GENERIC }; var bondsInv = from oi in db.bonds where !fundInv.Select(f => f.Company).Contains(oi.vehcode) select new Investments { Company = string.Empty, FullName = oi.issue, Admin = STR_PRS1, Fund = oi.issue, FundCode = oi.vehcode, Source = STR_BONDS, IsPortfolio = false, IsActive = oi.closed, Strategy = STR_OTH, SubStrategy = STR_OTH, AltStrategy = STR_OTH, AltSubStrategy = STR_OTH, Region = STR_OTH, AltRegion = STR_OTH, UseAlternate = false, ClassesAllowed = STR_GENERIC }; return (fundInv.Concat(stocksInv).Concat(bondsInv)).ToList(); } The code above results in a complex select statement where each "table" above has different column count. (see SQL below) I've been trying a few things but no change yet. Ideas are welcome. 
SELECT [t6].[company] AS [Company], [t6].[fullname] AS [FullName], [t6].[admin] AS [Admin], [t6].[fund] AS [Fund], [t6].[fundcode] AS [FundCode], [t6].[value] AS [Source], [t6].[value2] AS [IsPortfolio], [t6].[active] AS [IsActive], [t6].[strategy] AS [Strategy], [t6].[substrategy] AS [SubStrategy], [t6].[altstrategy] AS [AltStrategy], [t6].[altsubstrategy] AS [AltSubStrategy], [t6].[region] AS [Region], [t6].[altregion] AS [AltRegion], [t6].[usealt] AS [UseAlternate], [t6].[classallowed] AS [ClassesAllowed] FROM ( SELECT [t3].[company], [t3].[fullname], [t3].[admin], [t3].[fund], [t3].[fundcode], [t3].[value], [t3].[value2], [t3].[active], [t3].[strategy], [t3].[substrategy], [t3].[altstrategy], [t3].[altsubstrategy], [t3].[region], [t3].[altregion], [t3].[usealt], [t3].[classallowed] FROM ( SELECT [t0].[company], [t0].[fullname], [t0].[admin], [t0].[fund], [t0].[fundcode], @p0 AS [value], [t0].[active], [t0].[strategy], [t0].[substrategy], [t0].[altstrategy], [t0].[altsubstrategy], [t0].[region], [t0].[altregion], [t0].[usealt], [t0].[classallowed] FROM [zInvest].[funds] AS [t0] UNION ALL SELECT [t1].[company], [t1].[issuer], @p6 AS [value], [t1].[shortname], [t1].[vehcode], @p7 AS [value2], @p8 AS [value3], (CASE WHEN [t1].[inactive] IS NULL THEN 1 ELSE 0 END) AS [value5], [t1].[style], [t1].[substyle], [t1].[altstyle], [t1].[altsubsty], [t1].[geography], [t1].[altgeo], [t1].[usealt], @p10 AS [value6] FROM [zBank].[stocks] AS [t1] WHERE (NOT (EXISTS( SELECT NULL AS [EMPTY] FROM [zInvest].[funds] AS [t2] WHERE [t2].[company] = [t1].[vehcode] ))) AND ([t1].[vehcode] <> @p2) AND (SUBSTRING([t1].[vehcode], @p3 + 1, @p4) <> @p5) ) AS [t3] UNION ALL SELECT @p11 AS [value], [t4].[issue], @p12 AS [value2], [t4].[vehcode], @p13 AS [value3], @p14 AS [value4], [t4].[closed], @p16 AS [value6], @p17 AS [value7] FROM [zMut].[bonds] AS [t4] WHERE NOT (EXISTS( SELECT NULL AS [EMPTY] FROM [zInvest].[funds] AS [t5] WHERE [t5].[company] = [t4].[vehcode] )) ) AS [t6]

    Read the article

  • Python - calculate multinomial probability density functions on large dataset?

    - by Seafoid
    Hi, I originally intended to use MATLAB to tackle this problem but the inbuilt functions have limitations that do not suit my goal. The same limitation occurs in NumPy. I have two tab-delimited files. The first is a file showing amino acid residue, frequency and count for an in-house database of protein structures, i.e. A 0.25 1 S 0.25 1 T 0.25 1 P 0.25 1 The second file consists of quadruplets of amino acids and the number of times they occur, i.e. ASTP 1 Note, there are 8,000 such quadruplets. Based on the background frequency of occurrence of each amino acid and the count of quadruplets, I aim to calculate the multinomial probability density function for each quadruplet and subsequently use it as the expected value in a maximum likelihood calculation. The multinomial distribution is as follows: f(x|n, p) = n!/(x1!*x2!*...*xk!)*((p1^x1)*(p2^x2)*...*(pk^xk)) where x is the number of each of k outcomes in n trials with fixed probabilities p. n is 4 in all cases in my calculation. I have created three functions to calculate this distribution. # functions for multinomial distribution def expected_quadruplets(x, y): expected = x*y return expected # calculates the probabilities of occurrence raised to the number of occurrences def prod_prob(p1, a, p2, b, p3, c, p4, d): prob_prod = (pow(p1, a))*(pow(p2, b))*(pow(p3, c))*(pow(p4, d)) return prob_prod # factorial() and multinomial_coefficient() work in tandem to calculate C, the multinomial coefficient def factorial(n): if n <= 1: return 1 return n*factorial(n-1) def multinomial_coefficient(a, b, c, d): n = 24.0 multi_coeff = (n/(factorial(a) * factorial(b) * factorial(c) * factorial(d))) return multi_coeff The problem is how best to structure the data in order to tackle the calculation most efficiently, in a manner that I can read (you guys write some cryptic code :-)) and that will not create an overflow or runtime error. To date my data is represented as nested lists. amino_acids = [['A', '0.25', '1'], ['S', '0.25', '1'], ['T', '0.25', '1'], ['P', '0.25', '1']] quadruplets = [['ASTP', '1']] I initially intended calling these functions within a nested for loop but this resulted in runtime or overflow errors. I know that I can reset the recursion limit but I would rather do this more elegantly. I had the following: for i in quadruplets: quad = i[0].split(' ') for j in amino_acids: for k in quadruplets: for v in k: if j[0] == v: multinomial_coefficient(int(j[2]), int(j[2]), int(j[2]), int(j[2])) I haven't really gotten to how to incorporate the other functions yet. I think that my current nested list arrangement is suboptimal. I wish to compare each letter within the string 'ASTP' with the first component of each sub list in amino_acids. Where a match exists, I wish to pass the appropriate numeric values to the functions using indices. Is there a better way? Can I append the appropriate numbers for each amino acid and quadruplet to a temporary data structure within a loop, pass this to the functions and clear it for the next iteration? Thanks, S :-)
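    One possible restructuring, sketched from the sample data shown above: keep the background frequencies in a dict, count each quadruplet's letters with collections.Counter, and use math.factorial so the recursion limit never comes into play. The expected-value step simply reuses the x*y idea from expected_quadruplets.

    ```python
    # Multinomial PMF per quadruplet without recursion or manual indexing.
    from collections import Counter
    from math import factorial

    amino_acids = {"A": 0.25, "S": 0.25, "T": 0.25, "P": 0.25}
    quadruplets = [("ASTP", 1)]

    def multinomial_pmf(quad: str, freqs: dict) -> float:
        counts = Counter(quad)          # e.g. {'A': 1, 'S': 1, 'T': 1, 'P': 1}
        coeff = factorial(sum(counts.values()))
        for c in counts.values():
            coeff //= factorial(c)
        prob = 1.0
        for residue, count in counts.items():
            prob *= freqs[residue] ** count
        return coeff * prob

    for quad, observed in quadruplets:
        expected = multinomial_pmf(quad, amino_acids) * observed
        print(quad, observed, expected)
    ```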

    Read the article

  • custom listview adapter getView method being called multiple times, and in no coherent order

    - by edzillion
    I have a custom list adapter: class ResultsListAdapter extends ArrayAdapter<RecordItem> { in the overridden 'getView' method I do a print to check what position is and whether it is a convertView or not: @Override public View getView(int position, View convertView, ViewGroup parent) { System.out.println("getView " + position + " " + convertView); The output of this (when the list is first displayed, no user input as yet) 04-11 16:24:05.860: INFO/System.out(681): getView 0 null 04-11 16:24:29.020: INFO/System.out(681): getView 1 android.widget.RelativeLayout@43d415d8 04-11 16:25:48.070: INFO/System.out(681): getView 2 android.widget.RelativeLayout@43d415d8 04-11 16:25:49.110: INFO/System.out(681): getView 3 android.widget.RelativeLayout@43d415d8 04-11 16:25:49.710: INFO/System.out(681): getView 0 android.widget.RelativeLayout@43d415d8 04-11 16:25:50.251: INFO/System.out(681): getView 1 null 04-11 16:26:01.300: INFO/System.out(681): getView 2 null 04-11 16:26:02.020: INFO/System.out(681): getView 3 null 04-11 16:28:28.091: INFO/System.out(681): getView 0 null 04-11 16:37:46.180: INFO/System.out(681): getView 1 android.widget.RelativeLayout@43cff8f0 04-11 16:37:47.091: INFO/System.out(681): getView 2 android.widget.RelativeLayout@43cff8f0 04-11 16:37:47.730: INFO/System.out(681): getView 3 android.widget.RelativeLayout@43cff8f0 AFAIK, though I couldn't find it stated explicitly, getView() is only called for visible rows. Since my app starts with four visible rows at least the position numbers cycling from 0-3 makes sense. But the rest is a mess: Why is getview called for each row four times? Where are these convertViews coming from when I haven't scrolled yet? I did a bit of reseach, and without getting a good answer, I did notice that people were associating this issue with layout issues. 
So in case, here's the layout that contains the list: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_height="fill_parent" android:layout_width="fill_parent" android:orientation="vertical" > <TextView android:id="@+id/pageDetails" android:layout_width="fill_parent" android:layout_height="wrap_content" /> <ListView android:id="@+id/list" android:layout_width="fill_parent" android:layout_height="wrap_content" android:drawSelectorOnTop="false" /> </LinearLayout> and the layout of each individual row: <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="108dp" android:padding="4dp"> <ImageView android:id="@+id/thumb" android:layout_width="120dp" android:layout_height="fill_parent" android:layout_alignParentTop="true" android:layout_alignParentBottom="true" android:layout_alignParentLeft="true" android:layout_marginRight="8dp" android:src="@drawable/loading" /> <TextView android:id="@+id/price" android:layout_width="wrap_content" android:layout_height="18dp" android:layout_toRightOf="@id/thumb" android:layout_alignParentBottom="true" android:singleLine="true" /> <TextView android:id="@+id/date" android:layout_width="wrap_content" android:layout_height="18dp" android:layout_alignParentBottom="true" android:layout_alignParentRight="true" android:paddingRight="4dp" android:singleLine="true" /> <TextView android:id="@+id/title" android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="17dp" android:layout_toRightOf="@id/thumb" android:layout_alignParentRight="true" android:layout_alignParentTop="true" android:paddingRight="4dp" android:layout_alignWithParentIfMissing="true" android:gravity="center" /> Thank you for your time

    Read the article

  • practical security ramifications of increasing WCF clock skew to more than an hour

    - by Andrew Patterson
    I have written a WCF service that returns 'semi-private' data concerning peoples name, addresses and phone numbers. By semi-private, I mean that there is a username and password to access the data, and the data is meant to be secured in transit. However, IMHO noone is going to expend any energy trying to obtain the data, as it is mostly available in the public phone book anyway etc. At some level, the security is a bit of security 'theatre' to tick some boxes imposed on us by government entities. The client end of the service is an application which is given out to registered 'users' to run within their own IT setups. We have no control over the IT of the users - and in fact they often tell us to 'go jump' if we put too many requirements on their systems. One problem we have been encountering is numerous users that have system clocks that are not accurate. This can either be caused by a genuine slow/fast clocks, or more than likely a timezone or daylight savings zone error (putting their machine an hour off the 'real' time). A feature of the WCF bindings we are using is that they rely on the notion of time to detect replay attacks etc. <wsHttpBinding> <binding name="normalWsBinding" maxBufferPoolSize="524288" maxReceivedMessageSize="655360"> <reliableSession enabled="false" /> <security mode="Message"> <message clientCredentialType="UserName" negotiateServiceCredential="false" algorithmSuite="Default" establishSecurityContext="false" /> </security> </binding> </wsHttpBinding> The inaccurate client clocks cause security exceptions to be thrown and unhappy users. Other than suggesting users correct their clocks, we know that we can increase the clock skew of the security bindings. http://www.danrigsby.com/blog/index.php/2008/08/26/changing-the-default-clock-skew-in-wcf/ My question is, what are the real practical security ramifications of increasing the skew to say 2 hours? If an attacker can perform some sort of replay attack, why would a clock skew window of 5 minutes be necessarily safer than 2 hours? I presume performing any attack with security mode of 'message' requires more than just capturing some data at a proxy and sending the data back in again to 'replay' the call? In a situation like mine where data is only 'read' by the users, are there indeed any security ramifications at all to allowing 'replay' attacks?
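    For intuition on the replay question, here is a generic sketch of how timestamp-plus-nonce replay detection usually works (an illustration of the general mechanism, not WCF's actual internals): widening the accepted clock skew mostly widens the window over which the receiver must remember message identifiers; it does not by itself make an exact replay valid.

    ```python
    # Generic timestamp + nonce replay check. A bigger max_skew means a
    # bigger nonce cache and a longer exposure window if that cache is
    # ever lost or restarted, not automatic acceptance of replays.
    import time

    class ReplayGuard:
        def __init__(self, max_skew_seconds: float):
            self.max_skew = max_skew_seconds
            self.seen = {}                    # nonce -> message timestamp

        def accept(self, nonce: str, msg_timestamp: float) -> bool:
            now = time.time()
            if abs(now - msg_timestamp) > self.max_skew:
                return False                  # outside the allowed skew
            cutoff = now - self.max_skew
            self.seen = {n: t for n, t in self.seen.items() if t >= cutoff}
            if nonce in self.seen:
                return False                  # exact replay within window
            self.seen[nonce] = msg_timestamp
            return True

    guard = ReplayGuard(max_skew_seconds=2 * 3600)   # the proposed 2 hours
    print(guard.accept("msg-123", time.time()))      # True
    print(guard.accept("msg-123", time.time()))      # False: replay caught
    ```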

    Read the article

  • Project References DLL version hell

    - by Mr Shoubs
    We're having problems getting Visual Studio to pick up the latest version of a DLL from one of our projects. We have multiple class library projects (e.g. BusinessLogic, ReportData) and a number of web services; each has a reference to a Connectivity DLL we've written (this reference to the Connectivity DLL is the problem). We always point references to the DLL in the bin/debug folder (which is where we always build to for any given project), and all custom DLL references have CopyLocal = True and SpecificVersion = False. ReportData has a reference to BusinessLogic (which also has a reference to Connectivity - I don't see why this should cause a problem, but thought it is worth mentioning). The weird thing is, when you click "Add Reference" and browse to Connectivity/bin/debug, hovering the mouse over the DLL file shows the correct (latest) version (version and file version are always incremented together), but when you click OK, a previous version number is pulled through. Even when I look in the current project's debug folder (where Copy Local would put the DLL after compiling), it shows the latest version number. NOWHERE can I find the previous version of the DLL outside of Visual Studio, but in that project's references it has the old version - even though the path is correct. I'm at a loss as to where it might be getting the old versions from, or even why it wants that one. This is possibly the most frustrating problem I have ever come across. Does anyone know how to ensure the latest version is pulled through (preferably automatically or on compile)? EDIT: Although it's not exactly the scenario I'm dealing with, I was reading this article and somewhere it mentions the CLR ignoring revision numbers. Understandable (even though this hasn't been a problem before - we're on revision 39), so I thought I would update the build number; still didn't work. In a vain attempt I thought I would update the minor version number and see if that made any difference. I'm not saying this is the answer, as I have to check quite a few things first, but on the face of it, this seems to have solved my problem... Further edit: In other class libraries this seems to have solved the problem, however in a test Windows application it still pulls a previous version through :( If I increment the minor version number again, the same problem comes back and I am left with the wrong version being pulled through. Further edit: I created an entirely new project, added a reference and still had the exact same problem. This suggests the problem is restricted to the project I am referencing. Wish I knew why! Has anyone had this problem before and know how to get around it? HELP!

    Read the article

  • spoj: runlength

    - by user285825
    For RLM problem of SPOJ: This is the problem: "Run-length encoding of a number replaces a run of digits (that is, a sequence of consecutive equivalent digits) with the number of digits followed by the digit itself. For example, 44455 would become 3425 (three fours, two fives). Note that run-length encoding does not necessarily shorten the length of the data: 11 becomes 21, and 42 becomes 1412. If a number has more than nine consecutive digits of the same type, the encoding is done greedily: each run grabs as many digits as it can, so 111111111111111 is encoded as 9161. Implement an integer arithmetic calculator that takes operands and gives results in run-length format. You should support addition, subtraction, multiplication, and division. You won't have to divide by zero or deal with negative numbers. Input/Output The input will consist of several test cases, one per line. For each test case, compute the run-length mathematics expression and output the original expression and the result, as shown in the examples. The (decimal) representation of all operands and results will fit in signed 64-bit integers." These are my testcases: input: 11 + 11 988726 - 978625 12 * 41 1124 / 1112 13 * 33 15 / 16 19222317121013161815142715181017 + 10 10 + 19222317121013161815142715181017 19222317121013161815142715181017 / 19222317121013161815142715181017 19222317121013161815142715181017 / 11 11 / 19222317121013161815142715181017 19222317121013161815142715181017 / 12 12 / 19222317121013161815142715181017 19222317121013161815142715181017 / 141621161816101118141217131817191014 141621161816101118141217131817191014 / 19222317121013161815142715181017 19222317121013161815142715181017 / 141621161816101118141217131817191013 141621161816101118141217131817191013 / 19222317121013161815142715181017 19222317121013161815142715181017 * 11 11 * 19222317121013161815142715181017 19222317121013161815142715181017 * 10 10 * 19222317121013161815142715181017 19222317121013161815142715181017 - 10 19222317121013161815142715181017 - 19222317121013161815142715181017 19222317121013161815142715181017 - 141621161816101118141217131817191014 19222317121013161815142715181017 - 141621161816101118141217131817191013 141621161816101118141217131817191013 + 141621161816101118141217131817191013 141621161816101118141217131817191013 + 141621161816101118141217131817191014 141621161816101118141217131817191014 + 141621161816101118141217131817191013 141621161816101118141217131817191014 + 10 10 + 141621161816101118141217131817191013 141621161816101118141217131817191013 + 11 11 + 141621161816101118141217131817191013 141621161816101118141217131817191013 * 12 12 * 141621161816101118141217131817191013 141621161816101118141217131817191014 - 141621161816101118141217131817191014 141621161816101118141217131817191013 - 141621161816101118141217131817191013 141621161816101118141217131817191013 - 10 141621161816101118141217131817191014 - 11 141621161816101118141217131817191014 - 141621161816101118141217131817191013 141621161816101118141217131817191014 / 141621161816101118141217131817191014 141621161816101118141217131817191014 / 141621161816101118141217131817191013 141621161816101118141217131817191013 / 141621161816101118141217131817191014 141621161816101118141217131817191013 / 141621161816101118141217131817191013 141621161816101118141217131817191014 * 11 11 * 141621161816101118141217131817191014 141621161816101118141217131817191014 / 11 11 / 141621161816101118141217131817191014 10 + 10 10 + 11 10 + 15 15 + 10 11 + 10 11 + 10 10 - 10 15 - 10 10 * 10 10 * 15 15 * 
10 10 / 111213 output: 11 + 11 = 12 988726 - 978625 = 919111 12 * 41 = 42 1124 / 1112 = 1112 13 * 33 = 39 15 / 16 = 10 19222317121013161815142715181017 + 10 = 19222317121013161815142715181017 10 + 19222317121013161815142715181017 = 19222317121013161815142715181017 19222317121013161815142715181017 / 19222317121013161815142715181017 = 11 19222317121013161815142715181017 / 11 = 19222317121013161815142715181017 11 / 19222317121013161815142715181017 = 10 19222317121013161815142715181017 / 12 = 141621161816101118141217131817191013 12 / 19222317121013161815142715181017 = 10 19222317121013161815142715181017 / 141621161816101118141217131817191014 = 11 141621161816101118141217131817191014 / 19222317121013161815142715181017 = 10 19222317121013161815142715181017 / 141621161816101118141217131817191013 = 12 141621161816101118141217131817191013 / 19222317121013161815142715181017 = 10 19222317121013161815142715181017 * 11 = 19222317121013161815142715181017 11 * 19222317121013161815142715181017 = 19222317121013161815142715181017 19222317121013161815142715181017 * 10 = 10 10 * 19222317121013161815142715181017 = 10 19222317121013161815142715181017 - 10 = 19222317121013161815142715181017 19222317121013161815142715181017 - 19222317121013161815142715181017 = 10 19222317121013161815142715181017 - 141621161816101118141217131817191014 = 141621161816101118141217131817191013 19222317121013161815142715181017 - 141621161816101118141217131817191013 = 141621161816101118141217131817191014 141621161816101118141217131817191013 + 141621161816101118141217131817191013 = 19222317121013161815142715181016 141621161816101118141217131817191013 + 141621161816101118141217131817191014 = 19222317121013161815142715181017 141621161816101118141217131817191014 + 141621161816101118141217131817191013 = 19222317121013161815142715181017 141621161816101118141217131817191014 + 10 = 141621161816101118141217131817191014 10 + 141621161816101118141217131817191013 = 141621161816101118141217131817191013 141621161816101118141217131817191013 + 11 = 141621161816101118141217131817191014 11 + 141621161816101118141217131817191013 = 141621161816101118141217131817191014 141621161816101118141217131817191013 * 12 = 19222317121013161815142715181016 12 * 141621161816101118141217131817191013 = 19222317121013161815142715181016 141621161816101118141217131817191014 - 141621161816101118141217131817191014 = 10 141621161816101118141217131817191013 - 141621161816101118141217131817191013 = 10 141621161816101118141217131817191013 - 10 = 141621161816101118141217131817191013 141621161816101118141217131817191014 - 11 = 141621161816101118141217131817191013 141621161816101118141217131817191014 - 141621161816101118141217131817191013 = 11 141621161816101118141217131817191014 / 141621161816101118141217131817191014 = 11 141621161816101118141217131817191014 / 141621161816101118141217131817191013 = 11 141621161816101118141217131817191013 / 141621161816101118141217131817191014 = 10 141621161816101118141217131817191013 / 141621161816101118141217131817191013 = 11 141621161816101118141217131817191014 * 11 = 141621161816101118141217131817191014 11 * 141621161816101118141217131817191014 = 141621161816101118141217131817191014 141621161816101118141217131817191014 / 11 = 141621161816101118141217131817191014 11 / 141621161816101118141217131817191014 = 10 10 + 10 = 10 10 + 11 = 11 10 + 15 = 15 15 + 10 = 15 11 + 10 = 11 11 + 10 = 11 10 - 10 = 10 15 - 10 = 15 10 * 10 = 10 10 * 15 = 10 15 * 10 = 10 10 / 111213 = 10 I am getting consistently wrong answer. 
I generated the above test cases trying to make them as representative as possible (boundary conditions, etc.). I am not sure how to test it further. Some guidance would be really appreciated.
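
    One way to cross-check the expected outputs above is to decode each operand into a plain 64-bit integer, apply ordinary integer arithmetic, and re-encode the result. Below is a minimal C# sketch of such helpers (the class and method names Rlm, Decode and Encode are mine, not part of any submitted solution):

        using System.Text;

        static class Rlm
        {
            // Decode a run-length string such as "3425" into its plain decimal
            // form ("44455"), then parse it as a signed 64-bit integer.
            public static long Decode(string rlm)
            {
                var sb = new StringBuilder();
                for (int i = 0; i + 1 < rlm.Length; i += 2)
                {
                    int count = rlm[i] - '0';   // run length (1-9)
                    char digit = rlm[i + 1];    // the repeated digit
                    sb.Append(digit, count);
                }
                return long.Parse(sb.ToString());
            }

            // Encode a non-negative integer back into run-length form, splitting
            // runs longer than nine digits greedily (nine at a time).
            public static string Encode(long value)
            {
                string s = value.ToString();
                var sb = new StringBuilder();
                int i = 0;
                while (i < s.Length)
                {
                    int j = i;
                    while (j < s.Length && s[j] == s[i] && j - i < 9) j++;
                    sb.Append(j - i).Append(s[i]);
                    i = j;
                }
                return sb.ToString();
            }
        }

    For example, Rlm.Encode(Rlm.Decode("12") * Rlm.Decode("41")) yields "42", matching the expected line 12 * 41 = 42; division is plain integer division, and the problem guarantees no negative results and no division by zero.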

    Read the article

  • C# Confusing Results from Performance Test

    - by aip.cd.aish
    I am currently working on an image processing application. The application captures images from a webcam and then does some processing on them. The app needs to be real-time responsive (ideally < 50 ms to process each request). I have been doing some timing tests on the code I have and I found something very interesting (see below). clearLog(); log("Log cleared"); camera.QueryFrame(); camera.QueryFrame(); log("Camera buffer cleared"); Sensor s = t.val; log("Sx: " + s.X + " Sy: " + s.Y); Image<Bgr, Byte> cameraImage = camera.QueryFrame(); log("Camera output acquired for processing"); Each time log is called, the time elapsed since the beginning of processing is displayed. Here is my log output: [3 ms]Log cleared [41 ms]Camera buffer cleared [41 ms]Sx: 589 Sy: 414 [112 ms]Camera output acquired for processing The timings are computed using a Stopwatch from System.Diagnostics. QUESTION 1 I find this slightly interesting: when the same method is called twice it executes in ~40 ms, yet the single call after that takes longer (~70 ms). Assigning the value can't really be taking that long, right? QUESTION 2 Also, the timing for each step recorded above varies from run to run. The values for some steps are sometimes as low as 0 ms and sometimes as high as 100 ms, though most of the numbers seem to be relatively consistent. I guess this may be because the CPU was used by some other process in the meantime? (If this is for some other reason, please let me know.) Is there some way to ensure that when this function runs it gets the highest priority, so that the speed-test results are consistently low (in terms of time)? EDIT I changed the code to remove the two blank query frames from above, so the code is now: clearLog(); log("Log cleared"); Sensor s = t.val; log("Sx: " + s.X + " Sy: " + s.Y); Image<Bgr, Byte> cameraImage = camera.QueryFrame(); log("Camera output acquired for processing"); The timing results are now: [2 ms]Log cleared [3 ms]Sx: 589 Sy: 414 [5 ms]Camera output acquired for processing The next steps now take longer (sometimes the next step jumps to 20-30 ms, while it was previously almost instantaneous). I am guessing this is due to CPU scheduling. Is there some way I can ensure the CPU does not get scheduled to do something else while it is running through this code?
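
    On the priority question: a process can ask Windows for a higher scheduling priority, which reduces (but never eliminates) interference from other processes. A minimal sketch, assuming the timing harness lives in its own console app; the camera.QueryFrame() mentioned in the comment is just a stand-in for the code under test:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class TimingHarness
        {
            static void Main()
            {
                // Ask for a higher scheduling priority; High (or RealTime) can
                // starve other processes, so use it only while measuring.
                Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
                Thread.CurrentThread.Priority = ThreadPriority.Highest;

                var sw = Stopwatch.StartNew();
                // ... call the code under test here, e.g. camera.QueryFrame() ...
                Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
            }
        }

    Even at high priority, the first QueryFrame after an idle period can block while the camera driver fills its buffer, which would be consistent with the ~40 ms pairs seen above.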

    Read the article

  • Code Complete 2ed, composition and delegation.

    - by Arlukin
    Hi there. After a couple of weeks reading on this forum I thought it was time for me to do my first post. I'm currently rereading Code Complete. I think it's 15 years since the last time, and I find that I still can't write code ;-) Anyway, on page 138 of Code Complete you find this coding-horror example (I have removed some of the code): class Employee { public: FullName GetName() const; Address GetAddress() const; PhoneNumber GetWorkPhone() const; ... bool IsZipCodeValid( Address address ); ... private: ... }; What Steve thinks is bad is that the functions are loosely related. Or, as he writes, "There's no logical connection between employees and routines that check ZIP codes, phone numbers or job classifications". OK, I totally agree with him. Maybe something like the example below is better. class ZipCode { public: bool IsValid() const; ... }; class Address { public: ZipCode GetZipCode() const; ... }; class Employee { public: Address GetAddress() const; ... }; When checking if the zip is valid you would need to do something like this: employee.GetAddress().GetZipCode().IsValid(); And that is not good according to the Law of Demeter (http://en.wikipedia.org/wiki/Law_of_Demeter). So if you'd like to remove two of the three dots, you need to use delegation and a couple of wrapper functions, like this: class ZipCode { public: bool IsValid() const; }; class Address { public: ZipCode GetZipCode() const; bool IsZipCodeValid() const { return GetZipCode().IsValid(); } }; class Employee { public: FullName GetName() const; Address GetAddress() const; bool IsZipCodeValid() const { return GetAddress().IsZipCodeValid(); } PhoneNumber GetWorkPhone() const; }; employee.IsZipCodeValid(); But then again you have routines that have no logical connection. I personally think that all three examples in this post are bad. Is there some other way that I haven't thought about? //Daniel
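
    The same delegation variant, transliterated to C# as a sketch (the five-digit rule inside IsValid is a placeholder invented purely for illustration, not a real postal rule):

        public class ZipCode
        {
            private readonly string value;
            public ZipCode(string value) { this.value = value; }

            // ZipCode owns its own validation rule, so no other class needs to know it.
            public bool IsValid() { return value != null && value.Length == 5; }
        }

        public class Address
        {
            private readonly ZipCode zipCode;
            public Address(ZipCode zipCode) { this.zipCode = zipCode; }

            public ZipCode GetZipCode() { return zipCode; }
            public bool IsZipCodeValid() { return zipCode.IsValid(); }   // delegation
        }

        public class Employee
        {
            private readonly Address address;
            public Employee(Address address) { this.address = address; }

            public Address GetAddress() { return address; }
            public bool IsZipCodeValid() { return address.IsZipCodeValid(); }   // delegation
        }

    The caller writes employee.IsZipCodeValid() and each class only talks to its immediate neighbour; the cost is exactly the thin wrappers the post complains about, which is the trade-off being asked about.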

    Read the article

  • Diophantine Equation [closed]

    - by ANIL
    In mathematics, a Diophantine equation (named for Diophantus of Alexandria, a third-century Greek mathematician) is a polynomial equation where the variables can only take on integer values. Although you may not realize it, you have seen Diophantine equations before: one of the most famous Diophantine equations is X^n+Y^n=Z^n. We are not certain that McDonald's knows about Diophantine equations (actually we doubt that they do), but they use them! McDonald's sells Chicken McNuggets in packages of 6, 9 or 20 McNuggets. Thus, it is possible, for example, to buy exactly 15 McNuggets (with one package of 6 and a second package of 9), but it is not possible to buy exactly 16 nuggets, since no non-negative integer combination of 6's, 9's and 20's adds up to 16. To determine if it is possible to buy exactly n McNuggets, one has to solve a Diophantine equation: find non-negative integer values of a, b, and c, such that 6a + 9b + 20c = n. Problem 1 Show that it is possible to buy exactly 50, 51, 52, 53, 54, and 55 McNuggets, by finding solutions to the Diophantine equation. You can solve this in your head, using paper and pencil, or by writing a program. However you choose to solve this problem, list the combinations of 6, 9 and 20 packs of McNuggets you need to buy in order to get each of the exact amounts. Given that it is possible to buy sets of 50, 51, 52, 53, 54 or 55 McNuggets by combinations of 6, 9 and 20 packs, show that it is possible to buy 56, 57, ..., 65 McNuggets. In other words, show how, given solutions for 50-55, one can derive solutions for 56-65. Problem 2 Write an iterative program that finds the largest number of McNuggets that cannot be bought in exact quantity. Your program should print the answer in the following format (where the correct number is provided in place of n): "Largest number of McNuggets that cannot be bought in exact quantity: n" Hints: Hypothesize possible instances of numbers of McNuggets that cannot be purchased exactly, starting with 1. For each possible instance, called n: a. Test if there exist non-negative integers a, b, and c such that 6a+9b+20c = n. (This can be done by looking at all feasible combinations of a, b, and c.) b. If not, n cannot be bought in exact quantity; save n. When you have found six consecutive values of n that do pass the test of having an exact solution, the last answer that was saved (not the last value of n that had a solution) is the correct answer, since you know by the theorem that any amount larger can also be bought in exact quantity.
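
    A minimal C# sketch of the iterative search in Problem 2, following the hint directly (brute-force check of 6a + 9b + 20c = n, stopping after six consecutive buyable values):

        using System;

        class McNuggets
        {
            // True if n = 6a + 9b + 20c has a solution in non-negative integers.
            static bool CanBuy(int n)
            {
                for (int a = 0; 6 * a <= n; a++)
                    for (int b = 0; 6 * a + 9 * b <= n; b++)
                        if ((n - 6 * a - 9 * b) % 20 == 0)
                            return true;
                return false;
            }

            static void Main()
            {
                int lastFailure = 0, consecutive = 0;
                for (int n = 1; consecutive < 6; n++)
                {
                    if (CanBuy(n)) consecutive++;
                    else { consecutive = 0; lastFailure = n; }
                }
                Console.WriteLine(
                    "Largest number of McNuggets that cannot be bought in exact quantity: {0}",
                    lastFailure);
            }
        }

    Six consecutive buyable values are enough because adding one more 6-pack to each of them covers every larger amount.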

    Read the article

  • int, short, byte performance in back-to-back for-loops

    - by runrunraygun
    (background: http://stackoverflow.com/questions/1097467/why-should-i-use-int-instead-of-a-byte-or-short-in-c) To satisfy my own curiosity about the pros and cons of using the "appropriate size" integer vs the "optimized" integer, I wrote the following code, which reinforced what I previously held true about int performance in .NET (and which is explained in the link above): that it is optimized for int rather than for short or byte. DateTime t; long a, b, c; t = DateTime.Now; for (int index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } a = DateTime.Now.Ticks - t.Ticks; t = DateTime.Now; for (short index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } b = DateTime.Now.Ticks - t.Ticks; t = DateTime.Now; for (byte index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } c = DateTime.Now.Ticks - t.Ticks; Console.WriteLine(a.ToString()); Console.WriteLine(b.ToString()); Console.WriteLine(c.ToString()); This gives roughly consistent results in the area of... ~950000 ~2000000 ~1700000 which is in line with what I would expect to see. However, when I try repeating the loops for each data type like this... t = DateTime.Now; for (int index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } for (int index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } for (int index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); } a = DateTime.Now.Ticks - t.Ticks; the numbers are more like... ~4500000 ~3100000 ~300000 which I find puzzling. Can anyone offer an explanation? NOTE: In the interest of comparing like for like, I've limited the loops to 127 because of the range of the byte value type. Also, this is an act of curiosity, not production-code micro-optimization.
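
    Two things dominate these numbers before the counter type ever matters: DateTime.Now only advances in steps of roughly 15 ms, and Console.WriteLine inside the timed region costs far more than a 127-iteration loop. A sketch of a cleaner harness (Stopwatch, a warm-up pass so JIT compilation is not measured, no I/O inside the timed loop); the same method can be copied with short and byte counters for comparison:

        using System;
        using System.Diagnostics;

        class LoopBench
        {
            static long TimeIntLoop(int iterations)
            {
                long sum = 0;                       // keep a result so the loop cannot be optimized away
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++) sum += i;
                sw.Stop();
                Console.WriteLine(sum);             // I/O happens outside the timed region
                return sw.ElapsedTicks;
            }

            static void Main()
            {
                const int iterations = 10000000;
                TimeIntLoop(iterations);            // warm-up run (JIT, caches)
                Console.WriteLine("int loop: {0} ticks", TimeIntLoop(iterations));
            }
        }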

    Read the article

  • How to iterate over all the page breaks in an Excel 2003 worksheet via COM

    - by Martin
    I've been trying to retrieve the locations of all the page breaks on a given Excel 2003 worksheet over COM. Here's an example of the kind of thing I'm trying to do: Excel::HPageBreaksPtr pHPageBreaks = pSheet->GetHPageBreaks(); long count = pHPageBreaks->Count; for (long i=0; i < count; ++i) { Excel::HPageBreakPtr pHPageBreak = pHPageBreaks->GetItem(i+1); Excel::RangePtr pLocation = pHPageBreak->GetLocation(); printf("Page break at row %d\n", pLocation->Row); pLocation.Release(); pHPageBreak.Release(); } pHPageBreaks.Release(); I expect this to print out the row numbers of each of the horizontal page breaks in pSheet. The problem I'm having is that although count correctly indicates the number of page breaks in the worksheet, I can only ever seem to retrieve the first one. On the second run through the loop, calling pHPageBreaks->GetItem(i) throws an exception, with error number 0x8002000b, "invalid index". Attempting to use pHPageBreaks->Get_NewEnum() to get an enumerator to iterate over the collection also fails with the same error, immediately on the call to Get_NewEnum(). I've looked around for a solution, and the closest thing I've found so far is http://support.microsoft.com/kb/210663/en-us. I have tried activating various cells beyond the page breaks, including the cells just beyond the range to be printed, as well as the lower-right cell (IV65536), but it didn't help. If somebody can tell me how to get Excel to return the locations of all of the page breaks in a sheet, that would be awesome! Thank you. @Joel: Yes, I have tried displaying the user interface, and then setting ScreenUpdating to true - it produced the same results. Also, I have since tried combinations of setting pSheet->PrintArea to the entire worksheet and/or calling pSheet->ResetAllPageBreaks() before my call to get the HPageBreaks collection, which didn't help either. @Joel: I've used pSheet->UsedRange to determine the row to scroll past, and Excel does scroll past all the horizontal breaks, but I'm still having the same issue when I try to access the second one. Unfortunately, switching to Excel 2007 did not help either.
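
    For comparison, a hedged sketch of the most commonly suggested workaround, written with C# 4 Excel interop rather than raw COM: force Excel to paginate the sheet (for example by switching the active window to page-break preview) before enumerating HPageBreaks. I have not verified this against Excel 2003, the file path is a placeholder, and the same View switch should be reachable from C++ through the Window object:

        using Excel = Microsoft.Office.Interop.Excel;

        class PageBreakDump
        {
            static void Main()
            {
                var app = new Excel.Application();
                app.Visible = true;   // pagination tends to be reliable only for a rendered sheet
                Excel.Workbook wb = app.Workbooks.Open(@"C:\path\to\book.xls");
                Excel.Worksheet ws = (Excel.Worksheet)wb.Worksheets[1];

                // Switching to page-break preview forces Excel to compute all breaks,
                // which is the usual workaround for the "invalid index" error.
                app.ActiveWindow.View = Excel.XlWindowView.xlPageBreakPreview;

                foreach (Excel.HPageBreak hpb in ws.HPageBreaks)
                    System.Console.WriteLine("Page break at row {0}", hpb.Location.Row);

                app.ActiveWindow.View = Excel.XlWindowView.xlNormalView;
                wb.Close(false);
                app.Quit();
            }
        }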

    Read the article

  • Where does the version number come from?

    - by Robert Schneider
    I have a version control system (e.g. Subversion) and now I'd like to set up a build process. Now I have to create a version number and insert it into the system. But where does the version number come from, and where does it go? Assume I want to use the common <major>.<minor>.<bugfix/revision> scheme. Should I pass a number to the build script? Or should I pass arguments like increaseMajor, increaseMinor, increaseRevision? Or would you recommend creating a branch with the number, which would be detected by the build script? I could imagine that the major and minor version numbers have to be put in manually somewhere. The revision number could be increased automatically. But I still don't know where I would place the major and minor numbers. In my case I have some PHP files that I would like to zip, but before that I have to insert the version number into the PHP files. I have edited this post to try to make my request clearer: I do not use Subversion, that was just an example. And I don't want to discuss the version number scheme. Imagine I want to create version 3.5.0 or 3.5.1. Would I pass this version number to a build script? Would the script create the branch in the repository with this number, or would it expect that someone has already created this branch manually? Or would the build script look for the name of the branch (e.g. '3.5.1') and use it for further things? And does the version number come from my brain or is it created automatically (I guess the major/minor numbers come from my little brain and the revision number is generated)? Or would you place the number into a file that gets checked into the repository? I guess if I used a release management tool I would insert the version number there, but I don't use one yet.
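
    Whatever the answer on workflow, the mechanical half is small. A sketch in C#, assuming the human-chosen number is simply passed as a command-line argument and the PHP files carry a @VERSION@ placeholder (both conventions are invented here purely for illustration):

        using System;
        using System.IO;

        class StampVersion
        {
            // Usage: StampVersion.exe 3.5.1 path\to\staging-copy
            static void Main(string[] args)
            {
                string version = args[0];
                string stagingDir = args[1];

                // Replace the placeholder in every PHP file of the staging copy,
                // so the number lives in exactly one place: the build's argument.
                foreach (string file in Directory.GetFiles(stagingDir, "*.php", SearchOption.AllDirectories))
                {
                    string text = File.ReadAllText(file);
                    File.WriteAllText(file, text.Replace("@VERSION@", version));
                }
                Console.WriteLine("Stamped version {0}", version);
            }
        }

    With that in place the remaining question is purely organisational: major/minor come from a person (a tag, a branch name, or the argument above), while a build counter or repository revision can fill the last slot automatically.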

    Read the article

  • Complex.pm taking up a lot of time

    - by synapz
    While trying to profile my perl program, I find that Complex.pm is taking up a lot of time with what looks like some kind of warning. Also, my code shouldn't have any complex numbers being generated or used, so I am not sure what it is doing in Complex.pm, anyway. Here's the FastProf output for the most expensive lines: /usr/lib/perl5/5.8.8/Math/Complex.pm:182 1.55480 276232: _cannot_make("real part", $re) unless $re =~ /^$gre$/; /usr/lib/perl5/5.8.8/Math/Complex.pm:310 1.01132 453641: sub cartesian {$_[0]->{c_dirty} ? /usr/lib/perl5/5.8.8/Math/Complex.pm:315 0.97497 562188: sub set_cartesian { $_[0]->{p_dirty}++; $_[0]->{c_dirty} = 0; /usr/lib/perl5/5.8.8/Math/Complex.pm:189 0.86302 276232: return $self; /usr/lib/perl5/5.8.8/Math/Complex.pm:1332 0.85628 293660: $self->{display_format} = { %display_format }; /usr/lib/perl5/5.8.8/Math/Complex.pm:185 0.81529 276232: _cannot_make("imaginary part", $im) unless $im =~ /^$gre$/; /usr/lib/perl5/5.8.8/Math/Complex.pm:1316 0.78749 293660: my %display_format = %DISPLAY_FORMAT; /usr/lib/perl5/5.8.8/Math/Complex.pm:1335 0.69534 293660: %{$self->{display_format}} : /usr/lib/perl5/5.8.8/Math/Complex.pm:186 0.66697 276232: $self->set_cartesian([$re, $im ]); /usr/lib/perl5/5.8.8/Math/Complex.pm:170 0.62790 276232: my $self = bless {}, shift; /usr/lib/perl5/5.8.8/Math/Complex.pm:172 0.56733 276232: if (@_ == 0) { /usr/lib/perl5/5.8.8/Math/Complex.pm:316 0.53179 281094: $_[0]->{'cartesian'} = $_[1] } /usr/lib/perl5/5.8.8/Math/Complex.pm:1324 0.48768 293660: if (@_ == 1) { /usr/lib/perl5/5.8.8/Math/Complex.pm:1319 0.44835 293660: if (exists $self->{display_format}) { /usr/lib/perl5/5.8.8/Math/Complex.pm:1318 0.40355 293660: if (ref $self) { # Called as an object method /usr/lib/perl5/5.8.8/Math/Complex.pm:187 0.39950 276232: $self->display_format('cartesian'); /usr/lib/perl5/5.8.8/Math/Complex.pm:1315 0.39312 293660: my $self = shift; /usr/lib/perl5/5.8.8/Math/Complex.pm:1331 0.38087 293660: if (ref $self) { # Called as an object method /usr/lib/perl5/5.8.8/Math/Complex.pm:184 0.35171 276232: $im ||= 0; /usr/lib/perl5/5.8.8/Math/Complex.pm:181 0.34145 276232: if (defined $re) { /usr/lib/perl5/5.8.8/Math/Complex.pm:171 0.33492 276232: my ($re, $im); /usr/lib/perl5/5.8.8/Math/Complex.pm:390 0.20658 128280: my ($z1, $z2, $regular) = @_; /usr/lib/perl5/5.8.8/Math/Complex.pm:391 0.20631 128280: if ($z1->{p_dirty} == 0 and ref $z2 and $z2->{p_dirty} == 0) { Thanks for any help!

    Read the article
