Search Results

Search found 5942 results on 238 pages for 'total'.


  • Free RAM disappears - Memory leak?

    - by Izzy
    On a freshly started system, free reports about 1.5G of RAM used (8G RAM altogether, Ubuntu 12.04 with lightdm and Plasma desktop, one konsole window started). With the apps I use running, it still consumes no more than 2G. However, after the system has been running for a couple of days, more and more of my free RAM disappears -- without showing up in the list of used apps: while smem --pie=name reports less than 20% used (and 80% available), everything else says differently. free -m, for example, reports on about day 7:

                     total       used       free     shared    buffers     cached
        Mem:          7459       7013        446          0        178        997
        -/+ buffers/cache:       5836       1623
        Swap:         9536        296       9240

    (so you can see, it's not the buffers or the cache). Today this finally ended with the system crashing completely: the window manager gone, apps "hanging in the air" (frameless) -- and a popup notifying me about "too many open files". Syslog reports:

        kernel: [856738.020829] VFS: file-max limit 752838 reached

    So I closed those applications I was able to close, and killed X using Ctrl-Alt-Backspace. X tried to come up again after that with failsafeX, but was unable to do so as it could no longer detect its configuration. So I switched to a console using Ctrl-Alt-F2, captured all the information I could think of (vmstat, free, smem, /proc/meminfo, lsof, ps aux), and finally rebooted. X again came up with failsafeX; this time I told it to "recover from my backed-up configuration", then switched to a console and successfully used startx to bring up the graphical environment. I have no real clue as to what is causing this issue -- though it must have to do either with X itself, or with some user processes running on X -- as after killing X, the free -m output looked like this:

                     total       used       free     shared    buffers     cached
        Mem:          7459       2677       4781          0         62        419
        -/+ buffers/cache:       2195       5263
        Swap:         9536         59       9477

    (~3.5GB freed) -- compare with the output after a fresh start:

                     total       used       free     shared    buffers     cached
        Mem:          7459       1483       5975          0         63        730
        -/+ buffers/cache:        689       6769
        Swap:         9536          0       9536

    Two more helpful outputs are provided by memstat -u. Shortly before the crash:

        User       Count     Swap      USS      PSS      RSS
        mail           1        0      200      207      616
        whoopsie       1      764      740      817     2300
        colord         1     3200      836      894     2156
        root          62    70404   352996   382260   569920
        izzy          80   177508  1465416  1519266  1851840

    After having X killed:

        User       Count     Swap      USS      PSS      RSS
        mail           1        0      184      188      356
        izzy           1     1400      708      739     1080
        whoopsie       1      848      668      826     1772
        colord         1     3204      804      888     1728
        root          62    54876   131708   149950   267860

    And after a restart, back in X:

        User       Count     Swap      USS      PSS      RSS
        mail           1        0      212      217      628
        whoopsie       1        0     1536     1880     5096
        colord         1        0     3740     4217     7936
        root          54        0   148668   180911   345132
        izzy          47        0   370928   437562   915056

    Edit: Just added two graphs from my monitoring system. Interesting to see: every time there's a "jump" in memory consumption, CPU peaks as well. Just found this right now -- and it reminds me of another indicator pointing to X itself: often when returning to my machine and unlocking the screen, I found something doing heavy work on my CPU. Checking with top, it always turned out to be:

        /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none

    So after this long explanation, finally my questions: What could be the possible causes? How can I better identify the involved processes/applications? What steps could be taken to avoid this behaviour -- short of rebooting the machine every X days?
    I was running 8.04 (Hardy) for about 5 years on my old machine, never having experienced the like (always more than 100 days uptime before rebooting for e.g. kernel updates). This is now a completely new machine with a fresh install of 12.04. In case it matters, some specs: AMD A4-3400 APU with Radeon(tm) HD Graphics, using the open-source ati/radeon driver (so no fglrx installed), 8GB RAM, WDC WD1002FAEX-0 hdd (1TB), Asus F1A75-V Evo mainboard. Ubuntu 12.04 64-bit with KDE4/Plasma. Apps usually open more or less permanently include Evolution, Firefox, konsole (with Midnight Commander running inside, about 4 tabs), and LibreOffice -- plus occasionally Calibre, Gimp and Moneyplex (banking software I've been using for almost 20 years now, in a version which did fine on Hardy).
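
    For the second question (identifying the involved processes), a hedged sketch: log per-process memory periodically so that the jumps visible in the monitoring graphs can be matched to processes after the fact. The log location and interval are assumptions, not from the post; it assumes smem is installed:

        #!/bin/bash
        # Snapshot per-process memory (sorted by PSS) plus the free totals
        # every 10 minutes; diff consecutive snapshots around a "jump".
        LOG=~/memwatch                 # hypothetical log directory
        mkdir -p "$LOG"
        while true; do
            ts=$(date +%Y%m%d-%H%M)
            smem -c "pid user command uss pss rss" -s pss -r > "$LOG/smem-$ts.txt"
            free -m > "$LOG/free-$ts.txt"
            sleep 600
        done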


  • Fresh Ubuntu Install - Grub not loading

    - by Ryan Sharp
    System:
        Ubuntu 12.04 64-bit
        Windows 7 SP1
        Samsung 64GB SSD - OS'
        Samsung 1TB HDD - Games, /Home, Swap
        WD 300'ishGB HDD - Backup

    Okay, so I'm very frustrated, so please excuse me if I miss anything out, as my head is clouded by anger and impatience. I'll try my best, though. First of all, I'll explain how I got to my predicament. I finally got my new SSD. I installed Windows first, which completed without a hitch. Afterwards, I tried to install Ubuntu, which failed several times due to problems irrelevant to this question, but I mention this to explain my frustration, sorry. Anyway, I finally installed Ubuntu. However, I chose the bootloader to be installed on the same partition as the Ubuntu root partition, as that was what I believed to be the best choice. It was my thinking that it was supposed to go on the same partition and on the SSD, which is my OS drive, though judging by my problem, that was apparently wrong. So I tried to fix it by checking guides and following their directions, but I seem to have messed it up even more. Here is what I get from the fdisk -l command (I also added explanations of what I use each partition for):

        Disk /dev/sda: 64.0 GB, 64023257088 bytes
        255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x324971d1

        Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *     2048     206847     102400   7  HPFS/NTFS/exFAT
        /dev/sda2       208896   48957439   24374272   7  HPFS/NTFS/exFAT
        /dev/sda3     48959486  125044735   38042625   5  Extended
        /dev/sda5     48959488  125044735   38042624  83  Linux

        sda1   --/ Windows Recovery
        sda2   --/ Windows 7
        sda3/5 --/ Ubuntu root [ / ]

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xc0ee6a69

        Device Boot       Start         End     Blocks  Id  System
        /dev/sdb1    1024208894  1953523711  464657409   5  Extended
        /dev/sdb3   *      2048  1024206847  512102400   7  HPFS/NTFS/exFAT
        /dev/sdb5    1024208896  1939851263  457821184  83  Linux
        /dev/sdb6    1939853312  1953523711    6835200  82  Linux swap / Solaris

        sdb3 --/ Partition for Steam games, etc.
        sdb5 --/ Ubuntu Home [ /home ]
        sdb6 --/ Ubuntu Swap

        Partition table entries are not in disk order

        Disk /dev/sdc: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x292eee23

        Device Boot   Start        End     Blocks  Id  System
        /dev/sdc1      2048  625141759  312569856   7  HPFS/NTFS/exFAT

        sdc1 --/ Generic backup

    I also used a boot script that other users suggested, so that I can give more details on my partitions and also where GRUB is located...

        ============================= Boot Info Summary: ===============================

        => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of
           the same hard drive for core.img. core.img is at this location and looks
           for (,msdos5)/boot/grub on this drive.
        => Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of
           the same hard drive for core.img. core.img is at this location and looks
           for (,msdos5)/boot/grub on this drive.
        => Windows is installed in the MBR of /dev/sdc.

    Now that is weird...
    Why would Grub2 be installed on both my SSD and HDD? Even weirder, why is Windows in the MBR of my backup hard drive? Nothing I did should have done that... Anyway, here is the entire output from that script: PASTEBIN. So, to summarize what I need: How can I fix my setup so GRUB loads on startup? How can I clean my partitions to remove unnecessary GRUBs? What did I do wrong, so that I don't do something so daft again? Thank you so much for reading, and I hope you can help me. I've been trying to get a successful setup since Friday, and I'm almost at the point where I'm really tempted to throw my computer out the window out of frustration.
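
    For reference, a hedged sketch of the usual repair from an Ubuntu live CD/USB, assuming (verify with fdisk -l first) that / is on /dev/sda5 and the BIOS boots the SSD -- this is not a verified recipe for this exact machine:

        # mount the Ubuntu root partition and reinstall GRUB to the SSD's MBR
        sudo mount /dev/sda5 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sda
        # then, from the installed system, regenerate the menu
        sudo update-grub

    The stray GRUB in the MBR of /dev/sdb can afterwards be overwritten with a generic boot record (e.g. install-mbr from the "mbr" package, or Windows' bootsect); which tool is appropriate depends on what, if anything, should boot from that drive.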


  • jqGrid with JSON data renders table as empty

    - by jgreep
    I'm trying to create a jqGrid, but the table is empty. The table renders, but the data doesn't show. The data I'm getting back from the PHP call is:

        { "page":"1", "total":1, "records":"10", "rows":[
          {"id":"2:1","cell":["1","image","Chief Scout","Highest Award test","0"]},
          {"id":"2:2","cell":["2","image","Link Badge","When you are invested as a Scout, you may be eligible to receive a Link Badge. (See page 45)","0"]},
          {"id":"2:3","cell":["3","image","Pioneer Scout","Upon completion of requirements, the youth is invested as a Pioneer Scout","0"]},
          {"id":"2:4","cell":["4","image","Voyageur Scout Award","Voyageur Scout Award is the right after Pioneer Scout.","0"]},
          {"id":"2:5","cell":["5","image","Voyageur Citizenship","Learning about and caring for your community.","0"]},
          {"id":"2:6","cell":["6","image","Fish and Wildlife","Demonstrate your knowledge and involvement in fish and wildlife management.","0"]},
          {"id":"2:7","cell":["7","image","Photography","To recognize photography knowledge and skills","0"]},
          {"id":"2:8","cell":["8","image","Recycling","Demonstrate your knowledge and involvement in Recycling","0"]},
          {"id":"2:10","cell":["10","image","Voyageur Leadership ","Show leadership ability","0"]},
          {"id":"2:11","cell":["11","image","World Conservation","World Conservation Badge","0"]}
        ]}

    The JavaScript configuration looks like so:

        $("#"+tableId).jqGrid({
            url:'getAwards.php?id='+classId,
            dataType : 'json',
            mtype:'POST',
            colNames:['Id','Badge','Name','Description',''],
            colModel : [
                {name:'awardId', width:30, sortable:true, align:'center'},
                {name:'badge', width:40, sortable:false, align:'center'},
                {name:'name', width:180, sortable:true, align:'left'},
                {name:'description', width:380, sortable:true, align:'left'},
                {name:'selected', width:0, sortable:false, align:'center'}
            ],
            sortname: "awardId",
            sortorder: "asc",
            pager: $('#'+tableId+'_pager'),
            rowNum:15,
            rowList:[15,30,50],
            caption: 'Awards',
            viewrecords:true,
            imgpath: 'scripts/jqGrid/themes/green/images',
            jsonReader : {
                root: "rows", page: "page", total: "total", records: "records",
                repeatitems: true, cell: "cell", id: "id", userdata: "userdata",
                subgrid: {root:"rows", repeatitems: true, cell:"cell" }
            },
            width: 700,
            height: 200
        });

    The HTML looks like:

        <table class="awardsList" id="awardsList2" class="scroll" name="awardsList" />
        <div id="awardsList2_pager" class="scroll"></div>

    I'm not sure that I needed to define jsonReader, since I've tried to keep to the default. If the PHP code will help, I can post it too.
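
    One likely culprit, offered as an observation rather than the confirmed answer: jqGrid's option is lowercase datatype, so dataType : 'json' is silently ignored and the grid falls back to its default of parsing XML, which leaves the table empty. The table element should also not be self-closed or carry two class attributes. A minimal sketch of both fixes:

        $("#" + tableId).jqGrid({
            url: 'getAwards.php?id=' + classId,
            datatype: 'json',   // lowercase; 'dataType' is not a jqGrid option
            mtype: 'POST',
            colNames: ['Id', 'Badge', 'Name', 'Description', ''],
            colModel: [
                { name: 'awardId',     width: 30,  sortable: true,  align: 'center' },
                { name: 'badge',       width: 40,  sortable: false, align: 'center' },
                { name: 'name',        width: 180, sortable: true,  align: 'left' },
                { name: 'description', width: 380, sortable: true,  align: 'left' },
                { name: 'selected',    width: 0,   sortable: false, align: 'center' }
            ],
            pager: $('#' + tableId + '_pager'),
            viewrecords: true
        });

    and in the HTML, a properly closed table with a single class attribute:

        <table id="awardsList2" class="awardsList scroll"></table>
        <div id="awardsList2_pager" class="scroll"></div>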


  • How do I get folder size with Exchange Web Services 2010 Managed API?

    - by Adam Tuttle
    I'm attempting to use the EWS 2010 Managed API to get the total size of a user's mailbox. I haven't found a web service method that returns this data, so I figured I would try to calculate it. I found one seemingly applicable question on another site about finding mailbox sizes with EWS 2007, but either I'm not understanding what it's asking me to do, or that method just doesn't work with EWS 2010. Noodling around in the code insight, I was able to write what I thought was a method that would traverse the folder structure recursively and produce a combined total for all folders inside the Inbox:

        private int traverseChildFoldersForSize(Folder f)
        {
            int folderSizeSum = 0;
            if (f.ChildFolderCount > 0)
            {
                foreach (Folder c in f.FindFolders(new FolderView(10000)))
                {
                    folderSizeSum += traverseChildFoldersForSize(c);
                }
            }
            folderSizeSum += (int)f.ManagedFolderInformation.FolderSize;
            return folderSizeSum;
        }

    (Assumes there aren't more than 10,000 folders inside a given folder. Figure that's a safe bet...) Unfortunately, this doesn't work. I'm initiating the recursion with this code:

        Folder root = Folder.Bind(svc, WellKnownFolderName.Inbox);
        int totalSize = traverseChildFoldersForSize(root);

    But a NullReferenceException is thrown, essentially saying that [folder].ManagedFolderInformation is a null object reference. For clarity, I also attempted to just get the size of the root folder:

        Console.Write(root.ManagedFolderInformation.FolderSize.ToString());

    This threw the same exception, so I know it's not just that ManagedFolderInformation stops existing once you get to a certain depth in the directory tree. Any ideas on how to get the total size of the user's mailbox? Am I barking up the wrong tree?
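
    ManagedFolderInformation is only populated for managed folders, which would explain the null reference on an ordinary Inbox. A hedged alternative, sketched below and not verified: request the MAPI property PR_MESSAGE_SIZE_EXTENDED (tag 0x0E08) as an extended property on each folder, and let a deep traversal replace the recursion:

        // PR_MESSAGE_SIZE_EXTENDED: total size in bytes of the folder's contents
        ExtendedPropertyDefinition folderSize =
            new ExtendedPropertyDefinition(0x0E08, MapiPropertyType.Long);

        FolderView view = new FolderView(10000)
        {
            PropertySet = new PropertySet(BasePropertySet.IdOnly, folderSize),
            Traversal = FolderTraversal.Deep    // all descendants in one call
        };

        long totalSize = 0;
        foreach (Folder f in svc.FindFolders(WellKnownFolderName.MsgFolderRoot, view))
        {
            object size;
            if (f.TryGetProperty(folderSize, out size))
                totalSize += (long)size;
        }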


  • SQL query to count and list the different counties per zip code

    - by Chris
    I have a SQL Server 2005 table called ZipCode, which has all the US ZIPs in it. For each entry, it lists the zip, county, city, state, and several other details. Some zip codes span multiple cities, and some even span multiple counties; as a result, some zip codes appear many times in the table. I am trying to query the table to see which zip codes go across multiple counties. This is what I have so far:

        select zipcode, count(zipcode) as total, county, state
        from zipcode
        group by zipcode, county, state
        order by zipcode

    Of 19248 records in the result set, here are the first several records returned:

        zipcode  total  county        state
        00501    2      SUFFOLK       NY
        00544    2      SUFFOLK       NY
        00801    3      SAINT THOMAS  VI
        00802    3      SAINT THOMAS  VI
        00803    3      SAINT THOMAS  VI
        00804    3      SAINT THOMAS  VI
        00805    1      SAINT THOMAS  VI
        00820    2      SAINT CROIX   VI
        00821    1      SAINT CROIX   VI
        00822    1      SAINT CROIX   VI
        00823    2      SAINT CROIX   VI
        00824    2      SAINT CROIX   VI

    In this particular example, each zip with a total of two or more happens to be in the table more than once, and it's because the "cityaliasname" (not shown) or some other column differs. But I just want to know which zips are in there more than once because the county column differs. I searched before posting this and I found many questions about counting records, but I could not figure out how to apply them to my problem. Please forgive me if there is already a question whose answer applies to this one.
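
    A hedged sketch of one way to phrase "appears more than once because the county differs": group by zip code alone and keep only the groups with more than one distinct county:

        select zipcode, count(distinct county) as counties
        from zipcode
        group by zipcode
        having count(distinct county) > 1
        order by zipcode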


  • django + uploadify not working

    - by Erico
    Hi, I'm trying to use an example posted on GitHub: http://github.com/tstone/django-uploadify. I'm having trouble getting it to work; can you help me? I followed it step by step, but it does not work. Accessing the URL /upload/, the only thing it returns is "True".

    Part of settings.py:

        import os
        PROJECT_ROOT_PATH = os.path.dirname(os.path.abspath(__file__))
        MEDIA_ROOT = os.path.join(PROJECT_ROOT_PATH, 'media')
        TEMPLATE_DIRS = (
            os.path.join(PROJECT_ROOT_PATH, 'templates'))

    urls.py:

        from django.conf.urls.defaults import *
        from django.conf import settings
        from teste.uploadify.views import *
        from django.contrib import admin

        admin.autodiscover()

        urlpatterns = patterns('',
            (r'^admin/', include(admin.site.urls)),
            url(r'upload/$', upload, name='uploadify_upload'),
        )

    views.py:

        from django.http import HttpResponse
        import django.dispatch

        upload_received = django.dispatch.Signal(providing_args=['data'])

        def upload(request, *args, **kwargs):
            if request.method == 'POST':
                if request.FILES:
                    upload_received.send(sender='uploadify', data=request.FILES['Filedata'])
            return HttpResponse('True')

    models.py:

        from django.db import models

        def upload_received_handler(sender, data, **kwargs):
            if file:
                new_media = Media.objects.create(
                    file = data,
                    new_upload = True,
                )
                new_media.save()

        upload_received.connect(upload_received_handler, dispatch_uid='uploadify.media.upload_received')

        class Media(models.Model):
            file = models.FileField(upload_to='images/upload/', null=True, blank=True)
            new_upload = models.BooleanField()

    uploadify_tags.py:

        from django import template
        from teste import settings

        register = template.Library()

        @register.inclusion_tag('uploadify/multi_file_upload.html', takes_context=True)
        def multi_file_upload(context, upload_complete_url):
            """
            * filesUploaded - The total number of files uploaded
            * errors - The total number of errors while uploading
            * allBytesLoaded - The total number of bytes uploaded
            * speed - The average speed of all uploaded files
            """
            return {
                'upload_complete_url' : upload_complete_url,
                'uploadify_path' : settings.UPLOADIFY_PATH,  # check this line
                'upload_path' : settings.UPLOADIFY_UPLOAD_PATH,
            }

    template - uploadify/multi_file_upload.html:

        {% load uploadify_tags %}{% multi_file_upload '/media/images/upload/' %}
        <script type="text/javascript" src="{{ MEDIA_URL }}js/swfobject.js"></script>
        <script type="text/javascript" src="{{ MEDIA_URL }}js/jquery.uploadify.js"></script>
        <div id="uploadify" class="multi-file-upload"><input id="fileInput" name="fileInput" type="file" /></div>
        <script type="text/javascript">// <![CDATA[
        $(document).ready(function() {
            $('#fileInput').uploadify({
                'uploader'      : '/media/swf/uploadify.swf',
                'script'        : '{% url uploadify_upload %}',
                'cancelImg'     : '/media/images/uploadify-remove.png/',
                'auto'          : true,
                'folder'        : '/media/images/upload/',
                'multi'         : true,
                'onAllComplete' : allComplete
            });
        });
        function allComplete(event, data) {
            $('#uploadify').load('{{ upload_complete_url }}', {
                'filesUploaded'  : data.filesUploaded,
                'errorCount'     : data.errors,
                'allBytesLoaded' : data.allBytesLoaded,
                'speed'          : data.speed
            });
            // raise custom event
            $('#uploadify').trigger('allUploadsComplete', data);
        }
        // ]]></script>
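
    One thing that stands out, offered as an observation rather than the confirmed fix: models.py connects to upload_received without importing it (it lives in views.py), and the handler tests the Python builtin file instead of the data argument, so even when the signal fires, nothing valid is saved. A minimal sketch of the corrected handler, with the Media class left unchanged (the import path is assumed from the urls.py above):

        # models.py
        from django.db import models
        from teste.uploadify.views import upload_received

        def upload_received_handler(sender, data, **kwargs):
            if data:   # was: if file: -- the builtin, which is always truthy
                Media.objects.create(file=data, new_upload=True)  # create() already saves

        upload_received.connect(upload_received_handler,
                                dispatch_uid='uploadify.media.upload_received')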


  • Excel Prorated SUMIF

    - by Pete Michaud
    I have a worksheet with 2 columns: one is a dollar amount, and the other is the day of the month (1 through 31) that the dollar amount is due by (the dollars are income streams). I use the following formula to SUM all the income streams due on or before a certain day:

        =SUMIF(C5:C14, "<="&$B$42, B5:B14)

    Column C is the due day. B42 is the cell in which I input the day to compare against, like "15" for "total of all income due on or before the 15th" -- the idea is to have a sum of all income received for the period. Column B is the dollar amount for each income stream.

    My question is: some of the income streams don't have a day next to them (the day cell in column C is blank). That means that the income stream doesn't come in as a check or a chunk on a certain date; it trickles in roughly evenly throughout the month. So if the amount for the income stream is $10,000 and the day is 15 in a 30-day month, then I should add $5,000 to the total. That would be something like:

        =SUMIF(C5:C14, "", ???)

    So where the due date is blank, select ???. ??? isn't just the number, it's number*(given_day/total_days_in_month). So I think what I need for an accurate total is:

        =SUMIF(C5:C14, "<="&$B$42, B5:B14) + SUMIF(C5:C14, "", ???)

    But I'm not sure how to write that exactly.
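
    A hedged sketch, assuming the number of days in the month lives in another cell (here $B$43, a name invented for the example): SUMIF can't scale each row, but SUMPRODUCT can sum just the blank-day amounts, and the proration then becomes a single multiplier:

        =SUMIF(C5:C14, "<="&$B$42, B5:B14)
          + SUMPRODUCT(--(C5:C14=""), B5:B14) * $B$42 / $B$43

    The -- coerces the TRUE/FALSE "day is blank" tests to 1/0, so only the trickle-in streams are picked up by the second term.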


  • check RAM, page file, /PAE, /3GB, SQL Server memory using PowerShell

    - by Manjot
    I am a PowerShell novice. After days of searching... I have put together a small PowerShell script (below) to check the page file, the /PAE switch, the /3GB switch, and SQL Server max/min RAM. I am running this on one server. If I want to run it on many servers (from a .txt file), how can I change it? And how can I change it to search the boot.ini file's contents on a given server?

        clear
        $strComputer = "."
        $PageFile = Get-WmiObject Win32_PageFile -ComputerName $strComputer
        Write-Host "Page File Size in MB: " ($PageFile.Filesize/(1024*1024))
        $colItems = Get-WmiObject Win32_PhysicalMemory -Namespace root\CIMv2 -ComputerName $strComputer
        $total = 0
        foreach ($objItem in $colItems) {
            $total = $total + $objItem.Capacity
        }
        $isPAEEnabled = Get-WmiObject Win32_OperatingSystem -ComputerName $strComputer
        Write-Host "Is PAE Enabled: " $isPAEEnabled.PAEEnabled
        Write-Host "Is /3GB Enabled: " | Get-Content C:\boot.ini | Select-String "/3GB" -Quiet
        # how can I change this to search boot.ini's contents on $strComputer?
        $smo = new-object('Microsoft.SqlServer.Management.Smo.Server') $strSQLServer
        $memSrv = $smo.information.physicalMemory
        $memMin = $smo.configuration.minServerMemory.runValue
        $memMax = $smo.configuration.maxServerMemory.runValue
        ## DBMS
        Write-Host "Server RAM available: " -noNewLine
        Write-Host "$memSrv MB" -fore "blue"
        Write-Host "SQL memory Min: " -noNewLine
        Write-Host "$memMin MB "
        Write-Host "SQL memory Max: " -noNewLine
        Write-Host "$memMax MB"

    Any comments on how this can be improved? Thanks in advance.
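
    A hedged sketch of the two requested changes: wrap the body in a loop over a server list, and read boot.ini through the admin share rather than the local C: drive. It assumes servers.txt holds one hostname per line and that \\server\c$ is reachable with your credentials:

        foreach ($strComputer in (Get-Content .\servers.txt)) {
            Write-Host "=== $strComputer ==="
            $PageFile = Get-WmiObject Win32_PageFile -ComputerName $strComputer
            Write-Host "Page File Size in MB: " ($PageFile.Filesize / 1MB)
            $isPAEEnabled = Get-WmiObject Win32_OperatingSystem -ComputerName $strComputer
            Write-Host "Is PAE Enabled: " $isPAEEnabled.PAEEnabled
            $bootIni = "\\$strComputer\c`$\boot.ini"     # remote boot.ini via admin share
            $has3GB  = Select-String -Path $bootIni -Pattern "/3GB" -Quiet
            Write-Host "Is /3GB Enabled: $has3GB"
        }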


  • adding only odd numbers

    - by Jessica M.
    The question I'm trying to solve: the user enters any positive number, and the program should add that many odd numbers and display the total. So, for example, if the user enters 4, my program should add four odd numbers: 1 + 3 + 5 + 7 = 16. The only tools I have available are the for statement, if, if/else if, the while loop and println. I can only figure out how to print the odd numbers. I know I want to create a variable named total to store the value of adding up all the odd numbers, but I don't know how that fits into the program.

        import acm.program.*;

        public class AddingOddNumbers extends ConsoleProgram {
            public void run() {
                int n = readInt("enter a positive number: ");
                int total = 0;
                for (int i = 0; i < n; i++) {
                    if (n == 1) {
                        println(1);
                    } else {
                        println((i * 2) + 1);
                    }
                }
            }
        }
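
    A minimal sketch of the missing step: accumulate into total inside the loop instead of printing each term, then print once after the loop (same ACM style as above):

        public void run() {
            int n = readInt("enter a positive number: ");
            int total = 0;
            for (int i = 0; i < n; i++) {
                total += (i * 2) + 1;   // the i-th odd number
            }
            println(total);             // n = 4 gives 1 + 3 + 5 + 7 = 16
        }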


  • New Perl user: using a hash of arrays

    - by Zach H
    I'm doing a little data-mining project where a Perl script grabs info from a SQL database and parses it. The data consists of several timestamps. I want to find how many of a particular type of timestamp exist on any particular day. Unfortunately, this is my first Perl script, and the way Perl handles hashes and arrays is confusing me quite a bit. Code segment:

        my %values = ();  # A hash of the total values of each type of data of each day.
                          # The key is the day, and each key stores an array of each of the values I need.
        my @proposal;     # [drafted timestamp(0), submitted timestamp(1), attny approved timestamp(2),
                          #  organization approved timestamp(3), other approval timestamp(4), approved timestamp(5)]
        while (@proposal = $sqlresults->fetchrow_array()) {
            #TODO: check to make sure proposal is valid
            # Increment the number of timestamps of each type on each particular date
            my $i;
            for ($i = 0; $i <= 5; $i++)
                $values{$proposal[$i]}[$i]++;
            # Update rolling average of daily
            #TODO: To check total load, increment total load on all dates between attorney approve date and accepted date
            for ($i = $proposal[1]; $i <= $proposal[2]; $i++)
                $values{$i}[6]++;
        }

    I keep getting syntax errors inside the for loops incrementing values. Also, considering that I'm using strict and warnings, will Perl auto-create arrays of the right values when I'm accessing them inside the hash, or will I get out-of-bounds errors everywhere? Thanks for any help, Zach
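
    The syntax errors are most likely the loop bodies: unlike C, Perl requires a block, so for (...) statement; without braces won't compile. With braces added, autovivification creates the per-day arrays on first access (a sketch of just the two loops):

        for my $i (0 .. 5) {
            $values{ $proposal[$i] }[$i]++;   # hash entry and inner array autovivify
        }
        for my $i ( $proposal[1] .. $proposal[2] ) {
            $values{$i}[6]++;
        }

    use strict does not prevent autovivification, and Perl arrays have no out-of-bounds errors; reading a missing element simply yields undef.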


  • Getting percentage of "Count(*)" to the number of all items in "GROUP BY"

    - by celalo
    Let's say I need the ratio of "number of items available from a certain category" to "the number of all items". Please consider a MySQL table like this:

        mysql> select * from Item;
        +----+------------+----------+
        | ID | Department | Category |
        +----+------------+----------+
        |  1 | Popular    | Rock     |
        |  2 | Classical  | Opera    |
        |  3 | Popular    | Jazz     |
        |  4 | Classical  | Dance    |
        |  5 | Classical  | General  |
        |  6 | Classical  | Vocal    |
        |  7 | Popular    | Blues    |
        |  8 | Popular    | Jazz     |
        |  9 | Popular    | Country  |
        | 10 | Popular    | New Age  |
        | 11 | Popular    | New Age  |
        | 12 | Classical  | General  |
        | 13 | Classical  | Dance    |
        | 14 | Classical  | Opera    |
        | 15 | Popular    | Blues    |
        | 16 | Popular    | Blues    |
        +----+------------+----------+
        16 rows in set (0.03 sec)

        mysql> SELECT Category, COUNT(*) AS Total
            -> FROM Item
            -> WHERE Department='Popular'
            -> GROUP BY Category;
        +----------+-------+
        | Category | Total |
        +----------+-------+
        | Blues    |     3 |
        | Country  |     1 |
        | Jazz     |     2 |
        | New Age  |     2 |
        | Rock     |     1 |
        +----------+-------+
        5 rows in set (0.02 sec)

    What I need is basically a result set resembling this one (note that the number of all available items is 9):

        +----------+-------+-----------------------------+
        | Category | Total | percentage to the all items |
        +----------+-------+-----------------------------+
        | Blues    |     3 |                          33 |   (3/9)*100
        | Country  |     1 |                          11 |   (1/9)*100
        | Jazz     |     2 |                          22 |   (2/9)*100
        | New Age  |     2 |                          22 |   (2/9)*100
        | Rock     |     1 |                          11 |   (1/9)*100
        +----------+-------+-----------------------------+
        5 rows in set (0.02 sec)

    How can I achieve such a result set in a single query? Thanks in advance.
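
    A hedged sketch using a scalar subquery for the department-wide count:

        SELECT Category,
               COUNT(*) AS Total,
               ROUND(COUNT(*) * 100 /
                     (SELECT COUNT(*) FROM Item WHERE Department = 'Popular')
               ) AS percentage
        FROM Item
        WHERE Department = 'Popular'
        GROUP BY Category;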


  • Invalid conversion from 'int' to 'int*'

    - by FOXMULDERIZE
        #include <iostream>
        #include <fstream>
        using namespace std;

        void showvalues(int,int,int []);
        void showvalues2(int,int);
        void sumtotal(int,int);

        int main()
        {
            const int SIZE_A = 9;
            int arreglo[SIZE_A];
            ifstream archivo_de_entrada;
            archivo_de_entrada.open("numeros.txt");
            int count,suma,total,a,b,c,d,e,f;
            int total1 = 0;
            int total2 = 0;
            // lee (read)
            for (count = 0; count < SIZE_A; count++)
                archivo_de_entrada >> arreglo[count];
            archivo_de_entrada.close();
            showvalues(0,3,9);    // HERE IS THE PROBLEM
            showvalues2(5,8);
            sumtotal(total1,total2);
            system("pause");
            return 0;
        }

        void showvalues(int a, int b, int v)
        {
            // muestra (show)
            cout << "los num son ";
            for (count = a; count <= b; count++)
                total1 = total1 + arreglo[count];
            cout << total1 << " ";
            cout << endl;
        }

        void showvalues2(int c, int d)
        {
            cout << "los num 2 son ";
            for (count = 5; count <= 8; count++)
                total2 = total2 + arreglo[count];
            cout << total2 << " ";
            cout << endl;
        }

        void sumtotal(int e, int f)
        {
            cout << "la suma de t1 y t2 es ";
            total = total1 + total2;
            cout << total;
            cout << endl;
        }
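
    The error comes from the call showvalues(0,3,9): the prototype expects an int array as its third argument, so the literal 9 triggers an invalid conversion from 'int' to 'int*'. The helpers also rely on count, total1, total2, arreglo and total, which are local to main. A minimal sketch of one way to restructure (keeping the question's names where possible):

        #include <iostream>
        #include <fstream>
        using namespace std;

        // sum elements [a, b] of v and print the sum with a label
        int showvalues(const char* label, int a, int b, const int v[])
        {
            int total = 0;
            for (int count = a; count <= b; count++)
                total += v[count];
            cout << label << total << endl;
            return total;
        }

        int main()
        {
            const int SIZE_A = 9;
            int arreglo[SIZE_A];
            ifstream archivo_de_entrada("numeros.txt");
            for (int count = 0; count < SIZE_A; count++)
                archivo_de_entrada >> arreglo[count];
            archivo_de_entrada.close();

            int total1 = showvalues("los num son ", 0, 3, arreglo);   // pass the array, not 9
            int total2 = showvalues("los num 2 son ", 5, 8, arreglo);
            cout << "la suma de t1 y t2 es " << total1 + total2 << endl;
            return 0;
        }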


  • NHibernate update using composite key

    - by Mahesh
    Hi, I have a table definition as given below:

        License
            ClientId
            Type
            Total
            Used

    ClientId and Type together uniquely identify a row. I have a mapping file as given below:

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" auto-import="true">
          <class name="Acumen.AAM.Domain.Model.License, Acumen.AAM.Domain" lazy="false" table="License">
            <id name="ClientId" access="field" column="ClientID" />
            <property name="Total" access="field" column="Total"/>
            <property name="Used" access="field" column="Used"/>
            <property name="Type" access="field" column="Type"/>
          </class>
        </hibernate-mapping>

    If a client uses a license to create a user, I need to update the Used column in the table. Because I set ClientId as the id column for this table, I am getting TooManyRowsAffectedException. Could you please let me know how to set a composite key at the mapping level so that NHibernate can update based on ClientId and Type? Something like:

        Update License SET Used=Used-1 WHERE ClientId='xxx' AND Type=1

    Please help. Thanks, Mahesh
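
    NHibernate's mapping element for this is <composite-id>; a hedged sketch is below. Note that an entity with a composite id must also override Equals and GetHashCode over ClientId and Type:

        <class name="Acumen.AAM.Domain.Model.License, Acumen.AAM.Domain" lazy="false" table="License">
          <composite-id>
            <key-property name="ClientId" access="field" column="ClientID" />
            <key-property name="Type" access="field" column="Type" />
          </composite-id>
          <property name="Total" access="field" column="Total" />
          <property name="Used" access="field" column="Used" />
        </class>

    With both key properties mapped, the UPDATE that NHibernate issues carries WHERE ClientID = ? AND Type = ?, which is the statement the question asks for.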


  • NSOutlineView/NSTreeController - calculate sum of column

    - by matei
    I have an NSOutlineView bound to an NSTreeController. My data items are a custom class, let's call it "Row", and suppose a Row contains a "name" and a numeric field called "number". All these Rows are found in, let's say, a "RowContainer", which has a "rows" mutable array holding the parent (level 0) rows. Each row also has a "children" NSMutableArray member which holds its children. I have this working, and I want to display, under the outline view, a text field with the sum of all the "number" values of the rows. I bound this text field to a "total" property of the RowContainer. Now the problem is how, or from where, to trigger the recalculation of the "total" property, since this involves a recursive walk of the tree of rows, and I always get a "Collection was mutated while being enumerated" error. I've tried making a "recalculateTotal" method and calling it from the "setNumber" method of the Row class, but the same error occurs. If I put the recalculation logic in the "total" getter, I can't trigger it to do the math. I'm sure the solution is simple, but I can't see it.


  • XSLT pass node into CDATA JS

    - by davomarti
    I have the following XML:

        <AAA>
          <BBB>Total</BBB>
        </AAA>

    I'm transforming it with the following XSLT, using the xsl:copy-of tag, because I want to use the XML to create an XML doc in JS:

        <xsl:template match="/">
        <![CDATA[
        function myFunc() {
            xmlStr = ']]><xsl:copy-of select="/"/><![CDATA[';
        }
        ]]>
        </xsl:template>

    The output looks like this:

        function myFunc() {
            xmlStr = '<AAA>
        <BBB>Total</BBB>
        </AAA>';
        }

    JS doesn't like this, because the newlines break the string literal. How can I fix my XSL to get the result below?

        function myFunc() {
            xmlStr = '<AAA><BBB>Total</BBB></AAA>';
        }

    I've tried normalize-space() and translate(), but they strip the tags from the XML. Thanks!
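
    The extra newlines are whitespace-only text nodes sitting between the source elements, which xsl:copy-of faithfully copies. Declaring them stripped at the top level of the stylesheet is usually enough (a sketch; both elements are standard XSLT 1.0):

        <xsl:strip-space elements="*"/>
        <xsl:output indent="no"/>

    With the template itself left as is, the copied tree then serializes on one line: <AAA><BBB>Total</BBB></AAA>.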


  • How do I define a Calculated Measure in MDX based on a Dimension Attribute?

    - by ShaneD
    I would like to create a calculated measure that sums up only a specific subset of records in my fact table, based on a dimension attribute. Given:

        Dimension
            Date
            LedgerLineItem {Charge, Payment, Write-Off, Copay, Credit}
        Measures
            LedgerAmount
        Relationships
            LedgerLineItem is a degenerate dimension of FactLedger

    If I break down LedgerAmount by LedgerLineItem.Type I can easily see how much is charged, paid, credited, etc., but when I do not break it down by LedgerLineItem.Type I cannot easily add the charged, paid, credited, etc. amounts to a pivot table. I would like to create separate calculated measures that sum only specific types (or multiple types) of ledger facts. An example of the desired output would be:

        | Year  | Charged | Total Paid | Amount - Ledger |
        | 2008  | $1000   | $600       | -$400           |
        | 2009  | $2000   | $1500      | -$500           |
        | Total | $3000   | $2100      | -$900           |

    I have tried to create the calculated measure a couple of ways, and each one works in some circumstances but not in others. Now, before anyone says do this in ETL: I have already done it in ETL and it works just fine. What I am trying to do, as part of learning to understand MDX better, is to figure out how to duplicate in MDX what I have done in the ETL, and so far I have been unable to do that. Here are two attempts I have made and the problems with them.

    This works only when ledger type is in the pivot table. It returns the correct amount of the ledger entries (although in this case it is identical to [Amount - Ledger]), but when I try to remove type and just get the sum of all ledger entries, it returns unknown:

        CASE WHEN ([Ledger].[Type].currentMember = [Ledger].[Type].&[Credit])
               OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Paid])
               OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Held Money: Copay])
        THEN [Measures].[Amount - ledger]
        ELSE 0
        END

    This works only when ledger type is not in the pivot table. It always returns the total payment amount, which is incorrect when I am slicing by type, as I would only expect to see the credit portion under credit, the paid portion under paid, $0 under charge, etc.:

        sum({([Ledger].[Type].&[Credit]),
             ([Ledger].[Type].&[Paid]),
             ([Ledger].[Type].&[Held Money: Copay])},
            [Measures].[Amount - ledger])

    Is there any way to make this return the correct numbers regardless of whether Ledger.Type is included in my pivot table or not?
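
    A hedged idea, not verified against this cube: the Existing keyword forces a set to be evaluated in the current query context, which is the "respect Type when it's on rows, use the whole set when it isn't" behavior being asked for:

        Sum(
            Existing { [Ledger].[Type].&[Credit],
                       [Ledger].[Type].&[Paid],
                       [Ledger].[Type].&[Held Money: Copay] },
            [Measures].[Amount - ledger]
        )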


  • x86 .net application with system.OutOfMemoryException

    - by Allen
    Hi guys, I got an OutOfMemoryException after the app had been running for 1 day. The app uses 1.5G of memory in total, all consumed by the managed heap: gen 2 uses 200mb and the LOH uses 1.3gb. The weird thing is, 900mb of that space is Free. From the perf counters I saw that a number of gen 2 GC collections had happened, so why can't the GC collector reclaim those 900mb of free space in gen 2 and the LOH? I really appreciate your help. The following info is from windbg:

        0:000> !eeheap -gc
        Number of GC Heaps: 1
        generation 0 starts at 0x183153f0
        generation 1 starts at 0x182aa834
        generation 2 starts at 0x02131000
        ephemeral segment allocation context: none
        segment     begin     allocated   size
        02130000    02131000  0312f284    0xffe284(16769668)
        07750000    07751000  0874fc5c    0xffec5c(16772188)
        09e30000    09e31000  0ae2fc2c    0xffec2c(16772140)
        0b230000    0b231000  0c22ffec    0xffefec(16773100)
        0c230000    0c231000  0d22f6f0    0xffe6f0(16770800)
        0d230000    0d231000  0e22ea10    0xffda10(16767504)
        0e230000    0e231000  0f22c1c4    0xffb1c4(16757188)
        10390000    10391000  1138ddf4    0xffcdf4(16764404)
        154e0000    154e1000  164da90c    0xff990c(16750860)
        34aa0000    34aa1000  35a9dbfc    0xffcbfc(16763900)
        7aca0000    7aca1000  7bc9edfc    0xffddfc(16768508)
        49760000    49761000  4a75ef64    0xffdf64(16768868)
        7bca0000    7bca1000  7cc99bac    0xff8bac(16747436)
        17a70000    17a71000  183313fc    0x8c03fc(9176060)
        Large object heap starts at 0x03131000
        segment     begin     allocated   size
        03130000    03131000  041250c8    0xff40c8(16728264)
        08920000    08921000  099102f8    0xfef2f8(16708344)
        ....
        ....
        4c760000    4c761000  4d71d578    0xfbc578(16500088)
        1bb10000    1bb11000  1ca110d0    0xf000d0(15728848)
        57760000    57761000  5862d7f8    0xecc7f8(15517688)
        Total Size:              Size: 0x5ab13450 (1521562704) bytes.
        ------------------------------
        GC Heap Size:            Size: 0x5ab13450 (1521562704) bytes.

        0:000> !dumpheap -stat
        total 0 objects
        Statistics:
              MT    Count    TotalSize  Class Name
        73037c78        1           12  System.Configuration.GenericEnumConverter
        73036da0        1           12  System.Configuration.InfiniteIntConverter
        ....
        ....
        69161c3c    35025      6809420  System.Windows.EffectiveValueEntry[]
        69164748       54     12471072  MS.Internal.WeakEventTable+EventKey[]
        710e2228     9540    190389260  System.Byte[]
        710dd2b8  1317031    339257932  System.String
        0035a670     6427    902224056  Free
        Total 3615631 objects


  • Using MongoDB's map/reduce to "group by" two fields

    - by ibz
    I need something slightly more complex than the examples in the MongoDB docs, and I can't seem to wrap my head around it. Say I have a collection of objects of the form:

        {date: "2010-10-10", type: "EVENT_TYPE_1", user_id: 123, ...}

    Now I want to get something similar to a SQL GROUP BY query, grouping over both date and type. That is, I want the number of events of each type on each day. Also, I'd like to make it unique by user_id, i.e. if a user has more events on the same day, count them only once. I'm trying to do this with map/reduce:

        db.logs.mapReduce(
            function() { emit(this.type, 1); },
            function(k, vals) {
                var total = 0;
                for (var i = 0; i < vals.length; i++) total += vals[i];
                return total;
            })

    This nicely groups by type, but now, how can I group by date at the same time? It seems the key in emit() can't be an array (I thought about doing emit([this.date, this.type], 1)). Also, how can I ensure the per-user uniqueness? I'm just starting with MongoDB and I'm still having trouble grasping the basic concepts. Also, there is not much documentation available out there. Any help from more experienced users is appreciated. Thanks!
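
    Two hedged hints, sketched together below: the emit key can be a document (an object) rather than an array, which gives the two-field grouping, and per-user uniqueness can be kept by carrying user ids through the reduce and counting them in a finalize step. Not tested against the original collection:

        db.logs.mapReduce(
            function () {
                emit({ date: this.date, type: this.type }, { users: [this.user_id] });
            },
            function (key, vals) {
                var seen = {};                    // merge user id lists, dropping duplicates
                vals.forEach(function (v) {
                    v.users.forEach(function (u) { seen[u] = true; });
                });
                return { users: Object.keys(seen) };
            },
            {
                out: { inline: 1 },
                finalize: function (key, v) { return v.users.length; }
            }
        );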


  • jqGrid Sort or Search does not work with columns having json dot notation

    - by rsmoorthy
    I have this jqGrid:

        $("#report").jqGrid({
            url: '/py/db?coll=report',
            datatype: 'json',
            height: 250,
            colNames: ['ACN', 'Status', 'Amount'],
            colModel: [
                {name:'acn', sortable:true},
                {name:'meta.status', sortable:true},
                {name:'amount'}
            ],
            caption: 'Show Report',
            rownumbers: true,
            gridview: true,
            rowNum: 10,
            rowList: [10,20,30],
            pager: '#report_pager',
            viewrecords: true,
            sortname: 'acn',
            sortorder: "desc",
            altRows: true,
            loadonce: true,
            mtype: "GET",
            rowTotal: 1000,
            jsonReader: {
                root: "rows", page: "page", total: "total", records: "records",
                repeatitems: false, id: "acn"
            }
        });

    Notice that the column 'meta.status' is in JSON dot notation, and accordingly the data sent from the server looks like this:

        {"page": "1", "total": "1", "records": "5", "rows": [
            {"acn":1, "meta": {"status":"Confirmed"}, "amount": 50},
            {"acn":2, "meta": {"status":"Started"},   "amount": 51},
            {"acn":3, "meta": {"status":"Stopped"},   "amount": 52},
            {"acn":4, "meta": {"status":"Working"},   "amount": 53},
            {"acn":5, "meta": {"status":"Started"},   "amount": 54}
        ]}

    The problems are twofold:

    1. Sorting does not work on columns with dot notation, here "meta.status". It does not even show the sort icons on the column header, and nothing happens when the header is clicked. Sorting does not work whether loadonce is true or false.
    2. If I try searching (after setting loadonce to true) on the column meta.status (other columns without dot notation are okay), a JavaScript error like this is thrown.

    Any help? Thanks, Moorthy
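
    A hedged workaround for the dot-notation columns: keep the column's name free of dots and point the reader at the nested value with jsonmap (a colModel option), so that sorting and searching key off a plain name. The colModel would become:

        colModel: [
            {name:'acn', sortable:true},
            {name:'metastatus', jsonmap:'meta.status', sortable:true},
            {name:'amount'}
        ],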


  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    Hi there: I came up with this question when playing with the Firefox web cache: how does a browser cache responses in limited disk space (taking my configuration as an example, 50MB is the upper bound)? I can think of two approaches. One is to cache each response object whole, one after another, but this is inefficient and introduces external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50MB) as one contiguous file, splitting it into fixed-length slots; incoming response objects would likewise be treated as blocks of data with the same length as the slots. We fill slots until the whole file is used up, and then some replacement algorithm can be used to swap out the old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy. But when I look inside Firefox's Cache directory, I find it (maybe) uses a different method: a lot of variable-length files reside in that directory, and all those files are filled with undisplayable characters. I really want to know what mechanism a commercial browser, e.g. Firefox, employs to implement its web cache. Regards.


  • Algorithm for optimally choosing actions to perform a task

    - by Jules
    There are two data types: tasks and actions. An action costs a certain amount of time to complete, and has a set of tasks as dependencies. A task has a set of actions, and our job is to choose one of them. So:

        class Task {
            Set<Action> choices;
        }

        class Action {
            float time;
            Set<Task> dependencies;
        }

    For example, the primary task could be "Get a house". The possible actions for this task: "Buy a house" or "Build a house". The action "Build a house" costs 10 hours and has the dependencies "Get bricks" and "Get cement", etcetera. The total time is the sum of the times of all the actions required; we want to choose actions such that the total time is minimal.

    Note that the dependencies can be diamond-shaped. For example, "Get bricks" could require "Get a car" (to transport the bricks), and "Get cement" would also require a car. Even if you do both "Get bricks" and "Get cement", you only have to count the time it takes to get a car once.

    Note also that the dependencies can be circular. For example "Money" - "Job" - "Car" - "Money". This is no problem for us: we simply select all of "Money", "Job" and "Car". The total time is simply the sum of the times of these 3 things.

    Mathematical description: let actions be the chosen actions.

        valid(task) = ∃action ∈ task.choices. (action ∈ actions ∧ ∀task ∈ action.dependencies. valid(task))
        time = sum { action.time | action ∈ actions }

        minimize time subject to valid(primaryTask)


  • SQL Server search filter and order by performance issues

    - by John Leidegren
    We have a table-valued function that returns a list of people you may access, and we have a relation between a search and a person, called search result. What we want to do is select all people from the search and present them. The query looks like this:

        SELECT qm.PersonID, p.FullName
        FROM QueryMembership qm
        INNER JOIN dbo.GetPersonAccess(1) ON GetPersonAccess.PersonID = qm.PersonID
        INNER JOIN Person p ON p.PersonID = qm.PersonID
        WHERE qm.QueryID = 1234

    There are only 25 rows with QueryID = 1234, but there are almost 5 million rows total in the QueryMembership table. The Person table has about 40K people in it. QueryID is not a PK, but it is an index. The query plan tells me that 97% of the total cost is spent doing "Key Lookup" with the seek predicate:

        QueryMembershipID = Scalar Operator (QueryMembership.QueryMembershipID as QM.QueryMembershipID)

    Why is the PK in there when it's not used in the query at all? And why is it taking so long? With only 25 people total, the index should give all the QueryMembership rows that have QueryID = 1234 directly, followed by a JOIN on the 25 people that exist in the table-valued function -- which, by the way, only has to be evaluated once and completes in less than 1 second.
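
    A hedged sketch of the usual fix: the Key Lookup is there because the nonclustered index on QueryID carries only the clustered key (QueryMembershipID), so every matching row goes back to the clustered index to fetch PersonID. Covering the query removes the lookup; SQL Server 2005 supports INCLUDE columns (the index name here is invented):

        CREATE NONCLUSTERED INDEX IX_QueryMembership_QueryID_PersonID
        ON QueryMembership (QueryID)
        INCLUDE (PersonID);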


  • Using PLINQ to calculate and update values within the enclosure does not work

    - by Keith
    I recently needed to do a running total on a report, where for each group I order the rows and then calculate the running total based on the previous rows within the group. Aha, I thought, a perfect use case for PLINQ! However, when I wrote the code, I got some strange behavior: the values I was modifying showed as modified when stepping through the debugger, but when they were accessed they were always zero. Sample code:

        class Item
        {
            public int PortfolioID;
            public int TAAccountID;
            public DateTime TradeDate;
            public decimal Shares;
            public decimal RunningTotal;
        }

        List<Item> itemList = new List<Item>
        {
            new Item { PortfolioID = 1, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 1), Shares = 5.335m, },
            new Item { PortfolioID = 1, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 2), Shares = -2.335m, },
            new Item { PortfolioID = 2, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 1), Shares = 7.335m, },
            new Item { PortfolioID = 2, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 2), Shares = -3.335m, },
        };

        var found = (from i in itemList
                     where i.TAAccountID == 1
                     select new Item
                     {
                         TAAccountID = i.TAAccountID,
                         PortfolioID = i.PortfolioID,
                         Shares = i.Shares,
                         TradeDate = i.TradeDate,
                         RunningTotal = 0
                     });

        found.AsParallel().ForAll(x =>
        {
            var prevItems = found.Where(i => i.PortfolioID == x.PortfolioID &&
                                             i.TAAccountID == x.TAAccountID &&
                                             i.TradeDate <= x.TradeDate);
            x.RunningTotal = prevItems.Sum(s => s.Shares);
        });

        foreach (Item i in found)
        {
            Console.WriteLine("Running total: {0}", i.RunningTotal);
        }
        Console.ReadLine();

    If I change the select for found to be .ToArray(), then it works fine and I get calculated results. Any ideas what I am doing wrong?
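
    That behavior is deferred execution at work: found is a query, not a collection, so every enumeration -- inside ForAll, inside the Where, and in the final foreach -- re-runs the select and builds fresh Item objects, discarding the mutated ones. Materializing once is the fix, which is why .ToArray() works:

        var found = (from i in itemList
                     where i.TAAccountID == 1
                     select new Item { TAAccountID = i.TAAccountID, PortfolioID = i.PortfolioID,
                                       Shares = i.Shares, TradeDate = i.TradeDate, RunningTotal = 0 }
                    ).ToArray();   // one fixed set of objects: ForAll now mutates the
                                   // same instances the final foreach reads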


  • convert list of relative widths to pixel widths

    - by mkoryak
    This is a code review question more than anything. I have the following problem: given a list of relative widths (no unit whatsoever, just all relative to each other), generate a list of pixel widths so that these pixel widths have the same proportions as the original list.

    Input: list of proportions, total pixel width. Output: list of pixel widths, where each width is an int, and the sum of these equals the total width. Code:

        var sizes = "1,2,3,5,7,10".split(","); // initial proportions
        var totalWidth = 1024;                 // total pixel width
        var widths = [];
        var sizesTotal = 0;
        for (var i = 0; i < sizes.length; i++) {
            sizesTotal += parseInt(sizes[i], 10);
        }
        if (sizesTotal != 100) {
            var totalLeft = 100;
            for (var i = 0; i < sizes.length; i++) {
                sizes[i] = Math.floor(parseInt(sizes[i], 10) / sizesTotal * 100);
                totalLeft -= sizes[i];
            }
            sizes[sizes.length - 1] = totalLeft;
        }
        totalLeft = totalWidth;
        for (var i = 0; i < sizes.length; i++) {
            widths[i] = Math.floor(totalWidth / 100 * sizes[i]);
            totalLeft -= widths[i];
        }
        widths[sizes.length - 1] = totalLeft;
        // return widths, which contains a list of INT pixel sizes
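
    By way of review, a hedged alternative: the round-trip through percentages loses precision twice, and the final loop subtracts the last element's width before overwriting it, so the widths may not sum to the total. Scaling directly against the pixel width and letting the last column absorb the rounding error avoids both (the function name is invented here):

        function toPixelWidths(proportions, totalWidth) {
            var sum = 0, used = 0, widths = [];
            for (var i = 0; i < proportions.length; i++) sum += proportions[i];
            for (var i = 0; i < proportions.length - 1; i++) {
                widths[i] = Math.floor(proportions[i] / sum * totalWidth);
                used += widths[i];
            }
            widths[proportions.length - 1] = totalWidth - used; // absorb rounding error
            return widths;
        }

        toPixelWidths([1, 2, 3, 5, 7, 10], 1024); // [36, 73, 109, 182, 256, 368]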


  • Parsing Chunk of Data into Hash of Array With Perl

    - by neversaint
    I have data that looks like this:

        #info
        #info2
        1:SRX004541
        Submitter: UT-MGS, UT-MGS
        Study: Glossina morsitans transcript sequencing project(SRP000741)
        Sample: Glossina morsitans(SRS002835)
        Instrument: Illumina Genome Analyzer
        Total: 1 run, 8.3M spots, 299.9M bases
        Run #1: SRR016086, 8330172 spots, 299886192 bases

        2:SRX004540
        Submitter: UT-MGS
        Study: Anopheles stephensi transcript sequencing project(SRP000747)
        Sample: Anopheles stephensi(SRS002864)
        Instrument: Solexa 1G Genome Analyzer
        Total: 1 run, 8.4M spots, 401M bases
        Run #1: SRR017875, 8354743 spots, 401027664 bases

        3:SRX002521
        Submitter: UT-MGS
        Study: Massive transcriptional start site mapping of human cells under hypoxic conditions.(SRP000403)
        Sample: Human DLD-1 tissue culture cell line(SRS001843)
        Instrument: Solexa 1G Genome Analyzer
        Total: 6 runs, 27.1M spots, 977M bases
        Run #1: SRR013356, 4801519 spots, 172854684 bases
        Run #2: SRR013357, 3603355 spots, 129720780 bases
        Run #3: SRR013358, 3459692 spots, 124548912 bases
        Run #4: SRR013360, 5219342 spots, 187896312 bases
        Run #5: SRR013361, 5140152 spots, 185045472 bases
        Run #6: SRR013370, 4916054 spots, 176977944 bases

    What I want to do is create a hash of arrays, with the first line of each chunk as the key, and the SR## part of the lines starting with "Run" as the members of its array:

        $VAR = {
            'SRX004541' => ['SRR016086'],
            # etc
        }

    But why doesn't my construct work? And there must be a better way to do it:

        use Data::Dumper;
        my %bighash;
        my $head = "";
        my @temp = ();

        while ( <> ) {
            chomp;
            next if (/^\#/);
            if ( /^\d{1,2}:(\w+)/ ) {
                print "$1\n";
                $head = $1;
            }
            elsif (/^Run \#\d+: (\w+),.*/){
                print "\t$1\n";
                push @temp, $1;
            }
            elsif (/^$/) {
                push @{$bighash{$head}}, [@temp];
                @temp = ();
            }
        }
        print Dumper \%bighash;
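
    Two hedged observations on the construct: push @{$bighash{$head}}, [@temp] nests each chunk's runs one level deeper than the desired structure (an array of arrays), and the final chunk is never stored if the input doesn't end with a blank line. A sketch of the loop with both addressed:

        while (<>) {
            chomp;
            next if /^\#/;
            if (/^\d{1,2}:(\w+)/)          { $head = $1; }
            elsif (/^Run \#\d+: (\w+),/)   { push @temp, $1; }
            elsif (/^$/ && $head)          { $bighash{$head} = [@temp]; @temp = (); }
        }
        $bighash{$head} = [@temp] if $head && @temp;   # flush the last chunk

        print Dumper \%bighash;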

