Search Results

Search found 4690 results on 188 pages for 'k ran'.

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the inserts, updates, and deletes in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then to run a stored proc in the warehouse whose MERGE statement took the rows from the working table and updated the real fact table:

        USE Warehouse

        CREATE TABLE Integration.WorkPolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))

        CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)

        CREATE PROC Integration.MergePolicy as
        begin
            begin tran
            Merge fact.Policy as tgt
            Using Integration.WorkPolicy as Src
            On (tgt.PolicyId = Src.PolicyId)
            When not matched by Target then
                Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            When matched and src.Operation = 'U' then
                Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate
            When matched and src.Operation = 'D' then
                Delete
            ;
            delete from Integration.WorkPolicy
            commit
        end

    Notice that my worktable (Integration.WorkPolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, it was getting about 1.5 million rows inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.

    I was beginning to suspect that my problem was because the work table was being stored as a heap. Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting:

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'WorkPolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.WorkPolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.WorkPolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.WorkPolicy instead of DELETE FROM Integration.WorkPolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
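    A minimal sketch of the eventual fix, assuming the work table defined above (the index name and key column are illustrative, not from the original post):

        -- Empty the heap first (fast, and releases its pages), then add a
        -- clustered index so future scans touch only pages that hold rows.
        TRUNCATE TABLE Integration.WorkPolicy;
        CREATE CLUSTERED INDEX IX_WorkPolicy_PolicyId
            ON Integration.WorkPolicy (PolicyId);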

    Read the article

  • CD/DVD drive not mounted when inserted with Disc of any kind

    - by Cisco Sán
    I just noticed that if I insert a CD or a DVD of any kind, the drive will start spinning but it will not show the mounted disc. Before, it used to ask me what to do with the inserted media. Now it doesn't even do that. I ran this in the terminal:

        eject -n

    and it displays this: "eject: device is `/dev/sr0'". What can I do to get the functionality back on my drive? I also ran this command:

        sudo mount -o ro,unhide,uid=1000 /dev/cdrom /mnt/cdrom

    but in return I get this: "mount: mount point /mnt/cdrom does not exist". Running Ubuntu 11.10.

    HERE IS THE HISTORY UNTIL NOW

    Thanks Waltinator. I ran 'dmesg' but don't know what I'm looking for. I'm a newbie at this. The same thing with the 'ls -rlt /var/log' command. Should I create the directory for the mount? At this point I really don't know what to do. – Cisco Sán 7 hours ago

    Here are 3 lines from my dmesg after I successfully inserted a CD:

        [ 4804.416018] wlan0: no IPv6 routers present
        [ 8214.125450] ISO 9660 Extensions: Microsoft Joliet Level 3
        [ 8214.136556] ISO 9660 Extensions: RRIP_1991A

    The first line is a previous event, my wireless going online. The next 2 lines are a good result. The number in square brackets is "seconds since boot"; the rest of the line is usually helpful. And no, you should NOT create the mount point. Let's try to get the automatic mounting to work. – waltinator 7 hours ago

    OK, these are the last 3 lines in my 'dmesg':

        [ 18.130819] init: plymouth-stop pre-start process (1396) terminated with status 1
        [ 28.780011] wlan0: no IPv6 routers present
        [ 505.632119] CE: hpet increased min_delta_ns to 20113 nsec

    – Cisco Sán 6 hours ago

    It looks like your CD/DVD drive is not connected to the data bus, and not causing an interrupt when you insert a platter. – waltinator 6 hours ago

    Try dmesg | grep -A8 CD-ROM which should show you what the system thought was available when it came up. – waltinator 6 hours ago

    Here is my printout:

        [0.774351] scsi 0:0:0:0: CD-ROM HL-DT-ST DVD+-RW GSA-T40N A100 PQ: 0 ANSI: 5
        [0.778117] sr0: scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray
        [0.778122] cdrom: Uniform CD-ROM driver Revision: 3.20
        [0.778282] sr 0:0:0:0: Attached scsi CD-ROM sr0
        [0.778340] sr 0:0:0:0: Attached scsi generic sg0 type 5
        [0.780416] Freeing unused kernel memory: 984k freed
        [0.780732] Write protecting the kernel read-only data: 10240k
        [0.780986] Freeing unused kernel memory: 20k freed
        [0.786331] Freeing unused kernel memory: 1400k freed
        [0.804912] udevd[90]: starting version 173
        [0.874178] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
        [0.874208] r8169 0000:02:00.0: PCI INT A - GSI 16 (level, low) - IRQ 16

    OK, your system sees the drive. Can you open and close the tray with eject and eject -t? Run udevadm monitor while you insert a CD (type ^C when done) and see if you get "change" and "add" messages. – waltinator 6 hours ago

    OK, "eject" works perfectly; "eject -t" does nothing. This is the message from "udevadm monitor":

        KERNEL[13771.009267] change /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0/block/sr0 (block)
        UDEV  [13773.878887] change /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0/block/sr0 (block)

    – Cisco Sán 6 hours ago

    sudo hwinfo --cdrom (the hwinfo package is installable through the Software Center) describes my CD-ROM; try it. – waltinator 4 hours ago

    My readout from "sudo hwinfo --cdrom" is the following:

        hal.1: read hal dataprocess 2753: arguments to dbus_move_error() were incorrect, assertion "(dest) == NULL || !dbus_error_is_set ((dest))" failed in file ../../dbus/dbus-errors.c line 280.
        This is normally a bug in some application using the D-Bus library.
        libhal.c 3483 : Error unsubscribing to signals, error=The name org.freedesktop.Hal was not provided by any .service files
        22: SCSI 00.0: 10602 CD-ROM (DVD)
        [Created at block.247]
        Unique ID: KD9E.JgkxTS4hgl2
        Parent ID: 3p2J.gdUMCD83e+E
        SysFS ID: /class/block/sr0
        SysFS BusID: 0:0:0:0
        SysFS Device Link: /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0
        Hardware Class: cdrom
        Model: "HL-DT-ST DVD+-RW GSA-T40N"
        Vendor: "HL-DT-ST"
        Device: "DVD+-RW GSA-T40N"
        Revision: "A100"
        Driver: "ata_piix", "sr"
        Driver Modules: "ata_piix"
        Device File: /dev/sr0 (/dev/sg0)
        Device Files: /dev/sr0, /dev/scd0, /dev/disk/by-id/ata-HL-DT-ST_DVD+_-RW_GSA-T40N_K048BJ74257, /dev/disk/by-path/pci-0000:00:1f.1-scsi-0:0:0:0, /dev/cdrom, /dev/cdrw, /dev/dvd, /dev/dvdrw
        Device Number: block 11:0 (char 21:0)
        Features: DVD
        Config Status: cfg=new, avail=yes, need=no, active=unknown
        Attached to: #17 (IDE interface)
        Drive Speed: 31
        Volume ID: "Movie"
        Publisher: "INTERVIDEO"
        Creation date: "20050424162207000"

    Thanks for the help. To Castro, hope this is what you meant; sorry for the comments. – Cisco Sán
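    For anyone retracing this thread, the diagnostic sequence used above, collected in one place (device names are from this machine and may differ):

        # Which device does the system consider the CD drive?
        eject -n
        # What did the kernel detect at boot?
        dmesg | grep -A8 CD-ROM
        # Watch for "change"/"add" events while inserting a disc (Ctrl-C to stop)
        udevadm monitor
        # Describe the drive in detail (hwinfo is in the Software Center)
        sudo hwinfo --cdrom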

    Read the article

  • The Changing Face of PASS

    - by Bill Graziano
    I'm starting my sixth year on the PASS Board. I served two years as the Program Director, two years as the Vice-President of Marketing, and I'm starting my second year as the Executive Vice-President of Finance. There's a pretty good chance that if PASS has done something you don't like, or is doing something you don't like, I'm involved in one way or another. Andy Leonard asked in a comment on his blog if the Board had ever reversed itself based on community input. He asserted that it hadn't. I disagree. I'm not going to try to list all the changes we make inside portfolios based on feedback from and meetings with the community. I'm going to focus on major governance issues since I was elected to the Board.

    Management Company

    The first big change was our management company. Our old management company had a standard approach to running a non-profit. It worked well when PASS was launched. Having a ready-made structure and process to run the organization enabled the organization to grow quickly. As time went on, we were limited in some of the things we wanted to do. The more involved you were with PASS, the more you saw these limitations. Key volunteers were regularly providing feedback that they wanted certain changes that were difficult for us to accomplish. The Board at that time wanted changes that were difficult or impossible to accomplish under that structure. This was not a simple change. Imagine a $2.5 million company letting all its employees go on a Friday and starting with a new staff on Monday. We also had a very narrow window to accomplish that so that we wouldn't affect the Summit, our only source of revenue. We spent the year after the change rebuilding processes and putting on the Summit in Denver. That's a concrete example of a huge change that PASS made to better serve its members. And it was a change that many in the community were telling us we needed to make.

    Financials

    We heard regularly from our members that they wanted our financials posted. Today on our web site you can find audited financials going back to 2004. We publish our budget at the start of each year. If you ask a question about the financials on the PASS site, I do my best to answer it. I'm also trying to do a better job answering financial questions posted in other locations. (And yes, I know I owe a few of you some blog posts.) That's another concrete example of a change that our members asked for and that the Board agreed was a good decision.

    Minutes

    When I started on the Board, the meeting minutes were very limited. The minutes from a two-day Board meeting might fit on one page. I think we did the bare minimum we were legally required to do. Today, Board meeting minutes run from 5 to 12 pages and go into incredible detail on what we talk about. There are certain topics that are under NDA, but where possible we try to list the topic we discussed and note that the actual discussion was under NDA. We also publish the agenda of Board meetings ahead of time. This is another specific example where input from the community influenced the decision. It was certainly easier to have limited minutes, but I think the extra effort helps our members understand what's going on.

    Board Q&A

    At the 2009 Summit the Board held its first public Q&A with our members. We'd always been available individually to answer questions. There's a benefit to getting us all in one room and asking the really hard questions to watch us squirm. We learn what questions we don't have good answers for. We get to see how many people in the crowd look interested in the various questions and answers. I don't recall the genesis of how this came about; I'm fairly certain there was some community pressure, though.

    Board Votes

    Until last November, the Board only reported the vote totals and not how individual Board members voted. That was one of the topics at a great lunch I had with Tim Mitchell and Kendal van Dyke at the Summit. That was also the topic of the first question asked at the Board Q&A, by Kendal. Kendal expressed his opposition to anonymous votes clearly and passionately and without trying to paint anyone into a corner. Less than 24 hours later the PASS Board voted to make individual votes public unless the topic was under NDA. That's another area where the Board decided to change based on feedback from our members.

    Summit Location

    While this isn't actually a governance issue, it is one of the more public decisions we make, and it has taken some public criticism. There is a significant portion of our members that want the Summit near them. There is a significant portion of our members that like the Summit in Seattle. There is a significant portion of our members that think it should move around the country. I was one that felt strongly that there were significant, tangible benefits to our attendees to being in Seattle every year. I'm also one that has been swayed by some very compelling arguments that we need to have at least one Summit outside Seattle and then revisit the decision. I can't tell you how the Board will vote, but I know the opinion of our members weighs heavily on the decision.

    Elections

    And that brings us to the grand-daddy of all governance issues. My thesis for this blog post is that the PASS Board has implemented policy changes in response to member feedback. It isn't to defend or criticize our election process. It's just to say that the process has been undergoing continuous change since I've been on the Board. I ran for the Board in the fall of 2005. I don't know much about what happened before then. I was actively volunteering for PASS for four years prior to that as a chapter leader and on the program committee. I don't recall any complaints about elections, but that doesn't mean they didn't occur. The questions from the Nominating Committee (NomCom) were trivial and the selection process rudimentary (for example, "Tell us about your accomplishments"). I don't even remember who I ran against or how many other people ran. I ran for the VP of Marketing in the fall of 2007. I don't recall any significant changes the Board made in the election process for that election. I think a lot of the changes in 2007 came from us asking the management company to work on the election process. I was expecting a similar set of puffball questions from my previous election. Boy, was I in for a shock. The NomCom had found a much better set of questions and really made the interview portion difficult. The questions were much more behavioral in nature. I'd already written about my vision for PASS and my goals. They wanted to know how I handled adversity, how I handled criticism, how I handled conflict, how I handled troublesome volunteers, how I motivated people, and how I responded to motivation. And many, many other things. They grilled me for over an hour. I've done a fair bit of technical sales in my time. I feel I speak well under pressure when addressing pointed questions. This interview intentionally put me under pressure. In addition to wanting to know about my interpersonal skills, my work experience, my volunteer experience, and my supervisory experience, they wanted to see how I'd do under pressure. They wanted to see who would respond under pressure and who wouldn't. It was a bit of a shock.

    That was the first big change I remember in the election process. I know there were other improvements around the process, but none of them stick in my mind quite like the unexpected hour-long grilling. The next big change I remember was after the 2009 elections. Andy Warren was unhappy with the election process and wanted to make some changes. He worked with Hannes at HQ and they came up with a better set of processes. I think Andy moved PASS in the right direction. Nonetheless, after the 2010 election even more people were very publicly clamoring for changes to our election process. In August of 2010 we had a choice to make. There were numerous bloggers criticizing the Board and our upcoming election. The easy change would have been to announce that we were changing the process in a way that would satisfy our critics. I believe that a knee-jerk response to criticism is seldom correct. Instead, the Board spent August and September and October and November listening to the community. I visited two SQLSaturdays and asked questions of everyone I could. I attended chapter meetings and asked questions of as many people as they'd let me. At Summit I made it a point to introduce myself to strangers and ask them about the election. At every breakfast I'd sit down at a table full of strangers and ask about the election. I'm happy to say that I left most tables arguing about the election. Most days I managed to get 2 or 3 breakfasts in. I spent less time talking to people that had already written about the election. They were already expressing their opinion. I wanted to talk to people that hadn't spoken up. I wanted to know what the silent majority thought. The Board all attended the Q&A session where our members expressed their concerns about a variety of issues, including the election. The PASS Board also chose to create the Election Review Committee. We wanted people from the community that had been involved with PASS to look at our election process with fresh eyes while listening to what the community had to say, and give us some advice on how we could improve the process. I'm a part of this, as is Andy Warren. None of the other members are on the Board. I've sat in numerous calls and interviews with this group and attended an open meeting at the Summit. We asked anyone that wanted to discuss the election to come speak with us. The ERC held an open meeting at the Summit and invited anyone to attend. There are forums on the ERC web site where we've invited people to participate. The ERC has reached out to key people involved in recent elections. The years that I haven't mentioned also saw minor improvements in the election process; off the top of my head I don't recall what exact changes were made each year. Specifically, since the 2010 election we've gone out of our way to seek input from the community about the process. I'm not sure what more we could have done to invite feedback from the community. I think to say that we haven't "fixed" the election process isn't a fair criticism at this time. We haven't rushed any changes through the process. If you don't see any changes in our election process in July or August, then I think it's fair to criticize us for ignoring the community or ask for an explanation of what we've done.

    In Summary

    Andy's main point was that the PASS Board hasn't changed in response to our members' wishes. I think I've shown that time and time again the PASS Board has changed in response to what our members want. There are only two outstanding issues: Summit location and elections. The 2013 Summit location hasn't been decided yet. Our work on the elections is also in progress. And at every step in the election review we've gone out of our way to listen to the community and incorporate their feedback on the process. I also hope I'm not encouraging everyone that wants some change in the organization to organize a "blog rush" against the Board. We take public suggestions very seriously, but we also take the time to evaluate those suggestions, learn what the rest of our members think, and make a measured decision.

    Read the article

  • Installing Visual Studio 2003 on Windows 7 64-bit

    - by Cole Shelton
    My team is currently supporting a .NET 1.1 app, and we are installing VS.NET 2003 on Windows 7. We haven't had any issues on the 32-bit machines, but FrontPage Server Extensions are failing to install on my 64-bit machine. Others on the Interwebs say that they have done this successfully, so I wanted to know if anyone here has, and if they know of a solution. The specific issue is that FPSE (to clarify, I'm installing "FrontPage 2002 Server Extensions for IIS 7.0") fails to install correctly. In Event Viewer I get the error:

        Microsoft FrontPage Server Extensions: Error #3004f
        Message: Unable to read configuration information for Microsoft Internet Information Server: ImpersonateLoggedOnUser Error.

    I've looked for errors with ImpersonateLoggedOnUser on 64-bit and did find a case where it fails on 64-bit when UAC is turned off (which I did have off). I turned UAC back on, ran a command prompt as administrator, and ran msiexec on the FPSE package. Still no dice. I have followed this tutorial (and the others it points to) for installing: http://frankbuchan.blogspot.com/2009/08/visual-studio-2003-under-windows-7.html

    Read the article

  • Mavericks system Ruby and gem broken

    - by T1000
    When I try to run ruby -v or gem -v (or any other command), I get:

        dyld: lazy symbol binding failed: Symbol not found: _ruby_run
          Referenced from: /usr/local/bin/ruby
          Expected in: /usr/lib/libruby.dylib
        dyld: Symbol not found: _ruby_run
          Referenced from: /usr/local/bin/ruby
          Expected in: /usr/lib/libruby.dylib

    This is after I ran rvm system to temporarily switch to the system default Ruby. RVM is working fine, but I have a special need to install a gem into the system Ruby, and I can't because of this problem. Does anyone know why? It seems to be some kind of linking problem with Ruby, but I don't know how to solve it. I ran which ruby, and at this point it's located at /usr/local/bin/ruby. I checked the ruby in /usr/lib/, and it's pointing to my system Ruby: ../../System/Library/Frameworks/Ruby.framework/Versions/Current/usr/lib/ruby. Any help would be appreciated.
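    A hedged way to sidestep the broken /usr/local/bin copy while debugging: call the stock interpreter by absolute path, since OS X ships its system Ruby in /usr/bin (the gem name below is a placeholder):

        # The system Ruby lives here, independent of anything in /usr/local/bin
        /usr/bin/ruby -v
        # Install into the system Ruby explicitly (replace some-gem as needed)
        sudo /usr/bin/gem install some-gem
        # List every ruby the shell can see, in PATH order
        which -a ruby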

    Read the article

  • Rails route error? uninitialized constant ActiveResource::Base

    - by Marco
    I'm following the Getting Started with Rails guide but ran into an issue opening http://localhost:3000. Shell output:

        [2010-03-23 19:19:14] ERROR NameError: uninitialized constant ActiveResource::Base

    Error in the browser:

        Internal Server Error
        uninitialized constant ActiveResource::Base
        WEBrick/1.3.1 (Ruby/1.8.7/2009-06-12) at localhost:3000

    I followed the directions exactly as they were specified in the guide:

    1. Ran rails generate controller home index
    2. Removed index.html
    3. Added root :to => "home#index" to config/routes.rb

    I checked app/views/home/index.html.erb and it is indeed there. I then used rails server to launch the server. On the first attempt the browser loads a blank page, but afterwards it starts showing the browser error above. Why is it that Rails can't locate the index.html.erb file? Or is the error something different? Running Rails 3.0beta with Ruby 1.8.7.
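    For reference, the route from the guide as it should read in config/routes.rb (Rails 3 syntax; "Blog" below stands in for whatever application module rails new generated):

        # config/routes.rb -- the root route points "/" at HomeController#index
        Blog::Application.routes.draw do
          root :to => "home#index"
        end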

    Read the article

  • Axapta 2009 WCF service

    - by Rogue101
    I am trying to add a service reference to Axapta 2009. All is working well; it's a simple web method (an external web service) that gets executed on the server tier (necessary, otherwise you get a CLR interop error). But I've run into the following problems:

    1. Is it possible to close the proxy one way or another? This option is not available in the generated service object in AX (only the web methods and a toString).
    2. At a certain moment, I ran into a service in the faulted state. Normally, you create the service object again, but this didn't solve anything until I restarted the AOS. Is this normal behaviour? Is the service object cached or something like that on the server side?

    Thanks in advance.

    Read the article

  • OpenCV 2.0 and Python

    - by Jive Dadson
    I cannot get the example Python programs to run. When executing the Python command "from opencv import cv" I get the message "ImportError: No module named _cv". There is a stale _cv.pyd in the site-packages directory, but no _cv.py anywhere. See step 5 below. MS Windows XP, VC++ 2008, Python 2.6, OpenCV 2.0. Here's what I have done:

    1. Downloaded and ran the MS Windows installer for OpenCV 2.0.
    2. Downloaded and installed CMake.
    3. Downloaded and installed SWIG.
    4. Ran CMake. After unchecking "ENABLE_OPENMP" in the CMake GUI, I was able to build OpenCV using INSTALL.vcproj and BUILD_ALL.vcproj. I do not know what the difference is, so I built everything under both of those project files. The C example programs run fine.
    5. Copied the contents of OpenCV2.0/Python2.6/lib/site-packages to my installed Python2.6/lib/site-packages directory. I notice that it contains an old _cv.pyd and an old libcv.dll.a.

    Read the article

  • i18n with webpy

    - by translation..
    Hello, I have a problem with i18n using web.py. I have followed this: http://webpy.org/cookbook/i18n_support_in_template_file

    So, in my .wsgi there is:

        #i18n
        gettext.install('messages', I18N_PATH, unicode=True)
        gettext.translation('messages', I18N_PATH, languages=['fr_FR','en_US']).install(True)

    Then I ran:

        pygettext.py -a -v -d messages -o i18n/messages.po controllers/*.py views/*.html

    I copied and translated messages.po, and I also changed the "Content-Type" and the "Content-Transfer-Encoding":

        "Content-Type: text/plain; charset=utf-8\n"
        "Content-Transfer-Encoding: UTF-8\n"

    And I ran this command:

        msgfmt -v -o i18n/fr_FR/LC_MESSAGES/messages.mo i18n/fr_FR/LC_MESSAGES/messages.po
        >>> 93 messages traduits.

    Here is the tree of the i18n folder:

        i18n/:
        en_US  fr_FR  messages.po
        i18n/en_US:
        LC_MESSAGES
        i18n/en_US/LC_MESSAGES:
        messages.mo  messages.po
        i18n/fr_FR:
        LC_MESSAGES
        i18n/fr_FR/LC_MESSAGES:
        messages.mo  messages.po

    But when I visit my website (my browser's language is fr_FR), the strings aren't translated, and I don't know why. Anyone have an idea? Thanks
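    One detail that's easy to miss in the cookbook, sketched here under the assumption of a templates/ directory (paths and names mirror the question where possible): templates can only call $_() if the translation function is passed into the template globals when the renderer is built.

        import gettext
        import web

        I18N_PATH = './i18n'  # where fr_FR/LC_MESSAGES/messages.mo lives

        # Load the catalog; fallback=True returns the untranslated msgid on a
        # miss instead of raising, which keeps missing-.mo problems non-fatal.
        trans = gettext.translation('messages', I18N_PATH,
                                    languages=['fr_FR'], fallback=True)
        trans.install()

        # Hand _() to the templates so $_("some text") resolves at render time.
        render = web.template.render('templates/', globals={'_': trans.gettext})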

    Read the article

  • Scheduling SwingWorker threads

    - by Simonw
    Hi, I have 2 processes to perform in my Swing application: one to fill a list, and one to do operations on each element of the list. I've just moved the 2 processes into SwingWorker threads to stop the GUI locking up while the tasks are performed, and because I will need to do this set of operations on several lists, so concurrency wouldn't be a bad idea in the first place. However, when I just ran fillList.execute(); doStuffToList.execute();, the doStuffToList thread ran on the empty list (duh...). How do I tell the second process to wait until the first one is done? I suppose I could just nest the second process at the end of the first one, but I dunno, it seems like bad practice.
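    One way to sequence the workers without merging their logic, sketched below with illustrative names (fetchList and processList stand in for the real tasks): start the second worker from the first worker's done() callback, which Swing invokes on the Event Dispatch Thread after doInBackground() finishes.

        import java.util.ArrayList;
        import java.util.List;
        import javax.swing.SwingWorker;

        public class WorkerChain {
            // Stand-ins for the question's two tasks.
            static List<String> fetchList() { return new ArrayList<String>(); }
            static void processList(List<String> list) { /* per-element work */ }

            public static void main(String[] args) {
                SwingWorker<List<String>, Void> fillList =
                        new SwingWorker<List<String>, Void>() {
                    @Override
                    protected List<String> doInBackground() {
                        return fetchList();
                    }

                    @Override
                    protected void done() { // runs on the EDT after doInBackground()
                        try {
                            final List<String> list = get(); // the filled list
                            // Only now is it safe to start the dependent worker.
                            new SwingWorker<Void, Void>() {
                                @Override
                                protected Void doInBackground() {
                                    processList(list);
                                    return null;
                                }
                            }.execute();
                        } catch (Exception ex) {
                            ex.printStackTrace();
                        }
                    }
                };
                fillList.execute();
            }
        }

    Chaining in done() keeps everything on Swing's own callbacks; calling get() from a separate background thread, or a CountDownLatch, would also work.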

    Read the article

  • No speed-up with useless printf's using OpenMP

    - by t2k32316
    I just wrote my first OpenMP program that parallelizes a simple for loop. I ran the code on my dual core machine and saw some speed-up when going from 1 thread to 2 threads. However, I ran the same code on a school Linux server and saw no speed-up. After trying different things, I finally realized that removing some useless printf statements caused the code to have significant speed-up. Below is the main part of the code that I parallelized:

        #pragma omp parallel for private(i)
        for (i = 2; i <= n; i++) {
            printf("useless statement");
            prime[i-2] = is_prime(i);
        }

    I guess that the implementation of printf has significant overhead that OpenMP must be duplicating with each thread. What causes this overhead, and why can OpenMP not overcome it?

    Read the article

  • User input in Perl - issue with running script in Komodo Edit

    - by golwalkar.rohan
    I wrote this tiny bit of code in gedit and ran it:

        #!/usr/bin/perl
        print "Enter the radius of circle: \n";
        $radius = <>;
        chomp $radius;
        print "radius is: $radius\n";
        $circumference = (2*3.141592654) * $radius;
        print "Circumference of circle with radius : $radius = $circumference\n";

    It runs fine from the command line. I ran the same code in Komodo Edit and hit an issue: I expect the first line of output to be

        Enter the radius of circle:

    whereas it just waits on the screen, i.e. waits for an input, and only after that prints everything in sequence. Can someone tell me why it runs fine from the command line but not in Komodo?
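    A hedged guess at the cause, for anyone hitting the same thing: when output goes to an editor's console rather than a TTY, STDOUT is often block-buffered, so the prompt sits in the buffer while <> blocks for input. Enabling autoflush makes the prompt appear first:

        #!/usr/bin/perl
        $| = 1;   # autoflush STDOUT so the prompt appears before <> blocks
        print "Enter the radius of circle: \n";
        $radius = <>;
        chomp $radius;
        print "radius is: $radius\n";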

    Read the article

  • How to remove previous versions of Xcode

    - by Ed Marty
    When the 3.2 beta of the iPhone OS first came out, Xcode for 3.2 had to be installed side-by-side with the version for 3.1.2. I installed the new version (3.2) in /Developer and moved 3.1.2 to /Xcode3.1.2. Now I want to get rid of the old version and just use the new one, since we can do that now. I ran the uninstall tools at /Xcode3.1.2/Library/uninstall-devtools and uninstall-developer-folder, but the directory still exists and has lots of stuff still in it, adding up to about 5 GB. At this point, am I safe just deleting the folder if I want to totally get rid of it and still use the /Developer folder? (The /Developer folder, at about 8 GB, has lots more in it, but I'm not sure if that's just because it's larger or because the old version was also 8 GB before I ran the uninstall tool.)
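    For what it's worth, a sketch of the usual cleanup once both uninstall scripts have run, assuming nothing on the machine still points at the old folder:

        # The old tools' uninstall scripts were already run:
        #   sudo /Xcode3.1.2/Library/uninstall-devtools
        #   sudo /Xcode3.1.2/Library/uninstall-developer-folder
        # Whatever they left behind is inert support files, so the folder
        # itself can simply be deleted; /Developer is untouched.
        sudo rm -rf /Xcode3.1.2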

    Read the article

  • Generating a reasonable ctags database for Boost

    - by Robert S. Barnes
    I'm running Ubuntu 8.04, and I ran the command:

        $ ctags -R --c++-kinds=+p --fields=+iaS --extra=+q -f ~/.vim/tags/stdlibcpp /usr/include/c++/4.2.4/

    to generate a ctags database for the standard C++ library and STL (libstdc++) on my system, for use with the OmniCppComplete vim script. This gave me a very reasonable 4 MB tags file which seems to work fairly well. However, when I ran the same command against the installed Boost headers:

        $ ctags -R --c++-kinds=+p --fields=+iaS --extra=+q -f ~/.vim/tags/boost /usr/include/boost/

    I ended up with a 1.4 GB tags file! I haven't tried it yet, but that seems like it's going to be too large to be useful. Is there a way to get a slimmer, more usable tags file for my installed Boost headers?

    Edit: Just as a note, libstdc++ includes TR1, which has a lot of Boost libs in it. So there must be something weird going on for libstdc++ to come out with a 4 MB tags file and Boost to end up with a 1.4 GB tags file.
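    A hedged starting point rather than a known fix: much of the bulk in a Boost tags file tends to come from internal implementation headers, which Exuberant Ctags can skip with repeatable --exclude patterns (the directory names below are guesses to tune against your tree):

        $ ctags -R --c++-kinds=+p --fields=+iaS --extra=+q \
                --exclude=detail --exclude=preprocessor --exclude=preprocessed \
                -f ~/.vim/tags/boost /usr/include/boost/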

    Read the article

  • Setting up Java configurations in Eclipse: multiple .param files

    - by Charlie
    I'm going to be using ECJ for doing genetic programming, and I haven't touched Java in years. I'm working on setting up the Eclipse environment and I'm catching a few snags. The ECJ source has several packages, and several sample programs come along with it. I ran one sample program (called tutorial1) by going to the run configurations and adding -file pathToParamsFile to the program arguments. This made it point to the params file of that tutorial and run that sample. In a new example I am testing (from the package gui) there are TWO params files. I tried pointing to just one params file, and a program ran in the console, but there was supposed to be a GUI, which did not load. I'm not sure what I'm doing wrong. Any help would be greatly appreciated.
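    I can't speak to the gui example specifically, but as a hedged pointer: ECJ params files can chain, so a run configuration still passes a single -file argument and the named file pulls in the other one as its parent. A sketch with hypothetical file names:

        # main.params -- passed to the run configuration as: -file main.params
        # Pull in the second file; entries here override anything it defines.
        parent.0 = other.params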

    Read the article

  • Ruby on Rails link_to syntax

    - by mizipzor
    After following a tutorial I've found, I'm now redoing it again, without the scaffolding part, to learn it better. However, editing my \app\views\home\index.html.erb to contain:

        <h1>Rails test project</h1>
        <%= link_to "my blog", posts_path %>

    I get an error:

        undefined local variable or method `posts_path' for #<ActionView::Base:0x4e1d954>

    Before I did this, I ran rake db:create, defined a migration class, and ran rake db:migrate, everything without a problem. So the database should contain a posts table. But that link_to command can't seem to find posts_path. That variable (or is it even a function?) is probably defined through the scaffold routine. My question now is: how do I do that manually myself, i.e. define posts_path?
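    posts_path is a named route helper, not something the database or model provides, so here is a hedged sketch of the usual way to define it by hand (Rails 3 syntax; "Blog" stands in for the generated application module):

        # config/routes.rb -- declaring the resource generates posts_path,
        # new_post_path, edit_post_path(...), and the rest of the helpers
        Blog::Application.routes.draw do
          resources :posts
        end

    Running rake routes then lists every helper the file currently defines.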

    Read the article

  • TFS Build Configuration Vs Test Manager

    - by Ben
    Hi, I have been tasked with setting up TFS 2010 for my company. After setting up TFS and configuring the basics (a new collection, a project, adding the solution to source control), I thought I would try out some unit testing with it. I configured the Build Controller and Agent for my solution and added in some basic unit tests. These ran OK and did exactly what I would expect (I broke the build, then ran the Build Definition, and it showed me where the errors were). My question is, what advantages (apart from the "black box call stack logger") does Test Manager have over the TFS builds? Is it worth the extra effort of setting that up and configuring it? Only knowing the basics of what Test Manager is, that may be a very naive question to ask, and I apologise if it is. Thanks

    Read the article

  • OpenMP + SSE gives no speedup

    - by Sayan Ghosh
    Hi, my professor found this interesting experiment of 3D linearly separable kernel convolution using SSE and OpenMP, and gave me the task of benchmarking the statistics on our system. The author claims a crazy 18-fold speedup over the serial approach! That might not always hold, but we were expecting at least a 2-4 times speedup running this on a dual core Intel. http://software.intel.com/en-us/articles/16bit-3d-convolution-sse4openmp-implementation-on-penryn-cpu/#comment-41994 Alas, we could find exactly no speedup. The serial code always performs better, with or without OpenMP. I am using Linux, and observed a certain trend... when no other processes are running on the system, after a while the loadavg starts increasing and the %CPU utilization falls. Another probable false positive which I ran into accidentally: I started the program, then immediately paused it. Then I ran it in the background with bg, and saw a speedup of more than 2. This happens all the time! Any advice would be great. Thanks, Sayan

    Read the article

  • Xcode: Application name in OS X cannot be localized?

    - by Andrew Chang
    I have a project named "Multi-Camera Supervisor". I made the "MainMenu.xib" file localized. Here is the menu bar in the localized nib files in Xcode:

    For English:

    For Japanese:

    But when I run my application from Xcode, the first item doesn't work. Here are the menu bars when my application runs:

    For English:

    For Japanese:

    You can see that the application name is still "Multi-Camera Supervisor". Meanwhile, the application name that appears on the Dock icon is not localized either. How should I solve this? How can I localize the application name not only in the main menu but also in the Dock?
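    The menu's first item and the Dock label come from the bundle's display name, not from the nib, so localizing MainMenu.xib alone cannot change them. A sketch of the usual fix, assuming a Japanese ja.lproj folder (the Japanese rendering below is a placeholder): add an InfoPlist.strings file per language and set the display name there.

        /* ja.lproj/InfoPlist.strings */
        CFBundleDisplayName = "マルチカメラ監視";
        CFBundleName = "マルチカメラ監視";

    Setting LSHasLocalizedDisplayName to YES in Info.plist tells the system to look for these per-language overrides.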

    Read the article

  • Long Double in C

    - by reubensammut
    I've been reading the C Primer Plus book and got to this example:

        #include <stdio.h>

        int main(void)
        {
            float aboat = 32000.0;
            double abet = 2.14e9;
            long double dip = 5.32e-5;
            printf("%f can be written %e\n", aboat, aboat);
            printf("%f can be written %e\n", abet, abet);
            printf("%f can be written %e\n", dip, dip);
            return 0;
        }

    After I ran this on my MacBook I was quite shocked at the output:

        32000.000000 can be written 3.200000e+04
        2140000000.000000 can be written 2.140000e+09
        2140000000.000000 can be written 2.140000e+09

    So I looked around and found out that the correct format to display a long double is to use %Lf. However, I still can't understand why I got the double abet value instead of what I got when I ran it on Cygwin, Ubuntu, and iDeneb, which is roughly:

        -1950228512509697486020297654959439872418023994430148306244153100897726713609013030397828640261329800797420159101801613476402327600937901161313172717568.000000 can be written 2.725000e+02

    Any ideas?
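    For reference, the corrected calls: passing a long double where printf expects a double (as %f does) is undefined behavior, which is exactly why different platforms print different garbage.

        /* %Lf and %Le tell printf to read a long double from the argument list */
        printf("%Lf can be written %Le\n", dip, dip);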

    Read the article

  • New NCover 3.4.2 makes all my MSTest unit tests fail

    - by Steven
    Yesterday, I decided to install the newest NCover version (3.4.2). However, when I ran it on my existing .ncover configuration file, the NCover output suddenly reported that all my MSTest tests failed. Of course those tests succeed when run within Visual Studio. Because of this, NCover isn't able to determine any coverage. Somehow the old configuration doesn't seem to work with the new version. Does anyone have any idea what the problem could be or how to solve it? By the way, here is my NCover configuration.

    Project settings:

        Path to application to profile:
            c:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe
        Arguments for the application to profile:
            /testcontainer:D:\dev\MyApp\MyApp.Services.Tests.Unit\bin\Debug\MyApp.Services.Tests.Unit.dll
            /testcontainer:D:\dev\MyApp\MyApp.WS.Tests.Unit\bin\Debug\MyApp.WS.Tests.Unit.dll
        Working folder:
            D:\dev\MyApp

    Read the article

  • SQLce Select query problem

    - by DieHard
    I wrote a truck show contest voting app (voting, financials, etc.) using SQLite, then decided to write a backup app for show day using SQL CE 3.5. I created the database, moved it to the data directory, created the tables, and configured the DataGridViews; all is well. I entered some test data, started Management Studio 2008, ran a SELECT query against the table, and got null returns. I started the app from Visual Studio and found that the test data was gone. I re-entered the data, ran the query in Management Studio, and the data was gone again. If I use Visual Studio, I can start the app and enter data, close the app and restart, and the data is still there; the data seems to disappear only when an outside tool runs a SELECT query against it. I don't know CE that well, but this cannot be right. select * from votes = delete * from votes??????????????

    Read the article

  • SSH Password/User problem with Cygwin sshd service

    - by Supernovah
    Hello, I just set up sshd through Cygwin on a Windows XP Pro box overseas using a RAT, and included the openssh package in the install. I ran the Cygwin shell (from C:\cygwin). It's listening on a port I know is safe and forwarded properly, but I won't share its number. It's not a common port, but it's under 40000. Firewalls are off, etc. I'm on the first admin account made on the box (it's full admin). I've run the following commands:

        chmod +r /etc/passwd
        chmod +r /etc/group
        chmod 777 /var
        # Created new admin user account to be used via SSH
        mkpasswd -cl > /etc/passwd
        mkgroup --local > /etc/group

    I can connect locally, but not externally. I know my ports etc. are fine. Any possible problems? I really need this tunnel up. :P
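    A couple of hedged checks from the Windows side, given that local connections work but external ones don't (substitute your actual port number for PORT):

        # Run from the Cygwin shell. Is sshd listening on all interfaces
        # (0.0.0.0:PORT) rather than only on 127.0.0.1?
        netstat -ano | grep PORT
        # Query the state of the installed sshd service
        cygrunsrv -Q sshd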

    Read the article

  • JavaScript clears a variable after there is no further reference to it

    - by Praveen Prasad
    It is said that JavaScript clears a variable from memory after the last reference to it. Just for the sake of this question, I created a JS file with only one variable:

        //file start

        //variable defined
        var a = ["Hello"];

        //reference to that variable
        alert(a[0]);

        //file end

    There is no further reference to that variable, so I expect JavaScript to clear variable 'a'. Now I just loaded this page, then opened Firebug and ran this code:

        alert(a[0]);

    Now this alerts the value of the variable. If the statement "JavaScript clears a variable after there is no further reference to it" is true, how come alert() shows its value? Is it because all variables defined in the global context become properties of the window object, and since the window object still exists after the file has executed, so do its properties?

    Read the article
