Search Results

Search found 15241 results on 610 pages for 'solaris operating environment'.


  • Really "wow" them in the interview

    - by Juliet
    Let me put it to you this way: I'm a top-notch programmer, but a notoriously bad interviewee. I've flunked 3 interviews consecutively because I get so nervous that my voice tightens at least 2 octaves higher and I start visibly shaking. Mind you, I can handle whatever technical questions the interviewer throws at me in that state, but I think it looks bad to come off as a quivering, squeaky-voiced young woman during a job interview. I've just got the personality type of a shy computer programmer. No matter how technical I am, I'm going to get passed up in favor of a smooth talker. I have another interview coming up shortly, and I want to really impress the company. Here are my trouble spots:

    1. What can I do to be less nervous during my interview? I always get really excited when I hear I have a face-to-face interview, but get more and more anxious as D-Day (the interview) approaches.

    2. Prospective employers want me to explain what I used to do at my prior employment. I'm a very chatty person and tend to talk/squeak for 10 minutes at a time. How long or short should I time my answers? On that note, when I'm explaining what I did at prior jobs, what exactly is my interviewer looking for?

    3. At some point, my interviewer will ask, "Do you have any questions for me while you're here?" I should, but what kinds of questions should I ask to show that I'm interested in being employed?

    4. My interviewer always asks why I'm looking for a new job. The real reason is that my present salary is $27K/yr [edit to add: and I've yet to get a raise since I started], and I want to make more money; otherwise the work environment is fine. How do I sugarcoat "I want to make more money" into something that sounds nicer?

    5. I have only one prior programming job, and I've worked there for 18 months, but I have the skills of someone with 4 to 6 years of experience. What can I say to compete against applicants with more work experience?

    I took a low-paying $27K/yr programming job just to get my foot in IT, and I've been trying to leverage that job as a stepping stone to better opportunities. I get interviews because I consistently out-score senior-level developers in aptitude tests, and my desired salary range is right in the ballpark of what most companies want to offer. Unfortunately, while I've been programming as a hobby for 10 years and I'm set to graduate with my BA in Comp Sci in May '09, employers see me as a junior-level programmer with no degree. I want to prove them wrong and get a job that matches my skill level. I'd appreciate any advice anyone has to offer, especially if it can help me get a better job in the process.

    Read the article

  • How to make efficient code emerge through unit testing

    - by Jean
    Hi, I participate in a TDD Coding Dojo, where we try to practice pure TDD on simple problems. It occurred to me, however, that the code which emerges from the unit tests isn't the most efficient. Now this is fine most of the time, but what if the code's usage grows so that efficiency becomes a problem? I love the way the code emerges from unit testing, but is it possible to make the efficiency property emerge through further tests?

    Here is a trivial example in Ruby: prime factorization. I followed a pure TDD approach, making the tests pass one after the other, validating my original acceptance test (commented at the bottom). What further steps could I take if I wanted to make one of the generic prime factorization algorithms emerge? To reduce the problem domain, let's say I want to get a quadratic sieve implementation. Now, in this precise case I know the "optimal" algorithm, but in most cases the client will simply add a requirement that the feature run in less than "x" time for a given environment.

        require 'shoulda'
        require 'lib/prime'

        class MathTest < Test::Unit::TestCase
          context "The math module" do
            should "have a method to get primes" do
              assert Math.respond_to? 'primes'
            end
          end

          context "The primes method of Math" do
            should "return [] for 0" do
              assert_equal [], Math.primes(0)
            end
            should "return [1] for 1" do
              assert_equal [1], Math.primes(1)
            end
            should "return [1,2] for 2" do
              assert_equal [1,2], Math.primes(2)
            end
            should "return [1,3] for 3" do
              assert_equal [1,3], Math.primes(3)
            end
            should "return [1,2,2] for 4" do
              assert_equal [1,2,2], Math.primes(4)
            end
            should "return [1,5] for 5" do
              assert_equal [1,5], Math.primes(5)
            end
            should "return [1,2,3] for 6" do
              assert_equal [1,2,3], Math.primes(6)
            end
            should "return [1,3,3] for 9" do
              assert_equal [1,3,3], Math.primes(9)
            end
            should "return [1,2,5] for 10" do
              assert_equal [1,2,5], Math.primes(10)
            end
          end

          # context "Functional acceptance test 1" do
          #   context "the prime factors of 14101980 are 1,2,2,3,5,61,3853" do
          #     should "return [1,2,2,3,5,61,3853] for ${14101980*14101980}" do
          #       assert_equal [1,2,2,3,5,61,3853], Math.primes(14101980*14101980)
          #     end
          #   end
          # end
        end

    And the naive algorithm I created by this approach:

        module Math
          def self.primes(n)
            if n == 0
              return []
            else
              primes = [1]
              for i in 2..n do
                if n % i == 0
                  while n % i == 0
                    primes << i
                    n = n / i
                  end
                end
              end
              primes
            end
          end
        end
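
    One way to let a performance requirement drive the design is to encode the client's time budget as an executable test, i.e. a further context added inside MathTest. The sketch below is illustrative only: the 100 ms budget and the sample input are hypothetical values, and Benchmark.realtime measures wall-clock time, so a test like this is environment-dependent by nature.

        require 'benchmark'

        context "Performance acceptance test" do
          should "factor a large number within the agreed time budget" do
            input = 14101980 * 14101980
            elapsed = Benchmark.realtime { Math.primes(input) }
            # Fails for the naive O(n) trial division; passes once a
            # faster factorization emerges from the refactoring.
            assert elapsed < 0.1, "took #{elapsed}s, budget is 0.1s"
          end
        end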

    Read the article

  • How do I make Linux recognize a new SATA /dev/sda drive I hot swapped in without rebooting?

    - by Philip Durbin
    Hot swapping out a failed SATA /dev/sda drive worked fine, but when I went to swap in a new drive, it wasn't recognized:

        [root@fs-2 ~]# tail -18 /var/log/messages
        May 5 16:54:35 fs-2 kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen
        May 5 16:54:35 fs-2 kernel: ata1: SError: { PHYRdyChg CommWake }
        May 5 16:54:40 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:54:45 fs-2 kernel: ata1: device not ready (errno=-16), forcing hardreset
        May 5 16:54:45 fs-2 kernel: ata1: soft resetting link
        May 5 16:54:50 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:54:55 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:54:55 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:00 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:55:05 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:05 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:10 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:55:40 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:40 fs-2 kernel: ata1: limiting SATA link speed to 1.5 Gbps
        May 5 16:55:40 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:45 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:45 fs-2 kernel: ata1: reset failed, giving up
        May 5 16:55:45 fs-2 kernel: ata1: EH complete

    I tried a couple of things to make the server find the new /dev/sda, such as rescan-scsi-bus.sh, but they didn't work:

        [root@fs-2 ~]# echo "---" > /sys/class/scsi_host/host0/scan
        -bash: echo: write error: Invalid argument

        [root@fs-2 ~]# /root/rescan-scsi-bus.sh -l
        [snip]
        0 new device(s) found.
        0 device(s) removed.

        [root@fs-2 ~]# ls /dev/sda
        ls: /dev/sda: No such file or directory

    I ended up rebooting the server. /dev/sda was recognized, I fixed the software RAID, and everything is fine now. But for next time, how can I make Linux recognize a new SATA drive I have hot swapped in without rebooting?

    The operating system in question is RHEL 5.3:

        [root@fs-2 ~]# cat /etc/redhat-release
        Red Hat Enterprise Linux Server release 5.3 (Tikanga)

    The hard drive is a Seagate Barracuda ES.2 SATA 3.0-Gb/s 500-GB, model ST3500320NS.
    Here is the lspci output:

        [root@fs-2 ~]# lspci
        00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
        00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
        00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
        00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
        00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
        00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
        00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
        00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
        04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)

    Update: In perhaps a dozen cases, we've been forced to reboot servers because hot swap hasn't "just worked." Thanks for the answers suggesting I look more into the SATA controller. I've included the lspci output for the problematic system above (hostname: fs-2). I could still use some help understanding what exactly isn't supported hardware-wise in terms of hot swap for that system. Please let me know what other output besides lspci might be useful. The good news is that hot swap "just worked" today on one of our servers (hostname: www-1), which is very rare for us.
    Here is the lspci output:

        [root@www-1 ~]# lspci
        00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
        00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
        00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
        00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
        00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
        00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
        00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
        00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        00:19.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:19.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:19.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:19.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:19.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
        04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 04)
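
    For what it's worth, the sysfs scan interface is usually fed three space-separated wildcards rather than a bare "---"; a sketch of the commonly cited incantation follows. The host number and device path are assumptions that vary per system, and whether a rescan helps at all depends on the controller and driver in play.

        # Rescan SCSI host 0 for new devices ("channel target lun" wildcards;
        # note the spaces: "---" without them is rejected with EINVAL):
        echo "- - -" > /sys/class/scsi_host/host0/scan

        # Before pulling a failed disk, it can also help to delete it first:
        echo 1 > /sys/block/sda/device/delete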

    Read the article

  • How to maximize http.sys file upload performance

    - by anelson
    I'm building a tool that transfers very large, streaming data sets (possibly on the order of terabytes in a single stream; routinely in the tens of gigabytes) from one server to another. The client portion of the tool will read blocks from the source disk and send them over the network. The server side will read these blocks off the network and write them to a file on the server disk. Right now I'm trying to decide which transport to use. The options are raw TCP and HTTP.

    I really, REALLY want to be able to use HTTP. HttpListener (or WCF, if I want to go that route) makes it easy to plug in to the HTTP Server API (http.sys), and I get things like authentication and SSL for free. The problem right now is performance.

    I wrote a simple test harness that sends 128K blocks of NULL bytes using the BeginWrite/EndWrite async I/O idiom, with async BeginRead/EndRead on the server side. I've modified this test harness so I can do this with either HTTP PUT operations via HttpWebRequest/HttpListener, or plain old socket writes using TcpClient/TcpListener. To rule out issues with network cards or network pathways, both the client and server are on one machine and communicate over localhost.

    On my 12-core Windows 2008 R2 test server, the TCP version of this test harness can push bytes at 450MB/s with minimal CPU usage. On the same box, the HTTP version of the test harness runs between 130MB/s and 200MB/s, depending upon how I tweak it. In both cases CPU usage is low, and the vast majority of what CPU usage there is is kernel time, so I'm pretty sure my use of C# and the .NET runtime is not the bottleneck. The box has two 6-core Xeon X5650 processors, 24GB of single-ranked DDR3 RAM, and is used exclusively by me for my own performance testing.

    I already know about HTTP client tweaks like ServicePointManager.MaxServicePointIdleTime, ServicePointManager.DefaultConnectionLimit, ServicePointManager.Expect100Continue, and HttpWebRequest.AllowWriteStreamBuffering. Does anyone have any ideas for how I can get http.sys performance beyond 200MB/s? Has anyone seen it perform this well in any environment?
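
    Since the question names the client-side knobs, here is a minimal sketch of how they might be wired together for a large streaming PUT. This is illustrative only: the URL and data sizes are made up, and turning off AllowWriteStreamBuffering plus enabling SendChunked is just one plausible combination, not a known fix for the throughput gap.

        using System;
        using System.IO;
        using System.Net;

        class UploadSketch
        {
            static void Main()
            {
                var req = (HttpWebRequest)WebRequest.Create("http://localhost:8080/upload/"); // hypothetical endpoint
                req.Method = "PUT";
                req.AllowWriteStreamBuffering = false; // stream the body instead of buffering it in memory
                req.SendChunked = true;                // no Content-Length required up front
                ServicePointManager.Expect100Continue = false;
                ServicePointManager.DefaultConnectionLimit = 16;

                using (Stream s = req.GetRequestStream())
                {
                    byte[] block = new byte[128 * 1024]; // 128K blocks, as in the harness
                    for (int i = 0; i < 1024; i++)       // ~128MB of test data
                        s.Write(block, 0, block.Length);
                }
                using (var resp = (HttpWebResponse)req.GetResponse())
                    Console.WriteLine(resp.StatusCode);
            }
        }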

    Read the article

  • Help deciphering exception details from WebRequestCreator when setting ContentType to "application/json"

    - by Stephen Patten
    Hello, this one is real simple: run this Silverlight 4 example with the ContentType property commented out and you'll get back a response from my service in XML. Now uncomment the property and run it, and you'll get an exception similar to this one:

        System.Net.ProtocolViolationException occurred
        Message=A request with this method cannot have a request body.
        StackTrace:
          at System.Net.Browser.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state)
          at System.Net.Browser.ClientHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
          at com.patten.silverlight.ViewModels.WebRequestLiteViewModel.<MakeCall>b__0(IAsyncResult cb)
        InnerException:

    What I am trying to accomplish is just pulling down some JSON-formatted data from my WCF endpoint. Can this really be this hard, or is it another classic example of just overlooking something simple?

    Edit: While perusing SO, I noticed similar posts, like this one: Why am I getting ProtocolViolationException when trying to use HttpWebRequest?

    Thank you, Stephen

        try
        {
            Address = "http://stephenpattenconsulting.com/Services/GetFoodDescriptionsLookup(2100)";

            // Get the URI
            Uri httpSite = new Uri(Address);

            // Create the request object using the browser's networking stack
            // HttpWebRequest wreq = (HttpWebRequest)WebRequest.Create(httpSite);

            // Create the request using the operating system's networking stack
            HttpWebRequest wreq = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(httpSite);

            // http://stackoverflow.com/questions/239725/c-webrequest-class-and-headers
            // These headers have been set, so use the property that has been exposed to change them
            // wreq.Headers[HttpRequestHeader.ContentType] = "application/json";
            //wreq.ContentType = "application/json";

            // Issue the async request.
            // http://timheuer.com/blog/archive/2010/04/23/silverlight-authorization-header-access.aspx
            wreq.BeginGetResponse((cb) =>
            {
                HttpWebRequest rq = cb.AsyncState as HttpWebRequest;
                HttpWebResponse resp = rq.EndGetResponse(cb) as HttpWebResponse;
                StreamReader rdr = new StreamReader(resp.GetResponseStream());
                string result = rdr.ReadToEnd();
                Jounce.Framework.JounceHelper.ExecuteOnUI(() => { Result = result; });
                rdr.Close();
            }, wreq);
        }
        catch (WebException ex)
        {
            Jounce.Framework.JounceHelper.ExecuteOnUI(() => { Error = ex.Message; });
        }
        catch (Exception ex)
        {
            Jounce.Framework.JounceHelper.ExecuteOnUI(() => { Error = ex.Message; });
        }

    EDIT: This is how the WCF 4 endpoint is configured, primarily "adapted" from this link: http://geekswithblogs.net/michelotti/archive/2010/08/21/restful-wcf-services-with-no-svc-file-and-no-config.aspx

        [ServiceContract]
        public interface IRDA
        {
            [OperationContract]
            IList<FoodDescriptionLookup> GetFoodDescriptionsLookup(String id);

            [OperationContract]
            FOOD_DES GetFoodDescription(String id);

            [OperationContract]
            FOOD_DES InsertFoodDescription(FOOD_DES foodDescription);

            [OperationContract]
            FOOD_DES UpdateFoodDescription(String id, FOOD_DES foodDescription);

            [OperationContract]
            void DeleteFoodDescription(String id);
        }

        // RESTful service
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class RDAService : IRDA
        {
            [WebGet(UriTemplate = "FoodDescription({id})")]
            public FOOD_DES GetFoodDescription(String id) { ... }

            [AspNetCacheProfile("GetFoodDescriptionsLookup")]
            [WebGet(UriTemplate = "GetFoodDescriptionsLookup({id})")]
            public IList<FoodDescriptionLookup> GetFoodDescriptionsLookup(String id)
            {
                return rda.GetFoodDescriptionsLookup(id);
            }

            [WebInvoke(UriTemplate = "FoodDescription", Method = "POST")]
            public FOOD_DES InsertFoodDescription(FOOD_DES foodDescription) { ... }

            [WebInvoke(UriTemplate = "FoodDescription({id})", Method = "PUT")]
            public FOOD_DES UpdateFoodDescription(String id, FOOD_DES foodDescription) { ... }

            [WebInvoke(UriTemplate = "FoodDescription({id})", Method = "DELETE")]
            public void DeleteFoodDescription(String id) { ... }
        }

    And the portion of my web.config that pertains to WCF:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
          <standardEndpoints>
            <webHttpEndpoint>
              <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" />
            </webHttpEndpoint>
          </standardEndpoints>
        </system.serviceModel>
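
    A possible reading of the exception, offered as a guess rather than a confirmed diagnosis: Content-Type is an entity header that describes a request body, a GET has none, and the ClientHttp stack appears to enforce that. Since the endpoint sets automaticFormatSelectionEnabled="true", format negotiation would normally ride on the Accept header instead; a sketch, assuming this stack lets the header be set (on desktop .NET this is the HttpWebRequest.Accept property):

        HttpWebRequest wreq = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(httpSite);
        wreq.Accept = "application/json"; // assumption: settable on the Silverlight client stack
        wreq.BeginGetResponse(callback, wreq); // same callback as in the code above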

    Read the article

  • Mac OS - Built SVN from source, now Apache2 not loading sites

    - by Geuis
    This relates to another question I asked earlier today. I built SVN 1.6.2 from source. In the process, it has completely screwed up my dev environment.

    After I built SVN, Apache wasn't loading. It was giving me this error:

        Syntax error on line 117 of /private/etc/apache2/httpd.conf: Cannot load
        /usr/libexec/apache2/mod_dav_svn.so into server:
        dlopen(/usr/libexec/apache2/mod_dav_svn.so, 10): no suitable image found. Did find:
            /usr/libexec/apache2/mod_dav_svn.so: mach-o, but wrong architecture

    It appears that SVN overwrote the old mod_dav_svn.so; I am not able to get it to build as FAT, and I can't recover whatever was originally there. I resolved this (temporarily?) by commenting out the line that was loading mod_dav_svn.so, and got Apache to start at this point. However, even though Apache is running, I am now getting this error when trying to access my dev sites:

        Directory index forbidden by Options directive: /usr/share/tomcat6/webapps/ROOT/

    I have Apache2 sitting in front of Tomcat6. I access my local dev site using the internal name "http://localthesite". I have had virtual directories set up that have worked until this SVN debacle. Tomcat is installed at /usr/local/apache-tomcat, and webapps is /usr/local/apache-tomcat/webapps. Our production servers deploy Tomcat to /usr/share/tomcat6, so I have symlinks set up on my system to replicate this as well. These point back to the actual installation path. This has all been working fine as well. None of our configurations for Apache2, Tomcat, or .htaccess have changed.

    Over the weekend, I performed a "Repair Disk Permissions" on the system. This was before I discovered the mod_dav_svn.so problem. I have been reading up on this all morning, and the most common answer is that there is an "Options -Indexes" set. We have this in a config file, but it was there before, and when I removed it during testing I still got the same errors from Apache.

    At this point, I'm assuming I either totally borked the native Apache2 installation on this Mac, or there is a permissions error somewhere that I'm missing. The permissions error could be from the SVN installation, or from my repair process. Does anyone have any idea what could be the problem? I'm totally blocked right now and have no idea where to check next.
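
    When an architecture mismatch like this comes up, one way to see what actually got built is to compare the module's architectures against Apache's. A sketch follows; the paths match the error message, but the -arch list is an assumption for a Snow Leopard system whose Apache runs 64-bit, and the exact configure invocation for Subversion may need more flags.

        # What architectures does the module contain, and what does Apache itself need?
        file /usr/libexec/apache2/mod_dav_svn.so
        file /usr/sbin/httpd

        # Rebuilding Subversion as a universal (fat) binary might look like:
        ./configure CFLAGS="-arch x86_64 -arch i386" LDFLAGS="-arch x86_64 -arch i386"
        make && sudo make install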

    Read the article

  • How to install acts_as_xapian on Ubuntu

    - by normalocity
    I've run into some great resources for installing "acts_as_xapian" and the supporting native libraries that are necessary to make it work. I've even got it to work well on my dev machine (OS X). However, I've followed the instructions on this site, and it doesn't work on my Ubuntu 9.10 (Karmic) production box. After successfully installing the "core" and "bindings" for Xapian, the start of the Mongrel server fails with the following:

        => Booting Mongrel
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        acts_as_xapian: No Ruby bindings for Xapian installed
        /home/feelgoodtrader/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:105:in `const_missing': uninitialized constant ActsAsXapian::Search::Xapian (NameError)
            from /var/lib/gems/1.8/gems/acts_as_xapian-0.1.3/lib/acts_as_xapian/search.rb:8
            from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
            from /var/lib/gems/1.8/gems/acts_as_xapian-0.1.3/lib/acts_as_xapian.rb:2
            from /var/lib/gems/1.8/gems/acts_as_xapian-0.1.3/lib/acts_as_xapian.rb:1:in `each'
            from /var/lib/gems/1.8/gems/acts_as_xapian-0.1.3/lib/acts_as_xapian.rb:1
            from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:208:in `load'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:307:in `load_gems'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:307:in `each'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:307:in `load_gems'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:164:in `process'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
            from /home/feelgoodtrader/feelgoodtrader/config/environment.rb:13
            from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
            from /home/feelgoodtrader/.gem/ruby/1.8/gems/rails-2.3.5/lib/commands/server.rb:84
            from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from script/server:3
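
    One quick way to separate a gem problem from a bindings problem is to require the bindings directly with the same Ruby that runs Mongrel, since "No Ruby bindings for Xapian installed" is what acts_as_xapian prints when its own require fails. A sketch; the package name is my assumption for Karmic and may differ:

        # Were the bindings installed for /usr/lib/ruby/1.8, the interpreter in the trace?
        ruby1.8 -e 'require "xapian"; puts Xapian.version_string'

        # If that require fails, the distro's bindings package may be simpler
        # than building from source:
        sudo apt-get install libxapian-ruby1.8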

    Read the article

  • VB6 app not executing as scheduled task unless user is logged on

    - by Tedd Hansen
    Hi,

    Would greatly appreciate some help on this one! It may be a tricky one. :)

    Problem: I have a VB6 application which is set up as a scheduled task. It starts every time, but when executing CreateObject it fails if the user is not logged on to the computer. I am looking for information on what could cause this. My primary suspicion is that some Windows API call fails.

    Key points:
    - The behaviour is confirmed on Windows 2000, 2003, 2008 and Vista.
    - The application executes as user X at the scheduled time, executed by Windows Task Scheduler. It executes every time. The application does start!
    - If user X is logged on via RDP, it runs perfectly. (Note that the user doesn't need to be connected, only logged on.)
    - If user X is not logged on to the computer, the application fails.

    Failure point: The application fails when using CreateObject() to instantiate a DCOM object which is also part of the application. The DCOM objects declare .dll references at startup (globally, on top of the .bas file) and run a small startup function. The failure must be during startup, possibly in one of the .dll declarations.

    Thoughts: After some Googling, my initial suspicion was directed at MAPI. From what I could see, MAPI requires the user to be logged on. The application has MAPI references, but even with all MAPI references removed it still does not work. What is the difference if a user is logged on? Registry mapping? Environment? explorer.exe is running. Isn't the user logged on when the application executes as that user?

    What info would help?
    - A definitive answer would be truly great.
    - Any information regarding any VB6 feature or Windows API that could act differently depending on whether the user is logged on or not.
    - Similar experiences that may lead me in the right direction.
    - Tips on debugging this.

    Thanks! :)
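
    Since the failure only reproduces when no interactive session exists, logging the exact error from CreateObject to a file may narrow it down. A minimal VB6 sketch; the ProgID and log path are placeholders:

        Dim obj As Object
        On Error Resume Next
        Set obj = CreateObject("MyApp.MyClass")   ' hypothetical ProgID
        If obj Is Nothing Then
            ' Record the COM error so it can be read after the scheduled run
            Dim f As Integer
            f = FreeFile
            Open "C:\task_debug.log" For Append As #f
            Print #f, Now & " CreateObject failed: &H" & Hex(Err.Number) & " " & Err.Description
            Close #f
        End If
        On Error GoTo 0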

    Read the article

  • Am I "wasting" my time learning C and other low-level stuff?

    - by Andreas Grech
    I have just recently started learning C, and the reason I did that was because, frankly, I consider myself to be "less of a developer" than the people who know and work with C. Thus I planned to start learning ASM, C and C++, bought the K&R book, and started pushing myself to learn the C programming language, and up till now I'm doing great: learning about arrays the low-level way (i.e. the pointer + offset thing), pointers and all that, and obviously asking questions on Stack Overflow for guidance.

    My problem is that sometimes I get thinking that instead of learning this low-level stuff, maybe I should spend more time learning newer, more widely used technologies: basically, more web stuff. Now I am well versed in both C# and ASP.NET, and currently that's what I do for a living, but there still exist Microsoft technologies that I haven't quite touched upon, such as ASP.NET MVC, the Entity Framework, etc. And those are only Microsoft technologies; obviously there is other stuff that I would like to touch upon, stuff like Ruby, which would lead me to Ruby on Rails, or Python for Django, or even Java and J2EE, or maybe PHP: basically, mainly web stuff. Mind you, I did touch upon some of the stuff I mentioned earlier, such as PHP and Java, but I am still not as well versed in them as I am in C# and ASP.NET. Still, I think that learning other languages that are used in the web environment will broaden my horizons, both as a developer who loves learning, and career-wise.

    My point is: am I really using up my time correctly by learning older, lower-level stuff? Stuff that, for my current line of work, I will most probably never use, but which is still interesting to know? To be frankly honest, I am also learning C so that I could, maybe someday, get into electronics and microcontroller programming, but that is a whole new world for me and, if I choose to go there, will take some time to get adjusted to. And even then, I don't know if I can get a career in that line of work.

    But I still wonder about this question over and over: am I doing the right thing by learning C instead of something (web stuff) that will most probably be more useful for me career-wise? I'm sorry for asking such a long and most probably boring question, but I feel as if this is the only place where I can ask such a question and get an honest answer from experts in the field. Thank you for your time.

    Read the article

  • Crystal Reports .NET Visual Studio 2008 bundled edition

    - by DeveloperChris
    I have a serious issue with Crystal Reports. When run in my development environment or debugged on my local machine it works fine, but when the application is published to a Windows Server 2003 box it shows the dreaded "The report you requested requires further information" message. I have had no luck trying to get rid of this message. Anybody know what I can try? DC

    Here is a bunch more info. I use a placeholder in the aspx page and then set the user/password and database in the code-behind. I could not get it to work with a dataset, and found that I had to assign an ODBC connection in the CR designer and then, in the code-behind, change the above details as required. This is done because the same report can get the data from 3 different databases (live, development and training).

        protected override void Page_Load(object sender, EventArgs e)
        {
            base.Page_Load(sender, e);
            CrystalReportSource1.ReportDocument.Load(Server.MapPath(@"~/Reports/Report5asp.rpt"));
            CrystalReportViewer1.ReportSource = ConfigureCrystalReports(CrystalReportSource1.ReportDocument, CrystalReportViewer1);

            // parameters
            CrystalReportViewer1.ParameterFieldInfo.Clear();
            AddParameter("DIid", _app.Data["DIid"], CrystalReportViewer1.ParameterFieldInfo);
            AddParameter("EEid", _app.Data["EEid"], CrystalReportViewer1.ParameterFieldInfo);
            AddParameter("CTid", _app.Data["CTid"], CrystalReportViewer1.ParameterFieldInfo);
        }

        public ReportDocument ConfigureCrystalReports(ReportDocument report, CrystalReportViewer viewer)
        {
            String _connectionString = _app.ConnectionString();
            String dsn = _app.DSN();
            SqlConnectionStringBuilder SConn = new SqlConnectionStringBuilder(_connectionString);

            TableLogOnInfos crtableLogoninfos = new TableLogOnInfos();
            TableLogOnInfo crtableLogoninfo = new TableLogOnInfo();
            ConnectionInfo crConnectionInfo = new ConnectionInfo();

            crConnectionInfo.ServerName = dsn; // SConn.DataSource;
            crConnectionInfo.DatabaseName = SConn.InitialCatalog;
            crConnectionInfo.UserID = SConn.UserID;
            crConnectionInfo.Password = SConn.Password;
            crConnectionInfo.Type = ConnectionInfoType.SQL;
            crConnectionInfo.IntegratedSecurity = false;

            foreach (CrystalDecisions.CrystalReports.Engine.Table CrTable in report.Database.Tables)
            {
                crtableLogoninfo = CrTable.LogOnInfo;
                crtableLogoninfo.ConnectionInfo = crConnectionInfo;
                CrTable.ApplyLogOnInfo(crtableLogoninfo);
            }
            return report;
        }

    As stated, this works fine on my XP machine used for development. When deployed on Windows Server 2003, I get the error. DC

    Some interesting additional information: I moved the development to my home machine so I could work on the problem this weekend, so now I am developing, debugging and testing on the same machine! In VS2008 I can edit and preview the reports with no problems. If I fire up the debugger, I can view the reports in the browser with no problems. But if I publish the website to another folder on the same machine, fire up IIS and try to browse to a report, I get the aforementioned error. All else works as expected. IIS runs under different permissions than VS2008, so perhaps it's something to do with that, but I have tried lots of different permissions and cannot get it to run. DC
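
    One thing sometimes suggested for this symptom, offered here only as a guess: apply the logon at the document level as well as per table, since the viewer can prompt for "further information" when any table (or a subreport's table) is left without credentials. A one-line sketch using the values already in ConfigureCrystalReports:

        // In addition to ApplyLogOnInfo per table:
        report.SetDatabaseLogon(SConn.UserID, SConn.Password, dsn, SConn.InitialCatalog);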

    Read the article

  • Why do I get a NullPointerException when initializing Spring?

    - by niklassaers
    Hi guys, I've got a problem running a batch job on my server, whereas it runs fine from Eclipse on my development workstation. I've got my Spring environment set up using Roo, made an entity, and made a batch that does some work, and tested it well on my development box. I initialize my context and do the work, but when I run my batch on the server, the context isn't initialized properly. Here's the code:

        public class TestBatch {

            private static ApplicationContext context;

            @SuppressWarnings("unchecked")
            public static void main(final String[] args) {
                context = new ClassPathXmlApplicationContext("/META-INF/spring/applicationContext.xml");
                try {
                    @SuppressWarnings("unused")
                    TestBatch app = new TestBatch();
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }

            public void TestBatch() {
                /** Do something using the context **/
            }
        }

    And here's the log and exception:

        2010-02-16 11:54:16,072 [main] INFO org.springframework.context.support.ClassPathXmlApplicationContext - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@6037fb1e: startup date [Tue Feb 16 11:54:16 CET 2010]; root of context hierarchy
        Exception in thread "main" java.lang.ExceptionInInitializerError
            at org.springframework.context.support.AbstractRefreshableApplicationContext.createBeanFactory(AbstractRefreshableApplicationContext.java:194)
            at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:127)
            at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:458)
            at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:388)
            at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139)
            at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:83)
            at tld.mydomain.myproject.batch.TestBatch.main(TestBatch.java:51)
        Caused by: java.lang.NullPointerException
            at org.springframework.beans.factory.support.DefaultListableBeanFactory.<clinit>(DefaultListableBeanFactory.java:103)
            ... 7 more

    Any idea or hints as to what's going on? My classpath is set to $PROJECTHOME/target/classes, all my dependencies are in $PROJECTHOME/target/lib, and I execute using:

        export CLASSPATH=$PROJECTHOME/target/classes
        java -Djava.endorsed.dirs=$PROJECTHOME/target/lib tld.mydomain.myproject.batch.TestBatch

    Is there anything in my setup that looks very wrong? When I run this from Eclipse, no problems, but when I deploy it on the server where I want to run it and run it as described above, I get this problem. Because it runs from Eclipse, I believe my config files are all right, but how can I debug what's causing this? Perhaps I have some config errors or a mismatch between the server and the development workstation after all? Or is this a really weird way of saying "file not found", and if so, how do I make sure it finds the correct file?

    I'm really looking forward to hearing your suggestions as to how to tackle this problem. Cheers, Nik
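
    For what it's worth, java.endorsed.dirs is meant for overriding endorsed standards (XML APIs, CORBA), not as a general library path, and loading every dependency through it could plausibly produce a static-initializer failure like the one above; that is a guess, not a confirmed diagnosis. A hedged alternative is to put the jars on the ordinary classpath; on Java 6 a wildcard works:

        # Load dependencies via the classpath instead of endorsed dirs (Java 6 wildcard):
        java -cp "$PROJECTHOME/target/classes:$PROJECTHOME/target/lib/*" \
            tld.mydomain.myproject.batch.TestBatch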

    Read the article

  • Redoing object model construction to fit with asynchronous data fetching

    - by Andrew Patterson
    I have modeled a set of objects that correspond to some real-world concepts: TradeDrug, GenericDrug, TradePackage, DrugForm. Underlying the simple object model I am trying to provide is a complex medical terminology that uses numeric codes to represent relationships and concepts, all accessible via a REST service; I am trying to hide away some of that complexity with an object wrapper. To give a concrete example, I can call:

        TradeDrug d = Searcher.FindTradeDrug("Zoloft")

    or:

        TradeDrug d = new TradeDrug(34)

    where 34 might be the code for Zoloft. This will consult a remote server to find out some details about Zoloft. I might then call:

        GenericDrug generic = d.EquivalentGeneric()
        System.Out.WriteLine(generic.ActiveIngredient().Name)

    in order to get back the generic drug sertraline as an object (again via a background REST call to the remote server that has all these drug details), and then perhaps find its ingredient.

    This model works fine and is being used in some applications that involve data processing. Recently, however, I wanted to do a Silverlight application that used and displayed these objects. The Silverlight environment only allows asynchronous REST/web service calls. I have no problem with how to make the asynchronous calls, but I am having trouble with what the design should be for my object construction. Currently the constructors for my objects do some REST calls synchronously:

        public TradeDrug(int code)
        {
            form = restclient.FetchForm(code);
            name = restclient.FetchName(code);
            etc..
        }

    If I have to use async 'events' or 'actions' in order to use the Silverlight web client (I know Silverlight can be forced to be a synchronous client, but I am interested in asynchronous approaches), does anyone have any guidance or best practice for how to structure my objects? I can pass an action callback into the constructor:

        public TradeDrug(int code, Action<TradeDrug> constructCompleted)
        {
        }

    but this then allows the user to have a TradeDrug object instance before what I want to construct is actually finished. It also doesn't support an 'event' async pattern, because the object doesn't exist to add the event to until it is constructed. Extending that approach might be a factory object that itself has an asynchronous interface to objects:

        factory.GetTradeDrugAsync(code, completedaction)

    or with a GetTradeDrugCompleted event? Does anyone have any recommendations?
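
    One common shape for the factory idea is sketched below, under the assumption of a callback-based REST client (all names are hypothetical): the factory keeps the constructor internal, so no partially built TradeDrug ever escapes, and it hands the instance out only after the fetches complete.

        public class TradeDrugFactory
        {
            private readonly RestClient restclient = new RestClient(); // hypothetical async client

            public void GetTradeDrugAsync(int code, Action<TradeDrug> completed)
            {
                // Chain the async fetches; the object is constructed only at the end,
                // so callers never observe a half-initialized instance.
                restclient.FetchFormAsync(code, form =>
                    restclient.FetchNameAsync(code, name =>
                    {
                        var drug = new TradeDrug(code, form, name); // internal ctor, does no I/O
                        completed(drug);
                    }));
            }
        }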

    Read the article

  • Problem connecting to postgres with Kohana 3 database module on OS X Snow Leopard

    - by Bart Gottschalk
    Environment: Mac OS X 10.6 Snow Leopard, PHP 5.3, Kohana 3.0.4

    When I try to configure and use a connection to a PostgreSQL database on localhost, I get the following error:

        ErrorException [ Warning ]: mysql_connect(): [2002] No such file or directory (trying to connect via unix:///var/mysql/mysql.sock)

    Here is the configuration of the database in /modules/database/config/database.php (note the third instance, named 'pgsqltest'):

        return array
        (
            'default' => array
            (
                'type'       => 'mysql',
                'connection' => array(
                    /**
                     * The following options are available for MySQL:
                     *
                     * string   hostname
                     * string   username
                     * string   password
                     * boolean  persistent
                     * string   database
                     *
                     * Ports and sockets may be appended to the hostname.
                     */
                    'hostname'   => 'localhost',
                    'username'   => FALSE,
                    'password'   => FALSE,
                    'persistent' => FALSE,
                    'database'   => 'kohana',
                ),
                'table_prefix' => '',
                'charset'      => 'utf8',
                'caching'      => FALSE,
                'profiling'    => TRUE,
            ),
            'alternate' => array(
                'type'       => 'pdo',
                'connection' => array(
                    /**
                     * The following options are available for PDO:
                     *
                     * string   dsn
                     * string   username
                     * string   password
                     * boolean  persistent
                     * string   identifier
                     */
                    'dsn'        => 'mysql:host=localhost;dbname=kohana',
                    'username'   => 'root',
                    'password'   => 'r00tdb',
                    'persistent' => FALSE,
                ),
                'table_prefix' => '',
                'charset'      => 'utf8',
                'caching'      => FALSE,
                'profiling'    => TRUE,
            ),
            'pgsqltest' => array(
                'type'       => 'pdo',
                'connection' => array(
                    /**
                     * The following options are available for PDO:
                     *
                     * string   dsn
                     * string   username
                     * string   password
                     * boolean  persistent
                     * string   identifier
                     */
                    'dsn'        => 'mysql:host=localhost;dbname=pgsqltest',
                    'username'   => 'postgres',
                    'password'   => 'dev1234',
                    'persistent' => FALSE,
                ),
                'table_prefix' => '',
                'charset'      => 'utf8',
                'caching'      => FALSE,
                'profiling'    => TRUE,
            ),
        );

    And here is the code to create the database instance, create a query and execute it:

        $pgsqltest_db = Database::instance('pgsqltest');
        $query = DB::query(Database::SELECT, 'SELECT * FROM test')->execute();

    I'm continuing to research a solution for this error, but thought I'd ask to see if someone else has already found one. Any ideas are welcome. One other note: I know my build of PHP can access this PostgreSQL db, since I'm able to manage the db using phpPgAdmin. But I have yet to determine what phpPgAdmin is doing differently to connect to the db than what Kohana 3 is attempting. Bart
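
    A detail worth noticing in the config above, flagged as an observation rather than a confirmed fix: PDO chooses its driver from the DSN prefix, and the 'pgsqltest' DSN begins with mysql:, which would at least be consistent with a MySQL-style connection being attempted. For comparison, a PostgreSQL-style PDO DSN for the same instance would look like this (credentials copied from the config):

        'pgsqltest' => array(
            'type'       => 'pdo',
            'connection' => array(
                'dsn'      => 'pgsql:host=localhost;dbname=pgsqltest',
                'username' => 'postgres',
                'password' => 'dev1234',
            ),
        ),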

    Read the article

  • Django Facebook integration error

    - by Gaurav
    I'm trying to integrate Facebook into my application so that users can use their FB login to log in to my site. I've got everything up and running, and there are no issues when I run my site using the command line:

        python manage.py runserver

    But this same code refuses to run when I try to run it through Apache. I get the following error:

        Environment:

        Request Method: GET
        Request URL: http://helvetica/foodfolio/login
        Django Version: 1.1.1
        Python Version: 2.6.4
        Installed Applications:
        ['django.contrib.auth',
         'django.contrib.contenttypes',
         'django.contrib.sessions',
         'django.contrib.sites',
         'foodfolio.app',
         'foodfolio.facebookconnect']
        Installed Middleware:
        ('django.contrib.sessions.middleware.SessionMiddleware',
         'facebook.djangofb.FacebookMiddleware',
         'django.middleware.common.CommonMiddleware',
         'django.contrib.auth.middleware.AuthenticationMiddleware',
         'facebookconnect.middleware.FacebookConnectMiddleware')

        Template error:
        In template /home/swat/website-apps/foodfolio/facebookconnect/templates/facebook/js.html, error at line 2
        Caught an exception while rendering: No module named app.models

        1 : <script type="text/javascript">
        2 : FB_RequireFeatures(["XFBML"], function() {FB.Facebook.init("{{ facebook_api_key }}", "{% url facebook_xd_receiver %}")});
        3 :
        4 : function facebookConnect(loginForm) {
        5 :     FB.Connect.requireSession();
        6 :     FB.Facebook.get_sessionState().waitUntilReady(function(){loginForm.submit();});
        7 : }
        8 : function pushToFacebookFeed(data){
        9 :     if(data['success']){
        10 :        var template_data = data['template_data'];
        11 :        var template_bundle_id = data['template_bundle_id'];
        12 :        feedTheFacebook(template_data,template_bundle_id,function(){});

        Traceback:
        File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py" in get_response
          92. response = callback(request, *callback_args, **callback_kwargs)
        File "/home/swat/website-apps/foodfolio/app/controller.py" in __showLogin__
          238. context_instance = RequestContext(request))
        File "/usr/lib/pymodules/python2.6/django/shortcuts/__init__.py" in render_to_response
          20. return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs)
        File "/usr/lib/pymodules/python2.6/django/template/loader.py" in render_to_string
          108. return t.render(context_instance)
        File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render
          178. return self.nodelist.render(context)
        File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render
          779. bits.append(self.render_node(node, context))
        File "/usr/lib/pymodules/python2.6/django/template/debug.py" in render_node
          71. result = node.render(context)
        File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render
          946. autoescape=context.autoescape))
        File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render
          779. bits.append(self.render_node(node, context))
        File "/usr/lib/pymodules/python2.6/django/template/debug.py" in render_node
          81. raise wrapped

        Exception Type: TemplateSyntaxError at /foodfolio/login
        Exception Value: Caught an exception while rendering: No module named app.models
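
    A guess at the difference between runserver and Apache, offered as a sketch rather than a diagnosis: the dev server runs from the project directory, while Apache does not, so imports like 'app.models' and 'foodfolio.app' only resolve if both the project directory and its parent are on sys.path. A hypothetical mod_wsgi file illustrating the idea (paths are assumptions inferred from the traceback):

        import os, sys

        # Parent dir, so 'import foodfolio.app.models' works...
        sys.path.insert(0, '/home/swat/website-apps')
        # ...and the project dir, so bare 'app.models' imports also resolve.
        sys.path.insert(0, '/home/swat/website-apps/foodfolio')

        os.environ['DJANGO_SETTINGS_MODULE'] = 'foodfolio.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()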

    Read the article

  • Emulator crashes

    - by Dave
    I am setting up an Android environment for the first time in Eclipse. I have many years of Eclipse experience, but am new to Android. This is being done on an Apple Mac mini running Mac OS X 10.6.3. I am using the latest Eclipse Classic, version 3.5.2. I am trying to get the tiny hello world program running. When I run it, I get the following in the console window of Eclipse:

        [2010-06-12 13:48:08 - HelloAndroid] Automatic Target Mode: launching new emulator with compatible AVD 'Android2.2AVD'
        [2010-06-12 13:48:08 - HelloAndroid] Launching a new emulator with Virtual Device 'Android2.2AVD'
        [2010-06-12 13:48:11 - HelloAndroid] New emulator found: emulator-5554
        [2010-06-12 13:48:11 - HelloAndroid] Waiting for HOME ('android.process.acore') to be launched...
        [2010-06-12 13:48:12 - Emulator] 2010-06-12 13:48:12.783 emulator[50495:903] Warning once: This application, or a library it uses, is using NSQuickDrawView, which has been deprecated. Apps should cease use of QuickDraw and move to Quartz.
        [2010-06-12 13:48:19 - HelloAndroid] emulator-5554 disconnected! Cancelling 'com.example.helloandroid.HelloAndroid activity launch'!

    The emulator crashes with the following info. I have followed all the instructions for running the hello world sample. Anyone have any ideas?

        Process:         emulator [50398]
        Path:            /Users/jeremy/android-sdk-mac_86/tools/emulator
        Identifier:      emulator
        Version:         ??? (???)
        Code Type:       X86 (Native)
        Parent Process:  eclipse [50388]

        Date/Time:       2010-06-12 13:28:38.595 -0400
        OS Version:      Mac OS X 10.6.3 (10D573)
        Report Version:  6

        Interval Since Last Report:          363037 sec
        Crashes Since Last Report:           9
        Per-App Crashes Since Last Report:   7

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x00000000007fd000
        Crashed Thread:  4

        Thread 0:  Dispatch queue: com.apple.main-thread
        0   emulator            0x000eed4e helper_set_cp15 + 30

        Thread 1:
        0   libSystem.B.dylib   0x9020bbd2 __workq_kernreturn + 10
        1   libSystem.B.dylib   0x9020c168 _pthread_wqthread + 941
        2   libSystem.B.dylib   0x9020bd86 start_wqthread + 30

        Thread 2:  Dispatch queue: com.apple.libdispatch-manager
        0   libSystem.B.dylib   0x9020cb42 kevent + 10
        1   libSystem.B.dylib   0x9020d25c _dispatch_mgr_invoke + 215
        2   libSystem.B.dylib   0x9020c719 _dispatch_queue_invoke + 163
        3   libSystem.B.dylib   0x9020c4be _dispatch_worker_thread2 + 240
        4   libSystem.B.dylib   0x9020bf41 _pthread_wqthread + 390
        5   libSystem.B.dylib   0x9020bd86 start_wqthread + 30

        Thread 3:
        0   libSystem.B.dylib          0x901e635a semaphore_timedwait_signal_trap + 10
        1   libSystem.B.dylib          0x90213ea1 _pthread_cond_wait + 1066
        2   libSystem.B.dylib          0x90242a28 pthread_cond_timedwait_relative_np + 47
        3   com.apple.audio.CoreAudio  0x9056f965 CAGuard::WaitFor(unsigned long long) + 219
        4   com.apple.audio.CoreAudio  0x90572997 CAGuard::WaitUntil(unsigned long long) + 289
        5   com.apple.audio.CoreAudio  0x90570294 HP_IOThread::WorkLoop() + 1892
        6   com.apple.audio.CoreAudio  0x9056fb2b HP_IOThread::ThreadEntry(HP_IOThread*) + 17
        7   com.apple.audio.CoreAudio  0x9056fa42 CAPThread::Entry(CAPThread*) + 140
        8   libSystem.B.dylib          0x90213a19 _pthread_start + 345
        9   libSystem.B.dylib          0x9021389e thread_start + 34

        Thread 4 Crashed:
        0   emulator            0x00040380 audioInDeviceIOProc + 96

        Thread 4 crashed with X86 Thread State (32-bit):
          eax: 0x00000000  ebx: 0x007fd000  ecx: 0x000001fe  edx: 0x0198f3f0
          edi: 0x00000200  esi: 0x01119850  ebp: 0x01119800  esp: 0xb020fad0
           ss: 0x0000001f  efl: 0x00010212  eip: 0x00040380   cs: 0x00000017
           ds: 0x0000001f   es: 0x0000001f   fs: 0x0000001f   gs: 0x00000037
          cr2: 0x007fd000

    Read the article

  • Odd ActiveRecord model dynamic initialization bug in production

    - by qfinder
    I've got an ActiveRecord (2.3.5) model that occasionally exhibits incorrect behavior that appears to be related to a problem in its dynamic initialization. Here's the code:

        class Widget < ActiveRecord::Base
          extend ActiveSupport::Memoizable
          serialize :settings

          VALID_SETTINGS = %w(show_on_sale show_upcoming show_current show_past)

          VALID_SETTINGS.each do |setting|
            class_eval %{
              def #{setting}=(val); self.settings[:#{setting}] = (val == "1"); end
              def #{setting}; self.settings[:#{setting}]; end
            }
          end

          def initialize_settings
            self.settings ||= { :show_on_sale => true, :show_upcoming => true }
          end
          after_initialize :initialize_settings

          # All the other stuff the model does
        end

    The idea was to use a single record field (settings) to persist a bunch of configuration data for this object, but allow all the settings to seamlessly work with form helpers and the like. (Why this approach makes sense here is a little out of scope, but let's assume that it does.) Net-net, Widget should end up with instance methods (e.g. #show_on_sale= and #show_on_sale) for all the entries in the VALID_SETTINGS array. Any default values should be specified in initialize_settings.

    And indeed this works, mostly. In dev and staging, no problems at all. But in production, the app sometimes ends up in a state where a) any writes to the dynamically generated setters fail, and b) none of the default values appear to be set, although my leading theory is that the dynamically generated reader methods are just broken. The code, db, and environment are otherwise identical between the three. A typical error message / backtrace on the failure looks like:

        IndexError: index 141145 out of string
          (eval):2:in `[]='
          (eval):2:in `show_on_sale='
          [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2746:in `send'
          [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2746:in `attributes='
          [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2742:in `each'
          [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2742:in `attributes='
          [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2634:in `update_attributes!'
          ... (then the controller and all the way down)

    Ideas or theories as to what might be going on? My leading theory is that something is going wrong in instance initialization wherein the settings attribute is ending up as a String rather than a Hash. This explains both the above setter failure (:show_on_sale is being used to index into the string) and the fact that getters don't work (an out-of-bounds [] call on a string just returns nil). But then how and why might settings occasionally end up as a String rather than a Hash?
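
    If the String theory is right, one way to both confirm and contain it is to coerce settings back to a Hash in the after_initialize hook and log whenever the coercion fires. This is a sketch consistent with that theory, not a root-cause fix:

        def initialize_settings
          unless settings.nil? || settings.is_a?(Hash)
            # If the serialized column ever comes back as a String,
            # record the raw value so the affected rows can be found.
            logger.error("Widget #{id}: settings deserialized as #{settings.class}: #{settings.inspect}")
            self.settings = nil
          end
          self.settings ||= { :show_on_sale => true, :show_upcoming => true }
        end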

    Read the article

  • JVM throws OutOfMemory during GC though there is plenty of memory left...

    - by Shu L.
    I have my Java application configured to use 5G of memory, and I got an OutOfMemory error out of the blue. I inspected the GC log and found plenty of memory left: the young generation occupies 4% of its allocated space, tenured generation occupancy is 5%, and the perm generation is at 43%. I am puzzled why the JVM throws an OutOfMemory at GC time. Does anyone know why this is happening? Your help is greatly appreciated.

    JVM memory and GC settings:

        -server -Xms5g -Xmx5g -Xss256k -XX:NewSize=2g -XX:MaxNewSize=2g
        -XX:+UseParallelOldGC -XX:+UseTLAB -XX:SurvivorRatio=8
        -XX:TargetSurvivorRatio=90 -XX:+DisableExplicitGC

    gc.log:

        2009-09-19T03:34:59.741+0000: 92836.778: [GC Desired survivor size 152567808 bytes, new threshold 1 (max 15) [PSYoungGen: 1941492K->144057K(1947072K)] 3138022K->1340830K(5092800K), 0.1947640 secs] [Times: user=0.61 sys=0.01, real=0.19 secs]
        2009-09-19T03:35:29.918+0000: 92866.954: [GC Desired survivor size 152109056 bytes, new threshold 1 (max 15) [PSYoungGen: 1941625K->144049K(1948608K)] 3138398K->1341080K(5094336K), 0.1942000 secs] [Times: user=0.61 sys=0.01, real=0.20 secs]
        2009-09-19T03:35:56.883+0000: 92893.920: [GC Desired survivor size 156565504 bytes, new threshold 1 (max 15) [PSYoungGen: 1567994K->115427K(1915072K)] 2765026K->1312820K(5060800K), 0.1586320 secs] [Times: user=0.50 sys=0.01, real=0.16 secs]
        2009-09-19T03:35:57.042+0000: 92894.079: [GC Desired survivor size 179961856 bytes, new threshold 1 (max 15) [PSYoungGen: 115427K->0K(1898560K)] 1312820K->1313987K(5044288K), 0.0775650 secs] [Times: user=0.42 sys=0.19, real=0.08 secs]
        2009-09-19T03:35:57.120+0000: 92894.157: [Full GC [PSYoungGen: 0K->0K(1898560K)] [ParOldGen: 1313987K->159522K(3145728K)] 1313987K->159522K(5044288K) [PSPermGen: 20025K->19942K(40256K)], 0.5692300 secs] [Times: user=2.18 sys=0.05, real=0.57 secs]
        2009-09-19T03:35:57.690+0000: 92894.726: [GC Desired survivor size 197066752 bytes, new threshold 1 (max 15) [PSYoungGen: 0K->0K(1745728K)] 159522K->159522K(4891456K), 0.0072590 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
        2009-09-19T03:35:57.698+0000: 92894.734: [Full GC [PSYoungGen: 0K->0K(1745728K)] [ParOldGen: 159522K->158627K(3145728K)] 159522K->158627K(4891456K) [PSPermGen: 19942K->19934K(45504K)], 0.3280480 secs] [Times: user=1.46 sys=0.00, real=0.33 secs]

        Heap
         PSYoungGen      total 1745728K, used 87233K [0x00002aab73650000, 0x00002aabf3650000, 0x00002aabf3650000)
          eden space 1745664K, 4% used [0x00002aab73650000,0x00002aab78b80778,0x00002aabddf10000)
          from space 64K, 0% used [0x00002aabddf10000,0x00002aabddf10000,0x00002aabddf20000)
          to   space 192448K, 0% used [0x00002aabe7a60000,0x00002aabe7a60000,0x00002aabf3650000)
         ParOldGen       total 3145728K, used 158627K [0x00002aaab3650000, 0x00002aab73650000, 0x00002aab73650000)
          object space 3145728K, 5% used [0x00002aaab3650000,0x00002aaabd138d28,0x00002aab73650000)
         PSPermGen       total 45504K, used 19965K [0x00002aaaae250000, 0x00002aaab0ec0000, 0x00002aaab3650000)
          object space 45504K, 43% used [0x00002aaaae250000,0x00002aaaaf5cf668,0x00002aaab0ec0000)

    I am on 64-bit Linux and JRE 1.6.0_10:

        $ uname -a
        Linux x 2.6.24-etchnhalf.1-amd64 #1 SMP Tue Oct 14 03:11:45 UTC 2008 x86_64 GNU/Linux
        $ java -version
        java version "1.6.0_10"
        Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
        Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)
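
    One low-risk way to gather more evidence, which assumes nothing about the root cause: have the JVM record the failure itself. The flags below are standard HotSpot options; the dump path is a placeholder.

        # Capture a heap dump and verbose GC detail at the moment of failure:
        java -server -Xms5g -Xmx5g \
             -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp \
             -XX:+PrintGCDetails -XX:+PrintHeapAtGC \
             -verbose:gc ...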

    Read the article

  • Java Hardware Acceleration

    - by Freezerburn
    I have been spending some time looking into the hardware acceleration features of Java, and I am still a bit confused, as none of the sites that I found online directly and clearly answered some of the questions I have. So here are the questions I have about hardware acceleration in Java:

    1) In Eclipse version 3.6.0, with the most recent Java update for Mac OS X (1.6u10, I think), is hardware acceleration enabled by default? I read somewhere that

        someCanvas.getGraphicsConfiguration().getBufferCapabilities().isPageFlipping()

    is supposed to give an indication of whether or not hardware acceleration is enabled, and my program reports back true when that is run on my main Canvas instance for drawing to. If my hardware acceleration is not enabled now, or by default, what would I have to do to enable it?

    2) I have seen a couple of articles here and there about the difference between a BufferedImage and a VolatileImage, mainly saying that VolatileImage is the hardware-accelerated image and is stored in VRAM for fast copy-from operations. However, I have also found some instances where BufferedImage is said to be hardware accelerated as well. Is BufferedImage hardware accelerated in my environment too? What would be the advantage of using a VolatileImage if both types are hardware accelerated? My main assumption for the advantage of VolatileImage, in the case where both are accelerated, is that VolatileImage is able to detect when its VRAM has been dumped. But if BufferedImage also supports acceleration now, would it not have the same kind of detection built into it as well, just hidden from the user, in case that memory is dumped?

    3) Is there any advantage to using someGraphicsConfiguration.getCompatibleImage/getCompatibleVolatileImage() as opposed to ImageIO.read()? In a tutorial I have been reading for some general concepts about setting up the rendering window properly, it uses the getCompatibleImage method, which I believe returns a BufferedImage, to get "hardware accelerated" images for fast drawing, which ties into question 2 about whether it is hardware accelerated.

    4) This is less about hardware acceleration, but it is something I have been curious about: do I need to order which graphics get drawn? I know that when using OpenGL via C/C++ it is best to make sure that the same graphic is drawn in all the locations it needs to be drawn at once, to reduce the number of times the current texture needs to be switched. From what I have read, it seems as if Java will take care of this for me and make sure things are drawn in the most optimal fashion, but again, nothing has ever said anything like this clearly.

    5) What AWT/Swing classes support hardware acceleration, and which ones should be used? I am currently using a class that extends JFrame to create a window, and adding a Canvas to it from which I create a BufferStrategy. Is this good practice, or is there some other way I should be implementing this?

    Thank you very much for your time, and I hope I provided clear questions and enough information for you to answer my several questions.
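
    As a concrete illustration of questions 1 and 2, a small check like the following can report what the pipeline claims about a given image. It is a sketch: isAccelerated() reflects the current state, which can change as surfaces move in and out of VRAM, so the printed values are hints rather than guarantees.

        import java.awt.GraphicsConfiguration;
        import java.awt.GraphicsEnvironment;
        import java.awt.Transparency;
        import java.awt.image.BufferedImage;
        import java.awt.image.VolatileImage;

        public class AccelCheck {
            public static void main(String[] args) {
                GraphicsConfiguration gc = GraphicsEnvironment
                        .getLocalGraphicsEnvironment()
                        .getDefaultScreenDevice()
                        .getDefaultConfiguration();

                // A "compatible" BufferedImage is managed: it may be cached in VRAM behind the scenes.
                BufferedImage bi = gc.createCompatibleImage(640, 480, Transparency.OPAQUE);
                System.out.println("BufferedImage accelerated: "
                        + bi.getCapabilities(gc).isAccelerated());

                // A VolatileImage lives in VRAM and can report when its surface is lost.
                VolatileImage vi = gc.createCompatibleVolatileImage(640, 480);
                System.out.println("VolatileImage accelerated: "
                        + vi.getCapabilities().isAccelerated());
            }
        }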

    Read the article

  • Unable to verify body hash for DKIM

    - by Joshua
    I'm writing a C# DKIM validator and have come across a problem that I cannot solve. Right now I am working on calculating the body hash, as described in Section 3.7 Computing the Message Hashes. I am working with emails that I have dumped using a modified version of EdgeTransportAsyncLogging sample in the Exchange 2010 Transport Agent SDK. Instead of converting the emails when saving, it just opens a file based on the MessageID and dumps the raw data to disk.

    I am able to successfully compute the body hash of the sample email provided in Section A.2 using the following code:

    SHA256Managed hasher = new SHA256Managed();
    ASCIIEncoding asciiEncoding = new ASCIIEncoding();

    string rawFullMessage = File.ReadAllText(@"C:\Repositories\Sample-A.2.txt");
    string headerDelimiter = "\r\n\r\n";
    int headerEnd = rawFullMessage.IndexOf(headerDelimiter);
    string header = rawFullMessage.Substring(0, headerEnd);
    string body = rawFullMessage.Substring(headerEnd + headerDelimiter.Length);

    byte[] bodyBytes = asciiEncoding.GetBytes(body);
    byte[] bodyHash = hasher.ComputeHash(bodyBytes);
    string bodyBase64 = Convert.ToBase64String(bodyHash);

    string expectedBase64 = "2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=";
    Console.WriteLine("Expected hash: {1}{0}Computed hash: {2}{0}Are equal: {3}",
        Environment.NewLine, expectedBase64, bodyBase64, expectedBase64 == bodyBase64);

    The output from the above code is:

    Expected hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=
    Computed hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8=
    Are equal: True

    Now, most emails come across with the c=relaxed/relaxed setting, which requires you to do some work on the body and header before hashing and verifying. And while I was working on it (failing to get it to work) I finally came across a message with c=simple/simple, which means you process the whole body as is, minus any empty CRLF at the end of the body. (Really, the rules for body canonicalization are quite ... simple.)

    Here is the real DKIM email with a signature using the simple algorithm (with only unneeded headers cleaned up). Now, using the above code and updating the expectedBase64 hash, I get the following results:

    Expected hash: VnGg12/s7xH3BraeN5LiiN+I2Ul/db5/jZYYgt4wEIw=
    Computed hash: ISNNtgnFZxmW6iuey/3Qql5u6nflKPTke4sMXWMxNUw=
    Are equal: False

    The expected hash is the value from the bh= field of the DKIM-Signature header. The file used in the second test is a direct raw output from the Exchange 2010 Transport Agent. If so inclined, you can view the modified EdgeTransportLogging.txt.

    At this point, no matter how I modify the second email, changing the start position or number of CRLF at the end of the file, I cannot get the files to match. What worries me is that I have been unable to validate any body hash so far (simple or relaxed) and that it may not be feasible to process DKIM through Exchange 2010.
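    For what it's worth, the simple body canonicalization in RFC 4871 reduces to stripping every trailing empty line and making sure the body ends with exactly one CRLF (an empty body becomes a single CRLF). A rough sketch of that step, written in Java here purely as language-neutral illustration (the class and method names are mine):

    import java.security.MessageDigest;

    public class SimpleBodyHash {
        // RFC 4871 "simple" body canonicalization: remove all empty lines
        // at the end of the body, then terminate it with a single CRLF.
        static String canonicalize(String body) {
            String canon = body;
            while (canon.endsWith("\r\n\r\n")) {
                canon = canon.substring(0, canon.length() - 2);
            }
            if (!canon.endsWith("\r\n")) {
                canon = canon + "\r\n";
            }
            return canon;
        }

        static byte[] bodyHash(String body) throws Exception {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            return sha256.digest(canonicalize(body).getBytes("US-ASCII"));
        }
    }

    If the computed hash still differs after that step, the usual suspects are bare LF line endings in the dumped file or a header/body split at the wrong offset.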

    Read the article

  • MBR Booting from DOS

    - by eflukx
    For a project I would like to invoke the MBR on the first hard disk directly from DOS. I've written a small assembler program that loads the MBR into memory at 0:7c00h and does a far jump to it. I've put my utility on a bootable floppy. The disk (HD0, 0x80) I'm trying to boot has a TrueCrypt boot loader on it. It shows the TrueCrypt screen, but after typing in the password it crashes the system. When I run my little utility (w00t.com) on a normal WinXP machine it seems to crash immediately. Apparently I'm forgetting some crucial stuff the BIOS normally does; my guess is that it's something trivial. Can someone with better bare-metal DOS and BIOS experience help me out? Here's my code:

    .MODEL tiny
    .386

    _TEXT SEGMENT USE16
    INCLUDE BootDefs.i

    ORG 100h

    start:
        ; http://vxheavens.com/lib/vbw05.html
        ; Before DOS has booted, the BIOS stores the amount of usable lower memory
        ; in a word located at 0:413h in memory. We're going to erase this value because
        ; we have booted DOS before loading the bootsector, and DOS is fat (and ugly).

        ; fake free memory
        ;push ds
        ;push 0
        ;pop ds
        ;mov ax, TC_BOOT_LOADER_SEGMENT / 1024 * 16 + TC_BOOT_MEMORY_REQUIRED
        ;mov word ptr ds:[413h], ax ;ax = memory in K
        ;pop ds
        ;lea si, memory_patched_msg
        ;call print

        ;mov ax, cs
        mov ax, 0
        mov es, ax

        ; read first sector to es:7c00h (== cs:7c00)
        mov dl, 80h
        mov cl, 1
        mov al, 1
        mov bx, 7c00h       ;load sector to es:bx
        call read_sectors

        lea si, mbr_loaded_msg
        call print
        lea si, jmp_to_mbr_msg
        call print

        ;Set BIOS default values in environment
        cli
        mov dl, 80h         ;(drive C)
        xor ax, ax
        mov ds, ax
        mov es, ax
        mov ss, ax
        mov sp, 0ffffh
        sti

        push es
        push 7c00h
        retf                ;Jump to MBR code at 0:7c00h

    ; Print string
    print:
        xor bx, bx
        mov ah, 0eh
        cld
    @@: lodsb
        test al, al
        jz print_end
        int 10h
        jmp @B
    print_end:
        ret

    ; Read sectors of the first cylinder
    read_sectors:
        mov ch, 0           ; Cylinder
        mov dh, 0           ; Head
        ; DL = drive number passed from BIOS
        mov ah, 2
        int 13h
        jnc read_ok
        lea si, disk_error_msg
        call print
    read_ok:
        ret

    memory_patched_msg db 'Memory patched', 13, 10, 7, 0
    mbr_loaded_msg     db 'MBR loaded', 13, 10, 7, 0
    jmp_to_mbr_msg     db 'Jumping to MBR code', 13, 10, 7, 0
    disk_error_msg     db 'Disk error', 13, 10, 7, 0

    _TEXT ENDS
    END start

    Read the article

  • Maven webapp with maven-eclipse-plugin doesn't generate <dependent-module>

    - by chrsk
    I use the eclipse:eclipse goal to generate an Eclipse project environment. The deployment works fine; the goal creates the var classpath entries for all needed dependencies. With m2eclipse there was the Maven container, which defines an export folder (WEB-INF/lib in my case). But I don't want to rely on m2eclipse, so I don't use it anymore.

    The classpath entries generated by the eclipse:eclipse goal don't have such an export folder. While booting the servlet container with WTP, it publishes all resources and classes except the libraries to the context. What's missing to publish the needed libs, or isn't that possible without the m2eclipse integration?

    Environment

    Eclipse 3.5 JEE Galileo
    Apache Maven 2.2.1 (r801777; 2009-08-06 21:16:01+0200)
    Java version: 1.6.0_14
    m2eclipse

    The maven-eclipse-plugin configuration

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-eclipse-plugin</artifactId>
      <version>2.8</version>
      <configuration>
        <projectNameTemplate>someproject-[artifactId]</projectNameTemplate>
        <useProjectReferences>false</useProjectReferences>
        <downloadSources>false</downloadSources>
        <downloadJavadocs>false</downloadJavadocs>
        <wtpmanifest>true</wtpmanifest>
        <wtpversion>2.0</wtpversion>
        <wtpapplicationxml>true</wtpapplicationxml>
        <wtpContextName>someproject-[artifactId]</wtpContextName>
        <additionalProjectFacets>
          <jst.web>2.3</jst.web>
        </additionalProjectFacets>
      </configuration>
    </plugin>

    The generated files

    After executing the eclipse:eclipse goal, the dependent-module is not listed in my generated .settings/org.eclipse.wst.common.component, so on server boot I miss the dependencies. This is what I get:

    <?xml version="1.0" encoding="UTF-8"?>
    <project-modules id="moduleCoreId" project-version="1.5.0">
      <wb-module deploy-name="someproject-core">
        <wb-resource deploy-path="/" source-path="src/main/java"/>
        <wb-resource deploy-path="/" source-path="src/main/webapp"/>
        <wb-resource deploy-path="/" source-path="src/main/resources"/>
      </wb-module>
    </project-modules>
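    For comparison, when the WTP metadata does carry library dependencies, org.eclipse.wst.common.component contains entries of roughly this shape (the jar coordinates below are invented for illustration):

    <dependent-module deploy-path="/WEB-INF/lib"
        handle="module:/classpath/var/M2_REPO/commons-lang/commons-lang/2.4/commons-lang-2.4.jar">
      <dependency-type>uses</dependency-type>
    </dependent-module>

    So the thing to check is why eclipse:eclipse is not emitting entries like these for a wtpversion 2.0 project.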

    Read the article

  • TFS CM resource recommendations / some questions

    - by John
    I am working with a small development shop that consists of a group of 5 developers and 1 QA person. We are using TFS and need to get more sophisticated in how we use this tool.

    Currently the development team checks in their code each evening. A nightly build runs and pushes the output out on a network share. Our QA person uses this build for testing the next day. Sometimes the build off the trunk codebase has issues/bugs that hinder the QA process. It hasn't been a giant issue in the past, but we now want to get to a state where our QA person tests on a stable QA build. So I believe we need to create a branch (call it QA); the developers will continue to develop off the trunk, but the QA person will use builds created from code in the QA branch.

    Seems simple enough, but we have started doing code reviews as well, so we have another requirement: only code that has been code reviewed can be promoted to the QA branch. Each developer works off a TFS item, and when they check in a changeset, they do it against a TFS item, which creates a link between a checked-in code file and a TFS item. Eventually the TFS item becomes complete and ready for code review. All code attached to the TFS item is reviewed. How can the versions of these files get promoted to the QA branch?

    In the QA branch, if a bug is found, we want to fix it in the QA branch and have the changes migrated back to the trunk. I believe TFS has a way to do this automatically, doesn't it?

    Long story short, we want to get to a build and CM environment that I believe is pretty standard, but we are unaware of how to make this happen with TFS. Given our situation above, can someone point out a book or website(s) that would address our specific needs? We would like to make this happen without having to get too deep into CM theory or TFS. I very much appreciate any and all suggestions!

    Thanks, John
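    As a concrete sketch of the branch/merge mechanics being described, the command-line equivalents look roughly like this (server paths are made up; check tf help branch and tf help merge for the exact options in your TFS version):

    tf branch $/Project/Trunk $/Project/QA
    tf checkin /comment:"create QA branch"

    rem after fixing a bug in the QA branch, migrate it back to trunk:
    tf merge /recursive $/Project/QA $/Project/Trunk
    tf checkin /comment:"merge QA fixes back to trunk"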

    Read the article

  • Volunteer for a potential employer?

    - by EoRaptor013
    I've been looking for work since March and haven't had much luck. Recently, however, I interviewed with a small company near my home for a C#, .NET, SQL development position. I hit it off very well with the hiring manager during the phone screen, and even more so during the face-to-face. Unfortunately, I failed the practical test: wiring up a web form, creating a couple of SQL stored procedures, saving new data with validation, and creating a minimal search screen. I knew what I was doing, but I was too slow to meet their standards, as all the work needed to be done within an hour. Nevertheless, I really liked the place, the environment, the people I would have been working with, and the boss. (I gave the company an 11 on Joel's 12-point scale.)

    So, the obvious next step was to scrape the rust off. I've been trying to create little projects for myself, but I don't know that I've been effective in getting any faster. What with all that goes into creating a project, I'm not heads-down coding as much as I think I need to be.

    Now, with all that introduction, here's the question. I have been thinking about calling the hiring manager at that company and asking him to let me volunteer for three or four weeks, with no strings attached. I think it would benefit me, and it wouldn't cost him anything (as long as I didn't slow the existing people down!). At the end of that period he might, or might not, be inclined to hire me, but even if not, I would have had as much as 160 hours of in-the-trenches development. Maybe not all shiny, but no more rust, I would think.

    Does this plan make any sense at all? I certainly don't want to sound desperate (although I'm not far from being there), and I very much need the tune-up, lube, and oil change. What's the downside, if any, to me doing this? Do any of you see red flags going up, either from the perspective of the hiring manager or from the perspective of a developer?

    Read the article

  • A simple Python deployment problem - a whole world of pain

    - by Evgeny
    We have several Python 2.6 applications running on Linux. Some of them are Pylons web applications; others are simply long-running processes that we run from the command line using nohup. We're also using virtualenv, both in development and in production. What is the best way to deploy these applications to a production server?

    In development we simply get the source tree into any directory, set up a virtualenv, and run. Easy enough. We could do the same in production, and perhaps that really is the most practical solution, but it just feels a bit wrong to run svn update in production. We've also tried fab, but it just never works the first time; for every application something else goes wrong. It strikes me that the whole process is just too hard, given that what we're trying to achieve is fundamentally very simple. Here's what we want from a deployment process:

    - We should be able to run one simple command to deploy an updated version of an application. (If the initial deployment involves a bit of extra complexity, that's fine.)
    - When we run this command it should copy certain files, either out of a Subversion repository or out of a local working copy, to a specified "environment" on the server, which probably means a different virtualenv. We have both staging and production versions of the applications on the same server, so they need to somehow be kept separate. If it installs into site-packages, that's fine too, as long as it works.
    - We have some configuration files on the server that should be preserved (i.e. not overwritten or deleted by the deployment process).
    - Some of these applications import modules from other applications, so they need to be able to reference each other as packages somehow. This is the part we've had the most trouble with! I don't care whether it works via relative imports, site-packages, or whatever, as long as it works reliably in both development and production.
    - Ideally the deployment process should automatically install external packages that our applications depend on (e.g. psycopg2).

    That's really it! How hard can it be? (A bare-bones sketch of such a process follows below.)
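    To make the "one simple command" concrete, the bare-bones version of what is being asked for fits in a handful of commands per environment; a sketch under the assumption that each application ships a setup.py and a pip requirements file (all paths and URLs invented):

    # one-time setup per environment
    virtualenv /srv/myapp/staging-env

    # each deploy
    svn export http://svn.example.com/myapp/trunk /tmp/myapp-build
    /srv/myapp/staging-env/bin/pip install -r /tmp/myapp-build/requirements.txt
    /srv/myapp/staging-env/bin/python setup.py install    # run from /tmp/myapp-build

    Cross-application imports then reduce to listing the sibling applications in each requirements.txt, so pip installs them into the same virtualenv's site-packages.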

    Read the article

  • physics game programming box2d - orientating a turret-like object using torques

    - by egarcia
    This is a problem I hit when trying to implement a game using the LÖVE engine, which covers Box2D with Lua scripting. The objective is simple: a turret-like object (seen from the top, in a 2D environment) needs to orient itself so it points at a target.

    The turret is at the x,y coordinates, and the target is at tx,ty. We can consider x,y fixed, but tx,ty tend to vary from one instant to the next (i.e. they would be the mouse cursor). The turret has a rotor that can apply a rotational force (torque) at any given moment, clockwise or counter-clockwise. The magnitude of that force has an upper limit called maxTorque. The turret also has a certain rotational inertia, which acts for angular movement the same way mass acts for linear movement. There's no friction of any kind, so the turret will keep spinning if it has an angular velocity.

    The turret has a small AI function that re-evaluates its orientation to verify that it points in the right direction, and activates the rotor. This happens every dt (~60 times per second). It looks like this right now:

    function Turret:update(dt)
      local x,y = self:getPosition()
      local tx,ty = self:getTarget()
      local maxTorque = self:getMaxTorque() -- max force of the turret rotor
      local inertia = self:getInertia()     -- the rotational inertia
      local w = self:getAngularVelocity()   -- current angular velocity of the turret
      local angle = self:getAngle()         -- the angle the turret is facing currently

      -- the angle of the line that links the turret center with the target
      local targetAngle = math.atan2(ty-y, tx-x)

      local differenceAngle = _normalizeAngle(targetAngle - angle)

      if (differenceAngle <= math.pi) then -- counter-clockwise is the shortest path
        self:applyTorque(maxTorque)
      else                                 -- clockwise is the shortest path
        self:applyTorque(-maxTorque)
      end
    end

    ... and it fails. Let me explain with an illustrative situation: the turret "oscillates" around the targetAngle. If the target is right behind the turret, just a little clockwise, the turret will start applying clockwise torques and keep applying them until the instant it passes the target angle. At that moment it will start applying torques in the opposite direction. But it will have gained a significant angular velocity, so it will keep going clockwise for some time ... until the target is "just behind, but a bit counter-clockwise". And it will start again. So the turret will oscillate or even go round in circles.

    I think my turret should start applying torques in the "opposite direction of the shortest path" before it reaches the target angle (like a car braking before stopping). Intuitively, I think the turret should start applying torques in the opposite direction of the shortest path "when it is about half-way to the target objective". My intuition tells me it has something to do with the angular velocity. And then there's the fact that the target is mobile: I don't know whether I should take that into account somehow or just ignore it.

    How do I calculate when the turret must "start braking"?
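    One way to formalize the "brake before you get there" intuition: with a maximum angular acceleration a = maxTorque / inertia, a turret spinning at angular velocity w needs w^2 / (2a) radians to come to a stop, so it should brake as soon as the remaining angle is no bigger than that. A minimal sketch of such a controller (plain Java used here as neutral pseudocode; turretTorque is a made-up helper, and the moving target is handled simply by recomputing the difference every frame):

    /** Returns the torque to apply this frame. Angles are in radians,
     *  positive = counter-clockwise; assumes no friction, as in the question. */
    static double turretTorque(double angle, double w,
                               double targetAngle, double maxTorque, double inertia) {
        double a = maxTorque / inertia;                 // max angular acceleration
        // shortest signed difference, normalized to (-pi, pi]
        double diff = Math.atan2(Math.sin(targetAngle - angle),
                                 Math.cos(targetAngle - angle));
        double stopAngle = (w * w) / (2.0 * a);         // angle needed to brake to w = 0

        // Spinning toward the target but unable to stop within the remaining
        // angle: brake at full torque.
        if (w * diff > 0 && Math.abs(diff) <= stopAngle) {
            return -Math.signum(w) * maxTorque;
        }
        // Otherwise push at full torque along the shortest path.
        return Math.signum(diff) * maxTorque;
    }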

    Read the article

< Previous Page | 570 571 572 573 574 575 576 577 578 579 580 581  | Next Page >