Search Results

Search found 30312 results on 1213 pages for 'tactial test team'.

  • gcc precompiled headers weird behaviour with -c option

    - by pachanga
    Folks, I'm using gcc-4.4.1 on Linux and, before trying precompiled headers in a really large project, I decided to test them on a simple program. They "kinda work", but I'm not happy with the results and I'm sure there is something wrong with my setup. First of all, I wrote a simple program (main.cpp) to test whether they work at all:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <boost/type_traits.hpp>

        int main() { return 0; }

    Then I created the precompiled header file pre.h (in the same directory) as follows:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <boost/type_traits.hpp>

    ...and compiled it:

        $ g++ -I. pre.h    (pre.h.gch was created)

    After that I measured compile time with and without the precompiled header:

        # with pch
        $ time g++ -I. -include pre.h main.cpp
        real 0m0.128s  user 0m0.088s  sys 0m0.048s

        # without pch
        $ time g++ -I. main.cpp
        real 0m0.838s  user 0m0.784s  sys 0m0.056s

    So far so good! Almost 7 times faster, that's impressive! Now let's try something more realistic. All my sources are built with the -c option, and for some reason I can't make pch play nicely with it. You can reproduce this with the steps below. I created the test module foo.cpp as follows:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <boost/type_traits.hpp>

        int whatever() { return 0; }

    Here are the timings of my attempts to build foo.cpp with and without pch:

        # with pch
        $ time g++ -I. -include pre.h -c foo.cpp
        real 0m0.357s  user 0m0.348s  sys 0m0.012s

        # without pch
        $ time g++ -I. -c foo.cpp
        real 0m0.330s  user 0m0.292s  sys 0m0.044s

    That's quite strange; it looks like there is no speedup at all! (I ran the timings several times.) It turned out the precompiled header was not used at all in this case; I checked with the -H option (the output of "g++ -I. -include pre.h -c foo.cpp -H" didn't list pre.h.gch at all). What am I doing wrong?
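
    A quick way to make gcc explain itself here is -Winvalid-pch, which warns when a .gch file is found but cannot be used; gcc only consumes a precompiled header when the compilation switches match the ones it was built with, so it is worth rebuilding pre.h.gch with exactly the flags the real compile uses. A minimal check (the flags are real gcc options; the exact diagnostic text varies by version):

        # rebuild the PCH with the same switches as the -c compile
        $ g++ -I. pre.h

        # -H prefixes a used PCH with '!'; -Winvalid-pch says why one was rejected
        $ g++ -Winvalid-pch -H -I. -include pre.h -c foo.cpp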

  • Diophantine Equation

    - by swapnika
    Write an iterative program that finds the largest number of McNuggets that cannot be bought in exact quantity. Your program should print the answer in the following format (where the correct number is provided in place of n): "Largest number of McNuggets that cannot be bought in exact quantity: n"

    Hints: Hypothesize possible instances of numbers of McNuggets that cannot be purchased exactly, starting with 1. For each possible instance, called n:
    1. Test if there exist non-negative integers a, b, and c such that 6a + 9b + 20c = n. (This can be done by looking at all feasible combinations of a, b, and c.)
    2. If not, n cannot be bought in exact quantity, so save n.
    When you have found six consecutive values of n that in fact pass the test of having an exact solution, the last answer that was saved (not the last value of n that had a solution) is the correct answer, since you know by the theorem that any larger amount can also be bought in exact quantity.
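
    The hint describes a straightforward brute-force search; one way to lay it out (a sketch in Python, with variable names invented and loop bounds just large enough to cover each candidate n):

        def can_buy(n):
            # test whether 6a + 9b + 20c == n for some non-negative a, b, c
            return any(6*a + 9*b + 20*c == n
                       for a in range(n // 6 + 1)
                       for b in range(n // 9 + 1)
                       for c in range(n // 20 + 1))

        largest = 0   # last n that could NOT be bought exactly
        streak = 0    # consecutive buyable values seen so far
        n = 0
        while streak < 6:
            n += 1
            if can_buy(n):
                streak += 1
            else:
                streak = 0
                largest = n

        print("Largest number of McNuggets that cannot be bought "
              "in exact quantity: %d" % largest)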

  • Disable selected automated tests at runtime

    - by squig
    Is it possible to disable selected automated tests at runtime? I'm using VSTS and Rhino Mocks and have some integration tests that require an external dependency (MQ) to be installed. Not all the developers on my team have this installed. Currently, all the tests that require MQ inherit from a base class that checks if MQ is installed and, if it is not, sets the test result to inconclusive. This works in that it stops the tests from running, but it marks the test run as unsuccessful and can hide other failures. Any ideas?

  • How to register a service with Mono.ZeroConf?

    - by pablo
    Hi, I'm trying to test the ZeroConf sample at http://www.mono-project.com/Mono.Zeroconf. I'm running OpenSuse 11 and Mono 2.2. My server code is:

        using System;
        using Mono.Zeroconf;

        namespace zeroconftestserver
        {
            class MainClass
            {
                public static void Main(string[] args)
                {
                    RegisterService service = new RegisterService();
                    service.Name = "test server";
                    service.RegType = "_daap._tcp";
                    service.ReplyDomain = "local.";
                    service.Port = 6060;

                    // TxtRecords are optional
                    TxtRecord txt_record = new TxtRecord();
                    txt_record.Add("Password", "false");
                    service.TxtRecord = txt_record;

                    service.Register();

                    Console.WriteLine("Service registered!");
                    Console.ReadLine();
                }
            }
        }

    But I can't find my registered service with the sample client browser code nor with mzclient. Thanks!
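
    On Linux, Mono.Zeroconf typically goes through Avahi, so a useful first step is to rule Mono out entirely: make sure avahi-daemon is running and see whether the service shows up with Avahi's own browser (a sketch; package names vary by distribution):

        # is the mDNS daemon up at all?
        $ ps aux | grep avahi-daemon

        # browse everything on the LAN, resolving as it goes (from avahi-utils)
        $ avahi-browse -a -r

    If the service is visible there but not to mzclient, the problem is on the Mono.Zeroconf side; if it is not visible there either, the registration never reached the daemon.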

  • point light illumination using Phong model

    - by Myx
    Hello: I wish to render a scene that contains one box and a point light source using the Phong illumination scheme. The following are the relevant code snippets for my calculation:

        R3Rgb Phong(R3Scene *scene, R3Ray *ray, R3Intersection *intersection)
        {
          R3Rgb radiance;
          if(intersection->hit == 0)
          {
            radiance = scene->background;
            return radiance;
          }

          // obtain ambient term ...    (this is zero for my test)
          // obtain emissive term ...   (this is also zero for my test)

          // for each light in the scene, calculate the diffuse and specular terms
          R3Rgb intensity_diffuse(0,0,0,1);
          R3Rgb intensity_specular(0,0,0,1);
          for(unsigned int i = 0; i < scene->lights.size(); i++)
          {
            R3Light *light = scene->Light(i);
            R3Rgb light_color = LightIntensity(scene->Light(i), intersection->position);
            R3Vector light_vector = -LightDirection(scene->Light(i), intersection->position);

            // check if the light is "behind" the surface normal
            if(normal.Dot(light_vector) <= 0)
              continue;

            // calculate diffuse reflection
            if(!Kd.IsBlack())
              intensity_diffuse += Kd*normal.Dot(light_vector)*light_color;

            if(Ks.IsBlack())
              continue;

            // calculate specular reflection
            ... // this I believe to be irrelevant for the particular test I'm doing
          }

          radiance = intensity_diffuse;
          return radiance;
        }

        R3Rgb LightIntensity(R3Light *light, R3Point position)
        {
          R3Rgb light_intensity;
          double distance;
          double denominator;

          if(light->type != R3_DIRECTIONAL_LIGHT)
          {
            distance = (position - light->position).Length();
            denominator = light->constant_attenuation +
                          (light->linear_attenuation*distance) +
                          (light->quadratic_attenuation*distance*distance);
          }

          switch(light->type)
          {
            ...
            case R3_POINT_LIGHT:
              light_intensity = light->color/denominator;
              break;
            ...
          }
          return light_intensity;
        }

        R3Vector LightDirection(R3Light *light, R3Point position)
        {
          R3Vector light_direction;
          switch(light->type)
          {
            ...
            case R3_POINT_LIGHT:
              light_direction = position - light->position;
              break;
            ...
          }
          light_direction.Normalize();
          return light_direction;
        }

    I believe the error must be somewhere in either the LightDirection(...) or LightIntensity(...) function, because when I run my code using a directional light source I obtain the desired rendered image (which leads me to believe that the Phong illumination equation itself is correct). Also, while debugging the intensity_diffuse computation in Phong(...), when I divided light_color by 10 I obtained a resulting image that looked more like what I need. Am I calculating light_color correctly? Thanks.
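
    For reference, the attenuated diffuse term this code is trying to compute for a point light (written with the question's attenuation coefficients) is

        I_diffuse = k_d (N . L) I_light / (a_c + a_l d + a_q d^2)

    where d is the distance from the surface point to the light. The fact that dividing light_color by a constant makes the image look right suggests the denominator is too small for this scene, so it is worth checking what constant/linear/quadratic attenuation values the scene file actually supplies: with a_c = 1 and a_l = a_q = 0, a point light never attenuates at all.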

  • Trying to extend ActionView::Helpers::FormBuilder

    - by nibbo
    Hello, I am trying to DRY up some code by moving some logic into the FormBuilder. After reading the documentation about how to select an alternative form builder, the logical solution for me seemed to be something like this. In the view:

        <% form_for @event, :builder => TestFormBuilder do |f| %>
          <%= f.test %>
          <%= f.submit 'Update' %>
        <% end %>

    and then in app/helpers/application_helper.rb:

        module ApplicationHelper
          class TestFormBuilder < ActionView::Helpers::FormBuilder
            def test
              puts 'apa'
            end
          end
        end

    This, however, gives me an error at the form_for call:

        uninitialized constant ActionView::Base::CompiledTemplates::TestFormBuilder

    What am I doing wrong?
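
    The error message itself is the clue: the template resolves the bare constant TestFormBuilder relative to its own compiled-template scope, not relative to ApplicationHelper (mixing a module into the view does not make its nested constants visible). Referring to the class by its fully qualified name should work:

        <% form_for @event, :builder => ApplicationHelper::TestFormBuilder do |f| %>
          <%= f.test %>
          <%= f.submit 'Update' %>
        <% end %>

    Alternatively, define TestFormBuilder at the top level (for example in its own file under lib/) rather than nesting it inside the helper module.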

  • [Ruby] OpenSSL verify certificate from own CA

    - by sardaukar
    Hello all, and thanks for your time reading this. I need to verify certificates issued by my own CA, for which I have a certificate. How can I do the equivalent of openssl's "openssl verify -CAfile" in Ruby code? The RDoc for OpenSSL is not very helpful in this regard. I've tried:

        require 'openssl'

        ca  = OpenSSL::X509::Certificate.new(File.read('ca-cert.pem'))
        lic = OpenSSL::X509::Certificate.new(File.read('cert.pem'))

        puts lic.verify( ca )

    but I get:

        test.rb:7:in `verify': wrong argument (OpenSSL::X509::Certificate)! (Expected kind of OpenSSL::PKey::PKey) (TypeError)
            from test.rb:7

    I can't even find "verify" in the OpenSSL RDoc at http://www.ruby-doc.org/stdlib/libdoc/openssl/rdoc/index.html. Any help is appreciated. Thanks again!
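
    As the TypeError says, Certificate#verify wants a key, not a certificate; passing the CA's public key checks the signature:

        puts lic.verify(ca.public_key)   # true if cert.pem was signed by ca-cert.pem

    For something closer to "openssl verify -CAfile" (which also checks validity dates and can walk a chain), an X509::Store does the job:

        store = OpenSSL::X509::Store.new
        store.add_cert(ca)
        puts store.verify(lic)           # store.error / store.error_string explain a failure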

  • MSVC 2008 - Unresolved External errors with LIB but only with DLL, not with EXE project

    - by Robert Oschler
    I have a DLL that I am trying to link with a libjpeg LIB using MSVC 2008, and the link generates Unresolved External Symbol errors for the libjpeg functions. I also have a test project that links with the exact same libjpeg library file, and it links without error and runs fine too. I have triple-checked my LIB path and dependent LIBs list settings, and literally copied and pasted them from the EXE project to the DLL project. I still get the errors. I do have the libjpeg include headers surrounded by extern "C", so it is not a C++ name-mangling issue, and the unresolved external errors show the "missing" libjpeg functions as undecorated (just a leading underscore and the @ sign parameter byte count suffix after each name). What could make the linker with the DLL project be unable to find the functions properly when the test EXE project has no trouble at all? I'm using the pre-compiled 32-bit static multi-threaded debug library which I downloaded from ClanLib. Thanks, Robert
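
    One hedged observation: _name@N is not actually undecorated. The underscore plus @byte-count suffix is the __stdcall decoration, while a __cdecl symbol (which is how libjpeg static libraries are normally built) would be plain _name. That pattern usually means the DLL project is compiling the libjpeg declarations with a different default calling convention than the EXE project, e.g. /Gz (__stdcall default) instead of /Gd (__cdecl default), or a preprocessor define that changes how the headers expand. Comparing the two projects' C/C++ -> Advanced -> Calling Convention setting, and any extra preprocessor definitions, would be my first step.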

  • convert ArrayList.toString() back to ArrayList in one call

    - by dotnetnewbie
    I have a toString() representation of an ArrayList. Copying the toString() value to the clipboard, I want to paste it back into my IDE editor and create the ArrayList instance in one line. In fact, what I'm really doing is this: my ArrayList.toString() has data I need to set up a unit test; I want to copy this ArrayList.toString() into my editor to build a test against this edge case; and I don't want to parse anything by hand. My input looks like this:

        [15.82, 15.870000000000001, 15.92, 16.32, 16.32, 16.32, 16.32, 17.05, 17.05, 17.05, 17.05, 18.29, 18.29, 19.16]

    The following do not work: Arrays.asList(), Google Collections' Lists.newArrayList(). Suggestions?
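
    Arrays.asList() and Lists.newArrayList() take elements, not a string, which is why they don't help here. Since the toString() form is just "[v1, v2, ...]", stripping the brackets and splitting on the commas recovers the values; a sketch, assuming the elements are doubles and contain no commas themselves:

        import java.util.ArrayList;
        import java.util.List;

        String input = "[15.82, 15.870000000000001, 15.92, 16.32]";
        List<Double> list = new ArrayList<Double>();
        for (String s : input.substring(1, input.length() - 1).split(",\\s*")) {
            list.add(Double.parseDouble(s));
        }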

  • CSS class not having an effect on a div

    - by ETFairfax
    Hi Peeps, the following is a section of my CSS file plus some HTML. Can anyone tell me why, when I put class="containerHeader selected" (as on Test Header A), the background colour is not being set to red? Cheers, ET Fairfax.

        div#builderContainer
        {
          margin-top: 15px;
          width: 390px;
          height: 700px;
          border: solid 0px #CCCCCC;
          background-repeat: no-repeat;
        }

        div#builderContainer .container
        {
          display: none;
          -moz-border-radius: 4px;
          -webkit-border-radius: 4px;
          border-radius: 4px; /* Corner radius */
          border: solid 1px #999999;
        }

        div#builderContainer .container div:hover
        {
          background-color: #EEEEEE;
        }

        div#builderContainer .containerHeader
        {
          -moz-border-radius: 4px;
          -webkit-border-radius: 4px;
          border-radius: 4px;
          background: #93c3cd url(images/ui-bg_diagonals-small_50_93c3cd_40x40.png) 50% 50% repeat;
          border-bottom: solid 0px #999999;
          margin: 0px;
          margin-top: 25px;
          padding: 10px;
          /* display: none; */
          border: solid 1px #999999;
          font-weight: bold;
          font-family: Verdana;
          background-color: #FFF;
          cursor: pointer;
          vertical-align: middle;
        }

        div#builderContainer .containerHeader:hover
        {
          background: #ccd232 url(images/ui-bg_diagonals-small_75_ccd232_40x40.png) 50% 50% repeat;
        }

        div#builderContainer .containerHeader:active
        {
          background: #db4865 url(images/ui-bg_diagonals-small_40_db4865_40x40.png) 50% 50% repeat;
        }

        div#builderContainer .containerHeader .selected
        {
          background-color: Red;
        }

    The HTML:

        <div id="builderContainer">
          <div class="containerHeader selected" id="CHA">Test Header A</div>
          <div class="container" id="CA"></div>
          <div class="containerHeader" id="CHB">Test Header B</div>
          <div class="container" id="CB"></div>
        </div>
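
    The space before .selected is the problem: "div#builderContainer .containerHeader .selected" is a descendant selector, matching an element with class "selected" nested inside a .containerHeader. To match one element carrying both classes, chain the class selectors with no space:

        div#builderContainer .containerHeader.selected
        {
          background: Red; /* the shorthand also clears the tiled background image */
        }

    Using the background shorthand rather than background-color matters here, because the .containerHeader rule sets a tiled background image that would otherwise still paint over the red colour.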

  • Microsoft's new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year. Microsoft literally buys computers by the truckload. From what I understand, it's a typical practice amongst large software vendors. You plug a few wires in, you test it, and you instantly have mega tera tera flops (don't hold me to that number). Microsoft has been trying to plug away at their cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly.

    With this in mind, it doesn't surprise me that I was recently sent an executive email concerning Microsoft's new technical computing initiative. I find it to be a great marketing idea with actual substance behind their real work. From the programmer academic perspective, in college we dreamed about this type of processing power. This has decades of computer science theory behind it.

    A copy of the email received (note that I almost deleted this email, thinking it was spam due to its length):

We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves. 
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high- performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed-reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends. 
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

  • SQL Cruise Alaska 2011

    - by Grant Fritchey
    I had the extreme good fortune to get sent on the last SQL Cruise to Alaska. I love my job. In case you don't know what this is, SQL Cruise is a trip on a cruise ship during which you get to attend classes while on the boat, learning all about SQL Server and related topics as well as network with the instructors and the other Cruisers. Frankly, it's amazing. Classes ran from Monday, 5/30, to Saturday, 6/4. The networking was constant, between classes, at night on the cruise ship, out on excursions in Alaskan rainforests and while snorkeling in ocean waters. Here's a rundown of the experience from my point of view.

Because I couldn't travel out 2 days early, I missed the BBQ that occurred the day before the cruise when many of the Cruisers received their swag bags. Some of that swag came from Red Gate. I researched what was useful on a cruise like this and purchased small flashlights and binoculars for all the Cruisers. The flashlights were because, depending on your cabin, ships can be very dark. The binoculars were so that the cruisers could watch all the beautiful landscape as it flowed by. I would have liked to have been there when the bags were opened, but I heard from several people that they appreciated the gifts.

Cruisers "In" the hot tub. Pictured: Marjory Woody, Michele Grondin, Kyle Brandt, Grant Fritchey, John Halunen

Sunday I went to board the ship with my wife. We had a bit of an adventure because I messed up our documents. It all worked out and we got on board to meet up at the back of the boat at one of the outdoor bars with the other Cruisers, thanks to tweets letting everyone know where to go. That was the end of electronic coordination on the trip (connectivity in Alaska was horrible for everyone except AT&T). The Cruisers were a great bunch of people and it was a real honor to meet them and get to spend time with them. After everyone settled into their cabins, our very first activity was a contest, sponsored by Red Gate. The Cruisers, in an effort to get to know each other and the ship, were required to go all over taking various photographs, some of them hilarious. The winning team of three would all win prizes. Some of the significant others helped out and I tagged along with a team that tied for first but lost the coin toss. The winning team consisted of Christina Leo (blog|twitter), Ryan Malcom (twitter), Neil Hambly (blog|twitter). They then had to do math and identify the cabin with the lowest prime number, oh, and get a picture of it and be the first to get back up to the bar where we were waiting. Christina came in first and very happily carried home an Ipad2. Ryan won a 1TB portable hard drive and Neil won a wireless mouse (picture below, note my special SQL Server Central Friday Shirt. Thanks Steve (blog|twitter)).

Winners: Christina Leo, Neil Hambly, Ryan Malcolm. Just Lucky: Grant Fritchey

Monday morning classes started. Buck Woody (blog|twitter) was a special guest speaker on this cruise. His theme was "Three C's on the High Seas: Career, Communication and Cloud." The first session was all on Career. I'm not going to type out all my notes from the session, but let's just say, if you get the chance to hear Buck talk about how to manage your career, I suggest you attend. I have a ton of blog posts that I'll be putting together over the next several months (yes, months) both here and over on ScaryDBA. I also have a bunch of work I'm going to be doing to get my career performance bumped up a notch or two (and let's face it, that won't be easy). 
Later on Monday, Tim Ford (blog|twitter) did a session on DMOs. Specifically, the session was on Tim's Periodic Table of DMOs that he has put together, and how to use some of the more interesting DMOs in your day-to-day job. It was a great session, packed with good information. Next, Brent Ozar (blog|twitter) did a session on how to monitor and guide SAN configuration for the DBA that doesn't have access to the SAN. That was some seriously useful information.

Tuesday morning we only had a single class. Kendra Little (blog|twitter) taught us all about "No Lock for Yes Fun". It was all about the different transaction isolation levels and how they work. There is so often confusion in this area and Kendra does a great job in clarifying the information. Also, she tosses in her excellent drawings to liven up the presentation. Then it was excursion time in Juneau. My wife and I, along with several other Cruisers, took a hike up around the Mendenhall Glacier. It was absolutely beautiful weather and walking through the Alaskan rain forest was a treat. Our guide, Jason, was a great guy and it was a good day of hiking.

Wednesday was an all-day excursion in Skagway. My wife and I took the "Ghost and Good Time Girls" walking tour that ended up at a bar that used to be a brothel, the Red Onion. It was a great history of the town. We went back out and hit a few museums and exhibits. We also hiked up the side of the mountain to see the Dewey Lake and some great views of the town. Finally we hiked out to the far side of town to see the Gold Rush cemetery. Hiking done, we went back to the boat and had a quiet dinner on our own.

Thursday we cruised through Glacier Bay and saw at least four different glaciers, including the Marjory Glacier, which we sat next to for about an hour. It was amazing. Then it got better. We went into class with Buck again, this time to talk about Communication. Again, I've got pages of notes that I'm going to be referring back to for some time to come. This was an excellent opportunity to learn.

Snorkelers: Nicole Bertrand, Aaron Bertrand, Grant Fritchey, Neil Hambly, Christina Leo, John Robel, Yanni Robel, Tim Ford

Friday we pulled into Ketchikan. A bunch of us went snorkeling. Yes, snorkeling. Yes, in Alaska. Yes, snorkeling in the ocean in Alaska. It was fantastic. They had us put on 7mm thick wet suits (an adventure all by itself) so it was basically warm the entire time we were in the water (except for the occasional squirt of cold water down my back). Before we got in the water a bald eagle flew up and landed about 15 feet in front of us, which was just an incredible event. Then our guide pointed out about 14 other eagles in the area, hanging out in the trees. Wow! The water was pretty clear and there was a ton of things to see. That was absolutely a blast. Back on the boat I presented a session called Execution Plans: The Deep Dive (note the nautical theme). It seemed to go over well and I had several good questions come out of the session that will lead to new blog posts. After I presented, it was Aaron Bertrand's (blog|twitter) turn. He did a session on "What's New in Denali" that provided a lot of great information. He was able to incorporate new things straight out of Tech-Ed, so this was expanded beyond his usual presentation. The man really knows what he's talking about and communicates it well.

Saturday we were travelling so there was time for a bunch of classes. 
Jeremiah Peschka (blog|twitter) did a great overview of some of the NoSQL databases and what they should be used for. The session was called "The Database is Dead" but it was really about how there are specific uses for these databases that SQL Server doesn't fill, but also that these databases can't replace SQL Server in other areas. Again, good material. Brent Ozar presented again with a session on Defensive Indexing. It was an overview of how indexes work and a deep dive into how to apply them appropriately in your databases to better support access. A good session, as you would expect. Then we pulled into Victoria, BC, in Canada and had a nice dinner with several of the Cruisers, including Denny Cherry (blog|twitter). After that it was back to Seattle on Sunday. By the way, the Science Fiction Museum in Seattle isn't a Science Fiction Museum any more. I was very disappointed to discover this. Overall, it was a great experience. I'm extremely appreciative of Red Gate for sending me and for Tim, Brent, Kendra and Jeremiah for having me. The other Cruisers were all amazing people and it was an honor & privilege to meet them and spend time with them. While this was a seriously fun time, it was also a very serious training opportunity with solid information coming from seasoned industry pros.

  • How to set a .PNG image as a TILED background image for my WPF Form?

    - by John McClane
    I'm learning WPF on my own and I can't seem to find a way to make this work. Here's my code:

        <Window x:Class="Test.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Test" Height="600" Width="800">
          <DockPanel>
            <Menu DockPanel.Dock="Right" Height="30" VerticalAlignment="Top"
                  Background="#2E404B" BorderThickness="2.6">
              <Menu.BitmapEffect>
                <DropShadowBitmapEffect Direction="270" ShadowDepth="3" Color="#2B3841"/>
              </Menu.BitmapEffect>
            </Menu>
          </DockPanel>
        </Window>

    How can I make a .BackgroundImage thing appear? :D
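
    WPF has no WinForms-style BackgroundImage property; a tiled background is done by painting the window's Background with an ImageBrush. A sketch, assuming a bg.png added to the project as a resource and a 32x32 tile (this goes just inside the <Window> element):

        <Window.Background>
          <ImageBrush ImageSource="bg.png" TileMode="Tile"
                      Viewport="0,0,32,32" ViewportUnits="Absolute" />
        </Window.Background>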

  • Can we generate multiple coverage reports using Hudson Emma plugin

    - by Subhashish
    We run both unit (junit) and system (fit) tests on instrumented code in our build. The consolidated coverage report for both is generated as part of the build itself. We then feed the unit test coverage report to the Hudson Emma plugin, configure benchmark numbers and things work nicely. Is it possible to also feed in the system test coverage report separately to the same plugin so that we can get that report and configure benchmarks for that as well? I know there is a workaround of creating a downstream project for the latter activity but it would be good to be able to do both in the same build.

  • Model Based Testing Strategies

    - by Doubt
    What strategies have you used with Model Based Testing? Do you use it exclusively for integration testing, or branch it out to other areas (unit/functional/system/spec verification)? Do you build focused "sealed" models or do you evolve complex omnibus models over time? When in the product cycle do you invest in creating MBTs? What sort of base test libraries do you exclusively create for MBTs? What difference do you make in your functional base test libraries to better support MBTs?

  • jquery selector problem with script tags

    - by Tauren
    I'm attempting to select all <script type="text/html"> tags in a page. I use <script> tags to store HTML templates, similar to how John Resig does it. For some reason, the following jQuery selector doesn't seem to select anything:

        $("script[type*=html]").each(function() {
          alert("Found script " + this.id);
        });

    This markup is in the BODY of the HTML document:

        <body>
          <script id="filter-search" type="text/html">
            <dt>Search</dt>
            <dd><input type="text"/></dd>
          </script>
        </body>

    I've also tried putting it into the HEAD of the HTML document, and it is still not found. No alert is ever shown. If I instead change my code to this:

        $("script[type*=javascript]").each(function() {
          alert("Found script " + this.id);
        });

    then it finds only the scripts in the HEAD that have a src to an external file. Scripts in the actual page are not found. For instance, with the following in HEAD:

        <head>
          <script type="text/javascript" src="jquery.js" id="jquery"></script>
          <script type="text/javascript" src="jquery-ui.js" id="ui"></script>
          <script type="text/javascript" id="custom">
            $(document).ready(function() {
              $("script[type*=javascript]").each(function() {
                alert("Found script " + this.id);
              });
              $("script[type*=html]").each(function() {
                alert("Found TEMPLATE script " + this.id);
              });
            });
          </script>
          <script id="filter-test" type="text/html">
            <dt>Test</dt>
          </script>
        </head>
        <body>
          <script id="filter-search" type="text/html">
            <dt>Search</dt>
            <dd><input type="text"/></dd>
          </script>
        </body>

    I get the following alerts: "Found script jquery", "Found script ui". The custom and filter-test scripts in the HEAD are not selected, nor is the filter-search script in the body tag. Is this the expected behavior? Why does this not work? I can work around it, but it is annoying that it doesn't work.
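
    Two things worth trying first (offered as guesses, since the symptoms point at the selector engine rather than the markup): quote the attribute value, which is where older Sizzle versions were most fragile, and match the full type string exactly:

        $("script[type='text/html']").each(function() {
          alert("Found TEMPLATE script " + this.id);
        });

    If a plain $("script") doesn't return the template blocks either, then the elements are not in the DOM jQuery sees at that point, which would point at something rewriting or stripping the script tags rather than at the selector.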

  • Having two ODP.NET (ODAC) versions in the same server

    - by vizcaynot
    Hello: Some months ago, a colleague of mine installed ODAC 11.106.21 on a server using XCOPY, and he then developed many applications that use this client without problems (on test and production Windows servers). Last week, I developed an application under ODAC 11.1.07.20. When I asked him to install this new ODAC version using XCOPY in a different folder and then include my application on the test server, he answered that I should use ODAC 11.106.21 because he could have trouble with his applications. So I would like to know:

    1) Is it really possible to have two different ODAC versions on one server?
    2) If the answer is positive, how can I firmly assure my colleague that he will not have trouble with his applications?
    3) If the answer is positive, is it necessary to do some kind of configuration on the server?

    Thanks!!

  • jQuery, PHP, AJAX, "tu" variable being posted for no reason, shows in var_dump()

    - by Mattis
    A jQuery AJAX request .post()s data to page.php, which creates $res and var_dump()s it. $res:

        $res = array();
        foreach ($_REQUEST as $key => $value) {
            if ($key) {
                $res[$key] = $value;
            }
        }

    var_dump($res):

        array(4) {
          ["text1"]=> string(6) "mattis"
          ["text2"]=> string(4) "test"
          ["tu"]=> string(32) "deb6adbbff4234b5711cc4368c153bc4"
          ["PHPSESSID"]=> string(32) "cda24363cb9d3226bd37b2577ed0bc0b"
        }

    My JavaScript only sends text1 and text2:

        $.post("page.php", { text1: "mattis", text2: "test" });

    What is the "tu" variable being sent? Apparently it's very similar to the session id, but I've never seen it before. EDIT: It is sent in IE but not in FF.
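
    A hint hiding in the dump itself: PHPSESSID is a cookie, and it shows up in $_REQUEST, so $_REQUEST here is merging $_COOKIE in with $_POST. That makes "tu" most likely a cookie set by something else in the browser (a toolbar, or another app on the same domain), which would also explain why it appears in IE but not in Firefox. A quick way to confirm:

        var_dump($_POST);    // only what jQuery actually sent
        var_dump($_COOKIE);  // where "tu" probably lives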

  • PLT Scheme URL dispatch

    - by Inaimathi
    I'm trying to hook up URL dispatch with PLT Scheme. I've taken a look at the tutorial and the server documentation. I can figure out how to route requests to the same servlets. Specific example:

        (define (start request)
          (blog-dispatch request))

        (define-values (blog-dispatch blog-url)
          (dispatch-rules
           (("") list-posts)
           (("posts" (string-arg)) review-post)
           (("archive" (integer-arg) (integer-arg)) review-archive)
           (else list-posts)))

        (define (list-posts req) `(list-posts))
        (define (review-post req p) `(review-post ,p))
        (define (review-archive req y m) `(review-archive ,y ,m))

    Assuming the above code is running on a server listening on 8080, localhost:8080/ goes to a page that says "list-posts". Going to localhost:8080/posts/test goes to a PLT "file not found" page (with the above code, I'd expect it to go to a page that says "review-post test"). It feels like I'm missing something small and obvious. Can anyone give me a hint?
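
    One thing that bites people with this setup (a guess, since the serving code isn't shown): if the app is started with serve/servlet, only URLs matching #:servlet-regexp reach the dispatcher, and the default regexp only matches the initial servlet path, so /posts/test falls through to the file server and 404s. Passing an empty regexp routes every request through blog-dispatch:

        (require web-server/servlet-env)

        (serve/servlet start
                       #:port 8080
                       #:servlet-path "/"       ; URL the browser is sent to first
                       #:servlet-regexp #rx"")  ; hand every request to start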

  • How to Create a Temporary Function in Emacs Lisp

    - by Cristian
    I'm making some tedious calls to a bunch of functions, but the parameters will be determined at runtime. I wrote a simple function to keep my code DRY, but giving it a name is unnecessary. I don't use this function anywhere else. I'm trying to do it the way I would in Scheme, but I get a void-function error:

        (let ((do-work (lambda (x y z)
                         (do-x x)
                         (do-y y)
                         ;; etc
                         )))
          (cond (test-1 (do-work 'a 'b 'c))
                (test-2 (do-work 'i 'j 'k))))

    I could stick it all into an apply (e.g., (apply (lambda ...) (cond ...))) but that isn't very readable. Is there a better way?
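
    The void-function error is the Lisp-2 thing: Emacs Lisp keeps separate function and variable namespaces, so the let binding only sets do-work's variable cell, while (do-work ...) looks up its function cell. Calling through funcall (or apply) uses the variable's value:

        (let ((do-work (lambda (x y z)
                         (do-x x)
                         (do-y y))))
          (cond (test-1 (funcall do-work 'a 'b 'c))
                (test-2 (funcall do-work 'i 'j 'k))))

    With the cl package loaded, (flet ((do-work (x y z) ...)) ...) gives a local function you can call directly by name.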

  • Convert table-based design to table-less design in the best way

    - by Brij
    What is the best optimized way to convert the following to a table-less design? The layout should be cross-browser compatible and SEO-friendly.

        <table cellpadding="0" cellspacing="0">
          <tr>
            <td>Row 1 Column 1</td>
            <td>Row 1 Column 2</td>
            <td>Row 1 Column 3</td>
          </tr>
          <tr>
            <td colspan="3" align="center">Row 2</td>
          </tr>
          <tr>
            <td>Row 3</td>
            <td align="right" colspan="2"><img src="test.jpg" alt="test" /></td>
          </tr>
        </table>
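
    One minimal float-based sketch (the class names are invented for illustration, and the widths would need tuning to the real content):

        <div class="row">
          <div class="col">Row 1 Column 1</div>
          <div class="col">Row 1 Column 2</div>
          <div class="col">Row 1 Column 3</div>
        </div>
        <div class="row center">Row 2</div>
        <div class="row">
          <div class="col">Row 3</div>
          <div class="last"><img src="test.jpg" alt="test" /></div>
        </div>

        <style>
          .row    { overflow: hidden; }           /* contain the floats */
          .col    { float: left; width: 33%; }
          .center { text-align: center; }
          .last   { overflow: hidden; text-align: right; }  /* fills the remaining width */
        </style>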

  • Paging over a lazy-loaded collection with NHibernate

    - by HackedByChinese
    I read this article where Ayende states NHibernate can (compared to EF 4):

    - Collection with lazy="extra" - Lazy extra means that NHibernate adapts to the operations that you might run on top of your collections. That means that blog.Posts.Count will not force a load of the entire collection, but rather would create a "select count(*) from Posts where BlogId = 1" statement, and that blog.Posts.Contains() will likewise result in a single query rather than paying the price of loading the entire collection to memory.
    - Collection filters and paged collections - this allows you to define additional filters (including paging!) on top of your entities collections, which means that you can easily page through the blog.Posts collection, and not have to load the entire thing into memory.

    So I decided to put together a test case. I created the cliché Blog model as a simple demonstration, with two classes as follows:

        public class Blog
        {
            public virtual int Id { get; private set; }
            public virtual string Name { get; set; }
            public virtual ICollection<Post> Posts { get; private set; }

            public virtual void AddPost(Post item)
            {
                if (Posts == null)
                    Posts = new List<Post>();
                if (!Posts.Contains(item))
                    Posts.Add(item);
            }
        }

        public class Post
        {
            public virtual int Id { get; private set; }
            public virtual string Title { get; set; }
            public virtual string Body { get; set; }
            public virtual Blog Blog { get; private set; }
        }

    My mapping files look like this:

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property"
                           auto-import="true" default-cascade="none" default-lazy="true">
          <class name="Model.Blog, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
                 table="Blogs">
            <id name="Id" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Id" />
              <generator class="identity" />
            </id>
            <property name="Name" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Name" />
            </property>
            <property name="Type" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Type" />
            </property>
            <bag lazy="extra" name="Posts">
              <key>
                <column name="Blog_Id" />
              </key>
              <one-to-many class="Model.Post, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
            </bag>
          </class>
        </hibernate-mapping>

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property"
                           auto-import="true" default-cascade="none" default-lazy="true">
          <class name="Model.Post, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
                 table="Posts">
            <id name="Id" type="System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Id" />
              <generator class="identity" />
            </id>
            <property name="Title" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Title" />
            </property>
            <property name="Body" type="System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <column name="Body" />
            </property>
            <many-to-one class="Model.Blog, TestEntityFramework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Blog">
              <column name="Blog_id" />
            </many-to-one>
          </class>
        </hibernate-mapping>

    My test case looks something like this:

        // Configuration.Current.CreateSession() returns a custom ISession that
        // represents either EF4 or NHibernate
        using (ISession session = Configuration.Current.CreateSession())
        {
            blogs = (from b in session.Linq<Blog>()
                     where b.Name.Contains("Test")
                     orderby b.Id
                     select b);

            Console.WriteLine("# of Blogs containing 'Test': {0}", blogs.Count());
            Console.WriteLine("Viewing the first 5 matching Blogs.");

            foreach (Blog b in blogs.Skip(0).Take(5))
            {
                Console.WriteLine("Blog #{0} \"{1}\" has {2} Posts.", b.Id, b.Name, b.Posts.Count);
                Console.WriteLine("Viewing first 5 matching Posts.");

                foreach (Post p in b.Posts.Skip(0).Take(5))
                {
                    Console.WriteLine("Post #{0} \"{1}\" \"{2}\"", p.Id, p.Title, p.Body);
                }
            }
        }

    Using lazy="extra", the call to b.Posts.Count does do a SELECT COUNT(Id)... which is great. However, b.Posts.Skip(0).Take(5) just grabs all Posts for Blog.Id = ?id, and then LINQ on the application side takes the first 5 from the resulting collection. What gives?
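
    A hedged explanation of what's happening: extra-lazy loading can only intercept members of the collection interface itself (Count, Contains, the indexer), but Skip() and Take() are LINQ-to-Objects extension methods over IEnumerable<T>, so the first thing they do is enumerate the collection, which forces a full load. The paged-collection feature Ayende refers to goes through a session filter instead, which pages in SQL; a sketch using the stock NHibernate API:

        // pages in the database instead of in memory
        var firstFive = session.CreateFilter(b.Posts, "")
                               .SetFirstResult(0)
                               .SetMaxResults(5)
                               .List<Post>();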

  • Are xlrd and xlwt compatible?

    - by Leo
    I'm trying to create a workbook with Python and I need to test the values of different cells to fill it, but I'm having some trouble. I use xlrd and xlwt to create and edit the Excel file. I've made a little example of my problem and I don't understand why it's not working:

        import xlwt
        import xlrd

        wb = xlwt.Workbook()
        ws = wb.add_sheet('Test')
        ws.write(0, 0, "ah")

        cell = ws.cell(0, 0)  # AttributeError: 'Worksheet' object has no attribute 'cell'
        print cell.value

    I had taken for granted that xlrd and xlwt have shared classes which can interact with each other, but that doesn't seem to be the case. How do I get the cell value of an open Worksheet object?
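
    They are indeed separate libraries: xlrd's Sheet has cell(), but xlwt's Worksheet is essentially write-only, so there is no supported way to read a value back from it directly. Two common workarounds, sketched below: keep the values in your own structure as you write, or save and re-open the file with xlrd:

        import xlwt
        import xlrd

        wb = xlwt.Workbook()
        ws = wb.add_sheet('Test')
        ws.write(0, 0, "ah")
        wb.save('test.xls')

        # read it back; xlrd sheets do have cell()
        rb = xlrd.open_workbook('test.xls')
        rs = rb.sheet_by_name('Test')
        print rs.cell(0, 0).value  # u'ah'

    The xlutils package (xlutils.copy) exists for exactly this read-modify-write round trip if saving and reloading gets too clumsy.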

  • TDD, DDD and the No-getters principle

    - by Justin
    Hi all, After several years of following the bad practice handed down from 'architects' at my place of work and thinking that there must be a better way, I've recently been reading up around TDD and DDD and I think the principles and practices would be a great fit for the complexity of the software we write. However, many of the TDD samples I have seen call a method on the domain object and then test properties of the object to ensure the behaviour executed correctly. On the other hand, several respected people in the industry (Greg Young most noticeably so) advocate the "no-getters" principle on our domain objects. My question therefore is: How does one test the functionality of a domain object if it is forbidden to retrieve its state? I believe I am missing something fundamental so please feel free to call me an idiot and enlighten me - any guidance would be greatly appreciated.

  • How do I run all my PHPUnit tests?

    - by JJ
    I have a script called Script.php and tests for it in Tests/Script.php, but when I run "phpunit Tests" it does not execute any tests in my test file. How do I run all my tests with PHPUnit? PHPUnit 3.3.17, PHP 5.2.6-3ubuntu4.2, latest Ubuntu. Output:

        $ phpunit Tests
        PHPUnit 3.3.17 by Sebastian Bergmann.

        Time: 0 seconds

        OK (0 tests, 0 assertions)

    And here are my script and test files:

    Script.php:

        <?php
        function returnsTrue() {
            return TRUE;
        }
        ?>

    Tests/Script.php:

        <?php
        require_once 'PHPUnit/Framework.php';
        require_once 'Script.php';

        class TestingOne extends PHPUnit_Framework_TestCase
        {
            public function testTrue()
            {
                $this->assertEquals(TRUE, returnsTrue());
            }

            public function testFalse()
            {
                $this->assertEquals(FALSE, returnsTrue());
            }
        }

        class TestingTwo extends PHPUnit_Framework_TestCase
        {
            public function testTrue()
            {
                $this->assertEquals(TRUE, returnsTrue());
            }

            public function testFalse()
            {
                $this->assertEquals(FALSE, returnsTrue());
            }
        }
        ?>
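
    PHPUnit's discovery conventions matter here: by default it looks for files named *Test.php, and given a bare name like "Tests" it looks for a class Tests in Tests.php, so Tests/Script.php is never picked up. Two sketches that should work (the first relies on the naming convention, the second on an XML config; details may differ slightly on PHPUnit 3.3):

        # rename to follow convention: Tests/ScriptTest.php containing class ScriptTest
        $ phpunit Tests/ScriptTest.php

    Or keep the current names and point a configuration file at the directory, overriding the filename suffix:

        <!-- phpunit.xml -->
        <phpunit>
          <testsuites>
            <testsuite name="All">
              <directory suffix=".php">Tests</directory>
            </testsuite>
          </testsuites>
        </phpunit>

        $ phpunit --configuration phpunit.xml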
