Search Results

Search found 3469 results on 139 pages for 'broken harddrive'.


  • Many Different Things Rolled into a Ball

    - by MOSSLover
    Yeah, I know I don’t blog much anymore, because life has taken me places that don’t involve the interwebs, unfortunately. I am in the midst of planning two events, starting a not-for-profit, creating more sessions for various conferences, submitting to various conferences, working a 40-hour-a-week job, and attempting to hang out with boyfriend/friends/family. So you can see that list does not include this blog; sadly, that’s how it goes sometimes. The bottom items are more important than any of the top ones. I haven’t seen St. Louis in a while and I get to go back. I was gone from home for MVP Summit and Best Practices Conference, so the boyfriend and cat didn’t get to see me either for a bit. Then you have to add in the whole broken-toilet fiasco this week. Maintenance really thought it would be cool to turn off the ability to flush. I mean, who does that? Then when we called the owner he came by and turned it on, and we figured it was an accident, because, well, the next day no one came by to tell us there was a leak. It was all kinds of strangeness and involved me running to other people’s toilets. As Dan Usher would say, I was a sad panda for a few days.

    So I guess I wanted to post a few thoughts here just because I can. I do not like multiple Content Editor web parts embedded with HTML files in numerous pages doing the same thing. I will tell you why I don’t like these particular web parts and the way they are being used. First off, if you have a bunch of pages with script includes, it’s about time you just dumped them into the master page. Why bother finding all 20 pages and changing them when you can use a single master page that already exists?

    The other thing that is bothering me these days is screen scraping. Just don’t do it, because in 2010 you will find the UI is substantially slower. I understand you are new and you have no idea what to do. You are also using 2007, am I right? So then you need to go to codeplex.com and type in a search for SPServices. Download it, use it, love it, and then have its babies (well, maybe don’t go so far; this is not the GRID in Tron).

    If you have a ton of constants in your code, why did you not go in and create a web part with a bunch of properties and/or link to a configuration list hidden in the browser? That type of property and list could help you out in the long run. The power users and administrators can then change the control without you having to compile it over and over again. It’s good stuff. That matters especially in 2007, where you have to do a farm solution; in 2010 you can do a sandbox solution, I guess, but shouldn’t you make it as easy and supportable as possible for other users? (There’s a quick sketch of what I mean at the very end of this post.) In conclusion, I’m an angry person when it comes to viewing something repeatedly and analyzing it in a system.

    Now we will move on to the next topic: MVP Summit. So yeah, I can’t really talk about particulars, but I can talk about my experience as a person. Don’t build something up to be cooler than it is, only to be dropped from your 10,000-foot perch. My experience was great, but the content overall left something to be desired. It’s OK; I got to meet a lot of people I would not have met if I had not gone. Some of it was surreal, such as product group members showing up and talking to us. It was pretty neat. Plus I had never had the chance to get to that mythical MS office in Redmond before. Prior to Summit it was like Rainbow Brite’s unicorn taunting me on television when I was a kid.
So I guess, with all that said, I give it a B. It was awesome in some ways, but lacking in others. The cool part is that I got to go. Would I have lived without going? Yes, but it was still cool. I could prattle on about other things and make this post massive, but I’m going to pass and give myself a piece of Sunday to play Rock Band and do 800 other things. I hope the two of you who read this blog are well. I’ll catch you all at another juncture. Have a good weekend and varying holidays in between. Technorati Tags: SharePoint, MVP Summit, JQuery, Javascript
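    PS: Here’s that web part property sketch I mentioned. This is not from any real project - just a minimal, hypothetical example (all names made up) of exposing a value through the tool pane so power users can change it in the browser instead of waiting for you to recompile and redeploy a farm solution:

        using System.Web.UI.WebControls.WebParts;

        public class AnnouncementsPart : WebPart
        {
            private string _sourceListName = "SiteConfig";

            // Editable from the web part tool pane; persisted with the page,
            // so admins can change it without a recompile.
            [WebBrowsable(true),
             Personalizable(PersonalizationScope.Shared),
             WebDisplayName("Source List Name"),
             WebDescription("Hidden configuration list to read settings from.")]
            public string SourceListName
            {
                get { return _sourceListName; }
                set { _sourceListName = value; }
            }

            protected override void RenderContents(System.Web.UI.HtmlTextWriter writer)
            {
                // A real web part would read its settings from the configuration list here.
                writer.Write("Reading configuration from: " + SourceListName);
            }
        }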


  • Best Method For Evaluating Existing Software or New Software

    How many of us have been faced with having to decide on an off-the-shelf or a custom-built component, application, or solution to integrate into an existing system or to be the core foundation of a new system? What is the best method for evaluating existing software, or new software still in the design phase? One of the industry-preferred methodologies is the Active Reviews for Intermediate Designs (ARID) evaluation process. ARID is a hybrid of the Active Design Review (ADR) methodology and the Architecture Tradeoff Analysis Method (ATAM).

    So what is ARID? ARID’s main goal is to ensure quality, detailed designs in software. One way it does this is by empowering reviewers with generic, open-ended survey questions. This approach removes the possibility of standard answers such as “Yes” or “No”. The ADR process avoids “Yes”/“No” questions because they can be leading, depending on how the question is asked, and because they tend to receive less thought than more open-ended questions.

    Common Active Design Review questions:
    - What possible exceptions can occur in this component, application, or solution?
    - How should exceptions be handled in this component, application, or solution?
    - Where should exceptions be handled in this component, application, or solution?
    - How should the component, application, or solution flow, based on the design?
    - What is the maximum execution time for every component, application, or solution?
    - What environments can this component, application, or solution run in?
    - What data dependencies does this component, application, or solution have?
    - What kind of data does this component, application, or solution require?

    OK, now I know what ARID is - how can I apply it? Let’s imagine that your organization is going to purchase an off-the-shelf (OTS) solution for its customer-relationship management software. What process would we use to ensure that the correct purchase is made? If we use ARID, then we have a series of 9 steps, broken up into 2 phases, to ensure that the correct OTS solution is purchased.

    Phase 1:
    1. Identify the reviewers
    2. Prepare the design briefing
    3. Prepare the seed scenarios
    4. Prepare the materials

    When identifying reviewers for a design, it is preferred that they be pulled from a candidate pool of the developers who are going to implement the design. The belief is that developers actually implementing the design will have more of a vested interest in ensuring that the design is correct before coding starts.

    The design briefing consists of a summary of the design, with examples of the design solving real-world problems, and should typically be no longer than two hours. The primary goal of this briefing is to summarize the design well enough that the review members could actually implement it. In the example of purchasing an OTS product, I would review my briefing with the review facilitator prior to its distribution, to ensure that nothing was excluded that should have been included. This practice also lets me test the length of the briefing, to ensure that it can be delivered in an appropriate amount of time.

    Seed scenarios are designed to illustrate conceptualized scenarios when applied to a set of sample data. These scenarios can then be used by the reviewers in the actual evaluation of the software. All materials needed for the evaluation should be prepared ahead of time so that they can be reviewed prior to and during the meeting.
    Materials included:
    - Presentation
    - Seed scenarios
    - Review agenda

    Phase 2:
    5. Present ARID
    6. Present the design
    7. Brainstorm and prioritize scenarios
    8. Apply scenarios
    9. Summarize

    Prior to the start of any ARID review meeting, the facilitator should outline the remaining steps of ARID so that all the participants know exactly what they will be doing before the review process starts. Once the ARID rules have been laid out, the lead designer presents an overview of the design, which typically takes about two hours. During this time, as a standard practice, the review panel is not allowed to ask questions about the design or its rationale; instead, questions are written down for use later in the process. After the presentation, the compiled list of questions is summarized and sent back to the lead designer as areas that need to be addressed further. In the example of purchasing an OTS product, issues could arise regarding security, the implementation needed, or even whether this is the correct product for the needed solution.

    After the design presentation, a brainstorming and prioritization process begins by reducing the seed scenarios down to just the highest-priority scenarios. These are then used to test the design for suitability. Once the selected scenarios have been defined, the reviewers apply the examples provided in the presentation to the scenarios. The intended output of this process is code or pseudocode that makes use of the examples provided while solving the selected seed scenarios. As a standard rule, the designers of the system are not allowed to help the review board unless the whole board becomes stuck; when this occurs, it is documented along with the reason why the designer needed to get the review panel back on track. Once all of the scenarios have been completed, the review facilitator goes over the issues that arose during the process with the group. Then the reviewers are polled as to the efficacy of the review experience.

    References: Clements, Paul; Kazman, Rick; Klein, Mark (2002). Evaluating Software Architectures: Methods and Case Studies. Indianapolis, IN: Addison-Wesley.


  • Changing the Game: Why Oracle is in the IT Operations Management Business

    - by DanKoloski
    Next week, in Orlando, is the annual Gartner IT Operations Management Summit. Oracle is a premier sponsor of this annual event, which brings together IT executives for several days of high-level talks about the state of operational management of enterprise IT. This year, Sushil Kumar, VP of Product Strategy and Business Development for Oracle’s Systems & Applications Management, will be presenting on the transformation in IT Operations required to support enterprise cloud computing.

    IT Operations transformation is an important subject because, year after year, we hear essentially the same refrain: large enterprises spend an average of two-thirds (67%!) of their IT resources (budget, energy, time, people, etc.) on running the business, with far too little left over to spend on growing and transforming the business (which is what the business actually needs and wants). In the thirtieth year of the distributed computing revolution (give or take, depending on how you count it), it’s amazing that we have still not moved the needle on the single biggest component of enterprise IT resource utilization.

    Oracle is in the IT Operations Management business because when management is engineered together with the technology under management, the resulting efficiency gains can be truly staggering. To put it simply: what if you could turn that 67% of IT resources spent on running the business into 50%? Or 40%? Imagine what you could do with those resources. It’s now not just possible, but happening.

    This seems like a simple idea, but it is a radical change from “business as usual” in enterprise IT Operations. For the last thirty years, management has been a bolted-on afterthought: we pick and deploy our technology, then figure out how to manage it. This pervasive dysfunction is a broken cycle that guarantees high ongoing operating costs and low agility. If we want to break the cycle, we need to take a more tightly coupled approach. As a complete applications-to-disk platform provider, Oracle is engineering management together with technology across our stack and hooking that on-premise management up live to My Oracle Support.

    Let’s examine the results with just one piece of the Oracle stack: the Oracle Database. Oracle began this journey with Oracle Database 9i many years ago, with the introduction of low-impact instrumentation in the database kernel (“tell me what’s wrong”), and through Database 10g, 11g, and 11gR2 has successively added integrated advisories (“tell me how to fix what’s wrong”), lifecycle management, and automated self-tuning (“fix it for me, and do it on an ongoing basis for all my assets”). When enterprises take advantage of this tight coupling, the results are game-changing.
    Consider the following (for a full list of public references, visit this link):
    - British Telecom improved database provisioning time 1000% (from weeks to minutes), which allows them to provide a new DBaaS service to their internal customers with no additional resources.
    - Cerner Corporation saved $9.5 million in CapEx and OpEx AND launched a brand-new cloud business at the same time.
    - Vodafone Group plc improved response times 50% and reduced maintenance planning times 50-60% while serving 391 million registered mobile customers.

    Or see the recent Database Manageability and Productivity Cost Comparisons: Oracle Database 11g Release 2 vs. SAP Sybase ASE 15.7, Microsoft SQL Server 2008 R2, and IBM DB2 9.7, conducted by independent analyst firm ORC. In later entries, we’ll discuss similar results across other portions of the Oracle stack and how these efficiency gains are required to achieve the agility benefits of Enterprise Cloud. Stay connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter


  • XmlWriter and lower ASCII characters

    - by Rick Strahl
    Ran into an interesting problem today on my CodePaste.net site: the main RSS and ATOM feeds on the site were broken because one code snippet on the site contained a lower-ASCII character (CHR(3)). I don't think this was done on purpose, but it was enough to make the feeds fail. After quite a bit of debugging, and after throwing a custom error handler into my actual feed-generation code that just spit out the raw error instead of running it through ASP.NET MVC and my own error pipeline, I found the actual error. The lovely base exception and stack trace I got looked like this:

        Error: '', hexadecimal value 0x03, is an invalid character.
        at System.Xml.XmlUtf8RawTextWriter.InvalidXmlChar(Int32 ch, Byte* pDst, Boolean entitize)
        at System.Xml.XmlUtf8RawTextWriter.WriteElementTextBlock(Char* pSrc, Char* pSrcEnd)
        at System.Xml.XmlUtf8RawTextWriter.WriteString(String text)
        at System.Xml.XmlWellFormedWriter.WriteString(String text)
        at System.Xml.XmlWriter.WriteElementString(String localName, String ns, String value)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItemContents(XmlWriter writer, SyndicationItem item, Uri feedBaseUri)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItem(XmlWriter writer, SyndicationItem item, Uri feedBaseUri)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteFeed(XmlWriter writer)
        at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteTo(XmlWriter writer)
        at CodePasteMvc.Controllers.ApiControllerBase.GetFeed(Object instance) in C:\Projects2010\CodePaste\CodePasteMvc\Controllers\ApiControllerBase.cs:line 131

    XML doesn't like extended ASCII characters

    It turns out the issue is that XML in general does not deal well with lower-ASCII characters. According to the XML spec, it looks like any characters below 0x09 are invalid. If you generate an XML document in .NET with an embedded &#x3; entity (as mine did to create the error above), you tend to get an XML document error when displaying it in a viewer. For example, Chrome (which displays RSS feeds as raw XML by default) showed an error for my feed output with the invalid character embedded; other browsers show similar error messages. The nice thing about Chrome is that you can actually view source and jump down to the line that causes the error, which allowed me to track down the actual message that failed. If you create an XML document that contains a 0x03 character, the XML writer fails outright with the error: '', hexadecimal value 0x03, is an invalid character.

    The good news is that this behavior is overridable, so XML output can at least be created, by using the XmlWriterSettings object when configuring the XmlWriter instance. In my RSS configuration code this looks something like this:

        MemoryStream ms = new MemoryStream();
        var settings = new XmlWriterSettings()
        {
            CheckCharacters = false
        };
        XmlWriter writer = XmlWriter.Create(ms, settings);

    and voilà, the feed now generates. Now, generally this is probably NOT a good idea, because as mentioned above these characters are illegal, and if you view the raw XML document you'll get validation errors. Luckily, most RSS feed readers don't care and happily accept and display the feed correctly, which is good because it got me over an embarrassing hump until I figured out a better solution.

    How to handle extended characters?
    I was glad to get the feed fixed for the time being, but I was still stuck with an interesting dilemma. CodePaste.net accepts user input for code snippets, and those code snippets can contain just about anything. This means that ASP.NET's standard request filtering cannot be applied to this content. The code content displayed is encoded before display, so on the HTML end the CHR(3) input is not really an issue. While invisible characters are hardly useful in user input, it's not uncommon for odd characters to show up in code snippets. You know the old fat-fingering that happens when you're in the middle of a coding session: those invisible characters do sometimes end up in code editors and then get pasted into the HTML textbox as a CodePaste.net snippet.

    The question is how to filter this text. Looking back at the XML charset spec, it looks like all characters below 0x20 (space) except for 0x09 (tab), 0x0A (LF), and 0x0D (CR) are illegal. So applying the following filter with a regex should work to remove the invalid characters:

        string code = Regex.Replace(item.Code, @"[\u0000-\u0008\u000B\u000C\u000E-\u001F]", "");

    (Note there are no commas inside the character class; inside [] a comma is a literal and would get stripped from the snippet text too.) Applying this regex to the code snippet (and title) eliminates the problems, and the feed renders cleanly.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, XML.
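    To tie the two fixes together, here's a minimal self-contained sketch of the approach described above; the helper names are mine, not from the actual CodePaste.net code:

        using System.IO;
        using System.Text.RegularExpressions;
        using System.Xml;

        static class XmlFeedHelper
        {
            // Strip characters that are illegal in XML 1.0: everything below 0x20
            // except tab (0x09), LF (0x0A), and CR (0x0D).
            public static string StripInvalidXmlChars(string text)
            {
                return Regex.Replace(text, @"[\u0000-\u0008\u000B\u000C\u000E-\u001F]", "");
            }

            // Writes one element of user-supplied content, sanitizing it first.
            // CheckCharacters = false stays on as a belt-and-suspenders fallback.
            public static string WriteSnippetElement(string userCode)
            {
                var settings = new XmlWriterSettings { CheckCharacters = false };
                using (var sw = new StringWriter())
                {
                    using (var writer = XmlWriter.Create(sw, settings))
                    {
                        writer.WriteElementString("code", StripInvalidXmlChars(userCode));
                    }
                    return sw.ToString();
                }
            }
        }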


  • concurrency::index<N> from amp.h

    - by Daniel Moth
    Overview

    C++ AMP introduces a new template class index<N>, where N can be any value greater than zero, that represents a unique point in N-dimensional space, e.g. if N=2 then an index<2> object represents a point in 2-dimensional space. This class is essentially a coordinate vector of N integers representing a position in space relative to the origin of that space. It is ordered from most-significant to least-significant (so, if the 2-dimensional space is rows and columns, the first component represents the rows). The underlying type is a signed 32-bit integer, and component values can be negative. The rank field returns N.

    Creating an index

    The default parameterless constructor returns an index with each dimension set to zero, e.g.

        index<3> idx; // represents point (0,0,0)

    An index can also be created from another index through the copy constructor or assignment, e.g.

        index<3> idx2(idx); // or index<3> idx2 = idx;

    To create an index representing something other than 0, you call its constructor as per the following 4-dimensional example:

        int temp[4] = {2,4,-2,0};
        index<4> idx(temp);

    Note that there are convenience constructors (that don’t require an array argument) for creating index objects of rank 1, 2, and 3, since those are the most common dimensions used, e.g.

        index<1> idx(3);
        index<2> idx(3, 6);
        index<3> idx(3, 6, 12);

    Accessing the component values

    You can access each component using the familiar subscript operator.

    One-dimensional example:

        index<1> idx(4);
        int i = idx[0]; // i=4

    Two-dimensional example:

        index<2> idx(4,5);
        int i = idx[0]; // i=4
        int j = idx[1]; // j=5

    Three-dimensional example:

        index<3> idx(4,5,6);
        int i = idx[0]; // i=4
        int j = idx[1]; // j=5
        int k = idx[2]; // k=6

    Basic operations

    Once you have your multi-dimensional point represented in the index, you can treat it as a single entity, including performing common operations between it and an integer (through operator overloading): -- (pre- and post-decrement), ++ (pre- and post-increment), %=, *=, /=, +=, -=, %, *, /, +, -. There are also operator overloads for operations between index objects, i.e. ==, !=, +=, -=, +, -. Here is an example (where no assertions are broken):

        index<2> idx_a;
        index<2> idx_b(0, 0);
        index<2> idx_c(6, 9);
        _ASSERT(idx_a.rank == 2);
        _ASSERT(idx_a == idx_b);
        _ASSERT(idx_a != idx_c);

        idx_a += 5;
        idx_a[1] += 3;
        idx_a++;
        _ASSERT(idx_a != idx_b);
        _ASSERT(idx_a == idx_c);

        idx_b = idx_b + 10;
        idx_b -= index<2>(4, 1);
        _ASSERT(idx_a == idx_b);

    Usage

    You'll most commonly use index<N> objects to index into data types that we'll cover in future posts (namely array and array_view). Also, when we look at the new parallel_for_each function, we'll see that an index<N> object is the single parameter to the lambda, representing the (multi-dimensional) thread index. In the next post we'll go beyond being able to represent an N-dimensional point in space, and we'll see how to define the N-dimensional space itself through the extent<N> class. Comments about this post by Daniel Moth welcome at the original blog.


  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What’s the safest and most efficient way to get the job done?

    Today’s Question & Answer session comes to us courtesy of SuperUser - a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them:

    I have old CDs/DVDs which have some backups; these backups have some work and personal files. I always had problems when I needed to physically destroy them to make sure no one will reuse them. Breaking them is dangerous: pieces can fly fast and may cause harm. Scratching them badly is what I always do, but it takes a long time, and I have managed to read some of the data in scratched CDs/DVDs. What’s the way to physically destroy a CD/DVD safely?

    How should he approach the problem?

    The Answer

    SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution:

    The proper way is to get yourself a shredder that also handles CDs - look online for CD shredders. This is the right option if you end up doing this routinely. I don’t do this very often - for small-scale destruction I favour a pair of tin snips - they have enough force to cut through a CD, yet are blunt enough to cause small cracks along the shear line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing the plastic, and these work magnificently. Do it in a bag, cause this generates sparkly bits.

    There’s also the fun, and probably dangerous, way - find yourself an old microwave, and microwave them. I would suggest doing this in a well-ventilated area, of course, and not using your mother’s good microwave. There’s a lot of videos of this on YouTube - such as this one (who’s done this in a kitchen… and using his mom’s microwave). This results in a very much destroyed CD in every respect. If I was an evil hacker mastermind, this is what I’d do. The other options are better for the rest of us.

    Another contributor, Keltari, notes that the only safe (and DoD-approved) way to dispose of data is total destruction:

    The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase “good enough for government work” does not apply - depending on which part of the government. It is technically possible to recover data from shredded/broken/etc. CDs and DVDs. If you have a microscope handy, put the disc in it and you can see the pits. The disc can be reassembled and the data can be reconstructed - minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While technically that would completely eliminate the data, it leaves no record of the disc having existed. And in some places, like the DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data, yet keep the label surface of the disc intact. Basically, it’s no different than using sandpaper on the writable side, till the data is gone.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.


  • Level of detail algorithm not functioning correctly

    - by Darestium
    I have been working on this problem for months. I have been creating a planet generator of sorts, and after more than 6 months of work I am no closer to finishing it than I was 4 months ago.

    My problem: the terrain does not subdivide in the correct locations. It almost seems as if there is a ghost camera next to me, and the quads subdivide based on the position of this "ghost camera". Here is a video of the broken program: http://www.youtube.com/watch?v=NF_pHeMOju8 - the best example of the problem occurs around 0:36.

    For detail limiting, I am going for a chunked-LOD approach, which subdivides the terrain based on how far you are away from it. I use a "depth table" to determine how many subdivisions should take place.

        void PQuad::construct_depth_table(float distance)
        {
            tree[0] = -1;
            for (int i = 1; i < MAX_DEPTH; i++)
            {
                tree[i] = distance;
                distance /= 2.0f;
            }
        }

    The chunked LOD relies on the child/parent structure of quads; the depth is determined by a constant, e.g. if the constant is 6, there are six levels of detail. The quads which should be drawn go through a distance test from the player to the centre of the quad.

        void PQuad::get_recursive(glm::vec3 player_pos, std::vector<PQuad*>& out_children)
        {
            for (size_t i = 0; i < children.size(); i++)
            {
                children[i].get_recursive(player_pos, out_children);
            }

            if (this->should_draw(player_pos) || this->depth == 0)
            {
                out_children.emplace_back(this);
            }
        }

        bool PQuad::should_draw(glm::vec3 player_position)
        {
            float distance = distance3(player_position, centre);

            if (distance < tree[depth])
            {
                return true;
            }

            return false;
        }

    The root quad has four children, which could be visualized like the following:

        [] []
        [] []

    where each [] is a child. Each child has the same number of children, up until the detail limit; the quads which are 6 iterations deep are leaf nodes, and these nodes have no children. Each node has a corresponding Mesh; each Mesh structure has 16x16 quad shapes, and each Mesh's quad shapes halve in size at each detail level deeper, creating more detail.

        void PQuad::construct_children()
        {
            // Calculate the position of the Quad based on the parent's location
            calculate_position();

            if (depth < (int)MAX_DEPTH)
            {
                children.reserve((int)NUM_OF_CHILDREN);
                for (int i = 0; i < (int)NUM_OF_CHILDREN; i++)
                {
                    children.emplace_back(PQuad(this->face_direction, this->radius));
                    PQuad *child = &children.back();
                    child->set_depth(depth + 1);
                    child->set_child_index(i);
                    child->set_parent(this);
                    child->construct_children();
                }
            }
            else
            {
                leaf = true;
            }
        }

    The following function creates the vertices for each quad. I feel that it may play a role in the problem - I just can't determine what is causing it.
        void PQuad::construct_vertices(std::vector<glm::vec3> *vertices, std::vector<Color3> *colors)
        {
            vertices->reserve(quad_width * quad_height);
            for (int y = 0; y < quad_height; y++)
            {
                for (int x = 0; x < quad_width; x++)
                {
                    switch (face_direction)
                    {
                        case YIncreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, quad_height - 1.0f, -(position.y + y * element_width)));
                            break;
                        case YDecreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, 0.0f, -(position.y + y * element_width)));
                            break;
                        case XIncreasing:
                            vertices->emplace_back(glm::vec3(quad_width - 1.0f, position.y + y * element_width, -(position.x + x * element_width)));
                            break;
                        case XDecreasing:
                            vertices->emplace_back(glm::vec3(0.0f, position.y + y * element_width, -(position.x + x * element_width)));
                            break;
                        case ZIncreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, 0.0f));
                            break;
                        case ZDecreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, -(quad_width - 1.0f)));
                            break;
                    }

                    // Move the bottom, right, front vertex of the cube from (0,0,0) to (-16, -16, 16)
                    (*vertices)[vertices->size() - 1] -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));

                    colors->emplace_back(Color3(255.0f, 255.0f, 255.0f, false));
                }
            }

            switch (face_direction)
            {
                case YIncreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, quad_height - 1.0f, -(position.y + quad_height / 2.0f));
                    break;
                case YDecreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, 0.0f, -(position.y + quad_height / 2.0f));
                    break;
                case XIncreasing:
                    this->centre = glm::vec3(quad_width - 1.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                    break;
                case XDecreasing:
                    this->centre = glm::vec3(0.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                    break;
                case ZIncreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, 0.0f);
                    break;
                case ZDecreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, -(quad_height - 1.0f));
                    break;
            }

            this->centre -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));
        }

    Any help in discovering what is causing this "subdividing in the wrong place" behaviour would be greatly appreciated.


  • Knowing your user is key--Part 1: Motivation

    - by erikanollwebb
    I was thinking about where the best place to start in this blog would be, and finally came back to a theme that I think is pretty critical: successful gamification in the enterprise comes down to knowing your user.  Lots of folks will say that gamification is about understanding that everyone is a gamer.  But at least in my org, that argument won't play for a lot of people.  Pun intentional.  It's not that I don't see the attraction of the idea - really, very few people play no games at all.  If they don't play video games, they might play solitaire on their computer.  They may play card games, or some type of sport.  Mario Herger has some great facts on how much game playing is going on at his Enterprise-Gamification.com website.

    But at the end of the day, I can't sell that into my organization well.  We are Oracle.  We make big, serious software designed to run your whole business.  We don't make Angry Birds out of your financial reporting tools.  So I stick with the argument that works better: gamification techniques are really just good principles of user experience packaged a little differently.  Feedback?  We already know feedback is important when using software.  Progress indicators?  Got that too.  Game mechanics may package things in a more explicit way, but it's not really "new".  Knowing how to use game mechanics - and this is what a user experience team is important for - means totally understanding who our users are and what they are motivated by.

    For several years, I taught college psychology courses, including Motivation.  Motivation is generally broken down into intrinsic motivation, which comes from within the individual, and extrinsic motivation, which comes from outside the individual.  Intrinsic motivation is the motivation that comes from just a general sense of pleasure in the doing of something.  For example, I like to cook.  I like to cook a lot.  The kind of cooking I think is just fun makes other people - people who don't like to cook - cringe.  Like the cake I made this week: the star-spangled rhapsody from The Cake Bible: two layers of meringue, two layers of genoise flavored with a raspberry eau de vie syrup, whipped cream with berries, and a mousseline buttercream, also flavored with raspberry liqueur and topped with fresh raspberries and blueberries.

    I love cooking - I ask for cooking tools for my birthday and Christmas, and I take classes like sushi making and knife skills for fun.  I like reading about how you can make an emulsion of egg yolks, melted butter, and lemon, cook it slowly, and transform it into a sauce hollandaise (my use for all the egg yolks that didn't go into the aforementioned cake).  And while it's nice when people like what I cook, I don't do it for that.  I do it because I think it's fun.  My former boss, Ultan Ó Broin, loves to fish in the sea off the coast of Ireland.  Not because he gets prizes for it, or awards, but because it's fun.  To quote a note he sent me today when I asked if having been recently ill kept him from the beginning of mackerel season, he told me he had already been out and said "I can fish when on a deathbed" (read more of Ultan's work; see his blogs on User Assistance and Translation).  That's not the kind of intensity you get about something you don't like to do.  I'm sure you can think of something you do just because you like it.

    So how does that relate to gamification?  Gamification in the enterprise space is about uncovering the game within work.
    Gamification is about tapping into things people already find motivating.  But to do that, you need to know what that user is motivated by.

    Customer Relationship Management (CRM) is one of those areas where over-the-top gamification seems to work (not to plug a competitor in this space, but you can search on what Bunchball* has done with a company just a little north of us on 101 for the CRM crowd).  Sales people are naturally competitive and thrive on that, plus recognition of their sales work.  You can use lots of game mechanics like leaderboards and challenges and scorecards with this type of user, and they love it.  Show my whole org I'm leading in sales for the quarter?  Bring it on!

    However, take the average accountant, show how much general ledger activity they have done in the last week, and expose it to their whole org on a leaderboard, and I think you'd see a lot of people looking for a new job.  Why?  Because in general, accountants aren't extraverts who thrive on competition in their work.  That doesn't mean there aren't game mechanics that would work for them, but they won't be the same game mechanics that work for sales people.  It's a different type of user, and they are motivated by different things.

    To break this up, I'll stop here and post now.  I'll pick this thread up in the next post.  Thoughts?  Questions?

    *Disclosure: To my knowledge, Oracle has no relationship with Bunchball at this point in time.


  • On Reflector Pricing

    - by Nick Harrison
    I have heard a lot of outrage over Red Gate's decision to charge for Reflector. In the interest of full disclosure, I am a fan of Red Gate. I have worked with them on several usability tests. They also sponsor Simple Talk, where I publish articles. They are a good company.

    I am also a BIG fan of Reflector. I have used it since Lutz originally released it. I have written my own add-ins. I have written code to host Reflector and use its object model in my own code. Reflector is a beautiful tool. The care that Lutz took to incorporate extensibility is amazing. I have never had difficulty convincing my fellow developers that it is a wonderful tool. Almost always, once anyone sees it in action, it becomes their favorite tool. This widespread adoption and usability has made it an icon and pivotal pillar in the DotNet community. Even folks with the attitude that if it did not come out of Redmond then it must not be any good still love it.

    It is ironic to hear everyone clamoring for it to be released as open source. Reflector was never open source; it was free, but you were never able to peruse the source code and contribute your own changes. You could not even use Reflector to view its own source code. From the very beginning, it was never anyone's intention for just anyone to examine the source code and make their own contributions aside from the add-in model. Lutz chose to hand over the reins to Red Gate because he believed that they would be able to build on his original vision and keep the product viable and effective. He did not choose to make it open source, hoping that the community would be up to the challenge. The simplicity and elegance may well have been lost with the "design by committee" nature of open source.

    Despite being a wonderful and beloved tool, Reflector cannot be an easy tool to maintain. Maybe because it is so wonderful and beloved, it is even more difficult to maintain. At any rate, we have high expectations. Reflector must continue to be able to reasonably disassemble every language construct that the framework and core languages dream up. We want it to be fast, and we also want it to continue to be simple to use. No small order.

    Red Gate tried to keep the core product free. Sadly, there was not enough interest in the Pro version to subsidize the rest of the expenses. $35 is a reasonable cost - more than reasonable. I have read the blog posts and forum posts complaining about the time associated with getting the expense approved. I have heard people complain about the cost being unreasonable if you are a developer from certain countries. Let's do the math. How much of a productivity boost is Reflector? How many hours do you think it saves you in a typical project? The next question is a little easier if you are a contractor or a consultant, but what is your hourly rate? If you are not a contractor, you can probably figure out an hourly rate. How long does it take to get a return on your investment? At even a modest hourly rate, the license pays for itself the first time it saves you an hour or two. The value proposition is not a difficult one to make.

    I have read people clamoring that Red Gate sucks and is evil. They complain about broken promises and conflicts of interest. Relax! Red Gate is not evil. The world is not coming to an end. The sun will come up tomorrow. I am sure that Red Gate will come up with options for volume licensing or site licensing for companies that want to get a licensed copy for their entire team. Don't panic, and I am sure that many great improvements are on the horizon.
Switching the UI to WPF and including a tabbed interface opens up lots of possibilities.


  • NDepend Evaluation: Part 3

    - by Anthony Trudeau
    NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high code quality. NDepend uses a number of metrics and aggregates the data in pleasing static and active visual reports. My evaluation of NDepend will be broken up into several different parts. In the first part of the evaluation I looked at installing the add-in, and in the last part I went over my first impressions, including an overview of the features. In this installment I provide a little more detail on a few of the features that I really like.

    Dependency Matrix

    The dependency matrix is one of the rich visual components provided with NDepend.  At a glance it lets you know where you have coupling problems, including cycles.  It does this with numbers indicating the weight of the dependency and a color coding that indicates the nature of the dependency. Green and blue cells are direct dependencies (with the difference being whether the relationship is from row-to-column or column-to-row).  Black cells are the ones that you really want to know about.  These indicate that you have a cycle; that is, type A refers to type B and type B also refers to type A. But that's not the end of the story.  A handy pop-up appears when you hover over the cell in question.  It explains the color and the dependency, and provides several interesting links that will teach you more than you want to know about the dependency. You can double-click the problem cells to explode the dependency.  That will show the dependencies on a method-by-method basis, allowing you to more easily target and fix the problem.  When you're done you can click the back button on the toolbar.

    Dependency Graph

    The dependency graph is another component provided.  It's complementary to the dependency matrix, but it isn't as easy to identify dependency issues using this window. On a positive note, it does provide more information than the matrix. My biggest issue with the dependency graph is determining what is shown.  This was not readily obvious.  I ended up using the navigation buttons to get an acceptable view.  I would have liked to choose what I see. Once you see the types you want, you can get a decent idea of coupling strength based on the width of the dependency lines.  Double-arrowed lines are problematic and are shown in red.  The size of the boxes is related to the metric being displayed.  This is controlled using the Box Size drop-down in the toolbar.  Personally, I don't find the size of the box to be helpful, so I change it to Constant Font. One nice thing about the display is that you can see the entire path of dependencies when you hover over a type.  This is done by color-coding the dependencies and dependants.  It would be nice if selecting the box for the type would lock the highlighting in place. I did find a perhaps unintended work-around to the color-coding: you can lock it in by hovering over the type, right-clicking, and then clicking on the canvas area to clear the pop-up menu.  You can then do whatever you want with it, including saving it to an image file with the color-coding intact.

    CQL

    NDepend uses a code query language (CQL) to work with your code just as if it were a database.  CQL should not be confused with the robustness of T-SQL or even LINQ, but it represents an impressive attempt at providing an expressive way to enumerate and interrogate your code. There are two main windows you'll use when working with CQL.
    The CQL Query Explorer allows you to define which queries (rules) are run as part of a report - I immediately unselected rules that I don't want in my results.  The CQL Query Edit window is where you can view or author your own rules.  The explorer window is pretty self-explanatory, so I won't mention it further, other than to say that any queries you author will appear in the custom group.

    Authoring your own queries is really hard to screw up.  The Intellisense-like pop-ups tell you what you can do while making composition easy.  I was able to create a query within two minutes of playing with the editor.  My query warns if any types that are interfaces don't start with an "I":

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    The results from the CQL Query Edit window are immediate.  That fact makes it useful for ad hoc querying.  It's worth mentioning two things that could make the experience smoother.  First, out of habit from using Visual Studio, I expect to be able to scroll and press Tab to select an item in the list (like Intellisense); instead, you have to press Enter when you scroll to the item you want.  Second, the commands are case-sensitive.  I don't see a really good reason to enforce that.

    CQL has a lot of potential, not just in enforcing code quality, but also in enforcing architectural constraints that your enterprise has defined.

    Up Next

    My next update will be the final part of the evaluation.  I will summarize my experience and provide my conclusions on the NDepend add-in.

    ** View Part 1 of the Evaluation **
    ** View Part 2 of the Evaluation **

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.


  • No More NCrunch For Me

    - by Steve Wilkes
    When I opened up Visual Studio this morning, I was greeted with this little popup: NCrunch is a Visual Studio add-in which runs your tests while you work so you know if and when you've broken anything, as well as providing coverage indicators in the IDE and coverage metrics on demand. It recently went commercial (which I thought was fair enough), and time is running out for the free version I've been using for the last couple of months. From my experiences using NCrunch I'm going to let it expire, and go about my business without it. Here's why. Before I start, let me say that I think NCrunch is a good product, which is to say it's had a positive impact on my programming. I've used it to help test-drive a library I'm making right from the start of the project, and especially at the beginning it was very useful to have it run all my tests whenever I made a change. The first problem is that while that was cool to start with, it’s recently become a bit of a chore. Problems Running Tests NCrunch has two 'engine modes' in which it can run tests for you - it can run all your tests when you make a change, or it can figure out which tests were impacted and only run those. Unfortunately, it became clear pretty early on that that second option (which is marked as 'experimental') wasn't really working for me, so I had to have it run everything. With a smallish number of tests and while I was adding new features that was great, but I've now got 445 tests (still not exactly loads) and am more in a 'clean and tidy' mode where I know that a change I'm making will probably only affect a particular subset of the tests. With that in mind it's a bit of a drag sitting there after I make a change and having to wait for NCrunch to run everything. I could disable it and manually run the tests I know are impacted, but then what's the point of having NCrunch? If the 'impacted only' engine mode worked well this problem would go away, but that's not what I found. Secondly, what's wrong with this picture? I've got 445 tests, and NCrunch has queued 455 tests to run. So it's queued duplicate tests - in this quickly-screenshotted case 10, but I've seen the total queue get up over 600. If I'm already itchy waiting for it to run all my tests against a change I know only affects a few, I'm even itchier waiting for it to run a lot of them twice. Problems With Code Coverage NCrunch marks each line of code with a dot to say if it's covered by tests - a black dot says the line isn't covered, a red dot says it's covered but at least one of the covering tests is failing, and a green dot means all the covering tests pass. It also calculates coverage statistics for you. Unfortunately, there's a couple of flaws in the coverage. Firstly, it doesn't support ExcludeFromCodeCoverage attributes. This feature has been requested and I expect will be included in a later release, but right now it doesn't. So this: ...is counted as a non-covered line, and drags your coverage statistics down. Hmph. As well as that, coverage of certain types of code is missed. This: ...is definitely covered. I am 100% absolutely certain it is, by several tests. NCrunch doesn't pick it up, down go my coverage statistics. I've had NCrunch find genuinely uncovered code which I've been able to remove, and that's great, but what's the coverage percentage on this project? Umm... I don't know. Conclusion None of these are major, tool-crippling problems, and I expect NCrunch to get much better in future releases. 
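    As an aside, for readers who haven't used it: the ExcludeFromCodeCoverage attribute mentioned above comes from System.Diagnostics.CodeAnalysis. Here's a minimal, made-up example of the kind of member the screenshots showed - tools that honor the attribute skip such members in coverage metrics, but NCrunch (at the time of writing) counted them as uncovered:

        using System.Diagnostics.CodeAnalysis;

        public class Example
        {
            // Marked as excluded from coverage metrics; NCrunch nevertheless
            // counted lines like these as uncovered, dragging the stats down.
            [ExcludeFromCodeCoverage]
            public override string ToString()
            {
                return "Example";
            }
        }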
The current version has some great features, like this: ...that's a line of code with a failing test covering it, and NCrunch can run that failing test and take me to that line exquisitely easily. That's awesome! I'd happily pay for a tool that can do that. But here's the thing: NCrunch (currently) costs $159 (about £100) for a personal licence and $289 (about £180) for a commercial one. I'm not sure which one I'd need as my project is a personal one which I'm intending to open-source, but I'm a professional, self-employed developer, but in any case - that seems like a lot of money for an imperfect tool. If it did everything it's advertised to do more or less perfectly I'd consider it, but it doesn't. So no more NCrunch for me.


  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength, I'm trying to create an algorithm that will provide a rough estimation of how much entropy a password has. I'm trying to create an algorithm that's as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following:

    - password length
    - repeated characters
    - patterns (logical)
    - different character spaces (LC, UC, numeric, special, extended)
    - dictionary attacks

    It does NOT cover the following, and SHOULD cover it WELL (though not perfectly):

    - ordering (passwords can be strictly ordered by output of this algorithm)
    - patterns (spatial)

    Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue.

    The algorithm:

        // the password to test
        password = ?
        length = length(password)

        // unique character counts from password (duplicates discarded)
        uqlca = number of unique lowercase alphabetic characters in password
        uquca = number of unique uppercase alphabetic characters
        uqd   = number of unique digits
        uqsp  = number of unique special characters (anything with a key on the keyboard)
        uqxc  = number of unique special special characters (alt codes, extended-ascii stuff)

        // algorithm parameters, total sizes of alphabet spaces
        Nlca = total possible number of lowercase letters (26)
        Nuca = total uppercase letters (26)
        Nd   = total digits (10)
        Nsp  = total special characters (32 or something)
        Nxc  = total extended ascii characters that dont fit into other categorys (idk, 50?)

        // algorithm parameters, pw strength growth rates as percentages (per character)
        flca = entropy growth factor for lowercase letters (.25 is probably a good value)
        fuca = EGF for uppercase letters (.4 is probably good)
        fd   = EGF for digits (.4 is probably good)
        fsp  = EGF for special chars (.5 is probably good)
        fxc  = EGF for extended ascii chars (.75 is probably good)

        // repetition factors. few unique letters == low factor, many unique == high
        rflca = (1 - (1 - flca) ^ uqlca)
        rfuca = (1 - (1 - fuca) ^ uquca)
        rfd   = (1 - (1 - fd  ) ^ uqd  )
        rfsp  = (1 - (1 - fsp ) ^ uqsp )
        rfxc  = (1 - (1 - fxc ) ^ uqxc )

        // digit strengths
        strength = (rflca * Nlca + rfuca * Nuca + rfd * Nd + rfsp * Nsp + rfxc * Nxc) ^ length
        entropybits = log_base_2(strength)

    A few inputs and their desired and actual entropy_bits outputs:

        INPUT            DESIRED        ACTUAL
        aaa              very pathetic  8.1
        aaaaaaaaa        pathetic       24.7
        abcdefghi        weak           31.2
        H0ley$Mol3y_     strong         72.2
        s^fU¬5ü;y34G<    wtf            88.9
        [a^36]*          pathetic       97.2
        [a^20]A[a^15]*   strong         146.8
        xkcd1**          medium         79.3
        xkcd2**          wtf            160.5

    * these 2 passwords use shortened notation, where [a^N] expands to N a's.
    ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple"

    The algorithm does realize (correctly) that increasing the alphabet size (even by a single character) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which are both 36 characters long, but the second has its 21st character capitalized. However, it does not account for the fact that a password of 36 a's is not a good idea; it's easily broken with a weak password cracker (and anyone who watches you type it will see it), and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having greater complexity density (is this even a thing?). How can I improve this algorithm?
    Addendum 1

    Dictionary attacks and pattern-based attacks seem to be the big thing, so I'll take a stab at addressing those. I could perform a comprehensive search through the password for words from a word list and replace words with tokens unique to the words they represent. Word-tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw), and I'd factor the weight into the password as I would any of the other weights.

    This word search could be specially modified to match both lowercase and uppercase letters as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be: for each non-perfect character match, give the word a bonus bit.

    I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (take the difference between each character), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern.

    At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters it symbolically represents. I made up some sort of pattern notation, which includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern. I'd do something better in actual code.

    Modified example:

        Password:          1234kitty$$$$$herpderp
        Tokenized:         1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p
        Words filtered:    1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002
        Patterns filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002

        Breakdown: 3 small, unique words and 2 patterns
        Entropy:   about 45 bits, as per the modified algorithm

        Password:       correcthorsebatterystaple
        Tokenized:      c o r r e c t h o r s e b a t t e r y s t a p l e
        Words filtered: @W6783 @W7923 @W1535 @W2285

        Breakdown: 4 small, unique words and no patterns
        Entropy:   43 bits, as per the modified algorithm

    The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like:

        entropy(b) * l * (o + 1) // o will be either zero or one

    The modified algorithm would find flaws with, and reduce the strength of, each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.
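    In case it helps to see the core formula in real code, here's a minimal C# sketch of the base strength calculation from the pseudocode above, with the "probably good" parameter values plugged in. It's only the base algorithm - no dictionary or pattern handling - and since the question doesn't give the exact parameters used to produce the table, most sample outputs won't reproduce exactly (for "aaa" it does give the table's 8.1 bits):

        using System;
        using System.Linq;

        static class PasswordEntropy
        {
            // Repetition factor: few unique characters == low, many unique == high.
            static double Rf(double f, int uniqueCount)
            {
                return 1 - Math.Pow(1 - f, uniqueCount);
            }

            // Direct translation of the pseudocode above (base algorithm only).
            public static double EntropyBits(string password)
            {
                // Unique character counts per character class; the "special"
                // class here approximates "anything with a key on the keyboard".
                int uqlca = password.Where(char.IsLower).Distinct().Count();
                int uquca = password.Where(char.IsUpper).Distinct().Count();
                int uqd   = password.Where(char.IsDigit).Distinct().Count();
                int uqsp  = password.Where(c => !char.IsLetterOrDigit(c) && c < 128).Distinct().Count();
                int uqxc  = password.Where(c => c >= 128).Distinct().Count();

                // strength = sum of (repetition factor * alphabet size) per class
                double strength = Rf(0.25, uqlca) * 26   // lowercase
                                + Rf(0.40, uquca) * 26   // uppercase
                                + Rf(0.40, uqd)   * 10   // digits
                                + Rf(0.50, uqsp)  * 32   // specials
                                + Rf(0.75, uqxc)  * 50;  // extended

                // entropybits = log2(strength ^ length) = length * log2(strength)
                return password.Length * Math.Log(strength, 2);
            }
        }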


  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
    Guest Post by Brenna Johnson, Oracle Commerce Product

    A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there’s a hidden issue in this story that most of the American people don’t understand: delivering a great commerce customer experience (CX) is hard. It shouldn’t be, but it is. The reality of the government’s issues getting the healthcare site up and running smoothly is something we in the online commerce community know too well.  If there’s one thing the botched launch of the site has taught us, it’s that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don’t go as planned. It may even give you a moment of solace - we have the same issues!

    But why?  Organizations engage too many separate vendors with different technologies, running sections or pieces of a site, just to get live. When things go wrong, it takes time to identify the problem - and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But it’s the reality of running a site with legacy technology constraints in today’s demanding, customer-centric market. This approach also means there are a lot of cooks in lots of different kitchens. You’ve got development and IT, the business and the marketing team, an external systems integrator to bring it all together, a digital agency or consultant, QA, product experts, third-party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and for managing different technologies. And again - due to legacy organizational structure and processes - this is all accepted as the normal State of the Union.

    Digital commerce has been commonplace for 15 years. Yet getting a site live, maintained, and performing requires orchestrating a cast of thousands (or at least dozens), big dollars, and some finger-crossing. But it shouldn’t. The great thing about the advent of mobile commerce and the continued maturity of online commerce is that it’s forced organizations to think from the outside, in. Consumers - whether they’re shopping for shoes or a new healthcare plan - don’t care about what technology issues or processes you have behind the scenes. They just want it to work.  They want their experience to be easy, fast, and tailored to them and their needs - whatever they are. This doesn’t sound like a tall order to the American consumer - especially since they interact with sites that do work smoothly.  But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels.  The last thing on a customer’s mind is making excuses for why they can’t buy a product - just get it to work.

    So what is the government doing? My guess is working day and night to get the site performing - and having to throw big money at the problem. In the meantime they’re sending frustrated online users to the call center, or even to a location where a trained “navigator” can help them complete their selection in person. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more).
One thing we’ve learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there’s too much at stake to ignore the sea change in how organizations are thinking about their customer. If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers – and dollars. Regardless if you’re in B2B or B2C, it’s guaranteed that your competitors are making CX a priority. Those early to the game who have made CX a priority have already begun to outpace their competition. So as you’re planning for 2014, look to the news this week. Make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer’s journey, and think from the outside, in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, or brick and mortar location in sync? Do you have the technology in place to achieve this? It’s time to give the people what they want!

    Read the article

  • Issues POSTing XML to OAuth and Signature Invalid with Ruby OAuth Gem

    - by thynctank
    [Cross-posted from the OAuth Ruby Google Group. If you couldn't help me there, don't worry about it] I'm working on integrating a project with TripIt's OAuth API and am running into a weird issue. I authenticate fine, I store and retrieve the token/secret for a given user with no problem, and I can even make GET requests to a number of services using the gem. But when I try using the one service I need POST for, I'm getting a 401 "invalid signature" response. Perhaps I'm not understanding how to pass data to the AccessToken's post method, so here's a sample of my code: xml = <<-XML <Request> <Trip> <start_date>2008-12-09</start_date> <end_date>2008-12-27</end_date> <primary_location>New York, NY</primary_location> </Trip> </Request> XML response = access_token.post('/v1/create', {:xml => xml}, {'Content-Type' => 'application/x-www-form-urlencoded'}) I've tried this with and without escaping the xml string beforehand. The guys at TripIt seemed to think that perhaps the xml param wasn't getting included in the signature_base_string, but when I output that (from lib/signature/base.rb) I see: POST&https%3A%2F%2Fapi.tripit.com%2Fv1%2Fcreate&oauth_consumer_key %3D%26oauth_nonce %3Djs73Y9caeuffpmPVc6lqxhlFN3Qpj7OhLcfBTYv8Ww%26oauth_signature_method %3DHMAC-SHA1%26oauth_timestamp%3D1252011612%26oauth_token %3D%26oauth_version%3D1.0%26xml%3D%25253CRequest%25253E %25250A%252520%252520%25253CTrip%25253E%25250A %252520%252520%252520%252520%25253Cstart_date%25253E2008-12-09%25253C %252Fstart_date%25253E%25250A %252520%252520%252520%252520%25253Cend_date%25253E2008-12-27%25253C %252Fend_date%25253E%25250A %252520%252520%252520%252520%25253Cprimary_location%25253ENew %252520York%252C%252520NY%25253C%252Fprimary_location%25253E%25250A %252520%252520%25253C%252FTrip%25253E%25250A%25253C%252FRequest%25253E %25250A This seems correct to me. I output the signature (from the same file) and it doesn't match the oauth_signature param of the Auth header in lib/client/net_http.rb. It's been URL-encoded in the auth header. Is this correct? Does anyone know if the gem is broken, or if there's a fix somewhere? I'm finding it hard to trace through some of the code.
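    For reference, here is a from-scratch Python sketch of the OAuth 1.0 HMAC-SHA1 signing scheme (per the OAuth 1.0 spec), not the Ruby gem's internals; the function names are mine. It illustrates why a form-encoded body parameter like the xml param above must appear in the sorted parameter list: if client and server disagree about its encoding or inclusion, the signatures won't match.

    ```python
    import base64
    import hashlib
    import hmac
    from urllib.parse import quote

    def pct(s):
        # OAuth 1.0 uses strict RFC 3986 percent-encoding:
        # only A-Za-z0-9, '-', '.', '_', '~' stay unescaped.
        return quote(str(s), safe="~")

    def oauth_signature(method, url, params, consumer_secret, token_secret=""):
        # 1. Percent-encode and sort every parameter (the oauth_* params AND
        #    any form-encoded body params, such as the xml param above).
        pairs = sorted((pct(k), pct(v)) for k, v in params.items())
        normalized = "&".join(f"{k}={v}" for k, v in pairs)
        # 2. Base string: METHOD & encoded URL & encoded parameter string.
        base_string = "&".join([method.upper(), pct(url), pct(normalized)])
        # 3. Signing key: encoded consumer secret '&' encoded token secret.
        key = f"{pct(consumer_secret)}&{pct(token_secret)}"
        digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
        return base64.b64encode(digest).decode()
    ```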

    Read the article

  • How can you add a UIGestureRecognizer to a UIBarButtonItem as in the common undo/redo UIPopoverContr

    - by SG
    Problem In my iPad app, I cannot attach a popover to a bar button item so that it appears only after press-and-hold events. But this seems to be standard for undo/redo. How do other apps do this? Background I have an undo button (UIBarButtonSystemItemUndo) in the toolbar of my UIKit (iPad) app. When I press the undo button, it fires its action, which is undo:, and that executes correctly. However, the "standard UE convention" for undo/redo on iPad is that pressing undo executes an undo, while pressing and holding the button reveals a popover controller where the user selects either "undo" or "redo" until the controller is dismissed. The normal way to attach a popover controller is with presentPopoverFromBarButtonItem:, and I can configure this easily enough. To get this to show only after press-and-hold, we have to set a view to respond to "long press" gesture events, as in this snippet: UILongPressGestureRecognizer *longPressOnUndoGesture = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleLongPressOnUndoGesture:)]; //Broken because there is no customView in a UIBarButtonSystemItemUndo item [self.undoButtonItem.customView addGestureRecognizer:longPressOnUndoGesture]; [longPressOnUndoGesture release]; With this, after a press-and-hold on the view the method handleLongPressOnUndoGesture: will get called, and within this method I will configure and display the popover for undo/redo. So far, so good. The problem is that there is no view to attach to: self.undoButtonItem is a UIBarButtonItem, not a view. Possible solutions 1) [The ideal] Attach the gesture recognizer to the bar button item. It is possible to attach a gesture recognizer to a view, but a UIBarButtonItem is not a view. It does have a .customView property, but that property is nil when the bar button item is a standard system type (as it is in this case). 2) Use another view. I could use the UIToolbar, but that would require some weird hit-testing and be an all-around hack, if even possible in the first place. There is no other alternative view to use that I can think of. 3) Use the customView property. Standard types like UIBarButtonSystemItemUndo have no customView (it is nil). Setting the customView will erase the standard contents, which it needs to have. This would amount to re-implementing all the look and function of UIBarButtonSystemItemUndo, again if even possible to do. Question How can I attach a gesture recognizer to this "button"? More specifically, how can I implement the standard press-and-hold-to-show-redo-popover in an iPad app? Ideas? Thank you very much, especially if someone actually has this working in their app (I'm thinking of you, Omni) and wants to share...

    Read the article

  • Memory allocation problem with SVMs in OpenCV

    - by worksintheory
    Hi, I've been using OpenCV happily for a while, but now I have a problem that has bugged me for quite some time. The following code is a reasonably minimal example of my problem: #include <cv.h> #include <ml.h> using namespace cv; int main(int argc, char **argv) { int sampleCountForTesting = 2731; //BROKEN: Breaks svm.train_auto(...) for values of 2731 or greater! Mat trainingData( sampleCountForTesting, 1, CV_32FC1, Scalar::all(0.0) ); Mat trainingResponses( sampleCountForTesting, 1, CV_32FC1, Scalar::all(0.0) ); for(int j = 0; j < 6; j++) { trainingData.at<float>( j, 0 ) = (float) (j%2); trainingResponses.at<float>( j, 0 ) = (float) (j%2); //Setting a few values so I don't get a "single class" error } CvSVMParams svmParams( 100, //100 is CvSVM::C_SVC, 2, //2 is CvSVM::RBF, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, NULL, TermCriteria( TermCriteria::MAX_ITER | TermCriteria::EPS, 2, 1.0 ) ); CvSVM svm = CvSVM(); svm.train_auto( trainingData, trainingResponses, Mat(), Mat(), svmParams ); return 0; } I just create matrices to hold the training data and responses, set a few entries to some value other than zero, and then run the SVM. But it breaks whenever there are 2731 rows or more: OpenCV Error: One of arguments' values is out of range (requested size is negative or too big) in cvMemStorageAlloc, file [omitted]/opencv/OpenCV-2.2.0/modules/core/src/datastructs.cpp, line 332 With fewer rows it seems to be fine, and a classifier trained in a similar manner to the above seems to give reasonable output. Am I doing something wrong? I'm pretty sure it's not actually anything to do with lack of memory, as I've got 6 GB, and the code works fine when the data has 2730 rows and 10000 columns, which is a much bigger allocation. I'm running OpenCV 2.2 on OS X 10.6, and initially I thought the problem might be related to this bug, if for some reason the fix wasn't included in the MacPorts version. Now I've also tried downloading the most recent stable version from the OpenCV site, building it with cmake, and using that, but I still get the same error, and the fix is definitely included in that version. Any help would be much appreciated! Thanks,

    Read the article

  • CruiseControl.Net Publisher email failing to send e-mail

    - by CrimsonX
    So here's my problem: I cannot seem to configure CruiseControl.NET to send out an e-mail to me when a build occurs. I copied the example from the documentation and filled it in with my own values. http://confluence.public.thoughtworks.org/display/CCNET/Email+Publisher Here's the relevant section of ccnet.config below <publishers> <merge> <files> <file>C:\Build\Temp\*.xml</file> </files> </merge> <xmllogger logDir="buildlogs" /> <statistics> <statisticList> <statistic /> </statisticList> </statistics> <email includeDetails="TRUE" mailhostUsername="user" mailhostPassword="password" useSSL="TRUE"> <from>[email protected]</from> <mailhost>mail.mycompanysmtpserver.com</mailhost> <users> <user name="MyName Lastname" group="buildmaster" address="[email protected]" /> </users> <groups> <group name="buildmaster"> <notifications> <notificationType>Always</notificationType> </notifications> </group> </groups> <modifierNotificationTypes> <NotificationType>Failed</NotificationType> <NotificationType>Fixed</NotificationType> </modifierNotificationTypes> <subjectSettings> <subject buildResult="StillBroken" value="Build is still broken for {CCNetProject}" /> </subjectSettings> </email> </publishers> I have had a successful CruiseControl.NET server configured for some time, and it successfully updates people through CCTray, but I need to add e-mail support as well. I've already looked at the relevant StackOverflow articles like this one and many more, and tried my hand at googling the solution, but I don't know what I could be doing wrong. The only other thing I'd like to add is that I can send/receive e-mails using my username/password with the SMTP server I received from IT (my mail works properly), and I followed the steps on testing an SMTP server here and everything looks fine: http://support.microsoft.com/kb/153119 Does anyone have any ideas as to why I'm getting this problem, or how to troubleshoot it further?
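    One way to narrow this down is to prove the SMTP host, credentials, and TLS mode work outside CruiseControl.NET entirely. Below is a minimal Python smoke test; the addresses and credentials are placeholders (substitute the values from ccnet.config), and whether useSSL corresponds to STARTTLS or implicit SSL depends on the server, so try both.

    ```python
    import smtplib
    from email.mime.text import MIMEText

    # Placeholder values -- substitute the ones from your ccnet.config.
    HOST, USER, PASSWORD = "mail.mycompanysmtpserver.com", "user", "password"

    msg = MIMEText("CruiseControl.NET SMTP smoke test")
    msg["Subject"] = "SMTP test"
    msg["From"] = "build@example.com"
    msg["To"] = "me@example.com"

    # If this STARTTLS attempt on port 587 fails, try implicit SSL instead:
    # smtplib.SMTP_SSL(HOST, 465).
    with smtplib.SMTP(HOST, 587, timeout=10) as server:
        server.starttls()
        server.login(USER, PASSWORD)
        server.send_message(msg)
    print("Host, credentials, and TLS settings accepted")
    ```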

    Read the article

  • Are there any working implementations of the rolling hash function used in the Rabin-Karp string sea

    - by c14ppy
    I'm looking to use a rolling hash function so I can take hashes of n-grams of a very large string. For example: "stackoverflow", broken up into 5-grams, would be: "stack", "tacko", "ackov", "ckove", "kover", "overf", "verfl", "erflo", "rflow" This is ideal for a rolling hash function because after I calculate the first n-gram hash, the following ones are relatively cheap to calculate: I simply drop the first letter's contribution from the previous hash and add the new last letter. I know that in general this hash function is generated as: H = c1*a^(k-1) + c2*a^(k-2) + c3*a^(k-3) + ... + ck*a^0 where a is a constant and c1,...,ck are the input characters. If you follow this link on the Rabin-Karp string search algorithm, it states that "a" is usually some large prime. I want my hashes to be stored in 32-bit integers, so how large a prime should "a" be such that I don't overflow my integer? Is there an existing implementation of this hash function somewhere that I could use? Here is an implementation I created: public class hash2 { public int prime = 101; public int hash(String text) { int hash = 0; for(int i = 0; i < text.length(); i++) { char c = text.charAt(i); hash += c * (int) (Math.pow(prime, text.length() - 1 - i)); } return hash; } public int rollHash(int previousHash, String previousText, String currentText) { char firstChar = previousText.charAt(0); char lastChar = currentText.charAt(currentText.length() - 1); int firstCharHash = firstChar * (int) (Math.pow(prime, previousText.length() - 1)); int hash = (previousHash - firstCharHash) * prime + lastChar; return hash; } public static void main(String[] args) { hash2 hashify = new hash2(); int firstHash = hashify.hash("mydog"); System.out.println(firstHash); System.out.println(hashify.hash("ydogr")); System.out.println(hashify.rollHash(firstHash, "mydog", "ydogr")); } } I'm using 101 as my prime. Does it matter if my hashes will overflow? I think this is desirable but I'm not sure. Does this seem like the right way to go about this?
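    Two observations on the overflow question. First, overflow by itself doesn't break a rolling hash: if every operation is performed in the same fixed-width modular arithmetic, the algebra that makes rolling work holds mod 2^32 exactly as it does over the integers. Second, the Java code above has a subtler hazard: Math.pow returns a double, and Java's double-to-int narrowing clamps at Integer.MAX_VALUE instead of wrapping, so once prime^(n-1) exceeds the int range (n >= 6 with prime 101), hash() and rollHash() can silently disagree. Here is a small Python sketch with the modulus made explicit (function names are mine):

    ```python
    # Sketch: the same polynomial hash with the 32-bit wraparound made explicit.
    MASK = (1 << 32) - 1   # all arithmetic mod 2**32
    PRIME = 101

    def hash_ngram(text):
        h = 0
        for ch in text:                      # Horner's rule: no pow() needed
            h = (h * PRIME + ord(ch)) & MASK
        return h

    def roll_hash(prev, old_char, new_char, n):
        # Remove old_char's term (ord * PRIME**(n-1)), shift by PRIME, append
        # new_char. Because everything is reduced mod 2**32, wraparound in the
        # intermediate values cancels out exactly.
        top = (ord(old_char) * pow(PRIME, n - 1, 1 << 32)) & MASK
        return ((prev - top) * PRIME + ord(new_char)) & MASK

    h = hash_ngram("mydog")
    assert roll_hash(h, "m", "r", 5) == hash_ngram("ydogr")
    ```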

    Read the article

  • What should I do when the managed object context fails to save?

    - by dontWatchMyProfile
    Example: I have a Cat entity with a catAge attribute. In the data modeler, I configured catAge as an int with a max of 100. Then I do this: [newManagedObject setValue:[NSNumber numberWithInt:125] forKey:@"catAge"]; // Save the context. NSError *error = nil; if (![context save:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); } I'm getting an error in the console, like this: 2010-06-12 11:40:41.947 CatTest[2250:207] Unresolved error Error Domain=NSCocoaErrorDomain Code=1610 UserInfo=0x10164d0 "Operation could not be completed. (Cocoa error 1610.)", { NSLocalizedDescription = "Operation could not be completed. (Cocoa error 1610.)"; NSValidationErrorKey = catAge; NSValidationErrorObject = <NSManagedObject: 0x10099f0> (entity: Cat; id: 0x1006a90 <x-coredata:///Cat/t3BCBC34B-8405-4F16-B591-BE804B6811562> ; data: { catAge = 125; catName = "No Name"; }); NSValidationErrorPredicate = SELF <= 100; NSValidationErrorValue = 125; } Well, so I have a validation error. But the odd thing is that the MOC seems to be broken after this. If I just tap "add" to add another invalid Cat object and save that, I'm getting this: 2010-06-12 11:45:13.857 CatTest[2250:207] Unresolved error Error Domain=NSCocoaErrorDomain Code=1560 UserInfo=0x1232170 "Operation could not be completed. (Cocoa error 1560.)", { NSDetailedErrors = ( Error Domain=NSCocoaErrorDomain Code=1610 UserInfo=0x1215f00 "Operation could not be completed. (Cocoa error 1610.)", Error Domain=NSCocoaErrorDomain Code=1610 UserInfo=0x1209fc0 "Operation could not be completed. (Cocoa error 1610.)" ); } That now reports two errors. BUT: when I now try to delete a valid, existing object from the table view (using the default Core Data template in a navigation-based app), the app crashes! All I get in the console is: 2010-06-12 11:47:18.931 CatTest[2250:207] Unresolved error Error Domain=NSCocoaErrorDomain Code=1560 UserInfo=0x123eb30 "Operation could not be completed. (Cocoa error 1560.)", { NSDetailedErrors = ( Error Domain=NSCocoaErrorDomain Code=1610 UserInfo=0x1217010 "Operation could not be completed. (Cocoa error 1610.)", Error Domain=NSCocoaErrorDomain Code=1610 UserInfo=0x123ea80 "Operation could not be completed. (Cocoa error 1610.)" ); } ...so no idea where or why it crashes, but it does. So the question is, what are the necessary steps to take when there's a validation error?

    Read the article

  • sql: Group by x,y,z; return grouped by x,y with lowest f(z)

    - by Sai Emrys
    This is for http://cssfingerprint.com I collect timing stats about how fast the different methods I use perform on different browsers, etc., so that I can optimize the scraping speed. Separately, I have a report about what each method returns for a handful of URLs with known-correct values, so that I can tell which methods are bogus on which browsers. (Each is different, alas.) The related tables look like this: CREATE TABLE `browser_tests` ( `id` int(11) NOT NULL AUTO_INCREMENT, `bogus` tinyint(1) DEFAULT NULL, `result` tinyint(1) DEFAULT NULL, `method` varchar(255) DEFAULT NULL, `url` varchar(255) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=33784 DEFAULT CHARSET=latin1 CREATE TABLE `method_timings` ( `id` int(11) NOT NULL AUTO_INCREMENT, `method` varchar(255) DEFAULT NULL, `batch_size` int(11) DEFAULT NULL, `timing` int(11) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=28849 DEFAULT CHARSET=latin1 (user_agent is broken down pre-insert into browser, version, and os from a small list of recognized values using regex; I keep the original user-agent string just in case.) I have a query like this that tells me the average timing for every non-bogus browser / version / method tuple: select c, avg(bogus) as bog, timing, method, browser, version from browser_tests as b inner join ( select count(*) as c, round(avg(timing)) as timing, method, browser, version from method_timings group by browser, version, method having c > 10 order by browser, version, timing ) as t using (browser, version, method) group by browser, version, method having bog < 1 order by browser, version, timing; Which returns something like: c bog tim method browser version 88 0.8333 184 reuse_insert Chrome 4.0.249.89 18 0.0000 238 mass_insert_width Chrome 4.0.249.89 70 0.0400 246 mass_insert Chrome 4.0.249.89 70 0.0400 327 mass_noinsert Chrome 4.0.249.89 88 0.0556 367 reuse_reinsert Chrome 4.0.249.89 88 0.0556 383 jquery Chrome 4.0.249.89 88 0.0556 863 full_reinsert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 187 0.8806 109 reuse_insert Chrome 5.0.307.11 123 0.0000 110 mass_insert_width Chrome 5.0.307.11 176 0.0000 231 mass_noinsert Chrome 5.0.307.11 176 0.0000 237 mass_insert Chrome 5.0.307.11 187 0.0000 314 reuse_reinsert Chrome 5.0.307.11 187 0.0000 372 full_reinsert Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 12 0.2500 102 jquery Chrome 5.0.335.0 [...] I want to modify this query to return only the browser/version/method with the lowest timing - i.e. something like: 88 0.8333 184 reuse_insert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 [...] How can I do this, while still returning the method that goes with that lowest timing? I could filter it app-side, but I'd rather do this in mysql since it'd work better with my caching.
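    What's being asked for here is the classic greatest-n-per-group selection: for each (browser, version), the row holding the minimum timing. In MySQL that is usually done by joining the aggregated subquery against a second subquery that computes MIN(timing) per (browser, version). The selection logic itself is sketched below in Python for clarity, using rows shaped like the sample output above (the tuple layout is assumed from that output):

    ```python
    # Greatest-n-per-group in miniature: keep, for each (browser, version),
    # only the row with the smallest timing.
    rows = [
        # (c, bog, timing, method, browser, version) -- from the sample output
        (88, 0.8333, 184, "reuse_insert",      "Chrome", "4.0.249.89"),
        (18, 0.0000, 238, "mass_insert_width", "Chrome", "4.0.249.89"),
        (187, 0.0000, 105, "jquery",           "Chrome", "5.0.307.11"),
        (187, 0.8806, 109, "reuse_insert",     "Chrome", "5.0.307.11"),
        (12, 0.7500, 82,  "reuse_insert",      "Chrome", "5.0.335.0"),
    ]

    best = {}
    for row in rows:
        key = (row[4], row[5])                        # group by (browser, version)
        if key not in best or row[2] < best[key][2]:  # lower timing wins
            best[key] = row

    for row in sorted(best.values(), key=lambda r: (r[4], r[5])):
        print(row)
    ```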

    Read the article

  • Calculate year for end date: PostgreSQL

    - by Dave Jarvis
    Background Users can pick dates as shown in the following screen shot: Any starting month/day and ending month/day combinations are valid, such as: Mar 22 to Jun 22 Dec 1 to Feb 28 The second combination is difficult (I call it the "tricky date scenario") because the year for the ending month/day is before the year for the starting month/day. That is to say, for the year 1900 (also shown selected in the screen shot above), the full dates would be: Dec 22, 1900 to Feb 28, 1901 Dec 22, 1901 to Feb 28, 1902 ... Dec 22, 2007 to Feb 28, 2008 Dec 22, 2008 to Feb 28, 2009 Problem Writing a SQL statement that selects values from a table with dates that fall between the start month/day and end month/day, regardless of how the start and end days are selected. In other words, this is a year wrapping problem. Inputs The query receives as parameters: Year1, Year2: The full range of years, independent of month/day combination. Month1, Day1: The starting day within the year to gather data. Month2, Day2: The ending day within the year (or the next year) to gather data. Previous Attempt Consider the following MySQL code (that worked): end_year = start_year + greatest( -1 * sign( datediff( date( concat_ws('-', year, end_month, end_day ) ), date( concat_ws('-', year, start_month, start_day ) ) ) ), 0 ) How it works, with respect to the tricky date scenario: Create two dates in the current year. The first date is Dec 22, 1900 and the second date is Feb 28, 1900. Count the difference, in days, between the two dates. If the result is negative, it means the year for the second date must be incremented by 1. In this case: Add 1 to the current year. Create a new end date: Feb 28, 1901. Check to see if the date range for the data falls between the start and calculated end date. If the result is positive, the dates have been provided in chronological order and nothing special needs to be done. This worked in MySQL because the difference in dates would be positive or negative. In PostgreSQL, the equivalent functionality always returns a positive number, regardless of their relative chronological order. Question How should the following (broken) code be rewritten for PostgreSQL to take into consideration the relative chronological order of the starting and ending month/day pairs (with respect to an annual temporal displacement)? SELECT m.amount FROM measurement m WHERE (extract(MONTH FROM m.taken) >= month1 AND extract(DAY FROM m.taken) >= day1) AND (extract(MONTH FROM m.taken) <= month2 AND extract(DAY FROM m.taken) <= day2) Any thoughts, comments, or questions? (The dates are pre-parsed into MM/DD format in PHP. My preference is for a pure PostgreSQL solution, but I am open to suggestions on what might make the problem simpler using PHP.) Versions PostgreSQL 8.4.4 and PHP 5.2.10
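    The year-wrap rule itself reduces to a single comparison: if the ending month/day sorts before the starting month/day, the end date belongs to year + 1. A small Python sketch of just that rule (Feb 29 edge cases ignored for brevity); in PostgreSQL the same comparison should be expressible as a row comparison, e.g. (month2, day2) < (month1, day1):

    ```python
    def end_year(year, month1, day1, month2, day2):
        # The range wraps into the next year exactly when the end month/day
        # sorts before the start month/day; tuple comparison captures this.
        return year + 1 if (month2, day2) < (month1, day1) else year

    assert end_year(1900, 3, 22, 6, 22) == 1900    # Mar 22 to Jun 22
    assert end_year(1900, 12, 22, 2, 28) == 1901   # Dec 22 to Feb 28 wraps
    ```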

    Read the article

  • App starts in 1.5 emulator but doesn't in 1.6

    - by DixieFlatline
    My app works on the 1.5 emulator and a 1.5 device. When I try to start it on the 1.6 emulator, it produces strange exceptions (it doesn't even start). I don't have a 1.6 device to check whether the app works on real hardware. I get some warnings in Eclipse (warning: Ignoring InnerClasses attribute for an anonymous inner class that doesn't come with an associated EnclosingMethod attribute. This class was probably produced by a broken compiler.) and can't get rid of them (I think they come from some Apache jars that I need to make HTTP multipart POSTs). Is it possible that these jars are the cause of my exceptions in 1.6, or is it something else? My logcat: 04-29 16:14:55.874: ERROR/AndroidRuntime(392): Uncaught handler: thread main exiting due to uncaught exception 04-29 16:14:55.894: ERROR/AndroidRuntime(392): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.poslji.gor/com.poslji.gor.FormFiller}: java.lang.NumberFormatException: unable to parse 'null' as integer 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2401) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at android.app.ActivityThread.access$2100(ActivityThread.java:116) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at android.os.Handler.dispatchMessage(Handler.java:99) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at android.os.Looper.loop(Looper.java:123) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at android.app.ActivityThread.main(ActivityThread.java:4203) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at java.lang.reflect.Method.invokeNative(Native Method) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at java.lang.reflect.Method.invoke(Method.java:521) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at dalvik.system.NativeStart.main(Native Method) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): Caused by: java.lang.NumberFormatException: unable to parse 'null' as integer 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at java.lang.Integer.parseInt(Integer.java:358) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at java.lang.Integer.parseInt(Integer.java:333) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at com.poslji.gor.FormFiller.nastavi(FormFiller.java:322) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at com.poslji.gor.FormFiller.onCreate(FormFiller.java:188) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123) 04-29 16:14:55.894: ERROR/AndroidRuntime(392): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364)

    Read the article

  • Complex error handling

    - by Caspin
    I've got a particularly ornery piece of network code. I'm using asio, but that really doesn't matter for this question. I assume there is no way to unbind a socket other than closing it. The problem is that open(), bind(), and listen() can all throw a system_error. So I handled the code with a simple try/catch. The code as written is broken. using namespace boost::asio; class Thing { public: ip::tcp::endpoint m_address; ip::tcp::acceptor m_acceptor; /// connect should handle all of its exceptions internally. bool connect() { try { m_acceptor.open( m_address.protocol() ); m_acceptor.set_option( tcp::acceptor::reuse_address(true) ); m_acceptor.bind( m_address ); m_acceptor.listen(); m_acceptor.async_accept( /*stuff*/ ); } catch( const boost::system::system_error& error ) { assert(m_acceptor.is_open()); m_acceptor.close(); return false; } return true; } /// don't call disconnect unless connect previously succeeded. void disconnect() { // other stuff needed to disconnect is omitted m_acceptor.close(); } }; The error: if the socket fails to open, the catch block tries to close it and throws another system_error about closing an acceptor that has never been opened. One solution is to add an if( acceptor.is_open() ) in the catch block, but that tastes wrong. Kinda like mixing C-style error checking with C++ exceptions. If I were to go that route, I may as well use the non-throwing version of open(). boost::system::error_code error; acceptor.open( address.protocol(), error ); if( ! error ) { try { acceptor.set_option( tcp::acceptor::reuse_address(true) ); acceptor.bind( address ); acceptor.listen(); acceptor.async_accept( /*stuff*/ ); } catch( const boost::system::system_error& error ) { assert(acceptor.is_open()); acceptor.close(); return false; } } return !error; Is there an elegant way to handle these possible exceptions using RAII and try/catch blocks? Am I just wrong-headed in trying to avoid if( error condition ) style error handling when using exceptions?
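    For what it's worth, the usual RAII-flavored answer is a scope guard: arm a cleanup only after each acquisition succeeds, then disarm everything on success. Since the question is C++/asio, the sketch below uses Python's contextlib.ExitStack purely as a language-neutral illustration of the pattern; the socket calls are stand-ins, not the asio API.

    ```python
    import socket
    from contextlib import ExitStack

    # Scope-guard sketch: each cleanup is registered only after its step
    # succeeds, so an exception unwinds exactly the steps that completed --
    # no is_open() check is ever needed in a handler.
    def open_listener(address):
        with ExitStack() as stack:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            stack.callback(sock.close)   # armed only after a successful open
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(address)
            sock.listen()
            stack.pop_all()              # success: disarm, keep the socket open
            return sock
    ```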

    Read the article

  • Java Nimbus LAF with transparent text fields

    - by Software Monkey
    I have an application that uses disabled JTextFields in several places which are intended to be transparent, allowing the background to show through instead of the text field's normal background. When running the new Nimbus LAF these fields are opaque (despite calling setOpaque(false)), and my UI is broken. It's as if the LAF is ignoring the opaque property. Setting a background color explicitly is difficult in several places and less than optimal due to background images, and it actually doesn't work: the LAF still paints its default background over the top, leaving a border-like appearance (the splash screen below has the background explicitly set to match the image). Any ideas on how I can get Nimbus to not paint the background for a JTextField? Note: I need a JTextField, rather than a JLabel, because I need the thread-safe setText(), and wrapping capability. Note: My fallback position is to continue using the system LAF, but Nimbus does look substantially better. See example images below. Conclusions The surprise at this behavior is due to a misinterpretation of what setOpaque() is meant to do - from the Nimbus bug report: This is a problem with the original design of Swing and how it has been confusing for years. The issue is that setOpaque(false) has had a side effect in existing LAFs - that of hiding the background - which is not really what it is meant for. It is meant to say that the component may have transparent parts and Swing should paint the parent component behind it. It's unfortunate that the Nimbus components also appear not to honor setBackground(null), which would otherwise be the recommended way to stop the background painting. Setting a fully transparent background seems unintuitive to me. In my opinion, setOpaque()/isOpaque() is a faulty public API choice which should have been only: public boolean isFullyOpaque(); I say this because isOpaque()==true is a contract with Swing that the component subclass will take responsibility for painting its entire background - which means the parent can skip painting that region if it wants (an important performance enhancement). Something external cannot (legitimately) directly change this contract, whose fulfillment may be coded into the component. So the opacity of the component should not have been settable using setOpaque(). Instead, something like setBackground(null) should cause many components to "no longer have a background" and therefore become not fully opaque. By way of example, in an ideal world most components should have an isOpaque() that looks like this: public boolean isOpaque() { return (background!=null); }

    Read the article

  • VSTS test deployment and invalid assembly culture

    - by Merlyn Morgan-Graham
    I have a DLL that I'm testing, which links to a DLL that has what I think is an invalid value for AssemblyCulture. The value is "Neutral" (notice the upper-case "N"), whereas the DLL I'm testing, and every other DLL in my project, has a value of "neutral" (because they specify AssemblyCulture("")). When I try to deploy the DLL that links to the problem DLL, I get this error in VSTS: Failed to queue test run '...': Culture is not supported. Parameter name: name Neutral is an invalid culture identifier. <Exception>System.Globalization.CultureNotFoundException: Culture is not supported. Parameter name: name Neutral is an invalid culture identifier. at System.Globalization.CultureInfo..ctor(String name, Boolean useUserOverride) at System.Globalization.CultureInfo..ctor(String name) at System.Reflection.RuntimeAssembly.GetReferencedAssemblies(RuntimeAssembly assembly) at System.Reflection.RuntimeAssembly.GetReferencedAssemblies() at Microsoft.VisualStudio.TestTools.Utility.AssemblyLoadWorker.ProcessChildren(Assembly assembly) at Microsoft.VisualStudio.TestTools.Utility.AssemblyLoadWorker.GetDependentAssemblies(String path) at Microsoft.VisualStudio.TestTools.Utility.AssemblyLoadWorker.GetDependentAssemblies(String path) at Microsoft.VisualStudio.TestTools.Utility.AssemblyLoadStrategy.GetDependentAssemblies(String path) at Microsoft.VisualStudio.TestTools.Utility.AssemblyHelper.GetDependentAssemblies(String path, DependentAssemblyOptions options, String configFile) at Microsoft.VisualStudio.TestTools.TestManagement.DeploymentManager.GetDependencies(String master, String configFile, TestRunConfiguration runConfig, DeploymentItemOrigin dependencyOrigin, List`1 dependencyDeploymentItems, Dictionary`2 missingDependentAssemblies) at Microsoft.VisualStudio.TestTools.TestManagement.DeploymentManager.DoDeployment(TestRun run, FileCopyService fileCopyService) at Microsoft.VisualStudio.TestTools.TestManagement.ControllerProxy.SetupTestRun(TestRun run, Boolean isNewTestRun, FileCopyService fileCopyService, DeploymentManager deploymentManager) at Microsoft.VisualStudio.TestTools.TestManagement.ControllerProxy.SetupRunAndListener(TestRun run, FileCopyService fileCopyService, DeploymentManager deploymentManager) at Microsoft.VisualStudio.TestTools.TestManagement.ControllerProxy.QueueTestRunWorker(Object state)</Exception> Even if I don't link to the DLL (in my VSTS wrapper test, or in the NUnit test), as soon as I add it in my GenericTest file (I'm wrapping NUnit tests), I get that exception. We don't have the source for the problem DLL, and it is also code signed, so I can't solve this by recompiling. Is there a way to skip deploying the dependencies of a DLL DeploymentItem, to fix or disable the culture check, or to work around this by convoluted means (maybe somehow embed the assembly)? Is there a way to override the value for the culture, short of hacking the DLL (and removing code signing so the hack works)? Maybe with an external manifest? Any correct solution must work without weird changes to production code. We can't deploy a hacked DLL, for example. It also must allow the DLL to be instrumented for code coverage. Additional note: I do get a linker warning when compiling the DLL under test that links to the problem DLL, but this hasn't broken anything but VSTS, and multiple versions have shipped.

    Read the article

< Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >