Search Results

Search found 27766 results on 1111 pages for 'bad idea jeans'.

Page 96/1111 | < Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:
• It is cost effective
• It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
• It solves the right problem
With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged. A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:
• Specific details of security vulnerabilities
• Including exploit information or technical details of the vulnerability
• Whether or not there is any mitigation available (as in a patch)
PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed - to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!” It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want. Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates. So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell-all to PCI and put their customers at risk, to do one of the following:
• Contact PCI with your concerns
• Contact Oracle (we are looking for vendors to sign our statement of concern)
• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application
I like to be charitable and say “PCI meant well” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
By Way of Explanation… Unrelated to PCI whatsoever, and the explanation for why I have not been blogging a lot recently, I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer. * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • 256 Worker Role 3D Rendering Demo is now a Lab on my Azure Course

    - by Alan Smith
    Ever since I came up with the crazy idea of creating an Azure application that would spin up 256 worker roles (please vote if you like it) to render a 3D animation created using the Kinect depth camera, I have been trying to think of something useful to do with it. I have also been busy working on developing training materials for a Windows Azure course that I will be delivering through a training partner in Stockholm, and for customers wanting to learn Windows Azure. I hit on the idea of combining the render demo and a course lab and creating a lab where the students would create and deploy their own mini render farms, which would participate in a single render job, consisting of 2,000 frames. The architecture of the solution is shown below. As students would be creating and deploying their own applications, I thought it would be fun to introduce some competitiveness into the lab. In the 256 worker role demo I capture the rendering statistics for each role, so it was fairly simple to include the student's name in these statistics. This allowed the process monitor application to capture the number of frames each student had rendered and display a high-score table. When I demoed the application I deployed one instance that started rendering a frame every few minutes, and the challenge for the students was to deploy and scale their applications, and then overtake my single role instance by the end of the lab time. I had the process monitor running on the projector during the lab so the class could see the progress of their deployments, and how they were performing against my implementation and their classmates. When I tested the lab for the first time in Oslo last week it was a great success; the students were keen to be the first to build and deploy their solution and then watch the frames appear. As the students mostly had MSDN subscriptions they were able to scale to the full 20 worker role instances, and before long we had over 100 worker roles working on the animation. There were, however, a couple of issues caused by the competitive nature of the lab. The first student to scale the application to 20 instances would render the most frames and win; there was no way for others to catch up. Also, as they were competing against each other, there was no incentive to help others on the course get their application up and running. I have now re-written the lab to divide the students into teams that will compete to render the most frames. This means that if one developer on a team can deploy and scale quickly, the other teams still have a chance to catch up. It also means that if a student finishes quickly and puts their team in the lead they will have an incentive to help the other developers on their team get up and running. As I was using “Sharks with Lasers” for a lot of my demos, and reserved the sharkswithfreakinlasers namespaces for some of the Azure services (well somebody had to do it), the students came up with some creative alternatives, like “Camels with Cannons” and “Honey Badgers with Homing Missiles”. That gave me the idea for the teams having to choose a creative name involving animals and weapons. The team rendering architecture diagram is shown below.
Render Challenge Rules
In order to ensure fair play a number of rules are imposed on the lab.
·         The class will be divided into teams; each team chooses a name.
·         The team name must consist of a ferocious animal combined with a hazardous weapon.
·         Teams can allocate as many worker roles as they can muster to the render job.
·         Frame processing statistics and rendered frames will be vigilantly monitored; any cheating, tampering, and other foul play will result in penalties.
The screenshot below shows an example of the team render farm in action: Badgers with Bombs have taken a lead over Camels with Cannons, and both are leaving the Sharks with Lasers standing. If you are interested in attending a scheduled delivery of my Windows Azure or Windows Azure Service Bus courses, or would like on-site training, more details are here.
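    Since none of the demo's code appears in this excerpt, the following is only a hedged C# sketch of how one render worker in the lab architecture described above might look: a loop that pulls frame numbers from the shared 2,000-frame job, renders them, and tags each completed frame with the team name so the process monitor can build the high-score table. IFrameQueue, IRenderer, IStatsTable, RenderWorker and the example team name are hypothetical stand-ins for the Azure queue, the Kinect-frame renderer and the statistics store used in the real demo, not names taken from it.

    // Hypothetical sketch of one render worker (assumed names, not from the demo).
    using System;
    using System.Threading;

    public interface IFrameQueue
    {
        int? TryDequeueFrame();            // next frame number of the 2,000-frame job, or null if none is waiting
    }

    public interface IRenderer
    {
        void RenderFrame(int frameNumber); // renders a single frame of the animation
    }

    public interface IStatsTable
    {
        void RecordFrame(string teamName, int frameNumber, TimeSpan duration);
    }

    public class RenderWorker
    {
        private readonly IFrameQueue _queue;
        private readonly IRenderer _renderer;
        private readonly IStatsTable _stats;
        private readonly string _teamName;   // e.g. "Badgers with Bombs" - this is what feeds the leaderboard

        public RenderWorker(IFrameQueue queue, IRenderer renderer, IStatsTable stats, string teamName)
        {
            _queue = queue;
            _renderer = renderer;
            _stats = stats;
            _teamName = teamName;
        }

        public void Run(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                int? frame = _queue.TryDequeueFrame();
                if (frame == null)
                {
                    // Queue is empty (or the job is finished) - back off briefly and try again.
                    Thread.Sleep(TimeSpan.FromSeconds(5));
                    continue;
                }

                var started = DateTime.UtcNow;
                _renderer.RenderFrame(frame.Value);

                // Recording the team name with each completed frame is what lets the
                // process monitor count frames per team and display the high-score table.
                _stats.RecordFrame(_teamName, frame.Value, DateTime.UtcNow - started);
            }
        }
    }

    Scaling a team's deployment to more worker role instances just means more of these loops competing for frames from the same job, which is why a team that deploys late can still catch up.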

    Read the article

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked for the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important problem is that only visible objects can cast shadows and objects between the camera and the shadow caster can interfere. Still I thought it'd be a fun experiment. The idea is to calculate the view coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel to the light and its depth is compared with the depth at the pixel. If a pixel is in front of the ray, a shadow is casted at the original pixel. At first I thought that I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t d, where o origin, d direction) to the next pixel and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both 2D and 3D equations. Unfortunately, this is not the case since the projection matrix is 4D. Thus, some tweak needs to be done to make this work this way. I would like to ask if someone knows of a way to do what I described above, i.e. from a 2D ray in texture coordinate space to get the 3D ray in screen space. I did implement a simple version of the idea which you can see in the following video: video here Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader: #version 330 core uniform sampler2D DepthMap; uniform vec2 projAB; uniform mat4 projectionMatrix; const vec3 light_p = vec3(-30.0, 30.0, -10.0); noperspective in vec2 pass_TexCoord; smooth in vec3 viewRay; layout(location = 0) out float out_AO; vec3 CalcPosition(void){ float depth = texture(DepthMap, pass_TexCoord).r; float linearDepth = projAB.y / (depth - projAB.x); vec3 ray = normalize(viewRay); ray = ray / ray.z; return linearDepth * ray; } void main(void){ vec3 origin = CalcPosition(); if(origin.z < -60) discard; vec2 pixOrigin = pass_TexCoord; //tex coords vec3 dir = normalize(light_p - origin); vec2 texel_size = vec2(1.0 / 600.0); float t = 0.1; ivec2 pixIndex = ivec2(pixOrigin / texel_size); out_AO = 1.0; while(true){ vec3 ray = origin + t * dir; vec4 temp = projectionMatrix * vec4(ray, 1.0); vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5; ivec2 newIndex = ivec2(texCoord / texel_size); if(newIndex != pixIndex){ float depth = texture(DepthMap, texCoord).r; float linearDepth = projAB.y / (depth - projAB.x); if(linearDepth > ray.z + 0.1){ out_AO = 0.2; break; } pixIndex = newIndex; } t += 0.5; if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0) break; } } As you can see, here I just increment 't' by some arbitrary factor, calculate the 3D ray and project it to get the pixel coordinates, which is not really optimal. Hopefully, I would like to optimize the code as much as possible and compare it with shadow mapping and how it scales with the number of lights. PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full screen quad.

    Read the article

  • Persevering & Friday Night Big Ideas

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Oracle Midsize Programs Every successful company, personal accomplishment, and philanthropic endeavor starts with one good idea. I have my best ideas on Friday evenings. The creative side of my brain is stimulated by end of week endorphins. Free thinking. Anything is possible. But, as my kids love to remind me, most of Dad's Friday Night Big Ideas (FNBIs) fizzle on the drawing board. Usually there's one barrier blocking the way that seems insurmountable by noon on Monday. For example, trekking the 486 mile Colorado Trail is on my bucket list. Since I have a job, I'll have to do it in bits and pieces--day hikes, weekends, and a vacation week here and there. With my trick neck, backpacking is not an option. How to equip myself for overnight backcountry travel was that one seemingly insurmountable barrier.
Persevering
Lewis and Clark wouldn't have given up, so I explored options and, as I blogged about back in December, I had an FNBI to hire llamas to carry my load. Last weekend, that idea came to fruition. Early Saturday morning, I met up with Bill, the owner of Antero Llamas, for an overnight training expedition along segment 14 of the Colorado Trail with a string of twelve llamas. It was a crash course on learning how to saddle, load, pasture, and mediate squabbles. Amazingly, we left the trailhead with me, the complete novice, at the lead. Instead of trying to impart three decades of knowledge on me in two days, Bill taught me two things: "Go With the Flow" and "Plan B". It worked. There were times I would be lost in thought for long stretches of time until one snort would remind me that I had a string of twelve llamas trailing behind. A funny thing happened along the trail... Up until last Saturday, my plan had been to trek all 28 segments of the trail east to west and sequentially. Out of some self-imposed sense of decorum. That plan presented myriad logistical challenges such as impassable snow pack on the Continental Divide when segment 6 is up next. On Sunday, as we trekked along the base of 14,000 ft peaks, I applied Bill's llama handling philosophy to my quest and came up with a much more realistic and enjoyable strategy for achieving my goal. Seize opportunities to hike regardless of order. Define my own segments. Go west to east for awhile if it makes more sense. Let the llamas carry more creature comforts. Chill out. I will still set foot on all 486 miles of the trail. Technically, the end result will be the same. And I and my traveling companions--human and camelid--will enjoy the journey more. Much more.
Got Big Ideas of Your Own? Check out Tongal. This growing Oracle customer works with brands to crowd source fantastic ideas for promoting products and services. Your great idea could earn you cash.
Looking for more news and information about Oracle Solutions for Midsize Companies? Read the latest Oracle for Midsize Companies Newsletter. Sign up to receive the latest communications from Oracle’s industry leaders and experts.
Jim Lein: I evangelize Oracle's enterprise solutions for growing midsize companies. I recently celebrated 15 years with Oracle, having joined JD Edwards in 1999. I'm based in Evergreen, Colorado and love relating stories about creativity and innovation whether they be about software, live music, or the mountains. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • Feynman's inbox

    - by user12607414
    Here is Richard Feynman writing on the ease of criticizing theories, and the difficulty of forming them: The problem is not just to say something might be wrong, but to replace it by something — and that is not so easy. As soon as any really definite idea is substituted it becomes almost immediately apparent that it does not work. The second difficulty is that there is an infinite number of possibilities of these simple types. It is something like this. You are sitting working very hard, you have worked for a long time trying to open a safe. Then some Joe comes along who knows nothing about what you are doing, except that you are trying to open the safe. He says ‘Why don’t you try the combination 10:20:30?’ Because you are busy, you have tried a lot of things, maybe you have already tried 10:20:30. Maybe you know already that the middle number is 32 not 20. Maybe you know as a matter of fact that it is a five digit combination… So please do not send me any letters trying to tell me how the thing is going to work. I read them — I always read them to make sure that I have not already thought of what is suggested — but it takes too long to answer them, because they are usually in the class ‘try 10:20:30’. (“Seeking New Laws”, page 161 in The Character of Physical Law.) As a sometime designer (and longtime critic) of widely used computer systems, I have seen similar difficulties appear when anyone undertakes to publicly design a piece of software that may be used by many thousands of customers. (I have been on both sides of the fence, of course.) The design possibilities are endless, but the deep design problems are usually hidden beneath a mass of superfluous detail. The sheer numbers can be daunting. Even if only one customer out of a thousand feels a need to express a passionately held idea, it can take a long time to read all the mail. And it is a fact of life that many of those strong suggestions are only weakly supported by reason or evidence. Opinions are plentiful, but substantive research is time-consuming, and hence rare. A related phenomenon commonly seen with software is bike-shedding, where interlocutors focus on surface details like naming and syntax… or (come to think of it) like lock combinations. On the other hand, software is easier than quantum physics, and the population of people able to make substantial suggestions about software systems is several orders of magnitude bigger than Feynman’s circle of colleagues. My own work would be poorer without contributions — sometimes unsolicited, sometimes passionately urged on me — from the open source community. If a Nobel prize winner thought it was worthwhile to read his mail on the faint chance of learning a good idea, I am certainly not going to throw mine away. (In case anyone is still reading this, and is wondering what provoked a meditation on the quality of one’s inbox contents, I’ll simply point out that the volume has been very high, for many months, on the Lambda-Dev mailing list, where the next version of the Java language is being discussed. Bravo to those of my colleagues who are surfing that wave.) I started this note thinking there was an odd parallel between the life of the physicist and that of a software designer. On second thought, I’ll bet that is the story for anybody who works in public on something requiring special training. (And that would be pretty much anything worth doing.) In any case, Feynman saw it clearly and said it well.

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • VHDL - Problem with std_logic_vector

    - by wretrOvian
    Hi, i'm coding a 4-bit binary adder with accumulator: library ieee; use ieee.std_logic_1164.all; entity binadder is port(n,clk,sh:in bit; x,y:inout std_logic_vector(3 downto 0); co:inout bit; done:out bit); end binadder; architecture binadder of binadder is signal state: integer range 0 to 3; signal sum,cin:bit; begin sum<= (x(0) xor y(0)) xor cin; co<= (x(0) and y(0)) or (y(0) and cin) or (x(0) and cin); process begin wait until clk='0'; case state is when 0=> if(n='1') then state<=1; end if; when 1|2|3=> if(sh='1') then x<= sum & x(3 downto 1); y<= y(0) & y(3 downto 1); cin<=co; end if; if(state=3) then state<=0; end if; end case; end process; done<='1' when state=3 else '0'; end binadder; The output : -- Compiling architecture binadder of binadder ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(15): No feasible entries for infix operator "xor". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(15): Type error resolving infix expression "xor" as type std.standard.bit. ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): No feasible entries for infix operator "and". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in right operand of infix expression "or". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): No feasible entries for infix operator "and". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in left operand of infix expression "or". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in right operand of infix expression "or". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Type error resolving infix expression "or" as type std.standard.bit. ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(28): No feasible entries for infix operator "&". ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(28): Type error resolving infix expression "&" as type ieee.std_logic_1164.std_logic_vector. ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(39): VHDL Compiler exiting I believe i'm not handling std_logic_vector's correctly. Please tell me how? :(

    Read the article

  • Are Visual Studio Setup Projects suitable for complex setups

    - by Robert
    Are "Visual Studio Setup" Projects suitable for complex setups in different versions? The application is rather large ( 500.000 lines of code) and is under continuous development. Every 6 to 10 months a new version gets released. We have multiple configuration files (INI and XML), registry keys, database migration scripts, etc. The application is in the process of being migrated from VB6 to .NET. The old installer was built with Installshield. The feedback on Installshield is: bad adaptability, bad reuse - that's why we are evaluating "Visual Studio Setup" as an alternative.
Other products we consider:
Free Solutions: WiX, NSIS
Commercial Solutions: Installshield (again..), Wise, Advanced Installer
sth. missing?
Solutions we don't like to consider: Inno Setup (It just doesn't feel right)

    Read the article

  • Entity Framework - Using Transactions or SaveChanges(false) and AcceptAllChanges()?

    - by mark smith
    Hi there, I have been investigating transactions and it appears that they take care of themselves in EF as long as I pass false to SaveChanges.. SaveChanges(false) and, if all goes well, then AcceptAllChanges(). Question is, what if something goes bad? Don't I have to roll back? Or, as soon as my method goes out of scope, is it ended? What happens to any identity columns that were assigned halfway through the transaction.. I presume if somebody else added a record after mine, before mine went bad, then this means there will be a missing identity value. Is there any reason to use the standard "TransactionScope" in code? Ideas? - thanks
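    For what it's worth, here is a minimal sketch of the pattern the question describes, assuming the ObjectContext-era API it refers to, where SaveChanges(false) means "save but don't accept the changes yet". The method name and the two-context scenario are mine, not from the question; the point is only the ordering of SaveChanges(false), TransactionScope.Complete() and AcceptAllChanges().

    // Hedged sketch: coordinate two contexts under one TransactionScope.
    using System.Data.Objects;
    using System.Transactions;

    public static class SaveHelper
    {
        public static void SaveBothOrNeither(ObjectContext contextA, ObjectContext contextB)
        {
            using (var scope = new TransactionScope())
            {
                // Push changes to the database, but keep the local change trackers
                // "dirty" so the same work could be retried if something later fails.
                contextA.SaveChanges(false);
                contextB.SaveChanges(false);

                // Both saves succeeded: commit the transaction.
                scope.Complete();
            }

            // Only now reset the change trackers. If an exception had been thrown before
            // scope.Complete(), disposing the scope rolls the database work back and both
            // contexts still hold the pending changes.
            contextA.AcceptAllChanges();
            contextB.AcceptAllChanges();
        }
    }

    On the identity column question: values handed out inside a transaction that later rolls back are not reused, so a gap in the identity sequence after a failed save is expected and harmless.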

    Read the article

  • How to avoid open-redirect vulnerability and safely redirect on successful login (HINT: ASP.NET MVC

    - by Brad B.
    Normally, when a site requires that you are logged in before you can access a certain page, you are taken to the login screen and after successfully authenticating yourself, you are redirected back to the originally requested page. This is great for usability - but without careful scrutiny, this feature can easily become an open redirect vulnerability. Sadly, for an example of this vulnerability, look no further than the default LogOn action provided by ASP.NET MVC 2: [HttpPost] public ActionResult LogOn(LogOnModel model, string returnUrl) { if (ModelState.IsValid) { if (MembershipService.ValidateUser(model.UserName, model.Password)) { FormsService.SignIn(model.UserName, model.RememberMe); if (!String.IsNullOrEmpty(returnUrl)) { return Redirect(returnUrl); // open redirect vulnerability HERE } else { return RedirectToAction("Index", "Home"); } } else { ModelState.AddModelError("", "User name or password incorrect..."); } } return View(model); } If a user is successfully authenticated, they are redirected to "returnUrl" (if it was provided via the login form submission). Here is a simple example attack (one of many, actually) that exploits this vulnerability: Attacker, pretending to be victim's bank, sends an email to victim containing a link, like this: http://www.mybank.com/logon?returnUrl=http://www.badsite.com Having been taught to verify the ENTIRE domain name (e.g., google.com = GOOD, google.com.as31x.example.com = BAD), the victim knows the link is OK - there isn't any tricky sub-domain phishing going on. The victim clicks the link, sees their actual familiar banking website and is asked to logon Victim logs on and is subsequently redirected to http://www.badsite.com which is made to look exactly like victim's bank's website, so victim doesn't know he is now on a different site. http://www.badsite.com says something like "We need to update our records - please type in some extremely personal information below: [ssn], [address], [phone number], etc." Victim, still thinking he is on his banking website, falls for the ploy and provides attacker with the information Any ideas on how to maintain this redirect-on-successful-login functionality yet avoid the open-redirect vulnerability? I'm leaning toward the option of splitting the "returnUrl" parameter into controller/action parts and use "RedirectToRouteResult" instead of simply "Redirect". Does this approach open any new vulnerabilities? Side note: I know this open-redirect may not seem to be a big deal compared to the likes of XSS and CSRF, but us developers are the only thing protecting our customers from the bad guys - anything we can do to make the bad guys' job harder is a win in my book. Thanks, Brad
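    One widely used answer, and as far as I know the approach later built into ASP.NET MVC itself as Url.IsLocalUrl, is to keep the returnUrl round trip but only follow it when it is a local, site-relative path. Below is a hedged sketch against the MVC 2 action quoted above, with IsLocalUrl written by hand rather than taken from the framework.

    [HttpPost]
    public ActionResult LogOn(LogOnModel model, string returnUrl)
    {
        if (ModelState.IsValid)
        {
            if (MembershipService.ValidateUser(model.UserName, model.Password))
            {
                FormsService.SignIn(model.UserName, model.RememberMe);

                if (IsLocalUrl(returnUrl))
                {
                    return Redirect(returnUrl);               // safe: stays on this site
                }
                return RedirectToAction("Index", "Home");     // anything else falls back home
            }
            ModelState.AddModelError("", "User name or password incorrect...");
        }
        return View(model);
    }

    private static bool IsLocalUrl(string url)
    {
        if (string.IsNullOrEmpty(url))
            return false;

        // Accept only app-relative paths such as "/orders/5" or "~/orders/5", and reject
        // protocol-relative ("//evil.com") or backslash ("/\evil.com") forms that browsers
        // treat as absolute URLs.
        if (url[0] == '/')
            return url.Length == 1 || (url[1] != '/' && url[1] != '\\');
        return url.Length > 1 && url[0] == '~' && url[1] == '/';
    }

    This keeps the usability of redirect-on-login while making http://www.mybank.com/logon?returnUrl=http://www.badsite.com fall through to the home page. Splitting returnUrl into controller/action parts and using RedirectToRouteResult achieves the same effect, at the cost of only supporting routes you explicitly reconstruct.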

    Read the article

  • Page looks good in most browsers except in IE7...why

    - by reinhat
    Hi, The following page looks good in Firefox, Safari, Chrome, IE6 and IE8... but it looks bad in IE7. I don't have IE7 but I need to fix this issue because someone saw it in IE7 and it looks bad. Does anyone have any idea why this page renders differently in IE7?...and what is the solution to make it display correctly? http://www.aetna.com/2009annualreport/mainBoard.html Problem: When you click on the "Board of Directors" or "Management Team" link and the listing table panel opens up, the far right third of the panel is getting cut off. Also some information appears to be missing in the cells. Thanks, Attila

    Read the article

  • Developing Installer Packages, are Visual Studio Setup Projects suitable for complex setups

    - by Robert
    Are "Visual Studio Setup" Projects suitable for complex setups in different versions? The application is rather large ( 500.000 lines of code) and is under continuous development. Every 6 to 10 months a new version gets released. We have multiple configuration files (INI and XML), registry keys, database migration scripts, etc. The application is in the process of being migrated from VB6 to .NET. The old installer was built with Installshield. The feedback on Installshield is: bad adaptability, bad reuse - that's why we are evaluating "Visual Studio Setup" as an alternative.
Other products we consider:
Free Solutions: WiX, NSIS
Commercial Solutions: Installshield (again..), Wise, Advanced Installer
sth. missing?
Solutions we don't like to consider: Inno Setup (It just doesn't feel right)

    Read the article

  • Unable to generate temporary class for web service

    - by sac
    I have an application with a proxy class for my webservice - This works fine in all 32-bit machines. However the same app throws an exception in windows server 2008 64-bit machine. It looks like the temporary class could not be generated for the web service. The error in the event viewer is "error CS0008: Unexpected error reading metadata from file '' -- 'Bad Key. ' Here's the call stack... at System.Xml.Serialization.Compiler.Compile(Assembly parent, String ns, XmlSerializerCompilerParameters xmlParameters, Evidence evidence) at System.Xml.Serialization.TempAssembly.GenerateAssembly(XmlMapping[] xmlMappings, Type[] types, String defaultNamespace, Evidence evidence, XmlSerializerCompilerParameters parameters, Assembly assembly, Hashtable assemblies) at System.Xml.Serialization.TempAssembly..ctor(XmlMapping[] xmlMappings, Type[] types, String defaultNamespace, String location, Evidence evidence) at System.Xml.Serialization.XmlSerializer.GetSerializersFromCache(XmlMapping[] mappings, Type type) at System.Xml.Serialization.XmlSerializer.FromMappings(XmlMapping[] mappings, Type type) at System.Web.Services.Protocols.SoapClientType..ctor(Type type) at System.Web.Services.Protocols.SoapHttpClientProtocol..ctor() at Fusion.ServiceCatalogProxy..ctor() I am not able to get any info about this bad key error....

    Read the article

  • django-rest-framework: api versioning

    - by w--
    so googling around it appears that the general consensus is that embedding version numbers in REST URIs is a bad practice and a bad idea. even on SO there are strong proponents supporting this. e.g. Best practices for API versioning? My question is about how to accomplish the proposed solution of using the accept header / content negotiation in the django-rest-framework to accomplish this. It looks like content negotiation in the framework, http://django-rest-framework.org/api-guide/content-negotiation.html is already configured to automatically return intended values based on accepted MIME types. If I start using the Accept header for custom types, I'll lose this benefit of the framework. Is there a better way to accomplish this in the framework?

    Read the article

  • .NET projects build automation with NAnt/MSBuild + SVN

    - by petr k.
    Hi everyone, for quite a while now, I've been trying to figure out how to set up an automated build process at our shop. I've read many posts and guides on this matter and none of them really fits my specific needs. My SVN repository is laid out as follows:
\projects
  \projectA (a product)
    \tags
      \1.0.0.1
      \1.0.0.2
      ...
    \trunk
      \src
        \proj1 (a VS C# project)
        \proj2
      \documentation
Then I have a network share, with a folder for each project (product), which in turn contains the binaries, written documentation and the generated API documentation (via NDoc - each project may have an .ndoc file in the repository) for every historical version (from the tags SVN folder) and for the latest version as well (from the trunk). Basically, what I want to do in a scheduled batch build are these steps:
1. examine the project's SVN folder and identify tags not present in the network share
2. for each of these tags: check out the tag folder, build (with Release config), copy the resulting binaries to the network share, search for .ndoc files, generate CHM files via NDoc, copy the resulting CHM files to the network share
3. do the same as in 2., but for the HEAD revision of trunk
Now, the trouble is, I have no idea where to start. I do not keep .sln files in the repository, but I am able to replace these with MSBuild files which in turn build the C# projects belonging to the specific product. I guess the most troubling part is the examination of the repository for tags which have not been processed yet - i.e. searching the tags and comparing them to a project's directory structure on the network share. I have no idea how to do that in any of the build tools (NAnt, MSBuild). Could you please provide me with some pointers on how to approach this task as a whole and in detail as well? I do not care if I use NAnt, MSBuild, or both. I am aware that this might be rather complex, but every idea and NAnt/MSBuild snippet will be a great help. Thanks in advance.

    Read the article

  • Database structure - To join or not to join

    - by Industrial
    Hi! We're drawing up the database structure with the help of MySQL Workbench for a new app, and the number of joins required to make a listing of the data is increasing drastically as the many-to-many relationships increase. The application will be quite read-heavy and have a couple of hundred thousand rows per table. The questions:
Is it really that bad to merge tables where needed and thereby reduce joins?
Should we start looking at horizontal partitioning? (in conjunction with merging tables)
Is there a better way than pivot tables to take care of many-to-many relationships?
We discussed storing all data in serialized text columns instead and having the application do the sorting instead of the database, but this seems like a very bad idea, even though the database will be heavily cached. What do you think? Thanks!

    Read the article

  • Scheduling Swingworker threads

    - by Simonw
    Hi, I have 2 processes to perform in my Swing application, one to fill a list, and one to do operations on each element of the list. I've just moved the 2 processes into SwingWorker threads to stop the GUI locking up while the tasks are performed, and because I will need to do this set of operations on several lists, concurrency wouldn't be a bad idea in the first place. However, when I just ran fillList.execute();doStuffToList.execute(); the doStuffToList thread ran on the empty list (duh...). How do I tell the second process to wait until the first one is done? I suppose I could just nest the second process at the end of the first one, but I dunno, it seems like bad practice.

    Read the article

  • Why not allow mutation of the this binding?

    - by gnucom
    Hi Everyone, I'm building an interpreter/compiler for a school project (well, now it's turning into a hobby project) and an instructor warned me not to allow mutation of the 'this' binding (he said it was gross and made a huge deal about it) but I never learned why this is so... dangerous or bad. I'm very curious about why this is so bad. I figured this sort of feature could be useful in some way or another. I'm wondering if anyone familiar with building languages can tell me what sort of problems mutation on the 'this' binding can cause, and if they know of any cool or useful tricks that one could do if it actually was allowed. Do any languages that you're aware of allow mutation of 'this'? Thanks,

    Read the article

  • ASP.NET MVC - creating and handling with URLs with Greater Than and Less Than characters

    - by pcampbell
    Consider a link to a page for a user's profile. A page is creating that URL like this: //Model.Name has value "<bad guy>" Html.ActionLink("foo", "ViewUser", new { id=5, title=Url.Encode(Model.Name) }) The actual outcome was http://mysite/Users/5/%253cbad%2guy%253e When navigating to that URL, the server generates an HTTP Error 400 - Bad Request. Question: Given that the Model.Name may contain Unicode characters, or characters otherwise illegal in URLs, what's the best way to strip out illegal characters, or otherwise encode them? The problem surfaces when testing out 'interesting' user inputs with < and >, but anything could come from the user, and therefore be put in a URL by way of Model.Name.
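    One way to sidestep the problem entirely, assuming the id (not the title) is what the ViewUser action actually uses to look the user up, is to reduce the title to a whitelisted "slug" of ASCII letters, digits and dashes before it ever reaches the route, so nothing in the path needs encoding. ToSlug below is a hypothetical helper, not part of ASP.NET MVC, and the slug is deliberately lossy.

    // Hedged sketch: turn an arbitrary display name into a URL-safe slug.
    using System.Text;

    public static class UrlSlug
    {
        public static string ToSlug(string title)
        {
            if (string.IsNullOrEmpty(title))
                return string.Empty;

            var sb = new StringBuilder(title.Length);
            bool lastWasDash = false;

            foreach (char c in title.ToLowerInvariant())
            {
                if ((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9'))
                {
                    sb.Append(c);
                    lastWasDash = false;
                }
                else if (sb.Length > 0 && !lastWasDash)
                {
                    sb.Append('-');   // collapse runs of spaces/punctuation into a single dash
                    lastWasDash = true;
                }
            }

            return sb.ToString().TrimEnd('-');   // "<bad guy>" becomes "bad-guy"
        }
    }

    // Usage in the view (slug instead of the raw, encoded name):
    // Html.ActionLink("foo", "ViewUser", new { id = 5, title = UrlSlug.ToSlug(Model.Name) })

    Since the id remains the real key, the action can ignore the title (or re-derive it from the database), and the request path never contains the angle brackets that trigger the 400 response.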

    Read the article

  • subfig package in latex

    - by Tim
    Hi, When I am using subfig package in latex, it gives some errors: Package subfig Warning: Your document class has a bad definition of \endfigure, most likely \let\endfigure=\end@float which has now been changed to \def\endfigure{\end@float} because otherwise subsequent changes to \end@float (like done by several packages changing float behaviour) can't take effect on \endfigure. Please complain to your document class author. Package subfig Warning: Your document class has a bad definition of \endtable, most likely \let\endtable=\end@float which has now been changed to \def\endtable{\end@float} because otherwise subsequent changes to \end@float (like done by several packages changing float behaviour) can't take effect on \endtable. Please complain to your document class author. (/usr/share/texmf/tex/latex/caption/caption.sty `rotating' package detected `float' package detected ) LaTeX Warning: You have requested, on input line 139, version `2005/06/26' of package caption, but only version `1995/04/05 v1.4b caption package (AS)' is available. ! Undefined control sequence. l.163 \DeclareCaptionOption {listofformat}{\caption@setlistofformat{#1}} How can I solve it? Thanks and regards!

    Read the article

  • Seg Fault when using std::string on an embedded Linux platform

    - by Brad
    Hi, I have been working for a couple of days on a problem with my application running on an embedded Arm Linux platform. Unfortunately the platform precludes me from using any of the usual useful tools for finding the exact issue. When the same code is run on the PC running Linux, I get no such error. In the sample below, I can reliably reproduce the problem by uncommenting the string, list or vector lines. Leaving them commented results in the application running to completion. I expect that something is corrupting the heap, but I cannot see what? The program will run for a few seconds before giving a segmentation fault. The code is compiled using a arm-linux cross compiler: arm-linux-g++ -Wall -otest fault.cpp -ldl -lpthread arm-linux-strip test Any ideas greatly appreciated. #include <stdio.h> #include <vector> #include <list> #include <string> using namespace std; ///////////////////////////////////////////////////////////////////////////// class TestSeg { static pthread_mutex_t _logLock; public: TestSeg() { } ~TestSeg() { } static void* TestThread( void *arg ) { int i = 0; while ( i++ < 10000 ) { printf( "%d\n", i ); WriteBad( "Function" ); } pthread_exit( NULL ); } static void WriteBad( const char* sFunction ) { pthread_mutex_lock( &_logLock ); printf( "%s\n", sFunction ); //string sKiller; // <----------------------------------Bad //list<char> killer; // <----------------------------------Bad //vector<char> killer; // <----------------------------------Bad pthread_mutex_unlock( &_logLock ); return; } void RunTest() { int threads = 100; pthread_t _rx_thread[threads]; for ( int i = 0 ; i < threads ; i++ ) { pthread_create( &_rx_thread[i], NULL, TestThread, NULL ); } for ( int i = 0 ; i < threads ; i++ ) { pthread_join( _rx_thread[i], NULL ); } } }; pthread_mutex_t TestSeg::_logLock = PTHREAD_MUTEX_INITIALIZER; int main( int argc, char *argv[] ) { TestSeg seg; seg.RunTest(); pthread_exit( NULL ); }

    Read the article

  • Why should I prepend my custom attributes with "data-"?

    - by Horace Loeb
    So any custom data attribute that I use should start with "data-": <li class="user" data-name="John Resig" data-city="Boston" data-lang="js" data-food="Bacon"> <b>John says:</b> <span>Hello, how are you?</span> </li> Will anything bad happen if I just ignore this? I.e.: <li class="user" name="John Resig" city="Boston" lang="js" food="Bacon"> <b>John says:</b> <span>Hello, how are you?</span> </li> I guess one bad thing is that my custom attributes could conflict with HTML attributes with special meanings (e.g., name), but aside from this, is there a problem with just writing "example_text" instead of "data-example_text"? (It won't validate, but who cares?)

    Read the article

  • What is an Ontology (Database?) ?

    - by Robert Gould
    I was just reading this article and it mentions that some organization had an Ontology as(?) their database(?) layer, and that the decision to do this was bad. Problem is I hadn't heard about this before, so I can't understand why it's bad. So I tried googling about databases and ontology, and came across quite a few PDFs from 2006 that were full of incomprehensible content (for my mind). I read a few of these and at this point still have absolutely no idea what they are talking about. My current impression is that it was some crazy fad of 2006 that some academics were trying to sell us, but failed miserably due to the wording of their ideas. But I'm still curious if anyone actually knows what this is actually all about.

    Read the article
