Search Results

Search found 978 results on 40 pages for 'nobody'.

Page 19/40 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • How do I use an API?

    - by GRardB
    Background: I have no idea how to use an API. I know that all APIs are different, but I've been doing research and I don't fully understand the documentation that comes along with them. There's a programming competition at my university in a month and a half that I want to compete in (it revolves around APIs), but nobody on my team has ever used one. We're computer science majors, so we have experience programming, but we've just never been exposed to an API. I tried looking at Twitter's documentation, but I'm lost. Would anyone be able to give me some tips on how to get started? Maybe a very easy API with examples, or an explanation of the essential elements common to different APIs? I don't need a full-blown tutorial on Stack Overflow; I just need to be pointed in the right direction. Update: The programming languages that I'm most fluent in are C (usually in a simple text editor) and Java (Eclipse). In an attempt to be more specific with my question: I understand that APIs (and yes, external libraries are what I was referring to) are simply sets of functions. Question: I guess what I'm trying to ask is how I would go about accessing those functions. Do I need to download specific files and include them in my programs, or do they need to be accessed remotely, etc.?
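
    In short: both cases exist. For a library API (a Java jar or a C header plus library) you download the files and include/link them in your program; for a web API like Twitter's you access the functions remotely by sending HTTP requests to the URLs the documentation lists. A minimal sketch of the remote case, in Python for brevity (the endpoint is hypothetical — substitute any public JSON API):

        import json
        import urllib.request

        # Hypothetical endpoint; a real web API documents its URLs the same way.
        url = "https://api.example.com/users/octocat"

        # The "function call" is just an HTTP GET; the response is usually JSON.
        with urllib.request.urlopen(url) as response:
            data = json.loads(response.read().decode("utf-8"))

        print(data)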

    Read the article

  • Patenting a Web Application before launch?

    - by SoreThumb
    While discussing a website idea with friends as I worked on it, they told me to be wary of theft. Since the code I'd be working on would be mostly JavaScript and HTML, the likelihood of theft is quite high. Furthermore, if I'm lucky, the idea I have could be a breakthrough in usefulness. So you can see the problem here: I would be developing an application that's easily stolen, and unfortunately one that companies larger than me would want to provide. I'm also unsure whether this idea has already been patented. I realize patent law is murky, in that you can create a vague patent and still claim others are violating it. So I'd like to search existing patents for one that may be relevant to my idea, and I'd like to patent it in the meantime. Does anyone have any experience with this? Should I invite a lawyer into the mix? As a note, I was going to add tags like "Patents", but nobody has asked such a question yet and I just joined this StackOverflow...

    Read the article

  • Cities from Space: A Tour of Urban Planning Patterns

    - by Jason Fitzpatrick
    While many cities developed haphazardly and organically with little structured planning, other cities were developed following strict organization–organization that reveals itself beautifully when seen from space. Wired magazine shares a roundup of ten well-planned cities viewed with a satellite’s eye. Among the roundup our favorite is the oldest, seen in the photo above: This nine-pointed fortress is perhaps the best example of a planned city from the Renaissance. Palmanova was built in 1593 and is located in the northeastern corner of Italy near the border with Slovenia. It was intended to be home to a completely self-reliant utopian community that could also defend itself against the Ottomans. It had three guarded entrances, ramparts between each of the star points and eventually a moat. Sadly, nobody was willing to move there. Eventually it was used as free housing for pardoned criminals. Today it is a national monument, a tourist destination and home to around 5,000 people. Hit up the link below to check out the other nine well-planned entries in the roundup.

    Read the article

  • Only one user can connect to Ubuntu samba server

    - by StaticMethod
    I set up a Samba server on 12.04 LTS, and it works great for one user but not the others. I am trying to map a network drive from a Windows 7 laptop. I can successfully authenticate with one user, but the other two both get "Access is denied" errors. Here is my smb.conf file:

        [global]
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        idmap config * : backend = tdb

        [printers]
        comment = All Printers
        path = /var/spool/samba
        create mask = 0700
        printable = Yes
        print ok = Yes
        browseable = No

        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers

        [share]
        comment = Ubuntu File Server Share
        path = /srv/share
        read only = No
        create mask = 0755

    I know that the service is successfully reading from the /etc/passwd file, because if I change the Linux password for the user that works, I have to use the new password when I connect. I changed all the users so they are all members of the same groups (all three users are admins anyway). I only ever have one user connected at a time. Here are the permissions on the shared folder:

        /srv$ ls -l
        drwxrwxrwx 1 nobody nogroup 16 Feb 22 17:05 share

    Any ideas?
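
    A common cause of exactly this symptom (offered as an assumption, since nothing in the smb.conf above distinguishes the users) is that the failing users were never added to Samba's own password database, which is separate from /etc/passwd. A sketch of the fix, with "alice" as a placeholder username:

        sudo smbpasswd -a alice    # add the user to Samba's password database
        sudo smbpasswd -e alice    # make sure the account is enabled
        sudo service smbd restart  # pick up the change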

    Read the article

  • How to handle new domain names?

    - by michael
    I have a new product which I'll call a pen ink reloader. I have a website using my product's name, for example www.inkywink.com, which I want to be found by searches for keywords such as "pen ink", "pen out of ink", "ink for pens", etc., since nobody knows that a pen ink reloader exists. I see that it's quite difficult to get on the front page for these keywords, since they have lots of competition. However, I notice that the exact phrases I want to rank highly for are available as domains. I purchase "www.penink.com" and "penoutofink.com", which for argument's sake are highly searched and the perfect keywords to get eyes on my money site, www.inkywink.com. Two questions:

    1. What is my best option for leveraging those names so that they appear near the top of searches and bring traffic to my money site? Do I just 301-redirect them to inkywink.com, or should I create a little original content on each with links to my main site?

    2. If I just have them redirected to inkywink.com, can I use keywords in the meta tags and headers for each site separately, or do they all automatically get the same headers and tags as the site to which they're redirected?

    Thanks to anyone who can help, as I'm a real newbie to all this.
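
    For what it's worth, the plain-redirect option from question 1 is a one-liner per domain. A sketch assuming the keyword domains are hosted on Apache (mod_alias):

        # .htaccess (or virtual host config) on penink.com and penoutofink.com
        Redirect 301 / http://www.inkywink.com/

    Note that with a bare 301 the redirecting domain serves no page of its own, so search engines index the target site's titles and meta tags rather than separate ones per domain, which bears directly on question 2.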

    Read the article

  • Any empirical evidence on the efficacy of CMMI?

    - by mehaase
    I am wondering if there are any studies that examine the efficacy of software projects in CMMI-oriented organizations. For example, are CMMI organizations more likely to finish projects on time and/or on budget than non-CMMI organizations? Edit for clarification: CMMI stands for "Capability Maturity Model Integration". It's developed by the Software Engineering Institute at Carnegie-Mellon University (SEI-CMU). It's not a certification, but there are various companies that will "appraise" your organization to various levels of CMMI, such as level 2 and level 3. (I believe CMMI level 1 is an animalistic, Hobbesian free-for-all that nobody aspires to. In other words, everybody is at least CMMI level 1, even if you've never heard of CMMI before.) I'm definitely not an expert, but I believe that an organization can be appraised for CMMI levels within different scopes of work: i.e. service delivery, software development, foobaring, etc. My question is focused on the software development appraisal: is an organization that has been appraised to CMMI Level X for software projects more likely to finish a software project on time and on budget than another organization that has not been appraised to CMMI Level X? However, in the absence of hard data about software-oriented CMMI, I'd be interested in the effect that CMMI appraisals have on other activities as well. I originally asked the question because I've seen various studies conducted on software (e.g. the essays in The Mythical Man Month refer to numerous empirical studies, as does McConnell's Code Complete), so I know that there are organizations performing empirical studies of software development.

    Read the article

  • Good links somehow being converted to ones with a PHP redirect (not a virus)

    - by Rebecca
    This has happened to links we put on web pages and in emails. We might put www.oursite.org/work/ but when I view source it shows up as webmail.ourhosting.ca/hwebmail/services/go.php?url=https%3A%2F%2Fwww.oursite.org%2F%2work%2F This ends up at the webmail login page for our web host. But only some of the people who click the link get the login page; others go directly to the original page we intended. We don't want it to go to the webmail login page, nobody needs to log in to our web site. This occurs for links to pages on our site, but also to links to other sites that we put in emails or in posts. It seems to be browser independent as well as e-mail client independent as we variously have used Firefox and Chrome as well as MS Outlook and Thunderbird. I've tried to resolve the issue with our webhost but they keep telling me they don't support our browser, or our email client (i.e., they don't understand the issue). At the moment, our only option is to try another web host just to get rid of their login. Any ideas about what's going on?

    Read the article

  • In which fields does quality of the software product matter as much as the completion time?

    - by Nav
    Someone told me that if a software product meets the client's expectations, it is good quality. But I've worked with interaction designers (the same kind of people who made Gmail's interface and usability so cool!), and I've loved working with them because even though they came up with hundreds of changes in requirements and emphasised many, many subtle details, when the software was complete, I could look at the product and say WOW! At the place I currently work, the only thing that matters is completing the project on time. As long as it works and as long as the client says it's OK, nobody bothers to improve it. I'm not talking about gold-plating, but I believe that for a programmer to enjoy his (well, maybe her too ;) ) job, they should be able to proudly say "Hey, I made that software", and that comes only when the product is of good quality. Apart from your opinions on this, I'd also like to know in which fields (e.g. aerospace, finance, etc.) I could find companies (or you could mention the company name) where the quality of a product is as important as completing the project on time?

    Read the article

  • Getting rank for keywords that I don't want to appear on my website [duplicate]

    - by Rober
    This question already has an answer here: Which keyword should I use: colors, colours, or a combination of both? One of my products has two names. One of them is what I consider correct, and thus it is what I want to appear on my website. The other name is incorrect in my view, so I would like to avoid it. But I know that many people will search for my product using the "bad" name. How could I get the "bad" name indexed for my site on search engines even if nobody can read it there? Of course, I want to do it "legally", so that no engine will ban my site for cloaking, black-hat SEO, etc... EDIT: Having that "bad" name in my backlinks is not an option. For example, I would perceive user reviews connecting my site to that word as a negative point. Maybe having my site as a search result for that word could be negative as well, but I think it is worth it.

    Read the article

  • What do I need to learn to decide on rename/recompile source package names because of company rebranding?

    - by Roberto Linares
    My company is currently in the middle of a rebranding process, and the brand names have been used in the sources' package names. Those names are only visible to the developers who maintain the code, so nobody from project management is really interested in changing them, especially since doing so would mean recompiling several old components. What factors do I need to consider when deciding on a change like that? I don't know whether I should worry about legal issues, and if so, how to address this with project management. Some more background: I have all the sources and dependencies, but since the company rebranding, other development areas have adopted some of the code that needs package-name changes, so I cannot make the decision by myself without risking breaking everyone else's code that depends on my core components. And I cannot change other areas' code without the permission of those areas' users, so yes, my concern is more political than technical. I am going to try to coordinate the involved IT areas to make the change anyway, since it seems to be the best approach. Unfortunately, in my company there's no continuous integration build server, so we build our code manually on demand, and to get something to production I have to justify the change (even just the package name change) to QA with a user requirement and some other bureaucratic documentation, which is why I was hesitating to make the change in the first place.

    Read the article

  • Why is my HDD coming back from standby?

    - by Pablo
    My hard drives, connected to an Ubuntu server, are producing the following log exactly every 5 minutes:

        Nov 1 14:10:50 localhost kernel: [ 1602.884936] ata2.00: hard resetting link
        Nov 1 14:10:51 localhost kernel: [ 1603.226804] ata2.01: hard resetting link
        Nov 1 14:10:52 localhost kernel: [ 1604.274533] ata2.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Nov 1 14:10:52 localhost kernel: [ 1604.274548] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Nov 1 14:10:52 localhost kernel: [ 1604.356669] ata2.00: configured for UDMA/133
        Nov 1 14:10:52 localhost kernel: [ 1604.375247] ata2.01: configured for UDMA/133
        Nov 1 14:10:52 localhost kernel: [ 1604.375265] ata2: EH complete

    I don't think this is related to hard drive failure, because it happens for ALL connected hard drives, and ONLY when I write spindown_time = 12 in /etc/hdparm.conf. The reason I add this value is to put the disks into standby mode after 60 seconds, which does happen after that period (checked with hdparm -C). My first thought was that smartd was running and spinning up the drives; however, I couldn't find it in ps -aux | grep smart. Additionally, iostat shows that nobody has accessed those drives, since Blk_read and Blk_wrtn remain unchanged. I also killed all processes that might be doing something with the HDDs (e.g. Samba). So I guess the problem is solely with hdparm... I have no clue where that 5-minute value hides.
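
    For reference, the per-device form of that setting in Debian/Ubuntu's /etc/hdparm.conf looks like this (a sketch; /dev/sdb is a placeholder for one of the affected drives):

        /dev/sdb {
            # spindown_time is in units of 5 seconds: 12 * 5 s = 60 s of idle
            # time before the drive enters standby (same as hdparm -S 12)
            spindown_time = 12
        }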

    Read the article

  • installing linux from usb pen drive

    - by zulu
    I'm new to Linux. I'm using Ubuntu 11.04, and now I want to install Ubuntu 12.04. I have an ISO image of Ubuntu 12.04 Desktop. I put this image onto a formatted pen drive and set the boot option to boot from USB, but nothing happened. I searched the net and the Ubuntu website, but nobody gives the complete steps: some say you can install from within Ubuntu, others say you can do a fresh installation from a USB pen drive but that you need to make the pen drive bootable first, etc. My problem is that I don't know the exact steps to install Ubuntu from a USB pen drive. All I want to do is completely remove my Ubuntu 11.04 and install Ubuntu 12.04 from the pen drive. Can anybody tell me how to make a pen drive bootable, and how to install Ubuntu 12.04 from it, step by step? Thanks in advance.
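
    For anyone else stuck at the same point: copying the ISO file onto the pen drive is not enough; the image has to be written to the device itself (or prepared with a tool such as Ubuntu's Startup Disk Creator, usb-creator-gtk). A sketch of the dd route from the running 11.04 system — /dev/sdX and the ISO filename are placeholders, and this erases the pen drive:

        # find the pen drive's device name first
        sudo fdisk -l
        # write the image to the whole device, not to a partition like /dev/sdX1
        sudo dd if=ubuntu-12.04-desktop-i386.iso of=/dev/sdX bs=4M
        sync

    After that, rebooting with the pen drive plugged in and USB first in the boot order should reach the 12.04 installer, which offers to erase the existing 11.04 install.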

    Read the article

  • Alternatives to time tracking methodologies [closed]

    - by Brandon Wamboldt
    Question first: What are some feasible alternatives to time tracking for employees in a web/software development company, and why are they better options? Explanation: I work at a company where we work like this: everybody is paid a salary. We have 3 types of work: Contract, Adhoc, and Internal (non-billable). Adhoc is just small changes that take a few hours; we just bill the client at the end of the month. Contracts are signed and we have this big long process, the usual. We figure out how much to charge by getting an estimate of the time involved (from the designers and the developers) and multiplying it by our hourly rate, and that's it. So say we estimate 50 hours for a website. We have time-tracking software and have to record, in 15-minute increments, the time we spend on it (7:00 to 7:15, for example), the project name, and some comments. Now if we go over the 50 hours, we are both losing money and being inefficient. Now that I've explained how the system works, my question is how else it can be done, if a better method exists (which I'm sure one must). Nobody here likes the current system; we just can't find an alternative. I'd be more than willing to work longer hours on a project to get it done in time, but I'm not much inclined to do so under the current system. I'd love to be able to sum up (or link to) this post for my manager, to show them why we should use abc system instead of this one.

    Read the article

  • How could there still not be a mysqldb module for Python 3? [closed]

    - by itsadok
    This SO question is now more than two years old. MySQL is an incredibly popular database engine, Python is an incredibly popular programming language, and Python 3 was officially released two years ago, and was available even before that. What's more, the whole mysqldb module is just a layer translating Python's DB-API to MySQL's API. It's not that big of a library. I must be missing something here. How come almost* nobody in the entire open source community has spent the (I'm guessing) two weeks it takes to port this lib? Is Python 3 that unpopular? Is the combination of Python and MySQL not as common as I assume? Or maybe it's just a lot harder to port mysqldb than I assume? Anyone know the inside story on this? * Now I see that this guy has done it, which takes some of the wind out of my question, but it still seems too little and too late to make sense. EDIT: OK, I'm aware that the stock answers for these kinds of questions cover this one as well. Patches welcome, scratch your itch, we don't work for you and we don't have the time, etc. I actually took a shot at porting this about a year ago, but it was my first time doing anything with Python C extensions, and I failed. My point in writing this was not a plea for somebody to write it, but genuine curiosity: it seems that some much more complicated libraries have been ported to Python 3 already, and in the poll for which libraries should be ported, mysqldb is not even nominated! That suggests that maybe (2) is the right answer. UPDATE: I found that there are several new libraries that provide MySQL support under Python 3; I just wasn't googling hard enough. That explains everything.
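
    As a pointer for anyone landing here: the newer drivers mentioned in the update follow the same DB-API that mysqldb implements, so switching is mostly a change of import. A sketch using PyMySQL, one such pure-Python driver (connection details are placeholders):

        import pymysql  # pure-Python MySQL driver; pip install pymysql

        # Placeholder credentials -- substitute your own.
        conn = pymysql.connect(host="localhost", user="app",
                               password="secret", database="test")
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT VERSION()")
                print(cur.fetchone())
        finally:
            conn.close()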

    Read the article

  • Can't install on a Thinkpad W700ds

    - by Habstinat
    I want to install Ubuntu on my computer. I don't know much about Linux, but I know my way around a terminal and whatnot. My computer, a ThinkPad W700ds, refuses to read from my CD when booting. The md5sum is correct, and the same CD boots fine on another computer. When I try to install from a USB drive, I can get to the main screen, but when I select any of the options from there my screen turns black for more than 3 hours, until I have to turn it off. Is there anything I can do about this? I want a true partition, not a Wubi'd install. It's a 10.10 x64 image, my computer is 64-bit (running Windows 7 x64 right now), and the exact same CD is bootable on other computers. I've been on #ubuntu IRC for days trying to work this out but nobody knew, so I figured I would get more responses by posting here. UPDATE: Thanks Jorge Castro. Both the alternate and desktop installers seem to not work at all from the CD. On a USB drive, the alternate installer lets me start installing, but in the middle of installation I get this message. The people on #ubuntu told me to just exit installation at that point, so I did.

    Read the article

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code. It made me wonder about the ubiquitous GameManager class, and how it often becomes an issue in a production environment. Let me describe this. First, there's prototyping. Nobody cares about writing great code, we just try to get something running to see if the gameplay adds up. Then there's a greenlight, and in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects, nothing big, really. A cute, little, manager. In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas to architecture the thing with Great Design Patterns. Then production starts and soon, of course, there is crunch time. Balanced diet is long gone, the bug tracker is cracking with issues, people are stressed and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite). The reason for that is simple. After all, when writing a game, well... all the source code is actually here to manage the game. It's easy to just add this little extra feature or bugfix in the GameManager, where everything else is already stored anyway. When time becomes an issue, no way to write a separate class, or to split this giant manager into sub-managers. Of course this is a classical anti-pattern: the god object. It's a bad thing, a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?
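
    To make the "split into sub-managers" idea concrete before crunch makes it impossible, here is a minimal sketch (in Python for brevity; all names are hypothetical): each system owns exactly one concern, and the top-level object only wires them together rather than accumulating logic itself:

        class StateStack:
            """Owns game-state transitions (menu, play, pause) and nothing else."""
            def __init__(self):
                self._states = []

            def push(self, state):
                self._states.append(state)

            def pop(self):
                return self._states.pop()

        class EntityRegistry:
            """Owns the live GameObjects and nothing else."""
            def __init__(self):
                self._entities = []

            def spawn(self, entity):
                self._entities.append(entity)

            def update(self, dt):
                for entity in self._entities:
                    entity.update(dt)

        class Game:
            """Wiring only: holds the systems and forwards the tick; any new
            feature goes into a new focused system, not in here."""
            def __init__(self):
                self.states = StateStack()
                self.entities = EntityRegistry()

            def tick(self, dt):
                self.entities.update(dt)

    The point of the shape is that when crunch hits, the path of least resistance is to add code to the small system that owns the concern, not to one giant class that owns everything.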

    Read the article

  • IEEE SRS documents: lightweight version when working with outside contractors?

    - by maple_shaft
    Typically we follow an Agile development process that tends not to put an emphasis on writing requirements and technical documents that nobody will read. We tend to focus our limited manpower on development and testing activities, with collaborative design and whiteboarding as a key focus. There is a mostly standalone web component that will take quite a few weeks to develop, but this work can proceed mostly in parallel with other project work going on. To try to claw back some time, I was given a budget for hiring a developer on oDesk to complete this work. While my team isn't accustomed to working off a firm SRS document, I realize that with outsourced development it is a good idea to be as firm and specific as possible, so I need to provide a detailed Requirements and Technical Specification document for this work to be done correctly. When I do write a requirements document, I typically utilize the standard IEEE SRS document template, but I think this is too verbose and probably overkill for what I need to communicate to a developer. Is there another requirements document that is more lightweight and also accepted by a major standards organization like the IEEE? Further, as what will be developed is a software module that will interact with other software modules, my requirements really need to delve into technical specifications for things to work correctly. In this scenario does it make sense to merge technical and requirements specifications into a single document, and if not, what is a viable alternative?

    Read the article

  • Making HTML5 videos stored on AWS S3 **difficult** to download (because I can't make it impossible)

    - by Jimmery
    I am building a website that hosts videos stored on AWS's S3 service. The videos are played through an HTML5 player we have built. I've just been asked to make sure "nobody can steal our videos". Now, I know that if you really don't want something stolen, you don't put it up on the internet. However, I just need to secure these videos as well as possible; at the very least, they need to resist someone going through the source code and trying to download them manually. One option available to me is to completely rebuild the video player in Flash. This is not ideal, for several reasons, notably because I would also then have to build an app for mobile devices to be able to view this site. So I am looking for other options. I have heard about using a token to make the file available only during certain times. I have heard of using a separate file to serve the videos that sits between the HTML5 page and the video file. I am also having a look at IAM, the secure AWS access control, in the hope that AWS can solve this problem for me. Can anyone here recommend any of these options? Or perhaps suggest other options available to me? Any help would be greatly appreciated.
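
    The "token" approach mentioned above maps directly onto S3's query-string authentication: the bucket stays private, and the page hands the player a signed URL that expires. A sketch with the boto library (credentials, bucket and key names are placeholders):

        from boto.s3.connection import S3Connection

        # Placeholder AWS credentials.
        conn = S3Connection("ACCESS_KEY", "SECRET_KEY")

        # Signed URL valid for 60 seconds; after that S3 returns Access Denied.
        url = conn.generate_url(60, "GET",
                                bucket="my-video-bucket", key="episode1.mp4")
        print(url)  # embed this in the HTML5 player at page-render time

    This doesn't stop a determined user from saving the stream during the window, but it does defeat the copy-the-URL-from-view-source attack described above, since the grabbed URL stops working once it expires.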

    Read the article

  • How to Draw Lines on the Screen (Part 2)

    - by Geertjan
    In part 1, I showed how you can click on the screen to create widgets and then connect those widgets together. But that's not really drawing, is it? (And I'm surprised nobody made that point in the comments to that blog entry.) Drawing doesn't really revolve around connecting dots together. It's more about using a free-flow style and being able to randomly write stuff onto a screen, without constraints. Something like this: I achieved the above by changing one line of code from the original referred to above. Instead of using a "mousePressed" event, I'm now using a "mouseDragged" event. That's all. And now the widgets are created when I drag my mouse on the scene. (I removed the rectangular select action, since that's also invoked during dragging and since that doesn't apply to the above scenario.) Now, the next step is to rewrite the NetBeans Platform Paint Application Tutorial, so that the Visual Library is used. That would be pretty cool.

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
    Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something.

    Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:

    • It is cost effective
    • It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
    • It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
    • It solves the right problem

    With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged.

    A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?)

    Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:

    • Specific details of security vulnerabilities
    • Including exploit information or technical details of the vulnerability
    • Whether or not there is any mitigation available (as in a patch)

    PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed - to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret. 
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
    The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!“ It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want.

    Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit.

    Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates.

    So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell-all to PCI and put their customers at risk, to do one of the following:

    • Contact PCI with your concerns
    • Contact Oracle (we are looking for vendors to sign our statement of concern)
    • And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application

    I like to be charitable and say “PCI meant well” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions. 
    By Way of Explanation… Non-related to PCI whatsoever, and the explanation for why I have not been blogging a lot recently, I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively.

    From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer.

    * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • Openvpn plugin openvpn-auth-ldap does not bind to Active Directory

    - by Selivanov Pavel
    I'm trying to configure OpenVPN with the openvpn-auth-ldap plugin to authorize users via Active Directory LDAP. When I use the same server config without the plugin option, and add a client config with a generated client key and cert, the connection is successful, so the problem is in the plugin.

    server.conf:

        plugin /usr/lib/openvpn/openvpn-auth-ldap.so "/etc/openvpn-test/openvpn-auth-ldap.conf"
        port 1194
        proto tcp
        dev tun
        keepalive 10 60
        topology subnet
        server 10.0.2.0 255.255.255.0
        tls-server
        ca ca.crt
        dh dh1024.pem
        cert server.crt
        key server.key
        #crl-verify crl.pem
        persist-key
        persist-tun
        user nobody
        group nogroup
        verb 3
        mute 20

    openvpn-auth-ldap.conf:

        <LDAP>
            URL ldap://dc1.domain:389
            TLSEnable no
            BindDN cn=bot_auth,cn=Users,dc=domain
            Password bot_auth
            Timeout 15
            FollowReferrals yes
        </LDAP>
        <Authorization>
            BaseDN "cn=Users,dc=domain"
            SearchFilter "(sAMAccountName=%u)"
            RequireGroup false
            # <Group>
            #     BaseDN "ou=groups,dc=mycompany,dc=local"
            #     SearchFilter "(|(cn=developers)(cn=artists))"
            #     MemberAttribute uniqueMember
            # </Group>
        </Authorization>

    The top-level domain in AD is used for historical reasons. An analogous configuration works for Apache 2.2 with mod_authnz_ldap. The user and password are correct.

    client.conf:

        remote server_name
        port 1194
        proto tcp
        client
        pull
        remote-cert-tls server
        dev tun
        resolv-retry infinite
        nobind
        ca ca.crt
        ; with keys - works fine
        #cert test.crt
        #key test.key
        ; without keys - by password
        auth-user-pass
        persist-tun
        verb 3
        mute 20

    In the server log there is the line

        PLUGIN_INIT: POST /usr/lib/openvpn/openvpn-auth-ldap.so '[/usr/lib/openvpn/openvpn-auth-ldap.so] [/etc/openvpn-test/openvpn-auth-ldap.conf]'

    which indicates that the plugin failed. I can telnet to dc1.domain:389, so this is not a network/firewall problem. Later the server says "TLS Error: TLS object -> incoming plaintext read error / TLS handshake failed" - without the plugin it tries to do the usual key authentication. 
    server log:

        Tue Nov 22 03:06:20 2011 OpenVPN 2.1.3 i486-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Oct 21 2010
        Tue Nov 22 03:06:20 2011 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Tue Nov 22 03:06:20 2011 PLUGIN_INIT: POST /usr/lib/openvpn/openvpn-auth-ldap.so '[/usr/lib/openvpn/openvpn-auth-ldap.so] [/etc/openvpn-test/openvpn-auth-ldap.conf]' intercepted=PLUGIN_AUTH_USER_PASS_VERIFY|PLUGIN_CLIENT_CONNECT|PLUGIN_CLIENT_DISCONNECT
        Tue Nov 22 03:06:20 2011 Diffie-Hellman initialized with 1024 bit key
        Tue Nov 22 03:06:20 2011 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted>
        Tue Nov 22 03:06:20 2011 Control Channel Authentication: using 'ta.key' as a OpenVPN static key file
        Tue Nov 22 03:06:20 2011 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
        Tue Nov 22 03:06:20 2011 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
        Tue Nov 22 03:06:20 2011 TLS-Auth MTU parms [ L:1543 D:168 EF:68 EB:0 ET:0 EL:0 ]
        Tue Nov 22 03:06:20 2011 Socket Buffers: R=[87380->131072] S=[16384->131072]
        Tue Nov 22 03:06:20 2011 TUN/TAP device tun1 opened
        Tue Nov 22 03:06:20 2011 TUN/TAP TX queue length set to 100
        Tue Nov 22 03:06:20 2011 /sbin/ifconfig tun1 10.0.2.1 netmask 255.255.255.0 mtu 1500 broadcast 10.0.2.255
        Tue Nov 22 03:06:20 2011 Data Channel MTU parms [ L:1543 D:1450 EF:43 EB:4 ET:0 EL:0 ]
        Tue Nov 22 03:06:20 2011 GID set to nogroup
        Tue Nov 22 03:06:20 2011 UID set to nobody
        Tue Nov 22 03:06:20 2011 Listening for incoming TCP connection on [undef]
        Tue Nov 22 03:06:20 2011 TCPv4_SERVER link local (bound): [undef]
        Tue Nov 22 03:06:20 2011 TCPv4_SERVER link remote: [undef]
        Tue Nov 22 03:06:20 2011 MULTI: multi_init called, r=256 v=256
        Tue Nov 22 03:06:20 2011 IFCONFIG POOL: base=10.0.2.2 size=252
        Tue Nov 22 03:06:20 2011 MULTI: TCP INIT maxclients=1024 maxevents=1028
        Tue Nov 22 03:06:20 2011 Initialization Sequence Completed
        Tue Nov 22 03:07:10 2011 MULTI: multi_create_instance called
        Tue Nov 22 03:07:10 2011 Re-using SSL/TLS context
        Tue Nov 22 03:07:10 2011 Control Channel MTU parms [ L:1543 D:168 EF:68 EB:0 ET:0 EL:0 ]
        Tue Nov 22 03:07:10 2011 Data Channel MTU parms [ L:1543 D:1450 EF:43 EB:4 ET:0 EL:0 ]
        Tue Nov 22 03:07:10 2011 Local Options hash (VER=V4): 'c413e92e'
        Tue Nov 22 03:07:10 2011 Expected Remote Options hash (VER=V4): 'd8421bb0'
        Tue Nov 22 03:07:10 2011 TCP connection established with [AF_INET]10.0.0.9:47808
        Tue Nov 22 03:07:10 2011 TCPv4_SERVER link local: [undef]
        Tue Nov 22 03:07:10 2011 TCPv4_SERVER link remote: [AF_INET]10.0.0.9:47808
        Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS: Initial packet from [AF_INET]10.0.0.9:47808, sid=a2cd4052 84b47108
        Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS_ERROR: BIO read tls_read_plaintext error: error:140890C7:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:peer did not return a certificate
        Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS Error: TLS object -> incoming plaintext read error
        Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS Error: TLS handshake failed
        Tue Nov 22 03:07:11 2011 10.0.0.9:47808 Fatal TLS error (check_tls_errors_co), restarting
        Tue Nov 22 03:07:11 2011 10.0.0.9:47808 SIGUSR1[soft,tls-error] received, client-instance restarting
        Tue Nov 22 03:07:11 2011 TCP/UDP: Closing socket

    client log:

        Tue Nov 22 03:06:18 2011 OpenVPN 2.1.3 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Oct 22 2010
        Enter Auth Username:user
        Enter Auth Password:
        Tue Nov 22 03:06:25 2011 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Tue Nov 22 03:06:25 2011 Control Channel Authentication: using 'ta.key' as a OpenVPN static key file
        Tue Nov 22 03:06:25 2011 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
        Tue Nov 22 03:06:25 2011 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
        Tue Nov 22 03:06:25 2011 Control Channel MTU parms [ L:1543 D:168 EF:68 EB:0 ET:0 EL:0 ]
        Tue Nov 22 03:06:25 2011 Socket Buffers: R=[87380->131072] S=[16384->131072]
        Tue Nov 22 03:06:25 2011 Data Channel MTU parms [ L:1543 D:1450 EF:43 EB:4 ET:0 EL:0 ]
        Tue Nov 22 03:06:25 2011 Local Options hash (VER=V4): 'd8421bb0'
        Tue Nov 22 03:06:25 2011 Expected Remote Options hash (VER=V4): 'c413e92e'
        Tue Nov 22 03:06:25 2011 Attempting to establish TCP connection with [AF_INET]10.0.0.2:1194 [nonblock]
        Tue Nov 22 03:06:26 2011 TCP connection established with [AF_INET]10.0.0.2:1194
        Tue Nov 22 03:06:26 2011 TCPv4_CLIENT link local: [undef]
        Tue Nov 22 03:06:26 2011 TCPv4_CLIENT link remote: [AF_INET]10.0.0.2:1194
        Tue Nov 22 03:06:26 2011 TLS: Initial packet from [AF_INET]10.0.0.2:1194, sid=7a3c2a0f bd35bca7
        Tue Nov 22 03:06:26 2011 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
        Tue Nov 22 03:06:26 2011 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funston/CN=Fort-Funston_CA/[email protected]
        Tue Nov 22 03:06:26 2011 Validating certificate key usage
        Tue Nov 22 03:06:26 2011 ++ Certificate has key usage 00a0, expects 00a0
        Tue Nov 22 03:06:26 2011 VERIFY KU OK
        Tue Nov 22 03:06:26 2011 Validating certificate extended key usage
        Tue Nov 22 03:06:26 2011 ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
        Tue Nov 22 03:06:26 2011 VERIFY EKU OK
        Tue Nov 22 03:06:26 2011 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funston/CN=server/[email protected]
        Tue Nov 22 03:06:26 2011 Connection reset, restarting [0]
        Tue Nov 22 03:06:26 2011 TCP/UDP: Closing socket
        Tue Nov 22 03:06:26 2011 SIGUSR1[soft,connection-reset] received, process restarting
        Tue Nov 22 03:06:26 2011 Restart pause, 5 second(s)
        ^CTue Nov 22 03:06:27 2011 SIGINT[hard,init_instance] received, process exiting

    Does anybody know how to get openvpn-auth-ldap working?
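
    One observation on the logs above: "SSL3_GET_CLIENT_CERTIFICATE:peer did not return a certificate" means the server is still demanding a client certificate, which the password-only client config never sends. On OpenVPN 2.1 the usual companion options for plugin-based password authentication are these (offered as a hedged suggestion; later OpenVPN versions replace the first option with "verify-client-cert none", so check the man page for your version):

        # server.conf additions for username/password-only clients
        client-cert-not-required
        username-as-common-name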

    Read the article

  • Xcode warning: application executable contains unsupported architecture(s):arm, arm (-19031)

    - by rmvz3
    Hi all. I've been receiving this warning since I loaded my project in the last Xcode 4 preview. There was no warning before that, but now I can't get rid of it, even in Xcode 3.2. I've been googling, but nobody seems to have the same error. My project and target settings are correct (IMHO): Architectures: Standard (armv6 armv7); Base SDK: Latest iOS (currently set to iOS 4.2); Build Active Architecture Only: FALSE; Valid Architectures: armv6 armv7. I compared every project setting with other projects and found no differences. I even recreated the project from scratch, copying classes, resources and frameworks, with the same result. I must say that the warning is not shown when I set the Debug configuration. I hope someone can help me because I don't know what to do. Thanks in advance.

    Read the article

  • Removing Left Recursion in ANTLR

    - by prosseek
    As explained in http://stackoverflow.com/questions/2652060/removing-left-recursion, there are two ways to remove left recursion:

    1. Modify the original grammar to remove the left recursion using some procedure.
    2. Write the grammar without left recursion in the first place.

    What do people normally use to remove (or avoid) left recursion with ANTLR? I've used flex/bison for parsers, but now I need to use ANTLR. The only thing I'm concerned about in using ANTLR (or an LL parser in general) is left recursion removal. In a practical sense, how serious a problem is left recursion removal in ANTLR? Is it a showstopper for using ANTLR, or does nobody in the ANTLR community care about it? I like ANTLR's idea of AST generation. In terms of getting an AST quickly and easily, which of the two left-recursion-removal methods is preferable?
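
    For illustration, the textbook rewrite that both approaches converge on (a sketch with generic rule names): the left-recursive rule is re-expressed with EBNF repetition, which an LL parser like ANTLR accepts:

        // Left-recursive: an LL parser cannot handle this form.
        expr : expr '+' term
             | term
             ;

        // Equivalent EBNF form without left recursion, accepted by ANTLR:
        expr : term ('+' term)* ;

    The two forms recognize the same language; the difference shows up mainly in how you shape the AST for a left-associative operator.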

    Read the article

  • Why aren't we programming on the GPU???

    - by Chris
    So I finally took the time to learn CUDA and get it installed and configured on my computer, and I have to say, I'm quite impressed! Here's how it performs rendering the Mandelbrot set at 1280 x 678 pixels on my home PC with a Q6600 and a GeForce 8800GTS (max of 1000 iterations):

        Maxing out all 4 CPU cores with OpenMP: 2.23 fps
        Running the same algorithm on my GPU: 104.7 fps

    And here's how fast I got it to render the whole set at 8192 x 8192 with a max of 1000 iterations:

        Serial implementation on my home PC: 81.2 seconds
        All 4 CPU cores on my home PC (OpenMP): 24.5 seconds
        32 processors on my school's super computer (MPI with master-worker): 1.92 seconds
        My home GPU (CUDA): 0.310 seconds
        4 GPUs on my school's super computer (CUDA with static domain decomposition): 0.0547 seconds

    So here's my question - if we can get such huge speedups by programming the GPU instead of the CPU, why is nobody doing it??? I can think of so many things we could speed up like this, and yet I don't know of many commercial apps that are actually doing it. Also, what kinds of other speedups have you seen by offloading your computations to the GPU?

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >