Search Results

Search found 30819 results on 1233 pages for 'software security'.


  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
    Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year’s conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February ( but due to NDA restrictions I could not share them with the developer community at large ) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference.

    First, let’s travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources in such a relative edge case, but my guess is that they felt it was an important edge case at the time, as some of the more vocal .NET evangelists as well as some very high profile start-ups ( i.e. Twitter ) had publicly announced their intent to use Rails.

    Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially, by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it. This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer.

    In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft’s Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, “If it’s not MVC, it’s crap”. Now, this is a rather ignorant position to take, as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline. So over the past few years some horror stories have begun to bubble to the surface about software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC.
    These large scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision: that Microsoft was promoting MVC as “the future”. These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in less stable, less scalable and more complicated systems – basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product, which has been under active development for more than 18 months, and revert back to WebForms.

    The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war, because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another, and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel “dirty”. The fragmentation has also made it difficult to attract newcomers, as the perceived barrier to entry for learning ASP.NET has become higher. As a result many new software developers entering the market are gravitating to environments where the development model seems more simple and intuitive ( i.e. PHP or Ruby ).

    At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting Sharepoint as a platform for building internal enterprise web applications. Sharepoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, Sharepoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts – both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another direction, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high level plan from Microsoft to ensure consistency and continuity across the different product lines.

    The introduction of ASP.NET MVC also made some of the third party control vendors scratch their heads and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers.
    And even after MVC was introduced, these vendors continued to improve their products, offering greater productivity and a richer AJAX-driven user experience than what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, the demand remained strong and the third party web control market continued to prosper despite the availability of MVC.

    While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume, whether it’s a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and Javascript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose JSON format as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust Javascript libraries like jQuery and Knockout for browser-based, client-side techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC.

    In response to these new industry trends, Microsoft did what it always does – it immediately poured some resources into developing a solution which would ensure it remains relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band with a dependency only on ASP.NET 4.0, it means that it can be used immediately in a variety of production scenarios.

    So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the “Future of ASP.NET”. In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward – perhaps the moniker of “ASP.NET Web Stack”, coined a couple of years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on Codeplex a few months back, will eventually stick. Web API is being positioned as the unification of ASP.NET – the glue that is able to pull this fragmented mess back together again. The “One ASP.NET” strategy will promote the use of all frameworks – WebForms, MVC, and Web API – even within the same web project.
    Basically the message is to utilize the appropriate aspects of each framework to solve your business problems. Instead of steering developers toward a fork in the road, the plan is to educate them that “hybrid” applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with the client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and Sharepoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again.

    And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward ( MVC4 may in fact be the last significant iteration of this framework ). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn’t, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary areas where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense.

    So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry’s trend towards web services and client-side development. We decided to utilize a “hybrid” model which would provide compatibility for existing modules while at the same time providing a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Services Framework which is actually built on top of MVC2 ( we chose to leverage MVC because it had the most intuitive, lightweight REST implementation in the .NET stack ). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke ( which is expected to be released in Q4 2012 ) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the “One ASP.NET” strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.

    Read the article

  • Hidden divs for "lazy javascript" loading? Possible security/other issues?

    - by xyld
    I'm curious about people's opinions and thoughts on this situation. The reason I'd like to lazy load javascript is performance. Loading javascript at the end of the body reduces browser blocking and ends up with much faster page loads. But there is some automation I'm using to generate the html (django specifically). This automation has the convenience of allowing forms to be built with "Widgets" that output the content needed to render the entire widget (extra javascript, css, ...). The problem is that the widget wants to output javascript immediately into the middle of the document, but I want to ensure all javascript loads at the end of the body. When the following widget is added to a form, you can see it renders some <script>...</script> tags:

        class AutoCompleteTagInput(forms.TextInput):
            class Media:
                css = { 'all': ('css/jquery.autocomplete.css', ) }
                js = (
                    'js/jquery.bgiframe.js',
                    'js/jquery.ajaxQueue.js',
                    'js/jquery.autocomplete.js',
                )

            def render(self, name, value, attrs=None):
                output = super(AutoCompleteTagInput, self).render(name, value, attrs)
                page_tags = Tag.objects.usage_for_model(DataSet)
                tag_list = simplejson.dumps([tag.name for tag in page_tags], ensure_ascii=False)
                return mark_safe(u'''<script type="text/javascript">
                    jQuery("#id_%s").autocomplete(%s, { width: 150, max: 10, highlight: false, scroll: true, scrollHeight: 100, matchContains: true, autoFill: true });
                    </script>''' % (name, tag_list,)) + output

    What I'm proposing is that if someone uses a <div class="lazy-js">...</div> with some css (.lazy-js { display: none; }) and some javascript (jQuery('.lazy-js').each(function(index) { eval(jQuery(this).text()); })), you can effectively force all javascript to load at the end of page load:

        class AutoCompleteTagInput(forms.TextInput):
            class Media:
                css = { 'all': ('css/jquery.autocomplete.css', ) }
                js = (
                    'js/jquery.bgiframe.js',
                    'js/jquery.ajaxQueue.js',
                    'js/jquery.autocomplete.js',
                )

            def render(self, name, value, attrs=None):
                output = super(AutoCompleteTagInput, self).render(name, value, attrs)
                page_tags = Tag.objects.usage_for_model(DataSet)
                tag_list = simplejson.dumps([tag.name for tag in page_tags], ensure_ascii=False)
                return mark_safe(u'''<div class="lazy-js">
                    jQuery("#id_%s").autocomplete(%s, { width: 150, max: 10, highlight: false, scroll: true, scrollHeight: 100, matchContains: true, autoFill: true });
                    </div>''' % (name, tag_list,)) + output

    Never mind all the details of my specific implementation (the specific media involved); I'm looking for a consensus on whether the method of lazy-loading javascript through hidden tags can pose issues, whether security related or otherwise. One of the most convenient parts about this is that it follows the DRY principle rather well IMO, because you don't need to hack up a specific lazy-load for each instance in the page. It just "works". UPDATE: I'm not sure if django has the ability to queue things (via fancy template inheritance or something?) to be output just before the end of the </body>?
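    One way to approach the UPDATE question is a response middleware that collects the contents of the lazy-js divs and re-emits them as a single script block just before </body>. The sketch below is only an illustration, not a built-in Django feature: the class name, the regex-based extraction, and the assumption that the widgets emit <div class="lazy-js"> markers are all hypothetical.

        import re

        LAZY_DIV_RE = re.compile(r'<div class="lazy-js">(.*?)</div>', re.DOTALL)

        class LazyScriptMiddleware(object):
            """Hypothetical middleware: pull the bodies out of all lazy-js divs
            and re-emit them as one <script> block just before </body>."""

            def process_response(self, request, response):
                if 'text/html' not in response.get('Content-Type', ''):
                    return response
                html = response.content.decode('utf-8')
                scripts = LAZY_DIV_RE.findall(html)
                if scripts:
                    html = LAZY_DIV_RE.sub('', html)
                    bundle = '<script type="text/javascript">%s</script>' % '\n'.join(scripts)
                    html = html.replace('</body>', bundle + '</body>')
                    response.content = html.encode('utf-8')
                return response

    Registered in MIDDLEWARE_CLASSES, this keeps the widgets DRY but moves the deferred code into a real script element, which also sidesteps the eval() call that reviewers tend to flag as the main weakness of the hidden-div approach.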

    Read the article

  • Problem when trying to update "Duplicate sources.list"

    - by Coca Akat
    I got this problem when trying to update using sudo apt-get update:

        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ saucy-backports/multiverse amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_saucy-backports_multiverse_binary-amd64_Packages)
        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ saucy-backports/multiverse i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_saucy-backports_multiverse_binary-i386_Packages)
        W: You may want to run apt-get update to correct these problems

    This is my sources.list:

        # deb cdrom:[Ubuntu 13.10 _Saucy Salamander_ - Release amd64 (20131016.1)]/ saucy main restricted

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://archive.ubuntu.com/ubuntu saucy main restricted

        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://archive.ubuntu.com/ubuntu saucy-updates main restricted

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu saucy universe
        deb http://archive.ubuntu.com/ubuntu saucy-updates universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://archive.ubuntu.com/ubuntu saucy multiverse
        deb http://archive.ubuntu.com/ubuntu saucy-updates multiverse

        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu saucy-backports main restricted universe multiverse

        deb http://archive.ubuntu.com/ubuntu saucy-security main restricted
        deb http://archive.ubuntu.com/ubuntu saucy-security universe
        deb http://archive.ubuntu.com/ubuntu saucy-security multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        # deb http://extras.ubuntu.com/ubuntu saucy main
        # deb-src http://extras.ubuntu.com/ubuntu saucy main
        # deb http://archive.canonical.com/ saucy partner
        # deb-src http://archive.canonical.com/ saucy partner

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu saucy-backports multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
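    The collision here is the trailing "deb http://archive.ubuntu.com/ubuntu saucy-backports multiverse" line near the end of the file: multiverse is already covered by the earlier "saucy-backports main restricted universe multiverse" entry, so removing one of the two and re-running sudo apt-get update should clear the warnings. To spot this kind of overlap without reading the file by eye, a small Python sketch like the one below can list duplicated components across sources.list and sources.list.d (the file paths and report-only behaviour are assumptions):

        import glob
        from collections import defaultdict

        def expand(line):
            """Split one 'deb'/'deb-src' line into (type, url, suite, component) tuples."""
            parts = line.split()
            if len(parts) >= 4 and parts[0] in ('deb', 'deb-src'):
                for component in parts[3:]:
                    yield (parts[0], parts[1].rstrip('/'), parts[2], component)

        def find_duplicates(paths):
            seen = defaultdict(list)
            for path in paths:
                with open(path) as handle:
                    for lineno, raw in enumerate(handle, 1):
                        line = raw.strip()
                        if line and not line.startswith('#'):
                            for entry in expand(line):
                                seen[entry].append('%s:%d' % (path, lineno))
            return {entry: spots for entry, spots in seen.items() if len(spots) > 1}

        if __name__ == '__main__':
            files = ['/etc/apt/sources.list'] + glob.glob('/etc/apt/sources.list.d/*.list')
            for entry, spots in sorted(find_duplicates(files).items()):
                print('%s %s %s %s  ->  %s' % (entry + (', '.join(spots),)))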

    Read the article

  • Virtual Machines and Automatic Software Updates

    - by Zian Choy
    It's obvious that one's main computer should always have all the latest security patches, and most people don't blink an eye when Microsoft Update installs non-security updates. In the land of virtual machines, I've run into 2 problems with automatic updates:

    1. The virtual machines are only run when needed.
    2. Only Windows virtual machines seem to patch themselves.

    To elaborate on #1, I generally make a virtual machine with a purpose in mind. For example, when I needed an old copy of Internet Explorer to reproduce a bug in RSS Bandit, I had a Virtual PC named RSS Bandit. The machine only stayed running for a few minutes at a time. Consequently, there is no downtime for the machine to download updates at 3 AM. To elaborate on #2, I've noticed that if I haven't run a Windows virtual machine in a while, then the moment I log in, the computer frantically downloads updates and within seconds, if I click the Start button, there is a little orange shield next to the "Shutdown" button. However, I ran a freshly created Ubuntu VM for several hours today with hundreds of updates pending and it seemed to never download or install any of them. Is there any reason to be concerned about running VMs with dozens of security holes? If I should be concerned, then is there any way to get Ubuntu to download and install updates rather than just advertising a long list of updates to download next century? I've already tried telling Ubuntu to automatically download and install updates.
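    One workaround for short-lived VMs is to force a patch run every time the guest boots instead of waiting for a scheduled window. Ubuntu's unattended-upgrades package is the more standard route if the VM stays up long enough for it to fire; the sketch below is just a minimal Python wrapper for the boot-time idea, with the assumption that it runs as root (for example from a cron @reboot entry).

        import subprocess
        import sys

        def run(cmd):
            """Run an apt command, echoing it so failures are easy to spot in a log."""
            print('>>> ' + ' '.join(cmd))
            return subprocess.call(cmd)

        def main():
            # Refresh the package lists, then apply pending upgrades non-interactively.
            if run(['apt-get', 'update']) != 0:
                sys.exit('apt-get update failed')
            if run(['apt-get', '-y', 'upgrade']) != 0:
                sys.exit('apt-get upgrade failed')

        if __name__ == '__main__':
            main()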

    Read the article

  • Security in shared hosting vs VPS 'virtual appliances'

    - by Pedro Loureiro
    I have to change my hosting provider. Right now I have a shared hosting account, but I'm considering trying the LAMP stack appliance from turnkeylinux.org. I'm very comfortable with using linux; I've been using it for a long time. I have no problem ssh'ing into remote machines and doing whatever I have to do (coding, reading logs, moving files, deploying, etc). The problem is that none of those tasks have involved securing the server or the firewall. My experience has been as a desktop user or as a developer deploying apps/files on remote servers. Ignoring the security in the application logic (read: any scripts, frameworks, or websites I might have created or installed), I'm worried about things like the base configuration of daemons, the firewall, open ports, executable scripts being readable from the outside, and whatnot. My question is: how do you compare the (expected) out-of-the-box security of the LAMP stack from turnkey with the (expected) security of a "regular" shared hosting provider? I was hoping to find some guides with a list of steps to take to protect my server, but the only documentation I found simply referred to Ubuntu's documentation.
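    A first, very rough way to compare the two setups is to see what each one exposes to the internet, since on the appliance every listening service becomes your responsibility. The sketch below (host names and port list are placeholders) checks a handful of common ports from the outside; nmap gives a far more complete picture, but even a crude check makes the difference concrete.

        import socket

        COMMON_PORTS = {21: 'ftp', 22: 'ssh', 25: 'smtp', 80: 'http',
                        110: 'pop3', 143: 'imap', 443: 'https', 3306: 'mysql'}

        def exposed_ports(host, timeout=2.0):
            """Return the subset of COMMON_PORTS that accept a TCP connection."""
            open_ports = []
            for port, name in sorted(COMMON_PORTS.items()):
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.settimeout(timeout)
                try:
                    if sock.connect_ex((host, port)) == 0:
                        open_ports.append((port, name))
                finally:
                    sock.close()
            return open_ports

        if __name__ == '__main__':
            for host in ('my-turnkey-vps.example.com', 'my-shared-host.example.com'):
                print(host, exposed_ports(host))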

    Read the article

  • Win 2008 R2 terminal server and redirected printer queue security

    - by Ian
    I have a case where I need a non-priv account to be able to make a modification to the redirected printer. I know, it's not advisable, but we're not giving them access - changes will be made in code. So, following the docs (http://technet.microsoft.com/en-us/library/ee524015(WS.10).aspx) I modified the default security for new printer queues. This doesn't work though, as Windows doesn't seem to assign the privs you configure in the printer admin tool to redirected printer queues. As a test I added a non-priv test user to the default security tab in the printer admin tool (control panel - admin tools - printer admin). I assigned it all privs (it's a test) and logged the user into the terminal server. The redirected printers duly appeared as usual. However, if I open the printer properties - security tab, the user appears in the list of accounts/groups but the options I selected (all privs) are not set. Instead the user's special privs box is marked, and when I click on 'advanced options' and view them, there is nothing marked. So, something is clearing these options.... the question is, why, and how can I convince it not to? Ian

    Read the article

  • (Some) security perms in WinXP corrupted (shows GUID instead of username)

    - by Andy
    I've been using my Win XP machine (part of a domain) over the holiday period, so until yesterday it hadn't rebooted for about five days. I used it yesterday perfectly fine and shut it down. When I switched it on this morning the majority (but not all) of my shortcut links in the Quick Launch toolbar showed as generic file icons. If you open the folder and get properties on one of the failing shortcuts it says ''Target type: This is not a valid shortcut''. Then in Outlook I noticed my signature wasn't showing (I checked my sent folder and the sig was ok yesterday). Checking the signature folder, I can't see the security tab on any of the sig files, and I have an access denied message on trying to open them. I can see the security tab on the signature folder itself, just none of the contents. If I try and use the parent folder's security tab and ''Replace permission entries on all child objects with entries shown here that apply to child objects'' it appears to work fine, but makes no actual difference. I logged in as administrator and saw that the owner of the files showed up as a GUID (clearly should've been my account in its place). Any ideas what might have made that happen? So far I haven't heard any similar complaints from anyone else at the office...

    Read the article

  • The best dvd ripper software in 2014 review

    - by user328170
    The top 3 DVD ripping tools in 2014. Nowadays everyone may have several smart mobile devices, such as an iPhone, iPad Air, iPad mini, Samsung Galaxy or Sony Xperia. If you want to take your movies with you on your mobile devices, or sometimes just want to back up those classic physical discs on your notebook or workstation at high quality, you need fast and stable software to rip them and convert them to the format you like. Fortunately, there are plenty of great software products designed to make the process easy and transform a DVD into files that are playable on any mobile device you choose. We have done a full review of dozens of products; here are the three best, based on our review. We tested each program on ripping speed, ease of use, reliability and ripping capability.

    The top one is still WinX DVD Ripper Platinum. We tested its 6.1 version 2 years ago for its ability to quickly and easily rip DVDs and Blu-ray discs to high quality MKV files with a single click, and it left a deep impression. This time we tested its latest 7.3.5 version. Besides ease of use and speed, we tested its capability to decrypt all kinds of discs with different protection methods, for example Disney X-project DRM, Sony ArccOS, RCE and region codes. The results show that WinX DVD Ripper Platinum still maintains its advantages in all these areas. It is a focused DVD ripper whose basic duty is to rip and convert DVDs. The UI has a modern, technical look: the main functions are shown prominently while the specialist options are hidden for advanced users, making it clear and convenient to use. Two companies, Weisoft Limited and Digiarty, provide the software; Weisoft Limited focuses on the USA, UK and Australia markets, while Digiarty focuses on the others. Ripping speed: 5/5 | Ease of use: 5/5 | Reliability: 5/5 | Ripping capability: 5/5

    The second one is DVDFab. DVDFab is also very robust at ripping discs and can decrypt most of the discs on the market. Its weaknesses are still ease of use and speed. We'd note that the app is frequently updated to cut through the copy protection on even the latest DVDs and Blu-ray discs. The app is shareware, meaning most features are free, including decrypting and ripping to your hard drive. Many of you note that you use another app for compression and authoring, but many of you also say, hey, storage is cheap, and the rips from DVDFab are easy, one-click, and they work. Ripping speed: 3/5 | Ease of use: 4/5 | Reliability: 5/5 | Ripping capability: 5/5

    The third one is Handbrake. Handbrake is our favorite video encoder for a reason: it's simple, easy to use, easy to install, and offers lots of options to get a high quality file as a result. If you're scared by the options, you don't even have to use them – the app will compensate for you and pick settings it thinks you'll like based on your destination device. So many of you like Handbrake that many of you use it in conjunction with another app (like VLC, which makes ripping easy) – you'll let another app do the rip and crack the DRM on your discs, and then process the file through Handbrake for encoding. The app is fast, can make the most of multi-core processors to speed up the process, and is completely open source. Ripping speed: 3/5 | Ease of use: 4/5 | Reliability: 4/5 | Ripping capability: 4/5

    Read the article

  • Ubuntu Control Center Makes Using Ubuntu Easier

    - by Vivek
    Users who are new to Ubuntu might find it somewhat difficult to configure. Today we take a look at using Ubuntu Control Center, which makes managing different aspects of the system easier.

    About Ubuntu Control Center: A lot of utilities and software have been written to work with Ubuntu. Ubuntu Control Center is one such cool utility which makes it easy to configure Ubuntu. The following is a brief description: Ubuntu Control Center, or UCC, is an application inspired by Mandriva Control Center that aims to centralize and organize, in a simple and intuitive form, the main configuration tools for the Ubuntu distribution. UCC uses all the native applications already bundled with Ubuntu, but it also utilizes some third-party apps like "Hardinfo", "Boot-up Manager", "GuFW" and "Font-Manager".

    Here we look at installation and use of Ubuntu Control Center in Ubuntu 10.04. First we have to satisfy some dependencies. You will need to install Font-Manager and jstest-gtk (links below) before installing Ubuntu Control Center (UCC). Click the Install Package button. You'll be prompted to enter your admin password for each installation package. Once installation is successful, close out of the screen. Download and install Font-Manager… again you'll need to enter your password to complete installation. Once you have installed the two dependencies, you are all set to install Ubuntu Control Center (link below); double click the downloaded Ubuntu Control Center deb file to install it. Once installed you can find it under Applications \ System Tools \ UCC. Once you launch it you can start managing your system, software, hardware, and more.

    You can easily control various aspects of your Ubuntu system using Ubuntu Control Center. Here we look at configuring the firewall under Network and Internet. UCC allows easy access for configuring several aspects of your system. Once you install UCC you'll see how easy it is to configure your Ubuntu system through an intuitive, clean graphical interface. If you're new to Ubuntu, using UCC can help you set up your system the way you like in a user friendly way.

    Home Page of UCC: http://code.google.com/p/ucc/
    Links: Download Font-Manager | Download jstest-gtk | Ubuntu Control Center (UCC)

    Read the article

  • Using Oracle Proxy Authentication with JPA (eclipselink-Style)

    - by olaf.heimburger
    Security is a very intriguing topic. You will find it everywhere and you need to implement it everywhere. Yes, you need. Unfortunately, one can easily forget it while implementing the last mile.

    The Last Mile
    In a multi-tier application it is a common practice to use connection pools between the business layer and the database layer. Connection pools are quite useful to speed up database connection creation and to split the load. Another very common practice is to use a specific, often called technical, user to connect to the database. This user has authentication and authorization rules that apply to all application users. Imagine you've put every effort into defining roles for the different types of users that use your application. These roles are necessary to differentiate between normal users, premium users, and administrators (I bet you will find or already have more roles in your application). While these user roles are pretty well used within your application, once the flow of execution enters the database everything is gone. Each and every user just has one role and is the same database user.

    Issues? What Issues?
    As long as things go well, this is not a real issue. However, things do not go well all the time. Once your application becomes famous, performance decreases in certain situations or, more importantly, current and upcoming regulations and laws require that your application must be able to apply different security measures on a per user role basis at every stage of your application. If you only have a bunch of users with the same name and role, you are not able to find the application usage profile that causes the performance issue, or which user has accessed data that he/she is not allowed to. Another threat to your role concept is that databases tend to be used by different applications and tools. These tools can be developer tools like SQL*Plus, SQL Developer, etc. or end user applications like BI Publisher, Oracle Forms and so on. These tools have no idea of your application's role concept and access the database the way they think is appropriate. A big oversight for your perfect role model and a big nightmare for your Chief Security Officer. Speaking of the CSO brings up another issue: password management. Once your technical user account is compromised, every user is able to do things that he/she is not expected to do from the design of your application.

    Counter Measures
    In the Oracle world a common counter measure is to use Virtual Private Database (VPD). This restricts the values a database user can see to the allowed minimum. However, it doesn't help in regard to a connection pool user, because this one is still not the real user.

    Oracle Proxy Authentication
    Another feature of the Oracle database is Proxy Authentication. First introduced with version 9i, it is a quite useful feature for nearly every situation. The main idea behind Proxy Authentication is to create a crippled database user who has only connect rights. Even if this user is compromised the risks are well understood and fairly limited. This user can be used in every situation in which you need to connect to the database, no matter which tool or application (see above) you use. The proxy user is perfect for multi-tier connection pools.

        CREATE USER app_user IDENTIFIED BY abcd1234;
        GRANT CREATE SESSION TO app_user;

    But what if you need to access real data? Well, this is the primary use case, isn't it? Now is the time to bring the application's role concept into play.
You define database roles that define the grants for your identified user groups. Once you have these groups you grant access through the proxy user with the application role to the specific user. CREATE ROLE app_role_a; GRANT app_role_a TO scott; ALTER USER scott GRANT CONNECT THROUGH app_user WITH ROLE app_role_a; Now, hr has permission to connect to the database through the proxy user. Through the role you can restrict the hr's rights the are needed for the application only. If hr connects to the database directly all assigned role and permissions apply. Testing the Setup To test the setup you can use SQL*Plus and connect to your database: $ sqlplus app_user[hr]/abcd1234 Java Persistence API The Java Persistence API (JPA) is a fairly easy means to build applications that retrieve data from the database and put it into Java objects. You use plain old Java objects (POJOs) and mixin some Java annotations that define how the attributes of the object are used for storing data from the database into the Java object. Here is a sample for objects from the HR sample schema EMPLOYEES table. When using Java annotations you only specify what can not be deduced from the code. If your Java class name is Employee but the table name is EMPLOYEES, you need to specify the table name, otherwise it will fail. package demo.proxy.ejb; import java.io.Serializable; import java.sql.Timestamp; import java.util.List; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.Id; import javax.persistence.JoinColumn; import javax.persistence.ManyToOne; import javax.persistence.NamedQueries; import javax.persistence.NamedQuery; import javax.persistence.OneToMany; import javax.persistence.Table; @Entity @NamedQueries({ @NamedQuery(name = "Employee.findAll", query = "select o from Employee o") }) @Table(name = "EMPLOYEES") public class Employee implements Serializable { @Column(name="COMMISSION_PCT") private Double commissionPct; @Column(name="DEPARTMENT_ID") private Long departmentId; @Column(nullable = false, unique = true, length = 25) private String email; @Id @Column(name="EMPLOYEE_ID", nullable = false) private Long employeeId; @Column(name="FIRST_NAME", length = 20) private String firstName; @Column(name="HIRE_DATE", nullable = false) private Timestamp hireDate; @Column(name="JOB_ID", nullable = false, length = 10) private String jobId; @Column(name="LAST_NAME", nullable = false, length = 25) private String lastName; @Column(name="PHONE_NUMBER", length = 20) private String phoneNumber; private Double salary; @ManyToOne @JoinColumn(name = "MANAGER_ID") private Employee employee; @OneToMany(mappedBy = "employee") private List employeeList; public Employee() { } public Employee(Double commissionPct, Long departmentId, String email, Long employeeId, String firstName, Timestamp hireDate, String jobId, String lastName, Employee employee, String phoneNumber, Double salary) { this.commissionPct = commissionPct; this.departmentId = departmentId; this.email = email; this.employeeId = employeeId; this.firstName = firstName; this.hireDate = hireDate; this.jobId = jobId; this.lastName = lastName; this.employee = employee; this.phoneNumber = phoneNumber; this.salary = salary; } public Double getCommissionPct() { return commissionPct; } public void setCommissionPct(Double commissionPct) { this.commissionPct = commissionPct; } public Long getDepartmentId() { return departmentId; } public void setDepartmentId(Long departmentId) { this.departmentId = departmentId; } public String getEmail() { 
return email; } public void setEmail(String email) { this.email = email; } public Long getEmployeeId() { return employeeId; } public void setEmployeeId(Long employeeId) { this.employeeId = employeeId; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public Timestamp getHireDate() { return hireDate; } public void setHireDate(Timestamp hireDate) { this.hireDate = hireDate; } public String getJobId() { return jobId; } public void setJobId(String jobId) { this.jobId = jobId; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getPhoneNumber() { return phoneNumber; } public void setPhoneNumber(String phoneNumber) { this.phoneNumber = phoneNumber; } public Double getSalary() { return salary; } public void setSalary(Double salary) { this.salary = salary; } public Employee getEmployee() { return employee; } public void setEmployee(Employee employee) { this.employee = employee; } public List getEmployeeList() { return employeeList; } public void setEmployeeList(List employeeList) { this.employeeList = employeeList; } public Employee addEmployee(Employee employee) { getEmployeeList().add(employee); employee.setEmployee(this); return employee; } public Employee removeEmployee(Employee employee) { getEmployeeList().remove(employee); employee.setEmployee(null); return employee; } } JPA could be used in standalone applications and Java EE containers. In both worlds you normally create a Facade to retrieve or store the values of the Entities to or from the database. The Facade does this via an EntityManager which will be injected by the Java EE container. Here is sample Facade Session Bean for a Java EE container. 
package demo.proxy.ejb; import java.util.HashMap; import java.util.List; import javax.ejb.Local; import javax.ejb.Remote; import javax.ejb.Stateless; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import javax.persistence.Query; import javax.interceptor.AroundInvoke; import javax.interceptor.InvocationContext; import oracle.jdbc.driver.OracleConnection; import org.eclipse.persistence.config.EntityManagerProperties; import org.eclipse.persistence.internal.jpa.EntityManagerImpl; @Stateless(name = "DataFacade", mappedName = "ProxyUser-TestEJB-DataFacade") @Remote @Local public class DataFacadeBean implements DataFacade, DataFacadeLocal { @PersistenceContext(unitName = "TestEJB") private EntityManager em; private String username; public Object queryByRange(String jpqlStmt, int firstResult, int maxResults) { // setSessionUser(); Query query = em.createQuery(jpqlStmt); if (firstResult 0) { query = query.setFirstResult(firstResult); } if (maxResults 0) { query = query.setMaxResults(maxResults); } return query.getResultList(); } public Employee persistEmployee(Employee employee) { // setSessionUser(); em.persist(employee); return employee; } public Employee mergeEmployee(Employee employee) { // setSessionUser(); return em.merge(employee); } public void removeEmployee(Employee employee) { // setSessionUser(); employee = em.find(Employee.class, employee.getEmployeeId()); em.remove(employee); } /** select o from Employee o */ public List getEmployeeFindAll() { Query q = em.createNamedQuery("Employee.findAll"); return q.getResultList(); } Putting Both Together To use Proxy Authentication with JPA and within a Java EE container you have to take care of the additional requirements: Use an OCI JDBC driver Provide the user name that connects through the proxy user Use an OCI JDBC driver To use the OCI JDBC driver you need to set up your JDBC data source file to use the correct JDBC URL. hr jdbc:oracle:oci8:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SID=XE))) oracle.jdbc.OracleDriver user app_user 62C32F70E98297522AD97E15439FAC0E SQL SELECT 1 FROM DUAL jdbc/hrDS Application Additionally you need to make sure that the version of the shared libraries of the OCI driver match the version of the JDBC driver in your Java EE container or Java application and are within your PATH (on Windows) or LD_LIBRARY_PATH (on most Unix-based systems). Installing the Oracle Database Instance Client software works perfectly. Provide the user name that connects through the proxy user This part needs some modification of your application software and session facade. Session Facade Changes In the Session Facade we must ensure that every call that goes through the EntityManager must be prepared correctly and uniquely assigned to this session. The second is really important, as the EntityManager works with a connection pool and can not guarantee that we set the proxy user on the connection that will be used for the database activities. To avoid changing every method call of the Session Facade we provide a method to set the username of the user that connects through the proxy user. This method needs to be called by the Facade client bfore doing anything else. public void setUsername(String name) { username = name; } Next we provide a means to instruct the TopLink EntityManager Delegate to use Oracle Proxy Authentication. (I love small helper methods to hide the nitty-gritty details and avoid repeating myself.) 
private void setSessionUser() { setSessionUser(username); } private void setSessionUser(String user) { if (user != null && !user.isEmpty()) { EntityManagerImpl emDelegate = ((EntityManagerImpl)em.getDelegate()); emDelegate.setProperty(EntityManagerProperties.ORACLE_PROXY_TYPE, OracleConnection.PROXYTYPE_USER_NAME); emDelegate.setProperty(OracleConnection.PROXY_USER_NAME, user); emDelegate.setProperty(EntityManagerProperties.EXCLUSIVE_CONNECTION_MODE, "Always"); } } The final step is use the EJB 3.0 AroundInvoke interceptor. This interceptor will be called around every method invocation. We therefore check whether the Facade methods will be called or not. If so, we set the user for proxy authentication and the normal method flow continues. @AroundInvoke public Object proxyInterceptor(InvocationContext invocationCtx) throws Exception { if (invocationCtx.getTarget() instanceof DataFacadeBean) { setSessionUser(); } return invocationCtx.proceed(); } Benefits Using Oracle Proxy Authentification has a number of additional benefits appart from implementing the role model of your application: Fine grained access control for temporary users of the account, without compromising the original password. Enabling database auditing and logging. Better identification of performance bottlenecks. References Effective Oracle Database 10g Security by Design, David Knox TopLink Developer's Guide, Chapter 98
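    The proxy mechanism itself is not tied to JPA; any client that can pass a session user through the driver can use it. As a quick cross-check of the database-side setup described above, here is a small Python sketch using the cx_Oracle driver (the host, SID, account names and password are the sample values from the article, so treat them as placeholders):

        import cx_Oracle

        # Connect as the crippled proxy account, but open the session as the real
        # end user ("hr" here) using cx_Oracle's user[session_user] syntax.
        dsn = cx_Oracle.makedsn("localhost", 1521, sid="XE")
        connection = cx_Oracle.connect(user="app_user[hr]", password="abcd1234", dsn=dsn)

        cursor = connection.cursor()
        # SESSION_USER should now report HR, while PROXY_USER reports APP_USER.
        cursor.execute("""
            SELECT SYS_CONTEXT('USERENV', 'SESSION_USER'),
                   SYS_CONTEXT('USERENV', 'PROXY_USER')
              FROM dual""")
        print(cursor.fetchone())

        connection.close()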

    Read the article

  • Package management system corrupted. Cannot install or remove packages. U12.04LTS

    - by user271490
    Having read other posts, I believe that this may be less about samba than about the update system. Below is the log of the failed installation of Samba. I have been trying without success to install/uninstall samba so that I could install anything else ... I cannot either install or remove samba using either update-manager or apt-get (nor indeed Software Centre). One of the errors that I have had to correct is the presence, after a "removal" (failed), of the /usr/share/system-config-samba directory, which finally allowed itself to be deleted. That, however, was then ... I have U12.04LTS, running on release 63, because I allowed the upgrade to 64 this morning which fell over - no output to monitor - obviously even less support for my graphic chip than I am suffering already (see other posts in this forum). According to my interpretation of the dpkg returned errors there may be some problem with the package files, but if this is the case then it is on servers 'main', 'nantes uni fr' and 'best fr' at the very least, if not everywhere. The suggestions offered at Package operation failed and elsewhere have not worked for me. This linked post suggests that a similar error is present in other packages, or that the error is in the 'update system'. I have tried:

        sudo apt-get remove samba
        sudo apt-get autoremove
        sudo apt-get install samba
        sudo apt-get clean
        sudo apt-get update -f

    all of the above. In update-manager I have tried the "reload packages list", which fails to terminate because of the error. I have tried to install and remove samba from the software centre ... :( I am at a loss ... I need help, please! Firstly to recover my apt-get/update-manager/Software Centre so that I can at least carry on with my continuing installation - up to communicating with my home network, hence the need for samba - which brings me to my second requirement ... samba. PS: is the issue about "MaxReports" associated or separate? UPDATE! Being heartily sick of restarting FF every 5 seconds I thought I'd try again with Chromium ... and got the same errors from dpkg about a corrupt compressed package - coincidence? Of course this was no longer in the clipboard when I got here because apport has just errored ... AAARRRGGGH!!! Why does every error clear the clipboard? Thanks for any and all help!!

        installArchives() failed: Preconfiguring packages ...
        ... snip
        (Reading database ...
        ... snip
        (Reading database ... 184858 files and directories currently installed.)
        Unpacking samba (from .../samba_2%3a3.6.3-2ubuntu2.10_i386.deb) ...
        dpkg-deb (subprocess): data: internal gzip read error: ': data error'
        dpkg-deb: error: subprocess returned error exit status 2
        dpkg: error processing /var/cache/apt/archives/samba_2%3a3.6.3-2ubuntu2.10_i386.deb (--unpack):
         subprocess dpkg-deb --fsys-tarfile returned error exit status 2
        No apport report written because MaxReports is reached already
        Selecting previously unselected package system-config-samba.
        Unpacking system-config-samba (from .../system-config-samba_1.2.63-0ubuntu5_all.deb) ...
        Processing triggers for ureadahead ...
        ureadahead will be reprofiled on next reboot
        Processing triggers for ufw ...
        Processing triggers for man-db ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for hicolor-icon-theme ...
        Errors were encountered while processing:
         /var/cache/apt/archives/samba_2%3a3.6.3-2ubuntu2.10_i386.deb
        Error in function:
        dpkg: dependency problems prevent configuration of system-config-samba:
         system-config-samba depends on samba; however:
          Package samba is not installed.
        dpkg: error processing system-config-samba (--configure):
         dependency problems - leaving unconfigured
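    The "internal gzip read error" points at a corrupt download sitting in the apt cache rather than at samba itself, so the usual first step is to throw away the cached .deb (sudo apt-get clean) and retry the install, ideally against a different mirror. The sketch below is one way to automate that check in Python: it verifies every cached archive with dpkg-deb and deletes the ones that fail so apt re-downloads them. Running it with root privileges and then re-running the install is an assumption on my part, not something from the original post.

        import glob
        import os
        import subprocess

        CACHE = '/var/cache/apt/archives/*.deb'

        def is_intact(deb_path):
            """dpkg-deb --info exits non-zero when the archive is corrupt or truncated."""
            with open(os.devnull, 'w') as devnull:
                return subprocess.call(['dpkg-deb', '--info', deb_path],
                                       stdout=devnull, stderr=devnull) == 0

        def purge_corrupt_archives():
            removed = []
            for deb in glob.glob(CACHE):
                if not is_intact(deb):
                    os.remove(deb)   # apt will re-download it on the next install
                    removed.append(deb)
            return removed

        if __name__ == '__main__':
            for deb in purge_corrupt_archives():
                print('removed corrupt archive: ' + deb)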

    Read the article

  • What tool or scripts do you use to audit a Linux box?

    - by Sharjeel Sayed
    I use the following tools for my auditing needs:

    A) System Auditing and Hardening (one time)
    1) Linux Security Auditing Tool (security centric, text based output)
    2) Dmidecode (retrieves info from the BIOS)
    3) Systeminfo (generates a nice html report)
    4) Syssumm (inactive since Oct 2000)
    5) Rootkit Hunter (does a basic config check in addition to rootkit checks)
    6) CIS benchmarks
    7) Bastille (interactive hardening and a security scoring tool)

    B) Automatic Auditing (as a cron job or a service)
    1) Logwatch
    2) Psad

    C) Remote Auditing
    1) Nmap (port scanning)
    2) Nessus (remote vulnerability check)

    D) Wikipedia
    1) System profiler

    Any other tools/scripts which you can recommend?
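    If the goal is to run the one-time checks on a schedule and keep the reports in one place, a thin wrapper is often enough. Here is a minimal Python sketch; the command list and report directory are placeholders, so check each tool's actual flags on your distribution before relying on it.

        import datetime
        import os
        import subprocess

        # Placeholder commands - adjust to the tools and flags installed on your host.
        AUDIT_COMMANDS = {
            'rkhunter': ['rkhunter', '--check', '--skip-keypress'],
            'lynis': ['lynis', 'audit', 'system', '--quiet'],
        }

        def run_audits(report_dir='/var/log/audit-reports'):
            stamp = datetime.datetime.now().strftime('%Y%m%d-%H%M%S')
            os.makedirs(report_dir, exist_ok=True)
            for name, cmd in AUDIT_COMMANDS.items():
                report = os.path.join(report_dir, '%s-%s.log' % (name, stamp))
                with open(report, 'w') as out:
                    # Capture stdout and stderr so each report is self-contained.
                    subprocess.call(cmd, stdout=out, stderr=subprocess.STDOUT)
                print('wrote ' + report)

        if __name__ == '__main__':
            run_audits()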

    Read the article

  • Implications of allowing Windows clients to use NTLMv1?

    - by Boden
    I have a web application that I'd like to authenticate to using pass-through NTLM for SSO. There is a problem, however, in that NTLMv2 apparently will not work in this scenario (without the application storing an identical password hash). I enabled NTLMv1 on one client machine (Vista) using its local group policy: Computer - Windows Settings - Security Settings - Network Security: LAN Manager authentication level. I changed it to "Send LM & NTLM - use NTLMv2 session security if negotiated". This worked, and I'm able to log in to the web application using NTLM. Now this application would be used by all of my client machines... so I'm wondering what the security risks are if I were to push this policy out to all of them (not to the domain controller itself though)?
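    That group policy setting maps to the LmCompatibilityLevel registry value (0-5) under HKLM\SYSTEM\CurrentControlSet\Control\Lsa, so one way to see how far the weakened setting has spread before and after a rollout is to read that value on each client. Below is a minimal Python sketch for a single Windows machine; remote collection and how you act on the result are left as assumptions to adapt.

        import winreg

        LEVELS = {
            0: 'Send LM & NTLM responses',
            1: 'Send LM & NTLM - use NTLMv2 session security if negotiated',
            2: 'Send NTLM response only',
            3: 'Send NTLMv2 response only',
            4: 'Send NTLMv2 response only, refuse LM',
            5: 'Send NTLMv2 response only, refuse LM & NTLM',
        }

        def lm_compatibility_level():
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                                 r'SYSTEM\CurrentControlSet\Control\Lsa')
            try:
                value, _ = winreg.QueryValueEx(key, 'LmCompatibilityLevel')
                return value
            except FileNotFoundError:
                return None   # not set explicitly; the OS default applies
            finally:
                winreg.CloseKey(key)

        if __name__ == '__main__':
            level = lm_compatibility_level()
            print(level, LEVELS.get(level, 'OS default / unknown'))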

    Read the article

  • Windows 7: "Replace All Child Object Permissions" Doesn't Stay Checked

    - by raywood
    I right-click on a top-level folder in Windows Explorer. I choose Properties > Security tab > Advanced > Change Permissions. I check "Replace all child object permissions with inheritable permissions from this object" and click Apply. I get a Windows Security dialog that says "Setting security information on", with the list of objects flashing by. But now the "Replace all child object permissions" box is unchecked. What is happening here?
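    For what it's worth, the checkbox clearing itself afterwards is expected: it triggers a one-shot propagation rather than a persistent setting. If the goal is to re-apply the same operation from a script (for example after copying data in), icacls /reset does the equivalent work; a minimal Python sketch follows, with the folder path as a placeholder. Note that /reset discards explicit ACEs in favour of inherited ones, so test on a copy first.

        import subprocess

        def reset_child_permissions(folder):
            """Replace child ACLs with inherited permissions, like the Explorer checkbox.

            /reset  - replace ACLs with default inherited ACLs
            /T      - recurse into subfolders and files
            /C      - continue past errors (e.g. files locked by other processes)
            """
            return subprocess.call(['icacls', folder, '/reset', '/T', '/C'])

        if __name__ == '__main__':
            reset_child_permissions(r'D:\Shares\TopLevelFolder')   # placeholder path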

    Read the article

  • When to use an MS SQL instance vs. different database on same instance

    - by BoxerBucks
    We have some MS SQL servers that are set up with different instances on the same server to separate application DBs, as well as some servers that are set up with all DBs on the same instance, just separated with security settings. When is it advisable to create a new instance for SQL Server and install your DBs in that instance, as opposed to just creating a new DB on the same instance and putting security around the database itself? Is there more to the decision than just the security aspect?

    Read the article

  • Allow members of a group to be unlocked by a specific account on AD

    - by JohnLBevan
    Background: I'm creating a service to allow support staff to enable their firecall accounts out of hours (i.e. if there's an issue in the night and we can't get hold of someone with admin rights, another member of the support team can enable their personal firecall account in AD, which has previously been set up with admin rights). This service also logs a reason for the change, alerts key people, and a bunch of other bits to ensure that this change of access is audited / so we can ensure these temporary admin rights are used in the proper way. To do this I need the service account which my service runs under to have permission to enable users in Active Directory. Ideally I'd like to lock this down so that the service account can only enable/disable users in a particular AD security group.

    Question: How do you grant access to an account to enable/disable users who are members of a particular security group in AD?

    Backup Question: If it's not possible to do this by security group, is there a suitable alternative? i.e. could it be done by OU, or would it be best to write a script to loop through all members of the security group and update the permissions on the objects (firecall accounts) themselves? Thanks in advance.

    Additional Tags (I don't yet have access to create new tags here, so listing below to help with keyword searches until it can be tagged & this bit edited/removed): DSACLS, DSACLS.EXE, FIRECALL, ACCOUNT, SECURITY-GROUP
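    AD permissions are delegated on objects and OUs rather than "on members of a group", which is why the backup question is usually the practical route: either move the firecall accounts into a dedicated OU and delegate there, or loop over the member accounts and grant the service account write access to userAccountControl (the enable/disable flag) on each one. Below is a rough Python sketch of the looping approach that shells out to dsacls; the DNs, the domain, and the exact dsacls argument layout are assumptions to verify against dsacls /? in your environment before use.

        import subprocess

        SERVICE_ACCOUNT = r'MYDOMAIN\svc-firecall'          # placeholder trustee
        FIRECALL_ACCOUNT_DNS = [                            # placeholder member DNs
            'CN=jbloggs-fc,OU=Firecall,DC=example,DC=local',
            'CN=asmith-fc,OU=Firecall,DC=example,DC=local',
        ]

        def grant_enable_disable(account_dn):
            """Grant write access on userAccountControl for one firecall account."""
            # Argument layout to double-check against `dsacls /?` before relying on it.
            cmd = ['dsacls', account_dn, '/G',
                   '%s:WP;userAccountControl' % SERVICE_ACCOUNT]
            return subprocess.call(cmd)

        if __name__ == '__main__':
            for dn in FIRECALL_ACCOUNT_DNS:
                print(dn, 'ok' if grant_enable_disable(dn) == 0 else 'FAILED')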

    Read the article

  • All HTTPS, or is it OK to accept HTTP and redirect (secure vs. user friendly)

    - by tharrison
    Our site currently redirects requests sent to http://example.com to https://example.com -- everything beyond this is served over SSL. For now, the redirect is done with an Apache rewrite rule. Our site is dealing with money, however, so security is pretty important. Does allowing HTTP in this way pose any greater security risk than just not opening or listening on port 80? Ideally, it's a little more user-friendly to redirect. (I am aware that SSL is only one of a large set of security considerations, so please make the generous assumption that we have done at least a "very good" job of covering various security bases.)

    Read the article

  • Queries with developing a Voice to Text Based Software

    - by harigm
    I am looking for software which converts voice to text. I can find software which easily converts English speech to English text, but my intention is that whatever language the voice input is in, the system should produce the output as English text. Is it possible to get this kind of software? If yes, is there anything open source available to help me use it? If not, is it feasible to develop this kind of software? Can anyone guide me on how and where to begin? I am looking for Windows based software.
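    For the recognition half of the problem there are open source building blocks worth experimenting with before committing to a product. Below is a minimal Python sketch using the SpeechRecognition library; the Google Web Speech backend it calls needs an internet connection, and the WAV file name and language code are placeholders. The translate-to-English step would still need a separate translation service or library on top of this.

        import speech_recognition as sr

        def transcribe(wav_path, language='en-US'):
            """Transcribe a WAV file; returns the recognized text or None."""
            recognizer = sr.Recognizer()
            with sr.AudioFile(wav_path) as source:
                audio = recognizer.record(source)   # read the whole file
            try:
                return recognizer.recognize_google(audio, language=language)
            except sr.UnknownValueError:
                return None                         # speech was unintelligible
            except sr.RequestError as err:
                raise RuntimeError('recognition service unavailable: %s' % err)

        if __name__ == '__main__':
            print(transcribe('sample.wav', language='hi-IN'))   # placeholder file/language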

    Read the article

  • Are there any security vulnerabilities in this PHP code?

    - by skorned
    Hi. I just got a site to manage, but am not too sure about the code the previous guy wrote. I'm pasting the login procedure below; could you have a look and tell me if there are any security vulnerabilities? At first glance, it seems like one could get in through SQL injection or by manipulating cookies and the ?m= parameter.

    define('CURRENT_TIME', time()); // Current time.
    define('ONLINE_TIME_MIN', (CURRENT_TIME - BOTNET_TIMEOUT)); // Minimum time for the status of "Online".
    define('DEFAULT_LANGUAGE', 'en'); // Default language.
    define('THEME_PATH', 'theme'); // Folder for the theme.

    // HTTP requests.
    define('QUERY_SCRIPT', basename($_SERVER['PHP_SELF']));
    define('QUERY_SCRIPT_HTML', QUERY_SCRIPT);
    define('QUERY_VAR_MODULE', 'm'); // Variable containing the current module.
    define('QUERY_STRING_BLANK', QUERY_SCRIPT.'?m='); // An empty query string.
    define('QUERY_STRING_BLANK_HTML', QUERY_SCRIPT_HTML.'?m='); // Empty query string in HTML.
    define('CP_HTTP_ROOT', str_replace('\\', '/', (!empty($_SERVER['SCRIPT_NAME']) ? dirname($_SERVER['SCRIPT_NAME']) : '/'))); // Root of CP.

    // The session cookies.
    define('COOKIE_USER', 'p'); // Username in the cookies.
    define('COOKIE_PASS', 'u'); // User password in the cookies.
    define('COOKIE_LIVETIME', CURRENT_TIME + 2592000); // Lifetime of cookies.
    define('COOKIE_SESSION', 'ref'); // Variable to store the session.
    define('SESSION_LIVETIME', CURRENT_TIME + 1300); // Lifetime of the session.

    // Initialize.

    // Connect to the database.
    if (!ConnectToDB()) die(mysql_error_ex());

    // Connecting theme.
    require_once(THEME_PATH.'/index.php');

    // Manage login.
    if (!empty($_GET[QUERY_VAR_MODULE])) {
        // Login form.
        if (strcmp($_GET[QUERY_VAR_MODULE], 'login') === 0) {
            UnlockSessionAndDestroyAllCokies();
            if (isset($_POST['user']) && isset($_POST['pass'])) {
                $user = $_POST['user'];
                $pass = md5($_POST['pass']);

                // Check login.
                if (@mysql_query("SELECT id FROM cp_users WHERE name='".addslashes($user)."' AND pass='".addslashes($pass)."' AND flag_enabled='1' LIMIT 1") && @mysql_affected_rows() == 1) {
                    if (isset($_POST['remember']) && $_POST['remember'] == 1) {
                        setcookie(COOKIE_USER, md5($user), COOKIE_LIVETIME, CP_HTTP_ROOT);
                        setcookie(COOKIE_PASS, $pass, COOKIE_LIVETIME, CP_HTTP_ROOT);
                    }
                    LockSession();
                    $_SESSION['name'] = $user;
                    $_SESSION['pass'] = $pass;
                    //UnlockSession();
                    header('Location: '.QUERY_STRING_BLANK.'home');
                }
                else ShowLoginForm(true);
                die();
            }
            ShowLoginForm(false);
            die();
        }

        // Logout.
        if (strcmp($_GET['m'], 'logout') === 0) {
            UnlockSessionAndDestroyAllCokies();
            header('Location: '.QUERY_STRING_BLANK.'login');
            die();
        }
    }

    // Check the login data.
    $logined = 0; // Flag meaning we are logged in.

    // Login via session.
    LockSession();
    if (!empty($_SESSION['name']) && !empty($_SESSION['pass'])) {
        if (($r = @mysql_query("SELECT * FROM cp_users WHERE name='".addslashes($_SESSION['name'])."' AND pass='".addslashes($_SESSION['pass'])."' AND flag_enabled='1' LIMIT 1"))) $logined = @mysql_affected_rows();
    }

    // Login through cookies.
    if ($logined !== 1 && !empty($_COOKIE[COOKIE_USER]) && !empty($_COOKIE[COOKIE_PASS])) {
        if (($r = @mysql_query("SELECT * FROM cp_users WHERE MD5(name)='".addslashes($_COOKIE[COOKIE_USER])."' AND pass='".addslashes($_COOKIE[COOKIE_PASS])."' AND flag_enabled='1' LIMIT 1"))) $logined = @mysql_affected_rows();
    }

    // Unable to login.
    if ($logined !== 1) {
        UnlockSessionAndDestroyAllCokies();
        header('Location: '.QUERY_STRING_BLANK.'login');
        die();
    }

    // Get the user data.
    $_USER_DATA = @mysql_fetch_assoc($r);
    if ($_USER_DATA === false) die(mysql_error_ex());
    $_SESSION['name'] = $_USER_DATA['name'];
    $_SESSION['pass'] = $_USER_DATA['pass'];

    // Connecting language.
    if (@strlen($_USER_DATA['language']) != 2 || !SafePath($_USER_DATA['language']) || !file_exists('system/lng.'.$_USER_DATA['language'].'.php')) $_USER_DATA['language'] = DEFAULT_LANGUAGE;
    require_once('system/lng.'.$_USER_DATA['language'].'.php');
    UnlockSession();
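    At a glance, the concerns the question raises are real in this code: the SQL is built by string concatenation and guarded only by addslashes() (which is not a reliable escape), passwords are unsalted md5() hashes, and the "remember me" cookies hold md5(name) plus the password hash, so anyone who obtains the hashes from the database can forge a login without knowing a password. As an illustration only (not the original author's code), the login lookup rewritten with a parameterized query via PDO, using placeholder connection details:

        <?php
        // Minimal sketch: parameterized lookup instead of string-built SQL.
        // DSN and credentials are placeholders; table/column names follow the code above.
        $pdo = new PDO('mysql:host=localhost;dbname=panel;charset=utf8', 'dbuser', 'dbpass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $stmt = $pdo->prepare(
            "SELECT id FROM cp_users WHERE name = ? AND pass = ? AND flag_enabled = '1' LIMIT 1"
        );
        $stmt->execute(array($user, $pass)); // $pass should be a salted hash, not a bare md5()
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        ?>

    A server-side session flag set after this check, rather than cookies carrying the credential hashes, would also close the cookie-forgery angle.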

    Read the article

  • Writing Great Software

    - by 01010011
    Hi, I'm currently reading Head First's Object Oriented Analysis and Design. The book states that to write great software (i.e. software that is well-designed, well-coded, and easy to maintain, reuse, and extend) you need to do three things: Firstly, make sure the software does everything the customer wants it to do. Once step 1 is completed, apply Object Oriented principles and techniques to eliminate any duplicate code that might have slipped in. Once steps 1 and 2 are complete, apply design patterns to make sure the software is maintainable and reusable for years to come. My question is, do you follow these steps when developing great software? If not, what steps do you usually follow in order to ensure it's well-designed, well-coded, and easy to maintain, reuse, and extend?

    Read the article

  • lucid 10.04 LTS => Precise 12.04.1 : upgrade doesn't work

    - by Rastom
    I googled and looked into all known issues on the Ubuntu forums but I can't figure out why a 10.04 LTS server won't detect the latest LTS, 12.04.1. I guess since 12.04 is a fresh release, not much is reported for related issues. Here is what I did:

        apt-get update
        apt-get upgrade
        apt-get install update-manager-core

    update-manager-core was already installed, so no update for this package. I checked /etc/update-manager/release-upgrades:

        [DEFAULT]
        # Default prompting behavior, valid options:
        #
        # never  - Never check for a new release.
        # normal - Check to see if a new release is available. If more than one new
        #          release is found, the release upgrader will attempt to upgrade to
        #          the release that immediately succeeds the currently-running
        #          release.
        # lts    - Check to see if a new LTS release is available. The upgrader
        #          will attempt to upgrade to the first LTS release available after
        #          the currently-running one. Note that this option should not be
        #          used if the currently-running release is not itself an LTS
        #          release, since in that case the upgrader won't be able to
        #          determine if a newer release is available.
        Prompt=lts

    I also checked my sources list (/etc/apt/sources.list) before running apt-get:

        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb http://security.ubuntu.com/ubuntu lucid-security universe
        deb-src http://security.ubuntu.com/ubuntu lucid-security universe
        deb http://security.ubuntu.com/ubuntu lucid-security multiverse
        deb-src http://security.ubuntu.com/ubuntu lucid-security multiverse
        # deb http://landscape.canonical.com/packages/hardy ./
        # deb-src http://landscape.canonical.com/packages/hardy ./

    Then, following the Ubuntu guide for the Precise upgrade, the command below should work:

        root@xxxxxxxxx:/etc/apt# do-release-upgrade -d
        Checking for a new ubuntu release
        No new release found

    So am I missing something? The server was going out through a proxy, but I granted this server direct access to rule out any Internet access or redirection problem; still no clue. Any help would be appreciated.

    Read the article

  • Windows XP - Security Update for Windows XP (KB923561) (KB946648) (KB956572) (KB958644)

    - by leeand00
    My father's computer has Windows XP, but when I try to install the service packs it always fails. What gives? Here are the errors that I get in the event log: Date: 2/6/2010 Time: 12:02:18 AM Type: Error User: N/A Computer: EVO Source: Windows Update Agent Category: Installation Event ID: 20 Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB946648). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. 0000: 57 69 6e 33 32 48 52 65 Win32HRe 0008: 73 75 6c 74 3d 30 78 38 sult=0x8 0010: 30 30 37 30 30 30 32 20 0070002 0018: 55 70 64 61 74 65 49 44 UpdateID 0020: 3d 7b 38 33 44 31 41 44 ={83D1AD 0028: 46 35 2d 37 37 39 44 2d F5-779D- 0030: 34 30 31 36 2d 38 43 33 4016-8C3 0038: 31 2d 35 34 39 32 37 30 1-549270 0040: 46 36 37 42 33 46 7d 20 F67B3F} 0048: 52 65 76 69 73 69 6f 6e Revision 0050: 4e 75 6d 62 65 72 3d 31 Number=1 0058: 30 34 20 00 04 . Date: 2/6/2010 Time: 12:02:18 AM Type: Error User: N/A Computer: EVO Source: Windows Update Agent Catagory: Installation Event ID: 20 Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB956572). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. 0000: 57 69 6e 33 32 48 52 65 Win32HRe 0008: 73 75 6c 74 3d 30 78 38 sult=0x8 0010: 30 30 37 30 30 30 32 20 0070002 0018: 55 70 64 61 74 65 49 44 UpdateID 0020: 3d 7b 44 46 32 46 30 41 ={DF2F0A 0028: 39 38 2d 36 45 33 35 2d 98-6E35- 0030: 34 33 37 39 2d 41 42 33 4379-AB3 0038: 33 2d 41 30 33 30 33 45 3-A0303E 0040: 46 37 34 42 32 41 7d 20 F74B2A} 0048: 52 65 76 69 73 69 6f 6e Revision 0050: 4e 75 6d 62 65 72 3d 31 Number=1 0058: 30 32 20 00 02 . Date: 2/6/2010 Time: 12:02:18 AM Type: Error User: N/A Computer EVO Source: Windows Update Agent Event ID: 20 Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB958644). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. 0000: 57 69 6e 33 32 48 52 65 Win32HRe 0008: 73 75 6c 74 3d 30 78 38 sult=0x8 0010: 30 30 37 30 30 30 32 20 0070002 0018: 55 70 64 61 74 65 49 44 UpdateID 0020: 3d 7b 39 33 39 37 41 32 ={9397A2 0028: 31 46 2d 32 34 36 43 2d 1F-246C- 0030: 34 35 33 42 2d 41 43 30 453B-AC0 0038: 35 2d 36 35 42 46 34 46 5-65BF4F 0040: 43 36 42 36 38 42 7d 20 C6B68B} 0048: 52 65 76 69 73 69 6f 6e Revision 0050: 4e 75 6d 62 65 72 3d 31 Number=1 0058: 30 31 20 00 01 . Date: 2/6/2010 Time: 12:02:18 AM Type: Error User: N/A Computer: EVO Source: Windows Update Agent Category: Installation Event ID: 20 Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB923561). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. 0000: 57 69 6e 33 32 48 52 65 Win32HRe 0008: 73 75 6c 74 3d 30 78 38 sult=0x8 0010: 30 30 37 30 30 30 32 20 0070002 0018: 55 70 64 61 74 65 49 44 UpdateID 0020: 3d 7b 33 31 30 41 34 43 ={310A4C 0028: 30 38 2d 35 39 33 44 2d 08-593D- 0030: 34 31 41 33 2d 42 42 35 41A3-BB5 0038: 37 2d 38 33 42 33 38 36 7-83B386 0040: 44 37 37 33 42 35 7d 20 D773B5} 0048: 52 65 76 69 73 69 6f 6e Revision 0050: 4e 75 6d 62 65 72 3d 31 Number=1 0058: 30 33 20 00 03 . Thank you, Andrew

    Read the article

  • Database users in the Oracle Utilities Application Framework

    - by Anthony Shorten
    I mentioned the product database users fleetingly in the last blog post and they deserve a better mention. This applies to all versions of the Oracle Utilities Application Framework. The Oracle Utilities Application Framework uses up to three users initially as part of the base operations of the product. The type of database supported (the framework supports Oracle, IBM DB2 and Microsoft SQL Server) dictates the number of users used and their permissions. For publishing brevity I will outline what is available for the Oracle database and, in summary, mention where it differs for the other database supported. For Oracle database customers we ship three distinct database users: Administration User (SPLADM or CISADM by default) - This is the database user that actually owns the schema. This user is not used by the product to do any DML (Data Manipulation Language) SQL other than that is necessary for maintenance of the database. This database user performs all the DCL (Data Control Language) and DDL (Data Definition Language) against the database. It is typically reserved for Database Administration use only. Product Read Write User (SPLUSER or CISUSER by default) - This is the database user used by the product itself to execute DML (Data Manipulation Language) statements against the schema owned by the Administration user. This user has the appropriate read and write permission to objects within the schema owned by the Administration user. For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user. Product Read User (SPLREAD or CISREAD by default) - This is the database that has read only permission to the schema owned by the Administration user. It is used for reporting or any part of the product or interface that requires read permissions to the database (for example, products that have ConfigLab and Archiving use this user for remote access). For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user. You may notice the words by default in the list above. The values supplied with the installer are the default and can be changed to what the site standard or implementation wants to use (as long as they conform to the standards supported by the underlying database). You can even create multiples of each within the same database and pointing to same schema. To manage the permissions for the users, there is a utility provided with the installation (oragensec (Oracle), db2gensec (DB2) or msqlgensec (SQL Server)) that generates the security definitions for the above users. That can be executed a number of times for each schema to give users appropriate permissions. For example, it is possible to define more than one read/write User to access the database. This is a common technique used by implementations to have a different user per access mode (to separate online and batch). In fact you can also allocate additional security (such as resource profiles in Oracle) to limit the impact of specific users at the database. To facilitate users and permissions, in Oracle for example, we create a CISREAD role (read only role) and a CISUSER role (read write role) that can be allocated to the appropriate database user. When the security permissions utility, oragensec in this case, is executed it uses the role to determine the permissions. 
    To give you a case study, my underpowered laptop has multiple installations of multiple products on it, but I have one database. I create a different schema for each product and each version (with my own naming convention to help me manage the databases). I create individual users on each schema and run oragensec to maintain the permissions for each appropriately. It works fine as long as I have set up the userids appropriately. This means: Creating the users with the appropriate roles. I use the common CISUSER and CISREAD roles across versions and across Oracle Utilities Application Framework products. Just remember to associate the CISUSER role with the database user you want to use for read/write operations and the CISREAD role with the user you wish to use for read-only operations. The role is treated as a tag that indicates to the oragensec utility which permissions to assign to the user. The utilities for the other database types essentially do the same, obviously using the technology available within those databases. Run oragensec for the read/write user and the read-only user against the appropriate administration user (I will abbreviate this to the ADM user). This ensures the right permissions are allocated to the right users for the right products. To help me there, I use the same prefix on the user name for the same product. For example, my Oracle Utilities Application Framework V4 environment has the administration user set to FW4ADM and the associated FW4USER and FW4READ as the users for the product to use. For my MWM environment I used MWMADM for the administration user and MWMUSER and MWMREAD for my associated users. You get the picture. When I run oragensec (once for each ADM user), I know which other users to associate with it. Remember to rerun oragensec against the users if I run upgrades, service packs or database-based single fixes. This assures that the users stay in synchronization with the ADM user. As a side note, for those who do not understand the difference between DML, DCL and DDL: DDL (Data Definition Language) - These are SQL statements that define the database schema and the structures within it. SQL statements such as CREATE and DROP are examples of DDL SQL statements. DCL (Data Control Language) - These are the SQL statements that define the database-level permissions to DDL-maintained objects within the database. SQL statements such as GRANT and REVOKE are examples of DCL SQL statements. DML (Data Manipulation Language) - These are SQL statements that alter the data within the tables. SQL statements such as SELECT, INSERT, UPDATE and DELETE are examples of DML SQL statements. Hope this has clarified the database user support. Remember that in Oracle Utilities Application Framework V4 we enhanced this by also supporting CLIENT_IDENTIFIER, to allow the database to still use the administration user for the main processing but make the database session more traceable.
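    As an illustration of the pattern described above (not official product documentation), creating the read/write and read-only users and tagging them with the shipped roles might look roughly like this, using the FW4* names from the example and placeholder passwords and tablespaces; oragensec is then run against the FW4ADM schema to generate the actual object grants:

        -- Sketch only: passwords and tablespaces are placeholders, and the
        -- CISUSER/CISREAD roles are assumed to already exist from the product install.
        CREATE USER fw4user IDENTIFIED BY "change_me" DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
        CREATE USER fw4read IDENTIFIED BY "change_me" DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
        GRANT CREATE SESSION TO fw4user;
        GRANT CREATE SESSION TO fw4read;
        GRANT CISUSER TO fw4user;  -- tag picked up by oragensec for read/write permissions
        GRANT CISREAD TO fw4read;  -- tag picked up by oragensec for read-only permissions
        -- Then run oragensec against FW4ADM so the object privileges are granted.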

    Read the article

  • Wishful Thinking: Why can't HTML fix Script Attacks at the Source?

    - by Rick Strahl
    The Web can be an evil place, especially if you're a Web Developer blissfully unaware of Cross Site Script Attacks (XSS). Even if you are aware of XSS in all of its insidious forms, it's extremely complex to deal with all the issues if you're taking user input and you're actually allowing users to post raw HTML into an application. I'm dealing with this again today in a Web application where legacy data contains raw HTML that has to be displayed and users ask for the ability to use raw HTML as input for listings. The first line of defense of course is: Just say no to HTML input from users. If you don't allow HTML input directly and use HTML Encoding (HttpUtility.HtmlEncode() in .NET or standard ASP.NET MVC output like @Model.Content) you're fairly safe, at least from the HTML input provided. Both WebForms and Razor support HtmlEncoded content, although Razor makes it the default. In Razor, the default @ expression syntax @Model.UserContent automatically produces HTML-encoded content (safe by default) - you actually have to go out of your way to create raw HTML content using @Html.Raw() or the HtmlString class. In Web Forms (V4) you can use <%: Model.UserContent %> or, if you're using a version prior to 4.0, <%= HttpUtility.HtmlEncode(Model.UserContent) %>. This works great as a hedge against embedded <script> tags and HTML markup, as any HTML is turned into text that displays as HTML but doesn't render as HTML. But it turns any embedded HTML markup tags into plain text. If you need to display HTML in raw form with the markup tags rendering based on user input, this approach is worthless. If you do accept HTML input and need to echo the rendered HTML input back, the task of cleaning up that HTML is complex. In the projects I work on, customers frequently ask for the ability to post raw HTML. Almost every app that I've built where there's document content from users, we start out with text-only input - possibly using something like MarkDown - but inevitably users want to just post plain old HTML they created in some other rich editing application. I see this a lot with realtors especially, who often want to reuse their postings easily in multiple places. In my work this is a common problem I need to deal with, and I've tried dozens of different methods, from sanitizing and simple rejection of input to custom markup schemes, none of which have ever felt comfortable to me. They work in a half assed, hacked together sort of way but I always live in fear of missing something vital which is *really easy to do*. My Wishlist Item: A <restricted> tag in HTML Let me dream here for a second on how to address this problem. It seems to me the easiest place where this can be fixed is: In the browser. Browsers are actually executing script code so they have a lot of control over the script code that resides in a page. What if there was a way to specify that you want to turn off script code for a block of HTML? The main issue when dealing with raw HTML input isn't that we as developers are unaware of the implications of user input, but the fact that we sometimes have to display raw HTML input the user provides. So the problem markup is usually isolated in only a very specific part of the document. So, what if we had a way to specify that in any given HTML block, no script code could execute, by wrapping it into a tag that disables all script functionality in the browser? This would include <script> tags and any document script attributes like onclick, onfocus etc.
    and potentially also disallow things like iFrames that can potentially be scripted from within the iFrame's target. I'd like to see something along these lines: <article> <restricted allowscripts="no" allowiframes="no"> <div>Some content</div> <script>alert('go ahead make my day, punk!');</script> <div onfocus="$.getJSON('http://evilsite.com/')">more content</div> </restricted> </article> A tag like this would basically disallow all script code from firing from any HTML that's rendered within it. You'd use this only on content that you actually render from your data, and only if you are dealing with custom data. So something like this: <article> <restricted> @Html.Raw(Model.UserContent) </restricted> </article> For browsers this would actually be easy to intercept. They render the DOM and control loading and execution of scripts that are loaded through it. All the browser would have to do is suspend execution of <script> tags and not hook up any event handlers defined via markup in this block. Given all the crazy XSS attacks that exist and the prevalence of this problem, this would go a long way towards preventing at least coded script attacks in the DOM. And it seems like a totally doable solution that wouldn't be very difficult for vendors to implement. There would also need to be some logic in the parser to not allow an </restricted> or <restricted> tag into the content, so that the content can't short-circuit the restricted section (per James Hart's comment). I'm sure there are other issues to consider as well that I didn't think of in my off-the-back-of-a-napkin concept here, but overall the idea seems worth consideration, I think. Without code running in a user-supplied HTML block, it'd be pretty hard to compromise a local HTML document and pass information like cookies to a server. Or even send data to a server, period. Short of an iFrame that can access the parent frame (which is another restriction that should be available on this <restricted> tag) that could potentially communicate back, there's not a lot a malicious site could do. The HTML could still 'phone home' via image links and href links, potentially, and basically say this site was accessed, but without the ability to run script code it would be pretty tough to pass along critical information to the server beyond that. Ahhhh… one can dream… Not holding my breath of course. The design by committee that is the W3C can't agree on anything in timeframes measured less than decades, but maybe this is one place where browser vendors can actually step up the pressure. It is in their best interest to reduce the attack surface for vulnerabilities on their browser platforms significantly. Several people commented on Twitter today that there isn't enough discussion on issues like this that address serious needs in the web browser space. Realistically, security has to be a number one concern with Web applications in general - there isn't a Web app out there that is not vulnerable. And yet nothing has been done to address these security issues, even though there might be relatively easy solutions to make this happen.
    It'll take time, and it's probably not going to happen in our lifetime, but maybe this rambling thought sparks some ideas on how this sort of restriction can get into browsers in some way in the future. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, HTML5, HTML, Security

    Read the article

  • Oracle Database Insider Now on LinkedIn

    - by Troy Kitch
    Our close friends over at the Oracle Database Insider blog have recently started a LinkedIn discussion group. Go behind the scenes of the latest Oracle Database announcements and discussions, covering Oracle Database 11g and its options, such as Database Security, and the newest product, Oracle Exadata. Come on over to post a discussion topic or an event, ask questions, and stay up to date on the latest Oracle Database information. We'll be there to join the discussions and answer questions. Join us in the new group on LinkedIn!

    Read the article
