Search Results

Search found 6052 results on 243 pages for 'manage'.

  • Web service for checking out / leasing a token

    - by JP Slavinsky
    I run a web site on AWS with a number of web servers (say 4 of them) running behind a load balancer. For this particular site, I have one New Relic license key for instrumentation. At any one time, I want only one of the 4 web servers to be using the key. If that server goes offline, I want one of the remaining web servers to be able to begin using the license key. Does anyone know of a service that would let me manage this process? The service wouldn't necessarily need to store the key itself, just enforce that only one web server can hold the lease on it at any time. Something where the web servers have to come back every few minutes and renew their lease, and if they don't, the lease becomes available to someone else. I just realized I could maybe accomplish a hacked version of this using a file on S3, but that doesn't prevent race conditions and is definitely hackish. Any thoughts welcome. FWIW, this site is built on Ruby on Rails. Thanks! JP
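
    A common way to implement this kind of exclusive, expiring lease is an atomic set-if-absent with a TTL in a shared store such as Redis. A minimal sketch in Python (the key name, timings, and Redis location are illustrative assumptions, not anything from the question):

        import time
        import uuid

        import redis  # pip install redis

        LEASE_KEY = "newrelic:license-lease"  # hypothetical key name
        LEASE_TTL = 180                       # seconds before an unrenewed lease expires

        r = redis.Redis(host="localhost", port=6379)
        holder = str(uuid.uuid4())            # unique identity for this web server

        def try_acquire():
            # SET ... NX EX is atomic, so at most one server can win the lease.
            return bool(r.set(LEASE_KEY, holder, nx=True, ex=LEASE_TTL))

        def renew():
            # Extend the TTL only while we still hold the lease. (A production
            # version would make this check-and-extend atomic, e.g. with a Lua script.)
            if r.get(LEASE_KEY) == holder.encode():
                return bool(r.expire(LEASE_KEY, LEASE_TTL))
            return False

        while True:
            if try_acquire() or renew():
                pass  # we hold the lease: this server may use the New Relic key
            time.sleep(60)  # come back every minute to renew or retry

    Every server runs the same loop; whichever one holds the lease uses the license, and if it dies without renewing, the key expires after LEASE_TTL and another server picks it up.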

  • Linux: How do I use Munin in cPanel to monitor MySQL?

    - by Continuation
    I have a cPanel server running CentOS 5.5. I want to use Munin to monitor MySQL. I went to Main >> cPanel >> Manage Plugins, selected "Install and keep updated" for Munin, and clicked "Save". I got the usual bunch of status updates about the install. At the end I got:

        Going to read '/home/.cpan/sources/modules/02packages.details.txt.gz'
        Database was generated on Wed, 02 Mar 2011 18:28:33 GMT
        ..........................................................................DONE
        Going to read '/home/.cpan/sources/modules/03modlist.data.gz'
        Out of memory!
        Callback called exit.
        Done
        Done
        Done
        Process Complete

    As you can see, I got an "Out of memory!" message, but after that it said "Process Complete". Was Munin installed? When I went back to "Manage Plugins", Munin had a check against "Install and keep updated". So is everything alright? And how do I use Munin now? How do I configure it to monitor MySQL? Where can I see the results? Thanks.
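
    One way to verify the install independently of cPanel: munin-node (the agent the Munin package sets up) speaks a simple line-based text protocol on TCP port 4949, and active plugins show up in its plugin list. A minimal check in Python; this is a sketch, assuming munin-node runs locally on its default port:

        import socket

        # munin-node listens on TCP 4949 and speaks a plain-text protocol.
        sock = socket.create_connection(("localhost", 4949), timeout=5)
        f = sock.makefile("rw")

        print(f.readline().strip())   # greeting banner from munin-node
        f.write("list\n")             # ask which plugins are active
        f.flush()
        plugins = f.readline().split()
        print([p for p in plugins if p.startswith("mysql")])
        f.write("quit\n")
        f.flush()
        sock.close()

    If the connection is refused, munin-node isn't running; if no mysql plugins appear in the list, they typically still need to be linked into /etc/munin/plugins/ and munin-node restarted, after which the MySQL graphs show up in Munin's web interface.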

  • Run as another user, but also as administrator

    - by Tewr
    I am trying to debug a virtual machine (VM) running on a remote computer from my workstation (A). Both VM and A are running Windows 7 Enterprise. Apparently, I need to start the Remote Debugger Service (RDS) on VM as an administrator. Apparently, I also need to run RDS as the user Tewr logged in on A (domain: DOM). VM runs the services I need to debug, as well as the remote desktop interface, with an account VMUSER in a domain called VMDOMAIN. I manage to start RDS as administrator, but then the RDS process is owned by VMUSER, and that's not good enough. I also manage to run RDS as DOM\Tewr, but then not as an administrator. I have added DOM\Tewr as an administrator on VM, but that's not good enough either, because the process is still not run as administrator. How can I run the RDS process as DOM\Tewr and "As Administrator" while logged on in Windows as VMDOMAIN\VMUSER? (Note: I have tried creating an account with the same credentials / password as VMUSER, as hinted in the MS article above, but with no luck...)

  • Better urls for this internal web server?

    - by sprugman
    I've got a server that I have admin access to, but don't fully manage. (I think it's a virtual machine, but I'm not 100% sure. It's running Apache on Windows Server 2003.) I share the IP with another user, so my sites all have to use the :8080 port, which is kind of ugly. Also, AFAIK, the only access I have is through an IP address. (I'm inside a corporate firewall and don't think I have access to a DNS server or anything.) I've adjusted my hosts file so I don't have to use the IP address on my local machine, but that's not a very generic solution. Are there any options to 1) get rid of the port requirement, and 2) use a name (maybe a machine name) instead of the IP address in a generic way? (I'm not really a network admin -- I'm a developer managing this machine. The IT folks who really manage it are a few people away from me and tough to get to do anything, so I'm looking for a lightweight solution if possible.)

  • Changing farm account in SharePoint 2010

    - by user55709
    After changing the farm account to a domain user account, I get the following error when trying to access the Central Administration page: "Cannot connect to configuration database". After I realized the headache might not be worth it, I decided to do a reinstall using the following SP user account guidelines: http://technet.microsoft.com/en-us/library/ee662513.aspx After getting everything up, I am getting an "Access denied" error when using the designated farm account under the Central Admin website's Manage Service Accounts. If it is the farm administrator, why would I not be able to manage service accounts? I am able to access the other parts of the admin site. Also, when logging in with the farm account, it lists me as a "system account", not the domain account which I used to log in. Am I missing something, or is this normal behavior? Am I not supposed to log in with the farm account? When I log in with the Setup account (also a domain account), I can access everything on the site with no errors. The only difference between the two accounts is that the setup account has local admin privileges on the SharePoint farm server; if you notice, those privileges are not necessary for the farm account according to the article cited.

  • How are large companies handling the storing and cataloging of software installation disks?

    - by CT
    I just started working in the IT department of a small-to-medium-sized construction company with about 200 users. One of my responsibilities is to set up and configure all new machines that come in. I would like suggestions on how to best manage the installation disks and licenses of the software that comes with them, plus any additional licensed software such as AutoCAD, Photoshop, etc., as well as peripheral driver disks for printers and scanners. Right now every machine is associated and labeled with an asset ID. All asset IDs are kept in a spreadsheet with applicable serial numbers, current user, warranty info, and software licenses. The physical disks are then kept within a folder in a cabinet. Each folder is marked with the asset ID number as well as the current user. My problem with this is that the system was not maintained very completely before I came to the company. There are plenty of software folders with no asset IDs labeled on them, plenty of missing software folders (most likely a lot of the unlabeled ones), and folders with names but no asset IDs. Machines get switched to different users without the folders and spreadsheet being updated. I am not saying this method would necessarily be bad if it were better implemented and managed, but I am going to have to take a lot of time to fix the system currently in place, so I thought I would ask the community first how others manage this process in case there are easier, more efficient ways of doing so. Thank you.
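
    Whatever filing system you settle on, a small audit script can catch drift between the spreadsheet and the physical folders. A minimal sketch in Python, assuming the spreadsheet is exported to CSV with an asset_id column and the folders are named by asset ID (the file name, column name, and folder layout are illustrative assumptions, not details from the question):

        import csv
        import os

        SPREADSHEET = "assets.csv"    # hypothetical CSV export of the asset spreadsheet
        FOLDER_ROOT = "disk_folders"  # hypothetical directory of per-asset disk folders

        with open(SPREADSHEET, newline="") as f:
            known_ids = {row["asset_id"] for row in csv.DictReader(f)}

        folder_ids = set(os.listdir(FOLDER_ROOT))

        # Flag the two failure modes described above.
        print("Folders with no matching spreadsheet row:", sorted(folder_ids - known_ids))
        print("Assets with no software folder:", sorted(known_ids - folder_ids))

    Run periodically, this turns "the spreadsheet got stale" from a surprise into a short weekly fix-up list.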

  • CNAME vs A records

    - by deb
    I built a small Rails app that allows users to make a simple site. It uses subdomain accounts, e.g. deb.myapp.com. Whenever a user wanted a domain name associated with their site, they would change their NS records to point to Slicehost, where the application is hosted, and I would manage the DNS records myself. However, as more people use the application, this is no longer an option for me. I prefer users to keep their nameservers at GoDaddy, register.com, etc., so they can log in and manage their own MX records or whatever else they need to change. My question is: should I have them change the A records to point to my server's IP, or should I have them create a CNAME record? Do they need to delete the default A records to allow the CNAME record to work? Will the A record take precedence and overrule the CNAME record? Thanks in advance. Sorry if this is a very basic question. I've read other posts and I can't find a definite answer.
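
    When sorting out what a customer's domain currently resolves to, querying the records directly can help. A short sketch using Python's dnspython package (the domain is a placeholder):

        import dns.resolver  # pip install dnspython

        DOMAIN = "www.example.com"  # placeholder for a customer's domain

        for rtype in ("CNAME", "A"):
            try:
                for rdata in dns.resolver.resolve(DOMAIN, rtype):
                    print(rtype, "->", rdata)
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                print(rtype, "-> no record")

    As background: a CNAME cannot coexist with other records at the same name, which is why it cannot be used at the zone apex (the "naked" domain); that case requires an A record.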

  • Accessing Python module fails although its package is imported

    - by codethief
    Hey Stackers! :) My Django project's directory hierarchy looks like this:

        + pybsd
          |---+ devices
          |     |---+ templates
          |     |---+ views
          |     |     |---+ interaction
          |     |     |     |---- __init__.py
          |     |     |     |---- geraete.py
          |     |     |     |---- geraetemodelle.py
          |     |     |     |---- geraetegruppen.py
          |     |     |---- __init__.py
          |     |     |---- ajax.py
          |     |     |---- html.py
          |     |     |---- misc.py
          |     |---- __init__.py
          |     |---- urls.py
          |---- __init__.py
          |---- urls.py

    (Please excuse the German names. I preferred not to replace them here since it would add yet another possible error source when trying out the solutions you'll hopefully suggest and answering your questions.) Every request to http://URL/devices/.* is dispatched to the urls.py file living in /devices:

        # ...
        from views import html, ajax, misc, interaction

        urlpatterns = patterns('',
            # ...
            (r'^ajax/update/(?P<table>[a-z_]+)$', ajax.update),
            (r'^ajax/delete/(?P<table>[a-z_]+)$', ajax.delete),
            (r'^ajax/select_options/(?P<table>[a-z_]+)$', ajax.select_options),
            (r'^interaction/geraete/info/(?P<geraet>\d+)$', interaction.geraete.info),
            (r'^interaction/geraete/delete/(?P<geraet>\d+)?$', interaction.geraete.delete),
            (r'^interaction/geraetemodelle/delete/(?P<geraetemodell>\d+)?$', interaction.geraetemodelle.delete),
            (r'^interaction/geraetegruppen/delete/(?P<geraetegruppe>\d+)?$', interaction.geraetegruppen.delete),
            # ...
        )

    All URL definitions work except for those referencing the interaction package. I'm constantly getting the following error:

        File "/home/simon/projekte/pybsd/../pybsd/devices/urls.py", line 33, in <module>
          (r'^interaction/geraete/info/(?P<geraet>\d+)$', interaction.geraete.info),
        AttributeError: 'module' object has no attribute 'geraete'

    I double-checked that the __init__.py files don't contain anything. Maybe you've already found the (Python- or Django-related?) mistake I made and am apparently unable to see. If not, read on. In any case, thanks for reading this long post!

    Isolating the problem

    1st test: It works if I provide the view functions as strings:

        (r'^interaction/geraete/info/(?P<geraet>\d+)$', 'devices.views.interaction.geraete.info'),
        (r'^interaction/geraete/delete/(?P<geraet>\d+)?$', 'devices.views.interaction.geraete.delete'),
        (r'^interaction/geraetemodelle/delete/(?P<geraetemodell>\d+)?$', 'devices.views.interaction.geraetemodelle.delete'),
        (r'^interaction/geraetegruppen/delete/(?P<geraetegruppe>\d+)?$', 'devices.views.interaction.geraetegruppen.delete'),

    ... or if I add yet another line to the imports:

        from views.interaction import geraete, geraetemodelle, geraetegruppen

    Using "from views.interaction import *", however, doesn't work either and results in the same error message.

    2nd test: I created a file test.py in /devices:

        from views import interaction
        print dir(interaction)

    Output:

        simon@bsd-simon:~/projekte/pybsd/devices$ python test.py
        ['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__']

    Again, no sign of the modules I created in the interaction package (geraete.py, geraetemodelle.py, geraetegruppen.py). Unlike in urls.py, trying "from view.interaction import geraete, geraetegruppen, geraetemodelle" in test.py results in "ImportError: No module named view.interaction" this time.

    3rd test: I started the Django shell:

        $ python manage.py shell
        >>> import devices.views.interaction.geraete
        >>> dir(devices.views.interaction.geraete)
        ['Abteilung', 'Auftrag', 'Auftragsvorlage', 'Geraet', 'Geraetegruppe', 'Geraetemodell', 'HttpResponse', 'HttpResponseBadRequest', 'HttpResponseRedirect', 'Raum', 'Standort', '__builtins__', '__doc__', '__file__', '__name__', '__package__', 'delete', 'info', 'models', 'move', 'render_to_response']

        $ python manage.py shell
        >>> from devices.views.interaction import geraete
        >>> dir(geraete)
        ['Abteilung', 'Auftrag', 'Auftragsvorlage', 'Geraet', 'Geraetegruppe', 'Geraetemodell', 'HttpResponse', 'HttpResponseBadRequest', 'HttpResponseRedirect', 'Raum', 'Standort', '__builtins__', '__doc__', '__file__', '__name__', '__package__', 'delete', 'info', 'models', 'move', 'render_to_response']

        $ python manage.py shell
        >>> import devices.views.interaction
        >>> devices.views.interaction.geraete
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
        AttributeError: 'module' object has no attribute 'geraete'
        >>> dir(devices.views.interaction)
        ['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__']
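
    For reference: importing a package does not automatically import the modules inside it. interaction.geraete only exists as an attribute once something has imported devices.views.interaction.geraete, which is why the first two shell sessions work while the third (a bare import of the package) does not, and why the string-based URL patterns work: Django resolves the dotted path and imports the module itself. Since the __init__.py files are empty, a likely fix (a sketch against the layout above, not a confirmed solution) is to pull the submodules in from the package's own __init__.py:

        # devices/views/interaction/__init__.py
        # Importing the submodules here binds them as attributes of the package,
        # so "from views import interaction" in urls.py is then sufficient.
        from . import geraete, geraetemodelle, geraetegruppen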

  • Should we develop a custom membership provider in this case?

    - by Allen
    I'll be adding a bounty to this, probably 200, more if you guys think it's appropriate. I won't accept an answer until I can add a bounty, so feel free to go ahead and answer now.

    Summary
    Long story short, we've been tasked with gutting the authentication and authorization parts of a fairly old and bloated ASP.NET application that previously had all of these components written from scratch. Since our application isn't a typical one, and none of us have experience with ASP.NET's built-in membership provider stuff, we're not sure if we should roll our own authentication and authorization again or if we should try to work within the ASP.NET membership provider mindset and develop our own membership provider.

    Our Application
    We have a fairly old ASP.NET application that gets installed at customer locations to service clients on a LAN. Admins create users (users do not sign up), and depending on the install, we may have the software integrated with LDAP. Currently, the LDAP integration bulk-imports the users to our database, and when they log in, it authenticates against LDAP so we don't have to manage their passwords. Nothing amazing there. Admins can assign users to 1 group, and they can change the authorization of that group to manage access to various parts of the software. Groups are maintained by admins (web-based UI) and, as said earlier, granted / denied permissions to certain functionality within the application. All this was completely written from the ground up without using any of the built-in .NET authorization or authentication. We literally have IsLoggedIn() methods that check for login and redirect to our login page if the user isn't.

    Our Rewrite
    We've been tasked to integrate more tightly with LDAP. They want us to tie groups in our application to groups (or whatever type of container LDAP uses) in LDAP, so that when a customer opts to use our LDAP integration, they don't have to manage their users in LDAP AND in our application. The new way, they will simply create users in LDAP and add them to groups in LDAP, and our application will see that they belong to the appropriate LDAP group and authenticate and authorize them. In addition, we've been granted the go-ahead to completely rip out the user authentication and authorization code and completely re-do it.

    Our Problem
    The problem is that none of us have any experience with ASP.NET membership provider functionality. The little bit of exposure I have to it makes me worry that it was not intended to be used for an application such as ours. Though developing our own ASP.NET Membership Provider and Role Manager sounds like it would be a great experience and most likely the appropriate thing to do. Basically, I'm looking for advice: should we be using the ASP.NET Membership Provider & Role Management API, or should we continue to roll our own? I know this decision will be influenced by our requirements, so I'm going over them below.

    Our Requirements
    Just a quick and dirty list:
    - Maintain the ability to have a DB of users, authenticate them, and give admins (only, not users) the ability to CRUD users
    - Allow the site to integrate with LDAP; when this is chosen, they don't want any users stored in the DB, only the relationship between groups as they exist in our app / DB and the groups/containers as they exist in LDAP
    - .NET 3.5 is being used (mix of ASP.NET WebForms and ASP.NET MVC)
    - Has to work in ASP.NET and ASP.NET MVC (shouldn't be a problem, I'm guessing)
    - This can't be user-centric; administrators need to be the only ones that CRUD (or import via LDAP) users and groups
    - We have to be able to authenticate via LDAP when it's configured to do so

    I always try to monitor my questions closely, so feel free to ask for more info. Also, a general summary of what I'm looking for in an answer is just: "You should/shouldn't use xyz, here's why." Links regarding the ASP.NET membership provider and role management stuff are very welcome; most of what I'm finding is 5+ years old. Edit: Added some stuff to "Our Rewrite".
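
    For what it's worth, the LDAP side of the rewrite described above boils down to one query at login: find the user's directory entry, read its group memberships, and map those groups to application roles. A rough sketch of that check using Python's ldap3 package, purely to illustrate the mechanism (the server, DNs, account names, and group-to-role table are made up; a .NET 3.5 implementation would do the same via System.DirectoryServices):

        from ldap3 import Server, Connection, SUBTREE  # pip install ldap3

        # Hypothetical mapping from LDAP groups to application roles.
        GROUP_TO_APP_ROLE = {
            "CN=AppAdmins,OU=Groups,DC=example,DC=com": "admin",
            "CN=AppUsers,OU=Groups,DC=example,DC=com": "user",
        }

        server = Server("ldap://ldap.example.com")
        conn = Connection(server, user="CN=svc,DC=example,DC=com",
                          password="secret", auto_bind=True)

        # Look up the user's entry and read the groups it belongs to.
        conn.search("DC=example,DC=com", "(sAMAccountName=jdoe)",
                    search_scope=SUBTREE, attributes=["memberOf"])

        groups = conn.entries[0].memberOf.values if conn.entries else []
        roles = {GROUP_TO_APP_ROLE[g] for g in groups if g in GROUP_TO_APP_ROLE}
        print(roles)  # application roles this user is authorized for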

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors you should be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access database, Excel spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting in multiple layers, and IoC containers, and abstracting out the persistence, etc. is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project about approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

    1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working?

    2. Complexity - How complex is the application functionality?

    3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and expected to be in use for many years to come?

    Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using. Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere within the middle, and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS, we can see why:

    Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).

    Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.

    Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

    Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it were a slow, phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills
    Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer's skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be the ability to make the conscious decision to move a project's code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You’ll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you’ll still find websites that recommend you do these things.

    Erase Cache Files Regularly to Speed Things Up
    You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large number of useless files. However, running CCleaner or similar utilities every day to erase your browser’s cache won’t actually speed things up. It will slow down your web browsing, as your web browser is forced to redownload the files all over again and reconstruct the cache you regularly delete. If you’ve installed CCleaner or a similar program and run it every day with the default settings, you’re actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.

    Enable ReadyBoost to Speed Up Modern PCs
    Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless — ReadyBoost won’t actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM — think 512 MB — ReadyBoost may help a bit. Otherwise, don’t bother.

    Open the Disk Defragmenter and Manually Defragment
    On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you’re still opening the Disk Defragmenter every week and clicking the Defragment button, you don’t need to do this — Windows is doing it for you unless you’ve told it not to run on a schedule. Modern computers with solid-state drives don’t have to be defragmented at all.

    Disable Your Pagefile to Increase Performance
    When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn’t have much memory and it’s running slow, it’s probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can’t be trusted to manage a pagefile and won’t use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it’s true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit.

    Enable CPU Cores in MSConfig
    Some websites claim that Windows may not be using all of your CPU cores or that you can speed up your boot time by increasing the number of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the number of cores used. In reality, Windows always uses the maximum number of processor cores your CPU has. (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It’s just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to only use a single core on a multi-core system — but all it can do is restrict the number of cores used.

    Clean Your Prefetch To Increase Startup Speed
    Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache — when you open an application, Windows checks the Prefetch folder, looks at the application’s .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue you’ll build up useless files as you uninstall programs and .pf files will be left over. In reality, Windows only loads the data in these .pf files when you launch the associated application, and only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won’t be preloaded, but Windows would also have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there’s no reason to do that. Let Windows manage Prefetch on its own.

    Disable QoS To Increase Network Bandwidth
    Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation would work smoothly, even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.

    Set DisablePagingExecutive to Make Windows Faster
    The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code will be forced to stay resident in memory. Once again, some people believe that Windows isn’t smart enough to manage the pagefile on its own and that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won’t really do anything. If you have little memory, changing this setting may force Windows to push programs you’re using to the page file rather than push unused system files there — this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance.

    Process Idle Tasks to Free Memory
    Windows does things, such as creating scheduled system restore points, when you step away from your computer. It waits until your computer is “idle” so it won’t slow your computer and waste your time while you’re using it. Running the “Rundll32.exe advapi32.dll,ProcessIdleTasks” command forces Windows to perform all of these tasks while you’re using the computer. This is completely pointless and won’t help free memory or anything like that — all you’re doing is forcing Windows to slow your computer down while you’re using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don’t start running and interfere with the benchmark.

    Delay or Disable Windows Services
    There’s no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory — think Windows Vista and those “Vista Capable” PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista, and computers have more than enough memory, so you won’t see any improvements from disabling system services included with Windows. Some people argue for not disabling services, however — they recommend setting services from “Automatic” to “Automatic (Delayed Start)”. By default, the Delayed Start option just starts services two minutes after the last “Automatic” service starts. Setting services to Delayed Start won’t really speed up your boot time, as the services will still need to start — in fact, it may lengthen the time it takes to get a usable desktop, as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible will result in a better experience. The “Delayed Start” feature is primarily useful for system administrators who need to ensure a specific service starts later than another service.

    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look — the change is probably useless. Want to actually speed up your PC? Try disabling useless startup programs that run on boot, which increase your boot time and consume memory in the background. This is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer.  Today’s improvements include: a new low-cost shared mode scaling option, support for custom domains with shared and reserved mode web-sites using both CNAME and A-Records (the latter enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility.  All of these improvements are now live in production and available to start using immediately.

    New “Shared” Scaling Tier
    Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month).  All of the capabilities we introduced in June with this free tier remain the same with today’s update. Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June).  Scaling to either of these modes is easy.  Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button.  Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed. Below are some more details on the new “shared” option, as well as the existing “reserved” option:

    Shared Mode
    With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites.  A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment.  Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve.  The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB. A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it.  The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com).  We will also in the future enable SNI based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers). You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled).  A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month).

    Reserved Instance Mode
    In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode.  When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it).  You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits. You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc).  Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save.  Changes take effect in seconds. Unlike shared mode, there is no per-site cost when running in reserved mode.  Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost).  Reserved instance VMs start at 8 cents/hr for a small reserved VM.

    Elastic Scale-up/down
    Windows Azure Web Sites allows you to scale-up or down your capacity within seconds.  This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application. If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings.  You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage). Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour.  So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode.  There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate.  This makes it super flexible and cost effective.

    Improved Custom Domain Support
    Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com).  You can associate multiple custom domains to each Windows Azure Web Site.  With today’s release we are introducing support for A-Records (a big ask by many users). With the A-Record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix).  Because you can map multiple domains to a single site, you can optionally enable both a www and naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems). We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release.  Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them. As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime.

    Continuous Deployment Support with Git and CodePlex or GitHub
    One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git.  This provides a really powerful way to manage your application deployments using source control.  It is really easy to enable this from a website’s dashboard page. The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then if they are successful automatically publish to Azure. With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub.  This support is enabled with all web-sites (including those using the “free” scaling mode). Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page you’ll see two additional options show up when Git based publishing is enabled for the web-site. You can click on either the “Deploy from my CodePlex project” link or “Deploy from my GitHub project” link to walk through a simple workflow to configure a connection between your website and a source repository you host on CodePlex or GitHub.  Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs.  This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically.  The two videos below walk through how easy it is to enable this workflow, deploy an initial app, and then make a change to it: Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes). Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes). This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git. Note: today’s release supports establishing connections with public GitHub/CodePlex repositories.  Support for private repositories will be enabled in a few weeks.

    Support for multiple branches
    Previously, we only supported deploying from the git ‘master’ branch.  Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub.  This enables a variety of useful scenarios.  For example, you can now have two web-sites - a “live” and “staging” version – both linked to the same repository on CodePlex or GitHub.  You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch.  This enables a really clean way to enable final testing of your site before it goes live. This 1 minute video demonstrates how to configure which branch to use with a web-site.

    Summary
    The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month).  Keep an eye out on my blog for details as these new features become available.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • CodePlex Daily Summary for Wednesday, June 05, 2013

    CodePlex Daily Summary for Wednesday, June 05, 2013

    Popular Releases:
    - QlikView Extension - Animated Scatter Chart: Animated Scatter Chart - v1.0: Version 1.0, including source code, qar file, and an example QlikView application. Tested with: Browser: Firefox 20 (x64), Google Chrome 27 (x64), Internet Explorer 9. QlikView: QlikView Desktop 11 - SR2 (x64), QlikView Desktop 11.2 - SR1 (x64), QlikView Ajax Client 11.2 - SR2 (based on x64)
    - BarbaTunnel: BarbaTunnel 7.2: Warning: HTTP Tunnel is not compatible with version 6.x and prior; the HTTP packet format has been changed. Check Version History for more information about this release.
    - Web Pages CMS: 0.5: First public release
    - Harvester - Debug Viewer for Trace, NLog & Log4Net: v2.0.1 (.NET 4.0): Minor updates. Fixed incorrect process naming being displayed if a process ID is reassigned before the cache is invalidated. Fixed incorrect event type/source for TraceListener.TraceData methods. Updated NLog package references. Official documentation moved to GitHub: http://cbaxter.github.com/Harvester Official source moved to GitHub: https://github.com/cbaxter/Harvester
    - SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.8: This release includes these changes: Upgraded SuperSocket to 1.5.3, which is much more stable. Added handshake request validating API (WebSocketServer.ValidateHandshake(TWebSocketSession session, string origin)). Fixed a bug where m_Filters in SubCommandBase can be null if the command's LoadSubCommandFilters(IEnumerable<SubCommandFilterAttribute> globalFilters) method is not invoked. Fixed the compatibility issue on Origin getting in the different version protocols. Marked ISub...
    - Impulse Media Player: Impulse Media Player 3.3.1.0: EchoNest Analyzer introduced. Similar-track feature (social tab). Last played tracks can be removed permanently. Pitch / tempo bar can be hidden.
    - BlackJumboDog: Ver5.9.0: 2013.06.04 Ver5.9.0: six change-log items in Japanese (mentioning $Remote.ini / Tmp.ini, ThreadBaseTest, POP3/SMTP, Web URL handling, and FTP LIST); the original text was lost to encoding.
    - Media Companion: Media Companion MC3.569b: New: * Movies - Autoscrape/batch rescrape extra fanart and/or extra thumbs. * Movies - Alternative editor can manually add actors. * TV - Batch rescraper autoscrapes extrafanart, if the option is enabled. Fixed: * Movies - Slow performance switching to the movie tab, by adding the option 'Disable "Not Matching Rename Pattern"' to Movie Preferences - General. * Movies - Fixed only actors with images being scraped and added to the nfo. * Movies - Fixed filter reset if the selected tab was above Home Movies. * Updated Medi...
    - Nearforums - ASP.NET MVC forum engine: Nearforums v9.0: Version 9.0 of Nearforums with great new features for users and developers: SQL Azure support. Admin UI for forum categories. Avoid HTML validation for certain roles. Improved profile picture moderation and support. Warn, suspend, and ban users. Web administration of site settings. Extensions support. Visit the Roadmap for more details. Webdeploy package sha1 checksum: 9.0.0.0: e687ee0438cd2b1df1d3e95ecb9d66e7c538293b
    - eReading: eReading: description in Chinese; the original text was lost to encoding.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.93: Added -esc:BOOL switch (CodeSettings.AlwaysEscapeNonAscii property) to always force non-ASCII characters (ch > 0x7f) to be escaped as the JavaScript \uXXXX sequence. This switch should be used if creating a Symbol Map and outputting the result to a text encoding other than UTF-8 or UTF-16 (ASCII, for instance). Fixed a bug where a complex comma operation is the operand of a return statement, and it was looking at the wrong variable for possible optimization of = to just .
    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.42: Changes: NEW: Added support for "GatASexyCity.com" links. NEW: Added support for "ImgCloud.co" links. NEW: Added support for "ImGirl.info" links. NEW: Added support for "SexyImg.com" links. FIXED: "ImageBam.com" links.
    - Document.Editor: 2013.22: What's new for Document.Editor 2013.22: Improved bullet list support. Improved number list support. Minor bug fixes, improvements, and speed-ups.
    - CarrotCake, an ASP.Net WebForms CMS: Binaries and PDFs - Zip Archive (v. 4.3 20130528): Features include a content management system and a robust featured blogging engine. This includes configurable date-based blog post URLs, blog post content association with categories and tags, assignment/customization of category and tag URL patterns, simple blog post feedback collection and review, blog post pagination/indexes, designation of a default blog page (required to make search, category links, or tag links function), URL date formatting patterns, RSS feed support for posts and pages...
    - PHPExcel: PHPExcel 1.7.9: See the Change Log for details of the new features and bugfixes included in this release, and methods that are now deprecated.
    - Droid Explorer: Droid Explorer 0.8.8.10 Beta: Fixed issue with some people having a folder called "android-4.2.2" in their build-tools path. - 16223
    - patterns & practices: Data Access Guidance: Data Access Guidance Drop3 2013.05.31: Drop 3
    - DotNet.Highcharts: DotNet.Highcharts 2.0 with Examples: DotNet.Highcharts 2.0 tested and adapted to the latest version of Highcharts, 3.0.1. Added new chart types: Arearange, Areasplinerange, Columnrange, Gauge, Boxplot, Waterfall, Funnel and Bubble. Added new type PercentageOrPixel, which represents a value as a number or a number with a percentage; used for sizes, width, height, length, etc. Removed inheritances in YAxis option classes. Closed issues: 682: Missing property - XAxisPlotLinesLabel.Text; 688: backgroundColor and plotBackgroundColor are...
    - Umbraco CMS: Umbraco 6.1.1: Source code: Looking for the source code? We're not uploading that as a zip file any more because you can already get it from CodePlex; click this link and hit the "Download" link. Blog: Read the release blog post for 6.1.0. Read the release blog post for 6.1.1. Getting started: Read the installation documentation: http://our.umbraco.org/documentation/Installation/ Check the free foundation videos on how to get started building Umbraco sites. They're available from: Introduction for webmasters:...
    - Composite C1 CMS - Open Source on .NET: Composite C1 4.0 (release candidate): Composite C1 4.0 (4.0.4897.31550) (release candidate). Write a review for this release. Getting started: If you are new to Composite C1 and want to install it: http://docs.composite.net/Getting-started What's new in Composite C1 4.0: The following are highlights of major changes since Composite C1 3.2: General user features: Uploads up to 512MB accepted in the media archive. New "Block Selector" in Visual Editor enables users to create styled div, blockquote, etc. elements (not yet availabl...

    New Projects:
    - ApiDoc: ApiDoc is a library for creating your own API documentation, similar to MSDN, directly from your assembly and /// XML comments, without source code.
    - Associativy Internal Link Graph Builder: Orchard module for automatically creating Associativy graphs (http://associativy.com/) from internal links.
    - Azure Business App Scale Proof of Concepts: This is a series of proof-of-concept demo applications built to demonstrate the scale of particular architecture or application designs.
    - Badr: .Net Web Framework: Simple, database-driven, multiplatform .NET web framework
    - Calcolo di Integrali con approssimazioni: Integrals computed by approximation: trapezoid method, rectangle method, parabola method, Monte Carlo method (translated from Italian)
    - configuration: A full-function configuration system based on .NET
    - Cron Expression Descriptor: "Translate" a cron expression into a human-readable format. Supports databinding and creation of the expression and Quartz.NET job scheduling.
    - Daphne Web Edition: The Daphne Web Edition of software for professional checkers players, running in a browser.
    - Entity framework T4 NHibernate mapping generator: This project contains T4 templates to create POCOs + NHibernate mappings from an Entity Framework model (.edmx).
    - EWS Streaming Notification Sample Application: Sample application showing how to handle multiple subscriptions using streaming notifications, specifically for Exchange 2013 (or Wave 15 of Office 365).
    - EXACT_EXTENSION: This is a project to support an account system in international schools.
    - IPS Training: Playing around with Joe to teach some programming.
    - jean0604wordpressmercurial: dfdafda
    - Lenic.DI: Lenic.DI -- another IoC container library using Delegate
    - Manage Azure Srevice: This project is a Windows Forms application to manage Azure services and deployments.
    - Microsoft dot Net Lab: This project concentrates into a lab a large web project based on Microsoft best practices, with info on "what works" and, more importantly, what should be avoided in a production environment.
    - Miris Human Milk Analyser: This is a DEMO project ONLY.
    - Multilingual call each other: Multilingual call each other
    - nBlade: A dependency injection container.
    - nChart: A JavaScript chart library based on D3.js
    - nCMS: A content management system.
    - nReport: A JavaScript report library.
    - nTemplate: A template engine.
    - searchLocal: search local
    - T4 Unit Test Constructor: T4 Unit Test Constructor is a text-transform file that generates a complete unit testing project based on sibling projects inside a solution.
    - v3r137: m0l3cUL4r dyN4MiC2 5iMul47i0N u5IN9 V3Rl37 iN739r47I0N. vi5U4li23 p4R7ICL32 8y 0P3n9l P0iN7 5prI73. U23 0P3NMp 4nd 0p3NCl f0r 5p33D.
    - Virtual Sport for Sport Team: This is the website to manage a sports team. Through this website you can manage the members of the team, from players to staff, schedules, and more, and for su...
    - Windows Azure MultiSite Role: This web role allows you to host multiple websites on the same VM instances, synchronising the files automatically.
    - WuvOverlay: An overlay for the game Guild Wars 2 to display useful information obtained from the public API.
    - XYZ: XYZ

    Read the article

  • CodePlex Daily Summary for Wednesday, December 29, 2010

    CodePlex Daily Summary for Wednesday, December 29, 2010Popular ReleasesDocX: DocX v1.0.0.11: Building Examples projectTo build the Examples project, download DocX.dll and add it as a reference to the project. OverviewThis version of DocX contains many bug fixes, it is a serious step towards a stable release. Added1) Unit testing project, 2) Examples project, 3) To many bug fixes to list here, see the source code change list history.Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. Changes since release 71246 include: Debug info is now stored in a single .cpdb file (which is a Firebird database) Keyboard input works now (using Console.ReadLine) Console colors work (using Console.ForegroundColor and .BackgroundColor)AutoLoL: AutoLoL v1.5.0: Added the all new Masteries Browser which replaces the Quick Open combobox AutoLoL will now attemt to create file associations for mastery (*.lolm) files Each Mastery Build can now contain keywords that the Masteries Browser will use for filtering Changed the way AutoLoL detects if another instance is already running Changed the format of the mastery files to allow more information stored in* Dialogs will now focus the Ok or Cancel button which allows the user to press Return to clo...Confree for Outlook: Confree for Outlook 1.0: Confree for OutlookFeaturesCreate a Claro/Telmex conference directly from Outlook (only works for Argentina). NotesBy now, only works with Claro/Telmex Argentina (if you are using http://simon.telmex.net.ar, we are good).Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability. Many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading. PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!Razor Templating Engine: Razor Templating Engine v1.2: Changes: ADDED: Standard namespaces imports for all templates: System, System.Collections.Generic, System.Linq (Changeset 5635) ADDED: Methods for Precompilation (Changeset 3283) CHANGED: Refactored precompilation to be exposed per-TemplateService. (Changeset 3440) CHANGED: Added more descriptive compilation exception message. (Changeset 3629) FIXED: Forced reference to Microsoft.CSharp to correct support for testing frameworks. (Changeset 3689) FIXED: Added support for nested anonymous obj...PhysicalMeasure C# library: PhysicalMeasure 1.0 Release 2010-12-28: PhysicalMeasure 1.0 Release 2010-12-28Facebook C# SDK: 4.1.1: From 4.1.1 Release: Authentication bug fix caused by facebook change (error with redirects in Safari) Authenticator fix, always returning true From 4.1.0 Release Lots of bug fixes Removed Dynamic Runtime Language dependencies from non-dynamic platforms. Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample Changed internal serialization to use Json.net BREAKING CHANGE: Canvas Session is no longer supported. Use Signed...MonitorWang: MonitorWang v1.0.5 (Growler): What's new?Added Growl Notification Finalisers - these are interceptor components that work exclusively with the Growl Publisher. 
These allow you to modify the Growl Notification just prior to it being sent by the publisher. You can inject custom logic to precisely control how the Growl Notification will appear; this includes changing the Growl Priority level and message text. I've created to two Growl Notification Finalisers - one allows you to change the Growl Notification Priorty based on ...Catel - WPF and Silverlight MVVM library: 1.0.0: And there it is, the final release of Catel, and it is no longer a beta version!EnhSim: EnhSim 2.2.7 ALPHA: 2.2.7 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Mongoose has bee...LINQ to Twitter: LINQ to Twitter Beta v2.0.19: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation. Bug fixes.Rocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: Architecture is reviewed and adjusted in a way so that I can introduce the Web version and WPF version of this framework next. - Rocket.Core is introduced - Controller button functions revisited and updated - DB is renewed to suite the implemented features - Create New button functionality is changed - Add Question Handling featuresFlickr Wallpaper Rotator (for Windows desktop): Wallpaper Flickr 1.1: Some minor bugfixes (mostly covering when network connection is flakey, so I discovered them all while at my parents' house for Christmas).NoSimplerAccounting: NoSimplerAccounting 6.0: -Fixed a bug in expense category report.NHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (Thanks to Angelo)NewLife XCode: XCode v6.5.2010.1223 ????(????v3.5??): XCode v6.5.2010.1223 ????,??: NewLife.Core ??? NewLife.Net ??? XControl ??? XTemplate ????,??C#?????? XAgent ???? NewLife.CommonEnitty ??????(???,XCode??????) XCode?? ?????????,??????????????????,?????95% XCode v3.5.2009.0714 ??,?v3.5?v6.0???????????????,?????????。v3.5???????????,??????????????。 XCoder ??XTemplate?????????,????????XCode??? XCoder_Src ???????(????XTemplate????),??????????????????VivoSocial: VivoSocial 7.4.0: Please see changes: http://support.vivoware.com/project/ChangeLog.aspx?PROJID=48Umbraco CMS: Umbraco 4.6 Beta - codename JUNO: The Umbraco 4.6 beta (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and contains more than 89 bug fixes since the 4.5.2 release. Improved installer experience Updated Starter Kits (Simple, Blog, Personal, Business) Beautiful, free, customizable skins included Skinning engine and Skin customization (see Skinning Documentation Kit) Default dashboards on install with hide option Updated Login t...ASP.NET MVC SiteMap provider: MvcSiteMapProvider 2.3.0: Using NuGet?MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation!Donate via PayPal via PayPal. Release notesThis will be the last release targeting ASP.NET MVC 2 and .NET 3.5. 
MvcSiteMapProvider 3.0.0 will be targeting ASP.NET MVC 3 and .NET 4 Web.config setting skipAssemblyScanOn has been deprecated in favor of excludeAssembliesForScan and includeAssembliesForScan ISiteMapNodeUrlResolver is now completely responsible for generating th...New Projects42 Sight Holistic Data Warehousing on Microsoft SQL Server 2008: This project is the Holistic DW Template. Included are the 3 MS XL spreadsheet reporting models as described in the book Holistic Data Warehousing. This Template is multi-purpose and requires no modification other than you populating it according to the book examplesAndshev.OrmTools: Tools for generating SQL queries using LINQ. ORM framework is also planned to be implemented.BFG and Dice Roller: This solution will contain 2 concepts, the Dice Roller with hooks for extension into other applications, as well as a Personal Battlefleet Gothic Helper application to help manage, control, and smoothe the flow of BFG games. The BFG application will target the Phone 7 platform.BlondieODOS: its an osBluehat Community projetcts: Bluehat Community projetcts.C# - DBWrapper MySQL: Abro aqui a discussão para criarmos uma classe para manipular dados em MySQL.CardGame for Education: This project goal is to create a small Card Game and to teach the team members how to design, implement and how to use "real world" tools. Englishlearning: My test project!Faran Silver: Silverlight Controls Developed till now. Include: 1- Persian Date Input 2- Close-able Tab Item 3- Shamsi Date Convertergocodelibrary: This is my project.Id3Stuff: Project that throws together some nice id3 mass editing functionsInfoStrat.OakFx: OakFx by InfoStratinivi: iniviJJODESK: This little utility software written in Vb.net 2.0, allows you to : - switch IE proxy, - manage IP, - launch website, - launch other utilty tools , - launch and manage cmd, bat, vbs script. - and much more ..... First I wrote this tool for my personal use, because I work with several networks, and customer sites. I think many other software developpers have the same need. All parameters are customisable, and are store in an ini text file. And it can run on an USB stick. learn MVC: Just keeping all source code in one place. Thats all folksLJF.Utility: Utility?????Lucidity: Lucid WILDMixModes Synergy - The WPF Toolkit: MixModes Synergy 2010 is a complete toolset that simplifies user interface development in Windows Presentation Foundation based applications. Synergy includes rich user interface controls providing professional look and feel for your applications out of the box.Multi Targetted Rss Reader: A Multi Targetted Rss Reader demo that shows how to multi target your view model across different screens (WP7, Silverlight, WPF, Surface).Secure Batch Command Tool For Administrators: A command line tool that will help you to protect your passwords that you should provide in your batch files. This tool encrypt Dos commands and update the batch files in a way that keeps your batch file maintainable and easy to use.SharePoint 2010 Conditional Lookup Control: I created a "Conditional Lookup" control for SharePoint 2010. With this you can connect two lookup fields controls on list forms to fill the second control with values depending on the selected value in the first control. There is a demo inside.sql dot: Ease when making a connection to the sql serverwolisbetter: WOL is better!WP7PasswordsBackup: backup/sync client for windows phone 7 passwords app. ??????: ??????

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that Governments have carried out large-scale data-mining programmes aimed at terrorism detection and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of this action will clearly continue for some time to come. What is becoming clear is that these programmes provide a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allow analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC “Big Data” challenges to be faced, and potential lessons to be learned from these high-profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity; the early or retrospective violation of regulations, laws or corporate policies and procedures; emerging risks; and weakening controls? Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professionals’ day-to-day challenges. So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data: Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data. Machine-Generated/Sensor Data – includes Call Detail Records (“CDR”), weblogs and trading systems data. Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook. The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data: Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. This includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more. Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed. Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change.
    Without question, all GRC professionals work in a dynamic environment, and as new services, new products or new business lines are added, or new marketing campaigns executed, for example, new data types are needed to capture the resultant information. Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the number of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive actions: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures, and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value. Big Challenges - Big Opportunities As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data.  "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category. However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly allows the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with means that the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal as well as a technical or operational standpoint.
    However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected. The Right GRC & Big Data Partnership Becomes Key The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies that have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that the GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it. A Blueprint and Roadmap Service for Big Data Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward. Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:
    Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
    Conduct a big data readiness and Information Architecture maturity assessment
    Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
    Provide initial guidance on big data candidate selection for migrations or implementation
    Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
    Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
    Conduct an executive workshop with recommendations and next steps
    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.
Big data is here to stay and risk management certainly is not going anywhere, and ultimately financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • Reference Data Management

    - by rahulkamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering chart of accounts to
    enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation. What is reference data? Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders, and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and give them contextual value. The following table contains an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards.
    Types & Codes | Taxonomies | Relationships / Mappings | Standards
    Transaction Codes | Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS) | Product / Segment; Product / Geo | Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601)
    Lookup Tables (e.g., Gender, Marital Status, etc.) | Product Categories | City → State → Postal Codes | Currency Codes (e.g., ISO)
    Status Codes | Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense) | Customer / Market Segment; Business Unit / Channel | Country Codes (e.g., ISO 3166, UN)
    Role Codes | Market Segments | Country Codes / Currency Codes / Financial Accounts | Date/Time, Time Zones (e.g., ISO 8601)
    Domain Values | Universal Standard Products and Services Classification (UNSPSC), eCl@ss | International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings | Tax Rates
    Why manage reference data? Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment. Sample Use Cases of Reference Data Management Healthcare: Diagnostic Codes The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings – either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.
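    To make the cross-walk idea concrete, here is a minimal sketch of how such a mapping might be persisted and queried; every table and column name below is invented for illustration, not taken from DRM or any coding standard:

    ```sql
    -- Hypothetical ICD-9 -> ICD-10 cross-walk table (names are illustrative only)
    CREATE TABLE icd_crosswalk (
        icd9_code   VARCHAR(6) NOT NULL,
        icd10_code  VARCHAR(8) NOT NULL,
        approximate CHAR(1) DEFAULT 'N',      -- 'Y' where human judgment was applied
        PRIMARY KEY (icd9_code, icd10_code)   -- one ICD-9 code can map to many ICD-10 codes
    );

    -- "Forward-walk" legacy ICD-9 diagnoses onto ICD-10 for reporting
    SELECT d.patient_id, x.icd10_code
    FROM   diagnoses d
    JOIN   icd_crosswalk x ON x.icd9_code = d.icd9_code;
    ```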
    Spend Management: Product, Service & Supplier Codes Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks. Specialty Finance: Point-of-Sale Transaction Codes and Product Codes In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations. Multi-National Companies: Industry Classification Schemes As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.

    Read the article

  • SQL SERVER – History of SQL Server Database Encryption

    - by pinaldave
    I recently met Michael Coles and Rodney Landrum, the authors of the one-of-a-kind book Expert SQL Server 2008 Encryption, at SQLPASS in Seattle. During the conversation we ended up discussing how Microsoft is evolving encryption technology. The same discussion led to talking about the history of encryption tools in SQL Server. Michael pointed me to page 18 of his book on encryption. He explicitly gave me permission to reproduce the relevant part of the history from his book. Encryption in SQL Server 2000 Built-in cryptographic encryption functionality was nonexistent in SQL Server 2000 and prior versions. In order to get server-side encryption in SQL Server you had to resort to purchasing or creating your own SQL Server XPs. Creating your own cryptographic XPs could be a daunting task owing to the fact that XPs had to be compiled as native DLLs (using a language like C or C++) and the XP application programming interface (API) was poorly documented. In addition there were always concerns around creating well-behaved XPs that “played nicely” with the SQL Server process. Encryption in SQL Server 2005 Prior to the release of SQL Server 2005 there was a flurry of regulatory activity in response to accounting scandals and attacks on repositories of confidential consumer data. Much of this regulation centered on the need for protecting and controlling access to sensitive financial and consumer information. With the release of SQL Server 2005 Microsoft responded to the increasing demand for built-in encryption by providing the necessary tools to encrypt data at the column level. This functionality prominently featured the following: Support for column-level encryption of data using symmetric keys or passphrases. Built-in access to a variety of symmetric and asymmetric encryption algorithms, including AES, DES, Triple DES, RC2, RC4, and RSA. Capability to create and manage symmetric keys. Ability to generate asymmetric keys and self-signed certificates, or to install external asymmetric keys and certificates. Implementation of a hierarchical model for encryption key management, similar to the ANSI X9.17 standard model. SQL functions to generate one-way hash codes and digital signatures, including SHA-1 and MD5 hashes. Additional SQL functions to encrypt and decrypt data. Extensions to the SQL language to support creation, use, and administration of encryption keys and certificates. SQL CLR extensions that provide access to .NET-based encryption functionality. Encryption in SQL Server 2008 Encryption demands have increased over the past few years. For instance, there has been a demand for the ability to store encryption keys “off-the-box,” physically separate from the database and the data it contains. Also there is a recognized requirement for legacy databases and applications to take advantage of encryption without changing the existing code base. To address these needs SQL Server 2008 adds the following features to its encryption arsenal: Transparent Data Encryption (TDE): Allows you to encrypt an entire database, including log files and the tempdb database, in such a way that it is transparent to client applications. Extensible Key Management (EKM): Allows you to store and manage your encryption keys on an external device known as a hardware security module (HSM). Cryptographic random number generation functionality. Additional cryptography-related catalog views and dynamic management views. SQL language extensions to support the new encryption functionality.
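    As a side note, the SQL Server 2005 column-level features above boil down to a short T-SQL workflow in practice. The sketch below is a minimal, hedged example (the table, key and certificate names are invented), not an excerpt from the book:

    ```sql
    -- Key hierarchy: database master key -> certificate -> symmetric key
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase';
    CREATE CERTIFICATE SalesCert WITH SUBJECT = 'Card data protection';
    CREATE SYMMETRIC KEY SalesKey
        WITH ALGORITHM = AES_256
        ENCRYPTION BY CERTIFICATE SalesCert;

    OPEN SYMMETRIC KEY SalesKey DECRYPTION BY CERTIFICATE SalesCert;

    -- Encrypt on the way in...
    UPDATE dbo.Customers
    SET    CardNumberEnc = ENCRYPTBYKEY(KEY_GUID('SalesKey'), CardNumber);

    -- ...and decrypt on the way out
    SELECT CONVERT(VARCHAR(30), DECRYPTBYKEY(CardNumberEnc)) AS CardNumber
    FROM   dbo.Customers;

    CLOSE SYMMETRIC KEY SalesKey;
    ```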
    The encryption book covers all the tools in its various chapters in one simple story. If you are interested in how encryption evolved and reached the stage where it is today, this book is a must for everyone. You can read my earlier review of the book over here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology Tagged: Encryption, SQL Server Encryption, SQLPASS

    Read the article

  • NuGet package manager in Visual Studio 2012

    - by sreejukg
    NuGet is a package manager that helps developers automate the process of installing and upgrading packages in Visual Studio projects. It is free and open source. You can see the project on CodePlex at the link below. http://nuget.codeplex.com/ These days developers need to work with several packages or libraries from various sources; a typical example is jQuery. You will hardly find a website that does not use jQuery. When you include these packages by manually copying the files, it is a difficult task to update those files as new versions get released. NuGet is a Visual Studio extension that manages such packages, and the happy news for developers is that it ships with Visual Studio 2012 by default. So by using NuGet, you can include new packages in your project as well as update existing ones with the latest versions. In this article, I am going to demonstrate how you can include jQuery (or anything similar) in a .NET project using the NuGet package manager. I have Visual Studio 2012, and I created an empty ASP.NET web application. In the solution explorer, the project looks like the following. Now I need to add jQuery to this project, and for this I am going to use NuGet. From solution explorer, right-click the project and you will see “Manage NuGet Packages”. Click on the “Manage NuGet Packages” option so that you get the NuGet Package Manager dialog. Since there is no package installed in my project, you will see the “no packages installed” message. From the left menu, select the Online option, and in the search box (available in the top right corner) enter the name of the package you are looking for. In my case I just entered jQuery. Now the NuGet package manager will search online and bring back all the available packages that match my search criteria. You can select the right package and use the Install button just next to the package details. Also in the right pane, it will show links to the project information and license terms; you can see more details of the project you are looking for from the provided links. Now I have selected to install jQuery. Once it is installed successfully, you will find a green icon next to it that tells you the package has been installed successfully in your project. Now if you go to the Installed Packages link in the left menu of the package manager, you can see jQuery is installed, and you can uninstall it by just clicking on the Uninstall button. Now close the package manager dialog and let us examine the project in solution explorer. You can see some new entries in your project. One is the Scripts folder, where jQuery got installed, and the other is a packages.config file. The packages.config is an XML file that tells the NuGet package manager the ID and version of each package you install. Based on this file, the NuGet package manager will identify the installed packages and the corresponding versions. Installing packages using the NuGet package manager will save a lot of time for developers, and developers can get upgrades for the installed packages very easily.
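    For reference, the packages.config produced by the steps above looks roughly like the snippet below; the version and target framework shown are just plausible examples, so expect different values in your own project:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <package id="jQuery" version="1.8.2" targetFramework="net45" />
    </packages>
    ```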

    Read the article

  • The Mysterious ARR Server Farm to URL Rewrite link

    - by OWScott
    Application Request Routing (ARR) is a reverse proxy plug-in for IIS7+ that does many things, including functioning as a load balancer.  For this post, I’m assuming that you already have an understanding of ARR.  Today I wanted to find out how the mysterious link between ARR and URL Rewrite is maintained.  Let me explain… ARR is unique in that it doesn’t work by itself.  It sits on top of IIS7 and uses URL Rewrite.  As a result, ARR depends on URL Rewrite to ‘catch’ the traffic and redirect it to an ARR Server Farm. As the last step of creating a new Server Farm, ARR will prompt you with the following: If you accept the prompt, it will create a URL Rewrite rule for you.  If you say ‘No’, then you’re on your own to create a URL Rewrite rule. When you say ‘Yes’, the Server Farm’s checkbox for “Use URL Rewrite to inspect incoming requests” will be checked.  See the following screenshot. However, I’m not a fan of this auto-rule.  The problem is that if I make any changes to the URL Rewrite rule, which I always do, and then make the wrong change in ARR, it will blow away my settings.  So, I prefer to create my own rule and manage it myself. Since I had some old rules that were managed by ARR, I wanted to update them so that they were no longer managed that way.  I took a look at the config in applicationHost.config to try to find out what property would bind the two together.  I assumed that there would be a property on the ServerFarm called something like urlRewriteRuleName that would serve as the link between ARR and URL Rewrite.  I found no such property.  After a bit of testing, I found that the name of the URL Rewrite rule is the only link between ARR and URL Rewrite.  I wouldn’t have guessed.  The URL Rewrite rule needs to be named exactly ARR_{ServerFarm Name}_loadBalance, although it’s not case sensitive. Consider the following auto-created URL Rewrite rule: And, the link between ARR and URL Rewrite exists: Now, as soon as I rename that to anything else, for example, site.com ARR Binding, the link between ARR and URL Rewrite is broken. To be certain of the relationship, I renamed it back again and sure enough, the relationship was reestablished. Why is this important?  It’s only important if you want to decouple the relationship between ARR and the URL Rewrite rule, and if you want to do so, the best way is to rename the URL Rewrite rule.  If you uncheck the “Use URL Rewrite to inspect incoming requests” checkbox, it will delete your rule for you without prompting.  Conclusion The mysterious link between ARR and URL Rewrite only exists through the URL Rewrite rule name.  If you want to break the link, simply rename the URL Rewrite rule.  It’s completely safe to do so, and, in my opinion, this is a rule that you should manage yourself anyway.
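    For reference, the auto-created rule that ARR writes into configuration looks roughly like the following sketch (the farm name myFarm is invented for the example); renaming the name attribute is exactly what breaks the link described above:

    ```xml
    <rewrite>
      <globalRules>
        <rule name="ARR_myFarm_loadBalance" patternSyntax="Wildcard" stopProcessing="true">
          <match url="*" />
          <action type="Rewrite" url="http://myFarm/{R:0}" />
        </rule>
      </globalRules>
    </rewrite>
    ```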

    Read the article

  • Our Oracle Recruitment Team is Growing - Multiple Job Opportunities in Bangalore, India

    - by david.talamelli
    DON'T GET STUCK IN THE MATRIX. SEE YOUR FUTURE. VISIT THE ORACLE. The position(s): CORPORATE RECRUITING RESEARCH ANALYST(S) ABOUT ORACLE Oracle's business is information--how to manage it, use it, share it, protect it. For three decades, Oracle, the world's largest enterprise software company, has provided the software and services that allow organizations to get the most up-to-date and accurate information from their business systems. Only Oracle powers the information-driven enterprise by offering a complete, integrated solution for every segment of the process industry. When you run Oracle applications on Oracle technology, you speed implementation, optimize performance, and maximize ROI. Great hiring doesn't happen by accident; it's the culmination of a series of thoughtfully planned and well-executed events. At the core of any hiring process is a sourcing strategy. This is where you come in... Do you want to be a part of a world-class recruiting organization that's on the cutting edge of technology? Would you like to experience a rewarding work environment that allows you to further develop your skills, while giving you the opportunity to develop new skills? If you answered yes, you've taken your first step towards a future with Oracle. We are building a Research Team to support our North America Recruitment Team, and we need creative, smart, and ambitious individuals to help us drive our research department forward. Oracle has a track record for employing and developing the very best in the industry. We invest generously in employee development, training and resources. Be a part of the most progressive internal recruiting team in the industry. For more information about Oracle, please visit our Web site at http://www.oracle.com Escape the humdrum job world matrix, visit the Oracle and be a part of a winning team; apply today. POSITION: Corporate Recruiting Research Analyst LOCATION: Bangalore, India RESPONSIBILITIES:
    • Develop a candidate pipeline using Web 2.0 sourcing strategies and advanced Boolean search techniques to support the U.S. Recruiting Team for various job functions and levels.
    • Engage with assigned recruiters to understand the supported business as well as the recruiting requirements; partner with recruiters to meet expectations and deliver a qualified pipeline of candidates.
    • Source candidates, including both active and passive job seekers, to provide a strong pipeline of qualified candidates for each recruiter; exercise creativity to find candidates using Oracle's advanced sourcing tools/techniques.
    • Fully evaluate each candidate's background against the requirements provided by the recruiter, and process leads using the ATS (Applicant Tracking System).
    • Manage your efforts efficiently; maintain the highest levels of client satisfaction as well as strong operations and reporting of research activities.
    PREFERRED QUALIFICATIONS:
    • Fluent in English, with excellent written and oral communication skills.
    • Undergraduate degree required, MBA or Masters preferred.
    • Proficiency with Boolean search techniques desired.
    • Ability to learn new software applications quickly.
    • Must be able to accommodate some U.S. evening hours.
    • Strong organization and attention-to-detail skills.
    • Prior HR or corporate in-house recruiting experience a plus.
    • The fire in the belly to learn new ideas and succeed.
    • Ability to work in team and individual environments.
    This is an excellent opportunity to join Oracle in our Bangalore offices.
Interested applicants can send their resume to [email protected] or contact David on +61 3 8616 3364

    Read the article

  • Oracle Virtualization at Oracle OpenWorld 2012

    - by Chris Kawalek
    Mini-Series Entry 1 of 3: Hands-On Virtualization This is the first entry of a 3-part mini-series aimed at highlighting server and desktop virtualization at this year’s Oracle OpenWorld.  Oracle OpenWorld 2012 is fast approaching! If you are as excited as we are about the fascinating new Oracle virtualization content featured at Oracle OpenWorld 2012, you won’t want to miss this blog mini-series. We will be highlighting sessions that cover advances and innovations in our products, our product strategy and roadmap, and hands-on labs with step-by-step instructions from our field and product experts. In the blog mini-series you will learn about:
    The Oracle Virtualization general keynote session
    Hands-on labs
    Key Oracle server and desktop virtualization sessions
    In this entry, we will cover the Oracle Virtualization keynote session and the hands-on labs you won't want to miss. General Session: Oracle Virtualization Strategy and Roadmap Session ID: GEN8725 Oracle offers the industry’s most complete and integrated virtualization portfolio, enabling organizations to realize benefits beyond simple consolidation as they transform their data centers into flexible cloud-based infrastructures. Join Oracle executives and experts to learn about Oracle’s desktop-to-data-center virtualization solutions, such as the OS with built-in management integration at all layers, which can help you virtualize and manage the complete computing environment, from physical servers to virtual servers and applications. This “don’t-miss” session offers details of the latest product updates and strategy; product roadmaps; integration with enterprise applications; and real-world examples of how Oracle server, desktop, and storage virtualization is benefiting customers. Here are our top picks for hands-on labs at Oracle OpenWorld 2012: Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility Session ID: HOL9907 This hands-on lab demonstrates the performance (using an industry-standard load tester) and roaming capabilities of Oracle Virtual Desktop Infrastructure with Oracle’s Sun Ray Clients, Apple iPad and other clients. Deploying an IaaS Environment with Oracle VM: Hands-On Lab  Session ID: HOL9558 This hands-on lab takes you through the planning and deployment of an infrastructure as a service (IaaS) environment with Oracle VM as the foundation. It covers a range of topics, from planning storage capacity, LUN creation, network bandwidth planning, and best practices to designing and streamlining the environment for ease of management. Learn from deeply experienced field engineers and product experts. Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-On Lab Session ID: HOL9559 This hands-on lab is for application architects or system administrators who will need to deploy and manage Oracle Applications. You’ll learn how Oracle VM Templates can turn you into a power user who can virtualize and deploy complex Oracle Applications in minutes. Longtime field-experienced engineers and product experts will show you, step by step, how to download and import templates and deploy the applications. x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance Session ID: HOL9870 The purpose of this hands-on lab is to demonstrate the functionality and usage of Oracle’s enterprise cloud infrastructure for x86 with Oracle VM 3.x.
    It covers:
    Creation of VMs
    Migration of VMs
    Quick and easy deployment of Oracle applications with Oracle VM Templates
    Usage of the Storage Connect plug-in for the Sun ZFS Storage Appliance
    You can find these and other great sessions in the Oracle OpenWorld 2012 Content Catalogue. Start checking now to better plan and organize your week at the conference. Then you’ll be ready to sign up for all of your sessions in mid-July when the scheduling tool goes live. While the hands-on labs allow you to directly interact with Oracle virtualization products, the conference sessions allow you to hear from a wide variety of industry experts on how they’re using the technology in real-world deployments, solving specific challenges, and more. In tomorrow's entry, we'll start talking about the many conference sessions related to Oracle server and desktop virtualization you can attend during the show. See you then! - The Oracle Virtualization marketing team

    Read the article

  • The Beginner’s Guide to Greasemonkey User Scripts in Firefox

    - by Asian Angel
    Everybody knows that Firefox has add-ons for virtually everything, but if you don’t want to bloat your installation you’ve always got the option of Greasemonkey scripts instead. Here’s a quick primer on how to use them. Getting Started with User Scripts Once you have Greasemonkey installed, managing the extension is really easy. Left-click on the status bar icon to turn the extension on/off and right-click to access the context menu shown here. Whether you use the Options button in the Add-ons Manager window or the context menu shown above, both will bring up the Manage User Scripts dialog. At the moment you have a nice clean slate to work with… time to get some scripts added in. The majority of user scripts can be found at two different sites, the first being the appropriately named userscripts.org (the second, userstyles.org, is linked below), and you can either browse by tag or search for a script. As you can see here, your search for a particular type of script can be quickly narrowed down based on category. There is definitely a lot to choose from. For our example we focused on the “textarea” tag. There were 62 scripts available, but we quickly found what we were looking for on the first page. Installing, Managing, & Using Your Scripts When you find a script that you want to install, visit the script’s homepage and click on the “Install” button. Note: Link for this script provided below. Once you have clicked on the Install button, Greasemonkey will open up the following installation window. You will be able to view:
    A summary of what the script does
    A list of websites that the script is supposed to function on (our example is set for all)
    The script source, if desired
    A final decision on whether to install the script or cancel the process
    Right-clicking on our status bar icon shows our new script listed and active. Reopening the Manage User Scripts window shows:
    Our new script listed in the column on the left
    The websites/pages included
    An option to disable the script (can also be done in the context menu)
    The ability to edit the script
    The ability to uninstall the script
    If you choose to edit the script you will be asked to browse for and select a default text editor of your choice (first time only). Once you have selected a text editor you can make any changes desired to the script. We decided to test our new user script on the site. Going to the comment box at the bottom, we could easily resize the window as desired. The comment box definitely got a lot bigger. Conclusion If you prefer to keep the number of extensions to a minimum in your Firefox installation, then Greasemonkey and the Userscripts website can easily provide that extra functionality without the bloat. For added auto website script detection goodness see our article on Greasefire. Note: See our article here for specialized How-To Geek user style scripts that can be added to Greasemonkey.
    Links
    Download the Greasemonkey Extension (Mozilla Add-ons)
    Install the Textarea & Input Resize User Script
    Visit the Userscripts.org Website
    Visit the Userstyles.org Website

    Read the article

  • BizTalk Cross Reference Data Management Strategy

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott This article describes an approach to the management of cross-reference data for BizTalk.  Some articles about the BizTalk Cross Referencing features can be found here: http://home.comcast.net/~sdwoodgate/xrefseed.zip http://geekswithblogs.net/michaelstephenson/archive/2006/12/24/101995.aspx http://geekswithblogs.net/charliemott/archive/2009/04/20/value-vs.id-cross-referencing-in-biztalk.aspx Options Current options for managing this data include: Maintaining xml files in the format that can be used by the out-of-the-box BTSXRefImport.exe utility. Use of user interfaces that have been developed to manage this data: BizTalk Cross Referencing Tool XRef XML Creation Tool However, there are the following issues with the above options: The 'BizTalk Cross Referencing Tool' requires a separate database to manage.  The 'XRef XML Creation' tool has no means of persisting the data settings. The 'BizTalk Cross Referencing Tool' generates integers in the common id field. I prefer to use a string (e.g. acme.country.uk). This is more readable (see naming conventions below). Both UI tools continue to use BTSXRefImport.exe.  This utility replaces all xref data. This can be a problem in continuous integration environments that support multiple clients or BizTalk target instances.  If you upload the data for one client it would destroy the data for another client.  Yet in TFS, where builds run concurrently, this would break unit tests. Alternative Approach In response to these issues, I instead use simple SQL scripts to directly populate the BizTalkMgmtDb xref tables, combined with a data namespacing strategy to isolate client data. Naming Conventions All data keys use namespace prefixing.  The pattern will be <companyName>.<dataType>.  The naming convention is to use lower casing for all items.  The data must follow this pattern to isolate it from other company cross-reference data.  The table below shows some sample data. (Note: this data uses the 'ID' cross-reference tables; the same principles apply for the 'value' cross-referencing tables.)
    Table.Field | Description | Sample Data
    xref_AppType.appType | Application types | acme.erp, acme.portal, acme.assetmanagement
    xref_AppInstance.appInstance | Application instances (each will have a corresponding application type) | acme.dynamics.ax, acme.dynamics.crm, acme.sharepoint, acme.maximo
    xref_IDXRef.idXRef | Holds the cross-reference data types | acme.taxcode, acme.country
    xref_IDXRefData.CommonID | Holds each cross-reference type value used by the canonical schemas | acme.vatcode.exmpt, acme.vatcode.std, acme.country.usa, acme.country.uk
    xref_IDXRefData.AppID | Holds the value for each application instance and each xref type | GBP, USD
    SQL Scripts The data to be stored in the BizTalkMgmtDb xref tables will be managed by SQL scripts stored in a database project in the Visual Studio solution.
    File(s) | Description
    Build.cmd | A sqlcmd script to deploy data by running the SQL scripts below.  (This can be run as part of the MSBuild process.)
    acme.purgexref.sql | SQL script to clear acme.* data from the xref tables.  As such, this will not impact data for any other company.
    acme.applicationInstances.sql | SQL script to insert application type and application instance data.
    acme.vatcode.sql, acme.country.sql, etc. | A separate SQL script to insert each cross-reference data type and the application-specific values for these types.
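    To give a feel for what these scripts contain, here is a hedged sketch of a purge script and one insert script. The real BizTalkMgmtDb xref tables have additional key columns, so treat the column lists as illustrative rather than a drop-in implementation:

    ```sql
    -- acme.purgexref.sql: remove only acme.* rows, leaving other companies' data intact
    DELETE FROM xref_IDXRefData  WHERE CommonID    LIKE 'acme.%';
    DELETE FROM xref_IDXRef      WHERE idXRef      LIKE 'acme.%';
    DELETE FROM xref_AppInstance WHERE appInstance LIKE 'acme.%';
    DELETE FROM xref_AppType     WHERE appType     LIKE 'acme.%';

    -- acme.country.sql: register the xref type, then one value per application instance
    INSERT INTO xref_IDXRef (idXRef) VALUES ('acme.country');
    INSERT INTO xref_IDXRefData (idXRef, CommonID, appInstance, AppID)
    VALUES ('acme.country', 'acme.country.uk', 'acme.dynamics.ax', 'GB');
    ```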

    Read the article

  • SQL SERVER – 3 Challenges for DBA and Smart Solutions

    - by Pinal Dave
    A developer’s life is never easy; a DBA’s life is even crazier. DBA’s Life When a developer wakes up in the morning, most of the time they have no idea what different challenges they are going to face that day. Of course, most developers know the project and roadmap which they are working on. However, developers have no clue what coding challenges they are going to face that day. A DBA’s life is even crazier. When a DBA wakes up in the morning, they are often thankful that they were not disturbed during the night due to server issues. The very next thing they wish for is that they do not get a challenge they can’t solve that day. The problems a DBA faces every single day are mostly unpredictable, and they just have to solve them as they come during the day. The life of a DBA is not always bad, though. There are always ways and methods by which one can overcome various challenges. Let us see three of the challenges and how a DBA can use various tools to overcome them. Challenge #1: Synchronize Data Across Servers A very common challenge DBAs receive is that they have to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish the task, and it is nearly impossible to do with T-SQL alone. However, thankfully there are tools like dbForge Studio which can save the day and synchronize data across servers. Read my detailed blog post about the same over here: SQL SERVER – Synchronize Data Exclusively with T-SQL. Challenge #2: SQL Report Builder DBAs are often asked to build reports on the go. It really annoys DBAs, but hardly anyone cares about it. No matter how busy a DBA is, they are just called upon to build reports on very short notice. I personally like to avoid any task which is given to me accidentally, and building reports can be boring. I’d rather spend time on high availability, disaster recovery, or performance tuning than building reports. I use a third-party SQL tool when I have to work with SQL reports. Some of these products have extended reporting capabilities; this group includes the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier over here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server. Challenge #3: Work with the OTHER Database The manager does not understand that MySQL is different from SQL Server and SQL Server is different from Oracle. For them everything is the same. In my career, hundreds of times I have faced a situation where I am given a database to manage, or some task to do, when the regular DBA is on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and I can do anything. Honestly, I can’t, but I hardly dare to argue. I fall back on a third-party tool to manage a database when it is not in my comfort zone. For example, I was once given a MySQL performance tuning task (at that time I did not know MySQL so well). To simplify the search for a problem query, let us use the MySQL Profiler in dbForge Studio for MySQL. It provides such commands as a Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing the same: MySQL – Profiler: A Simple and Convenient Tool for Profiling SQL Queries. Well, that’s it! There were many different occasions when I have been saved by such tools. Maybe some other day I will write part 2 of this blog post.
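    To see why Challenge #1 is so painful to hand-code, consider that even a one-way sync of a single table from a linked server takes a MERGE statement like the hedged sketch below (the server, database and table names are made up), and that is before you deal with schema drift, identity columns or conflict resolution across many tables:

    ```sql
    -- One-way sync of dbo.Products from a linked server into the local copy
    MERGE INTO dbo.Products AS target
    USING RemoteSrv.SalesDb.dbo.Products AS source
        ON target.ProductID = source.ProductID
    WHEN MATCHED AND target.Name <> source.Name THEN
        UPDATE SET Name = source.Name
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ProductID, Name) VALUES (source.ProductID, source.Name)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;
    ```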
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL Tagged: Devart, SQL Tool

    Read the article

  • Managing User & Role Security with Oracle SQL Developer

    - by thatjeffsmith
    With the advent of SQL Developer v3.0, users have had access to some powerful database administration features. Version 3.1 introduced more powerful features such as an interface to Data Pump and RMAN. Today I want to talk about some very simple but frequently run tasks that SQL Developer can assist with, like: identifying privs granted to users, managing role privs, and assigning new roles and privs to users & roles. Before getting started, you’ll need a connection to the database with the proper privileges. The common ROLE used to accomplish this is the ‘DBA‘ role. Curious as to what the DBA role is actually comprised of? Let’s find out! Open the DBA Console First make sure you’re connected to the database you want to manage security on with a privileged administrator account. Then open the View menu and select ‘DBA.’ Accessing the DBA panel ‘Create’ a Connection Click on the green ‘+’ button in the DBA panel. It will ask you to choose a previously defined SQL Developer connection. Defining a DBA connection in Oracle SQL Developer Once connected you will see a tree list of DBA features you can start interacting with. Expand the ‘Security’ Tree Node As you click on an object in the DBA panel, the ‘viewer’ will open on the right-hand side, just like you are accustomed to seeing when clicking on a table or stored procedure. Accessing the DBA role If I’m a newly hired Oracle DBA, the first thing I might want to do is become very familiar with the DBA role. People will be asking you to grant them this role or a subset of its privileges. Once you see what the role can do, you will become VERY protective of it. My favorite 3-letter 4-letter word is ‘ANY’ and the DBA role is littered with privileges like this: ANY TABLE privs granted to DBA role So if this doesn’t freak you out, then maybe you should re-consider your career path. Or in other words, don’t be granting this role to ANYONE you don’t completely trust to take care of your database. If I’m assigned a new database to manage, the first thing I might want to look at is just WHO has been assigned the DBA role. SQL Developer makes this easy to ascertain; just click on the ‘User Grantees’ panel. Who has the keys to your car? Making Changes to Roles and Users If you right-click on a user in the Tree, you can do individual tasks like grant a sys priv or expire an account. But you can also use the ‘Edit User’ dialog to do a lot of work in one pass. As you click through options in these dialogs, SQL Developer will build the ‘ALTER USER’ script in the SQL panel, which can then be executed or copied to the worksheet or to your .SQL file to be run at your discretion. A Few Clicks vs a Lot of Typing These dialogs won’t make you a DBA, but if you’re pressed for time and you’re already in SQL Developer, they can sure help you make up for lost time in just a few clicks!
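    If you prefer a query to a screenshot, the data dictionary views behind these panels are easy to hit directly; a quick sketch (the user name scott is just an example):

    ```sql
    -- Who has the keys to your car? (the 'User Grantees' panel, as a query)
    SELECT grantee FROM dba_role_privs WHERE granted_role = 'DBA';

    -- The scary 'ANY' privileges bundled into the DBA role
    SELECT privilege FROM dba_sys_privs
    WHERE  grantee = 'DBA' AND privilege LIKE '%ANY%';

    -- The kind of ALTER USER script the Edit User dialog builds up for you
    ALTER USER scott ACCOUNT LOCK PASSWORD EXPIRE;
    ```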

    Read the article

< Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >