Search Results


  • Does a software architect/designer require more skill and intellect than a software engineer (implementation)?

    - by Amumu
    So I heard the positions for designing software and writing specs for developers to implement are ranked higher and paid more. I think many companies use the Software Engineer title for the person who implements software, meaning someone who uses tools and technologies to write the actual code. I know that in order to be a software architect, one needs to be good at implementation in order to have an architectural overview of a system using a set of specific technologies. This is different from my idea of a Software Engineer. My thinking follows the IEEE standard: a software engineer is an engineer who is capable of going from requirement analysis until the software is deployed, based on the SWEBOK (IEEE). Just look at the table of contents. The IEEE even has certificates for Software Engineering, since ABET (Accreditation Board for Engineering and Technology) seems not to have an official qualification test for Software Engineers (although IEEE is a member of ABET). The two certificates are CSDA and CSDP. I intend to take these two examinations in the future to be qualified as a software engineer, although I am already working as one (in a junior position). On a side note, for the issues around the Software Engineer title, you can read the discussion here: Just a Programmer and Just a Software Engineer. The information that ABET does not accredit Software Engineers is in "Just a Software Engineer".

    On the other hand, why is the Programmer/Software Engineer who writes code considered a low-level position? Suppose two people have equal skills after the same years of experience; one becomes a software architect and one keeps focusing on the implementation aspect of Software Engineering (of course he also has the design skill to compose a system, since he's a software engineer as well, but maybe less than the specialized software architect). How come work from the Software Engineer is considered less complicated than that of the Software Architect? Writing great code that turns a design into reality requires far greater skill than just understanding a particular language and a framework. I don't think the work of those who wrote and contribute to the Linux OS is lower level or easier than conceptual design and writing specs. Can someone enlighten me?

    Read the article

  • Quotas - Using quotas on ZFSSA shares, projects, and users

    - by Steve Tunstall
    So you don't want your users to fill up your entire storage pool with their MP3 files, right? Good idea to make some quotas. There are some good tips and tricks here, including a helpful workflow (a script) that will allow you to set a default quota on all of the users of a share at once.

    Let's start with some basics. I made a project called "small" and inside it I made a share called "Share1". You can set quotas on the project level, which will affect all of the shares in it, or you can do it on the share level like I am here. Go to the share's General property page.

    First, I'm using a Windows client, so I need to make sure I have my SMB mountpoint. Do you know this trick yet? Go to the Protocol page of the share. See the SMB section? It needs a resource name to make the UNC path for the SMB (Windows) users. You do NOT have to type this name in for every share you make! Do this at the Project level. Before you make any shares, go to the Protocol properties of the Project, and set the SMB Resource name to "On". This special code will automatically make the SMB resource name of every share in the project the same as the share name. Note the UNC path name I got below. Since I did this at the Project level, I didn't have to lift a finger for it to work on every share I make in this project. Simple.

    So I have now mapped my Windows "Z:" drive to this Share1. I logged in as the user "Joe". Note that my computer shows my Z: drive as 34GB, which is the entire size of my pool that this share is in. Right now, Joe could fill this drive up and it would fill up my pool.

    Now, go back to the General properties of Share1. In the "Space Usage" area, over on the right, click on the "Show All" text under the Users & Groups section. Sure enough, Joe and some other users are in here and have some data. Note this is also a handy window to use just to see how much space your users are using in any given share.

    Ok, Joe owes us money from lunch last week, so we want to give him a quota of 100MB. Type his name in the Users box. Notice how it now shows you how much data he's currently using. Go ahead and give him a 100M quota and hit the Apply button. If I go back to "Show All", I can see that Joe now has a quota, and no one else does. Sure enough, as soon as I refresh my screen back on Joe's client, he sees that his Z: drive is now only 100MB, and he's more than half way full.

    That was easy enough, but what if you wanted to make the whole share have a quota, so that the share itself, no matter who uses it, can only grow to a certain size? That's even easier. Just use the Quota box on the left hand side. Here, I use a quota on the share of 300MB.

    So now I log off as Joe, and log in as Steve. Even though Steve does NOT have a quota, it is showing my Z: drive as 300MB. This would affect anyone, INCLUDING the ROOT user, because you specified the quota to be on the SHARE, not on a person.

    Note that back in the Share, if you click the "Show All" text, the window does NOT show Steve, or anyone else, to have a quota of 300MB. Yet we do, because it's on the share itself, not on any user, so this panel does not see that.

    Ok, here is where it gets FUN.... Let's say you do NOT want a quota on the SHARE, because you want SOME people, like Root and yourself, to have FULL access to it and you want the ability to fill the whole thing up if you darn well feel like it. HOWEVER, you want to give the other users a quota.
    HOWEVER, you have, say, 200 users, and you do NOT feel like typing in each of their names and giving them each a quota, and they are not all members of an AD global group you could use or anything like that. Hmmmmmm.... No worries, mate. We have a handy-dandy script that can do this for us.

    Now, this script was written a few years back by Tim Graves, one of our ZFSSA engineers out of the UK. This is not my script. It is NOT supported by Oracle support in any way. It does work fine with the 2011.1.4 code as best as I can tell, but Oracle, and I, are NOT responsible for ANYTHING that you do with this script. Furthermore, I will NOT give you this script, so do not ask me for it. You need to get this from your local Oracle storage SC. I will give it to them. I want this only going to my fellow SCs, who can then work with you to have it and show you how it works.

    Here's what it does... Once you add this workflow to the Maintenance-->Workflows section, you click it once to run it. Nothing seems to happen at this point, but something did. Go back to any share or project. You will see that you now have four new, custom properties on the bottom. Do NOT touch the bottom two properties, EVER. Only touch the top two. Here, I'm going to give my users a default quota of about 40MB each. The beauty of this script is that it will only affect users that do NOT already have any kind of personal quota. It will only change people who have no quota at all. It does not affect the Root user.

    After I hit Apply on the Share screen, nothing will happen until I go back and run the script again. The first time you run it, it creates the custom properties. The second and all subsequent times you run it, it checks the shares for any users, and applies your quota number to each one of them, UNLESS they already have one set. Notice in the readout below how it did NOT apply to my Joe user, since Joe had a quota set.

    Sure enough, when I go back to the "Show All" in the share properties, all of the users who did not have a quota now have one for 39.1MB. Hmmm... I did my math wrong, didn't I? That's OK, I'll just change the number of the Custom Default quota again. Here, I am adding a zero on the end. After I click Apply, and then run the script again, all of my users, except Joe, now have a quota of 391MB.

    You can customize a person at any time. Here, I took the Steve user, and specifically gave him a quota of zero. Now when I run the script again, he is different from the rest, so he is no longer affected by the script. Under Show All, I see that Joe is at 100, and Steve has no quota at all. I can do this all day long.

    Yes, you will have to re-run the script every time new users get added. The script only applies the default quota to users that are present at the time the script is run. However, it would be a simple thing to schedule the script to run each night, or to make an alert to run the script when certain events occur.

    For you power users, if you ever want to delete these custom properties and remove the script completely, you will find these properties under the "Schema" section under the Shares section. You can remove them here. There's no need to, however; they don't hurt a thing if you just don't use them.

    I hope these tips have helped you out there. Quotas can be fun.

    Read the article

  • Reclaim Vertical UI Space by Adding a Toolbar to the Left or Right Side of Firefox

    - by Asian Angel
    Do you need to make the most efficient use possible of vertical UI space on your system’s screen, but have horizontal space to spare? Now you can shift the toolbar icons and their awesome functionality to a slim sidebar in Firefox using the Vertical Toolbar extension. As you can see above the sidebar even picked up on our Personas Theme to help it blend in nicely with the rest of the browser. You can access the options for the new toolbar by right clicking within the toolbar area. These are the options for the toolbar…you can choose the side of Firefox that works best for toolbar placement, adjust display, hiding, & animation settings, define how the buttons display, and add/remove additional buttons as desired. Once you open the Customize Toolbar Window make any desired additions or removals just like you would before on the top UI section and close when finished. Note: Works with Firefox 4.0b7pre – 4.0.* Vertical Toolbar [Mozilla Add-ons]

    Read the article

  • Time Warp

    - by Jesse
    It’s no secret that daylight savings time can wreak havoc on systems that rely heavily on dates. The system I work on is centered around recording dates and times, so naturally my co-workers and I have seen our fair share of date-related bugs. From time to time, however, we come across something that we haven’t seen before. A few weeks ago the following error message started showing up in our logs:

    “The supplied DateTime represents an invalid time. For example, when the clock is adjusted forward, any time in the period that is skipped is invalid.”

    This seemed very cryptic, especially since it was coming from areas of our application that are typically only concerned with capturing date-only values (no explicit time component) from the user, like reports that take a “start date” and “end date” parameter. For these types of parameters we just leave off the time component when capturing the date values, so midnight is used as a “placeholder” time. How is midnight an “invalid time”?

    Globalization Is Hard

    Over the last couple of years our software has been rolled out to users in several countries outside of the United States, including Brazil. Brazil begins and ends daylight savings time at midnight on pre-determined days of the year. On October 16, 2011 at midnight many areas in Brazil began observing daylight savings time, at which time their clocks were set forward one hour. This means that at the instant it became midnight on October 16, it actually became 1:00 AM, so any time between 12:00 AM and 12:59:59 AM never actually happened. Because we store all date values in the database in UTC, we always adjust any “local” dates provided by a user to UTC before using them as filters in a query. The error we saw was thrown by .NET when trying to convert the Brazilian local time of 2011-10-16 12:00 AM to UTC, since that local time never actually existed. We hadn’t experienced this same issue with any of our US customers because the daylight savings time changes in the US occur at 2:00 AM, which doesn’t conflict with our “placeholder” time of midnight.

    Detecting Invalid Times

    In .NET you might use code similar to the following for converting a local time to UTC:

        var localDate = new DateTime(2011, 10, 16); // 2011-10-16 @ midnight
        const string timeZoneId = "E. South America Standard Time"; // Windows system timezone Id for "Brasilia" timezone.
        var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
        var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone);

    The code above throws the “invalid time” exception referenced above. We could try to detect whether or not the local time is invalid with something like this:

        var localDate = new DateTime(2011, 10, 16); // 2011-10-16 @ midnight
        const string timeZoneId = "E. South America Standard Time"; // Windows system timezone Id for "Brasilia" timezone.
        var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
        if (localTimeZone.IsInvalidTime(localDate))
            localDate = localDate.AddHours(1);
        var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone);

    This code works in this particular scenario, but it hardly seems robust. It also does nothing to address the issue that can arise when dealing with the ambiguous times that fall around the end of daylight savings. When we roll the clocks back an hour, they record the same hour on the same day twice in a row. To continue on with our Brazil example, on February 19, 2012 at 12:00 AM, it will immediately become February 18, 2012 at 11:00 PM all over again.
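    (A quick aside, not from the original post: .NET can at least detect this ambiguity for you. The minimal sketch below assumes the same Windows time zone id used above; TimeZoneInfo.IsAmbiguousTime and TimeZoneInfo.GetAmbiguousTimeOffsets are the relevant framework APIs.)

        // Detect an ambiguous local time and inspect the candidate UTC offsets.
        // The date below falls in the repeated Brazilian hour from the example.
        var ambiguousLocal = new DateTime(2012, 2, 18, 23, 30, 0); // 2012-02-18 11:30 PM
        var brazilTimeZone = TimeZoneInfo.FindSystemTimeZoneById("E. South America Standard Time");
        if (brazilTimeZone.IsAmbiguousTime(ambiguousLocal))
        {
            // Two UTC offsets map to this local time; the caller must pick one explicitly.
            TimeSpan[] candidateOffsets = brazilTimeZone.GetAmbiguousTimeOffsets(ambiguousLocal);
        }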
    In this scenario, how should we interpret February 18, 2012 11:30 PM?

    Enter Noda Time

    I heard about Noda Time, the .NET port of the Java library Joda Time, a little while back and filed it away in the back of my mind under the “sounds-like-it-might-be-useful-someday” category. Let’s see how we might deal with the issue of invalid and ambiguous local times using Noda Time (note that as of this writing the samples below will only work using the latest code available from the Noda Time repo on Google Code; the NuGet package version 0.1.0 published 2011-08-19 will incorrectly report unambiguous times as being ambiguous):

        var localDateTime = new LocalDateTime(2011, 10, 16, 0, 0);
        const string timeZoneId = "Brazil/East";
        var timezone = DateTimeZone.ForId(timeZoneId);
        var localDateTimeMapping = timezone.MapLocalDateTime(localDateTime);
        ZonedDateTime unambiguousLocalDateTime;
        switch (localDateTimeMapping.Type)
        {
            case ZoneLocalMapping.ResultType.Unambiguous:
                unambiguousLocalDateTime = localDateTimeMapping.UnambiguousMapping;
                break;
            case ZoneLocalMapping.ResultType.Ambiguous:
                unambiguousLocalDateTime = localDateTimeMapping.EarlierMapping;
                break;
            case ZoneLocalMapping.ResultType.Skipped:
                unambiguousLocalDateTime = new ZonedDateTime(
                    localDateTimeMapping.ZoneIntervalAfterTransition.Start,
                    timezone);
                break;
            default:
                throw new InvalidOperationException(
                    string.Format("Unexpected mapping result type: {0}", localDateTimeMapping.Type));
        }
        var convertedDateTime = unambiguousLocalDateTime.ToInstant().ToDateTimeUtc();

    Let’s break this sample down:

    - I’m using the Noda Time ‘LocalDateTime’ object to represent the local date and time. I’ve provided the year, month, day, hour, and minute (zeros for the hour and minute here represent midnight). You can think of a ‘LocalDateTime’ as an “invalidated” date and time; there is no information available about the time zone that this date and time belong to, so Noda Time can’t make any guarantees about its ambiguity.
    - The ‘timeZoneId’ in this sample is different than the ones above. In order to use the .NET TimeZoneInfo class we need to provide Windows time zone ids. Noda Time expects an Olson (tz / zoneinfo) time zone identifier and does not currently offer any means of mapping the Windows time zones to their Olson counterparts, though project owner Jon Skeet has said that some sort of mapping will be publicly accessible at some point in the future.
    - I’m making use of the Noda Time ‘DateTimeZone.MapLocalDateTime’ method to disambiguate the original local date time value. This method returns an instance of the Noda Time object ‘ZoneLocalMapping’ containing information about how the provided local date time maps to the provided time zone.
    - The disambiguated local date and time value will be stored in the ‘unambiguousLocalDateTime’ variable as an instance of the Noda Time ‘ZonedDateTime’ object. An instance of this object represents a completely unambiguous point in time and is comprised of a local date and time, a time zone, and an offset from UTC. Instances of ZonedDateTime can only be created from within the Noda Time assembly (the constructor is ‘internal’) to assure callers that each instance represents an unambiguous point in time.
    - The value of the ‘unambiguousLocalDateTime’ might vary depending upon the ‘ResultType’ returned by the ‘MapLocalDateTime’ method.
    There are three possible outcomes:

    - If the provided local date time is unambiguous in the provided time zone, I can immediately set the ‘unambiguousLocalDateTime’ variable from the ‘UnambiguousMapping’ property of the mapping returned by the ‘MapLocalDateTime’ method.
    - If the provided local date time is ambiguous in the provided time zone (i.e. it falls in an hour that was repeated when moving clocks backward from Daylight Savings to Standard Time), I can use the ‘EarlierMapping’ property to get the earlier of the two possible local dates to define the unambiguous local date and time that I need. I could have also opted to use the ‘LaterMapping’ property in this case, or even returned an error and asked the user to specify the proper choice. The important thing to note here is that as the programmer I’ve been forced to deal with what appears to be an ambiguous date and time.
    - If the provided local date time represents a skipped time (i.e. it falls in an hour that was skipped when moving clocks forward from Standard Time to Daylight Savings Time), I have access to the time intervals that fell immediately before and immediately after the point in time that caused my date to be skipped. In this case I have opted to disambiguate my local date and time by moving it forward to the beginning of the interval immediately following the skipped period. Again, I could opt to use the end of the interval immediately preceding the skipped period, or raise an error, depending on the needs of the application.

    The point of this code is to convert a local date and time to a UTC date and time for use in a SQL Server database, so the final ‘convertedDateTime’ variable (typed as a plain old .NET DateTime) has its value set from a Noda Time ‘Instant’. An ‘Instant’ represents a number of ticks since 1970-01-01 at midnight (the Unix epoch) and can easily be converted to a .NET DateTime in the UTC time zone using the ‘ToDateTimeUtc()’ method.

    This sample is admittedly contrived and could certainly use some refactoring, but I think it captures the general approach needed to take a local date and time and convert it to UTC with Noda Time. At first glance it might seem that Noda Time makes this “simple” code more complicated and verbose because it forces you to explicitly deal with the local date disambiguation, but I feel that the length and complexity of the Noda Time sample is proportionate to the complexity of the problem. Using TimeZoneInfo leaves you susceptible to overlooking ambiguous and skipped times that could result in run-time errors or (even worse) run-time data corruption in the form of a local date and time being adjusted to UTC incorrectly.

    I should point out that this research is my first look at Noda Time and I know that I’ve only scratched the surface of its full capabilities. I also think it’s safe to say that it’s still beta software for the time being, so I’m not rushing out to use it in production systems just yet, but I will definitely be tinkering with it more and keeping an eye on it as it progresses.

    Read the article

  • Python — Time complexity of built-in functions versus manually-built functions in finite fields

    - by stackuser
    Generally, I'm wondering about the advantages versus disadvantages of using the built-in arithmetic functions versus rolling your own in Python. Specifically, I'm taking in GF(2) finite field polynomials in string format, converting to base 2 values, performing arithmetic, then outputting back into polynomials in string format. So a small example of this is in multiplication.

    Rolling my own:

        def multiply(a,b):
            bitsa = reversed("{0:b}".format(a))
            g = [(b<<i)*int(bit) for i,bit in enumerate(bitsa)]
            return reduce(lambda x,y: x+y,g)

    Versus the built-in:

        def multiply(a,b):   # a,b are GF(2) polynomials in binary form
            ...
            return a*b       # returns product of 2 polynomials in GF(2)

    Currently, operations like multiplicative inverse (with for example 20 bit exponents) take a long time to run in my program, as it's using all of Python's built-in mathematical operations like // floor division and % modulus, etc., as opposed to making my own division, remainder, etc. I'm wondering how much of a gain in efficiency and performance I can get by building these manually (as shown above). I realize the gains are dependent on how well the manual versions are built; that's not the question. I'd like to find out 'basically' how much advantage there is over the built-ins. So for instance, if multiplication (as in the example above) is well-suited for base 10 (decimal) arithmetic but has to jump through more hoops to change bases to binary and then even more hoops in operating (so it's lower efficiency), that's what I'm wondering. Like, I'm wondering if it's possible to bring the time down significantly by building them myself in ways that maybe some professionals here have already come across.
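    (A hedged sketch, not from the original question, of how one might measure this with timeit. One caveat worth noting: in GF(2), addition is XOR, so combining the shifted partial products with integer '+' introduces carries, and the built-in integer 'a*b' computes ordinary multiplication rather than the carry-less GF(2) product, so the two are not interchangeable. The operands below are arbitrary illustrative values.)

        import timeit

        def gf2_multiply(a, b):
            # Carry-less (GF(2) polynomial) multiply: XOR-accumulate a copy of b,
            # shifted once for every bit of a.
            result = 0
            while a:
                if a & 1:
                    result ^= b
                a >>= 1
                b <<= 1
            return result

        a, b = 0b10011, 0b11001  # x^4 + x + 1 and x^4 + x^3 + 1

        print(bin(gf2_multiply(a, b)))  # carry-less product
        print(bin(a * b))               # integer product; differs whenever carries occur

        # Rough timing comparison; absolute numbers vary by machine and operand size.
        print(timeit.timeit(lambda: gf2_multiply(a, b), number=100000))
        print(timeit.timeit(lambda: a * b, number=100000))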

    Read the article

  • Starting a Java activity in Unity3d Android

    - by Matthew Pavlinsky
    I wrote a small Java activity extension of UnityPlayerActivity similar to what is described in the Unity docs. It has a method for displaying a song picking interface using an ACTION_GET_CONTENT intent. I start this activity using startActivityForResult() and it absolutely kills the performance of my Unity game when it is finished; it drops to about .1 FPS afterwards. I've tried removing the onActivityResult function and even tried starting the activity from inside an onKeyDown event in Java to make sure my method of starting the activity from Unity was not the problem. Here's the code in a basic sense:

        package com.company.product;

        import com.unity3d.player.UnityPlayerActivity;
        import com.unity3d.player.UnityPlayer;
        import android.os.Bundle;
        import android.util.Log;
        import android.content.Intent;

        public class SongPickerActivity extends UnityPlayerActivity {

            private Intent myIntent;
            final static int PICK_SONG = 1;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                Log.i("SongPickerActivity", "OnCreate");
                myIntent = new Intent(Intent.ACTION_GET_CONTENT);
                myIntent.setType("audio/*");
            }

            public void Pick() {
                Log.i("SongPickerActivity", "Pick");
                startActivityForResult(myIntent, PICK_SONG);
            }

            @Override
            protected void onActivityResult(int requestCode, int resultCode, Intent data) {
                super.onActivityResult(requestCode, resultCode, data);
            }
        }

    This is causing me a bit more of a headache than it should and I would be thankful for any sort of advice. Does anyone have any experience with using custom activities in Unity Android or any insight on why this is happening or how to resolve this?

    Read the article

  • How to Make Sure Your Company Doesn't Go Underwater If Your Programmers Win the Lottery

    - by Graviton
    I have a few programmers under me; they are all doing very great and are very smart, obviously. Thank you very much. But the problem is that each and every one of them is responsible for one core area, which no one else on the team has the foggiest idea about. This means that if any one of them is taken out, my company as a business is dead because they aren't replaceable.

    I'm thinking about bringing in new programmers to cover them, just in case they are hit by a bus, or resign, or whatever. But I'm afraid that:

    - The old programmers might actively resist the idea of knowledge transfer, fearing that a backup might reduce their value.
    - I don't have a system to facilitate technology transfer between different developers, so even if I ask them to do it, I've no assurance that they will do it properly.

    My questions are:

    - How to put it to the old programmers in such a way that they would agree?
    - What are the systems that you use in order to facilitate this kind of "backup"? I can understand that you can do code review, but is there a simple way to conduct this? I think we are not ready for a full blown, check-in by check-in code review.

    Read the article

  • AJAX Control Toolkit - Incompatibility with HTMLEditor and UpdatePanel

    - by Guilherme Cardoso
    Unfortunately, the HTMLEditor component of the AJAX Control Toolkit is not compatible with the UpdatePanel. The problem occurs when we use accents with the Mozilla Firefox browser and the HTMLEditor is inside an UpdatePanel. Letters that contain accents are left with an unknown character (this is how they are stored in the database or returned on a PostBack). This can be tested using Mozilla Firefox on the sample site of the ASP.NET AJAX Control Toolkit. Write a word with accents and go to "Submit Content": http://www.asp.net/AJAX/AjaxControlToolkit/Samples/HTMLEditor/HTMLEditor.aspx

    As an alternative, there are multiple Rich Text Editor components, some using jQuery and others not. Queness has provided us with a list of 10 components that can be viewed here: http://www.queness.com/post/212/10-jquery-and-non-jquery-javascript-rich-text-editors

    Hopefully in the next release of the AJAX Control Toolkit, this inconsistency and others (like the ModalPopup Extender issue already referenced in my blog) will be resolved once and for all. Earlier versions did not have these problems, but with the passing of time some parts have come into conflict. If you know of any alternative or want to look into this problem, you can visit the topic I created in the AJAX Control Toolkit section of the ASP.NET forum: http://forums.asp.net/p/1548141/3848763.aspx

    Read the article

  • What are the reasons why Clojure is hyped and PicoLisp widely ignored?

    - by Thorsten
    I recently discovered the Lisp family of programming languages, and it's definitely one of the more diverse and widespread families in the programming language world. I like Elisp because that most wonderful tool Emacs is an Elisp interpreter. But I was looking for one more Lisp dialect to learn and thought Clojure would be the obvious choice nowadays - until I discovered the well hidden gem PicoLisp. That must be the most intelligent programming environment I have ever seen, like taking the best ideas from Lisp and Smalltalk and adding performance and practicality - and the beauty of parsimony. There is even an Emacs mode for it.

    PicoLisp must be the productivity world champion when it comes to building business applications with a database and web client - and that's a very common task. It seems that throwing more and more hardware cores at your PicoLisp application makes it faster and faster, and the database is very performant anyway. However, reactions to PicoLisp on general mailing lists etc. are almost hostile (envy?), and there is absolutely no hype and very little publicity (i.e. not one book published). Are there real, justified reasons for this (except the vast amount of Java libs accessible from Clojure; I know that one)? Or is the mainstream getting it wrong again (see C vs Lisp, Java vs Smalltalk, Windows vs Linux) and will it come to the conclusion 10 years later that the JVM was good as an in-between solution, but a really fast Lisp interpreter on multicore machines is much better and allows much cleaner concepts?

    PS 1: Please note: I'm not interested in Scheme or any Common Lisp dialect, although they might be fine languages. It's just PicoLisp vs Clojure.

    PS 2: Another thing I like about PicoLisp is its similarity to Elisp in certain aspects (both are descendants of MacLisp?) - it's easier to learn two similar languages. There is so much "dynamic binding bashing" on the web, but two of the most appealing Lisp applications use it.

    Read the article

  • Python Productivity vs. Java Productivity

    - by toc777
    Over on SO I came across a question regarding which platform, Java or Python, is best for developing on Google App Engine. Many people were boasting of the increased productivity gained from using Python over Java. One thing I would say about the Python vs Java productivity argument is that Java has excellent IDEs to speed up development, whereas Python is really lacking in this area because of its dynamic nature. So even though I prefer to use Python as a language, I don't believe it gives quite the productivity boost compared to Java, especially when using a new framework. Obviously if it were Java vs Python and the only editor you could use was Vim, then Python would give you a huge productivity boost, but when IDEs are brought into the equation it's not as clear-cut. I think Java's merits are often solely evaluated on a language level and often on outdated assumptions, but Java has many benefits external to the language itself, e.g. the JVM (often criticized but offering huge potential), excellent IDEs and tools, huge numbers of third-party libraries, platforms, etc.

    Question: does Python (and related dynamic languages) really give the huge productivity boosts often talked about? (With consideration given to using new frameworks and working with medium to large applications.)

    Read the article

  • First Stable Version of Opera 15 has been Released

    - by Akemi Iwaya
    Opera has just released the first stable version of their revamped browser and will be proceeding at a rapid pace going forward. There is also news concerning the three development streams they will maintain, along with news of an update for the older 12.x series for those who are not ready to update to 15.x just yet. The day is full of good news for Opera users whether they have already switched to the new Blink/Webkit Engine version or are still using the older Presto Engine version. First, news of the new development streams… Opera has released details outlining their three new release streams:

    - Opera (Stable) – Released every couple of weeks, this is the most solid version, ready for mission-critical daily use.
    - Opera Next – Updated more frequently than Stable, this is the feature-complete candidate for the Stable version. While it should be ready for daily use, you can expect some bugs there.
    - Opera Developer – A bleeding edge version, you can expect a lot of fancy stuff there; however, some nasty bugs might also appear from time to time.

    From the Opera Desktop Team blog post: When you install Opera from a particular stream, your installation will stick to it, so Opera Stable will be always updated to Opera Stable, Opera Next to Opera Next and so on. You can choose for yourself which stream is the best for you. You can even follow a couple of them at the same time!

    Of particular interest is the announcement of continued development for the 12.x series. A new version (12.16) is due to be released soon to help keep the older series up to date and secure while the transition process from 12.x to 15.x continues.

    Read the article

  • SOA Suite 11gR1 Patch Set 2 (PS2) released today!

    - by Demed L'Her
    We just released SOA Suite 11gR1 Patch Set 2 (PS2) this morning! You can download it as usual from:

    - OTN (main platforms only)
    - eDelivery (all platforms)

    11gR1 PS2 is delivered as a sparse installer; that is to say, it is meant to be applied on the latest full release (11gR1 PS1). The good part is that it's great for existing PS1 users, who simply need to apply the patch and run the patch assistant – the not so good part is that new users will first need to download PS1.

    What's in this release? Bug fixes, of course, but also several significant new features. Here is a short selection of the most significant features in PS2:

    - Spring component (for native Java extensibility and integration)
    - SOA Partitions (to organize and manage your composites)
    - Direct Binding (for transactional invocations to and from Oracle Service Bus)
    - HTTP binding (for those of you trying to do away with SOAP and looking for simple GET and POST)
    - Resequencer (for ordering out-of-order messages)
    - WS Atomic Transactions (WS-AT) support (for propagation of transactions across heterogeneous environments)

    Check out the complete list of new features in PS2 for more (including links to the documentation for the above)! But maybe even more importantly, we are also releasing Oracle Service Bus 11gR1 and BPM Suite 11gR1 at the same time – all on the same base platform (WebLogic Server 10.3.3)!

    (NB: it might take a while for all pages and caches to be updated with the new content, so if you don't find what you need today, try again soon!)

    Read the article

  • [GEEK SCHOOL] Network Security 1: Securing User Accounts and Passwords in Windows

    - by Matt Klein
    This How-To Geek School class is intended for people who want to learn more about security when using Windows operating systems. You will learn many principles that will help you have a more secure computing experience and will get the chance to use all the important security tools and features that are bundled with Windows. Obviously, we will share everything you need to know about using them effectively.

    In this first lesson, we will talk about password security and the different ways of logging into Windows and how secure they are. In the following lesson, we will explain where Windows stores all the user names and passwords you enter while working in this operating system, how safe they are, and how to manage this data. Moving on in the series, we will talk about User Account Control, its role in improving the security of your system, and how to use Windows Defender in order to protect your system from malware. Then, we will talk about the Windows Firewall, how to use it in order to manage the apps that get access to the network and the Internet, and how to create your own filtering rules. After that, we will discuss the SmartScreen Filter – a security feature that gets more and more attention from Microsoft and is now widely used in its Windows 8.x operating systems. Moving on, we will discuss ways to keep your software and apps up-to-date, why this is important, and which tools you can use to automate this process as much as possible. Last but not least, we will discuss the Action Center and its role in keeping you informed about what’s going on with your system, and share several tips and tricks about how to stay safe when using your computer and the Internet.

    Let’s get started by discussing everyone’s favorite subject: passwords.

    The Types of Passwords Found in Windows

    In Windows 7, you have only local user accounts, which may or may not have a password. For example, you can easily set a blank password for any user account, even if that one is an administrator. The only exception to this rule is business networks, where domain policies force all user accounts to use a non-blank password. In Windows 8.x, you have both local accounts and Microsoft accounts. If you would like to learn more about them, don’t hesitate to read the lesson on User Accounts, Groups, Permissions & Their Role in Sharing, in our Windows Networking series. Microsoft accounts are obliged to use a non-blank password due to the fact that a Microsoft account gives you access to Microsoft services. Using a blank password would mean exposing yourself to lots of problems. Local accounts in Windows 8.1, however, can use a blank password.

    On top of traditional passwords, any user account can create and use a 4-digit PIN or a picture password. These concepts were introduced by Microsoft to speed up the sign-in process for the Windows 8.x operating system. However, they do not replace the use of a traditional password and can be used only in conjunction with a traditional user account password.

    Another type of password that you encounter in Windows operating systems is the Homegroup password. In a typical home network, users can use the Homegroup to easily share resources. A Homegroup can be joined by a Windows device only by using the Homegroup password. If you would like to learn more about the Homegroup and how to use it for network sharing, don’t hesitate to read our Windows Networking series.
    What to Keep in Mind When Creating Passwords, PINs and Picture Passwords

    When creating passwords, a PIN, or a picture password for your user account, we would like you to keep in mind the following recommendations:

    - Do not use blank passwords, even on the desktop computers in your home. You never know who may gain unwanted access to them. Also, malware can run more easily as administrator because you do not have a password. Trading your security for convenience when logging in is never a good idea.
    - When creating a password, make it at least eight characters long. Make sure that it includes a random mix of upper and lowercase letters, numbers, and symbols. Ideally, it should not be related in any way to your name, username, or company name.
    - Make sure that your passwords do not include complete words from any dictionary. Dictionaries are the first thing crackers use to hack passwords.
    - Do not use the same password for more than one account. All of your passwords should be unique and you should use a system like LastPass, KeePass, Roboform or something similar to keep track of them.
    - When creating a PIN, use four different digits to make things slightly harder to crack.
    - When creating a picture password, pick a photo that has at least 10 “points of interest”. Points of interest are areas that serve as a landmark for your gestures. Use a random mixture of gesture types and sequence, and make sure that you do not repeat the same gesture twice. Be aware that smudges on the screen could potentially reveal your gestures to others.

    The Security of Your Password vs. the PIN and the Picture Password

    Any kind of password can be cracked with enough effort and the appropriate tools. There is no such thing as a completely secure password. However, passwords created using only a few security principles are much harder to crack than others. If you respect the recommendations shared in the previous section of this lesson, you will end up having reasonably secure passwords.

    Out of all the login methods in Windows 8.x, the PIN is the easiest to brute force because PINs are restricted to four digits and there are only 10,000 possible unique combinations available. The picture password is more secure than the PIN because it provides many more opportunities for creating unique combinations of gestures. Microsoft has compared the two login options from a security perspective in this post: Signing in with a picture password. In order to discourage brute force attacks against picture passwords and PINs, Windows defaults to your traditional text password after five failed attempts. The PIN and the picture password function only as alternative login methods to Windows 8.x. Therefore, if someone cracks them, he or she doesn’t have access to your user account password. However, that person can use all the apps installed on your Windows 8.x device, access your files, data, and so on.

    How to Create a PIN in Windows 8.x

    If you log in to a Windows 8.x device with a user account that has a non-blank password, then you can create a 4-digit PIN for it, to use as a complementary login method. In order to create one, you need to go to “PC Settings”. If you don’t know how, then press Windows + C on your keyboard or flick from the right edge of the screen on a touch-enabled device, then press “Settings”. The Settings charm is now open. Click or tap the link that says “Change PC settings” at the bottom of the charm. In PC settings, go to Accounts and then to “Sign-in options”.
    Here you will find all the necessary options for changing your existing password and creating a PIN or a picture password. To create a PIN, press the “Add” button in the PIN section. The “Create a PIN” wizard is started and you are asked to enter the password of your user account. Type it and press “OK”. Now you are asked to enter a 4-digit PIN in the “Enter PIN” and “Confirm PIN” fields. The PIN has been created and you can now use it to log in to Windows.

    How to Create a Picture Password in Windows 8.x

    If you log in to a Windows 8.x device with a user account that has a non-blank password, then you can also create a picture password and use it as a complementary login method. In order to create one, you need to go to “PC settings”. In PC settings, go to Accounts and then to “Sign-in options”. Here you will find all the necessary options for changing your existing password and creating a PIN or a picture password. To create a picture password, press the “Add” button in the “Picture password” section. The “Create a picture password” wizard is started and you are asked to enter the password of your user account.

    You are shown a guide on how the picture password works. Take a few seconds to watch it and learn the gestures that can be used for your picture password. You will learn that you can create a combination of circles, straight lines, and taps. When ready, press “Choose picture”. Browse your Windows 8.x device, select the picture you want to use for your password, and press “Open”. Now you can drag the picture to position it the way you want. When you like how the picture is positioned, press “Use this picture” on the left. If you are not happy with the picture, press “Choose new picture” and select a new one, as shown during the previous step.

    After you have confirmed that you want to use this picture, you are asked to set up the gestures for your picture password. Draw three gestures on the picture, any combination you wish. Please remember that you can use only three gestures: circles, straight lines, and taps. Once you have drawn those three gestures, you are asked to confirm. Draw the same gestures one more time. If everything goes well, you are informed that you have created your picture password and that you can use it the next time you sign in to Windows. If you don’t confirm the gestures correctly, you will be asked to try again, until you draw the same gestures twice. To close the picture password wizard, press “Finish”.

    Where Does Windows Store Your Passwords? Are They Safe?

    All the passwords that you enter in Windows and save for future use are stored in the Credential Manager. This tool is a vault with the usernames and passwords that you use to log on to your computer, to other computers on the network, to apps from the Windows Store, or to websites using Internet Explorer. By storing these credentials, Windows can automatically log you in the next time you access the same app, network share, or website. Everything that is stored in the Credential Manager is encrypted for your protection.

    Read the article

  • Oracle Database Security: Protecting the Oracle IRM Schema

    - by Simon Thorpe
    Acquiring the Information Rights Management technology in 2006 was part of Oracle's strategic security vision, and IRM complements nicely the overall Oracle set of security solutions. A year ago I spoke about how Oracle has solutions that can help companies protect information throughout its entire life cycle. With our acquisition of Sun this set of solutions has solidified and has even extended down to the operating system and hardware level. Oracle can now offer customers technology that protects their data from the disk, through the database, to documents on the desktop!

    With the recent release of Oracle IRM 11g I was tasked to configure demonstration and evaluation environments, and I thought it would make a nice story to leverage some of the security features in the latest release of the Oracle Database. After building these environments I thought I would put together a simple video demonstrating how Database Advanced Security and Information Rights Management combined can provide a very secure platform for protecting your information. Have a look at the following, which highlights these database security options:

    - Transparent Data Encryption protecting the communication from the Oracle IRM server to the database server. Encryption techniques provide confidentiality and integrity of the data passing to and from the IRM service on the back end.
    - Transparent Data Encryption protecting the Oracle IRM database schema. Encryption is used to provide confidentiality of the IRM data whilst it resides at rest in the database table space.
    - Database Vault is used to ensure only the Oracle IRM service has access to query and update the information that resides in the database. This is an excellent method of ensuring that database administrators cannot look at or make changes to the Oracle IRM database whilst retaining their ability to administer the database. The last thing you want after deploying an IRM solution is for a curious or unhappy DBA to run a query that grants them rights to your company financial data or documents pertaining to a merger or acquisition.

    Read the article

  • xvidcap: Error accessing sound input from /dev/dsp

    - by stivlo
    I'm running Ubuntu 11.10 and I'm trying xvidcap to record a screencast with audio from the microphone; however, it can't record any sound:

        $ xvidcap --file appo.avi --cap_geometry 700x500-0+0
        Error accessing sound input from /dev/dsp
        Sound disabled!

    Sure enough, /dev/dsp doesn't even exist:

        $ sudo ls -lh /dev/dsp
        ls: cannot access /dev/dsp: No such file or directory

    I found a blog post about fixing xvidcap sound input; however, if I try the suggestion I get:

        $ sudo modprobe snd-pcm-oss
        FATAL: Module snd_pcm_oss not found.

    So the question is: how can I create /dev/dsp? The problem behind the problem is: how can I record sound from the microphone with xvidcap? So workarounds are welcome too.

    UPDATE: I've followed the suggestion of James, and something has improved. The error accessing /dev/dsp is gone; however, now I get:

        [oss @ 0x8e0c120] Estimating duration from bitrate, this may be inaccurate
        xtoffmpeg.c add_audio_stream(): Can't initialize fifo for audio recording

    Now when I record, xvidcap appears in the recording tab of pavucontrol and I can choose the audio stream from Internal Audio Analog Stereo or Monitor of Internal Audio Analog Stereo. I tried both just in case, but the video is still mute.

    UPDATE 2: I found that "Monitor of" is the one to record application sounds, while for the microphone I should choose "Internal Audio Analog Stereo". To rule out other problems, such as with the microphone, I tried with gnome-sound-recorder and it works. Actually I jumped on my chair, since the volume was too high! :-)

    Read the article

  • Page Speed and its effect on your business

    - by ihaynes
    Originally posted on: http://geekswithblogs.net/ihaynes/archive/2014/05/29/page-speed-and-itrsquos-affect-on-your-business.aspx

    Page speed was an important issue 10 years ago, when we all had slow modems, but became less so with the advent of fast broadband connections that even seemed to make Ajax unnecessary. Then along came the mobile internet and we’re back to a world where page speed and asset optimisation are critical again. If you doubt this, an article on SitePoint discussing ’Page Speed and Business Metrics’ may change your mind: http://www.sitepoint.com/page-speed-business-metrics/

    Here are some of the figures it quotes:

    - Walmart – saw a 2% increase in conversions for every second of improvement in page load time. Put another way, accumulated growth of revenues went up 1% for every 100 milliseconds of load time improvement.
    - Yahoo – for every 400 milliseconds of improvement, the site traffic increased by 9%.

    The bottom line is that if people have to wait more than a few seconds for a page to fully render, particularly on a mobile device, they’ll probably go elsewhere. Ignore this at your peril.

    For two previous posts on the subject see: ’Page Weight: 10 easy fixes’ and ’Xat.com Image Optimiser – Useful for RWD/Mobile’.

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be more fun. I did write about Transaction Logs and recovery over my blogs, and the concept of simplifying the basics is a challenge. In the real world we always see checks and queues for a process – say railway reservations, banks, customer support, etc. – there is a process of lines and queues to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database does implement this using checks like locking and blocking mechanisms, and they implement the standards in a way that facilitates higher concurrency. In this post, let us talk about the topic of Concurrency and the various aspects that one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    - Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time.
    - The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.
    - Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data. Concurrency is also affected when multiple processes are attempting to change the same data simultaneously.
    - There are two approaches to managing concurrent data access: the Optimistic Concurrency Model and the Pessimistic Concurrency Model.

    Concurrency Models

    Pessimistic Concurrency

    - Default behavior: acquire locks to block access to data that another process is using.
    - Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (assumes conflicts will occur).
    - Avoids conflicts by acquiring a lock on data being read so no other processes can modify that data. Also acquires locks on data being modified so no other processes can access the data for either reading or modifying.
    - Readers block writers; writers block readers and writers.

    Optimistic Concurrency

    - Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying.
    - The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs.
    - Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to that data.
    - Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows.
    - Readers do not block writers and writers do not block readers, but writers can and will block writers.

    Transaction Processing

    A transaction is the basic unit of work in SQL Server.
    A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: the start is marked with a BEGIN TRAN and the end is marked by a COMMIT TRAN or ROLLBACK TRAN). Transactions must exhibit all the ACID properties of a transaction.

    ACID Properties

    Transaction processing must guarantee the consistency and recoverability of SQL Server databases. It ensures all transactions are performed as a single unit of work regardless of hardware or system failure: A – Atomicity, C – Consistency, I – Isolation, D – Durability.

    - Atomicity: Each transaction is treated as all or nothing – it either commits or aborts.
    - Consistency: Ensures that a transaction won’t allow the system to arrive at an incorrect logical state – the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: Separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: After a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on data.

    Transaction Dependencies

    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors (known as dependency problems or consistency problems):

    - Lost Updates: Occur when two processes read the same data, both manipulate the data, changing its value, and then both try to update the original data to the new value. The second process might overwrite the first update completely.
    - Dirty Reads: Occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable Reads: A read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing.
    - Phantoms: Occur when membership in a set changes. This occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels

    SQL Server supports 5 isolation levels that control the behavior of read operations.

    Read Uncommitted

    - All behaviors except for lost updates are possible.
    - Implemented by allowing the read operations to not take any locks; because of this, they won’t be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed.
    - When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you have read a row before it is updated and then an update moves the row to a higher page number than your scan encounters later.
    - Performing an allocation order scan under Read Uncommitted can also cause you to miss a row completely – this can happen when a row on a high page number that hasn’t been read yet is updated and moved to a lower page number that has already been read.

    Read Committed

    - Two varieties of read committed isolation: optimistic and pessimistic (default).
    - Ensures that a read never reads data that another application hasn’t committed.
    - If another transaction is updating data and has exclusive locks on data, your transaction will have to wait for the locks to be released. Your transaction must put share locks on data that are visited, which means that data might be unavailable for others to use. A share lock doesn’t prevent others from reading but prevents them from updating.
    - Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. Data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.

    Repeatable Read

    - This is a pessimistic isolation level.
    - Ensures that if a transaction revisits data or a query is reissued, the data doesn’t change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user’s transaction, because no changes can be made by other transactions. However, this does allow phantom rows to appear.
    - Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot

    - Snapshot Isolation (SI) is an optimistic isolation level.
    - Allows processes to read older versions of committed data if the current version is locked.
    - The difference between snapshot and read committed has to do with how old the older versions have to be.
    - It’s possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable

    - This is the strongest of the pessimistic isolation levels.
    - Adds to the repeatable read isolation level by ensuring that if a query is reissued, rows were not added in the interim, i.e., phantoms do not appear.
    - Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read – all shared locks in a transaction must be held until the transaction completes.
    - In addition, the serializable isolation level requires that you lock not only data that has been read but also data that doesn’t exist. For example, if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values. If there is no index on the column, serializable isolation requires a table lock.
    - Gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking, and deadlocks, because they are the fundamental blocks that make concurrency possible. Now if you are with me – let us continue learning about SQL Server Locking Basics.
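    (As a concrete illustration of the dirty read described above, here is a hedged T-SQL sketch, not from the original post; dbo.Accounts and MyDatabase are hypothetical names used purely for illustration. Run the two sessions in separate query windows.)

        -- Session 1: change data inside an open transaction, but do not commit yet.
        BEGIN TRAN;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

        -- Session 2: under READ UNCOMMITTED the in-flight value is visible (a dirty read).
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

        -- Session 1: roll back; session 2 has read a value that never logically existed.
        ROLLBACK TRAN;

        -- The optimistic (row-versioning) levels are enabled per database:
        ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;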
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

    Read the article

  • Webcast: Best Practices for Speeding Virtual Infrastructure Deployment with Oracle VM

    - by Honglin Su
    We announced the Oracle VM Blade Cluster Reference Configuration last month; see the blog. The new Oracle VM blade cluster reference configuration can help reduce the time to deploy virtual infrastructure by up to 98 percent compared to multi-vendor configurations. Customers and partners have shown lots of interest. Join Oracle's experts to learn the best practices for speeding virtual infrastructure deployment with Oracle VM; register for the webcast (1/25/2011) here. Virtualization has already been widely accepted as a means to increase IT flexibility and help IT services align better with changing business needs. The flexibility of a virtualized IT infrastructure enables new applications to be rapidly deployed, capacity to be easily scaled, and IT resources to be quickly redirected. The net result is that IT can bring greater value to the business, making virtualization an obvious win from a business perspective. However, building a virtualized infrastructure typically requires assembling and integrating multiple components (e.g., servers, storage, network, virtualization, and operating systems). This infrastructure must be deployed and tested before applications can even be installed. It can take weeks or months to plan, architect, configure, troubleshoot, and deploy a virtualized infrastructure. The process is not only time-consuming but also error-prone, making it hard to achieve a timely and profitable return on investment. Oracle is the only vendor that can offer a fully integrated virtualization infrastructure with all of the necessary hardware and software components. The Oracle VM blade cluster reference configuration is a single-vendor solution that addresses every layer of the virtualization stack with Oracle hardware and software components (see the figure in the original post). It enables quick and easy deployment of the virtualized infrastructure using components that have been tested together and are all supported together by Oracle. To learn more about Oracle's virtualization offerings, visit http://oracle.com/virtualization.

    Read the article

  • Slides and Samples for My TechEd / Microsoft BI Conference Talks

    - by plitwin
    I posted the slides and samples for the talks I delivered in New Orleans on June 8th at Microsoft TechEd and the Business Intelligence Conference. They can be downloaded from Paul Litwin's Conference Downloads. #1 Creating Report Subscriptions with SQL Server 2008 Reporting Services at 8 AM on Tuesday, Room 241. In this session, learn how to set up standard and data-driven subscriptions using Report Manager. We discuss creating file-share, email, and null subscriptions, and how to deal with potential issues with parameters and security. We also demonstrate a sophisticated Microsoft ASP.NET-based application that creates subscriptions by calling the SSRS Web Services API. #2 ASP.NET MVC for Web Forms Programmers at 3:15 PM Tuesday, Room 391. Are you comfortable creating ASP.NET Web Form applications but even a little curious about what all the fuss is about MVC and test-driven development? In this session, Web Form junkie Paul Litwin takes a critical look at the world of ASP.NET MVC, but not from any expert point of view. Instead, Paul shares his experience as a Web Form developer who decided to take a closer look at this radical new approach to ASP.NET development. Come hear what Paul learned and whether he plans to employ ASP.NET MVC in his future ASP.NET applications.

    Read the article

  • The C++ Standard Template Library as a BDB Database (part 1)

    - by Gregory Burd
    If you've used C++, you have undoubtedly used the Standard Template Library (STL). Designed for in-memory management of data and collections of data, it is a core aspect of most C++ programs. Berkeley DB is a database library with a variety of APIs designed to ease development; one of those APIs extends and makes use of the STL for persistent, transactional data storage. dbstl is an STL-standard-compatible API for Berkeley DB: you can make use of Berkeley DB via this API as if you were using C++ STL classes, and still make full use of Berkeley DB features. Being an STL library backed by a database, dbstl can provide some important and useful features that the C++ STL library can't. The following are a few typical use cases for the dbstl extensions to the C++ STL. When data exceeds available physical memory: Berkeley DB dbstl can vastly improve performance when managing a dataset that is larger than available memory. Performance suffers when the data can't reside in memory because the OS is forced to use virtual memory and swap pages of memory to disk; switching to BDB's dbstl improves performance while allowing you to keep using STL containers. When you need concurrent access to C++ STL containers: few existing C++ STL implementations support concurrent access (create/read/update/delete) within a container; at best you'll find support for accessing different containers of the same type concurrently. With the Berkeley DB dbstl implementation you can concurrently access your data from multiple threads or processes with confidence in the outcome. When your objects are your database: you want object persistence in your application, storing objects in a database and using them across different runs of your application without having to translate them to/from SQL. dbstl is capable of storing complicated objects, even those not located in a contiguous chunk of memory, directly to disk without any unnecessary overhead. These are a few reasons why you should consider using Berkeley DB's C++ STL support for your embedded database application. In the next few blog posts I'll show you a few examples of this approach; it's easy to use and easy to learn.
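    As a first taste of what those examples look like, here is a minimal sketch modeled on the dbstl documentation; the container and initialization calls (dbstl_startup, db_vector, ElementHolder, dbstl_exit) come from the dbstl API, but treat the exact headers and template parameters as assumptions to verify against your Berkeley DB release.

        #include <iostream>
        #include "dbstl_vector.h"

        int main()
        {
            // Initialize the dbstl library before using any containers.
            dbstl::dbstl_startup();

            // db_vector behaves like std::vector but is backed by a Berkeley DB
            // database; ElementHolder wraps primitive element types for storage.
            dbstl::db_vector<int, dbstl::ElementHolder<int> > vec;
            for (int i = 0; i < 10; i++)
                vec.push_back(i);

            // Standard STL-style iteration works against the database-backed
            // container.
            for (dbstl::db_vector<int, dbstl::ElementHolder<int> >::iterator
                     it = vec.begin(); it != vec.end(); ++it)
                std::cout << *it << std::endl;

            // Release dbstl resources.
            dbstl::dbstl_exit();
            return 0;
        }

    Pointing the container at a named database file and environment (via dbstl's open_env/open_db helpers) is what turns this from an in-memory toy into the persistent, transactional storage described above.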

    Read the article

  • Silverlight Cream for May 19, 2010 -- #865

    - by Dave Campbell
    In this Issue: Michael Washington, Mike Snow(-2-), Justin Angel(-2-), Jeremy Likness, and David Kelley. Shoutout: Erik Mork and crew have their latest up: Silverlight Week – Silverlight Android? From SilverlightCream.com: Simple Silverlight 4 Example Using oData and RX Extensions Michael Washington has a follow-on tutorial up on ViewModel, Rx, and lashed up to OData... good detailed tutorial with external links for more information. Silverlight Tip of the Day #21 – Animation Easing Mike Snow has a couple of new tips up -- this first one is about easing... great diagrams to help visualize and a cool demo application to boot. Silverlight Tip of the Day #22 – Data Validation Mike Snow's second tip (#22) is about validation and again he has a great demo app on the post. Windows Phone 7 - Emulator Automation Justin Angel has a WP7 post up about automating the emulator... and in the process, loading the emulator from something other than VS2010... lots of good information. TFS2010 WP7 Continuous Integration Justin Angel hinted at continuous integration for WP7 in the last post, and he pays off with this one... even without all the bits installed on the build server. Making the ScrollViewer Talk in Silverlight 4 Jeremy Likness tried to respond to a user query about knowing when a user scrolled to the bottom of a ScrollViewer... Jeremy resolved it by listening to the right property. MEF (Microsoft Extensibility Framework) made simple (ish) David Kelley is discussing MEF and using a real-world example while doing so. Good discussion and code available in his code browser app... check the comments. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Oracle CRM On Demand R17 and Pharma's Future

    - by charles.knapp
    By Denis Pombriant, Beagle Research, March 30 "Oracle announced Release 17 of CRM On-Demand today along with an updated vertical market version for the pharmaceutical industry. Seventeen is a lot of releases even for a SaaS company and Oracle should be proud of the milestone. The same is true of the emphasis on the pharmaceutical industry vertical. Oracle comes to the pharma CRM market with an assist from Siebel, the one time independent leader in CRM that Oracle bought a few years back. Before the acquisition Siebel and its pharma package had managed to corner about nineteen of the top twenty pharmaceutical companies. For a time in the last decade you could go from job to job as a pharma rep taking your Siebel skills with you and feel right at home. The writing on the wall now though is that pharmaceutical sales is transitioning to a SaaS model and Oracle is managing the transition for its customers. Oracle's done a good job of keeping up with changes in the industry and you have to admit that pharma sales is a different kettle of fish than almost anything else in CRM." For additional insights, read here.

    Read the article

  • Awesome Read: Buck Woody&rsquo;s post on the proper use of the Windows Azure VM Role

    - by Enrique Lima
    I have heard some service providers (or cloud providers), hosting companies, and the like complain about, criticize, or even call foul on Microsoft’s Azure VM Role.  The problem: none of them have gone to the effort of truly understanding (or perhaps wanting to know) what is going on there.  Many have jumped right into the “purist” definition of IaaS, or PaaS for that matter. OK, Buck’s post is a true gem (my opinion) in the sense that it draws parallels showing what the VM Role is and is not.  And it brings Hyper-V and SCVMM to the forefront, explaining what they are and what they offer within the IaaS that Microsoft provides. Here is an excerpt of the summary, but please go on over and read his post; it will clear up a lot if you are wondering when and how to use the Windows Azure VM Role. The excerpt: “Virtualizing servers alone has limitations of scale, availability and recovery. Microsoft’s offering in this area is Hyper-V and System Center, not the VM Role. The VM Role is still used for running Stateless code, just like the Web and Worker Roles, with the exception that it allows you more control over the environment of where that code runs.” The source and post:  Buck Woody - http://blogs.msdn.com/b/buckwoody/archive/2010/12/28/the-proper-use-of-the-vm-role-in-windows-azure.aspx?wa=wsignin1.0

    Read the article

  • Not Playing Nice Together

    - by David Douglass
    One of the things I’ve noticed is that two industry trends are not playing nice together, those trends being multi-core CPUs and massive hard drives.  It’s not a problem if you keep your cores busy with compute-intensive work, but for software developers the beauty of multi-core CPUs (along with gobs of RAM and a 64-bit OS) is virtualization.  But when you have only one hard drive (who needs another when it holds 2 TB of data?) you wind up with a serious hard drive bottleneck.  A solid state drive would definitely help, and might even be a complete solution, but the cost is ridiculous.  Two TB of solid state storage will set you back around $7,000!  A spinning 2 TB drive is only $150. I see a couple of solutions for this.  One is the mainframe concept of near and far storage: put the stuff that will be heavily accessed on a solid state drive and the rest on a spinning drive.  Another solution is multiple spinning drives.  Instead of a single 2 TB drive, get four 500 GB drives.  In total, the four 500 GB drives will cost about $100 more than the single 2 TB drive.  You’ll need to be smart about what drive you place things on so that the load is spread evenly.  Another option, for better performance, would be four 10,000 RPM 300 GB drives, but that would cost about $800 more than the single 2 TB drive and would deliver only 1.2 TB of space. All pricing based on Microcenter as of March 14, 2010.
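    Working the cost per gigabyte out of the prices quoted above makes the trade-off concrete:

        Single 2 TB drive:             $150 / 2,000 GB   = ~$0.075 per GB
        Four 500 GB drives:            $250 / 2,000 GB   = ~$0.125 per GB (same capacity, four spindles to spread the load)
        Four 10,000 RPM 300 GB drives: $950 / 1,200 GB   = ~$0.79 per GB (faster, but 40 percent less capacity)
        2 TB of solid state:           $7,000 / 2,000 GB = $3.50 per GB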

    Read the article

  • Android “open for embedded”? Must-read Ars Technica article

    - by terrencebarr
    A few days ago ars technica published an article, “Google’s iron grip on Android: Controlling open source by any means necessary”. If you are considering Android for embedded, this article is a must-read to understand the severe ramifications of Google’s tight (and tightening) control over the Android technology and ecosystem. Some quotes from the ars technica article: “Android is open – except for all the good parts” “Android actually falls into two categories: the open parts from the Android Open Source Project (AOSP) … and the closed source parts, which are all the Google-branded apps” “Android open source apps … turn into abandonware by moving all continuing development to a closed source model.” “Joining the OHA requires a company to sign its life away and promise to not build a device that runs a competing Android fork.” “Google Play Services is a closed source app owned by Google … to turn the “Android App Ecosystem” into the “Google Play Ecosystem”” “You’re allowed to contribute to Android and allowed to use it for little hobbies, but in nearly every area, the deck is stacked against anyone trying to use Android without Google’s blessing” Compare this with a recent Wired article, “Oracle Makes Java More Relevant Than Ever”: “Oracle has actually opened up Java even more — getting rid of some of the closed-door machinations that used to be part of the Java standards-making process. Java has been raked over the coals for security problems over the past few years, but Oracle has kept regular updates coming. And it’s working on a major upgrade to Java, due early next year.” Cheers, – Terrence Filed under: Embedded, Mobile & Embedded Tagged: Android, embedded, Java Embedded, Open Source

    Read the article
