Search Results

Search found 562 results on 23 pages for 'responsibility'.

Page 17/23 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • PHP OOP: Avoid Singleton/Static Methods in Domain Model Pattern

    - by sunwukung
    I understand the importance of Dependency Injection and its role in unit testing, which is why the following issue is giving me pause: one area where I struggle not to use the Singleton is the Identity Map/Unit of Work pattern (which keeps tabs on Domain Object state).

        // Not actual code, but it should demonstrate the point
        class Monitor { // singleton construction omitted for brevity
            static $members = array(); // keeps record of all objects
            static $dirty = array();   // keeps record of all modified objects
            static $clean = array();   // keeps record of all clean objects
        }
        class Mapper { // queries database, maps values to object fields
            public function find($id) {
                if (isset(Monitor::$members[$id])) {
                    return Monitor::$members[$id];
                }
                $values = $this->selectStmt($id);
                // field mapping process omitted for brevity
                $Object = new Object($values);
                Monitor::$members[$id] = $Object;
                return $Object;
            }
        }

        $User = $UserMapper->find(1);  // domain object is registered in Id Map
        $User->changePropertyX();      // object is marked "dirty" in UoW
        // at this point, I can save by passing the Domain Object back to the Mapper
        $UserMapper->save($User);      // object is marked clean in UoW
        // but a nicer API would be something like this
        $User->save();
        // but if I want to do this - it has to make a call to the mapper/db somehow
        $User->getBlogPosts();
        // or else have to generate specific collection/object graphing methods in the mapper
        $UserPosts = $UserMapper->getBlogPosts();
        $User->setPosts($UserPosts);

    Any advice on how you might handle this situation? I would be loath to pass/generate instances of the mapper/database access into the Domain Object itself to satisfy DI - at the same time, avoiding that results in lots of calls within the Domain Object to external static methods. Although I guess if I want "save" to be part of its behaviour, then a facility to do so is required in its construction. Perhaps it's a problem with responsibility: the Domain Object shouldn't be burdened with saving. It's just quite a neat feature from the Active Record pattern - it would be nice to implement it in some way.
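
    One common way to avoid the singleton while keeping the Domain Object free of persistence concerns is to inject the Unit of Work into the mapper as an ordinary collaborator. A minimal sketch follows - in Java rather than PHP, since the idea is language-neutral, and with purely hypothetical names:

        import java.util.HashMap;
        import java.util.Map;

        // The Unit of Work is a plain object handed to whoever needs it,
        // so tests can substitute a fake instead of touching global state.
        class UnitOfWork {
            private final Map<Integer, Object> identityMap = new HashMap<>();
            private final Map<Integer, Object> dirty = new HashMap<>();

            Object find(int id) { return identityMap.get(id); }
            void register(int id, Object entity) { identityMap.put(id, entity); }
            void markDirty(int id, Object entity) { dirty.put(id, entity); }
        }

        class UserMapper {
            private final UnitOfWork unitOfWork; // constructor-injected, not a singleton

            UserMapper(UnitOfWork unitOfWork) { this.unitOfWork = unitOfWork; }

            Object find(int id) {
                Object cached = unitOfWork.find(id);
                if (cached != null) { return cached; } // identity-map hit
                Object user = loadFromDatabase(id);    // field mapping omitted
                unitOfWork.register(id, user);
                return user;
            }

            private Object loadFromDatabase(int id) { return new Object(); }
        }

    The domain object itself never sees the mapper or the Unit of Work; callers that want an Active Record style save() would wrap the two in a small repository or service instead.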

    Read the article

  • Returning pointers in a thread-safe way.

    - by Roddy
    Assume I have a thread-safe collection of Things (call it a ThingList), and I want to add the following function. Thing * ThingList::findByName(string name) { return &item[name]; // or something similar.. } But by doing this, I've delegated the responsibility for thread safety to the calling code, which would have to do something like this: try { list.lock(); // NEEDED FOR THREAD SAFETY Thing *foo = list.findByName("wibble"); foo->Bar = 123; list.unlock(); } catch (...) { list.unlock(); throw; } Obviously a RAII lock/unlock object would simplify/remove the try/catch/unlocks, but it's still easy for the caller to forget. There are a few alternatives I've looked at: Return Thing by value, instead of a pointer - fine unless you need to modify the Thing Add function ThingList::setItemBar(string name, int value) - fine, but these tend to proliferate Return a pointerlike object which locks the list on creation and unlocks it again on destruction. Not sure if this is good/bad practice... What's the right approach to dealing with this?
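
    The third option - an object that locks on creation and unlocks on destruction - is essentially a locking proxy, and the same idea can also be expressed as a scoped-access method so the caller can never forget the lock. A rough sketch (in Java with hypothetical names; in C++ the equivalent would hand out an RAII guard):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Consumer;

        class Thing {
            int bar;
        }

        class ThingList {
            private final Map<String, Thing> items = new HashMap<>();

            // Runs the caller's code while the collection holds its own lock,
            // so locking cannot be forgotten and the lock is always released.
            void withItem(String name, Consumer<Thing> action) {
                synchronized (items) {
                    Thing found = items.get(name); // may be null if absent
                    if (found != null) {
                        action.accept(found);
                    }
                }
            }
        }

        // Usage: new ThingList().withItem("wibble", foo -> foo.bar = 123);

    The caller never receives a raw pointer or reference that outlives the lock, which is the root of the original problem.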

    Read the article

  • API-based solutions for sending payments to people without bank accounts

    - by Tauren
    I'm looking for inexpensive ways to send payments to hundreds or thousands of individual contractors, even if they do not have a bank account. Currently I only need to support payment in the USA, but may eventually be international. Here's the scenario: I offer a service that allows an organization or manager-type person to coordinate contractors for very short term jobs. These jobs are typically only an hour or two in length. A contractor may get only one job over an entire month, several jobs spread out over a month, multiple jobs on a single day, or any other combination. Thus, a single contractor could earn as little as one job's payment up to potentially payment for dozens. Payment for a month could be as little as $10 up to $1000's. Right now, the system provides payroll reports to the manager and it is the manager's responsibility to produce checks, stuff envelopes, and send mail via the US postal service. I'd like to remove this burden from the manager and have all the payments taken care of for them automatically by the system. I'm not sure where to start or what the best options would be. I'm starting to look into the following solutions, but don't know specifics yet and would like some advice before pursuing them. I'd also like to hear about other ideas or suggestions. PayPal (Send Money, Adaptive Payments, x.com, other???) Amazon (Flexible Payments System?) Fund some sort of pre-paid debit card? Web service with API that mails checks for you? Direct deposit via a bank API (for users with bank accounts)? The problem is that many of these contractors may not be able to obtain bank accounts or credit cards within the USA. I don't mind doing a hybrid of solutions, but are there any that would work well with this issue? I want the solution to be easy to use for the contractors, meaning that they can get the money easily (via check in the mail, debit card ATM withdrawal, etc.)

    Read the article

  • Can somebody explain this remark in the MSDN CreateMutex() documentation about the bInitialOwner flag?

    - by Tom Williams
    The MSDN CreateMutex() documentation (http://msdn.microsoft.com/en-us/library/ms682411%28VS.85%29.aspx) contains the following remark near the end: Two or more processes can call CreateMutex to create the same named mutex. The first process actually creates the mutex, and subsequent processes with sufficient access rights simply open a handle to the existing mutex. This enables multiple processes to get handles of the same mutex, while relieving the user of the responsibility of ensuring that the creating process is started first. When using this technique, you should set the bInitialOwner flag to FALSE; otherwise, it can be difficult to be certain which process has initial ownership. Can somebody explain the problem with using bInitialOwner = TRUE? Earlier in the same documentation it suggests a call to GetLastError() will allow you to determine whether a call to CreateMutex() created the mutex or just returned a new handle to an existing mutex: Return Value: If the function succeeds, the return value is a handle to the newly created mutex object. If the function fails, the return value is NULL. To get extended error information, call GetLastError. If the mutex is a named mutex and the object existed before this function call, the return value is a handle to the existing object, GetLastError returns ERROR_ALREADY_EXISTS, bInitialOwner is ignored, and the calling thread is not granted ownership. However, if the caller has limited access rights, the function will fail with ERROR_ACCESS_DENIED and the caller should use the OpenMutex function.

    Read the article

  • What makes you trust that a piece of open source software is not malicious?

    - by Daniel DiPaolo
    We developers are in a unique position when it comes to the ability to not only be skeptical about the capabilities provided by open source software, but to actively analyze the code since it is freely available. In fact, one may even argue that open source software developers have a social responsibility to do so to contribute to the community. But at what point do you as a developer say, "I better take a look at what this is doing before I trust using it" for any given thing? Is it a matter of trusting code with your personal information? Does it depend on the source you're getting it from? What spurred this question on was a post on Hacker News to a javascript bookmarklet that supposedly tells you how "exposed" your information on Facebook is as well as recommending some fixes. I thought for a second "I'd rather not start blindly running this code over all my (fairly locked down) Facebook information so let me check it out". The bookmarklet is simple enough, but it calls another javascript function which at the time (but not anymore) was highly compressed and undecipherable. That's when I said "nope, not gonna do it". So even though I could have verified the original uncompressed javascript from the Github site and even saved a local copy to verify and then run without hitting their server, I wasn't going to. It's several thousand lines and I'm not a total javascript guru to begin with. Yet, folks are using it anyway. Even (supposedly) bright developers. What makes them trust the script? Did they all scrutinize it line by line? Do they know the guy personally and trust him not to do anything bad? Do they just take his word? What makes you trust that a piece of open source software is not malicious?

    Read the article

  • Inversion of control domain objects construction problem

    - by Andrey
    Hello! As I understand it, an IoC container is helpful in the creation of application-level objects like services and factories, but domain-level objects should be created manually. Spring's manual tells us: "Typically one does not configure fine-grained domain objects in the container, because it is usually the responsibility of DAOs and business logic to create/load domain objects." Well, but what if my domain "fine-grained" object depends on some application-level object? For example, I have a UserViewer(User user, UserConstants constants) class. Here user is a domain object which cannot be injected, but UserViewer also needs UserConstants, which is a high-level object injected by the IoC container. I want to inject UserConstants from the IoC container, but I also need the transient runtime parameter User here. What is wrong with the design? Thanks in advance! UPDATE: It seems I was not precise enough with my question. What I really need is an example of how to do this: create an instance of the class UserViewer(User user, UserService service), where user is passed as a parameter and service is injected from the IoC container. If I inject UserViewer viewer, then how do I pass user to it? If I create UserViewer viewer manually, then how do I pass service to it?
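
    The usual answer to this mix of one injected dependency and one runtime parameter is to let the container inject a small factory instead of the viewer itself. A minimal sketch with hypothetical names (the same shape works with a hand-written factory bean or Spring's lookup-method injection):

        class User { /* transient domain object, not managed by the container */ }
        class UserService { /* application-level dependency wired by the container */ }

        class UserViewer {
            private final User user;
            private final UserService service;

            UserViewer(User user, UserService service) {
                this.user = user;
                this.service = service;
            }
        }

        class UserViewerFactory {
            private final UserService service; // injected once by the IoC container

            UserViewerFactory(UserService service) { this.service = service; }

            // The runtime User arrives as a plain argument from the caller.
            UserViewer create(User user) {
                return new UserViewer(user, service);
            }
        }

    Business logic or a DAO loads the User, asks the injected factory for a viewer, and neither the domain object nor the viewer ever has to reach into the container.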

    Read the article

  • Proper Memory Management for Objective-C Method

    - by Justin
    Hi, I'm programming an iPhone app and I had a question about memory management in one of my methods. I'm still a little new to managing memory manually, so I'm sorry if this question seems elementary. Below is a method designed to allow a number pad to place buttons in a label based on their tag, this way I don't need to make a method for each button. The method works fine, I'm just wondering if I'm responsible for releasing any of the variables I make in the function. The application crashes if I try to release any of the variables, so I'm a little confused about my responsibility regarding memory. Here's the method: FYI the variable firstValue is my label, it's the only variable not declared in the method. -(IBAction)inputNumbersFromButtons:(id)sender { UIButton *placeHolderButton = [[UIButton alloc] init]; placeHolderButton = sender; NSString *placeHolderString = [[NSString alloc] init]; placeHolderString = [placeHolderString stringByAppendingString:firstValue.text]; NSString *addThisNumber = [[NSString alloc] init]; int i = placeHolderButton.tag; addThisNumber = [NSString stringWithFormat:@"%i", i]; NSString *newLabelText = [[NSString alloc] init]; newLabelText = [placeHolderString stringByAppendingString:addThisNumber]; [firstValue setText:newLabelText]; //[placeHolderButton release]; //[placeHolderString release]; //[addThisNumber release]; //[newLabelText release]; } The application works fine with those last four lines commented out, but it seems to me like I should be releasing these variables here. If I'm wrong about that I'd welcome a quick explanation about when it's necessary to release variables declared in functions and when it's not. Thanks.

    Read the article

  • Expose webservice directly to webclients or keep a thin server-side script layer in between?

    - by max
    Hi, I'm developing a REST webservice (Java, Jersey). The people I'm doing this for want to directly access the webservice via Javascript. Some instinct tells me this is not a good idea, but I cannot really explain that instinct. My natural approach would have been to have the webservice do the real logic and database access, but also have some (relatively thin) server-side script layer (e.g. in PHP). Clients would talk to the PHP layer which in turn would talk to the webservice. (The webservice would be pretty local to the apache/PHP server and implicitly trust calls from the script layer. The script layer would take care of session management.) (Btw, I am not talking about just hiding the webservice behind an Apache which simply redirects calls.) But as I find myself at a lack of words/arguments to explain my instinct, I wonder whether my instinct is right - note that while I have been developing all kinds of software in all kinds of languages and frameworks for like 17 years, this is the first time I develop a webservice. So my question is basically: what are your opinions? Are there any standard setups? Is my instinct totally wrong? Or partially? ;P Many thanks, Max PS: I might add a few bits of information about the planned usage of the whole application: will be accessed by different kinds of users, partly general public, partly privileged thus, all major OS/browser combinations can be expected as clients however, writing the client is not my responsibility will potentially have very high load/traffic logic of webservice will later be massively expanded for another product which is basically a superset of the functionality of the current project there is a significant likelihood that at some point an API should be exposed which can be used by 3rd party developers - obviously, with some restrictions at some point, the public view of the product should become accessible via smartphones, too (in other words, maybe a customized version of the site to adapt to the smaller display and different input methods)

    Read the article

  • Windows Workflow Foundation 4.0 Designer Rehosting with Custom Activities

    - by Robert
    I have several WF 4.0 workflows that I have created for an application my company is developing. Some of these workflows are simple, and some are very complex (i.e. many steps, several different types of activities, custom activities). For many of these workflows, I have created several custom code activities to support some internal process types. The workflows work great and we have had very few problems when it comes to maintaining them within VS 2010. We now want to move that responsibility off to our business users, so I have created a WPF application to rehost the WF designer (according to the MS samples). My problem is that when I open one of the workflows that contains custom code activities, those activities are represented as red boxes with the error message of "Activity could not be loaded because of errors in XAML." I have done research and have found several posts that mention that this is usually a problem with namespacing and referencing. The rehosted designer is in a namespace similar to this: Company.Application.Workflow.Designer And the custom code activities are contained within a separate custom workflow library, which I have included as a reference in the designer project. The library's namespace is similar to this: Company.Application.Workflow.Data.Activities As I have mentioned, the library is set as a reference in the designer's project, and I see it being copied to the output when I build the project. I have also included the reference in the XAML of the main designed application. What am I missing?

    Read the article

  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy:
    - the 'application' team developed the application-specific logic (often requiring a deep insight into specific industry problems)
    - the 'generic' team developed the parts that were common/generic for all applications (user interface related stuff, database access, low-level Windows stuff, ...)
    Over the years the boundaries between the teams have become fuzzy:
    - the 'application' teams often write application-specific functionality with a 'generic' part, so instead of asking the 'generic' team to write that part for them, they write it themselves to speed up the development, then donate it to the 'generic' team
    - the 'generic' team's focus seems to be more 'maintenance oriented'. All of the 'very generic' code has already been written, so no new development is needed in it; instead they continuously have to support all the functionality donated by the application teams.
    All this seems to indicate that it's not a good idea anymore to have this split in teams. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good quality software), or into a 'software deployment' team (defining how software should be deployed, installed, ...). How do you split up the work in different teams if you have different applications?
    - everybody can write generic code and donates it to a central 'generic' team?
    - everybody can write generic code, but nobody 'manages' this generic code (everybody is the owner)?
    - generic code is written by a 'generic' team only, and the applications have to wait until the 'generic' team delivers the generic part (via a library, via a DLL)?
    - there is no overlap in code between the different applications?
    - some other way?
    Notice that the advantage of having the mix (allowing everybody to write everywhere in the code) is that code is written in a more flexible way, and it's easier to debug since you can easily step into the 'generic' code in the debugger. But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if there is no clear team that manages it anymore. What is your vision?

    Read the article

  • Can I pass a non-generic type where a generic type is expected?

    - by Water Cooler v2
    I want to define a set of classes that collect and persist data. I want to call them either on an on-demand basis, or in a chain-of-responsibility fashion, as the caller pleases. To support the chaining, I have declared my interface like so:

        interface IDataManager<T, K>
        {
            T GetData(K args);
            void WriteData(Stream stream);
            void WriteData(T data, Stream stream);
            IDataCollectionPolicy Policy { get; set; }
            IDataManager<T, K> NextDataManager { get; set; }
        }

    But the T's and K's for each concrete type will be different. If I declare it like this: IDataManager<T, K> NextDataManager; I assume that the calling code will only be able to chain types that have the same T's and K's. Is there a way I can have it chain any type of IDataManager? One thing that occurs to me is to have IDataManager<T, K> inherit from a non-generic IDataManager, like so:

        interface IDataManager { }

        interface IDataManager<T, K> : IDataManager
        {
            T GetData(K args);
            void WriteData(Stream stream);
            void WriteData(T data, Stream stream);
            IDataCollectionPolicy Policy { get; set; }
            IDataManager NextDataManager { get; set; }
        }

    Is this going to work?
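
    For what it's worth, the second shape does work: the chain link is typed with the non-generic base, so managers with different T's and K's can be linked, at the cost that walking the chain only sees the non-generic surface. A quick illustration of the same idea rendered in Java (hypothetical names; the generics behave the same way for this purpose):

        interface DataManager {
            DataManager getNext();
            void setNext(DataManager next);
        }

        interface TypedDataManager<T, K> extends DataManager {
            T getData(K args);
        }

        class UserManager implements TypedDataManager<String, Integer> {
            private DataManager next;
            public String getData(Integer args) { return "user-" + args; }
            public DataManager getNext() { return next; }
            public void setNext(DataManager next) { this.next = next; }
        }

        class ReportManager implements TypedDataManager<byte[], String> {
            private DataManager next;
            public byte[] getData(String args) { return new byte[0]; }
            public DataManager getNext() { return next; }
            public void setNext(DataManager next) { this.next = next; }
        }

        // Chaining managers with different type parameters compiles fine:
        // new UserManager().setNext(new ReportManager());

    Each link still has to know how to invoke itself (or expose a type-free operation on the base interface), because the chained reference no longer carries T and K.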

    Read the article

  • Building a structure/object in a place other than the constructor

    - by Vishal Naidu
    I have different types of objects representing the same business entity. UIObject, PowershellObject, DevCodeModelObject, WMIObject are all different representations of the same entity. So, say the entity is Animal: then I have AnimalUIObject, AnimalPSObject, AnimalModelObject, AnimalWMIObject, etc. The implementations of AnimalUIObject, AnimalPSObject, AnimalModelObject are all in separate assemblies. My scenario is that I want to verify the contents of the business entity Animal irrespective of the assembly it came from. So I created a GenericAnimal class to represent the Animal entity, and in GenericAnimal I added the following constructors: GenericAnimal(AnimalUIObject), GenericAnimal(AnimalPSObject), GenericAnimal(AnimalModelObject). Basically I made GenericAnimal depend on all the underlying assemblies so that while verifying I deal with this abstraction. The other approach is to give GenericAnimal an empty constructor and allow the underlying assemblies to have a Transform() method which would build the GenericAnimal. Both approaches have some pros and cons:
    - The 1st approach. Pros: all construction logic is in one place, in the GenericAnimal class. Cons: GenericAnimal must be touched every time there is a new representation form.
    - The 2nd approach. Pros: construction responsibility is delegated to the underlying assembly. Cons: as construction logic is spread across assemblies, tomorrow if I need to add a property X to GenericAnimal then I have to touch all the assemblies to change the Transform method.
    Which approach looks better? Or which would you consider the lesser evil? Is there any alternative better than the above two?
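
    For concreteness, here is what the second approach looks like in outline - sketched in Java with hypothetical names, since only the structure matters: each representation owns its own conversion, and GenericAnimal stays free of references to the representation assemblies.

        // Shared verification type; knows nothing about the representations.
        class GenericAnimal {
            String name;
            int legs;
        }

        // Lives alongside the UI representation in its own assembly/module.
        class AnimalUIObject {
            String displayName;
            int legCount;

            GenericAnimal transform() {
                GenericAnimal generic = new GenericAnimal();
                generic.name = this.displayName;
                generic.legs = this.legCount;
                return generic;
            }
        }

    A common middle ground is to pull transform() out into a small mapper/adapter class per representation: adding a property to GenericAnimal still means touching each mapper, but the representation classes themselves stay untouched.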

    Read the article

  • How to strongly type properties in JavaScript that map to models in C# ?

    - by Roberto Sebestyen
    I'm not even sure if I worded the question right, but I'll try and explain as clearly as possible with an example: In the following example scenario: 1) Take a class such as this: public class foo { public string firstName {get;set;} public string lastName {get;set} } 2) Serialize that into JSON, pass it over the wire to the Browser. 3) Browser de-serializes this and turns the JSON into a JavaScript object so that you can then access the properties like this: var foo = deSerialize("*******the JSON from above**************"); alert(foo.firstName); alert(foo.lastName); What if now a new developer comes along working on this project decides that firstName is no longer a suitable property name. Lets say they use ReSharper to rename this property, since ReSharper does a pretty good job at finding (almost) all the references to the property and renaming them appropriately. However ReSharper will not be able to rename the references within the JavaScript code (#3) since it has no way of knowing that these also really mean the same thing. Which means the programmer is left with the responsibility of manually finding these references and renaming those too. The risk is that if this is forgotten, no one will know about this error until someone tests that part of the code, or worse, slip through to the customer. Back to the actual question: I have been trying to think of a solution to this to some how strongly type these property names when used in javascript, so that a tool like ReSharper can successfully rename ALL usages of the property? Here is what I have been thinking for example (This would obviously not work unless i make some kind of static properties) var foo = deSerialize("*******the JSON from above**************"); alert(foo.<%=foo.firstName.GetPropertyName()%>) alert(foo.<%=foo.lastName.GetPropertyName()%>) But that is obviously not practical. Does anyone have any thoughts on this? Thanks, and kudos to all of the talented people answering questions on this site.

    Read the article

  • Patterns and Libraries for working with raw UI values.

    - by ProfK
    By raw values, I mean the application-level values provided by UI controls, such as the Text property on a TextBox. Too often I find myself writing code to check and parse such values before they get used as a business-level value, e.g. PaymentTermsNumDays. I've mitigated a lot of the spade work with rough and ready extension methods like String.ToNullableInt, but we all know that just isn't right. We can't put the whole world on String's shoulders. Do I look at tasking my UI to provide business values, using a ruleset pushed out from the server app, or do I open my business objects up a bit to do the required sanitising etc. as required? Neither of these approaches sits quite right with me; the first seems closer to ideal, but is quite a bit of work, while the latter doesn't show much respect for the business objects' single responsibility. The responsibilities of the UI are a closer match. Between these extremes, I could also just implement another DTO layer, an IoC container with sanitising and parsing services, derive enhanced UI controls, or stick to copy-and-paste inline drudgery.

    Read the article

  • What job title would be most suitable for the objective in my resume, and what salary range should I expect?

    - by user354177
    I was a classic ASP developer in 2000. After a year of full-time employment, I left the field. I found a part-time position as an ASP developer again in 2005 and taught myself VB.NET. In 2007, I got my current full-time job as an ASP.NET web developer. I taught myself C#, LINQ to SQL, Web Services, AJAX, and creating all kinds of reports with Reporting Services. One and a half years ago, I enrolled in a part-time graduate program in Database and Web Systems. I have two semesters to go, and so far my GPA is 4.0/4.0. My job responsibility is to collect business requirements from other departments, design the database, write stored procedures, create aspx pages, and create reports. I love what I do and want to advance my career to the next level. What I enjoy most is designing the relational database. I would want to become a .NET Architect eventually. I got an interview. They were looking for an ASP.NET web developer, but I was surprised and disappointed that the position would only involve creating aspx pages. I would not even have the opportunity to write stored procedures, let alone design the database (those would be provided by another group). Furthermore, they asked me some detailed questions about Web Forms, some of which I did not know the answers to; they might be disappointed as well. I am eager to learn and can apply what I learn to real projects right away. I believe that no matter what specific skills I am lacking for a new position, I can catch up quickly. I am looking for a job in the $70k range. The objective in my resume is "Experienced C# Web Application Developer". Due to the experience from the last interview, I wonder if that objective is really what I want. Could somebody answer my questions? Thank you.

    Read the article

  • In C#, should I reuse a function / property parameter to compute a cleaner temporary value, or create a temporary variable?

    - by Hamish Grubijan
    The example below may not be problematic as is, but it should be enough to illustrate a point. Imagine that there is a lot more work than trimming going on.

        public string Thingy
        {
            set
            {
                // I guess we can throw a null reference exception here on null.
                value = value.Trim();  // Well, imagine that there is so much processing to do
                this.thingy = value;   // That this.thingy = value.Trim() would not fit on one line ...
            }
        }

    So, if the assignment has to take two lines, then I either have to abuse/reuse the parameter, or create a temporary variable. I am not a big fan of temporary variables. On the other hand, I am not a fan of convoluted code. I did not include an example where a function is involved, but I am sure you can imagine it. One concern I have is if a function accepted a string and the parameter was "abused", and then someone changed the signature to ref in both places - this ought to mess things up, but ... who would knowingly make such a change if it already worked without a ref? Seems like it is their responsibility in this case. If I mess with the value of value, am I doing something non-trivial under the hood? If you think that both approaches are acceptable, then which do you prefer and why? Thanks.

    Read the article

  • CRM 2011 - Workflows Vs JavaScripts

    - by Kanini
    In the Contact entity, I have the following attributes Preferred email - A read only field of type Email Personal email 1 - An email field Personal email 2 - An email field Work email 1 - An email field Work email 2 - An email field School email - An email field Other email - An email field Preferred email option - An option set with the following values {Personal email 1, Personal email 2, Work email 1, Work email 2, School email and Other email). None of the above mentioned fields are required. Requirement When user picks a value from Preferred email option, we copy the email address available in that field and apply the same in the Preferred email field. Implementation The Solution Architect suggested that we implement the above requirement as a Workflow. The reason he provided was - most of the times, these values are to be populated by an external website and the data is then fed into CRM 2011 system. So, when they update Preferred email option via a Web Service call to CRM, the WF will run and updated the Preferred email field. My argument / solution What will happen if I do not pick a value from the Preferred email Option Set? Do I set it to any of the email addresses that has a value in it? If so, what if there is more than one of the email address fields are populated, i.e., what if Personal email 1 and Work email 1 is populated but no value is picked in the Option Set? What if a value existed in the Preferred email Option Set and I then change it to NULL? Should the field Preferred email (where the text value of email address is stored) be set to Read Only? If not, what if I have picked Personal email 1 in the Option Set and then edit the Preferred email address text field with a completely new email address If yes, then we are enforcing that the preferred email should be one among Personal email 1, Personal email 2, Work email 1, Work email 2, School email or Other email [My preference would be this] What if I had a value of [email protected] in the personal email 1 field and personal email 2 is empty and choose value of Personal email 1 in the drop down for Preferred email (this will set the Preferred email field to [email protected]) and later, I change the value to Personal email 2 in the Preferred email. It overwrites a valid email address with nothing. I agree that it would be highly unlikely that a user will pick Preferred email as Personal email 2 and not have a value in it but nevertheless it is a possible scenario, isn’t it? What if users typed in a value in Personal email 1 but by mistake picked Personal email 2 in the option set and Personal email 2 field had no value in it. Solution The field Preferred email option should be a required field A JS should run whenever Preferred email option is changed. That JS function should set the relevant email field as required (based on the option chosen) and another JS function should be called (see step 3). A JS function should update the value of Preferred email with the value in the email field (as picked in the option set). The JS function should also be run every time someone updates the actual email field which is chosen in the option set. The guys who are managing the external website should update the Preferred email field - surely, if they can update Preferred email option via a Web Service call, it is easy enough to update the Preferred email right? Question Which is a better method? Should it be written as a JS or a WorkFlow? 
Also, whose responsibility is it to update the Preferred email field when the data flows from an external website? I am new to CRM 2011 but have around 6 years of experience as a CRM consultant (with other products). I do not come from a development background as I started off as a Application Support Engineer but have picked up development in the last couple of years.

    Read the article

  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what Hadoop is. In this article we will take a quick look at one of the four most important buzz words that surround Big Data – MapReduce.

    What is MapReduce? MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally Google proprietary technology, it has become quite a generalized term in recent times. MapReduce comprises a Map() and a Reduce() procedure. The Map() procedure performs filtering and sorting operations on the data, whereas the Reduce() procedure performs a summary operation on the data. This model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries providing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow.

    Advantages of MapReduce Procedures: The MapReduce framework usually spans distributed servers and runs various tasks in parallel. Various components manage the communication between the nodes holding the data and provide high availability and fault tolerance. Programs written in the MapReduce functional style are automatically parallelized and executed on commodity machines. The MapReduce framework takes care of the details of partitioning the data and executing the processes on distributed servers at run time. If there is any failure during this process, the framework provides high availability and the remaining nodes take over the responsibility of the failed node. As you can clearly see, the MapReduce framework provides much more than just the Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce framework processes many petabytes of data on thousands of processing machines.

    How does the MapReduce framework work? A typical MapReduce deployment holds petabytes of data on thousands of nodes. Here is the basic explanation of the MapReduce procedures that use this massive commodity of servers.

    Map() procedure: There is always a master node in this infrastructure which takes an input. Right after taking the input, the master node divides it into smaller sub-inputs or sub-problems. These sub-problems are distributed to worker nodes. A worker node then processes them and does the necessary analysis. Once the worker node completes the work on its sub-problem, it returns the result back to the master node.

    Reduce() procedure: All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects the answers and aggregates them into the answer to the original big problem it was assigned. The MapReduce framework runs the above Map() and Reduce() procedures in parallel and independently of each other. All the Map() procedures can run in parallel, and once each worker node has completed its task it can send the result back to the master node to be compiled into a single answer. This particular procedure can be very effective when it is implemented on a very large amount of data (Big Data).

    The MapReduce framework has five different steps:
    - Preparing Map() Input
    - Executing User Provided Map() Code
    - Shuffle Map Output to Reduce Processor
    - Executing User Provided Reduce Code
    - Producing the Final Output

    Here is the dataflow of the MapReduce framework:
    - Input Reader
    - Map Function
    - Partition Function
    - Compare Function
    - Reduce Function
    - Output Writer

    In a future blog post of this 21 day series we will explore the various components of MapReduce in detail.

    MapReduce in a Single Statement: MapReduce is the equivalent of SELECT and GROUP BY in a relational database, for a very large database.

    Tomorrow: In tomorrow’s blog post we will discuss the buzz word – HDFS. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
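
    To make the three phases concrete, here is a toy, single-process word count in Java that mirrors the Map(), shuffle and Reduce() steps described above; a real framework such as Hadoop runs the same phases distributed across many worker nodes (names and data here are illustrative only):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class MapReduceSketch {
            public static void main(String[] args) {
                List<String> input = List.of("big data", "big cluster", "data cluster");

                // Map(): turn each record into intermediate (word, 1) pairs.
                List<Map.Entry<String, Integer>> mapped = new ArrayList<>();
                for (String line : input) {
                    for (String word : line.split(" ")) {
                        mapped.add(Map.entry(word, 1));
                    }
                }

                // Shuffle: group the intermediate pairs by key.
                Map<String, List<Integer>> grouped = new HashMap<>();
                for (Map.Entry<String, Integer> pair : mapped) {
                    grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>()).add(pair.getValue());
                }

                // Reduce(): summarize each group into a single value.
                Map<String, Integer> counts = new HashMap<>();
                grouped.forEach((word, ones) ->
                        counts.put(word, ones.stream().mapToInt(Integer::intValue).sum()));

                System.out.println(counts); // e.g. {big=2, data=2, cluster=2}
            }
        }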

    Read the article

  • Instant Rename and Rename Refactoring

    - by Petr
    During the last few weeks I have received a few questions about rename refactoring, and some users have also complained to me that the refactoring in NetBeans 6.x was much faster, so I would like to explain the situation. For people who don't know, the Instant Rename action and Rename Refactoring can look like one action. That's not true, even though both actions use the same shortcut (CTRL + R). NetBeans 6.x contained only the Instant Rename action (speaking about PHP support), which we can describe as a very simple rename refactoring within one file. From NetBeans 7.0 the Instant Rename action works only in a "non-public" context. That means this action is used for quickly renaming variables that have local context, like inside a method, or for renaming private methods and fields that cannot be used outside of the scope where they are declared. From the user's point of view the two actions are easy to tell apart. When CTRL+R invokes the Instant Rename action, the identifier is surrounded with a rectangle and you can rename it directly in the file. It's fast and simple, and the usages of the identifier are renamed as you type. The picture below shows the Instant Rename action for the $message identifier, which is visible only in the print_test method, so CTRL+R invokes Instant Rename. NetBeans 7.0 added Rename Refactoring, which is called for public identifiers - that is, identifiers that could be used in other files. If you press the CTRL+R shortcut when the caret is inside the $hello identifier from the picture above, NetBeans recognizes that $hello is declared/used in a global context and calls Rename Refactoring, which brings up a dialog to change the name of the identifier. From this dialog you have to preview the suggested changes by pressing the Preview button, and then execute the refactoring through the Do Refactoring button. Yes, it's more complicated from the user's point of view than Instant Rename, but with Rename Refactoring NetBeans can change more files at once. It should be the developer's responsibility to decide whether the suggested changes are right and the refactoring can be executed, or whether in some files the original name should be kept. Someone could argue that they don't use the $hello variable in any other file, so Instant Rename could be used in such a case. Yes, that's true, but in that case NetBeans would have to know all usages of all identifiers and keep this information up to date while you edit a file. I'm sure that this is not possible due to performance problems, mainly for big projects. So the usages are computed after pressing the Preview button. And why is the Refactor button always disabled in the Rename dialog, so that the user always has to go through the preview phase? NetBeans has an API and SPI for implementing refactoring actions, and this dialog is a part of that infrastructure. If you rename an identifier in Java, for example, the Refactor button is enabled, but Java is a strongly typed language and you can be almost 99% sure that the IDE will suggest the right results. In PHP, as a dynamic language, we cannot be sure; what NetBeans finds is only a "guess". This is why NetBeans pushes developers to preview the changes for a PHP rename. I hope that I have explained it clearly. I'm open to any discussion. What I have described above is the situation in NetBeans 7.0 and 7.0.1, and it will probably also be the case in NetBeans 7.1, because there is no plan to change it. Please write your opinion here.

    Read the article

  • Is there a low carbon future for the retail industry?

    - by user801960
    Recently Oracle published a report in conjunction with The Future Laboratory and a global panel of experts to highlight the issue of energy use in modern industry and the serious need to reduce carbon emissions radically by 2050.  Emissions must be cut by 80-95% below the levels in 1990 – but what can the retail industry do to keep up with this? There are three key aspects to the retail industry where carbon emissions can be cut:  manufacturing, transport and IT.  Manufacturing Naturally, manufacturing is going to be a big area where businesses across all industries will be forced to make considerable savings in carbon emissions as well as other forms of pollution.  Many retailers of all sizes will use third party factories and will have little control over specific environmental impacts from the factory, but retailers can reduce environmental impact at the factories by managing orders more efficiently – better planning for stock requirements means economies of scale both in terms of finance and the environment. The John Lewis Partnership has made detailed commitments to reducing manufacturing and packaging waste on both its own-brand products and products it sources from third party suppliers. It aims to divert 95 percent of its operational waste from landfill by 2013, which is a huge logistics challenge.  The John Lewis Partnership’s website provides a large amount of information on its responsibilities towards the environment. Transport Similarly to manufacturing, tightening up on logistical planning for stock distribution will make savings on carbon emissions from haulage.  More accurate supply and demand analysis will mean less stock re-allocation after initial distribution, and better warehouse management will mean more efficient stock distribution.  UK grocery retailer Morrisons has introduced double-decked trailers to its haulage fleet and adjusted distribution logistics accordingly to reduce the number of kilometers travelled by the fleet.  Morrisons measures route planning efficiency in terms of cases moved per kilometre and has, over the last two years, increased the number of cases per kilometre by 12.7%.  See Morrisons Corporate Responsibility report for more information. IT IT infrastructure is often initially overlooked by businesses when considering environmental efficiency.  Datacentres and web servers often need to run 24/7 to handle both consumer orders and internal logistics, and this both requires a lot of energy and puts out a lot of heat.  Many businesses are lowering environmental impact by reducing IT system fragmentation in their offices, while an increasing number of businesses are outsourcing their datacenters to cloud-based services.  Using centralised datacenters reduces the power usage at smaller offices, while using cloud based services means the datacenters can be based in a more environmentally friendly location.  For example, Facebook is opening a massive datacentre in Sweden – close to the Arctic Circle – to reduce the need for artificial cooling methods.  In addition, moving to a cloud-based solution makes IT services more easily scaleable, reducing redundant IT systems that would still use energy.  In store, the UK’s Carbon Trust reports that on average, lighting accounts for 25% of a retailer’s electricity costs, and for grocery retailers, up to 50% of their electricity bill comes from refrigeration units.  On a smaller scale, retailers can invest in greener technologies in store and in their offices.  
    The report concludes that widely shared objectives of energy security, reduced emissions and continued economic growth are dependent on the development of a smart grid capable of delivering energy efficiency and demand response, as well as integrating renewable and variable sources of energy. The report is available to download from http://emeapressoffice.oracle.com/imagelibrary/detail.aspx?MediaDetailsID=1766 - I'd be interested to hear your thoughts on the report.

    Read the article

  • BizTalk Server Monitoring – SharePoint Web Part

    - by SURESH GIRIRAJAN
    I have been worked with customers using BizTalk as shared infrastructure in the enterprise, where we have two or more BizTalk apps running on it for different Business groups. Also these customers are not using BizTalk ESB portal even though they are using BizTalk ESB exception framework. So main issue with all these Business groups are they don’t have visibility into the BizTalk apps running in prod, even though they are using SCOM and other monitoring stuff in place. So I am trying to address few issues I am going to list below and how I try to mitigate them, first one on the list is how to get visibility into prod, how to provision those access to the BizTalk resources with minimal activity and how can we take advantage of the resources we have today. So I was working on creating REST data services for BizTalk RFID a year ago and available on codeplex. I thought to extend that idea to take advantage of BizTalk Data Services available in codeplex. I extended the BizTalk data services I will upload the updated service soon. So let me start thru how my solution works, so first step I am using the BizTalk data service (REST service) which expose most of the BizTalk artifacts as resources such as Applications, Orchestrations, Send ports, Receive ports, Host instances and In process instances etc. BizTalk Server Monitoring – SharePoint Web Part I am hosting the BizTalk data service in IIS with application pool configured to run under BizTalk administrator credentials. So with this setup I am making the service to make accessible anonymous. Next step of this solution I have created a SharePoint Visual web part which consumes the BizTalk data service and display all the BizTalk Application and Platform settings in read only mode. Even though BizTalk data services offers to browse resources as well perform actions like starting, stopping Orchestrations, Send ports, Receive locations, Host instances etc. Host Instances BizTalk Applications BizTalk Running / Suspended Instances So having this BizTalk Monitoring SharePoint web part, will be added to the SharePoint. This eliminates the need for granting access to the BizTalk users explicitly, so when you have BizTalk contractor or BizTalk application user need to have access to the BizTalk environment all the need is have access to the SharePoint website. You can configure the web part point to different end point based on your environment. I am making this as read only as part of this to make easier for the users and in terms of provisioning. This removes the dependency of BizTalk admin at least for viewing the BizTalk application status and errors etc. If we need to make any changes to the BizTalk application then its application owner responsibility to co-ordinate with BizTalk admins. There are options like BizTalk ESB portal, BizTalk 360 etc… but this one of the approach to reduce number of steps required to give access to BizTalk application users and also to maximize the resource we have in enterprise today. Also you can expose this data service thru Azure Service Bus and access from other apps like mobile devices or create a web site hosted in Azure etc. One last thing I have tested only with BizTalk Server 2010 on x64 VM only, but it should work on other version. I will try to upload the code shortly with instructions how to setup etc.… I welcome thoughts and suggestions… Hope this helps….

    Read the article

  • F# in ASP.NET, mathematics and testing

    - by DigiMortal
    Starting from Visual Studio 2010 F# is full member of .NET Framework languages family. It is functional language with syntax specific to functional languages but I think it is time for us also notice and study functional languages. In this posting I will show you some examples about cool things other people have done using F#. F# and ASP.NET As I am ASP/ASP.NET MVP I am – of course – interested in how people use different languages and technologies with ASP.NET. C# MVP Tomáš Petrícek writes about developing ASP.NET MVC applications using F#. He also shows how to use LINQ To SQL in F# (using F# PowerPack) and provides sample solution and Visual Studio 2010 template for F# MVC web applications. You may also find interesting how you can create controllers in F#. Excellent work, Tomáš! Vladimir Matveev has interesting example about how to use F# and ApplicationHost class to process ASP.NET requests ouside of IIS. This is simple and very straight-forward example and I strongly suggest you to take a look at it. Very cool example is project Strom in Codeplex. Storm is web services testing tool that is fully written on F#. Take a look at this site because Codeplex offers also source code besides binaries. Math Functional languages are strong in fields like mathematics and physics. When I wrote my C# example about BigInteger class I found out that recursive version of Fibonacci algorithm in C# is not performing well. In same time I made same experiment on F# and in F# there were no performance problems with recursive version. You can find F# version of Fibonacci algorithm from Bob Palmer’s blog posting Fibonacci numbers in F#. Although golden spiral is useful for solving many problems I looked for some practical code example and found one. Kean Walmsley published in his Through the Interface blog very interesting posting Creating Fibonacci spirals in AutoCAD using F#. There are also other cool examples you may be interested in. Using numerical components by Extreme Optimization  it is possible to make some numerical integration (quadrature method) using F# (also C# example is available). fsharp.it introduces factorials calculation on F#. Robert Pickering has made very good work on programming The Game of Life in Silverlight and F# – I definitely suggest you to try out this example as it is very illustrative too. Who wants something more complex may take a look at Newton basin fractal example in F# by Jonathan Birge. Testing After some searching and surfing I found out that there is almost everything available for F# to write tests and test your F# code. FsCheck - FsCheck is a port of Haskell's QuickCheck. Important parts of the manual for using FsCheck is almost literally "adapted" from the QuickCheck manual and paper. Any errors and omissions are entirely my responsibility. FsTest - This project is designed to Language Oriented Programming constructs around unit testing and behavior testing in F#. The goal of this project is to create a Domain Specific Language for testing F# code in a way that makes sense for functional programming. FsUnit - FsUnit makes unit-testing with F# more enjoyable. It adds a special syntax to your favorite .NET testing framework. xUnit.NET - xUnit.net is a developer testing framework, built to support Test Driven Development, with a design goal of extreme simplicity and alignment with framework features. 
It is compatible with .NET Framework 2.0 and later, and offers several runners: console, GUI, MSBuild, and Visual Studio integration via TestDriven.net, CodeRush Test Runner and Resharper. It also offers test project integration for ASP.NET MVC. Getting started Well, as a first thing you need Visual Studio 2010. Then take a look at these resources: F# samples @ MSDN Microsoft F# Developer Center @ MSDN F# Language Reference @ MSDN F# blog F# forums Real World Functional Programming: With Examples in F# and C# (Amazon) Happy F#-ing! :)

    Read the article

  • Healthcare and Distributed Data Don't Mix

    - by [email protected]
    How many times have you heard the story?  Hard disk goes missing, USB thumb drive goes missing, laptop goes missing...Not a week goes by that we don't hear about our data going missing...  Healthcare data is a big one, but we hear about credit card data, pricing info, corporate intellectual property...  When I have spoken at Security and IT conferences part of my message is "Why do you give your users data to lose in the first place?"  I don't suggest they can't have access to it...in fact I work for the company that provides the premiere data security and desktop solutions that DO provide access.  Access isn't the issue.  'Keeping the data' is the issue.We are all human - we all make mistakes... I fault no one for having their car stolen or that they dropped a USB thumb drive. (well, except the thieves - I can certainly find some fault there)  Where I find fault is in policy (or lack thereof sometimes) that allows users to carry around private, and important, data with them.  Mr. Director of IT - It is your fault, not theirs.  Ms. CSO - Look in the mirror.It isn't like one can't find a network to access the data from.  You are on a network right now.  How many Wireless ones (wifi, mifi, cellular...) are there around you, right now?  Allowing employees to remove data from the confines of (wait for it... ) THE DATA CENTER is just plain indefensible when it isn't required.  The argument that the laptop had a password and the hard disk was encrypted is ridiculous.  An encrypted drive tells thieves that before they sell the stolen unit for $75, they should crack the encryption and ascertain what the REAL value of the laptop is... credit card info, Identity info, pricing lists, banking transactions... a veritable treasure trove of info people give away on an 'encrypted disk'.What started this latest rant on lack of data control was an article in Government Health IT that was forwarded to me by Denny Olson, an Oracle Principal Sales Consultant in Minnesota.  The full article is here, but the point was that a couple laptops went missing in a couple different cases, and.. well... no one knows where the data is, and yes - they were loaded with patient info.  What were you thinking?Obviously you can't steal data form a Sun Ray appliance... since it has no data, nor any storage to keep the data on, and Secure Global Desktop allows access from Macs, Linux and Windows client devices...  but in all cases, there is no keeping the data unless you explicitly allow for it in your policy.   Since you can get at the data securely from any network, why would you want to take personal responsibility for it?  Both Sun Rays and Secure Global Desktop are widely used in Healthcare... but clearly not widely enough.We need to do a better job of getting the message out -  Healthcare (or insert your business type here) and distributed data don't mix. Then add Hot Desking and 'follow me printing' and you have something that Clinicians (and CSOs) love.Thanks for putting up my blood pressure, Denny.

    Read the article

  • Technology vs. Antiquated Methods

    - by AreYouSerious
    So Here I am talking with my Program lead, about technology, and how while my father is the VP of a major company, he still doesn't have a blackberry, or a smart phone. and I think it's funny. Most people would say it's a generational thing. That because he's older, he dosen't accept technology, and that's why. I have trouble swallowing that because this is the same man, who bought a satellite radio for his car, and made sure that the printer for the house was networked so that his and my mom's laptop could print wirelessly from the living room through their wireless network. I think it has to do with more with necessity, and partially with finical responsibility. My father is very financially conciencious. Think about it yourself. you pay for internet at your home. You have internet access at your office. But if you get a smart phone you're going to pay almost the same amount just for that access. A lot of people take it as just another fixed cost... I'm one of those. I don't even think about it, as I check my facebook from the bus, train, or even while sitting in traffic... The convience of having connection everywhere outweigh the financial responsible person screaming at in the back of my mind. However This conversation lead us to another venue of discussion.... what happens when the power dies. if you left your charger at home, or you phone or navi just stops working... are you going to be able to continue on as you did when it was working... let take the navi as an example... if your navi stops working, how many of you know how to use a map, and navigate? can you even find where you are on a map using the cross streets that your stopped at? This is a skill that unfortunatelly is overlooked these days in the child rearing process. Most people don't see the value, while some others can't do it themselves, so how can they teach their offspring? Take another example.... what if your phone gets lost... or stolen, or you drive over it? do you have the numbers in their memorized? are they recorded somewhere? I know that if it weren't for google sync I wouldn't have them backed up... not sufficiently. And what good does that do if you're in timbuckto and your phone dies, think you can get on the internet to look up those numbers? Don't get me wrong. I'm the first to see the value in technology, and am willing to pay the price to not have to wait for prices to come down. I will pay extra to have that newest thing right now. but let me tell you what.... I know that should I ever procreate it will be a requirement for my offspring (children) to learn how to do something manually before I'll let them use technology. Food for thought?? Let everyone else know what you think.... just sayin'

    Read the article

  • Big Data – Data Mining with Hive – What is Hive? – What is HiveQL (HQL)? – Day 15 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the operational database in the Big Data story. In this article we will understand what Hive and HQL are in the Big Data story. Yahoo started working on PIG (we will cover that in the next blog post) for their application deployment on Hadoop. The goal of Yahoo was to manage their unstructured data. Similarly, Facebook started deploying their warehouse solutions on Hadoop, which resulted in HIVE. The reason for going with HIVE is that traditional warehousing solutions were getting very expensive.

    What is HIVE? Hive is a data warehousing infrastructure for Hadoop. Its primary responsibility is to provide data summarization, query and analysis. It supports analysis of large datasets stored in Hadoop’s HDFS as well as on the Amazon S3 filesystem. The best part of HIVE is that it supports SQL-like access to structured data, known as HiveQL (or HQL), as well as big data analysis with the help of MapReduce. Hive is not built to get a quick response to queries; it is built for data mining applications. Data mining applications can take from several minutes to several hours to analyze the data, and HIVE is primarily used there.

    HIVE Organization: The data is organized in three different formats in HIVE.
    - Tables: They are very similar to RDBMS tables and contain rows and columns. Hive is just layered over the Hadoop File System (HDFS), hence tables are directly mapped to directories of the filesystem. It also supports tables stored in other native file systems.
    - Partitions: Hive tables can have more than one partition. They are mapped to subdirectories and file systems as well.
    - Buckets: In Hive, data may be divided into buckets. Buckets are stored as files in partitions in the underlying file system.
    Hive also has a metastore which stores all the metadata. It is a relational database containing various information related to the Hive schema (column types, owners, key-value data, statistics etc.). We can use a MySQL database for this.

    What is HiveQL (HQL)? The Hive query language provides the basic SQL-like operations. Here are a few of the tasks which HQL can do easily:
    - Create and manage tables and partitions
    - Support various relational, arithmetic and logical operators
    - Evaluate functions
    - Download the contents of a table to a local directory, or the results of queries to an HDFS directory
    Here are examples of HQL queries: SELECT upper(name), salesprice FROM sales; SELECT category, count(1) FROM products GROUP BY category; When you look at the above queries, you can see they are very similar to SQL queries.

    Tomorrow: In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – Pig. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
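
    Since Hive also exposes HQL over JDBC, a common way to run the queries above from application code is through the Hive JDBC driver. A hedged sketch follows - it assumes a HiveServer2 instance on localhost:10000, a default database, and the Hive JDBC driver on the classpath; adjust all of these for a real cluster:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class HiveQueryExample {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details for a local HiveServer2.
                String url = "jdbc:hive2://localhost:10000/default";
                try (Connection conn = DriverManager.getConnection(url, "hive", "");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                             "SELECT category, count(1) FROM products GROUP BY category")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " -> " + rs.getLong(2));
                    }
                }
            }
        }

    The query itself is the same HQL shown in the article; only the connection details and driver setup are environment-specific.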

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23  | Next Page >