Search Results

Search found 3307 results on 133 pages for 'distributed datasource'.


  • Notes from a short presentation on NodeJs

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/05/30/notes-from-a-short-presentation-on-nodejs.aspx
    I volunteered to give a short 30-minute presentation on NodeJs at a work lunch and learn. With my limited experience, I see Node as a great tool for build process improvement, scaffolding with yeoman, and running tests with Karma. I haven't looked into using it as a full server or development stack. I guess I'm too stuck on IIS and Visual Studio :-). Here are my notes; they aren't very well formatted, but I wanted to share them anyway.
    What is it? "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices."
    Why should you be interested? It is another popular tool that can help you get the job done, you can use the command prompt, and it can be run at build or release time to automate tasks.
    What are some uses?
    - https://www.npmjs.org/ - NuGet for Node packages
    - http://bower.io/ - NuGet for UI JavaScript libraries (jQuery, Bootstrap, Angular, etc.)
    - http://yeoman.io/ - "Our workflow is comprised of three tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool) and bower (for package management)." Yeoman asks which components you want.
    - an alternative: http://joakimbeng.eu01.aws.af.cm/slush-replacing-yeoman-with-gulp/
    - https://www.npmjs.org/package/generator-cg-angular - PhantomJS, LESS, ...
    Git is needed for bower (http://git-scm.com/): run the installer in Windows before you can use bower, select "Run Git from the Windows Command Prompt" in the installer, and reboot (see http://stackoverflow.com/questions/20069297/bower-git-not-in-the-path-error).
    npm install -g git
    npm install -g yo
    npm install -g generator-cg-angular
    mkdir myapp
    cd myapp
    yo cg-angular
    npm install -g bower
    npm install -g grunt-cli
    yo bower
    grunt serve
    grunt test
    grunt build
    There are many generators (generator-angular is another one); I like the NuGet HotTowel-Angular from John Papa myself. IIS Node was needed for Express (prompt from WebMatrix).
    Other uses: a bat file to start up Karma (see below); image compression from the command line (https://www.npmjs.org/search?q=optimize+images, https://github.com/heldr/node-smushit); LESS compiling; JS and CSS combining and minification at build with Gulp for requireJS apps; a quick lightweight HTTP server ("Express").
    Build pipeline with Grunt or Gulp: http://www.johnpapa.net/gulp-and-grunt-at-anglebrackets/. Gulp is newer and improved over Grunt; it is supposed to be easier to use, but Grunt is more established. See https://github.com/johnpapa/ng-demos/tree/master/grunt-gulp. https://github.com/assetgraph/assetgraph-builder does a lot of the minimizing, combining, image optimization etc. using Node. Looks interesting...
    More links: http://nodejs.org, http://nodeschool.io/, http://sub.watchmecode.net/getting-started-with-nodejs-installing-and-writing-your-first-code/, https://stormpath.com/blog/build-a-killer-node-dot-js-client-for-your-rest-plus-json-api/, https://codio.com/, http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx
    Run unit tests with Karma in msBuild. karma-start.bat:
    @echo off
    cd %~dp0\..
    REM 604800 is to make sure we only update once every 7 days
    call npm install --cache-min 604800 -g grunt-cli
    call npm install --cache-min 604800
    call npm install --cache-min 604800 -g karma-cli
    karma start UnitTests\karma.conf.js
    REM karma start UnitTests\karma.conf.js --single-run
    REM see karma-start.bat and karma.conf.js
    REM jsHint comes from NuGet

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control. But there is a current use of the word “Cloud” that does not necessarily mean that the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource. That means the setup, configuration, management, balancing and so on that is needed so that a user – which might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility. The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud”, but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”. A good example here is your E-Mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMWare to IBM and many others. Microsoft's offering in this space is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick. The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with, called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases such as top-secret or private systems, you probably should. But there are times when you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you know how to do already. So congrats on your new role as a “Cloud Provider”. If you have an E-mail system or a database platform, you can just put that right on your resume.

    Read the article

  • Need WIF Training?

    - by Your DisplayName here!
    I spend numerous hours every month answering questions about WIF and identity in general. This made me realize that this is still quite a complicated topic once you go beyond the standard fedutil stuff. My good friend Brock and I put together a two-day training course about WIF that covers everything we think is important. The course includes extensive lab material where you take a standard application and apply all kinds of claims and federation techniques and technologies like WS-Federation, WS-Trust, session management, delegation, home realm discovery, multiple identity providers, Access Control Service, REST, SWT and OAuth. The lab also includes the latest version of the thinktecture identityserver and you will learn how to use and customize it. If you are looking for an open enrollment style of training, have a look here. Or contact me directly! The course outline looks as follows:
    Day 1
    Intro to Claims-based Identity & the Windows Identity Foundation
    WIF introduces important concepts like conversion of security tokens and credentials to claims, claims transformation and claims-based authorization. In this module you will learn the basics of the WIF programming model and how WIF integrates into existing .NET code.
    Externalizing Authentication for Web Applications
    WIF includes support for the WS-Federation protocol. This protocol allows separating business and authentication logic into separate (distributed) applications. The authentication part is called the identity provider or, in more general terms, a security token service. This module looks at this scenario both from an application and an identity provider point of view and walks you through the necessary concepts to centralize application login logic, both using a standard product like Active Directory Federation Services and a custom token service built using WIF’s API support.
    Externalizing Authentication for SOAP Services
    One big benefit of WIF is that it unifies the security programming model for ASP.NET and WCF. In the spirit of the preceding modules, we will have a look at how WIF integrates into the (SOAP) web service world. You will learn how to separate authentication into a separate service using the WS-Trust protocol and how WIF can simplify the WCF security model and extensibility API.
    Day 2
    Advanced Topics: Security Token Service Architecture, Delegation and Federation
    The preceding modules covered the 80/20 cases of WIF in combination with ASP.NET and WCF. In many scenarios this is just the tip of the iceberg. Especially when two business partners decide to federate, you usually have to deal with multiple token services and their implications in application design. Identity delegation is a feature that allows transporting the client identity over a chain of service invocations to make authorization decisions over multiple hops. In addition you will learn about the principal architecture of an STS, how to customize the one that comes with this training course, as well as how to build your own.
    Outsourcing Authentication: Windows Azure & the Azure AppFabric Access Control Service
    Microsoft provides a multi-tenant security token service as part of the Azure platform cloud offering. This is an interesting product because it allows you to outsource vital infrastructure services to a managed environment that guarantees uptime and scalability. Another advantage of the Access Control Service is that it allows easy integration of both the “enterprise” protocols like WS-* as well as “web identities” like LiveID, Google or Facebook into your applications. ACS acts as a protocol bridge in this case, where the application developer doesn’t need to implement all these protocols, but simply uses a service to make it happen.
    Claims & Federation for the Web and Mobile World
    The web & mobile world is also moving to a token- and claims-based model. While the mechanics are almost identical, other protocols and token types are used to achieve better HTTP (REST) and JavaScript integration for in-browser applications and small-footprint devices. Patterns like allowing third-party applications to work with your data without having to disclose your credentials are also important concepts in these application types. The nice thing about WIF and its powerful base APIs and abstractions is that it can shield application logic from these details while you focus on implementing the actual application. HTH

    Read the article

  • Organization & Architecture UNISA Studies – Chap 6

    - by MarkPearl
    Learning Outcomes
    - Discuss the physical characteristics of magnetic disks
    - Describe how data is organized and accessed on a magnetic disk
    - Discuss the parameters that play a role in the performance of magnetic disks
    - Describe different optical memory devices
    Magnetic Disk
    The way data is stored on and retrieved from magnetic disks: data is recorded on and later retrieved from the disk via a conducting coil named the head (in many systems there are two heads). The write mechanism exploits the fact that electricity flowing through a coil produces a magnetic field. Electric pulses are sent to the write head, and the resulting magnetic patterns are recorded on the surface below, with different patterns for positive and negative currents.
    The physical characteristics of a magnetic disk – summarize from the book.
    The factors that play a role in the performance of a disk:
    - Seek time – the time it takes to position the head at the track
    - Rotational delay / latency – the time it takes for the beginning of the sector to reach the head
    - Access time – the sum of the seek time and rotational delay
    - Transfer time – the time it takes to transfer data
    RAID
    The rate of improvement in secondary storage performance has been considerably less than the rate for processors and main memory, so secondary storage has become a bit of a bottleneck. RAID works on the concept that if one disk can only be pushed so far, additional gains in performance are to be had by using multiple parallel components. Points to note about RAID:
    - RAID is a set of physical disk drives viewed by the operating system as a single logical drive
    - Data is distributed across the physical drives of an array in a scheme known as striping
    - Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure (not supported by RAID 0 or RAID 1)
    Interestingly, increasing the number of drives increases the probability of failure. To compensate for this decreased reliability, RAID makes use of stored parity information that enables the recovery of data lost due to a disk failure.
    The RAID scheme consists of 7 levels:
    - Level 0 (Striping) – Non-redundant. Disks required: N. Data availability: lower than single disk. Large I/O data transfer capacity: very high. Small I/O request rate: very high for both read and write.
    - Level 1 (Mirroring) – Mirrored. Disks required: 2N. Data availability: higher than RAID 2–5 but lower than RAID 6. Large I/O data transfer capacity: higher than single disk. Small I/O request rate: up to twice that of a single disk for read.
    - Level 2 (Parallel access) – Redundant via Hamming code. Disks required: N + m. Data availability: much higher than single disk. Large I/O data transfer capacity: highest of all listed alternatives. Small I/O request rate: approximately twice that of a single disk.
    - Level 3 (Parallel access) – Bit-interleaved parity. Disks required: N + 1. Data availability: much higher than single disk. Large I/O data transfer capacity: highest of all listed alternatives. Small I/O request rate: approximately twice that of a single disk.
    - Level 4 (Independent access) – Block-interleaved parity. Disks required: N + 1. Data availability: much higher than single disk. Large I/O data transfer capacity: similar to RAID 0 for read, significantly lower than single disk for write. Small I/O request rate: similar to RAID 0 for read, significantly lower than single disk for write.
    - Level 5 (Independent access) – Block-interleaved parity. Disks required: N + 1. Data availability: much higher than single disk. Large I/O data transfer capacity: similar to RAID 0 for read, lower than single disk for write. Small I/O request rate: similar to RAID 0 for read, generally lower than single disk for write.
    - Level 6 (Independent access) – Block-interleaved parity. Disks required: N + 2. Data availability: highest of all listed alternatives. Large I/O data transfer capacity: similar to RAID 0 for read, lower than RAID 5 for write. Small I/O request rate: similar to RAID 0 for read, significantly lower than RAID 5 for write.
    Read pages 215–221 for a detailed explanation of the RAID levels.
    Optical Memory
    There are a variety of optical-disk systems available; read through the table on pages 222–223. Some of the devices include CD, CD-ROM, CD-R, CD-RW, DVD, DVD-R, DVD-RW and Blu-ray DVD.
    Magnetic Tape
    Most modern systems use serial recording – data is laid out as a sequence of bits along each track. The typical recording technique used in serial tapes is referred to as serpentine recording. In this technique, when data is being recorded, the first set of bits is recorded along the whole length of the tape. When the end of the tape is reached, the heads are repositioned to record a new track, and the tape is again recorded along its whole length, this time in the opposite direction. That process continues, back and forth, until the tape is full. To increase speed, the read-write head is capable of reading and writing a number of adjacent tracks simultaneously. Data is still recorded serially along individual tracks, but blocks in sequence are stored on adjacent tracks. A tape drive is a sequential access device. Magnetic tape was the first kind of secondary memory. It is still widely used as the lowest-cost, slowest-speed member of the memory hierarchy.
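    As a worked example of the disk performance parameters defined above, here is a minimal Java sketch that estimates access time as seek time plus average rotational delay, and then adds transfer time. The RPM, seek time, transfer rate and request size are illustrative values chosen for the example, not figures from the chapter.

```java
public class DiskAccessTime {
    public static void main(String[] args) {
        // Illustrative figures only (not from the chapter notes):
        // a 7200 RPM drive, 4 ms average seek time, 100 MB/s transfer rate, 4 KB request.
        double rpm = 7200.0;
        double avgSeekMs = 4.0;
        double transferRateMBs = 100.0;
        double requestKB = 4.0;

        // Average rotational delay is half a revolution.
        double rotationalDelayMs = 0.5 * (60_000.0 / rpm);

        // Transfer time for the request itself.
        double transferMs = (requestKB / 1024.0) / transferRateMBs * 1000.0;

        // Access time = seek time + rotational delay (as defined in the notes);
        // the total service time adds the transfer time.
        double accessMs = avgSeekMs + rotationalDelayMs;
        double totalMs = accessMs + transferMs;

        System.out.printf("Rotational delay: %.2f ms%n", rotationalDelayMs);
        System.out.printf("Access time:      %.2f ms%n", accessMs);
        System.out.printf("Total time:       %.2f ms%n", totalMs);
    }
}
```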

    Read the article

  • Building Tag Cloud Declarative ADF Component

    - by Arunkumar Ramamoorthy
    When building a website, there could be a requirement to add a tag cloud to let the users know the popular tags (or terms) used in the site. In this blog, we will build a simple declarative component to be used as a tag cloud in the page. To start with, we first create the declarative component, which can display the tag cloud. We do that by creating a new custom application from the new gallery. Give a name for the app and the project and, from the new gallery, create a new ADF Declarative Component. We need to specify the name for the declarative component, the attributes in it, etc. For displaying the tags as a cloud, we need to pass the content to this component. So, we will create an attribute to hold the values for the tag. Let us name it "value" and make it of java.lang.String type. After this, to hold the component, we need to create a tag library. This can be done by clicking on the Add Tag Library button. Clicking on the OK buttons in all the open dialogs creates a declarative component for us. Now, we need to display the tag cloud based on the value passed to the component. To do that, we assume that the value is a Tree Binding and has two attributes in it, say "Name" and "Weight". To make a tag cloud, we put together the "Name" values in a loop and set each one's font size based on the "Weight". After putting our logic to work, here is how the source looks. Attributes added to the declarative component can be retrieved by using #{attrs.<attribute_name>}. Now, we need to deploy this project as an ADF Library Jar file, so that it can be distributed to the consuming applications. We'll select ADF Library Jar as the type and create the profile. We get the jar file after deployment. To test the functionality, we can create a simple Fusion Web Application. To add our custom component to the consuming application, we can create a file system connection pointing to the location where the jar file is and add it, or add it through the project properties of the ViewController project. Now, our custom component has been added to the consuming application. We can test that by creating a VO in the model project with a query like: select 'Faces' as Name, 25 as Weight from dual union all select 'ADF', 15 from dual union all select 'ADFdi', 30 from dual union all select 'BC4J', 20 from dual union all select 'EJB', 40 from dual union all select 'WS', 35 from dual. Add this VO to the AppModule, so that it is exposed to the data control. Then, we can create a jspx page and add a tree binding to the VO created. We can now see our Tag Cloud declarative component available in the component palette. It can be inserted from the component palette into our page; set its value property to the CollectionModel of the tree binding created. Now that we've created the declarative component and added it to our page successfully, we can run the page to see how it looks. As per the query, the tags are displayed in different fonts, based on their weight.
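    The post does not show the exact font-size calculation, so the following Java sketch is an illustration only of one way the "Weight" attribute could be mapped to a font size per tag name. The linear scaling rule and the minimum/maximum pixel sizes are assumptions, not taken from the component source; the Name/Weight pairs mirror the sample VO query above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TagCloudSizer {
    // Map each tag's weight to a font size between minPx and maxPx using linear scaling.
    static Map<String, Integer> fontSizes(Map<String, Integer> tagWeights, int minPx, int maxPx) {
        int min = tagWeights.values().stream().min(Integer::compare).orElse(0);
        int max = tagWeights.values().stream().max(Integer::compare).orElse(1);
        Map<String, Integer> sizes = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : tagWeights.entrySet()) {
            double ratio = (max == min) ? 1.0 : (e.getValue() - min) / (double) (max - min);
            sizes.put(e.getKey(), (int) Math.round(minPx + ratio * (maxPx - minPx)));
        }
        return sizes;
    }

    public static void main(String[] args) {
        // Same Name/Weight pairs as the sample VO query in the post.
        Map<String, Integer> tags = new LinkedHashMap<>();
        tags.put("Faces", 25); tags.put("ADF", 15); tags.put("ADFdi", 30);
        tags.put("BC4J", 20); tags.put("EJB", 40); tags.put("WS", 35);
        fontSizes(tags, 10, 28).forEach((name, px) ->
                System.out.println(name + " -> font-size: " + px + "px"));
    }
}
```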

    Read the article

  • Cloud Computing Pricing - It's like a Hotel

    - by BuckWoody
    I normally don't go into the economics or pricing side of Distributed Computing, but I've had a few friends that have been surprised by a bill lately and I wanted to quickly address at least one aspect of it. Most folks are used to buying software and owning it outright - like buying a car. We pay a lot for the car, and then we use it whenever we want. We think of the "cloud" services as a taxi - we'll just pay for the ride we take and no more. But it's not quite like that. It's actually more like a hotel. When you subscribe to Azure using a free offering like the MSDN subscription, you don't have to pay anything for the service. But when you create an instance of a Web or Compute Role, Storage, that sort of thing, you can think of it as checking into a hotel room. You get the key, you pay for the room. For Azure, using bandwidth, CPU and so on is billed just as it states in the Azure Portal. So in effect there is a cost for the service and then a cost to use it, like water or power or any other utility. Where this bit some folks is that they created an instance, played around with it, and then left it running. No one was using it, no one was on - so they thought they wouldn't be charged. But they were. It wasn't much, but it was a surprise. They had the hotel room key, but they weren't in the room, so to speak. To add to their frustration, they had to talk to someone on the phone to cancel the account. I understand the frustration. Although we have all this spelled out in the sign-up area, not everyone has the time to read through all that. I get that. So why not make this easier? As an explanation, we bill for that time because the instance is still running, and we have to tie up resources to be available the second you want them, and that costs money. As far as being able to cancel from the portal, that's also something that needs to be clearer. You may not be aware that you can spin up instances using code - and so cancelling from the Portal would allow you to do the same thing. Since a mistake in code could erase all of your instances and the account, we make you call to make sure you're you and you really want to take it down. Not a perfect system by any means, but we'll evolve this as time goes on. For now, I wanted to make sure you're aware of what you should do. By the way, you don't have to cancel your whole account to avoid being billed. Just delete the instance from the portal and you won't be charged. You don't have to call anyone for that. And just FYI - you can download the SDK for Azure and never even hit the online version at all for learning and playing around. No sign-up, no credit card, PO, nothing like that. In fact, that's how I demo Azure all the time. Everything runs right on your laptop in an emulated environment.

    Read the article

  • The battle between Java vs. C#

    The battle between Java and C# has been a big debate amongst the development community over the last few years. Both languages have specific pros and cons based on the needs of a particular project. In general both languages utilize a similar coding syntax that is based on C++, and offer developers similar functionality. That being said, the communities supporting each of these languages are very different. The divide amongst the communities is much like the political divide in America, where the Java community would represent the Democrats and the .NET community would represent the Republicans. The Democratic Party is a proponent of the working class and the general population. Currently, Java is deeply entrenched in the open source community and is distributed freely to anyone who has an interest in using it. Open source communities rely on developers to keep a project alive by constantly contributing code to make applications better; essentially, code is developed by the community. This is in stark contrast to the C# community, which is typically a pay-to-play community, meaning that you must pay for code you want to use because it is developed as a product to be marketed and sold for a profit. This ties back into my reference to the Republicans because they typically represent the needs of business and personal responsibility. This is emphasized by the belief that code is a commodity and that it can be sold for a profit, which is in direct conflict with the laissez-faire beliefs of the open source community. Beyond the general differences between Java and C#, they also target two different environments. Java is developed to be environment independent and only requires that users have a Java virtual machine running in order for the Java code to execute. C#, on the other hand, typically targets any system running a Windows operating system with the appropriate version of the .NET Framework installed. However, recently there has been a push by a segment of the open source community, based around the Mono project, that lets C# code run on other, non-Windows operating systems. In addition, another feature of C# is that it compiles into an intermediate language, and this is what is executed when the program runs. Because C# is reduced down to the Common Intermediate Language (CIL), which is executed by the Common Language Runtime (CLR), it can be combined with other languages that are also compiled to CIL, like Visual Basic (VB) .NET and F#. The allowance of and interaction between multiple languages in the .NET Framework enables projects to utilize existing code bases regardless of the actual syntax, because they can all be compiled to CIL and executed as one codebase. As a software engineer I personally feel that it is really important to learn as many languages as you can, or at least be open to learning as many languages as you can, because no one language will work in every situation. In some cases Java may be a better choice for a project and in others it may be C#. It really depends on the requirements of a project and the time constraints. In addition, I feel that it is really important to concentrate on understanding the logic of programming and to be able to translate business requirements into technical requirements. If you can understand both programming logic and business requirements, then deciding which language to use is basically just choosing what syntax to write for a given business problem or need. With regard to code refactoring and dynamic languages, it really does not matter.
Eventually all projects will be refactored or decommissioned to allow for progress. This is the way of life in the software development industry. The language of a project should not be chosen based on the fact that a project will eventually be refactored because they all will get refactored.

    Read the article

  • Fast Data - Big Data's Achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions. Fast Data is a software solution stack that deals with one aspect of Big Data, high velocity. The software in the Fast Data solution stack involves 3 key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high throughput, low latency data management requirement.
    Oracle Event Processing enables continuous queries to filter the Big Data fire hose, enables intelligent chained events to drive real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time so that SQL statements can look for triggers on events like a breach of a weighted moving average on a real-time data stream.
    Oracle Coherence is a distributed, grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing.
    The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low latency reads - for example, when large sensor networks are generating data that needs to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra low latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis. Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location based personalization service.
    The question then becomes how do these things work together to deliver an end to end Fast Data solution. The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together. You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream processing side, it is similar to the Coherence case.
As your sliding windows get larger, holding all the data in the stream can become difficult and out of band data may need to be offloaded into persistent storage.  OEP needs an extreme speed database like Oracle NoSQL Database to help it continue to perform for the real time loop while dealing with persistent spill in the data stream.  Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together.  OEP & Oracle NoSQL Database.
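    To make the key-value ingest and low-latency read role of Oracle NoSQL Database concrete, here is a minimal Java sketch using the oracle.kv client API. The store name, helper host:port, and key layout are assumptions chosen for illustration, not details from the article.

```java
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SensorIngest {
    public static void main(String[] args) {
        // Hypothetical store name and helper host; adjust to your deployment.
        KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("kvstore", "nosqlhost:5000"));

        // Simple key-value write: the major path identifies the sensor, the minor path the field.
        Key key = Key.createKey(Arrays.asList("sensor", "42"), Arrays.asList("lastReading"));
        Value value = Value.createValue("23.7".getBytes(StandardCharsets.UTF_8));
        store.put(key, value);

        // Low-latency read of the same key.
        ValueVersion vv = store.get(key);
        System.out.println(new String(vv.getValue().getValue(), StandardCharsets.UTF_8));

        store.close();
    }
}
```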

    Read the article

  • At what point does "constructive" criticism of your code become unhelpful?

    - by user15859
    I recently started as a junior developer. As well as being one of the least experienced people on the team, I'm also a woman, which comes with all sorts of its own challenges working in a male-dominated environment. I've been having problems lately because I feel like I am getting too much unwarranted pedantic criticism on my work. Let me give you an example of what happened recently. The team lead was too busy to push in some branches I made, so he didn't get to them until the weekend. I checked my mail, not really meaning to do any work, and found that my two branches had been rejected on the basis of variable names, making error messages more descriptive, and moving some values to the config file. I don't feel that rejecting my branch on this basis is useful. Lots of people were working over the weekend, and I had never said that I would be working. Effectively, some people were probably blocked because I didn't have time to make the changes and resubmit. We are working on a project that is very time-sensitive, and it seems to me that it's not helpful to outright reject code based on things that are transparent to the client. I may be wrong, but it seems like these kinds of things should be handled in patch-type commits when I have time. Now, I can see that in some environments, this would be the norm. However, the criticism doesn't seem equally distributed, which is what leads to my next problem. The basis of most of these problems was the fact that I was in a codebase that someone else had written and was trying to be minimally invasive. I was mimicking the variable names used elsewhere in the file. When I stated this, I was bluntly told, "Don't mimic others, just do what's right." This is perhaps the least useful thing I could have been told. If the code that is already checked in is unacceptable, how am I supposed to tell what is right and what is wrong? If the basis of the confusion was coming from the underlying code, I don't think it's my responsibility to spend hours refactoring a whole file that someone else wrote (and works perfectly well), potentially introducing new bugs etc. I'm feeling really singled out and frustrated in this situation. I've gotten a lot better about following the standards that are expected, and I feel frustrated that, for example, when I refactor a piece of code to ADD error checking that was previously missing, I'm only told that I didn't make the errors verbose enough (and the branch was rejected on this basis). What if I had never added it to begin with? How did it get into the code to begin with if it was so wrong? This is why I feel so singled out: I constantly run into this existing problematic code, which I either mimic or refactor. When I mimic it, it's "wrong", and if I refactor it, I'm chided for not doing enough (and if I go all the way, I risk introducing bugs, etc.). Again, if this is such a problem, I don't understand how any code gets into the codebase, and why it becomes my responsibility when it was written by someone else, who apparently didn't have their code reviewed. Anyway, how do I deal with this? Please remember that I said at the top that I'm a woman, and I'm sure these guys don't usually have to worry about decorum when they're reviewing other guys' code, but honestly that doesn't work for me, and it's causing me to be less productive. I'm worried that if I talk to my manager about it, he'll think I can't handle the environment, etc.

    Read the article

  • Drawing random smooth lines contained in a square [migrated]

    - by Doug Mercer
    I'm trying to write a matlab function that creates random, smooth trajectories in a square of finite side length. Here is my current attempt at such a procedure:
    function [] = drawroutes( SideLength, v, t)
    %DRAWROUTES Summary of this function goes here
    %   Detailed explanation goes here
    %Some parameters intended to help keep the particles in the box
    RandAccel=.01;
    ConservAccel=0;
    speedlimit=.1;
    G=10^(-8);
    %
    %Initialize Matrices
    Ax=zeros(v,10*t);
    Ay=Ax;
    vx=Ax;
    vy=Ax;
    x=Ax;
    y=Ax;
    sx=zeros(v,1);
    sy=zeros(v,1);
    %
    %Define initial position in square
    x(:,1)=SideLength*.15*ones(v,1)+(SideLength*.7)*rand(v,1);
    y(:,1)=SideLength*.15*ones(v,1)+(SideLength*.7)*rand(v,1);
    %
    for i=2:10*t
        %Measure minimum particle distance component wise from boundary
        %for each vehicle
        BorderGravX=[abs(SideLength*ones(v,1)-x(:,i-1)),abs(x(:,i-1))]';
        BorderGravY=[abs(SideLength*ones(v,1)-y(:,i-1)),abs(y(:,i-1))]';
        rx=min(BorderGravX)';
        ry=min(BorderGravY)';
        %
        %Set the sign of the repulsive force
        for k=1:v
            if x(k,i)<.5*SideLength
                sx(k)=1;
            else
                sx(k)=-1;
            end
            if y(k,i)<.5*SideLength
                sy(k)=1;
            else
                sy(k)=-1;
            end
        end
        %
        %Calculate Acceleration w/ random "nudge" and repulsive force
        Ax(:,i)=ConservAccel*Ax(:,i-1)+RandAccel*(rand(v,1)-.5*ones(v,1))+sx*G./rx.^2;
        Ay(:,i)=ConservAccel*Ay(:,i-1)+RandAccel*(rand(v,1)-.5*ones(v,1))+sy*G./ry.^2;
        %
        %Ad hoc method of trying to slow down particles from jumping outside of
        %feasible region
        for h=1:v
            if abs(vx(h,i-1)+Ax(h,i))<speedlimit
                vx(h,i)=vx(h,i-1)+Ax(h,i);
            elseif (vx(h,i-1)+Ax(h,i))<-speedlimit
                vx(h,i)=-speedlimit;
            else
                vx(h,i)=speedlimit;
            end
        end
        for h=1:v
            if abs(vy(h,i-1)+Ay(h,i))<speedlimit
                vy(h,i)=vy(h,i-1)+Ay(h,i);
            elseif (vy(h,i-1)+Ay(h,i))<-speedlimit
                vy(h,i)=-speedlimit;
            else
                vy(h,i)=speedlimit;
            end
        end
        %
        %Update position
        x(:,i)=x(:,i-1)+(vx(:,i-1)+vx(:,i))/2;
        y(:,i)=y(:,i-1)+(vy(:,i-1)+vy(:,1))/2;
        %
    end
    %Plot position
    clf;
    hold on;
    axis([-100,SideLength+100,-100,SideLength+100]);
    cc=hsv(v);
    for j=1:v
        plot(x(j,1),y(j,1),'ko')
        plot(x(j,:),y(j,:),'color',cc(j,:))
    end
    hold off;
    %
    end
    My original plan was to place particles within a square and move them around by allowing their acceleration in the x and y direction to be governed by a uniformly distributed random variable. To keep the particles within the square, I tried to create a repulsive force that would push the particles away from the boundaries of the square. In practice, the particles tend to leave the desired "feasible" region after a relatively small number of time steps (say, 1000). I'd love to hear your suggestions on either modifying my existing code or considering the problem from another perspective. When reading the code, please don't feel the need to get hung up on any of the ad hoc parameters at the very beginning of the script. They seem to help, but I don't believe any besides the "G" constant should truly be necessary to make this system work. Here is an example of the current output: many of the vehicles have found their way outside of the desired square region, [0,400] X [0,400].

    Read the article

  • How do I get 5.1 surround sound working on an Acer Aspire 5738ZG?

    - by kbargais_LV
    I have a problem with sound. I tried everything but got no results. :( I have 3 sound ports. My daemon.conf:
    # This file is part of PulseAudio.
    #
    # PulseAudio is free software; you can redistribute it and/or modify
    # it under the terms of the GNU Lesser General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    #
    # PulseAudio is distributed in the hope that it will be useful, but
    # WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
    # General Public License for more details.
    #
    # You should have received a copy of the GNU Lesser General Public License
    # along with PulseAudio; if not, write to the Free Software
    # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
    # USA.
    ## Configuration file for the PulseAudio daemon. See pulse-daemon.conf(5) for
    ## more information. Default values are commented out. Use either ; or # for
    ## commenting.
    ; daemonize = no
    ; fail = yes
    ; allow-module-loading = yes
    ; allow-exit = yes
    ; use-pid-file = yes
    ; system-instance = no
    ; local-server-type = user
    ; enable-shm = yes
    ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
    ; lock-memory = no
    ; cpu-limit = no
    ; high-priority = yes
    ; nice-level = -11
    ; realtime-scheduling = yes
    ; realtime-priority = 5
    ; exit-idle-time = 20
    ; scache-idle-time = 20
    ; dl-search-path = (depends on architecture)
    ; load-default-script-file = yes
    ; default-script-file = /etc/pulse/default.pa
    ; log-target = auto
    ; log-level = notice
    ; log-meta = no
    ; log-time = no
    ; log-backtrace = 0
    resample-method = speex-float-1
    ; enable-remixing = yes
    ; enable-lfe-remixing = no
    flat-volumes = no
    ; rlimit-fsize = -1
    ; rlimit-data = -1
    ; rlimit-stack = -1
    ; rlimit-core = -1
    ; rlimit-as = -1
    ; rlimit-rss = -1
    ; rlimit-nproc = -1
    ; rlimit-nofile = 256
    ; rlimit-memlock = -1
    ; rlimit-locks = -1
    ; rlimit-sigpending = -1
    ; rlimit-msgqueue = -1
    ; rlimit-nice = 31
    ; rlimit-rtprio = 9
    ; rlimit-rttime = 1000000
    ; default-sample-format = s16le
    ; default-sample-rate = 44100
    ; default-sample-channels = 6
    ; default-channel-map = front-left,front-right
    default-fragments = 8
    default-fragment-size-msec = 10
    ; enable-deferred-volume = yes
    ; deferred-volume-safety-margin-usec = 8000
    ; deferred-volume-extra-delay-usec = 0

    Read the article

  • Data Source Security Part 1

    - by Steve Felts
    I’ve written a couple of articles on how to store data source security credentials using the Oracle wallet.  I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources.  There are more options than you might think! There have been several enhancements in this area in WLS 10.3.6.  There are a couple of more enhancements planned for release WLS 12.1.2 that I will include here for completeness.  This isn’t intended as a teaser.  If you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.   The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics.  It also seems that the knowledge of how to apply some of these features isn’t written down.  The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.  Introduction to WebLogic Data Source Security Options By default, you define a single database user and password for a data source.  You can store it in the data source descriptor or make use of the Oracle wallet.  This is a very simple and efficient approach to security.  All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out.  That is, it’s a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity).  Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS.   No additional information is needed when you get a connection because it’s all available from the data source descriptor (or wallet). java.sql.Connection conn =  mydatasource.getConnection(); Note: You can enter the password as a name-value pair in the Properties field (this not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC Driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console.  The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab. The JDBC API can also be used to programmatically specify a database user name and password as in the following.  java.sql.Connection conn = mydatasource.getConnection(“user”, “password”); According to the JDBC specification, it’s supposed to take a database user and associated password but different vendors implement this differently.  WLS, by default, treats this as an application server user and password.  The pair is authenticated to see if it’s a valid user and that user is used for WLS security permission checks.  By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification but database credentials are one-step removed from the application code.  More details and the rationale are described later. 
    While the default approach is simple, it does mean that only one database user is doing all of the work. You can't figure out who actually did the update and you can't restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.
    WebLogic Data Source Security Options
    The following list describes the features available for WebLogic data sources to configure database security credentials, with a brief description of each. It also captures information about the compatibility of these features with one another.
    - User authentication (default): Default getConnection(user, password) behavior – validate the input and use the user/password in the descriptor. Can be used with: Set client identifier. Can't be used with: Proxy session, Identity pooling, Use database credentials.
    - Use database credentials: Instead of using the credential mapper, use the supplied user and password directly. Can be used with: Set client identifier, Proxy session, Identity pooling. Can't be used with: User authentication, Multi Data Source.
    - Set client identifier: Set a client identifier property associated with the connection (Oracle and DB2 only). Can be used with: everything.
    - Proxy session: Set a light-weight proxy user associated with the connection (Oracle only). Can be used with: Set client identifier, Use database credentials. Can't be used with: Identity pooling, User authentication.
    - Identity pooling: Heterogeneous pool of connections owned by specified users. Can be used with: Set client identifier, Use database credentials. Can't be used with: Proxy session, User authentication, Labeling, Multi Data Source, Active GridLink.
    Note that all of these features are available with both XA and non-XA drivers. Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases – oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail. Keep referring back to this list to see the big picture.
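    A minimal Java sketch of the two getConnection() styles described above, looking the data source up through JNDI. The JNDI name and the user names are placeholders, and remember that by default the user/password passed to getConnection() are WebLogic credentials that the credential mapper translates into database credentials.

```java
import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;

public class DataSourceSecurityDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder JNDI name for the WLS data source.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDataSource");

        // Default behavior: every pooled connection uses the single database
        // user/password stored in the data source descriptor (or wallet).
        try (Connection conn = ds.getConnection()) {
            // ... run SQL as the data source's configured database user
        }

        // Per-user call: by default "weblogicUser"/"weblogicPassword" are validated
        // as WLS credentials and then mapped to a database user via the data source
        // credential mapper (unless "use database credentials" is enabled).
        try (Connection conn = ds.getConnection("weblogicUser", "weblogicPassword")) {
            // ... run SQL as the mapped database user
        }
    }
}
```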

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, Identity-based pooling. Then, there is one more topic to cover - how these options play with transactions.
    Identity-based Connection Pooling
    An identity based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, based on the "use database credentials" setting as described earlier. Using this feature with "use database credentials" enabled seems to be what is proposed in the JDBC standard, basically a heterogeneous pool with users specified by getConnection(user, password).
    The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. The following section provides information on how heterogeneous connections are created:
    1. At connection pool initialization, the physical JDBC connections based on the configured or default "initial capacity" are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from a data source.
    3a. If "use database credentials" is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn't have a matching user, the default DBMS credential is used from the data source descriptor.
    3b. If "use database credentials" is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
    - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
    - If the pool has reached maximum capacity, based on the least recently used (LRU) algorithm, a physical connection is selected from the pool and destroyed. A new connection is created with the DBMS credential, reserved, and returned to the application.
    It should be clear that finding a matching connection is more expensive than a homogeneous pool. Destroying a connection and getting a new one is very expensive. If you can use a normal homogeneous pool or one of the light-weight options (client identity or an Oracle proxy connection), those should be used instead of identity based pooling.
    Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential even if the current thread changes its WebLogic user credential and continues to use the same connection.
    To configure this feature, select Enable Identity Based Connection Pooling.
    See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html "Enable identity-based connection pooling for a JDBC data source" in Oracle WebLogic Server Administration Console Help.
    You must make the following changes to use Logging Last Resource (LLR) transaction optimization with Identity-based Pooling to get around the problem that multiple users will be accessing the associated transaction table.
    - You must configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users.
    Connections within Transactions
    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules. When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance.
    When getting a connection with a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC.
    For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.
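    A minimal sketch of what the allocation steps above look like from application code when Enable Identity Based Connection Pooling and "use database credentials" are both enabled on the data source. The JNDI name and the database users are placeholders for illustration.

```java
import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;

public class IdentityPoolingDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder JNDI name for an identity-pooling-enabled data source.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/identityPoolDS");

        // With "use database credentials" enabled, these are real DBMS credentials.
        // The pool is searched for a physical connection already owned by SCOTT;
        // if none exists one is created (or an LRU victim is destroyed at max capacity).
        Connection scottConn = ds.getConnection("SCOTT", "scottPassword");

        // A second identity gets its own physical connection in the same
        // heterogeneous pool, owned by HR at the database.
        Connection hrConn = ds.getConnection("HR", "hrPassword");

        // ... each connection performs work under its own DBMS identity ...

        scottConn.close();  // returns the SCOTT-owned connection to the pool
        hrConn.close();     // returns the HR-owned connection to the pool
    }
}
```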

    Read the article

  • Adaptive Connections For ADFBC

    - by Duncan Mills
    Some time ago I wrote an article on Adaptive Bindings showing how the pageDef for an ADF UI does not have to be wedded to a fixed data control or collection / View Object. This article has proved pretty popular, so as a follow up I wanted to cover another "Adaptive" feature of your ADF applications: the ability to make multiple different connections from an Application Module, at runtime. Now, I'm sure you'll be aware that if you define your application to use a data-source rather than a hard-coded JDBC connection string, then you have the ability to change the target of that data-source after deployment to point to a different database. So that's great, but the reality of that is that this single connection is effectively fixed within the application, right? Well no, this it turns out is a common misconception. To be clear, yes, a single instance of an ADF Application Module is associated with a single connection, but there is nothing to stop you from creating multiple instances of the same Application Module within the application, all pointing at different connections. In fact this has been possible for a long time using a custom extension point with code which extends oracle.jbo.http.HttpSessionCookieFactory. This approach, however, involves writing code and no-one likes to write any more code than they need to, so, is there an easier way? Yes indeed. It is in fact a little-publicized feature that's available in all versions of 11g, the ELEnvInfoProvider.
    What Does it Do?
    The ELEnvInfoProvider is a pre-existing class (the full path is oracle.jbo.client.ELEnvInfoProvider) which you can plug into your Application Module configuration using the jbo.envinfoprovider property. Visually you can set this in the editor, or you can also set it directly in the bc4j.xcfg (see below for an example). Once you have plugged in this envinfoprovider, here's the fun bit: rather than defining the hard-coded name of a datasource, you can instead plug in an EL expression for the connection to use. So what's the benefit of that? Well, it allows you to defer the selection of a connection until the point in time that you instantiate the AM. To define the expression itself you'll need to do a couple of things: First of all you'll need a managed bean of some sort – e.g. a sessionScoped bean defined in your ViewController project. This will need a getter method that returns the name of the connection. Now this connection itself needs to be defined in your Application Server, and can be managed through Enterprise Manager, WLST or through MBeans. (You may need to read the documentation [http://docs.oracle.com/cd/E28280_01/web.1111/b31974/deployment_topics.htm#CHDJGBDD] here on how to configure connections at runtime if you're not familiar with this.) The EL expression (e.g. ${connectionManager.connection}) is then defined in the configuration by editing the bc4j.xcfg file (there is a hyperlink directly to this file on the configuration editing screen in the Application Module editor). You simply replace the hardcoded JDBCName value with the expression.
    So your cfg file would end up looking something like this (notice the reference to the ELEnvInfoProvider that I talked about earlier):
    <BC4JConfig version="11.1" xmlns="http://xmlns.oracle.com/bc4j/configuration">
      <AppModuleConfigBag ApplicationName="oracle.demo.model.TargetAppModule">
        <AppModuleConfig DeployPlatform="LOCAL" JDBCName="${connectionManager.connection}" jbo.project="oracle.demo.model.Model" name="TargetAppModuleLocal" ApplicationName="oracle.demo.model.TargetAppModule">
          <AM-Pooling jbo.doconnectionpooling="true"/>
          <Database jbo.locking.mode="optimistic"/>
          <Security AppModuleJndiName="oracle.demo.model.TargetAppModule"/>
          <Custom jbo.envinfoprovider="oracle.jbo.client.ELEnvInfoProvider"/>
        </AppModuleConfig>
      </AppModuleConfigBag>
    </BC4JConfig>
    Still Don't Quite Get It?
    So far you might be thinking, well that's fine but what difference does it make if the connection is resolved "just in time" rather than up front and changed as required through Enterprise Manager? Well, a trivial example would be where you have a single application deployed to your application server, but for different users you want to connect to different databases. Because the evaluation of the connection is deferred until you first reference the AM, you have a decision point that can take the user identity into account.
    However, think about it for a second. Under what circumstances does a new AM get instantiated? Well, at the first reference of the AM within the application, yes, but also whenever a Task Flow is entered - if the data control scope for the Task Flow is ISOLATED. So the reality is that on a single screen you can embed multiple Task Flows, all of which are pointing at different database connections concurrently. Hopefully you'll find this feature useful, let me know...
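    The article references a managed bean behind the ${connectionManager.connection} expression but does not list it, so here is a minimal sketch of what such a session-scoped bean might look like. The bean name matches the EL expression above; the data source names, the user-to-connection rule, and the use of annotation-based registration are assumptions for illustration only.

```java
import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;
import oracle.adf.share.ADFContext;

@ManagedBean(name = "connectionManager")
@SessionScoped
public class ConnectionManager {

    // Called when the ELEnvInfoProvider evaluates ${connectionManager.connection}
    // at the point the Application Module is instantiated.
    public String getConnection() {
        // Pick a data source (defined on the application server) based on the
        // current user; the names and the rule here are purely illustrative.
        String user = ADFContext.getCurrent().getSecurityContext().getUserName();
        return "partner".equals(user) ? "jdbc/PartnerDS" : "jdbc/DefaultDS";
    }
}
```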

    Read the article

  • The new direction of the gaming industry

    - by raccoon_tim
    Just recently I read a great blog post by David Darling, the founder of Codemasters: http://www.develop-online.net/blog/347/Jurassic-consoles-could-become-extinct. In the blog post he talks about how traditional retail games are experiencing a downfall thanks to the increasing popularity of digital distribution. I personally think of retail games as relics of the past. It does not really make much sense to keep distributing boxed games when the same game can be elegantly downloaded and updated over the air through a digital distribution channel. The world is not all rainbows, however. One big issue with mixing digital distribution with boxed retail games is that resellers will not condone you selling your game for 10€ digitally while they're selling the same game for 70€. The only way to get around this issue is to move to full digital distribution. This has the added benefit of minimizing piracy, as the game can be tightly bound to the service you downloaded the game from. Many players are, however, complaining about not being able to play the games offline. Having games tightly bound to the internet is a problem when games are bought from a retailer, as we tend to expect that once we have the product we can use it anywhere because we physically own it. The truth is that we don't actually own the product. Instead, the typical EULA actually states that we only have a license to use the product. We're not, for instance, allowed to disassemble the product, which the owner is indeed permitted to do. Digital distribution allows us to provide games as services, instead of selling them as standalone products. This means that for a service to work you have to be connected to the internet, but you still have the same rights to use the product. It's really straightforward; if you downloaded a client from the internet you are expected to have an internet connection so you're able to connect to the server. A game distributed digitally that is built using a client-server architecture has the added benefit of allowing you to play anywhere as long as you have the client installed and you are able to log in with your user information. Your save games can be backed up and your game can continue anywhere. Another development we're seeing in the gaming industry is the increasing popularity of free-to-play games. These are games that let you play for free but allow you to boost your gaming experience with real world money. The nature of these games is that players are constantly rewarded with new content, the game can evolve according to their way of playing, and their wishes can be incorporated into the product. Free-to-play games can quickly gain a large player base, and monetization is done by providing players valuable things to buy, making their gaming experience more fun. I am personally very excited about free-to-play games as it's possible to start building the game together with your players, and there is no need to work on the game for 5 years from start to finish and only then see if it's actually something the players like. This is a typical problem with big movie-like retail games, and recent news about Radical Entertainment practically closing its doors paints a clear picture of what can happen when the risk does not pay off: http://news.teamxbox.com/xbox/25874/Prototype-Developer-Radical-Entertainment-Closes/.

    Read the article

  • How to programmatically add x509 certificate to local machine store using c#

    - by David
    I understand the question title may be a duplicate, but I have not found an answer for my situation yet, so here goes. I have this simple piece of code:

    // Convert the Filename to an X509 Certificate
    X509Certificate2 cert = new X509Certificate2(certificateFilePath);
    // Get the server certificate store
    X509Store store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);
    store.Open(OpenFlags.MaxAllowed);
    store.Add(cert); // x509 certificate created from a user-supplied filename

    But I keep being presented with an "Access Denied" exception. I have read some information that suggests using StorePermissions would solve my issue, but I don't think this is relevant to my code. Having said that, I did test it to be sure and I couldn't get it to work. I also found suggestions that changing folder permissions within Windows was the way to go, and while this may work (not tested), it doesn't seem practical for what will become distributed code. I also have to add that as the code will be running as a service on a server, adding the certificates to the current user store also seems wrong. Is there any way to programmatically add a certificate into the local machine store?
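
    A minimal sketch of what to try first (an assumption on my part, not a confirmed fix): open the store with OpenFlags.ReadWrite instead of MaxAllowed and run the process under an account that has write access to the LocalMachine store. The class and method names below are illustrative only.

    using System.Security.Cryptography.X509Certificates;

    class CertInstaller // hypothetical wrapper around the snippet above
    {
        static void Install(string certificateFilePath)
        {
            var cert = new X509Certificate2(certificateFilePath);
            var store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);
            // Request write access explicitly; MaxAllowed can hand back a read-only handle
            // when the caller lacks write permission on the machine store.
            store.Open(OpenFlags.ReadWrite | OpenFlags.OpenExistingOnly);
            try
            {
                store.Add(cert);
            }
            finally
            {
                store.Close(); // X509Store is not IDisposable in .NET 3.5
            }
        }
    }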

    Read the article

  • Exception "The operation is not valid for the state of the transaction" using TransactionScope

    - by Lanfear
    We have a web service on server #1 and a database on server #2. The web service uses a transaction scope to produce a distributed transaction, and everything works correctly. We also have another database on server #3. We had some problems with this server, so we reinstalled the operating system and software. We configured MSDTC and tried to use the web service on server #1 to communicate with the database on this server. Now, after the first select statement within the transaction scope, we get: "The operation is not valid for the state of the transaction". This exception is thrown on every web service request that uses a transaction scope. Server #2 and server #3 are almost identical; the difference can only be in settings. .NET Framework 3.5 SP1 and SQL Server SP3 are installed on all servers. Full stack trace:
    at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
    at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
    at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction t)
    at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction t)
    at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
    at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
    at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
    at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
    at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
    at System.Data.SqlClient.SqlConnection.Open()
    at NHibernate.Connection.DriverConnectionProvider.GetConnection()
    at NHibernate.Impl.SessionFactoryImpl.OpenConnection()
    I searched for this message but didn't find an appropriate solution. So what settings should I check, and what exactly should I do to fix it?
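
    For reference, a minimal repro sketch of the pattern that reaches the failing EnlistPromotableSinglePhase call - opening a connection inside an ambient TransactionScope (the connection string is a placeholder). If even this fails against server #3 but works against server #2, the problem is more likely MSDTC/network configuration than the application code.

    using System;
    using System.Data.SqlClient;
    using System.Transactions;

    class TxRepro
    {
        static void Main()
        {
            // Placeholder connection string pointing at the rebuilt server #3.
            var connectionString = "Data Source=server3;Initial Catalog=MyDb;Integrated Security=SSPI";
            using (var scope = new TransactionScope())
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open(); // the connection enlists in the ambient transaction here
                using (var cmd = new SqlCommand("select 1", conn))
                    cmd.ExecuteScalar();
                scope.Complete();
            }
        }
    }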

    Read the article

  • iPhone: Software Development And Distribution

    - by xsl
    I have a few quick questions about iPhone software development. I did some research on the topic, but there are a few specific things I would like to ask here, because I will have to estimate the cost of the required hardware and software before I am allowed to buy anything. I have never done any Mac development, nor have I ever owned an iPhone, so needless to say this is quite hard for me. I will buy a Mac mini with 2 GB RAM for iPhone development. I will have to use it at the same time as my regular PC, but the majority of the time I won't use the Mac at all. Do I have to buy an additional monitor, a mouse and a keyboard, or is there a better solution? I will have to port a C library to the iPhone platform and develop an iPhone application that uses the ported library. Do I need anything other than the iPhone SDK to do this? If I use an external library (see above), can I test the application with the integrated simulator, or is it recommended to buy the device? In a later phase of the project I will have to buy an iPhone, but I will have to wait until the iPhone 4 is released here in Europe, because the application requires a camera. In addition to this, I will have to send data to a remote web service. Aside from these two things I don't require any other features. Can I just buy the iPhone online from another country (the iPhones here are SIM-locked), or should I buy one with a contract? When the application is ready, it will be installed on a few iPhones owned by our customer. For security reasons it is crucial that no third party is involved in this process (i.e. the application should not be distributed on the App Store). Is this possible?

    Read the article

  • Notification Email Best Practices--From Server Setup to Programming

    - by Andrew Wagner
    All, I'm in the process of building a SaaS tool that allows network admins to generate notification emails to the end-users of our platform (among many, many other things). I'm running into a bit of an "out of my expertise" wall, as I know there are a lot of variables involved in configuring an application that can run in a distributed way via load balancing and still: leverage a single mail server for sending notification emails, process unsubscribe requests, and avoid any ISP blacklisting in the process. If anyone has the time and has done this before, I'd love it if you could walk me through the A-Z of best practices, both from a configuration perspective and an execution perspective, for generating these emails (anything from necessary DNS settings to ideal SMTP setup and configuration). Currently, our application generates email via Google Apps using the PHPMailer class. While this works well, it doesn't queue messages (a potential source of timeout problems if any of our clients amass a very large list of end-users), and Google limits the number of generated email messages to 500/day. I know this is a lofty question, but any guidance you could provide would be smashing and a big help as we work through this hurdle in our beta development stage. Thanks!

    Read the article

  • Binary serialization/de-serialization in C++ and C#

    - by 6pack kid
    Hello. I am working on a distributed application which has two components. One is written in standard C++ (not managed C++) and the other is written in C#. Both communicate via a message bus. I have a situation in which I need to pass objects from the C++ application to the C# application, and for this I need to serialize those objects in C++ and de-serialize them in C# (something like marshaling/un-marshaling in .NET). I need to perform this serialization in binary, not XML, for performance reasons. I have used Boost.Serialization to do this when both ends were implemented in C++, but now that I have a .NET application on one end, Boost.Serialization is not a viable solution. I am looking for a solution that allows me to perform (de)serialization across the C++/.NET boundary, i.e., cross-platform binary serialization. I know I can implement the (de)serialization code in a C++ DLL and use P/Invoke in the .NET application, but I want to keep that as a last resort. Also, I want to know: if I use some standard like gzip, will that be efficient? Are there any other alternatives to gzip? What are their pros and cons? Thanks
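
    For what it's worth, a hedged sketch of the hand-rolled option on the C# side: agree on an explicit byte layout (field order, endianness, length-prefixed UTF-8 strings) and mirror it in the C++ reader. The message shape below is invented for illustration; IDL-based tools such as Protocol Buffers or Thrift are the usual alternatives to rolling this by hand, and note that gzip is compression rather than serialization, so it would sit on top of whichever format you pick.

    using System.IO;
    using System.Text;

    // Hypothetical message with layout [int32 id][int32 byteCount][UTF-8 bytes].
    // BinaryWriter emits little-endian integers, so the C++ side must read them the same way.
    static class WireFormat
    {
        public static byte[] Serialize(int id, string name)
        {
            using (var ms = new MemoryStream())
            using (var writer = new BinaryWriter(ms))
            {
                writer.Write(id);
                byte[] bytes = Encoding.UTF8.GetBytes(name);
                writer.Write(bytes.Length);
                writer.Write(bytes);
                writer.Flush();
                return ms.ToArray();
            }
        }

        public static void Deserialize(byte[] buffer, out int id, out string name)
        {
            using (var ms = new MemoryStream(buffer))
            using (var reader = new BinaryReader(ms))
            {
                id = reader.ReadInt32();
                int byteCount = reader.ReadInt32();
                name = Encoding.UTF8.GetString(reader.ReadBytes(byteCount));
            }
        }
    }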

    Read the article

  • How do I install LFE on Ubuntu Karmic?

    - by karlthorwald
    Erlang was already installed:

    $ dpkg -l | grep erlang
    ii erlang             1:13.b.3-dfsg-2ubuntu2  Concurrent, real-time, distributed function
    ii erlang-appmon      1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application monitor
    ii erlang-asn1        1:13.b.3-dfsg-2ubuntu2  Erlang/OTP modules for ASN.1 support
    ii erlang-base        1:13.b.3-dfsg-2ubuntu2  Erlang/OTP virtual machine and base applica
    ii erlang-common-test 1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application for automated testin
    ii erlang-debugger    1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application for debugging and te
    ii erlang-dev         1:13.b.3-dfsg-2ubuntu2  Erlang/OTP development libraries and header
    [... many more]

    Erlang seems to work:

    $ erl
    Erlang R13B03 (erts-5.7.4) [source] [64-bit] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false]
    Eshell V5.7.4 (abort with ^G)
    1>

    I downloaded lfe from github and checked out 0.5.2:

    git clone http://github.com/rvirding/lfe.git
    cd lfe
    git checkout -b local0.5.2 e207eb2cad

    $ configure
    configure: command not found
    $ make
    mkdir -p ebin
    erlc -I include -o ebin -W0 -Ddebug +debug_info src/*.erl
    #erl -I -pa ebin -noshell -eval -noshell -run edoc file src/leex.erl -run init stop
    #erl -I -pa ebin -noshell -eval -noshell -run edoc_run application "'Leex'" '"."' '[no_packages]'
    #mv src/*.html doc/

    Must be something stupid I missed :o

    $ sudo make install
    make: *** No rule to make target `install'. Stop.
    $ erl -noshell -noinput -s lfe_boot start
    {"init terminating in do_boot",{undef,[{lfe_boot,start,[]},{init,start_it,1},{init,start_em,1}]}}
    Crash dump was written to: erl_crash.dump
    init terminating in do_boot ()

    Is there an example of how I would create a hello world source file, compile it, and run it?

    Read the article

  • Standard Apache (not OHS) with mod_osso for Single Signon

    - by Markos Fragkakis
    The mod_osso.so module (the Apache plugin for Single Signon, provided by Oracle) is distributed with the Oracle HTTP Server (OHS), which is essentially a modified Apache. I am trying to use it on the standard Apache HTTP Server and have not managed to get it to work. Configuration: Apache 2.2.15, OHS from the Oracle Web Tier Tools 11.1.1.2.0, Red Hat Linux 64-bit. I have: included the module in the modules directory (copied from the corresponding modules dir in OHS); included the libraries libiau.so and libclutsh.so.11.1 from Oracle Home (the absence of these libraries produced an error on starting Apache); produced an osso.conf using the ssoreg.sh tool provided with OID (the LDAP implementation of Oracle); and created the required mod_osso.conf file, which I included in httpd.conf. The error I get when starting Apache is this:

    # /opt/apache_sso/bin/apachectl -k start
    httpd: Syntax error on line 1075 of /opt/apache_sso/conf/httpd.conf: Syntax error on line 1 of /opt/apache_sso/conf/mod_osso.conf: Cannot load /opt/apache_sso/modules/mod_osso.so into server: /opt/apache_sso/modules/mod_osso.so: undefined symbol: _audit_authentication_request

    My mod_osso.conf:

    # cat /opt/apache_sso/conf/mod_osso.conf
    LoadModule osso_module modules/mod_osso.so
    <IfModule mod_osso.c>
      OssoIdleTimeout off
      OssoIpCheck on
      OssoConfigFile conf/osso.conf
      #Location is the URI you want to protect
      <Location /myapp>
        require valid-user
        #OHS 11g AuthType Osso
        #OHS 10g AuthType Basic
        AuthType Osso
      </Location>
    </IfModule>

    Has anyone made mod_osso work on the standard Apache HTTP Server?

    Read the article

  • LINQ-to-SQL: ExecuteQuery(Type, String) populates one field, but not the other

    - by Daniel Schaffer
    I've written an app that I use as an agent to query data from a database and automatically load it into my distributed web cache. I do this by specifying an sql query and a type in a configuration. The code that actually does the querying looks like this: List<Object> result = null; try { result = dc.ExecuteQuery(elementType, entry.Command).OfType<Object>().ToList(); } catch (Exception ex) { HandleException(ex, WebCacheAgentLogEvent.DatabaseExecutionError); continue; } elementType is a System.Type created from the type specified in the configuration (using Type.GetType()), and entry.Command is the SQL query. The specific entity type I'm having an issue with looks like this: public class FooCount { [Column(Name = "foo_id")] public Int32 FooId { get; set; } [Column(Name = "count")] public Int32 Count { get; set; } } The SQL query looks like this: select foo_id as foo_id, sum(count) as [count] from foo_aggregates group by foo_id order by foo_id For some reason, when the query is executed, the "Count" property ends up populated, but not the "FooId" property. I tried running the query myself, and the correct column names are returned, and the column names match up with what I've specified in my mapping attributes. Help!
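
    One thing worth trying - an assumption on my part, since the non-generic ExecuteQuery overload may fall back to matching result columns against member names rather than the [Column] mapping when the type is not a mapped table - is to alias the columns to the CLR property names:

    // Hedged workaround sketch: alias the SQL columns to the property names so that
    // name-based matching populates both FooId and Count.
    var sql = @"select foo_id as FooId, sum([count]) as [Count]
                from foo_aggregates
                group by foo_id
                order by foo_id";
    var rows = dc.ExecuteQuery(typeof(FooCount), sql).OfType<FooCount>().ToList();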

    Read the article

  • code review: Is it subjective or objective (quantifiable)?

    - by Ram
    I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one, and our team is geographically distributed. We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but migrated that to JIRA) with VS2008 for development. What are the things you look for when doing a code review? These are the things I came up with: enforce FXCop rules (we are a Microsoft shop); check for performance (any tools?), security (thinking about using OWASP Code Crawler) and thread safety; adhere to naming conventions; the code should cover edge cases and boundary conditions; exceptions should be handled correctly (do not swallow exceptions); check if the functionality is duplicated elsewhere; method bodies should be small (20-30 lines), and methods should do one thing and one thing only (no side effects, avoid temporal coupling); do not pass/return nulls in methods; avoid dead code; document public and protected methods/properties/variables. What other things do you generally look for? I am trying to see if we can quantify the review process (so that it would produce identical output when reviewed by different people). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would differ from one reviewer to another)? The objective is to have a marking system (say, -1 point for each FXCop rule violation, -2 points for not following naming conventions, +2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who are consistently writing good/bad code. The goal is to have the reviewer spend about 30 minutes max on a review (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture, etc., but you get the general idea - the reviewer should not spend days reviewing someone's code). What other objective/quantifiable system do you follow to identify good/bad code written by developers? Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin.
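
    As a toy illustration of the proposed marking system (the weights are just the ones mentioned above, not a recommendation):

    // Hedged sketch: a per-changeset review score using the suggested weights.
    static class ReviewScoring
    {
        public static int Score(int fxCopViolations, int namingViolations, int refactorings)
        {
            // -1 per FXCop rule violation, -2 per naming-convention violation, +2 per refactoring.
            return (-1 * fxCopViolations) + (-2 * namingViolations) + (2 * refactorings);
        }
    }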

    Read the article

  • [OffTopic] PhD is it worth it?

    - by Zenzen
    So I'll be graduating next year (hopefully), and lately I've started thinking about getting a PhD in computer science (to be more precise, I was thinking about something related to "distributed systems"; I don't have any specific topics in mind yet), but is it really worth it? A PhD course here lasts 4 years and is free IF you're doing the "normal" one (which means you have classes from Monday to Friday); you have to pay if you want to study on the weekends. So I've been thinking about either getting a normal full-time job (as a JEE programmer) and doing my PhD on the weekends, OR doing the normal PhD course (Mon-Fri) and getting a part-time job as a software dev (the main reason I want a job is simple - I need to eat). That would give me practical working experience and theoretical knowledge plus a PhD, but it would also mean working 7 days a week from 9 to 5 for 4 years straight (sounds like overkill). Is it, job-wise, really worth getting a PhD? As an employer, do you prefer people with PhDs or MScs? This might be kind of important - I'm not from the US or Western Europe. How are PhDs from other countries seen (I am graduating from one of the best, if not the best, school in my country, but it's still a Central European country...)? Are they useless? I won't hide it - after graduation I am thinking about moving abroad.

    Read the article
