Search Results

Search found 271 results on 11 pages for 'turing completeness'.

Page 7 of 11

  • NFS server generating "invalid extent" on EXT4 system disk?

    - by Stephen Winnall
    I have a server running Xen 4.1 with Oneiric in the dom0 and in each of the 4 domUs. The system disks of the domUs are LVM2 volumes built on top of an mdadm RAID1. All the domU system disks are EXT4 and are created using snapshots of the same original template. 3 of them run perfectly, but one (called s-ub-02) keeps on being remounted read-only. A subsequent e2fsck results in a single "invalid extent" diagnosis:

      e2fsck 1.41.14 (22-Dec-2010)
      /dev/domu/s-ub-02-root contains a file system with errors, check forced.
      Pass 1: Checking inodes, blocks, and sizes
      Inode 525418 has an invalid extent (logical block 8959, invalid physical block 0, len 0) Clear<y>? yes
      Pass 2: Checking directory structure
      Pass 3: Checking directory connectivity
      Pass 4: Checking reference counts
      Pass 5: Checking group summary information
      /dev/domu/s-ub-02-root: 77757/655360 files (0.3% non-contiguous), 360592/2621440 blocks

    The console typically shows the following errors for the system disk (xvda2):

      [101980.903416] EXT4-fs error (device xvda2): ext4_ext_find_extent:732: inode #525418: comm apt-get: bad header/extent: invalid extent entries - magic f30a, entries 12, max 340(340), depth 0(0)
      [101980.903473] EXT4-fs (xvda2): Remounting filesystem read-only

    I have created new versions of the system disk. The same thing always happens. This, and the fact that the disk is ultimately on a RAID1, leads me to rule out a hardware disk error. The only obvious distinguishing feature of this domU is the presence of nfs-kernel-server, so I suspect that. Its exports file looks like this:

      /exports/users 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
      /exports/media/music 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
      /exports/media/pictures 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)
      /exports/opt 192.168.0.0/255.255.248.0(rw,sync,no_subtree_check)

    /exports/users and /exports/opt are LVM2 volumes from the same volume group as the system disk. /exports/media is an EXT2 volume. (There is an issue where clients see /exports/media/pictures as a read-only volume, which I mention for completeness.) With the exception of that read-only problem, the NFS server appears to work correctly under light load for several hours before the "invalid extent" problem occurs. There are no helpful entries in /var/log. All of a sudden, no more files are written, so you can see when the disk was remounted read-only, but there is no indication of what the cause might be. Can anyone help me with this problem? Steve

    Read the article

  • Games at Work Part 2: Gamification and Enterprise Applications

    - by ultan o'broin
    Gamification and Enterprise Applications

    In part 1 of this article, we explored why people are motivated to play games so much. Now, let's think about what that means for Oracle applications user experience. (Even the coffee is gamified. Acknowledgement @noelruane. Check out the Guardian article Dublin's Frothing with Tech Fever. Game development is big business in Ireland too.) Applying game dynamics (gamification) effectively in the enterprise applications space to reflect business objectives is now a hot user experience topic. Consider, for example, how such dynamics could solve applications users' problems such as:

      Becoming familiar or expert with an application or process
      Building loyalty, customer satisfaction, and branding relationships
      Collaborating effectively and populating content in the community
      Completing tasks or solving problems on time
      Encouraging teamwork to achieve goals
      Improving data accuracy and completeness of entry
      Locating and managing the correct resources or information
      Managing changes and exceptions
      Setting and reaching targets, quotas, or objectives

    Games' Incentives, Motivation, and Behavior

    I asked Julian Orr, Senior Usability Engineer in the Oracle Fusion Applications CRM User Experience (UX) team, for his thoughts on what potential gamification might offer Oracle Fusion Applications. Julian pointed to the powerful incentives offered by games as the starting place: "The biggest potential for gamification in enterprise apps is as an intrinsic motivator. Mechanisms include fun, social interaction, teamwork, primal wiring, adrenaline, financial, closed-loop feedback, locus of control, flow state, and so on. But we need to know what works best for a given work situation." For example, in CRM service applications, we might look at the motivations of typical service applications users (see figure 1) and then determine how we can 'gamify' these motivations with techniques to optimize the desired work behavior for the role (see figure 2).

    Involving Our Users

    Online game players are skilled collaborators as well as problem solvers. Erika Webb (@erikanollwebb), Oracle Fusion Applications UX Manager, has run gamification events for Oracle, including one on collaboration and gamification in Oracle online communities that involved Oracle customers and partners. However, let's be clear: gamifying a user interface that's poorly designed is merely putting the lipstick of gamification on the pig of work. Gamification cannot replace good design and killer content based on understanding how applications users really work and what motivates them.

    So, Let the Games Begin!

    Gamification has tremendous potential for the enterprise application user experience. The Oracle Fusion Applications UX team is innovating fast and hard in this area, researching with our users how gamification can make work more satisfying and enterprises more productive. If you're interested in knowing more about our gamification research, sign up for more information or check out how your company can get involved through the Oracle Usability Advisory Board. Your thoughts? Find the comments.

    Read the article

  • Map and fill texture using PBO (OpenGL 3.3)

    - by NtscCobalt
    I'm learning OpenGL 3.3, trying to do the following (as it is done in D3D):

      Create Texture of Width, Height, Pixel Format
      Map texture memory
      Loop write pixels
      Unmap texture memory
      Set Texture
      Render

    Right now though it renders as if the entire texture is black. I can't find a reliable source of information on how to do this, though. Almost every tutorial I've found just uses glTexSubImage2D and passes a pointer to memory. Here is basically what my code does (in this case it is generating a 1-byte alpha-only texture, but it is rendering it as the red channel for debugging):

      GLuint pixelBufferID;
      glGenBuffers(1, &pixelBufferID);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
      glBufferData(GL_PIXEL_UNPACK_BUFFER, 512 * 512 * 1, nullptr, GL_STREAM_DRAW);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

      GLuint textureID;
      glGenTextures(1, &textureID);
      glBindTexture(GL_TEXTURE_2D, textureID);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr);
      glBindTexture(GL_TEXTURE_2D, 0);

      glBindTexture(GL_TEXTURE_2D, textureID);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
      void *Memory = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
      // Memory copied here, I know this is valid because it is the same loop as in my working D3D version
      glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    And then here is the render loop:

      // This chunk left in for completeness
      glUseProgram(glProgramId);
      glBindVertexArray(glVertexArrayId);
      glBindBuffer(GL_ARRAY_BUFFER, glVertexBufferId);
      glEnableVertexAttribArray(0);
      glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 20, 0);
      glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 20, 12);

      GLuint transformLocationID = glGetUniformLocation(3, 'transform');
      glUniformMatrix4fv(transformLocationID, 1, true, somematrix);

      // Not sure if this is all I need to do
      glBindTexture(GL_TEXTURE_2D, pTex->glTextureId);
      GLuint textureLocationID = glGetUniformLocation(glProgramId, "texture");
      glUniform1i(textureLocationID, 0);

      glDrawArrays(GL_TRIANGLES, Offset*3, Triangles*3);

    Vertex Shader:

      #version 330 core
      in vec3 Position;
      in vec2 TexCoords;
      out vec2 TexOut;
      uniform mat4 transform;
      void main() {
          TexOut = TexCoords;
          gl_Position = vec4(Position, 1.0) * transform;
      }

    Pixel Shader:

      #version 330 core
      uniform sampler2D texture;
      in vec2 TexCoords;
      out vec4 fragColor;
      void main() {
          // Output color
          fragColor.r = texture2D(texture, TexCoords).r;
          fragColor.g = 0.0f;
          fragColor.b = 0.0f;
          fragColor.a = 1.0;
      }
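
    For reference, below is a minimal sketch (mine, not from the question) of how a PBO-backed upload is usually completed, reusing the pixelBufferID and textureID variables and the 512x512 GL_R8 texture created above. The step the code above never performs is a glTexSubImage2D call issued while the pixel-unpack buffer is still bound; without it the mapped bytes stay in the buffer object and the texture keeps its initial (black) contents.

      // Sketch only: assumes pixelBufferID and textureID exist as created above.
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);
      void *mem = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
      // ... write 512 * 512 bytes into mem here ...
      glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

      // With a PBO bound to GL_PIXEL_UNPACK_BUFFER, the last argument is a
      // byte offset into the buffer, not a client-memory pointer.
      glBindTexture(GL_TEXTURE_2D, textureID);
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RED, GL_UNSIGNED_BYTE, (const GLvoid*)0);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    Whether that is the actual cause of the black texture cannot be confirmed from this excerpt alone, but it is the usual missing step in map/unmap PBO upload code.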

    Read the article

  • Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster

    - by Giri Mandalika
    An Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3 is now available on oracle.com: the Oracle Optimized Solution for Oracle E-Business Suite. This solution is centered around the engineered system SPARC SuperCluster T4-4. Check the business and technical white papers, along with a number of relevant resources, online at the optimized solution page for EBS. What is an Optimized Solution? Oracle's Optimized Solutions are designed, tested and fully documented architectures that are tuned for optimal performance and availability. Optimized solutions are NOT pre-packaged, fully tuned, ready-to-install software bundles that can be downloaded and installed. An optimized solution is usually a well-documented architecture that was thoroughly tested on a target platform. The technical white paper details the deployed application architecture along with various observations, from installing the application on the target platform to its behavior and performance in highly available and scalable configurations. Oracle E-Business Suite R12 Use Case: Multiple E-Business Suite R12 12.1.3 application modules were tested in this optimized solution -- Financials (online - Oracle Forms & web requests), Order Management (online - Oracle Forms & web requests) and HRMS (online - web requests & payroll batch). The solution will be updated with additional application modules when they are available. Oracle Solaris Cluster is responsible for the high-availability portion of the solution. Performance Data: For the sake of completeness, test results were also documented in the optimized solution white paper. Those test results are mainly for educational purposes. They give a good sense of application behavior under the circumstances in which the application was tested. Since the major focus of the optimized solution is highly available and scalable configurations, the application was configured to meet those criteria. Hence the documented test results are not directly comparable to any other E-Business Suite performance test results published by any vendor, including Oracle; such a comparison may lead to skewed, incorrect conclusions. Questions & Requests: Feel free to direct your questions to the author of the white papers. If you are a potential customer who would like to test a specific E-Business Suite application module on a non-engineered system such as SPARC T4-X or an engineered system such as SPARC SuperCluster, contact the Oracle Solution Center.

    Read the article

  • Pro SharePoint 2010 Business Intelligence Solutions

    - by Sahil Malik
    Oh yeah baby, it's out finally! This book is what I have wanted to write for so long now, but never really got a chance to. For SharePoint 2007, I authored the SharePoint section of "Smart BI Solutions with SQL Server 2008" for MS Press, but never really got the time to author the full book that this topic deserved. With SharePoint 2010, we finally have a full book on this topic. So first things first, I didn't actually write it. My role was limited to the overall concept, the outline, the layout, completion of it, code samples, identifying what we need in here, vouching for technical accuracy, identifying authors, etc. The real work was done by Srini (5 chapters) and Steve (1 chapter). So credit given where it is due. But, with that said, this is a pretty good book. It has always been a challenge to find the superman who knows both data warehousing concepts and SharePoint concepts. The data warehousing concepts include the basic stuff you need to know to work in the BI area, such as cubes, MDX queries, etc. So chapter 1 covers that – and if you're a hardcore DBA, feel free to skip chapter 1. Beyond that, we take every single SharePoint 2010 BI topic and slice and dice it in detail. The topics we deal with are:

      Visio Services
      Reporting Services
      Business Connectivity Services
      Excel Services
      PerformancePoint Services

    In covering each of these topics, we follow a general layout to ensure completeness of content. We make sure we cover:

      Setup-related issues and advice
      Point-and-click usage
      Code usage, i.e. extensibility using Visual Studio
      A walkthrough of the administration side of things, including PowerShell (yes, I insisted on that being there in every chapter)

    Writing a book is always a lot of work, so we hope you find it useful. And it should go very well with the other book I just reviewed, which is Microsoft ADO.NET 4 Step by Step.

    Read the article

  • SEO to ensure visibility for a narrow, non-competitive, non-commercial site

    - by hen3ry
    I'm webmaster of a non-commercial site in English. A non-native-English speaker asked me why our site doesn't produce hits in Google searches she conducts for relevant keywords in her native language. I asked her for a list of keywords in her native language, and I naively tried inserting those into the META info in the page headers and waited a couple of weeks. No help. A little searching informed me that Google doesn't use the META info, and has not done so for a very long time. D'oh! To be entirely concrete, suppose the StackExchange folks want Russian speakers to find this site, Pro Webmasters. The direct translation in Russian of "webmaster" --thanks, Google Translator-- is: "вебмастер". Assuming Pro Webmasters has a common template for all pages it generates, inserting "вебмастер" into the Keywords META for that template won't help, it seems. What could StackExchange do to make this site visible to users searching with the Russian keyword "вебмастер"? Pretty much all the advice I've seen boils down to this, if I understand correctly: use the desired search term often (but not too often) in site content, and the problem will be solved. That's great, but I don't think sprinkling "вебмастер" visibly all over Pro Webmasters is going to fly. Just for completeness, crawlers have long been immune to the invisible-to-visitors schemes, e.g., formatting "вебмастер" in a tiny text size in a color the same as an existing background (white-over-white), or putting that text inside a div styled ' style="visibility: hidden" ', and probably some other equivalents. I can only think of one slightly effective method, along these lines: place an unobtrusive link on the common template to a page titled "for international users", and on that page list the desired synonyms for "webmaster" in various languages. A test case --admittedly, just one-- using my site implies that a Google search for "international users" вебмастер will produce a hit for this page, and thus make the site minimally visible, despite the fact that the page will almost never be visited. At the moment, anyway. Note: all the SEO discussions I have found so far are about competitive and --almost certainly-- commercial sites. To repeat: my site is non-commercial, and it is about an obscure, narrow topic that is of interest to only a small number of people worldwide. This isn't about clawing our way to the top of competitive rankings, just making this content minimally visible to interested non-native-English speakers. Ideas? TIA

    Read the article

  • What Precalculus knowledge is required before learning Discrete Math Computer Science topics?

    - by Ein Doofus
    Below I've listed the chapters from a Precalculus book as well as the author-recommended Computer Science chapters from a Discrete Mathematics book. Although these chapters are from two specific books on these subjects, I believe the topics are generally the same between any Precalc or Discrete Math book. What Precalculus topics should one know before starting these Discrete Math Computer Science topics?: Discrete Mathematics CS Chapters 1.1 Propositional Logic 1.2 Propositional Equivalences 1.3 Predicates and Quantifiers 1.4 Nested Quantifiers 1.5 Rules of Inference 1.6 Introduction to Proofs 1.7 Proof Methods and Strategy 2.1 Sets 2.2 Set Operations 2.3 Functions 2.4 Sequences and Summations 3.1 Algorithms 3.2 The Growth of Functions 3.3 Complexity of Algorithms 3.4 The Integers and Division 3.5 Primes and Greatest Common Divisors 3.6 Integers and Algorithms 3.8 Matrices 4.1 Mathematical Induction 4.2 Strong Induction and Well-Ordering 4.3 Recursive Definitions and Structural Induction 4.4 Recursive Algorithms 4.5 Program Correctness 5.1 The Basics of Counting 5.2 The Pigeonhole Principle 5.3 Permutations and Combinations 5.6 Generating Permutations and Combinations 6.1 An Introduction to Discrete Probability 6.4 Expected Value and Variance 7.1 Recurrence Relations 7.3 Divide-and-Conquer Algorithms and Recurrence Relations 7.5 Inclusion-Exclusion 8.1 Relations and Their Properties 8.2 n-ary Relations and Their Applications 8.3 Representing Relations 8.5 Equivalence Relations 9.1 Graphs and Graph Models 9.2 Graph Terminology and Special Types of Graphs 9.3 Representing Graphs and Graph Isomorphism 9.4 Connectivity 9.5 Euler and Hamilton Paths 10.1 Introduction to Trees 10.2 Application of Trees 10.3 Tree Traversal 11.1 Boolean Functions 11.2 Representing Boolean Functions 11.3 Logic Gates 11.4 Minimization of Circuits 12.1 Language and Grammars 12.2 Finite-State Machines with Output 12.3 Finite-State Machines with No Output 12.4 Language Recognition 12.5 Turing Machines Precalculus Chapters R.1 The Real-Number System R.2 Integer Exponents, Scientific Notation, and Order of Operations R.3 Addition, Subtraction, and Multiplication of Polynomials R.4 Factoring R.5 Rational Expressions R.6 Radical Notation and Rational Exponents R.7 The Basics of Equation Solving 1.1 Functions, Graphs, Graphers 1.2 Linear Functions, Slope, and Applications 1.3 Modeling: Data Analysis, Curve Fitting, and Linear Regression 1.4 More on Functions 1.5 Symmetry and Transformations 1.6 Variation and Applications 1.7 Distance, Midpoints, and Circles 2.1 Zeros of Linear Functions and Models 2.2 The Complex Numbers 2.3 Zeros of Quadratic Functions and Models 2.4 Analyzing Graphs of Quadratic Functions 2.5 Modeling: Data Analysis, Curve Fitting, and Quadratic Regression 2.6 Zeros and More Equation Solving 2.7 Solving Inequalities 3.1 Polynomial Functions and Modeling 3.2 Polynomial Division; The Remainder and Factor Theorems 3.3 Theorems about Zeros of Polynomial Functions 3.4 Rational Functions 3.5 Polynomial and Rational Inequalities 4.1 Composite and Inverse Functions 4.2 Exponential Functions and Graphs 4.3 Logarithmic Functions and Graphs 4.4 Properties of Logarithmic Functions 4.5 Solving Exponential and Logarithmic Equations 4.6 Applications and Models: Growth and Decay 5.1 Systems of Equations in Two Variables 5.2 System of Equations in Three Variables 5.3 Matrices and Systems of Equations 5.4 Matrix Operations 5.5 Inverses of Matrices 5.6 System of Inequalities and Linear Programming 5.7 Partial Fractions
6.1 The Parabola 6.2 The Circle and Ellipse 6.3 The Hyperbola 6.4 Nonlinear Systems of Equations

    Read the article

  • The only metric with any value

    - by Malcolm Anderson
    There's a lot of talk in the Scrum world about metrics. What's the velocity? How big is a story point? How many story points is that team producing per man-hour? People are sadly missing the whole point. Take your measurements up a level or two. When you get down to it, the only metric that makes any difference is ROI. The problem is that oftentimes the developers work in a dark hole, far removed from the realities of how exactly they get paid. A bigger problem is that mid-level managers tend to be even further removed from the realities of ROI. A lot of times mid-level managers get tasked with tracking their teams' "productivity" using things like "lines of code" or "completeness of the productivity reports." Monetize your projects and then track your velocity against business value (real dollars). When your development teams can say, "Last year, our team cost the business 2 million dollars, and we know that because of our efforts the company saved 2 million dollars in waste and increased revenues by another 4 million dollars," you have just moved your development team from a cost center to a profit center. You might have to give them a raise, but they have demonstrated that they have earned it.

    Read the article

  • Interesting Topics in Comp. Sci. for New Students?

    - by SoulBeaver
    I hope this is the right forum to ask this question. Last friday I was in a discussion with my professors about the students' lack of motivation and interest in the field of Computer Science. All of the students are enrolled, but through questionnaires and other questions that my professor posed it was revealed that over 90% of all enrolled students are just in it for the reward of getting a job sometime in the future (since it's a growing field with high job potential) I asked my professor for the permission to take over the first couple of lectures and try and motivate, interest and inspire students for the field of Computer Science and programming in particular (this is the Intro to Programming course). This request was granted and I now have a week to come up with a lecture topic for my professor's five groups. My main goal isn't to teach, I just want to get students to be as interested in the field as I am. I want to show them what's possible, what awesome magical things have been done in the field, the future we are heading towards using programming and Comp. Sci. Therefore, I would like to pose this question: I have a few topics, materials and sample projects that I would like to talk about: -- Grace Hopper (It is my hope to interest the female programmers in the class. There are never more than two or three per group and they, more than males, are prone to jumping ship and abandoning Comp. Sci.) -- The Singularity Institute -- Alan Turing -- Robotics -- Programming not as a chore or a must, but the idea that we are, at our core, the nexus to which anything anybody does in the digital world is connected to. We are the problem solvers; we assemble all the parts together and we are the ones that, essentially, make the vision a reality. -- Give them an idea for a programming project which, through the help of the professor, could be significant to every student (I want students to not only feel interested in the topic, but they should feel important, that what they do here makes a difference) Do you have interesting topics worthy of discussion, something I can tell the students which they can get interested about? How would you approach the lecture? If you had 90 minutes worth of time to try and get students interested in the project, what would you do?

    Read the article

  • Why is everything crashing?

    - by Kopkins
    I've been using Ubuntu for a while now and I love it; I wouldn't think of using another OS unless I can't fix this issue. The install I'm on is only around a month and a half old. I'm running 12.04 64-bit on an 8,1 MBP. Up until around 2 weeks ago everything was running smoothly; around then applications started crashing and weird things started happening. At first I thought it was just certain applications. The first thing to start giving me trouble was compiz. Occasionally compiz stops decorating windows and loses many other functions. Running compiz --replace fixes this, but I don't feel like doing it once a day, which is about how often it is needed. The other thing with this is that after running compiz --replace, my conky window gets lost somewhere, so I run killall conky && conky -c .conkyrc. But this isn't just a couple of applications; it seems like it is proliferating through my system. Last week fontforge started crashing on whatever task it was given, so I ended up unable to finish what I was working on. I didn't find a fix. Today rhythmbox started crashing: whenever I try to play anything, Rhythmbox becomes unresponsive and needs to be forced to close. When I try to do certain things with the disk utility, it crashes. I get the "Ubuntu has experienced an internal error" message much more often than I would like. Frequently applications stop appearing in the launcher; Wine almost never does anymore. After not being active for a little while, thunderbird can only fetch my mail after I restart wireless with sudo rmmod b43 && sudo modprobe b43. Occasionally some of my startup apps don't start. What is my best option here? Could these be bugs? I don't want to submit a ton of vague bug reports. Reinstall? Switch OS? Thank you to anyone who responds. Kopkins

    Read the article

  • Programming tourism

    - by Andrew_B
    I'm going on vacation to Paris, France for 10 days. Actually, it's my girlfriend's wish to go there, but I'm not very interested in visiting, sightseeing, etc. Recently, I came up with the idea of trying to do something like programming tourism. :) I'd like to do something related to programming in a startup-like company. I do not want a salary or any kind of compensation. I want an overview of the process, social aspects, environment and "what it feels like" to develop software in another country. I'm from Russia. I've been a software developer since 2003. I prefer C# 4 but I'm ready to use anything Turing-complete. I have some MS certifications and am familiar with all .NETs since 1.1. Currently I'm finishing a PhD in CS. I'm interested in multidimensional indexing and I can turn any piece of data and code into an OLAP system. :) But that'd take too much time. What can I do? I have no more than one week, and I want a totally complete project in that short amount of time:

      Implement some features in a well-tested project
      Do a code review
      Debug memory, performance and concurrency issues
      Do unit testing

    So, about the questions: Is it legal? I'm ready to sign an NDA if it's necessary. I'll have a tourist visa. Is it possible? I'm really sure that bureaucratic companies with lots of HRs and PMs will not allow such experiments, but small companies can afford it. I'm ready to guarantee support on my code after leaving home :) P.S. I still haven't started learning French :) I hope it will not take too much time :) P.P.S. Yes, it's girlfriend-approved. What's in it for me? It's fun. It's fun to see new systems and the people who created them. It's fun to complete meaningful things. Quickly. What's in it for them? A feature, a debugging session, a review or a test. If my short-term colleagues like this style of working, I can invite them to make the same trip to my company :) I think in Russia it's even more exciting :)

    Read the article

  • Did I choose proper career path? [closed]

    - by Liston Catch
    I am a C# junior. My company has written its own enterprise document-flow system; my job, along with 10 other programmers, is to write modules/add-ons for it. I am totally bored with this job. I don't like Microsoft's technology stack (don't hate me here, it's just subjective), it's plain boring, enterprise is boring (subjective again, everyone's tastes differ), the days at this job feel long, and I am tired of it. In short - I don't like my job. In my spare time I do PHP development and I totally like it. I also do web design, so I am a LAMP kind of guy who loves his Ubuntu and does design as well. I know that most programmers don't do design themselves, so a person is either all about design or all about coding, but I enjoy both and do both. I often get interesting site orders; I love to make whole websites with all the design, I love the feeling of site completeness, and I enjoy talking with customers. I like that PHP is simple and its skill cap is lower than that of Java, meaning I can become an expert in it after some years. But C# (and J2EE also) pay more, and I am doing really well in C#. I just don't like it. I could go for J2EE - the platform itself seems more fun to me than .NET, but EE development is still boring to me. It does seem higher paid and it's easier to find a job (since PHP is too common because of its easiness - but if you are an expert in something that doesn't matter, right? Just a higher skill cap.) Question: I want to go on with freelance work. I want to have the opportunity to start my own web startup. Actually, I have already written a browser game by myself; it earns me around $500 per month, which I am really proud of since I am only 21 and still a noob at coding. I want to find a part-time PHP job, 3 days per week, so I can get some stable income, work in a team and learn from them; the social factor matters, as well as insurance and diversity. I also want my total income (freelance + part-time job + maybe my own startup) to be not too much less than the one I would have working in the EE development sector - a maximum of 25% lower, but not more. Is this all possible if I stick with web development (LAMP + HTML/CSS/JS/jQuery/AJAX)? Or is it easier to reach my goals with EE development?

    Read the article

  • Strange permission errors in new PostgreSQL installation

    - by Bart van Heukelom
    A freshly installed PostgreSQL (with its configuration overwritten) won't start:

      $ sudo service postgresql start
       * Starting PostgreSQL 9.1 database server
       * Error: could not read /etc/postgresql/9.1/main/postgresql.conf: Permission denied

    Looks like it should be able to read it though:

      $ ls -l postgresql.conf
      -rw------- 1 postgres postgres 19450 2012-06-14 10:07 postgresql.conf

    But fine, I'll chmod +r it to test if that works.

      $ sudo chmod +r postgresql.conf
      $ sudo service postgresql start
       * Starting PostgreSQL 9.1 database server
       * The PostgreSQL server failed to start. Please check the log output:
      Error: Could not open log file /var/log/postgresql/postgresql-9.1-main.log

    Huh?

      $ ls -l /var/log/postgresql/
      total 4
      -rw-r----- 1 postgres adm 827 2012-06-14 10:07 postgresql-9.1-main.log

    I don't get it. What can be wrong here? This used to work before. Can I maybe monitor what process attempts to open the file? It's Ubuntu 11.10 on EC2, using Chef. For completeness, here's the recipe:

      # Install PostgreSQL
      package "postgresql-9.1"

      # Stop server
      service "postgresql" do
        action :stop
      end

      # Overwrite configuration (setting data dir)
      template "/etc/postgresql/9.1/main/postgresql.conf" do
        source "postgresql-conf.erb"
        owner "postgres"
        group "postgres"
      end

      # Start server
      service "postgresql" do
        action :start
      end
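
    Since the question explicitly asks whether the failing open can be watched, here is a hedged sketch of two generic checks (standard Linux tools, not anything taken from the original thread): namei shows the permissions of every directory component on the path, since a directory the postgres user cannot traverse produces exactly this kind of "Permission denied" even when the file itself looks readable, and strace records which path and error the startup actually trips over.

      # Walk each path component and show its permissions/ownership.
      namei -l /etc/postgresql/9.1/main/postgresql.conf
      namei -l /var/log/postgresql/postgresql-9.1-main.log

      # Trace the startup and keep only the interesting open() calls.
      sudo strace -f -e trace=open -o /tmp/pg-start.trace /etc/init.d/postgresql start
      grep -E 'postgresql\.conf|postgresql-9\.1-main\.log' /tmp/pg-start.trace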

    Read the article

  • How do I diagnose a bottleneck in an Intel Atom based Ubuntu server?

    - by Jon Cage
    I have a small media server at home which has software RAID and a gigabit link to the rest of my network. For some reason, though, I only get ~10MB/s transfers when copying to/from the server. I use software RAID5 (mdadm) over four 1TB disks. On top of that I use LVM to give me a huge pool of disk space, which is then split up into multiple partitions that can be resized as and when they need it. I'm guessing this is most likely the cause, but I'd like to know for sure where the root cause is. So, how can I benchmark network throughput (Windows 7 desktop <- Ubuntu server) and hard disk performance to try and identify where my bottleneck might be? [Edit] If anyone's interested, the motherboard is an Intel Desktop Board D945GCLF2, so that's a 300-series Atom processor with the Intel® 945GC Express Chipset. [Edit2] I feel like such a fool! I just checked my desktop and I had the slower of the two onboard NICs plugged in, so the server is probably not at fault here. Transferring a copy of Ubuntu off the server I get ~35-40MB/s according to Windows 7. I'll do those HD tests when I get a chance though (just for completeness).
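
    As a concrete starting point, a hedged sketch of commands that separate the network from the disks (tool choice is mine, not from the original post; iperf must be installed on both machines, and /dev/md0 stands in for whatever the md array is actually called):

      # Network only: run the server end on the Ubuntu box, then push traffic
      # from the Windows 7 desktop with the iperf client and read the Mbits/sec.
      iperf -s                      # on the server
      iperf -c <server-ip> -t 30    # on the desktop

      # Disk only, on the server: raw sequential read of the array, then a
      # write through the filesystem (conv=fdatasync waits for data to hit disk).
      sudo hdparm -t /dev/md0
      dd if=/dev/zero of=/path/on/lvm/testfile bs=1M count=2048 conv=fdatasync

    If the iperf number is close to gigabit wire speed but the dd/hdparm numbers are low, the RAID5/LVM stack is the place to dig; if iperf itself is slow, the network (or the NIC, as the second edit suggests) is the first suspect.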

    Read the article

  • How can I recover a Fedora 12 installation that is showing signs of disk errors?

    - by Bob Cross
    I am currently overseas (i.e., very far from my normal library of tools) and my primary machine that would normally act as the data server in the performance test that we're trying to run is failing to boot to Fedora 12 properly. This is a machine that, as of yesterday, was booting fine. However, this morning, very strange portions of the boot process were complaining with messages such as "unexpected 0x0 in rpcbind" and "bad file descriptor" (I don't have the error in front of me - scavenged a windows installation to get onto serverfault). Eventually, the boot hung for a long time at the NFS service and then brought up what looked like the KDE login screen but neither the mouse nor keyboard functioned. In olden days, I would try to get to a point where I could manage to run fsck and pray that the bad sectors would come back into alignment just long enough for me to scrape the critical data off of the machine. However, now that we live in the future, it seems like our options in situations like this should be a little more varied. Is there a way to recover a Fedora 12 installation with bad disk sectors that won't boot properly? For completeness, I am comfortable working with bootable recovery distros-on-CD and such but I don't know which one is likely to work best with modern Fedora. In the absence of guidance, I'm frantically torrenting the Fedora 12 Live CD and DVD, hoping to try rescue mode before tomorrow morning.

    Read the article

  • Being a more attractive job candidate - Certs XOR Degree

    - by Zephyr Pellerin
    I'm currently working in an IT position where I do helpdesk stuff and predominantly security-related issues/consulting (in the loosest sense of the term), in-house and for service-contract clients (as the only/acting CCSP [I guess I should say the only person with Cisco experience] in my organization). I've professionally written kernel-mode drivers for a gaming company, among other things that I'm proud to put on a resume. I think of myself as very reasonably qualified as a system administrator, with excellent Cisco experience, among other things I think would make me a good addition to almost any IT staff in need of a new employee. However, something has always tripped me up - Human Resources. Let me explain. I decided to skip the university route, and I'm immensely glad that I did. The computer science graduates that I've met and work with rarely know much of anything about computers (until they gain some 'real' experience); even when asked about theoretical computing fundamentals they can rattle something off about Turing completeness, but rarely do they understand the mathematical underpinnings. In short, I thought that instead of going to college, I'd rather pick up some real-world experience. However, apparently, employers rarely think the same way. A quick perusal of jobs through the standard job search engine yields nothing short of a conspiracy to exclude anyone without 'A Bachelors Degree in Computer Science or Equivalent'. Interviews I've had in the past have almost always been entangled with 1. my age (which I can't really change) and 2. lack of a degree. Employers frequently disregard the CCNA/CCSP, the experience I've gained through internships, my extensive experience in x86 assembly and C, and so many other things I like to think are valuable to employers, in light of the fact that I don't have a piece of paper. So, AS AN EMPLOYER - is it even worth working on my CCIE? Or should I pad my resume with certifications that are easier to acquire (like CISSP, MCSE, Network+, etc.)? Or should I ditch the whole idea and head back to get a Mathematics or CS degree?

    Read the article

  • GitLab post-receive hook not firing

    - by Ben Graham
    Apologies if this isn't the right stackexchange. I have a GitLab install. It was installed over the top of a gitolite install that was only a few days old, and I assume this non-standard setup is at the root of my problem, but I cannot pin it down. The problem is straightforward: post-receive hooks are not fired. This prevents 'project activity' appearing in GitLab. The problem looks like: $ git push #... error: cannot run hooks/post-receive: No such file or directory Hook Exists The post-receive hook/symlink exists and is executable: -rwxr-xr-x 1 git git 470 Oct 3 2012 .gitolite/hooks/common/post-receive lrwxrwxrwx 1 git git 45 Oct 3 2012 repositories/project.git/hooks/post-receive -> /home/git/.gitolite/hooks/common/post-receive It's Executable By GitLab The gitlab user can execute the script (I have removed the /dev/null redirect and fed in blank input to get an 'OK' as output): sudo su - gitlab -c /home/git/.gitolite/hooks/common/post-receive OK GitLab Can Find It GitLab is looking for hooks in the correct location: $ grep hooks /srv/gitlab/gitlab/config/gitlab.yml hooks_path: /home/git/.gitolite/hooks/ and $ bundle exec rake gitlab:app:status RAILS_ENV=production # ... /home/git/.gitolite/hooks/common/post-receive exists? ............YES Environment The env -i line in the hook is commonly cited as an issue. I think that would occur after this problem, but for completeness, redis-cli is found OK: $ env -i redis-cli redis> I've run out of debugging ideas on this one. Does anybody have any suggestions?

    Read the article

  • Which revision control system for single user

    - by G. Bach
    I'm looking to set up a revision control system with me as a single user. I'd like to have access (read and write) protected using SSL, little overhead, and preferrably a simple setup. I'm looking to do this on my own server, so I don't want to use the option of registering with some professional provider of such a service (I like having direct control over my data; also, I'd like to know how to set up something like that). As far as I'm aware, what kind of project I want to subject to revision control doesn't really matter, but just for completeness' sake, I'm planning on using this for Java project, some html/css/php stuff, and in the future possibly as a synchronizing tool for small data bases (ignore that later one if it doesn't fit in with the paradigm of revision control). My questions primarily arise from the fact that I only ever used Subversion from Eclipse, so I don't have thorough knowledge of what's out there, what fits better for which needs, etc. So far I've heard of Subversion, Git, Mercurial, but I'm open to any system that's widely used and well supported. My server is running Ubuntu 11.10. Which system should I choose, what are the advantages of the respective systems, and if you know of any particularly useful ones, are there tutorials regarding the setup of the system I should choose that you could recommend?

    Read the article

  • LXC, Port forwarding and iptables

    - by Roberto Aloi
    I have a LXC container (10.0.3.2) running on a host. A service is running inside the container on port 7000. From the host (10.0.3.1, lxcbr0), I can reach the service: $ telnet 10.0.3.2 7000 Trying 10.0.3.2... Connected to 10.0.3.2. Escape character is '^]'. I'd love to make the service running inside the container accessible to the outer world. Therefore, I want to forward port 7002 on the host to port 7000 on the container: iptables -t nat -A PREROUTING -p tcp --dport 7002 -j DNAT --to 10.0.3.2:7000 Which results in (iptables -t nat -L): DNAT tcp -- anywhere anywhere tcp dpt:afs3-prserver to:10.0.3.2:7000 Still, I cannot access the service from the host using the forwarded port: $ telnet 10.0.3.1 7002 Trying 10.0.3.1... telnet: Unable to connect to remote host: Connection refused I feel like I'm missing something stupid here. What things should I check? What's a good strategy to debug these situations? For completeness, here is how iptables are set on the host: iptables -F iptables -F -t nat iptables -F -t mangle iptables -X iptables -P INPUT DROP iptables -P FORWARD ACCEPT iptables -P OUTPUT ACCEPT iptables -A INPUT -p tcp --dport 22 -j ACCEPT iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE iptables -t nat -A POSTROUTING -o lxcbr0 -j MASQUERADE iptables -t nat -A PREROUTING -p tcp --dport 7002 -j DNAT --to 10.0.3.2:7000
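
    One hedged note on the test shown above (my observation, not from the original post): the nat PREROUTING chain is only traversed by packets arriving on an interface, so a telnet run on the host itself never hits that DNAT rule; locally generated traffic would need a matching rule in the nat OUTPUT chain, or the forwarded port should be verified from another machine on the LAN. A sketch:

      # Locally generated packets traverse nat OUTPUT, not PREROUTING.
      iptables -t nat -A OUTPUT -p tcp -d 10.0.3.1 --dport 7002 -j DNAT --to 10.0.3.2:7000

      # Or test the existing PREROUTING rule from another host
      # (telnet <host-lan-ip> 7002) and watch its packet counters:
      iptables -t nat -L PREROUTING -n -v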

    Read the article

  • What's up with LDoms: Part 9 - Direct IO

    - by Stefan Hinker
    In the last article of this series, we discussed the most general of all physical IO options available for LDoms, root domains.  Now, let's have a short look at the next level of granularity: Virtualizing individual PCIe slots.  In the LDoms terminology, this feature is called "Direct IO" or DIO.  It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain.  Let's look again at hardware available to mars in the original configuration: root@sun:~# ldm ls-io NAME TYPE BUS DOMAIN STATUS ---- ---- --- ------ ------ pci_0 BUS pci_0 primary pci_1 BUS pci_1 primary pci_2 BUS pci_2 primary pci_3 BUS pci_3 primary /SYS/MB/PCIE1 PCIE pci_0 primary EMP /SYS/MB/SASHBA0 PCIE pci_0 primary OCC /SYS/MB/NET0 PCIE pci_0 primary OCC /SYS/MB/PCIE5 PCIE pci_1 primary EMP /SYS/MB/PCIE6 PCIE pci_1 primary EMP /SYS/MB/PCIE7 PCIE pci_1 primary EMP /SYS/MB/PCIE2 PCIE pci_2 primary EMP /SYS/MB/PCIE3 PCIE pci_2 primary OCC /SYS/MB/PCIE4 PCIE pci_2 primary EMP /SYS/MB/PCIE8 PCIE pci_3 primary EMP /SYS/MB/SASHBA1 PCIE pci_3 primary OCC /SYS/MB/NET2 PCIE pci_3 primary OCC /SYS/MB/NET0/IOVNET.PF0 PF pci_0 primary /SYS/MB/NET0/IOVNET.PF1 PF pci_0 primary /SYS/MB/NET2/IOVNET.PF0 PF pci_3 primary /SYS/MB/NET2/IOVNET.PF1 PF pci_3 primary All of the "PCIE" type devices are available for SDIO, with a few limitations.  If the device is a slot, the card in that slot must support the DIO feature.  The documentation lists all such cards.  Moving a slot to a different domain works just like moving a PCI root complex.  Again, this is not a dynamic process and includes reboots of the affected domains.  The resulting configuration is nicely shown in a diagram in the Admin Guide: There are several important things to note and consider here: The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware. Solaris will create nodes for this hardware under /devices.  This includes entries for the virtual PCI root complex (pci_0 in the diagram) and anything between it and the actual endpoint device.  It is very important to understand that all of this PCIe infrastructure is virtual only!  Only the actual endpoint devices are true physical hardware. There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure: Only if the root domain is up and running, will the guest domain have access to the endpoint device. The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure. This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain) it will reset and thus disrupt the operation of the endpoint device owned by the guest domain.  The result in the guest is not predictable.  I recommend to configure the resulting behaviour of the guest using domain dependencies as described in the Admin Guide in Chapter "Configuring Domain Dependencies". Please consult the Admin Guide in Section "Creating an I/O Domain by Assigning PCIe Endpoint Devices" for all the details! As you can see, there are several restrictions for this feature.  It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices.  
Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need to use this feature is declining. I personally do not recommend using it, mainly because of the drawbacks of the dependencies on the root domain and because it can be replaced with SR-IOV (although then with similar limitations). This was a rather short entry, more for completeness. I believe that DIO can usually be replaced by SR-IOV, which is much more flexible. I will cover SR-IOV in the next section of this blog series.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-26

    - by Bob Rhubart
    Software Architecture for High Availability in the Cloud | Brian Jimerson Brian Jimerson looks at the paradigm shifts from machine-based architectures to cloud-based architectures when designing fault tolerance, and how enterprise applications need to be engineered to ensure the highest level of availability in the cloud. SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount Registration is now open for one of the premier SOA, Cloud, and Service Technology events. Once again, the Oracle community is well-represented in the session schedule. And now you can save on registration with a special Oracle discount code. Progress 4GL and DB to Oracle and cloud | Tom Laszewski "Getting from client/server based 4GLs and databases where the 4GL is tightly linked to the database to Oracle and the cloud is not easy," says cloud migration expert Tom Laszewski. "The least risky and expensive option...is to use the Progress OpenEdge DataServer for Oracle." Embrace 'big data' now or fall behind the competition, analyst warns | TechTarget TechTarget's Mark Brunelli's story says, in essence, that Big Data is not your father's Business Intelligence. Calculating the Size (in Bytes and MB) of a Oracle Coherence Cache | Ricardo Ferreira Ferreira illustrates a programmatic way to use the Oracle Coherence API to calculate the total size of a specific cache that resides in the data grid. WebCenter Portal Tutorial Part 7: Integrating Discussions and Link service | Yannick Ongena The latest chapter in Oracle ACE Yannick Ongena's ongoing series. How to Setup JDeveloper workspace for ADF Fusion Applications to run Business Component Tester? | Jack Desai Helpful technical tips from yet another member of the Oracle Fusion Middleware Architecture Team. Big Data for the Enterprise; Software Architecture for High Availability in the Cloud; Why Cloud Computing is a Paradigm Shift - And Why It Isn't This week on the OTN Solution Architect Homepage, along with an updated events list and this week's list of selected community blog posts. Worst Practices for Big Data | Dain Hansen Dain Hansen shares some insight on what NOT to do if you want to capitalize on Big Data. Free Virtual Developer Day - Oracle Fusion Development | Grant Ronald "The online conference will include seminars, hands-on lab and live chats with our technical staff including me!" says Grant Ronald. "And the best bit, it doesn't cost you a single penny. It's free and available right on your desktop." Penguin is Getting Ready for Oracle OpenWorld 2012 | Zeynep Koch Linux fan? Check out Zeynep Koch's post for a list of Linux-based sessions at Oracle OpenWorld 2012 in San Francisco. Amazon Web Services (AWS) Autoscaling | Frank Munz "Autoscaling on AWS can only be configured with lengthy commands from the command line but not from the web based AWS console," says Frank Munz. "Getting all the parameters right can be tricky." He demonstrates one easy example in this video. Oracle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broin "These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight," says O'Broin. How Much Data Is Created Every Minute? [INFOGRAPHIC] | Mashable Explaining what the "Big" in Big Data really means -- and it's more than a little mind-boggling.
Thought for the Day "Real, though miniature, Turing Tests are happening all the time, every day, whenever a person puts up with stupid computer software." — Jaron Lanier Source: SoftwareQuotes.com

    Read the article

  • forEach and Facelets - a bugfarm just waiting for harvest

    - by Duncan Mills
    An issue that I've encountered before and saw again today seems worthy of a little write-up. It's all to do with a subtle yet highly important difference in behaviour between JSF 2 running with JSP and running on Facelets (.jsf pages). The incident I saw today can be seen as a report on the ADF EMG bugzilla (Issue 53) and in a blog posting by Ulrich Gerkmann-Bartels, who reported the issue to the EMG. Ulrich's issue nicely shows how tricky this particular gotcha can be. On the surface, the problem is squarely the fault of MDS, but underneath, MDS is in fact innocent. To summarize the problem in a simpler testcase than Ulrich's example, here's a simple fragment of code:

      <af:forEach var="item" items="#{itemList.items}">
        <af:commandLink id="cl1" text="#{item.label}" action="#{item.doAction}" partialSubmit="true"/>
      </af:forEach>

    Looks innocent enough, right? We see a bunch of links printed out, great. The issue here though is the id attribute. Logically you can kind of see the problem. The forEach loop is creating (presumably) multiple instances of the commandLink, but only one id is specified - cl1. We know that IDs have to be unique within a JSF component tree, so that must be a bad thing? The problem is that JSF under JSP implements some hacks when the component tree is generated to transparently fix this problem for you. Behind the scenes it ensures that each instance really does have a unique id. Really nice of it to do so, thank you very much. However (you could see this coming), the same is not true when running with Facelets (this is under 11.1.2.n): in that case, what you put for the id is what you get, and JSF does not mess around in the background for you. So you end up with a component tree that contains duplicate ids which are only created at runtime. So subtle chaos can ensue. The symptoms are wide and varied, from something pretty obscure such as the combination Ulrich uncovered, to something as frustrating as your ActionListener just not being triggered. And yes, I've wasted hours on just such an issue. The Solution: Once you're aware of this one it's really simple to fix; there are two options:

      1. Remove the id attribute altogether on components that will cause some kind of submission within the forEach loop, and let JSF do the right thing in generating them. Then you'll be assured of uniqueness.
      2. Use the var attribute of the loop to generate a unique id for each child instance, for example in the above case: <af:commandLink id="cl1_#{item.index}" ... />.

    So, one to watch out for in your upgrades to JSF 2 and one, perhaps, for your coding standards today to prepare you for. For completeness, here's the reference to the underlying JSF issue that's at the heart of this: JAVASERVERFACES-1527.

    Read the article

  • A developer&rsquo;s WBS &ndash; 3 factors of 5

    - by johndoucette
    As a development manager, I have requested work breakdown structures (WBS) many times from the dev leads. Everyone has their own approach and why it takes sometimes days to get this simple list is often frustrating. Here is a simple way to get that elusive WBS done in 30 minutes and have 125 items in your list – well, 126. The WBS is made up of parent-child entities representing the overall outcome of the project. At the bottom of the hierarchical list should be the task item that a developer would perform in support of the branch in the list or WBS. Because I work with different dev leads on every project, I always ask the “what time value would you like to see at the lowest task in order to assign it to a developer and ensure it gets done within the timeframe”. I am particular to a task being 8 hours. Some like 8 to 24 hours. Stay away from tasks defaulting to 1 week. The task becomes way to vague and hard to manage completeness, especially on short budgets. As a developer, your focus is identifying the tasks you to accomplish in order to deliver the product. As a project manager, you will take the developer's WBS and add all the “other stuff” like quality testing, meetings, documentation, transition to maintenance, etc… Start your exercise with the name of the product you are delivering as a result of the project. You should be able to represent what you are building and deploying with one to three words. Example; XYZ Public Website Middleware BizTalk Application The reason you start with that single identifier is to always see the list as the product. It helps during each of the next three passes. Now, choose 5 tasks which in their entirety represent the product you will be delivering and add them to list under the product name you created earlier; Public Website     Security     Sites     Infrastructure     Publishing     Creative Continue this concept of seeing the list as the complete picture and decompose it one more level. You should have 25 items. Public Website     Security         Authentication         Login Control         Administration         DRM         Workflow     Sites         Masterpages         Page Layouts         Web Parts (RIA, Multimedia)         Content Types         Structures     Infrastructure         ...     Publishing         ...     Creative         ... And one more time for a total of 125 items. The top item makes the list 126. Public Website     Security         Authentication             Install (AD/ADAM/LDAP/SQL)             Configuration             Management             Web App Configuration             Implement Provider         Login Control             Login Form             Login/Logoff             pw change             pw recover/forgot             email verification         Administration             ...         DRM             ...         Workflow             ...     Sites         Masterpages         Page Layouts         Web Parts (RIA, Multimedia)         Content Types         Structures     Infrastructure         ...     Publishing         ...     Creative         ... The next step is to make sure the task at the bottom of every branch represents the “time value” you planned for the project. You can add more to the WBS and of course if you can’t find 5 items, 4 is fine. If a task can be done in a fraction of the time value you determined for the project, try to roll it up into a larger task. In the task actions (later when the iteration is being planned), decompose the details back to the simple tasks. Now, go estimate!

    Read the article

  • Web Services Example - Part 1: Declarative

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 1 of our Web Service examples. In this posting we'll take a look at using a declarative SOAP Web Service. Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Defining our Web Service: First off, we should mention that this sample code is using a public web service provided free by CDYNE Corporation that provides weather forecasts by zipcode. Sometimes this service goes down, so please ensure you know it's up before reporting this example isn't working. Let's take a look at the web service. We created this by using the "Web Service Data Control" from the New Gallery and this WSDL: "http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL" This web service has several methods, but we're interested in GetCityForecastByZIP, which takes a single string parameter for the zipcode, and a second method, GetWeatherInformation, which enumerates all possible forecast descriptions and associated image URLs. The latter we'll use in the next edition, but we included it here for completeness. Defining the Application: After adding a feature to the adfmf-feature.xml file, we added a taskflow to host the application flow. This comprises a home screen with a list with items for each method in the web service, "Forecast by Zip" and "Weather Info". In this application we've also decided to hide the navigation bar since there is only one feature in the application. Forecast by Zip: The "Forecast By ZIP" option first presents the user with a screen where they can enter a zipcode, and when the "Search" button is tapped, it executes the GetCityForecastByZIP method. This is done by binding an Action binding to that method. The easiest way to accomplish this is to just drag & drop the method from the Data Control palette to the AMX page, drop it as a button, and let the framework hook it up for you. There is an inputText component on the page that is bound to a pageFlowScope variable called "zip". This is used as the parameter to the Action binding when it is executed. Because the actionListener attribute of the commandButton executes the Web Service each time, we ensure that the method is invoked every time the button is clicked. Weather Info: Unlike the previous method, this time instead of explicitly executing the web service method we are using deferred invocation. What this means is that we will bind to the results of the method and the framework will execute the method when the data is required to be rendered. We do this by simply doing a drag & drop of the results of GetWeatherInformation to the AMX page. When the page is rendered and the bindings are resolved, the framework invokes the method. This executes the method only when it is needed and fills the Data Control provider. Because we never re-execute the method, you can click from Home to Weather Info and back many times and the web service is only ever invoked once. Issues and Possible Improvements: One thing you will quickly realize with this example is that the error handling is done by the framework for you. For simple examples this is fine, but for real applications you'll want to customize these error messages. With the declarative invocation of web services, this is difficult.
This is one aspect we'll address in the second installment of the web service examples where we will show you how to do programmatic invocation which allows you better error handling. Another issue you will notice with this example is that we can enumerate the weather information but there isn't an easy way to use that information to show the corresponding description and image as part of the forecast results.  We'll show you how to do this in the next example.

    Read the article

  • Violation of the DRY Principle

    - by Onorio Catenacci
    I am sure there's a name for this anti-pattern somewhere; however I am not familiar enough with the anti-pattern literature to know it. Consider the following scenario: or0 is a member function in a class. For better or worse, it's heavily dependent on class member variables. Programmer A comes along and needs functionality like or0, but rather than calling or0, Programmer A copies and renames the entire class. I'm guessing that she doesn't call or0 because, as I say, it's heavily dependent on member variables for its functionality. Or maybe she's a junior programmer and doesn't know how to call it from other code. So now we've got or0 and c0 (c for copy). I can't completely fault Programmer A for this approach--we all work under tight deadlines and we hack code to get work done. Several programmers maintain or0 so it's now version orN. c0 is now version cN. Unfortunately most of the programmers that maintained the class containing or0 seemed to be completely unaware of c0--which is one of the strongest arguments I can think of for the wisdom of the DRY principle. And there may also have been independent maintenance of the code in c. Either way it appears that or0 and c0 were maintained independent of each other. And, joy and happiness, an error is occurring in cN that does not occur in orN. So I have a few questions:

    1.) Is there a name for this anti-pattern? I've seen this happen so often I'd find it hard to believe this is not a named anti-pattern.

    2.) I can see a few alternatives:

      a.) Fix orN to take a parameter that specifies the values of all the member variables it needs. Then modify cN to call orN with all of the needed parameters passed in.
      b.) Try to manually port fixes from orN to cN. (Mind you I don't want to do this but it is a realistic possibility.)
      c.) Recopy orN to cN--again, yuck but I list it for sake of completeness.
      d.) Try to figure out where cN is broken and then repair it independently of orN.

    Alternative a seems like the best fix in the long term but I doubt the customer will let me implement it. Never time or money to fix things right but always time and money to repair the same problem 40 or 50 times, right? Can anyone suggest other approaches I may not have considered? If you were in my place, which approach would you take? If there are other questions and answers here along these lines, please post links to them. I don't mind removing this question if it's a dupe but my searching hasn't turned up anything that addresses this question yet. EDIT: Thanks everyone for all the thoughtful responses. I asked about a name for the anti-pattern so I could research it further on my own. I'm surprised this particular bad coding practice doesn't seem to have a "canonical" name.
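
    To make alternative (a) concrete, a small hypothetical sketch (all names invented, nothing from the code base in the question): the shared logic moves into one function that takes the former member variables as parameters, and both classes forward to it, so future fixes land in exactly one place and cN can no longer drift away from orN.

      #include <algorithm>

      // Hypothetical sketch of alternative (a): the duplicated body of or0/c0
      // becomes one function parameterized on the values it used to read
      // from member variables.
      double sharedCore(double threshold, double scale, double input) {
          return std::max(input, threshold) * scale;
      }

      class Original {
          double threshold_ = 1.0, scale_ = 2.0;  // the former hidden dependencies
      public:
          double orN(double input) { return sharedCore(threshold_, scale_, input); }
      };

      class Copied {
          double threshold_ = 0.5, scale_ = 4.0;
      public:
          double cN(double input) { return sharedCore(threshold_, scale_, input); }
      };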

    Read the article
