Search Results


  • How should I make progress further as a programmer?

    - by mushfiq
    Hello, I have just finished my degree in computer engineering. During college I tried to do some freelancing in the local market, and in my final year I earned small amounts from Joomla, WordPress, and Visual Basic jobs, plus a few small PHP/MySQL projects. After graduating, I sat a written test for a Python programmer post and luckily got the job; I am still working there (it's a small software firm that does most of its work in Python), and day by day I have gained experience with core Python. Meanwhile, a US-based web service firm called me for an interview, and after three steps (oral + mini coding project + final oral) they selected me (I was surprised!). I am joining them within a few days; there I will work in Python with the Django framework, of which I know only the basics. My problem is that while I was learning Python I was simultaneously working on oDesk as a WordPress, Joomla, Drupal, and PHP developer. Nowadays I feel I am becoming a "jack of all trades, master of none". My current situation is that I am familiar with several popular web technologies but not an expert in any of them. I want to make myself genuinely skilled. How should I organize myself to become a skilled web programmer?

    Read the article

  • Raspberry Pi and Java SE: A Platform for the Masses

    - by Jim Connors
    One of the more exciting developments in the embedded systems world has been the announcement and availability of the Raspberry Pi, a very capable computer that is no bigger than a credit card.  At $35 US, initial demand for the device was so significant that very long back orders quickly ensued. After months of patiently waiting, mine finally arrived.  Those initial growing pains appear to have been worked out, so availability should now be much more reasonable. At a very high level, here are some of the important specs: Broadcom BCM2835 System on a Chip (SoC); ARM1176JZF-S, with floating point, running at 700MHz; VideoCore 4 GPU capable of Blu-ray quality playback; 256MB RAM; 2 USB ports and Ethernet; boots from an SD card; Linux distributions (e.g. Debian) available. So what's taking place with respect to the Java platform and the Raspberry Pi? A Java SE Embedded binary suitable for the Raspberry Pi is available for download (ARM v6/7) here.  Note, this is based on the armel architecture, a variant of ARM that supports floating point through a compatibility library; it runs on more platforms but can hamper performance.  In order to use this Java SE binary, select the available Debian distribution for your Raspberry Pi. The more recent Raspbian distribution is based on the armhf (hard float) architecture, which provides for more efficient hardware-based floating point operations.  However, armhf is not binary compatible with armel.  As of the writing of this blog, Java SE Embedded binaries are not yet publicly available for the armhf-based Raspbian distro, but as mentioned in Henrik Stahl's blog, an armhf release is in the works. As demonstrated at the just-completed JavaOne 2012 San Francisco event, the graphics processing unit inside the Raspberry Pi is very capable indeed, and makes an excellent candidate for JavaFX.  As such, plans also call for a Pi-optimized version of JavaFX in a future release. A thriving community around the Raspberry Pi has developed at light speed, and as evidenced by the packed attendance at Pi-specific sessions at JavaOne 2012, interest in Java for this platform is following suit. So stay tuned for more developments...

    Read the article

  • Expanding existing DVCS Wiki

    - by A Lion
    A portion of my job is to maintain technical documentation for a rapidly expanding manufacturing company. Because it is only a portion of my job and the company's product line is expanding so quickly, I can't stay on top of the documentation. As a result, I've been yearning for an information management system with a handful of specific features. I've found many products that have a subset, but none that have all the features I'm looking for. I'm at the point of picking an existing product and expanding it to cover my desired feature set, however, this will be a pet project and I will be learning the underlying language as I go. So, the main question is which existing product will be the easiest to expand to cover the full feature set and has a relatively easy to learn language? Alternatively, have I missed another existing program that will cover the feature set or should be in my list of "close, but not quite there"? Feature Set web interface based on a distributed version control system (e.g., git) easy to edit by logged in novices (e.g. wiki, multimarkdown) outputs in more traditional formats (e.g., doc, odt, pdf) edits held in queue until editor/engineer/manager approves them (e.g., MS Word editing) [this is the really big elephant in list - suggestions on where to start appreciated] edits held in queue specifically for engineer approval [extra limb of the elephant in the list] well-supported in the open source community Closest, but not quite there ikiwiki - http://ikiwiki.info (php) lots of awesome functionality and extensions, including easy to edit and based on DVCS lacks a review/forward for review queue appears to be well-supported within the OSS community gitit - http://gitit.net/ (haskell) easy to edit and based on DVCS lots of outputs in traditional formats a great web-based gui diff interface lacks a review/forward for review queue appears to be primarily maintained by one individual

    Read the article

  • How do I monitor IIS7 output caching?

    - by foosnazzy
    I have dynamic content for which I've configured output caching. Based on my tests, it doesn't seem like IIS is treating the content as cache-worthy. How can I monitor what IIS is doing? It appears as though PerfMon has some counters I'm interested in, but I'm not sure which ones to look at. If my content is not querystring- or form-parameter-based but URI-based, will it not be deemed cache-worthy?

    Read the article

  • My .NET Technology picks for 2011

    - by shiju
    My technology predictions for 2011: cloud computing and mobile application development will be the hottest trends of the year. I expect Windows Azure to be very hot in 2011 and a lot of cloud computing adoption to happen on Windows Azure. Web application scalability will be the big challenge for architects next year, and architectural approaches like CQRS will get some attention. Architects will look at different options for web application scalability, and adoption of NoSQL and document databases will grow in 2011. The following are my technology picks for the .NET stack. Windows Azure: Windows Azure will be one of the hottest technologies of 2011, and adoption of the cloud and Windows Azure will get a lot of attention next year. The Windows Azure platform is a flexible cloud-computing platform that lets you focus on solving business problems and addressing customer needs: there is no need to invest upfront in expensive infrastructure; you pay only for what you use, scaling up when you need capacity and pulling it back when you don't; Microsoft handles all the patches and maintenance, all in a secure environment with over 99.9% uptime. Silverlight 5: Silverlight is becoming a common technology for a variety of development platforms; you can develop Silverlight applications for the web, the desktop, and Windows Phone. The new Silverlight 5 beta will be available during the first quarter of next year with many new capabilities and features, making Silverlight 5 a powerful development platform for both web-based business apps and rich media solutions. We can expect the final version of Silverlight 5 at the end of 2011. Windows Phone 7 Development Tools: Mobile application development will be very hot in 2011, and Windows Phone 7 will be one of the hottest technologies of next year. You can get an introduction to the Windows Phone 7 Development Tools from somasegar's blog post and from the MSDN documentation available here. EF Code First: I am a big fan of Entity Framework's Code First approach and hope it will attract more people to Entity Framework 4. EF Code First lets you focus on the domain model, which enables Domain-Driven Design for applications; I expect DDD fans will love it. Entity Framework 4 now supports three approaches, and these will attract different developer audiences. ASP.NET MVC 3: ASP.NET MVC 3 will be the hottest technology in the Microsoft web stack next year, and ASP.NET developers will move in large numbers from WebForms development to the ASP.NET MVC Framework. The new Razor view engine is great, will increase the adoption of ASP.NET MVC 3, and will improve productivity when working with ASP.NET MVC 3 views. You can build great web applications using ASP.NET MVC 3 and jQuery with better maintainability, clean generated HTML, and even better performance. In my opinion, the best technology stack for web development is ASP.NET MVC 3 with Entity Framework 4 Code First as the ORM. Next year you can expect more articles on my blog about ASP.NET MVC 3 and Entity Framework 4 Code First. RavenDB: NoSQL and document databases will get more attention in the coming year, and RavenDB will be the most notable document database on the .NET stack. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien; it is a .NET-focused document database that comes with a fully functional .NET client API and supports LINQ.
    I have written a few articles on RavenDB, and you can read them here. Managed Extensibility Framework (MEF): Many people haven't realized the power of MEF. MEF lets you create extensible applications and provides a great solution to the runtime extensibility problem. I hope that .NET developers will adopt MEF more widely next year for their applications. You can get an excellent introduction to MEF from Anoop Madhusudanan's blog post "MEF or Managed Extensibility Framework – Creating a Zoo and Animals".

    Read the article

  • Fast Data - Big Data's Achilles' heel

    - by thegreeneman
    At OOW 2013 in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and discussed a number of customers deploying Oracle's Big Data / Fast Data solutions and in particular Oracle's NoSQL Database.  Since that time, there have been a large number of request seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.   Fast Data is a software solution stack that deals with one aspect of Big Data, high velocity.   The software in the Fast Data solution stack involves 3 key pieces and their integration:  Oracle Event Processing, Oracle Coherence, Oracle NoSQL Database.   All three of these technologies address a high throughput, low latency data management requirement.   Oracle Event Processing enables continuous query to filter the Big Data fire hose, enable intelligent chained events to real-time service invocation and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time to allow SQL statements to look for triggers on events like breach of weighted moving average on a real-time data stream.    Oracle Coherence is a distributed, grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory latency speed access.  It also has some special capabilities to deploy remote behavioral execution for "near data" processing.   The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent low latency reads.  For example, when large sensor networks are generating data that need to be captured while analysts are simultaneously extracting the data using range based queries for upstream analytics.  Another example might be storing cookies from user web sessions for ultra low latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis.  Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure thru clustered, always on, parallel processing in a shared nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location based personalization service. The question then becomes how do these things work together to deliver an end to end Fast Data solution.  The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together.   You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme speed cache overflow and retrieval.   Here is a great reference to how these two technologies are integrated and work together.  Coherence & Oracle NoSQL Database.   On the stream processing side, it is similar as with the Coherence case.  
As your sliding windows get larger, holding all the data in the stream can become difficult and out of band data may need to be offloaded into persistent storage.  OEP needs an extreme speed database like Oracle NoSQL Database to help it continue to perform for the real time loop while dealing with persistent spill in the data stream.  Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together.  OEP & Oracle NoSQL Database.

    Read the article

  • Yelp, Google's API for restaurants help

    - by chris
    OK, I have looked into this, and I'm not sure if anyone else has experience with it. I'm having tremendous difficulty with the Yelp and Google APIs. To help explain what I am trying to do, here is the concept of the website. We would pull restaurants based on the user's distance, and then randomize them weighted by restaurant quality, based on feedback from review websites (Yelp, Google, Urbanspoon, Zagat, OpenTable, Kudzu, Yahoo - it doesn't have to be all of them) and feedback from our own users (on the results page for the random restaurant, users can mark it as a good or bad recommendation). There's a lot we could calculate for our formula. The results will also depend on whether you're at home or at work. If you're at home you have more time to drive out to the city to grab dinner or lunch. If you're at work we would have to recommend restaurants nearby, as lunch is typically 30 minutes to an hour: a 30-minute lunch most likely requires takeout or quick service, while on an hour-long lunch break you could dine in at a local fine-dining restaurant. So in a nutshell: the user comes to the website, selects whether they're at home or at work, clicks submit, and we pick a random restaurant for them to go to. If they don't like it they can click retry and a new restaurant is shown. The issue I am having is using the APIs to gather all the restaurants in the US. I know it can be done, because there are similar websites/apps that pull the restaurants closest to you, such as Ness and Alfred (I believe there are two more, but I can't remember the names). Does anyone know how this can be accomplished?
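    To illustrate the randomize-by-quality idea independently of any particular review API, here is a rough Python sketch. The restaurant records, field names, and weighting formula are assumptions made purely for illustration; nothing here calls an actual Yelp or Google endpoint.

        import random

        # Hypothetical records, as they might look after being fetched from review APIs.
        restaurants = [
            {"name": "Thai Palace", "rating": 4.5, "votes": 120, "distance_km": 1.2},
            {"name": "Burger Hut", "rating": 3.8, "votes": 60, "distance_km": 0.4},
            {"name": "Le Jardin", "rating": 4.9, "votes": 200, "distance_km": 9.5},
        ]

        def pick_restaurant(restaurants, at_work):
            # At work, keep only nearby places; at home, allow a wider radius.
            max_km = 2.0 if at_work else 15.0
            nearby = [r for r in restaurants if r["distance_km"] <= max_km]
            if not nearby:
                return None
            # Weight by rating and (lightly) by vote count so better-reviewed
            # places come up more often while still allowing variety.
            weights = [r["rating"] ** 2 * (1 + r["votes"] / 100) for r in nearby]
            return random.choices(nearby, weights=weights, k=1)[0]

        print(pick_restaurant(restaurants, at_work=True)["name"])

    The user feedback ("good recommendation"/"bad recommendation") could then simply adjust the stored rating or vote count before the next draw.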

    Read the article

  • High availability virtual machines

    - by Jeremy
    I've been reading a lot about high availability virtualization, either via Hyper-V or VMware. In that context, high availability essentially means that the VM is hosted by a cluster of physical servers (nodes), so if one of the physical servers goes down, the VM can still be served by the other physical servers. So far so good: the physical cluster and the VM itself are highly available. However, the services actually being provided - say SQL Server, MSDTC, or any other service - run inside the VM image and the virtualized operating system. So I imagine there is still a point of failure at the virtual layer that isn't accounted for. Something could happen within the virtual machine itself that the physical cluster cannot account for, correct? In that instance the physical failover cluster (Hyper-V) or VMware host cannot fail over, because the issue is not with one of the servers in the physical cluster - failing over a physical node would not do any good. Does this necessitate building a virtual failover cluster on top of the physical one, or is that unnecessary? Alternatively, I suppose you could skip the physical clustering and just cluster at the virtual layer (child-based failover clustering), because that should still survive a physical failure. See the image below showing parent-based (left), child-based (right), and a combination (center). Is parent-based as far as you need to go, or is child-based more appropriate?

    Read the article

  • Optimal Data Structure for our own API

    - by vermiculus
    I'm in the early stages of writing an Emacs major mode for the Stack Exchange network; if you use Emacs regularly, this will benefit you in the end. In order to minimize the number of calls made to Stack Exchange's API (capped at 10000 per IP per day) and to just be a generally responsible citizen, I want to cache the information I receive from the network and store it in memory, waiting to be accessed again. I'm really stuck as to what data structure to store this information in. Obviously, it is going to be a list. However, as with any data structure, the choice must be determined by what data is being stored and what how it will be accessed. What, I would like to be able to store all of this information in a single symbol such as stack-api/cache. So, without further ado, stack-api/cache is a list of conses keyed by last update: `(<csite> <csite> <csite>) where <csite> would be (1362501715 . <site>) At this point, all we've done is define a simple association list. Of course, we must go deeper. Each <site> is a list of the API parameter (unique) followed by a list questions: `("codereview" <cquestion> <cquestion> <cquestion>) Each <cquestion> is, you guessed it, a cons of questions with their last update time: `(1362501715 <question>) (1362501720 . <question>) <question> is a cons of a question structure and a list of answers (again, consed with their last update time): `(<question-structure> <canswer> <canswer> <canswer> and ` `(1362501715 . <answer-structure>) This data structure is likely most accurately described as a tree, but I don't know if there's a better way to do this considering the language, Emacs Lisp (which isn't all that different from the Lisp you know and love at all). The explicit conses are likely unnecessary, but it helps my brain wrap around it better. I'm pretty sure a <csite>, for example, would just turn into (<epoch-time> <api-param> <cquestion> <cquestion> ...) Concerns: Does storing data in a potentially huge structure like this have any performance trade-offs for the system? I would like to avoid storing extraneous data, but I've done what I could and I don't think the dataset is that large in the first place (for normal use) since it's all just human-readable text in reasonable proportion. (I'm planning on culling old data using the times at the head of the list; each inherits its last-update time from its children and so-on down the tree. To what extent this cull should take place: I'm not sure.) Does storing data like this have any performance trade-offs for that which must use it? That is, will set and retrieve operations suffer from the size of the list? Do you have any other suggestions as to what a better structure might look like?
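    To make the nesting easier to see, here is a minimal sketch of the same cache shape written in Python purely for illustration (the real implementation would be Emacs Lisp, as described above); the site, question, and answer values are made up.

        # stack_api_cache mirrors stack-api/cache: entries keyed by last-update time.
        stack_api_cache = [
            (1362501715, {                      # <csite>: (last-update . <site>)
                "api_param": "codereview",      # unique API parameter for the site
                "questions": [
                    (1362501720, {              # <cquestion>: (last-update . <question>)
                        "question": {"id": 42, "title": "Example question"},
                        "answers": [
                            (1362501725, {"id": 99, "body": "Example answer"}),  # <canswer>
                        ],
                    }),
                ],
            }),
        ]

        # Culling old data: drop any site not updated since a cutoff time,
        # relying on the last-update timestamp at the head of each entry.
        cutoff = 1362500000
        stack_api_cache = [entry for entry in stack_api_cache if entry[0] >= cutoff]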

    Read the article

  • What are the specific uses of this hardware? [closed]

    - by vincentbelkin
    So I'm trying to learn more about databases and information systems. However, I need more explanation of the specific purpose of this hardware; I'm not sure whether these machines are servers or what: HP AlphaServer ES47 Tower Tru64 Unix Intel-Based B, Proliant ML350GA Two Intel-Based A Proliant ML570T03 Intel-Based X236 Intel Quad Core Xeon Category B X3500 Poweredge 1400 SC. By the way, I'm just a college student who wants to know more about this hardware.

    Read the article

  • MySQL December Webinars

    - by Bertrand Matthelié
    We'll be running 3 webinars next week and hope many of you will be able to join us: MySQL Replication: Simplifying Scaling and HA with GTIDs Wednesday, December 12, at 15.00 Central European TimeJoin the MySQL replication developers for a deep dive into the design and implementation of Global Transaction Identifiers (GTIDs) and how they enable users to simplify MySQL scaling and HA. GTIDs are one of the most significant new replication capabilities in MySQL 5.6, making it simple to track and compare replication progress between the master and slave servers. Register Now MySQL 5.6: Building the Next Generation of Web/Cloud/SaaS/Embedded Applications and Services Thursday, December 13, at 9.00 am Pacific Time As the world's most popular web database, MySQL has quickly become the leading cloud database, with most providers offering MySQL-based services. Indeed, built to deliver web-based applications and to scale out, MySQL's architecture and features make the database a great fit to deliver cloud-based applications. In this webinar we will focus on the improvements in MySQL 5.6 performance, scalability, and availability designed to enable DBA and developer agility in building the next generation of web-based applications. Register Now Getting the Best MySQL Performance in Your Products: Part IV, Partitioning Friday, December 14, at 9.00 am Pacific Time We're adding Partitioning to our extremely popular "Getting the Best MySQL Performance in Your Products" webinar series. Partitioning can greatly increase the performance of your queries, especially when doing full table scans over large tables. Partitioning is also an excellent way to manage very large tables. It's one of the best ways to build higher performance into your product's embedded or bundled MySQL, and particularly for hardware-constrained appliances and devices. Register Now We have live Q&A during all webinars so you'll get the opportunity to ask your questions!

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarassingly parallel problems typically consist of three basic parts: Read input data (from a file, database, tcp connection, etc.). Run calculations on the input data, where each calculation is independent of any other calculation. Write results of calculations (to a file, database, tcp connection, etc.). We can parallelize the program in two dimensions: Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter. Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out. This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: Given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel: Process the input file into raw data (lists/iterables of integers) Calculate the sums of the data, in parallel Output the sums Below is traditional, single-process bound Python program which solves these three tasks: #!/usr/bin/env python # -*- coding: UTF-8 -*- # basicsums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file. """ import csv import optparse import sys def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) return cli_parser def parse_input_csv(csvfile): """Parses the input CSV and yields tuples with the index of the row as the first element, and the integers of the row as the second element. The index is zero-index based. :Parameters: - `csvfile`: a `csv.reader` instance """ for i, row in enumerate(csvfile): row = [int(entry) for entry in row] yield i, row def sum_rows(rows): """Yields a tuple with the index of each input list of integers as the first element, and the sum of the list of integers as the second element. The index is zero-index based. :Parameters: - `rows`: an iterable of tuples, with the index of the original row as the first element, and a list of integers as the second element """ for i, row in rows: yield i, sum(row) def write_results(csvfile, results): """Writes a series of results to an outfile, where the first column is the index of the original row of data, and the second column is the result of the calculation. The index is zero-index based. 
:Parameters: - `csvfile`: a `csv.writer` instance to which to write results - `results`: an iterable of tuples, with the index (zero-based) of the original row as the first element, and the calculated result from that row as the second element """ for result_row in results: csvfile.writerow(result_row) def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # gets an iterable of rows that's not yet evaluated input_rows = parse_input_csv(in_csvfile) # sends the rows iterable to sum_rows() for results iterable, but # still not evaluated result_rows = sum_rows(input_rows) # finally evaluation takes place as a chain in write_results() write_results(out_csvfile, result_rows) infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, that needs to be fleshed out to address the parts in the comments: #!/usr/bin/env python # -*- coding: UTF-8 -*- # multiproc_sums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file, using multiple processes if desired. """ import csv import multiprocessing import optparse import sys NUM_PROCS = multiprocessing.cpu_count() def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) cli_parser.add_option('-n', '--numprocs', type='int', default=NUM_PROCS, help="Number of processes to launch [DEFAULT: %default]") return cli_parser def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # Parse the input file and add the parsed data to a queue for # processing, possibly chunking to decrease communication between # processes. # Process the parsed data as soon as any (chunks) appear on the # queue, using as many processes as allotted by the user # (opts.numprocs); place results on a queue for output. # # Terminate processes when the parser stops putting data in the # input queue. # Write the results to disk as soon as they appear on the output # queue. # Ensure all child processes have terminated. # Clean up files. infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all: Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read? Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results? 
Should I use a processes pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes, too? apply_async()? map_async()? imap()? imap_unordered()? Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
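    One possible way to approach this, sketched below in Python 3, is to let a process pool replace the explicit input/output queues from the skeleton: Pool.imap streams parsed rows to worker processes in chunks and yields results in input order, so reading, summing, and writing overlap without hand-managed queues. This is an illustrative sketch of that alternative, not a drop-in completion of multiproc_sums.py above.

        #!/usr/bin/env python
        # A simplified pool-based take on the parallel row-summing problem.
        import csv
        import multiprocessing
        import sys

        def sum_row(indexed_row):
            """Worker: receives (index, row of strings), returns (index, sum)."""
            i, row = indexed_row
            return i, sum(int(entry) for entry in row)

        def main(in_path, out_path, num_procs=multiprocessing.cpu_count()):
            with open(in_path, newline="") as infile, open(out_path, "w", newline="") as outfile:
                reader = csv.reader(infile)
                writer = csv.writer(outfile)
                with multiprocessing.Pool(processes=num_procs) as pool:
                    # imap feeds rows to the workers in chunks and yields results
                    # in input order, so parts 1 (read), 2 (sum), and 3 (write)
                    # overlap without explicit queue management.
                    for i, total in pool.imap(sum_row, enumerate(reader), chunksize=100):
                        writer.writerow([i, total])

        if __name__ == "__main__":
            main(sys.argv[1], sys.argv[2])

    Whether a pool is preferable to explicit producer/consumer processes depends on how much control you need over backpressure between the stages, which is exactly the trade-off the questions above are probing.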

    Read the article

  • Doing CRUD on XML using id attributes in C# ASP.NET

    - by Brandon G
    I'm a LAMP guy and ended up working this small news module for an asp.net site, which I am having some difficulty with. I basically am adding and deleting elements via AJAX based on the id. Before, I had it working based on the the index of a set of elements, but would have issues deleting, since the index would change in the xml file and not on the page (since I am using ajax). Here is the rundown news.xml <?xml version="1.0" encoding="utf-8"?> <news> <article id="1"> <title>Red Shield Environmental implements the PARCSuite system</title> <story>Add stuff here</story> </article> <article id="2"> <title>Catalyst Paper selects PARCSuite for its Mill-Wide Process...</title> <story>Add stuff here</story> </article> <article id="3"> <title>Weyerhaeuser uses Capstone Technology to provide Control...</title> <story>Add stuff here</story> </article> </news> Page sending del request: <script type="text/javascript"> $(document).ready(function () { $('.del').click(function () { var obj = $(this); var id = obj.attr('rel'); $.post('add-news-item.aspx', { id: id }, function () { obj.parent().next().remove(); obj.parent().remove(); } ); }); }); </script> <a class="del" rel="1">...</a> <a class="del" rel="1">...</a> <a class="del" rel="1">...</a> My functions protected void addEntry(string title, string story) { XmlDocument news = new XmlDocument(); news.Load(Server.MapPath("../news.xml")); XmlAttributeCollection ids = news.Attributes; //Create a new node XmlElement newelement = news.CreateElement("article"); XmlElement xmlTitle = news.CreateElement("title"); XmlElement xmlStory = news.CreateElement("story"); XmlAttribute id = ids[0]; int myId = int.Parse(id.Value + 1); id.Value = ""+myId; newelement.SetAttributeNode(id); xmlTitle.InnerText = this.TitleBox.Text.Trim(); xmlStory.InnerText = this.StoryBox.Text.Trim(); newelement.AppendChild(xmlTitle); newelement.AppendChild(xmlStory); news.DocumentElement.AppendChild(newelement); news.Save(Server.MapPath("../news.xml")); } protected void deleteEntry(int selectIndex) { XmlDocument news = new XmlDocument(); news.Load(Server.MapPath("../news.xml")); XmlNode xmlnode = news.DocumentElement.ChildNodes.Item(selectIndex); xmlnode.ParentNode.RemoveChild(xmlnode); news.Save(Server.MapPath("../news.xml")); } I haven't updated deleteEntry() and you can see, I was using the array index but need to delete the article element based on the article id being passed. And when adding an entry, I need to set the id to the last elements id + 1. Yes, I know SQL would be 100 times easier, but I don't have access so... help?

    Read the article

  • What git branching models actually work - the final question

    - by UncleCJ
    In our company we have successfully deployed git and we are currently using a simple trunk/release/hotfixes branching model. However, this has it's problems, I have some key issues of confusion in the community which would be awesome to have answered here. Maybe my hopes for an Alexander stroke are too great, quite possibly I'll decompose this question into more manageable issues, but here's my first shot. Workflows / branching models - below are the three main descriptions of this I have seen, but they are partially contradicting each other or don't go far enough to sort out the subsequent issues we've run into (as described below). Thus our team so far defaults to not so great solutions. Are you doing something better? gitworkflows(7) Manual Page (nvie) A successful Git branching model (reinh) A Git Workflow for Agile Teams Merging vs rebasing (tangled vs sequential history) - the bids on this are as confusing as it gets. Should one pull --rebase or wait with merging back to the mainline until your task is finished? Personally I lean towards merging since this preserves a visual illustration of on which base a task was started and finished, and I even prefer merge --no-ff for this purpose. It has other drawbacks however. Also many haven't realized the useful property of merging - that it isn't commutative (merging a topic branch into master does not mean merging master into the topic branch). I am looking for a natural workflow - sometimes mistakes happen because our procedures don't capture a specific situation with simple rules. For example a fix needed for earlier releases should of course be based sufficiently downstream to be possible to merge upstream into all branches necessary (is the usage of these terms clear enough?). However it happens that a fix makes it into the master before the developer realizes it should have been placed further downstream, and if that is already pushed (even worse, merged or something based on it) then the option remaining is cherry-picking, with it's associated perils... What simple rules like such do you use? Also in this is included the awkwardness of one topic branch necessarily excluding other topic branches (assuming they are branched from a common baseline). Developers don't want to finish a feature to start another one feeling like the code they just wrote is not there anymore How to avoid creating merge conflicts (due to cherry-pick)? What seems like a sure way to create a merge conflict is to cherry-pick between branches, they can never be merged again? Would applying the same commit in revert (how to do this?) in either branch possibly solve this situation? This is one reason I do not dare to push for a largely merge-based workflow. How to decompose into topical branches? - We realize that it would be awesome to assemble a finished integration from topic branches, but often work by our developers is not clearly defined (sometimes as simple as "poking around") and if some code has already gone into a "misc" topic, it can not be taken out of there again, according to the question above? How do you work with defining/approving/graduating/releasing your topic branches? Proper procedures like code review and graduating would of course be lovely, but we simply cannot keep things untangled enough to manage this - any suggestions? integration branches, illustration please? Vote and comment as much as you'd like, I'll try to keep the issue page clear and informative enough. Thanks! 
Below is a list of related topics on stackoverflow I have checked out: What are some good strategies to allow deployed applications to be hotfixable? Workflow description for git usage for in-house development Git workflow for corporate Linux kernel development How do you maintain development code and production code? (thanks for this PDF!) git releases management Git Cherry-pick vs Merge Workflow How to cherry-pick multiple commits How do you merge selective files with git-merge? How to cherry pick a range of commits and merge into another branch ReinH Git Workflow git workflow for making modifications you’ll never push back to origin Cherry-pick a merge Proper Git workflow for combined OS and Private code? Maintaining Project with Git Why cant Git merge file changes with a modified parent/master. Git branching / rebasing good practices When will "git pull --rebase" get me in to trouble?

    Read the article

  • Check Checkboxes dynamically

    - by Selom
    Hi, ive been dealing with this for some time now and need your help. well I have an array $arrayAmenities which contains a combination of the following data based on what is fetched from the database: Air Conditioned Bar Brunch Party Room Tea Room Terrace Valet I would like the application to dynamically check the following set of checkboxes based on the data contained in the array. With my code only one checkbox is checked based on the first data contained in the array. Can you please tell what Im missing? Thanks for answering. code: //get amenities one by one in order to set the checkboxes $arrayAmenities = explode(',', $rest_amenities ); $i=0; while(count($arrayAmenities) > $i) { var_dump($arrayAmenities[$i]); switch($arrayAmenities[$i]) { case 'Air Conditioned': $checkedAir = 'checked=true'; break; case 'Bar': $checkedBar = 'checked=true'; break; case 'Brunch': $checkedBru = 'checked=true'; break; case 'Party Room'; $checkedPar = 'checked=true'; break; } $i+=1; } } checkboxes <table cellpadding="0" cellspacing="0" style="font-size:10px"> <tr> <td style="border-top:1px solid #CCC;border-right:1px solid #CCC;border-left:1px solid #CCC; padding-left:5px ">Air Conditioned <input type="checkbox" name="air_cond" <?php print $checkedAir;?> value="Air Conditioned"></td> <td style="padding-left:10px; border-top:1px solid #CCC;border-right:1px solid #CCC;">Bar <input type="checkbox" name="bar" value="Bar" <?php print $checkedBar;?>></td> <td style="padding-left:10px; border-top:1px solid #CCC;border-right:1px solid #CCC; ">Brunch <input type="checkbox" name="brunch" value="Brunch" <?php print $checkedBru;?>></td> </tr> <tr> <td style="border-top:1px solid #CCC;border-right:1px solid #CCC; border-bottom:1px solid #CCC; border-left:1px solid #CCC; padding-left:5px">Party Room <input <?php print $checkedPar;?> type="checkbox" name="party_room" value="Party Room" ></td> <td style="padding-left:10px; border-top:1px solid #CCC;border-right:1px solid #CCC; border-bottom:1px solid #CCC;">Tea Room <input type="checkbox" name="tea_room" value="Tea Room" ></td> <td style="padding-left:10px; border-top:1px solid #CCC;border-right:1px solid #CCC; border-bottom:1px solid #CCC;">Terrace <input type="checkbox" name="terrace" value="Terrace"></td> </tr> <tr> <td colspan="3" style="border-bottom:1px solid #CCC; border-left:1px solid #CCC; border-right:1px solid #CCC; padding-left:5px">Valet <input type="checkbox" name="valet" value="Valet" ></td> </tr> </table>

    Read the article

  • Adding functionality to any TextReader

    - by strager
    I have a Location class which represents a location somewhere in a stream. (The class isn't coupled to any specific stream.) The location information will be used to match tokens to location in the input in my parser, to allow for nicer error reporting to the user. I want to add location tracking to a TextReader instance. This way, while reading tokens, I can grab the location (which is updated by the TextReader as data is read) and give it to the token during the tokenization process. I am looking for a good approach on accomplishing this goal. I have come up with several designs. Manual location tracking Every time I need to read from the TextReader, I call AdvanceString on the Location object of the tokenizer with the data read. Advantages Very simple. No class bloat. No need to rewrite the TextReader methods. Disadvantages Couples location tracking logic to tokenization process. Easy to forget to track something (though unit testing helps with this). Bloats existing code. Plain TextReader wrapper Create a LocatedTextReaderWrapper class which surrounds each method call, tracking a Location property. Example: public class LocatedTextReaderWrapper : TextReader { private TextReader source; public Location Location { get; set; } public LocatedTextReaderWrapper(TextReader source) : this(source, new Location()) { } public LocatedTextReaderWrapper(TextReader source, Location location) { this.Location = location; this.source = source; } public override int Read(char[] buffer, int index, int count) { int ret = this.source.Read(buffer, index, count); if(ret >= 0) { this.location.AdvanceString(string.Concat(buffer.Skip(index).Take(count))); } return ret; } // etc. } Advantages Tokenization doesn't know about Location tracking. Disadvantages User needs to create and dispose a LocatedTextReaderWrapper instance, in addition to their TextReader instance. Doesn't allow different types of tracking or different location trackers to be added without layers of wrappers. Event-based TextReader wrapper Like LocatedTextReaderWrapper, but decouples it from the Location object raising an event whenever data is read. Advantages Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. Can have multiple, independent Location objects (or other methods of tracking) tracking at once. Disadvantages Requires boilerplate code to enable location tracking. User needs to create and dispose the wrapper instance, in addition to their TextReader instance. Aspect-orientated approach Use AOP to perform like the event-based wrapper approach. Advantages Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. No need to rewrite the TextReader methods. Disadvantages Requires external dependencies, which I want to avoid. I am looking for the best approach in my situation. I would like to: Not bloat the tokenizer methods with location tracking. Not require heavy initialization in user code. Not have any/much boilerplate/duplicated code. (Perhaps) not couple the TextReader with the Location class. Any insight into this problem and possible solutions or adjustments are welcome. Thanks! (For those who want a specific question: What is the best way to wrap the functionality of a TextReader?) I have implemented the "Plain TextReader wrapper" and "Event-based TextReader wrapper" approaches and am displeased with both, for reasons mentioned in their disadvantages.

    Read the article

  • What are the Open Source alternatives to WPF/XAML?

    - by Evan Plaice
    If we've learned anything from HTML/CSS it's that, declarative languages (like XML) work best to describe User Interfaces because: It's easy to build code preprocessors that can template the code effectively. The code is in a well defined well structured (ideally) format so it's easy to parse. The technology to effectively parse or crawl an XML based source file already exists. The UIs scripted code becomes much simpler and easier to understand. It simple enough that designers are able to design the interface themselves. Programmers suck at creating UIs so it should be made easy enough for designers. I recently took a look at the meat of a WPF application (ie. the XAML) and it looks surprisingly familiar to the declarative language style used in HTML. It's apparent to me that the current state of desktop UI development is largely fractionalized, otherwise there wouldn't be so much duplicated effort in the domain of graphical user interface design (IE. GTK, XUL, Qt, Winforms, WPF, etc). There are 45 GUI platforms for Python alone It's seems reasonable to me that there should be a general purpose, open source, standardized, platform independent, markup language for designing desktop GUIs. Much like what the W3C made HTML/CSS into. WPF, or more specifically XAML seems like a pretty likely step in the right direction. Now that the 'browser wars' are over should we look forward to a future of 'desktop gui wars?' Note: This topic is relatively subjective in the attempt to be 'future-thinking.' I think that desktop GUI development in its current state sucks ((really)hard) and, even though WPF is still in it's infancy, it presents a likely solution to the problem. Update: Thanks a lot for the info, keep it comin'. Here's are the options I've gathered from the comments and answers. GladeXML Editor: Glade Interface Designer OS Platforms: All GUI Platform: GTK+ Languages: C (libglade), C++, C# (Glade#), Python, Ada, Pike, Perl, PHP, Eiffel, Ruby XRC (XML Resource) Editors: wxGlade, XRCed, wxDesigner, DialogBlocks (non-free) OS Platforms: All GUI Platform: wxWidgets Languages: C++, Python (wxPython), Perl (wxPerl), .NET (wx.NET) XML based formats that are either not free, not cross-platform, or language specific XUL Editor: Any basic text editor OS Platforms: Any OS running a browser that supports XUL GUI Platform: Gecko Engine? Languages: C++, Python, Ruby as plugin languages not base languages Note: I'm not sure if XUL deserves mentioning in this list because it's less of a desktop GUI language and more of a make-webapps-run-on-the-desktop language. Plus, it requires a browser to run. IE, it's 'DHTML for the desktop.' CookSwing Editor: Eclipse via WindowBuilder, NetBeans 5.0 (non-free) via Swing GUI Builder aka Matisse OS Platforms: All GUI Platform: Java Languages: Java only XAML (Moonlight) Editor: MonoDevelop OS Platforms: Linux and other Unix/X11 based OSes only GUI Platforms: GTK+ Languages: .NET Note: XAML is not a pure Open Source format because Microsoft controls its terms of use including the right to change the terms at any time. Moonlight can not legally be made to run on Windows or Mac. In addition, the only platform that is exempt from legal action is Novell. See this for a full description of what I mean.

    Read the article

  • Log a user in to an ASP.net application using Windows Authentication without using Windows Authentication

    - by Rising Star
    I have an ASP.net application I'm developing authentication for. I am using an existing cookie-based log on system to log users in to the system. The application runs as an anonymous account and then checks the cookie when the user wants to do something restricted. This is working fine. However, there is one caveat: I've been told that for each page that connects to our SQL server, I need to make it so that the user connects using an Active Directory account. because the system I'm using is cookie based, the user isn't logged in to Active Directory. Therefore, I use impersonation to connect to the server as a specific account. However, the powers that be here don't like impersonation; they say that it clutters up the code. I agree, but I've found no way around this. It seems that the only way that a user can be logged in to an ASP.net application is by either connecting with Internet Explorer from a machine where the user is logged in with their Active Directory account or by typing an Active Directory username and password. Neither of these two are workable in my application. I think it would be nice if I could make it so that when a user logs in and receives the cookie (which actually comes from a separate log on application, by the way), there could be some code run which tells the application to perform all network operations as the user's Active Directory account, just as if they had typed an Active Directory username and password. It seems like this ought to be possible somehow, but the solution evades me. How can I make this work? Update To those who have responded so far, I apologize for the confusion I have caused. The responses I've received indicate that you've misunderstood the question, so please allow me to clarify. I have no control over the requirement that users must perform network operations (such as SQL queries) using Active Directory accounts. I've been told several times (online and in meat-space) that this is an unusual requirement and possibly bad practice. I also have no control over the requirement that users must log in using the existing cookie-based log on application. I understand that in an ideal MS ecosystem, I would simply dis-allow anonymous access in my IIS settings and users would log in using Windows Authentication. This is not the case. The current system is that as far as IIS is concerned, the user logs in anonymously (even though they supply credentials which result in the issuance of a cookie) and we must programmatically check the cookie to see if the user has access to any restricted resources. In times past, we have simply used a single SQL account to perform all queries. My direct supervisor (who has many years of experience with this sort of thing) wants to change this. He says that if each user has his own AD account to perform SQL queries, it gives us more of a trail to follow if someone tries to do something wrong. The closest thing I've managed to come up with is using WIF to give the user a claim to a specific Active Directory account, but I still have to use impersonation because even still, the ASP.net process presents anonymous credentials to the SQL server. It boils down to this: Can I log users in with Active Directory accounts in my ASP.net application without having the users manually enter their AD credentials? (Windows Authentication)

    Read the article

  • [C#][XNA] Draw() 20,000 32 by 32 Textures or 1 Large Texture 20,000 Times

    - by Rudi
    The title may be confusing - sorry about that, it's a poor summary. Here's my dilemma. I'm programming in C# using the .NET Framework 4, and aiming to make a tile-based game with XNA. I have one large texture (256 pixels by 4096 pixels). Remember this is a tile-based game, so this texture is so massive only because it contains many tiles, which are each 32 pixels by 32 pixels. I think the experts will definitely know what a tile-based game is like. The orientation is orthogonal (like a chess board), not isometric. In the Game.Draw() method, I have two choices, one of which will be incredibly more efficient than the other. Choice/Method #1: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { SpriteBatch.Draw( MyLargeTexture, // One large 256 x 4096 texture new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(x, y, 32, 32), // Notice the source rectangle 'cuts out' 32 by 32 squares from the texture corresponding to the loop Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the first method is referencing one large texture many many times, each time using a small rectangle of this large texture to draw the appropriate tile image. Choice/Method #2: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { Texture2D tileTexture = map.GetTileTexture(x, y); // Getting a small 32 by 32 texture (different each iteration of the loop) SpriteBatch.Draw( tileTexture, new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(0, 0, tileTexture.Width, tileTexture.Height), // Notice the source rectangle uses the entire texture, because the entire texture IS 32 by 32 Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the second method is drawing many small textures many times. The Question: Which method and why? Personally, I would think it would be incredibly more efficient to use the first method. If you think about what that means for the tile array in a map (think of a large map with 2000 by 2000 tiles, let's say), each Tile object would only have to contain 2 integers, for the X and Y positions of the source rectangle in the one large texture - 8 bytes. If you use method #2, however, each Tile object in the tile array of the map would have to store a 32by32 Texture - an image - which has to allocate memory for the R G B A pixels 32 by 32 times - is that 4096 bytes per tile then? So, which method and why? First priority is speed, then memory-load, then efficiency or whatever you experts believe.

    Read the article

  • TDD - beginner problems and stumbling blocks

    - by Noufal Ibrahim
    While I've written unit tests for most of the code I've done, I only recently got my hands on a copy of TDD by example by Kent Beck. I have always regretted certain design decisions I made since they prevented the application from being 'testable'. I read through the book and while some of it looks alien, I felt that I could manage it and decided to try it out on my current project which is basically a client/server system where the two pieces communicate via. USB. One on the gadget and the other on the host. The application is in Python. I started off and very soon got entangled in a mess of rewrites and tiny tests which I later figured didn't really test anything. I threw away most of them and and now have a working application for which the tests have all coagulated into just 2. Based on my experiences, I have a few questions which I'd like to ask. I gained some information from http://stackoverflow.com/questions/1146218/new-to-tdd-are-there-sample-applications-with-tests-to-show-how-to-do-tdd but have some specific questions which I'd like answers to/discussion on. Kent Beck uses a list which he adds to and strikes out from to guide the development process. How do you make such a list? I initially had a few items like "server should start up", "server should abort if channel is not available" etc. but they got mixed and finally now, it's just something like "client should be able to connect to server" (which subsumed server startup etc.). How do you handle rewrites? I initially selected a half duplex system based on named pipes so that I could develop the application logic on my own machine and then later add the USB communication part. It them moved to become a socket based thing and then moved from using raw sockets to using the Python SocketServer module. Each time things changed, I found that I had to rewrite considerable parts of the tests which was annoying. I'd figured that the tests would be a somewhat invariable guide during my development. They just felt like more code to handle. I needed a client and a server to communicate through the channel to test either side. I could mock one of the sides to test the other but then the whole channel wouldn't be tested and I worry that I'd miss that. This detracted from the whole red/green/refactor rhythm. Is this just lack of experience or am I doing something wrong? The "Fake it till you make it" left me with a lot of messy code that I later spent a lot of time to refactor and clean up. Is this the way things work? At the end of the session, I now have my client and server running with around 3 or 4 unit tests. It took me around a week to do it. I think I could have done it in a day if I were using the unit tests after code way. I fail to see the gain. I'm looking for comments and advice from people who have implemented large non trivial projects completely (or almost completely) using this methodology. It makes sense to me to follow the way after I have something already running and want to add a new feature but doing it from scratch seems to tiresome and not worth the effort. P.S. : Please let me know if this should be community wiki and I'll mark it like that. Update 0 : All the answers were equally helpful. I picked the one I did because it resonated with my experiences the most. Update 1: Practice Practice Practice!
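    As a concrete illustration of the "mock one side of the channel" approach discussed above, here is a minimal Python sketch using unittest.mock. The Client class, its transport interface, and the ACK byte are hypothetical stand-ins, not code from the actual project; the point is only that injecting the transport lets the "client should be able to connect to server" item become a fast test without the real USB channel.

        import unittest
        from unittest import mock

        class Client:
            """Hypothetical client that talks to the server over an injected transport."""
            def __init__(self, transport):
                self.transport = transport

            def connect(self):
                self.transport.open()
                return self.transport.read(1) == b"\x06"   # assume the server ACKs with 0x06

        class ClientConnectTest(unittest.TestCase):
            def test_client_should_be_able_to_connect_to_server(self):
                fake_transport = mock.Mock()
                fake_transport.read.return_value = b"\x06"  # fake the server's ACK
                client = Client(fake_transport)
                self.assertTrue(client.connect())
                fake_transport.open.assert_called_once_with()

        if __name__ == "__main__":
            unittest.main()

    The obvious caveat, raised in the question itself, is that a faked transport never exercises the real channel, so an occasional end-to-end test over the actual pipe or socket is still needed.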

    Read the article

  • Windows App. Thread Aborting Issue

    - by Patrick
    I'm working on an application that has to make specific decisions based on files that are placed into a folder being watched by a file watcher. Part of this decision-making process involves renaming files before moving them off to another folder to be processed. Since I'm working with files of all different sizes, I created an object that checks the file in a separate thread to verify that it is "available", and when it is, it fires an event. When I run the rename code from inside this available event, it works.

        public void RenameFile_Test()
        {
            string psFilePath = @"C:\File1.xlsx";
            FileObject target = new FileObject(psFilePath);
            target.FileAvailable += new FileEventHandler(OnFileAvailable);
            target.FileUnAvailable += new FileEventHandler(OnFileUnavailable);
        }

        private void OnFileAvailable(object source, FileEventArgs e)
        {
            ((FileObject)source).RenameFile(@"C:\File2.xlsx");
        }

    The problem I'm running into is that when the extension of the source file differs from the extension of the rename-to file, I make a call to a conversion factory that returns a factory object based on the type of conversion, and the file is converted accordingly before the rename. When I run that particular piece of code in a unit test it works: the factory object is returned and the conversion happens correctly. But when I run it within the process, I get up to the

        moExcelApp = new Application();

    part of converting an .xls or .xlsx to a .csv, and I get a "Thread was being aborted" error. Any thoughts?

    Update: Here is a bit more information and a map of how the application currently works. The client application runs a FileSystemWatcher. Its OnCreated event creates a FileObject, passing in the path of the file. On construction the file is validated: if the file exists, then

        Thread toAvailableCheck = new Thread(new ThreadStart(AvailableCheck));
        toAvailableCheck.Start();

    The AvailableCheck method repeatedly tries to open a StreamReader on the file until the reader is either created or the number of attempts times out. If the reader is opened, it fires the FileAvailable event; if not, it fires the FileUnAvailable event, passing itself back in the event. The client application is wired to catch those events from inside the OnCreated event of the FileSystemWatcher. The OnFileAvailable method then calls the rename functionality, which contains the Excel interop call. If the file is only being renamed (not converted; the extensions stay the same), it does a move to change the old file name to the new one, and if it is a conversion, it runs a conversion factory object which returns the correct type of conversion based on the extensions of the source file and the destination file name. A simple rename works without a problem. If it is a conversion (the XLS-to-CSV object returned by the factory), the very first thing it does is create a new Excel Application object. That is where the application bombs. When I test the factory and the conversion/rename process outside of the thread, in its own unit test, the process works without a problem.

    Update: I tested the Excel interop inside a thread by doing this:

        [TestMethod()]
        public void ExcelInteropTest()
        {
            Thread toExcelInteropThreadTest = new Thread(new ThreadStart(Instantiate_App));
            toExcelInteropThreadTest.Start();
        }

        private void Instantiate_App()
        {
            Application moExcelApp = new Application();
            moExcelApp.Quit();
        }

    On the line where the Application is instantiated, I got a first-chance exception of type 'System.Threading.ThreadAbortException'. So I added

        toExcelInteropThreadTest.SetApartmentState(ApartmentState.MTA);

    after the thread instantiation and before the thread start call, and still got the same error. I'm getting the notion that I'm going to have to reconsider the design.
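
    For reference, here is a minimal sketch of the availability check described above, assuming it lives inside the FileObject class; the member names (msFilePath, the Raise* helpers) and the retry limits are assumptions, not the original code:

        // Requires: using System.IO; using System.Threading;
        private void AvailableCheck()
        {
            const int iMaxAttempts = 30;             // assumed limit: roughly 30 seconds of retries
            for (int i = 0; i < iMaxAttempts; i++)
            {
                try
                {
                    // Opening a reader throws while another process still holds the file.
                    using (StreamReader reader = new StreamReader(msFilePath))
                    {
                        RaiseFileAvailable();        // assumed helper that fires FileAvailable, passing this instance
                        return;
                    }
                }
                catch (IOException)
                {
                    Thread.Sleep(1000);              // file not available yet, wait and retry
                }
            }
            RaiseFileUnAvailable();                  // assumed helper that fires FileUnAvailable after timing out
        }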

    Read the article

  • Which of CouchDB or MongoDB suits my needs?

    - by vonconrad
    Where I work, we use Ruby on Rails to create both backend and frontend applications. Usually, these applications interact with the same MySQL database. It works great for a majority of our data, but we have one situation which I would like to move to a NoSQL environment. We have clients, and our clients have what we call "inventories"--one or more of them. An inventory can have many thousands of items. This is currently done through two relational database tables, inventories and inventory_items. The problems start when two different inventories have different parameters:

        # Inventory item from inventory 1, televisions
        {
          inventory_id: 1
          sku: 12345
          name: Samsung LCD 40 inches
          model: 582903-4
          brand: Samsung
          screen_size: 40
          type: LCD
          price: 999.95
        }

        # Inventory item from inventory 2, accomodation
        {
          inventory_id: 2
          sku: 48cab23fa
          name: New York Hilton
          accomodation_type: hotel
          star_rating: 5
          price_per_night: 395
        }

    Since we obviously can't use brand or star_rating as the column name in inventory_items, our solution so far has been to use generic column names such as text_a, text_b, float_a, int_a, etc., and introduce a third table, inventory_schemas. The tables now look like this:

        # Inventory schema for inventory 1, televisions
        {
          inventory_id: 1
          int_a: sku
          text_a: name
          text_b: model
          text_c: brand
          int_b: screen_size
          text_d: type
          float_a: price
        }

        # Inventory item from inventory 1, televisions
        {
          inventory_id: 1
          int_a: 12345
          text_a: Samsung LCD 40 inches
          text_b: 582903-4
          text_c: Samsung
          int_b: 40
          text_d: LCD
          float_a: 999.95
        }

    This has worked well... up to a point. It's clunky, it's unintuitive and it lacks scalability. We have to devote resources to set up inventory schemas. Using separate tables is not an option. Enter NoSQL. With it, we could let each and every item have its own parameters and still store them together. From the research I've done, it certainly seems like a great alternative for this situation. Specifically, I've looked at CouchDB and MongoDB. Both look great. However, there are a few other bits and pieces we need to be able to do with our inventory:

    We need to be able to select items from only one (or several) inventories.
    We need to be able to filter items based on their parameters (e.g. get all items from inventory 2 where type is 'hotel').
    We need to be able to group items based on parameters (e.g. get the lowest price from items in inventory 1 where brand is 'Samsung').
    We need to (potentially) be able to retrieve thousands of items at a time.
    We need to be able to access the data from multiple applications, both backend (to process data) and frontend (to display data).
    Rapid bulk insertion is desired, though not required.

    Based on the structure and the requirements, is either CouchDB or MongoDB suitable for us? If so, which one would be the better fit? Thanks for reading, and thanks in advance for answers.

    EDIT: One of the reasons I like CouchDB is that it would be possible for us in the frontend application to request data via JavaScript directly from the server after page load, and display the results without having to use any backend code whatsoever. This would lead to better page load times and less server strain, as the fetching/processing of the data would be done client-side.
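
    To make the filtering and grouping requirements above concrete, here is a rough sketch of how two of them might look against a MongoDB collection. It is only an illustration, written against the official MongoDB C# driver; the connection string, database, collection and field names are assumptions, and equivalent queries exist in the Ruby driver and the mongo shell (CouchDB would express them as map/reduce views instead):

        // Requires the MongoDB.Driver NuGet package.
        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;

        class InventoryQueries
        {
            static void Main()
            {
                // Hypothetical connection, database, and collection names.
                var client = new MongoClient("mongodb://localhost:27017");
                var items = client.GetDatabase("inventory_db")
                                  .GetCollection<BsonDocument>("inventory_items");

                // "Get all items from inventory 2 where type is 'hotel'."
                var hotelFilter = Builders<BsonDocument>.Filter.Eq("inventory_id", 2)
                                & Builders<BsonDocument>.Filter.Eq("accomodation_type", "hotel");
                var hotels = items.Find(hotelFilter).ToList();

                // "Get the lowest price from items in inventory 1 where brand is 'Samsung'."
                var samsungFilter = Builders<BsonDocument>.Filter.Eq("inventory_id", 1)
                                  & Builders<BsonDocument>.Filter.Eq("brand", "Samsung");
                var cheapest = items.Find(samsungFilter)
                                    .Sort(Builders<BsonDocument>.Sort.Ascending("price"))
                                    .Limit(1)
                                    .FirstOrDefault();

                Console.WriteLine("Hotels found: " + hotels.Count);
            }
        }

    Because every item would live in a single collection, inventory-specific fields such as star_rating or screen_size can simply be present or absent on each document, which would remove the need for the generic columns and the inventory_schemas table.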

    Read the article

  • Software Architecture: Quality Attributes

    Quality is what all software engineers should strive for when building a new system or adding new functionality. Dictionary.com ambiguously defines quality as a grade of excellence. Unfortunately, quality must be defined within the context of a situation, in that each engineer must extract quality attributes from a project's requirements. Because quality is defined by project requirements, the meaning of quality changes from project to project.

    Software architecture factors that indicate relevance and effectiveness

    The relevance and effectiveness of an architecture can vary based on the context in which it was conceived and the quality attributes it is required to meet. Typically, when evaluating an architecture for a specific system regarding relevance and effectiveness, the following questions should be asked:

    Does the architectural concept meet the needs of the system for which it was designed?
    Out of the competing architectures for a system, which one is the most suitable?

    Looking at the first question, regarding meeting the needs of the system for which it was designed: a system that answers yes to this question must meet all of its quality goals. This means that it consistently meets or exceeds the performance goals for the system. In addition, the system meets all the other required system attributes based on the system's requirements.

    The suitability of a system is based on several factors. In order for a project to be suitable, the necessary resources must be available to complete the task. Standard project resources include:

    Money
    Trained staff
    Time

    Life cycle factors that affect the system and design

    The development life cycle used on a project can drastically affect how a system's architecture is created, as well as influence its design. When using a waterfall software development life cycle (SDLC), each phase must be completed before the next can begin. This waterfall approach does not allow for changes in a system's architecture after that phase is completed, which can lead to major system issues when the architecture is not optimal because of missed quality attributes. This can occur, for example, when a project has poor requirements or makes misguided architectural decisions. Once the architecture phase is complete, the concepts established in it move on to the design phase, which is bound to use the concepts and guidelines defined in the previous phase regardless of any quality attributes the project still needs. If any issues arise during this phase regarding the selected architectural concepts, they cannot be corrected during the current project. This directly affects the design of the system because the proper qualities required for the project were not applied when the architectural concepts were approved. When this is identified, nothing can be done to fix the architectural issues, and the system design must use the existing architectural concepts, regardless of their missing quality properties, because the architectural concepts for the project cannot be altered. The decisions made in the design phase then proceed to the implementation phase, where the actual system is coded based on the approved architectural concepts, regardless of their architectural quality.
    Conversely, projects using a more iterative or agile methodology to implement a system have more flexibility to correct architectural decisions based on missing quality attributes. This is because each phase of the SDLC is executed more than once, so any issues identified in the architecture of a system can be corrected in the next architectural pass. The corresponding changes are then adjusted in the following design phase, so that when the project is completed the optimal architectural and design decisions are applied to the solution.

    Architecture factors that indicate functional suitability

    Systems that have functional shortcomings do not provide the proper functionality based on the project's driving quality attributes. In plain terms, the system does not live up to what the stakeholders require of it, as identified by the missing quality attributes and requirements. One way to prevent functional shortcomings is to test the project's architecture, design, and implementation against the project's driving quality attributes to ensure that none of the attributes were missed in any of the phases. Another way to ensure a system has functional suitability is to certify that all its requirements are fully articulated, so that there is no chance for misconceptions or misinterpretations by any stakeholder. This helps prevent issues in interpreting the system requirements during the initial architectural concept phase, the design phase, and the implementation phase.

    Consider the applicability of other architectural models

    When considering an architectural model for a project, it is also important to consider alternative architectural models to ensure that the selected model will meet the system's required functionality and quality attributes. I recently talked about a project I was working on, and a coworker suggested a different architectural approach that I had never considered. This new model allows for the same functionality offered by the existing model but allows for a higher-quality project because it fulfills more quality attributes. It is always important to seek alternatives prior to committing to an architectural model.

    Factors used to identify high-risk components

    A high-risk component can be defined as a component that fulfills two or more quality attributes for a system. An example of this can be seen in a web application that utilizes a remote database. One high-risk component in this system is the TCP/IP component, because it allows HTTP connections to be handled by a web server and also allows the server to connect to a remote database server so that it can import data into the system. This component supports both the data-assurance quality attribute and the accessibility quality attribute, because the system is available on the network. If for some reason the TCP/IP component were to fail, the web application would fail on both quality attributes, accessibility and data assurance, in that the web site would not be accessible and data could not be updated as needed.

    Summary

    As stated previously, quality is what all software engineers should strive for when building a new system or adding new functionality. The quality of a system can be directly determined by how closely its implementation matches its desired quality attributes.
    One way to ensure a higher-quality system is to enforce that all project requirements are fully articulated, so that no assumptions or misunderstandings can be made by any of the stakeholders. By doing this, a system has a better chance of becoming a high-quality system based on its quality attributes.

    Read the article

  • Speaker at the German Visual FoxPro Developer Conference 2004

    The following is an excerpt from the UniversalThread conference coverage of the German Visual FoxPro Developer Conference 2004, written by Hans-Otto Lochmann, Armin Neudert and myself.

    TRACK Active FoxPro Pages

    Back in 1996 Peter Herzog invented a FoxPro-based solution to provide intranet capabilities for one of his customers. At nearly the same time Rick Strahl had the same task and created West Wind Web Connection (WWWC). The fact that developers need a full Visual FoxPro development environment to create WWWC solutions was the starting point of a "personal sporting competition" for Peter to write his own solution. The main requirement, however, was that it should not rely on a full VFP version in order to run: the VFP runtime should be enough, and the source code has to be compiled and interpreted on the fly. So, as Microsoft had released Active Server Pages, a name for Peter's solution was found: Active FoxPro Pages (AFP). Over the years many drawbacks, design issues and technological hassles forced ProLib Software to refactor the product. This way many limitations, such as DCOM configuration, file-based information transfer between Web server and AFP, missing features (like upload forms or support for Web servers other than IIS) and limited extensibility, were eliminated. As a consequence, ProLib Software decided to completely rewrite Active FoxPro Pages in mid-2002. Christof Wollenhaupt, before his marriage known as Christof Lange, and Jochen Kirstätter had to solve this task. AFP 3.0 was officially released at the German DevCon in November 2002. Today AFP has six distributors worldwide and there is a lot more information available online than before version 3.0.

    Directly after a short welcome speech by Rainer Becker, Jochen Kirstätter - aka JoKi - opened today's AFP track, introduced the basic concepts of how Active FoxPro Pages works in general, explained the AFP terminology and every single component, and presented a small walk-through on how to write an AFP-based Web solution. His presentation slides themselves were actually an AFP Web application, which made it easy to integrate accompanying AFP samples on the fly. Additionally, it was shown that no Visual FoxPro development environment is needed to create a Web application: a simple text editor like Notepad, or any WYSIWYG editor on the market, can be used to fulfill a customer's requirements.

    The track also welcomed two new speakers - Nina Schwanzer and Bernhard Reiter. Both work at ProLib Software, and this year's conference was their first time as speakers. They did their job very well: the whole session was kind of a "ping pong" game, and the two complemented each other to keep the audience in suspense. First, they described typical requirements a modern desktop application should fulfill - online registration and activation, auto-update capabilities, or even a frontend to administer a Web application on a remote system via the internet - and explained how possible solutions like Web Services (using the SOAP interface), DCOM, and even .NET might address those requirements. But each of those approaches has its own drawbacks, such as complicated installation or configuration, or considerable download sizes. Next, they introduced a technology they developed and used in a customer's project: Active FoxPro Pages Remote Procedure Call (AFP RPC). [...]

    In the next session JoKi described how to extend Active FoxPro Pages. On the one hand AFP provides a plugin interface, and on the other hand any add-on for Visual FoxPro can be used as well.
    During the first half he spoke about the plugin interface and wrote a new AFP extension live - the DevCon plugin. Later he questioned each previous step and showed that a single AFP document may solve the problem as well; so developing extensions is only interesting if they are reusable and generic. At the end he talked about multiple interfaces for the same business logic, for instance a plain VFP class, a COM server, and .NET integration. Currently there are several specialized AFP extensions for sending mail, for using cryptographic routines (i.e. based on .NET classes), or for enhanced methods to handle HTML/XML strings.

    Rainer Becker and Peter Herzog introduced a new development for Visual Extend (VFX) - an AFP form builder. With this builder, turning a form designed with Visual FoxPro's form designer into an AFP Web form was a matter of seconds. The builder itself is currently in pre-release status and will be part of the VFX framework in the future. It was very impressive to see that the whole design of a form, as well as most of its functionality, was exported to a combination of HTML, JavaScript and Active FoxPro Pages. At half-time Jürgen "wOOdy" Wondzinski and JoKi changed places with Rainer and Peter, and presented some Web solutions built with AFP. [...]

    Visual FoxPro 9.0 and Linux

    Is Linux still a topic for Visual FoxPro developers, based on the activities during this year? In his session Jochen Kirstätter - aka JoKi - did not go through the technical steps and requirements of setting up and running FoxPro on a Linux client. Instead, he explained what Linux actually is and talked about the wide variety of distributions. In fact there are a lot of distributions around, but for several years now specialized ones have been available: live distributions (aka LiveCDs). The intention of a LiveCD is to run a full-featured Linux operating system on any personal computer directly from a bootable medium, like a CD, DVD, or even a USB memory stick, without installation on a hard disk. One of the first Linux LiveCDs was made by Klaus Knopper and is well known as Knoppix. Today, many other LiveCDs are based on the concepts of Knoppix. During the session Jochen booted Morphix, a very lightweight LiveCD, on his notebook and showed the attendees that testing and playing around with Linux is absolutely easy. Running a word-processing application dispelled most of the reservations the audience had.

    Okay, where is the part about FoxPro? Well, there are several scenarios in which a customer might require the use of Linux, and FoxPro can deal with all of them. One of the more common ones is a heterogeneous intranet with Windows clients and Linux servers, e.g. Windows XP Professional on the desktops and any Linux distribution on the servers. Even in this scenario there are two hidden variants. Why? Well, on the one hand there is a software package called Samba, which provides Windows server capabilities to a Linux system, and on the other hand there are several SQL servers for Linux, like PostgreSQL, DB2 and MySQL. Either way, FoxPro is able to deal with these scenarios, but as a developer you have to know what you are talking about with your customers. And even if there is no Windows operating system at all, you are still able to provide a FoxPro-based solution: using the wine library - wine stands for Wine Is Not an Emulator - you can run your VFP applications on Linux clients, too, but not without reading VFP's EULA.
    Licenses were also part of the session, and Jochen discussed the meaning of Open Source and how it is misunderstood by many developers. Open Source does not mean free of charge; instead, it stands for access to the source code of an application or tool. And VFP itself is one of the best examples to explain Open Source, since for years VFP has shipped with the xSource.zip archive. [...]

    Read the article

< Previous Page | 235 236 237 238 239 240 241 242 243 244 245 246  | Next Page >