Search Results

Search found 1139 results on 46 pages for 'hierarchical trees'.


  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all, I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following this resource http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site:

        while read ref
        do
            #echo "Ref updated:"
            #echo $ref -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git --work-tree=c:/temp/BLAH checkout -f master
                        echo "Updated master"
                        ;;
                    refs/heads/testbranch )
                        git --work-tree=c:/temp/BLAH2 checkout -f testbranch
                        echo "Updated testbranch"
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the currently checked-out branch head, and this approach probably has the potential to confuse it to no end. So a few questions: IS this safe? Would a better approach be to have my base repository be the test site repository (with a corresponding working directory), and then have that repository push changes to a new live site repository, which has a working directory corresponding to the live site base? This would also allow me to move production to a different server and keep the deployment chain intact. Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites? As an additional note in light of Vi's answer, is there a good way to do this that would handle deletions without mucking with the file system much? Thank you, -Walt

    PS - The script I came up with for the multiple repos (and am using unless I hear better) is as follows:

        sitename=`basename \`pwd\``
        while read ref
        do
            #echo "Ref updated:"
            #echo $ref -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git checkout -q -f master
                        if [ $? -eq 0 ]; then
                            echo "Test Site checked out properly"
                        else
                            echo "Failed to checkout test site!"
                        fi
                        ;;
                    refs/heads/live-site )
                        git push -q ../Live/$sitename live-site:master
                        if [ $? -eq 0 ]; then
                            echo "Live Site received updates properly"
                        else
                            echo "Failed to push updates to Live Site"
                        fi
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive:

        git checkout -f
        if [ $? -eq 0 ]; then
            echo "Live site `basename \`pwd\`` checked out successfully"
        else
            echo "Live site failed to checkout"
        fi
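
    For comparison, here is a rough Python sketch of the same branch-dispatch idea (read the old/new/ref triples from stdin, then check each branch out into its own work tree). It is not the poster's setup; the WORK_TREES paths and branch names are placeholders.

        #!/usr/bin/env python
        # Minimal post-receive sketch: dispatch each pushed ref to its own work tree.
        # The WORK_TREES mapping is hypothetical -- adjust it to your own layout.
        import subprocess
        import sys

        WORK_TREES = {
            "refs/heads/master": "/var/www/test-site",      # assumed test work tree
            "refs/heads/live-site": "/var/www/live-site",   # assumed live work tree
        }

        for line in sys.stdin:
            parts = line.split()
            if len(parts) != 3:
                continue
            old_sha, new_sha, ref = parts
            work_tree = WORK_TREES.get(ref)
            if work_tree is None:
                print("No update known for %s" % ref)
                continue
            branch = ref.rsplit("/", 1)[-1]
            # GIT_DIR is already set when the hook runs; --work-tree points the checkout elsewhere.
            status = subprocess.call(["git", "--work-tree", work_tree, "checkout", "-f", branch])
            print("Updated %s -> %s (exit %d)" % (branch, work_tree, status))

        print("Post-receive updates complete")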

    Read the article

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third party web sites. This is its structure: id, site_id, unixtime, unixtime_last, ip_address, uid There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid There are many different types of ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as determining if this is a new visitor or a returning visitor. Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient? I don't really understand B-trees besides some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? Because I considered having the site_id being the second column of the index for both ip_address and uid but I think that would make the index less efficient since the IP and UID are going to vary more than the site ID will, because we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis. I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine if this is a new visitor to this site_id or not. The query would be something like: select id from sessions where uid = 'value' and site_id = 123 limit 1 ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't be super fast necessarily, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID. Any insight on making this as efficient as possible would be appreciated :) Update - this is a MyISAM table with MySQL 5.0. My concerns are both with performance as well as storage space. This table is both read and write heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.
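
    To make the leading-column discussion concrete, here is a small, hedged sketch using Python's sqlite3 module rather than the poster's MySQL/MyISAM setup; the point is only that each composite index puts site_id first, so the "have we seen this uid on this site?" probe can seek straight to the matching range and stop after one row.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE sessions (
                id INTEGER PRIMARY KEY,
                site_id INTEGER,
                unixtime INTEGER,
                unixtime_last INTEGER,
                ip_address TEXT,
                uid TEXT
            )
        """)
        # Composite indexes with site_id leading, mirroring the layout in the question.
        cur.execute("CREATE INDEX idx_site_time ON sessions (site_id, unixtime)")
        cur.execute("CREATE INDEX idx_site_ip   ON sessions (site_id, ip_address)")
        cur.execute("CREATE INDEX idx_site_uid  ON sessions (site_id, uid)")

        # The 'is this a returning visitor for this site?' probe from the question:
        cur.execute("SELECT id FROM sessions WHERE uid = ? AND site_id = ? LIMIT 1", ("value", 123))
        print(cur.fetchone())  # None here, since the table is empty

    Because both columns of the WHERE clause are in the (site_id, uid) index, the engine can seek directly to the matching entries instead of scanning every row that shares the uid.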

    Read the article

  • TicTacToe AI Making Incorrect Decisions

    - by Chris Douglass
    A little background: as a way to learn multi-node trees in C++, I decided to generate all possible TicTacToe boards and store them in a tree such that the branch beginning at a node contains all boards that can follow from that node, and the children of a node are the boards that follow in one move. After that, I thought it would be fun to write an AI to play TicTacToe using that tree as a decision tree. TTT is a solved game where a perfect player will never lose, so it seemed an easy AI to code for my first time trying an AI.

    When I first implemented the AI, I went back and added two fields to each node upon generation: the number of times X will win and the number of times O will win in all children below that node. I figured the best solution was to simply have my AI, on each move, choose and go down the subtree where it wins the most times. Then I discovered that while it plays perfectly most of the time, I found ways where I could beat it. It wasn't a problem with my code, simply a problem with the way I had the AI choose its path. Then I decided to have it choose the subtree with either the maximum wins for the computer or the maximum losses for the human, whichever was more. This made it perform BETTER, but still not perfectly. I could still beat it.

    So I have two ideas and I'm hoping for input on which is better:

    1) Instead of maximizing the wins or losses, I could assign values of 1 for a win, 0 for a draw, and -1 for a loss. Then choosing the subtree with the highest value will be the best move, because that next node can't be a move that results in a loss. It's an easy change in the board generation, but it retains the same search space and memory usage. Or...

    2) During board generation, if there is a board such that either X or O will win on their next move, only the child that prevents that win will be generated. No other child nodes will be considered, and generation will proceed as normal after that. It shrinks the size of the tree, but then I have to implement an algorithm to determine if there is a one-move win, and I think that can only be done in linear time (making board generation a lot slower, I think?).

    Which is better, or is there an even better solution?
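
    Option 1 is essentially minimax with 1/0/-1 scores. As a board-only illustration (independent of the question's pregenerated tree), here is a minimal Python minimax for tic-tac-toe; the board is a flat list of nine cells holding "X", "O" or None.

        LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

        def winner(board):
            for a, b, c in LINES:
                if board[a] is not None and board[a] == board[b] == board[c]:
                    return board[a]
            return None

        def minimax(board, player):
            """Return (score, move) from player's point of view: 1 win, 0 draw, -1 loss."""
            won = winner(board)
            if won == player:
                return 1, None
            if won is not None:
                return -1, None
            moves = [i for i, cell in enumerate(board) if cell is None]
            if not moves:
                return 0, None  # draw
            best = (-2, None)
            other = "O" if player == "X" else "X"
            for m in moves:
                board[m] = player
                score, _ = minimax(board, other)   # opponent's best reply...
                board[m] = None
                if -score > best[0]:               # ...negated back to our perspective
                    best = (-score, m)
            return best

        # X must block O's column threat at square 7; best achievable outcome is a draw.
        board = ["X", "O", None,
                 None, "O", None,
                 None, None, "X"]
        print(minimax(board, "X"))  # -> (0, 7)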

    Read the article

  • Arguments for moving from LINQtoSQL to Nhibernate?

    - by sah302
    Backstory: Hi all, I just spent a lot of time reading many of the LINQ vs NHibernate threads here and on other sites. I work in a small development team of 4 people and we don't even really have any super-experienced developers. We work for a small company that has a lot of technical needs but not enough developers to implement them (and hiring more is out of the question right now). Typically our projects (which individually are fairly small) have been coded separately and weren't really layered in any way; code wasn't re-used, there were no class libraries, and we just use the LINQtoSQL .dbml files for our projects. We really don't even use objects but pass around values and such; the only time we use objects is when inserting into a database (heck, not even when querying, since you don't need to assign the result to a type and can just bind to a gridview). Despite all this, as I said, our company has a lot of technical needs; no one could come to us for a year and we would still have plenty of work implementing requested features. Well, I have decided to change that a bit, first by creating class libraries and actually adding layers to our applications. I am trying to meet these guys halfway by still using LINQtoSQL as the ORM and still using VB as the language. However, I am finding it a b***h of a time dealing with so many things in LINQtoSQL that I found easy in NHibernate (automatic handling of the session, criteria creation being easier than expression trees, generic and dynamic querying being easier, etc.).

    So... Question: How can I convince my lead developers and other senior programmers that switching to NHibernate is a good thing? That being in control of our domain objects is a good thing? That being able to implement interfaces is good? I've tried explaining the advantages of this before, but they don't see them because they've never programmed in a truly OO and layered way. Also, one of the counter-arguments I can see coming is that sqlMetal generates those classes automatically and therefore saves a lot of time. I can't really counter that other than saying that spending more time on infrastructure to make it more scalable and flexible is good, but they can't see how. Again, I know the features and advantages of each (well enough, I believe), but I need arguments applicable to my context, hence why I provided the context. I just am not a very good arguer, I guess.

    (Caveat: For all the LINQtoSQL lovers, I may just not be super proficient with LINQ, but I find it very cumbersome that you are required to download an extra library for dynamic queries, which doesn't by default support guid comparisons, and I also find the way of updating entities to be cumbersome in terms of data context management, so it could just be that I suck, hehe.)

    Read the article

  • Closure Tables - Is this enough data to display a tree view?

    - by James Pitt
    Here is the table I have created by testing the closure table method:

        | id  | parentId | childId | hops |
        | 270 | 6        | 6       | 0    |
        | 271 | 7        | 7       | 0    |
        | 272 | 8        | 8       | 0    |
        | 273 | 9        | 9       | 0    |
        | 276 | 10       | 10      | 0    |
        | 281 | 9        | 10      | 1    |
        | 282 | 7        | 9       | 1    |
        | 283 | 7        | 10      | 2    |
        | 285 | 7        | 8       | 1    |
        | 286 | 6        | 7       | 1    |
        | 287 | 6        | 9       | 2    |
        | 288 | 6        | 10      | 3    |
        | 289 | 6        | 8       | 2    |
        | 293 | 6        | 9       | 1    |
        | 294 | 6        | 10      | 2    |

    I am trying to create a simple tree of this using PHP. There does not seem to be enough data to create the tree. For example, when I look purely at parentId = 6:

        - Part 6
          - Part 7
            - ?
            - ?
          - Part 9
            - ?
            - ?

    We know that parts 8 and 10 exist below Part 7 or 9, but not which. We know that part 10 exists at both 3 and 4 nodes deep, but where? If I look at other data in the table it is possible to tell that it should be:

        - Part 6
          - Part 7
            - Part 9
              - Part 10
          - Part 9
            - Part 10

    I thought one of the benefits of closure tables was that there was no need for recursive queries? Could you help explain what I am doing wrong?

    EDIT: For clarification, this is a mapping table. There is another table called "parts" which has a column called part_id that correlates to both the parentId and childId columns in the "closure" table. The "id" column in the table above (closure) is just for the purposes of maintaining a primary key; it is not really necessary. The method I have used to create this closure table is described in the following article: http://dirtsimple.org/2010/11/simplest-way-to-do-tree-based-queries.html

    EDIT2: It can have two and three hops. I will explain more easily by assigning names to the items. Part 6 = Bicycle, Part 7 = Gears, Part 8 = Chain, Part 9 = Bolt, Part 10 = Nut. Nut is part of Bolt. The Bolt and Nut combo exists directly within Bicycle and within Gears, which is part of Bicycle. In relation to what method to use, I have looked at Adjacency, Edges, Enum Paths, Closures, DAGs (networks) and the Nested Set Model. I am still trying to work out what is what, but this is an extremely complex component database where there are multiple parents and any modification to a sub-tree must propagate through the other trees. More importantly, there will be insertions, deletions and tree views during which I wish to avoid recursion, even at the cost of database space and query time during entry.
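
    For reference, the usual way a tree view is drawn from a closure table is to use only the hops = 1 rows as the adjacency (direct parent/child) list, with the deeper rows reserved for fast subtree queries. A hedged Python sketch of that, using the part names from EDIT2, is below; it assumes each child appears once per direct parent, which is exactly where a duplicated sub-assembly like Bolt/Nut becomes ambiguous if only part ids are stored.

        # (parentId, childId, hops) rows taken from the table in the question.
        rows = [
            (6, 7, 1), (6, 9, 1), (7, 8, 1), (7, 9, 1), (9, 10, 1),    # direct edges (hops = 1)
            (6, 8, 2), (6, 9, 2), (6, 10, 2), (6, 10, 3), (7, 10, 2),  # deeper paths, used for subtree queries
        ]

        # Build an adjacency list from the hops = 1 rows only.
        children = {}
        for parent, child, hops in rows:
            if hops == 1:
                children.setdefault(parent, []).append(child)

        names = {6: "Bicycle", 7: "Gears", 8: "Chain", 9: "Bolt", 10: "Nut"}

        def print_tree(node, depth=0):
            print("  " * depth + names.get(node, str(node)))
            for child in children.get(node, []):
                print_tree(child, depth + 1)

        # Prints Bicycle > Gears > (Chain, Bolt > Nut) plus the second Bolt > Nut directly under Bicycle.
        print_tree(6)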

    Read the article

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an HTML input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of Wikipedia page titles. For this service, speed is obviously of utmost importance, as responsiveness of the web page is important to the user experience.

    The current implementation simply loads all concepts into memory in a sorted set and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It is currently running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementation?

    Edits: @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself.

    @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data.

    @Jason Day: My original implementation of this was to use a Trie, but the memory bloat with that was actually worse than the sorted set due to the large number of object references needed. I'll read up on ternary search trees to see if they could be of use.
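
    The "sorted set, log(n) seek, then walk the tail" approach described above looks roughly like this in Python; it is only a sketch of the lookup logic, not the poster's Java service, and the concept list is made up.

        import bisect

        # Sorted list of (lower-cased) completion strings -- stands in for the concept table.
        concepts = sorted(["jeff atwood", "microsoft", "stackoverflow.com", "stack exchange"])

        def complete(prefix, limit=10):
            """Return up to `limit` entries that start with `prefix` (binary search + tail scan)."""
            prefix = prefix.lower()
            start = bisect.bisect_left(concepts, prefix)
            out = []
            for entry in concepts[start:start + limit]:
                if not entry.startswith(prefix):
                    break
                out.append(entry)
            return out

        print(complete("sta"))  # ['stack exchange', 'stackoverflow.com']

    A ternary search tree or a memory-mapped sorted file can serve the same prefix lookup pattern while keeping far less of it on the heap.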

    Read the article

  • If Then Statement Condition Being Ignored With Optimisations On

    - by Matma
    I think I'm going mad, but can someone show me what I'm missing? It must be something stupidly simple; I just can't see the wood for the trees. BOTH sides of this If Then Else statement are being executed? I've tried commenting out the true side and moving the condition to a separate variable, with the same result. However, if I explicitly set the condition to 1=0 or 1=1 then the If Then statement operates as I would expect, i.e. only executing one side of the equation... The only time I've seen this sort of thing is when the compiler has crashed and is no longer compiling (without any visible indication that it's not), but I've restarted Studio with the same results, and I've cleaned the solution, built and rebuilt with no change? Please show me the stupid mistake I'm making. Using VS2005, if it matters.

        Dim dset As DataSet = New DataSet
        If (CboCustomers.SelectedValue IsNot Nothing) AndAlso (CboCustomers.SelectedValue <> "") Then
            Dim Sql As String = "Select sal.SalesOrderNo As SalesOrder,cus.CustomerName,has.SerialNo, convert(varchar,sal.Dateofpurchase,103) as Date from [dbo].[Customer_Table] as cus " & _
                " inner join [dbo].[Hasp_table] as has on has.CustomerID=cus.CustomerTag " & _
                " inner join [dbo].[salesorder_table] as sal On sal.Hasp_ID =has.Hasp_ID Where cus.CustomerTag = '" & CboCustomers.SelectedValue.ToString & "'"
            Dim dap As SqlDataAdapter = New SqlDataAdapter(Sql, FormConnection)
            dap.Fill(dset, "dbo.Customer_Table")
            DGCustomer.DataSource = dset.Tables("dbo.Customer_Table")
        Else
            Dim erm As String = "wtf"
        End If

    EDIT: I have found that this has something to do with the release config settings I'm using; I'm guessing it's the optimisations bit. Does anyone know of any utils/addons for VS that show whether a line has been optimised out? Delphi, my former language, showed blue dots in the left margin to indicate that a line was compiled in; no dot meant it wasn't. Is there anything like that for VS? Alternatively, can someone explain how optimisations would affect this simple If statement, causing it to run both sides?

    EDIT2: Using this thread as possible causes/solutions: http://stackoverflow.com/questions/2135509/bug-only-occurring-when-compile-optimization-enabled. It does the same with release = optimisations on, x86, x64 and AnyCPU. It goes away with optimisations off. I'm using VS2005 on an x64 Win7 machine, if that matters. Thanks

    Read the article

  • How do I do high quality scaling of a image?

    - by pbhogan
    I'm writing some code to scale a 32-bit RGBA image in C/C++. I have written a few attempts that have been somewhat successful, but they're slow and, most importantly, the quality of the resized image is not acceptable. I compared the same image scaled by OpenGL (i.e. my video card) and by my routine, and it's miles apart in quality. I've Google Code Searched and scoured the source trees of anything I thought would shed some light (SDL, Allegro, wxWidgets, CxImage, GD, ImageMagick, etc.), but usually their code is either convoluted and scattered all over the place or riddled with assembler and little or no comments. I've also read multiple articles on Wikipedia and elsewhere, and I'm just not finding a clear explanation of what I need. I understand the basic concepts of interpolation and sampling, but I'm struggling to get the algorithm right. I do NOT want to rely on an external library for one routine and have to convert to their image format and back. Besides, I'd like to know how to do it myself anyway. :) I have seen a similar question asked on Stack Overflow before, but it wasn't really answered in this way; I'm hoping there's someone out there who can help nudge me in the right direction. Maybe point me to some articles or pseudo code... anything to help me learn and do.

    Here's what I'm looking for:
    1. No assembler (I'm writing very portable code for multiple processor types).
    2. No dependencies on external libraries.
    3. I am primarily concerned with scaling DOWN, but will also need to write a scale-up routine later.
    4. Quality of the result and clarity of the algorithm are most important (I can optimize it later).

    My routine essentially takes the following form: DrawScaled( uint32 *src, uint32 *dst, src_x, src_y, src_w, src_h, dst_x, dst_y, dst_w, dst_h ); Thanks!

    UPDATE: To clarify, I need something more advanced than a box resample for downscaling, which blurs the image too much. I suspect what I want is some kind of bicubic (or other) filter that is somewhat the reverse of a bicubic upscaling algorithm (i.e. each destination pixel is computed from all contributing source pixels, combined with a weighting algorithm that keeps things sharp).

    EXAMPLE: Here's an example of what I'm getting from the wxWidgets BoxResample algorithm vs. what I want on a 256x256 bitmap scaled to 55x55. And finally: the original 256x256 image.
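
    Not the bicubic filter being asked for, but as a sketch of the "every contributing source pixel, weighted" structure that any good downscaler shares, here is a single-channel area-weighted resampler in plain Python; a sharper kernel (bicubic, Lanczos) would slot in where the coverage weights wx and wy are computed.

        def downscale_channel(src, src_w, src_h, dst_w, dst_h):
            """Area-weighted downscale of one channel; src is a flat list of src_w*src_h values."""
            dst = [0.0] * (dst_w * dst_h)
            x_ratio = src_w / float(dst_w)
            y_ratio = src_h / float(dst_h)
            for dy in range(dst_h):
                y0, y1 = dy * y_ratio, (dy + 1) * y_ratio
                for dx in range(dst_w):
                    x0, x1 = dx * x_ratio, (dx + 1) * x_ratio
                    total, weight_sum = 0.0, 0.0
                    for sy in range(int(y0), int(min(y1, src_h - 1)) + 1):
                        wy = min(y1, sy + 1) - max(y0, sy)  # fraction of source row covered
                        if wy <= 0:
                            continue
                        for sx in range(int(x0), int(min(x1, src_w - 1)) + 1):
                            wx = min(x1, sx + 1) - max(x0, sx)  # fraction of source column covered
                            if wx <= 0:
                                continue
                            total += src[sy * src_w + sx] * wx * wy
                            weight_sum += wx * wy
                    dst[dy * dst_w + dx] = total / weight_sum
            return dst

        # 4x4 gradient down to 2x2: each output pixel averages a 2x2 block.
        src = [float(v) for v in range(16)]
        print(downscale_channel(src, 4, 4, 2, 2))  # [2.5, 4.5, 10.5, 12.5]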

    Read the article

  • Help with C# program design implementation: multiple array of lists or a better way?

    - by Bob
    I'm creating a 2D tile-based RPG in XNA and am in the initial design phase. I was thinking of how I want my tile engine to work and came up with a rough sketch. Basically I want a grid of tiles, but at each tile location I want to be able to add more than one tile and have an offset. I'd like this so that I could do something like add individual trees on the world map to give more flair. Or set bottles on a bar in some town without having to draw a bunch of different bar tiles with varying bottles. But maybe my reach is greater than my grasp. I went to implement the idea and had something like this in my Map object: List<Tile>[,] Grid; But then I thought about it. Let's say I had a world map of 200x200, which would actually be pretty small as far as RPGs go. That would amount to 40,000 Lists. To my mind I think there has to be a better way. Now this IS pre-mature optimization. I don't know if the way I happen to design my maps and game will be able to handle this, but it seems needlessly inefficient and something that could creep up if my game gets more complex. One idea I have is to make the offset and the multiple tiles optional so that I'm only paying for them when needed. But I'm not sure how I'd do this. A multiple array of objects? object[,] Grid; So here's my criteria: A 2D grid of tile locations Each tile location has a minimum of 1 tile, but can optionally have more Each extra tile can optionally have an x and y offset for pinpoint placement Can anyone help with some ideas for implementing such a design (don't need it done for me, just ideas) while keeping memory usage to a minimum? If you need more background here's roughly what my Map and Tile objects amount to: public struct Map { public Texture2D Texture; public List<Rectangle> Sources; //Source Rectangles for where in Texture to get the sprite public List<Tile>[,] Grid; } public struct Tile { public int Index; //Where in Sources to find the source Rectangle public int X, Y; //Optional offsets }
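
    One way to avoid allocating 40,000 List objects up front is a sparse map keyed by grid coordinate that only stores the cells which actually hold extra tiles. The following is a hedged Python sketch of that shape (the question's engine is C#/XNA, and the class and field names here are made up for illustration).

        # Sparse tile map: only cells that hold something pay for storage.
        class TileMap:
            def __init__(self, width, height):
                self.width, self.height = width, height
                self.cells = {}  # (x, y) -> list of (tile_index, offset_x, offset_y)

            def add_tile(self, x, y, tile_index, offset_x=0, offset_y=0):
                self.cells.setdefault((x, y), []).append((tile_index, offset_x, offset_y))

            def tiles_at(self, x, y):
                # Empty cells cost nothing; they simply have no entry.
                return self.cells.get((x, y), [])

        world = TileMap(200, 200)
        world.add_tile(10, 12, tile_index=3)                           # base tile for the cell
        world.add_tile(10, 12, tile_index=7, offset_x=4, offset_y=-2)  # a tree nudged off-grid
        print(world.tiles_at(10, 12))
        print(len(world.cells))  # 1 -- not 40,000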

    Read the article

  • dijit/form/Select broken in Internet Explorer using Esri Javascript 3.7

    - by disuse
    After developing a web map app in Firefox, I tested my code in Internet Explorer (company standard) to discover that the dijit/form/Select is misbehaving using the latest Esri JavaScript v3.7. The issue I am seeing is that the Select will not update/change from the first option in the list when using v3.7. If I bump the version down to 3.6, it works as expected. I've tried IE browser modes from 7 to 10 and am experiencing the same behavior between all of them. Can someone confirm they are experiencing the same thing?

    Example in 3.7 - http://jsbin.com/aVIsApO/1/edit
    Example in 3.6 - http://jsbin.com/odIxETu/7/edit

    Code block:

        var url = "http://services.arcgis.com/V6ZHFr6zdgNZuVG0/ArcGIS/rest/services/Street_Trees/FeatureServer/0";
        var frmTrees;
        require([
            "esri/tasks/query", "esri/tasks/QueryTask", "dojo/dom-construct", "dijit/form/Select",
            "dojo/parser", "dijit/registry", "dojo/on", "dojo/ready", "dojo/_base/connect", "dojo/domReady!"
        ], function( Query, QueryTask, domConstruct, Select, parser, registry, on, ready, connect ) {
            ready(function() {
                frmTrees = registry.byId("trees");
                var qt = new QueryTask(url);
                var query = new Query();
                query.where = "FID < 25";
                query.orderByFields = ["qSpecies"];
                query.returnGeometry = false;
                query.outFields = ["qSpecies", "TreeID"];
                query.groupByFieldsForStatistics = ["qSpecies"];
                //query.returnDistinctValues = true;
                qt.execute(query, function(results) {
                    //var frm_domain_area = dom.byId("domain_area");
                    var testVals = {};
                    for (var i = 0; i < results.features.length; i++) {
                        var id = results.features[i].attributes.TreeID;
                        var desc = results.features[i].attributes.qSpecies;
                        if (!testVals[id]) {
                            testVals[id] = true;
                            var selectElem = domConstruct.create("option", {
                                label: desc + " (" + id + ")",
                                value: id
                            });
                            frmTrees.addOption(selectElem);
                        }
                    }
                });
                frmTrees.on("change", function() {
                    console.debug(frmTrees.get("value"));
                });
            });
        });

    Read the article

  • Coping with feelings of technical mediocrity

    - by Karim
    As I've progressed as a programmer, I noticed more nuance and areas I could study in depth. In part, I've come to think of myself from, at one point, a "guru" to now much less, even mediocre or inadequate. Is this normal, or is it a sign of a destructive excessive ambition? Background I started to program when I was still a kid, I had about 10 or 11 years. I really enjoy my work and never get bored from it. It's amazing how somebody could be paid for what he really likes to do and would be doing it anyway even for free. When I first started to program, I was feeling proud of what I was doing, each application I built was for me a success and after 2-3 year I had a feeling that I'm a coding guru. It was a nice feeling. ;-) But the more I was in the field and the more types of software I started to develop, I was starting to have a feeling that I'm completely wrong in thinking I'm a guru. I felt that I'm not even a mediocre developer. Each new field I start to work on is giving me this feeling. Like when I once developed a device driver for a client, I saw how much I need to learn about device drivers. When I developed a video filter for an application, I saw how much do I still need to learn about DirectShow, Color Spaces, and all the theory behind that. The worst thing was when I started to learn algorithms. It was several years ago. I knew then the basic structures and algorithms like the sorting, some types of trees, some hashtables, strings, etc. and when I really wanted to learn a group of structures I learned about 5-6 new types and saw that in fact even this small group has several hundred subtypes of structures. It's depressing how little time people have in their lives to learn all this stuff. I'm now a software developer with about 10 years of experience and I still feel that I'm not a proficient developer when I think about things that others do in the industry.

    Read the article

  • How can I represent a line of music notes in a way that allows fast insertion at any index?

    - by chairbender
    For "fun", and to learn functional programming, I'm developing a program in Clojure that does algorithmic composition using ideas from this theory of music called "Westergaardian Theory". It generates lines of music (where a line is just a single staff consisting of a sequence of notes, each with pitches and durations). It basically works like this: Start with a line consisting of three notes (the specifics of how these are chosen are not important). Randomly perform one of several "operations" on this line. The operation picks randomly from all pairs of adjacent notes that meet a certain criteria (for each pair, the criteria only depends on the pair and is independent of the other notes in the line). It inserts 1 or several notes (depending on the operation) between the chosen pair. Each operation has its own unique criteria. Continue randomly performing these operations on the line until the line is the desired length. The issue I've run into is that my implementation of this is quite slow, and I suspect it could be made faster. I'm new to Clojure and functional programming in general (though I'm experienced with OO), so I'm hoping someone with more experience can point out if I'm not thinking in a functional paradigm or missing out on some FP technique. My current implementation is that each line is a vector containing maps. Each map has a :note and a :dur. :note's value is a keyword representing a musical note like :A4 or :C#3. :dur's value is a fraction, representing the duration of the note (1 is a whole note, 1/4 is a quarter note, etc...). So, for example, a line representing the C major scale starting on C3 would look like this: [ {:note :C3 :dur 1} {:note :D3 :dur 1} {:note :E3 :dur 1} {:note :F3 :dur 1} {:note :G3 :dur 1} {:note :A4 :dur 1} {:note :B4 :dur 1} ] This is a problematic representation because there's not really a quick way to insert into an arbitrary index of a vector. But insertion is the most frequently performed operation on these lines. My current terrible function for inserting notes into a line basically splits the vector using subvec at the point of insertion, uses conj to join the first part + notes + last part, then uses flatten and vec to make them all be in a one-dimensional vector. For example if I want to insert C3 and D3 into the the C major scale at index 3 (where the F3 is), it would do this (I'll use the note name in place of the :note and :dur maps): (conj [C3 D3 E3] [C3 D3] [F3 G3 A4 B4]), which creates [C3 D3 E3 [C3 D3] [F3 G3 A4 B4]] (vec (flatten previous-vector)) which gives [C3 D3 E3 C3 D3 F3 G3 A4 B4] The run time of that is O(n), AFAIK. I'm looking for a way to make this insertion faster. I've searched for information on Clojure data structures that have fast insertion but haven't found anything that would work. I found "finger trees" but they only allow fast insertion at the start or end of the list. Edit: I split this into two questions. The other part is here.
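
    To make the cost being discussed concrete, here is the same splice-style insert written as a Python sketch over the list-of-maps representation from the question; like the subvec/conj/flatten version, it copies the whole line, which is why it is O(n) per insertion.

        # A line is a list of notes; each note is a dict mirroring the :note / :dur maps.
        c_major = [{"note": n, "dur": 1} for n in ["C3", "D3", "E3", "F3", "G3", "A4", "B4"]]

        def insert_notes(line, index, new_notes):
            """Return a new line with new_notes spliced in at index (copies the whole list, O(n))."""
            return line[:index] + list(new_notes) + line[index:]

        inserted = insert_notes(c_major, 3, [{"note": "C3", "dur": 1}, {"note": "D3", "dur": 1}])
        print([n["note"] for n in inserted])
        # ['C3', 'D3', 'E3', 'C3', 'D3', 'F3', 'G3', 'A4', 'B4']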

    Read the article

  • Useful software for netbook?

    - by Moayad Mardini
    I'm looking for recommendations of good software that are particularly useful for netbooks. Software that run great on small screens and low CPU/RAM requirments. I'll start off with the following : Operating Systems: Ubuntu Netbook Remix. Easy Peasy: A fork of Ubuntu Netbook Remix that was once called UBuntu EEE. It isn't just for eeePCs though. Definitely worth a look if vanilla Netbook Remix isn't cutting it. (MarkM) Damn Small Linux (Source) Windows 7: With trimming the installation or compressing the Windows directory to fit on an 8GB SSD. (Will Eddins) nLite: A utility to install a lightweight version of Windows XP without the unnecessary components (like Media Player, Internet Explorer, Outlook Express, MSN Explorer, Messenger...). Utilites: TouchFreeze: To disable the touch pad while typing (Source) InSSIDer: Not only does it make it easier to find and keep a wireless connection, but it turns a netbook into the perfect mobile tool for troubleshooting wireless networks. (phenry) AltMove: Adds more functionality to your mouse for interacting with windows. (Rob) ASUS Font Resizer Utility and other tools by ASUS, specific to ASUS Eee PC series. Internet: Run FileZilla FTP client for a small screen : You can hide a lot of FileZilla's interface parts in the View menu, even the directory trees. Go into Settings = Interface and move the message log next to the transfer queue, if you haven't hidden them both or you want to see them. Select a theme with 16x16 icons. (Source) IDEs and Text Editors: Best lightweight IDE/Text Editor: A question on Stack Overflow that has many good suggestions of IDEs and general text editors for programmers. What’s a good linux C/C++ IDE for a low-res screen?: IDEs for Linux-powered netbooks. Online tools: Dropbox: Since the Netbook has limited disk space, you would like to use Cloud Apps like Dropbox and Ubuntu One so that you don't run out of space especially if you are on a holiday. Later when you go back to your desktop with big hard disk,you can take out the files from your dropbox repo. (Manish Sinha) Google products: like Docs, Calendar and Reader (aviraldg) Web sites and software lists: Netbookfiles.com: Netbook specific software downloads. Software Apps to Maximise your Netbook Battery Power: Netbooks are known for their portability. Not only are they small and lightweight but with their increased power efficiency, batteries can last much longer than conventional laptops. This also means you no longer have to carry a power adapter with you! Several brands emphasis the longevity of the battery as a strong selling point, and for those people who travel a lot, it sure is. Free Must-Have Netbook Apps: Finding software for netbooks can present challenges due to limited hard drive space, processor power, RAM, and screen real-estate. That doesn't mean you have to do without essential programs. The apps below cover all the bases -- entertainment, productivity, security, and communication -- without compromising on performance or usability. Best of all, they're free! Useful Netbook Software: With short battery lives and small resolution screens Netbooks, unlike many other computers on the market, could so with some specific software for their use. Now, not all of those I’ve found are specifically designed for Netbooks, but all are relevant. And they’re designed for Windows XP. The question is community wiki, so feel free to edit it. Updated, thank you all for suggestions.

    Read the article

  • What was scientifically shown to support productivity when organizing/accessing file and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder, a folder that could be seen as a kind of Inbox where most files (non-installations) enter my system. This way I end up with a big collection of files that is hard to organize properly. I mostly end up making folders that match the file type, but then I still have several gigabytes of data per folder, which isn't efficient enough for me to use the folder productively. I'd rather do a few clicks than have to search through the files, whether by some software product or by looking through the folder. Often the file names themselves are not proper, so it would be easier to recognize files if there were only a few in a folder rather than thousands.

    "Scaling in the structure of directory trees in a computer cluster" summarizes this problem as follows:

    The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information.

    [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004).
    [2] S. Lawrence, C. L. Giles, Nature 400, 107-109 (1999).
    [3] R. F. I. Cancho and R. V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001).
    [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002).

    It goes on to explain how the data is usually organized by taking a general look at it, but judging by the abstract and conclusion it does not arrive at a conclusion or approach that results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific solution to it. Upon searching further, I don't seem to find anything useful or any free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noted that there are different ways to term this problem, which lead to different sets of results; perhaps a paper is out there, but I'm just not using the same terms as that paper does? They often use more scientific terms.

    I once heard a story about an advocate with a laptop who simply outperformed an advocate who had tons of papers, which shows how proper organization leads to productivity; but that story didn't share details on how the advocate used the laptop or how he had organized his data. In any case, it was way more useful than how most of us organize our data these days...

    Don't just advise me on how I should organize my data; I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that a given way of organizing does help me reach my goal.

    Read the article

  • Incorrect gzipping of http requests, can't find who's doing it

    - by Ned Batchelder
    We're seeing some very strange mangling of HTTP responses, and we can't figure out what is doing it. We have an app server handling JSON requests. Occasionally, the response is returned gzipped, but with incorrect headers that prevent the browser from interpreting it correctly. The problem is intermittent, and changes behavior over time. Yesterday morning it seemed to fail 50% of the time, and in fact seemed tied to one of our two load-balanced servers. Later in the afternoon, it was failing only 20 times out of 1000, and didn't correlate with an app server. The two app servers are running Apache 2.2 with mod_wsgi and a Django app stack. They have identical Apache configs and source trees, and even identical packages installed on Red Hat. There's a hardware load balancer in front, I don't know the make or model. Akamai is also part of the food chain, though we removed Akamai and still had the problem.

    Here's a good request and response:

        * Connected to example.com (97.7.79.129) port 80 (#0)
        > POST /claim/ HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
        > Host: example.com
        > Accept: */*
        > Referer: http://example.com/apps/
        > Accept-Encoding: gzip,deflate
        > Content-Length: 29
        > Content-Type: application/x-www-form-urlencoded
        >
        } [data not shown]
        < HTTP/1.1 200 OK
        < Server: Apache/2
        < Content-Language: en-us
        < Content-Encoding: identity
        < Content-Length: 47
        < Content-Type: application/x-javascript
        < Connection: keep-alive
        < Vary: Accept-Encoding
        <
        { [data not shown]
        * Connection #0 to host example.com left intact
        * Closing connection #0
        {"msg": "", "status": "OK", "printer_name": ""}

    And here's a bad one:

        * Connected to example.com (97.7.79.129) port 80 (#0)
        > POST /claim/ HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
        > Host: example.com
        > Accept: */*
        > Referer: http://example.com/apps/
        > Accept-Encoding: gzip,deflate
        > Content-Length: 29
        > Content-Type: application/x-www-form-urlencoded
        >
        } [data not shown]
        < HTTP/1.1 200 OK
        < Server: Apache/2
        < Content-Language: en-us
        < Content-Encoding: identity
        < Content-Type: application/x-javascript
        < Content-Encoding: gzip
        < Content-Length: 59
        < Connection: keep-alive
        < Vary: Accept-Encoding
        < X-N: S
        <
        { [data not shown]
        * Connection #0 to host example.com left intact
        * Closing connection #0
        ?V?-NW?RPR?QP*.I,)-???A??????????T??Z? ??/

    There are two things to notice about the bad response: It has two Content-Encoding headers, and the browsers seem to use the first. So they see an identity encoding header and gzipped content, and they can't interpret the response. The bad response also has an extra "X-N: S" header. Perhaps if I could find out what intermediary adds "X-N: S" headers to responses, I could track down the culprit...
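
    While hunting for the intermediary, something like the following hedged Python sketch can replay the request and flag responses that carry more than one Content-Encoding header, gunzipping the body by hand when the payload really is gzip; the URL and form data are placeholders.

        import gzip
        import io
        import urllib.request

        # Placeholder request mirroring the curl calls above.
        req = urllib.request.Request(
            "http://example.com/claim/",
            data=b"some=form&data=here",
            headers={"Accept-Encoding": "gzip,deflate"},
        )

        with urllib.request.urlopen(req) as resp:
            encodings = resp.headers.get_all("Content-Encoding") or []
            body = resp.read()

        if len(encodings) > 1:
            print("Duplicate Content-Encoding headers:", encodings)

        # Browsers honour the first header; decode manually if the payload is really gzip.
        if "gzip" in encodings and body[:2] == b"\x1f\x8b":
            body = gzip.GzipFile(fileobj=io.BytesIO(body)).read()

        print(body[:100])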

    Read the article

  • How to recover data files from xampp-windows to xampp-linux after crash?

    - by David Buehler
    My Windows box died after I developed a database in xampp on it; fortunately I have a backup of the entire F:/TestWeb/Xampp partition. Unfortunately, I did not do an Export (nor dump) of the "Lws2" database before the crash. I have replaced the defunct machine with one running Mint7 (based on Ubuntu 9.04 "Jaunty Jackalope") and installed xampp-linux into the /opt partition, so the new xampp now runs fine in /opt/lampp, and says all the elements are secured by passwords (which I just assigned during this installation.) I assumed that Xamp-Windows installed in November would migrate easily to xampp-linux installed iin February -- a bad assumption. It apparently would have been simple if I had known enough to do an Export or a Dump before the crash, but.... The backup was done to a Network Attached Storage drive, which is formatted as "vfat" so the backup does not carry with it any valid ownership permissions from MySql on NTFS. I now see from my backup that the old data resided in \TestWeb\Xampp\Mysql\Data\Lws2\ and consists of 7 ".frm" files which define my tables. The actual data -- I suppose a ".sql" file or files -- has disappeared, and I am resigning myself to two days of retyping it. But I do not wish to do the table layouts all over again. So I copied Data tree to /opt/lampp/Data -- PhpMyAdmin does not see it. So I copied Lws2 tree to /opt/lampp/Lws2 -- PhpMyAdmin does not see it. So I copied Data tree to /opt/lampp/var/mysql/Data -- PhpMyAdmin does not see it. So I copied Lws2 tree to /opt/lampp/var/mysql/Lws2 -- PhpMyAdmin does not see it. So I adjusted all the permissions to stop saying owner "nobody" to owner "root" and gave full permissions to all groups and to all others, with permissions percolating down, in all 4 trees. You guessed it -- PhpMyAdmin does not see any database named Lws2, only its 4 default ones. I double-checked the permissions and rebooted Linux and repeated the tests. At some point in that process I did see PhpMyAdmin showing "lws2(7)" but when I clicked on it I saw a "no table found" message. I have not been able to recreate that experience. Apparently there are some setup files for MySql and for PhpMyAdmin which need to be set up by running a wizard or two or by editing the files directly. I grepped the TestWeb tree and found an old "ldir = "C:TestWeb\Xampp\MySql\" and a "DataDir = C:TestWeb\Xampp\MySql\" in a .php file and in a .bat file, but I cannot find the corresponding config file names on the /opt partition/ -- so it looks as if these wizards have not been run to create them. What config files files does Linux use to setup MySql config files for PhpMyAdmin? What wizards do I need to run to point the MySql engine and the PhpMyAdmin at the folder /opt/lampp/data/ with its lws2 folder inside it? Or which files do I need to edit, with a sample of what it normally says under Linux? Incidentally, I remember I converted from MyISAM with its .MYD and .MYI files to InnoDB after entering only a small amount of the data -- and I do not know what file types to look for -- perhaps my data is still there but under another guise or in another place? Is it something as simple as linux needing to see "/data/" instead of /Data? I will check that out while waiting for a response. If anyone can point me to documentation that discusses this level of detail -- I will read it avidly! In any case, thanks for any clarification you can give on this thorny problem. wizdum

    Read the article

  • what is the best mid/high-end class audio/music creation audio sound card?

    - by Chris
    Hello, I have a computer shop myself, and I repair computers. But one of the things I really don't know (yet) is the performance of audio cards for music creation with MIDI. I have searched and searched and came up with some good reviews, but after browsing for a couple of hours I couldn't see the forest for the trees :-D (it's a Dutch expression). At one point I thought the M-Audio Delta 1010LT would be a good PCIe card; later on I read that this card was released years ago (but that could be false information). Also, any personal experience would be great, but not necessary. I have searched a few cards, and I hope someone can help me make a choice for a friend of mine. His budget is between $100 and $350. I know there are audio cards from $500 to $1,850; that is just too expensive.

    The following specs are crucial: ASIO, MIDI, mic in, minimum 5.1 (7.1 recommended). It's not for airplay, but just to compose music at home, using Ableton and a MIDI keyboard.

    1. M-Audio - Delta 1010LT:
    - 8 x 8 analog I/O
    - 2 mic preamps or line inputs
    - S/PDIF digital I/O (coaxial) with 2-channel PCM
    - SCMS copy protection control
    - digital I/O supports surround-encoded AC-3 and DTS pass-through
    - 1 x 1 MIDI I/O
    - directly drives up to 7.1 surround (bass management software included)
    - software-controlled 36-bit internal DSP digital mixing/routing
    - +4dBu/-10dBV operation, individually switched in software
    - word clock I/O for sample-accurate device synchronization

    2. RME HDSP 9632:
    - Stereo analog input and output, balanced*, 24-bit/192 kHz, > 110 dB SNR
    - Optional expansion boards, each with 4 balanced inputs and outputs
    - All analog I/Os fully 192 kHz capable, so no reduction in channel count
    - 1 x ADAT digital in/out, 96 kHz capable (S/MUX)
    - 1 x SPDIF digital in/out, 192 kHz capable
    - 1 x breakout cable for coaxial SPDIF operation*
    - So up to 16 inputs and outputs usable at the same time!
    - 1 x stereo headphone output, in parallel with the analog output but with its own level control
    - 1 x MIDI I/O for 16 channels of hi-speed MIDI via breakout cable
    - DIGICheck, RME's unique metering and analysis tool with spectral analyser, professional 2/8/16-channel level meters, vector audio scope and various other analysis functions
    - HDSP Meter Bridge: freely scalable level meters with peak and RMS calculation in hardware
    - TotalMix: 512-channel mixer with 40-bit internal resolution

    3. EMU 1212M (1212 M) PCIe:
    - Top-quality 24-bit/192 kHz converters
    - Hardware-driven effects
    - DSP zero-latency hardware mixing and monitoring
    - Analog and digital I/O plus MIDI
    - EMU Production Tools software bundle: Cakewalk SONAR, Steinberg Cubase LE, Ableton Live E-MU Edition
    EMU 1212M PCI-e inputs/outputs:
    - 2 balanced jack inputs
    - 2 balanced jack outputs
    - 24-bit/192 kHz ADAT I/O
    - 24-bit/192 kHz coaxial S/PDIF I/O, switchable to AES/EBU
    - MIDI I/O

    4. M-Audio Audiophile 192:
    - Up to 24-bit/192 kHz audio
    - 2 balanced analog inputs (1/4” TRS)
    - 2 balanced analog outputs (1/4” TRS)
    - S/PDIF digital I/O (coaxial RCA connectors) with 2-channel PCM
    - SCMS copy protection control
    - Digital I/O supports surround-encoded AC-3 and DTS pass-through
    - Direct hardware input monitoring via separate balanced 1/4” TRS monitor outputs
    - Software routing of inputs and outputs
    - Digital I/O can be routed to/from external effects
    - 16-channel MIDI I/O
    - ASIO, WDM, GSIF 2 and Core Audio driver support for compatibility with most applications
    - 64-bit driver support for Windows
    - PCI 2.2 compatibility
    - Apple G5 compatible (incompatible exceptions)
    - Includes Ableton Live Lite music production software, so you can make music right away
    - Works with other Delta cards
    Technical specifications - Compatibility: ASIO, WDM, GSIF 2, Core Audio

    Read the article

  • WPF TabControl - how to preserve control state within tab items (MVVM pattern)

    - by Tim Coulter
    I am a newcomer to WPF, attempting to build a project that follows the recommendations of Josh Smith's excellent article describing The Model-View-ViewModel Design Pattern. Using Josh's sample code as a base, I have created a simple application that contains a number of "workspaces", each represented by a tab in a TabControl. In my application, a workspace is a document editor that allows a hierarchical document to be manipulated via a TreeView control. Although I have succeeded in opening multiple workspaces and viewing their document content in the bound TreeView control, I find that the TreeView "forgets" its state when switching between tabs. For example, if the TreeView in Tab1 is partially expanded, it will be shown as fully collapsed after switching to Tab2 and returning to Tab1. This behaviour appears to apply to all aspects of control state for all controls. After some experimentation, I have realized that I can preserve state within a TabItem by explicitly binding each control state property to a dedicated property on the underlying ViewModel. However, this seems like a lot of additional work, when I simply want all my controls to remember their state when switching between workspaces. I assume I am missing something simple, but I am not sure where to look for the answer. Any guidance would be much appreciated. Thanks, Tim Update: As requested, I will attempt to post some code that demonstrates this problem. However, since the data that underlies the TreeView is complex, I will post a simplified example that exhibits the same symtoms. Here is the XAML from the main window: <TabControl IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding Path=Docs}"> <TabControl.ItemTemplate> <DataTemplate> <ContentPresenter Content="{Binding Path=Name}" /> </DataTemplate> </TabControl.ItemTemplate> <TabControl.ContentTemplate> <DataTemplate> <view:DocumentView /> </DataTemplate> </TabControl.ContentTemplate> </TabControl> The above XAML correctly binds to an ObservableCollection of DocumentViewModel, whereby each member is presented via a DocumentView. For the simplicity of this example, I have removed the TreeView (mentioned above) from the DocumentView and replaced it with a TabControl containing 3 fixed tabs: <TabControl> <TabItem Header="A" /> <TabItem Header="B" /> <TabItem Header="C" /> </TabControl> In this scenario, there is no binding between the DocumentView and the DocumentViewModel. When the code is run, the inner TabControl is unable to remember its selection when the outer TabControl is switched. However, if I explicitly bind the inner TabControl's SelectedIndex property ... <TabControl SelectedIndex="{Binding Path=SelectedDocumentIndex}"> <TabItem Header="A" /> <TabItem Header="B" /> <TabItem Header="C" /> </TabControl> ... to a corresponding dummy property on the DocumentViewModel ... public int SelecteDocumentIndex { get; set; } ... the inner tab is able to remember its selection. I understand that I can effectively solve my problem by applying this technique to every visual property of every control, but I am hoping there is a more elegant solution.

    Read the article

  • A little bit of Ajax goes a long way

    - by Holland
    ..except when you're having problems. My problem is this: I have a hierarchical list of categories stored in a database which I wish to output in a dropdown list. The hierarchy comes into place when the subcategories are to be displayed, which are dependent on a parent id (which equals out to the first seven or so main categories listed). My thoughts are relatively simple: when the user clicks the dynamically allocated list of main categories, they are clicking on an option tag. For each option tag, an id (i.e., the parent) is listed in the value attribute, as well as an argument which is sent to a Javascript function which then uses AJAX to get the data via PHP and sends it to my 'javascript.php' file. The file then does magic, and populates the subcategory list, which is dependent on the main category selected. I believe I have the idea down, it's just that I'm implementing the solution improperly, for some reason. Here's what I have so far: from javascript.php <script type="text/javascript" src=<?=JPATH_BASE.DS.'includes'.DS.'jquery.js';?>> var ajax = { ajax.sendAjaxData = function(method, url, dataTitle, ajaxData) { $.ajax({ type: 'post', url: "C:/wamp/www/patention/components/com_adsmanagar/views/edit/tmpl/javascript.php", data: { 'data' : ajaxData }, success: function(result){ // we have the response alert("Your request was successful." + result); }, error: function(e){ alert('Error: ' + e); } }); } ajax.printSubCategoriesOnClick = function(parent) { alert("hello world!"); ajax.sendAjaxData('post', 'javascript.php', 'data' parent); <?php $categories = $this->formData->getCategories($_POST['data']); ?> ajax.printSubCategories(<?=$categories;?>); } ajax.printSubCategories = function(categories) { var select = document.getElementById("subcategories"); for (var i = 0; i < categories.length; i++) { var opt = document.createElement("option"); opt.text = categories['name']; opt.value = categories['id']; } } } </script> the function used to populate the form data function populateCategories($parent, FormData $formData) { $categories = $formData->getCategories($parent); echo "<pre>"; print_r($categories); echo "</pre>"; foreach($categories as $section => $row){ ?> <option value=<?=$row['id'];?> onclick="ajax.printSubCategoriesOnClick(<?=$row['id']?>)"> <? echo $row['name']; ?> </option> <?php } } The problem is that when I try to do a print_r on my $_POST variable, nothing shows up. I also receive an "undefined index" error (which refers to the key I set for the data type). I'm thinking that for some reason, the data is being sent to my form.php file, or my default.php file which includes both the form.php and javascript.php files via a function. Is there something specific that I'm missing here? Just looking up basic AJAX syntax (via jQuery) hasn't helped out, unfortunately.

    Read the article

  • HttpParsing for hypertext

    - by Nani
    I am in process of getting all hierarchical links from a given link and validating them; This is the code I wrote. But I am not feeling it as efficient. Reasons are: 1.For the non unique links which open same page, code is getting sub-links again and again 2.Is the code getting all links? 3.Is it making valid URLs from the sub-links it derived? 4.May be some other reasons about which I have no idea. Please suggest me how to make this piece of code efficient . Thank you. class Program { public static ArrayList sublink = new ArrayList(); public static ArrayList subtitle = new ArrayList(); public static int ini = 0, len_o, len_n, counter = 0; static void Main(string[] args) { // Address of URL string URL = "http://www.techonthenet.com/"; sublink.Add(URL); l: len_o = sublink.Count; len_o); Console.WriteLine("-------------Level:" + counter++); for (int i = ini; i < len_o; i++) test(sublink[i].ToString()); len_n = sublink.Count; if (len_o < len_n) { ini = len_o; goto l; } Console.ReadKey(); } //method to get the sub-links public static void test(string URL) { try { // Get HTML data WebClient client = new WebClient(); Stream data = client.OpenRead(URL); StreamReader reader = new StreamReader(data); string str = "", htmldata = "", temp; int n1, n2; str = reader.ReadLine(); while (str != null) { htmldata += str; str = reader.ReadLine(); } data.Close(); for (int i = 0; i < htmldata.Length - 5; i++) { if (htmldata.Substring(i, 5) == "href=") { n1 = htmldata.Substring(i + 6, htmldata.Length - (i + 6)).IndexOf("\""); temp = htmldata.Substring(i + 6, n1); if (temp.Length > 4 && temp.Substring(0, 4) != "http") { if(temp.Substring(0,1)!="/") temp=URL.Substring(0,URL.IndexOf(".com/")+5)+temp; else temp = URL.Substring(0, URL.IndexOf(".com/") + 5) + temp.Remove(0,1); } if (temp.Length < 4) temp = URL.Substring(0, URL.IndexOf(".com/") + 5) + temp; sublink.Add(temp); n2 = htmldata.Substring(i + n1 + 1, htmldata.Length - (i + n1 + 1)).IndexOf("<"); subtitle.Add(htmldata.Substring(i + 6 + n1 + 2, n2 - 7)); i += temp.Length + htmldata.Substring(i + 6 + n1 + 2, n2 - 7).Length; } } for (int i = len_n; i < sublink.Count; i++) Console.WriteLine(i + "--> " + sublink[i]); } catch (WebException exp) { Console.WriteLine("URL Could not be Resolved" + URL); Console.WriteLine(exp.Message, "Exception"); } } }
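
    On points 1-3, the usual fixes are a visited set so the same URL is fetched only once, and urljoin to turn relative hrefs into valid absolute URLs. Here is a hedged, standard-library-only Python sketch of that shape (a depth-limited crawl, not a translation of the C# above).

        from html.parser import HTMLParser
        from urllib.parse import urljoin, urldefrag
        from urllib.request import urlopen

        class LinkCollector(HTMLParser):
            def __init__(self):
                super().__init__()
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    for name, value in attrs:
                        if name == "href" and value:
                            self.links.append(value)

        def crawl(start_url, max_depth=2):
            visited = set()
            frontier = [(start_url, 0)]
            while frontier:
                url, depth = frontier.pop()
                if url in visited or depth > max_depth:
                    continue
                visited.add(url)
                try:
                    html = urlopen(url).read().decode("utf-8", errors="replace")
                except Exception as exc:
                    print("URL could not be resolved:", url, exc)
                    continue
                parser = LinkCollector()
                parser.feed(html)
                for href in parser.links:
                    absolute, _ = urldefrag(urljoin(url, href))  # resolve relative links, drop #fragments
                    if absolute.startswith("http") and absolute not in visited:
                        frontier.append((absolute, depth + 1))
            return visited

        print(len(crawl("http://www.techonthenet.com/")))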

    Read the article

  • Silverlight: Binding a custom control to an arbitrary object

    - by Ryan Bates
    I am planning on writing a hierarchical organizational control, similar to an org chart. Several org chart implementations are out there, but not quite fit what I have in mind. Binding fields in a DataTemplate to a custom object does not seem to work. I started with a generic, custom control, i.e. public class NodeBodyBlock : ContentControl { public NodeBodyBlock() { this.DefaultStyleKey = typeof(NodeBodyBlock); } } It has a simple style in generic.xaml: <Style TargetType="org:NodeBodyBlock"> <Setter Property="Width" Value="200" /> <Setter Property="Height" Value="100" /> <Setter Property="Background" Value="Lavender" /> <Setter Property="FontSize" Value="11" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="org:NodeBodyBlock"> <Border Width="{TemplateBinding Width}" Height="{TemplateBinding Height}" Background="{TemplateBinding Background}" CornerRadius="4" BorderBrush="Black" BorderThickness="1" > <Grid> <VisualStateManager/> ... clipped for brevity </VisualStateManager.VisualStateGroups> <ContentPresenter Content="{TemplateBinding Content}" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" /> </Grid> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> My plan now is to be able to use this common definition as a base definition of sorts, with customized version of it used to display different types of content. A simple example would be to use this on a user control with the following style: <Style TargetType="org:NodeBodyBlock" x:Key="TOCNode2"> <Setter Property="ContentTemplate"> <Setter.Value> <DataTemplate> <StackPanel> <TextBlock Text="{Binding Path=NodeTitle}"/> </StackPanel> </DataTemplate> </Setter.Value> </Setter> </Style> and an instance defined as <org:NodeBodyBlock Style="{StaticResource TOCNode2}" x:Name="stTest" DataContext="{StaticResource DummyData}" /> The DummyData is defined as <toc:Node NodeNumber="mynum" NodeStatus="A" NodeTitle="INLine Node Title!" x:Key="DummyData"/> With a simple C# class behind it, where each of the fields is a public property. When running the app, the Dummy Data values simply do not show up in the GUI. A trivial test such as <TextBlock Text="{Binding NodeTitle}" DataContext="{StaticResource DummyData}"/> works just fine. Any ideas around where I am missing the plot?

    Read the article

  • Entity Framework version 1- Brief Synopsis and Tips &ndash; Part 1

    - by Rohit Gupta
    To Do Eager loading use Projections (for e.g. from c in context.Contacts select c, c.Addresses)  or use Include Query Builder Methods (Include(“Addresses”)) If there is multi-level hierarchical Data then to eager load all the relationships use Include Query Builder methods like customers.Include("Order.OrderDetail") to include Order and OrderDetail collections or use customers.Include("Order.OrderDetail.Location") to include all Order, OrderDetail and location collections with a single include statement =========================================================================== If the query uses Joins then Include() Query Builder method will be ignored, use Nested Queries instead If the query does projections then Include() Query Builder method will be ignored Use Address.ContactReference.Load() OR Contact.Addresses.Load() if you need to Deferred Load Specific Entity – This will result in extra round trips to the database ObjectQuery<> cannot return anonymous types... it will return a ObjectQuery<DBDataRecord> Only Include method can be added to Linq Query Methods Any Linq Query method can be added to Query Builder methods. If you need to append a Query Builder Method (other than Include) after a LINQ method  then cast the IQueryable<Contact> to ObjectQuery<Contact> and then append the Query Builder method to it =========================================================================== Query Builder methods are Select, Where, Include Methods which use Entity SQL as parameters e.g. "it.StartDate, it.EndDate" When Query Builder methods do projection then they return ObjectQuery<DBDataRecord>, thus to iterate over this collection use contact.Item[“Name”].ToString() When Linq To Entities methods do projection, they return collection of anonymous types --- thus the collection is strongly typed and supports Intellisense EF Object Context can track changes only on Entities, not on Anonymous types. If you use a Defining Query for a EntitySet then the EntitySet becomes readonly since a Defining Query is the same as a View (which is treated as a ReadOnly by default). However if you want to use this EntitySet for insert/update/deletes then we need to map stored procs (as created in the DB) to the insert/update/delete functions of the Entity in the Designer You can use either Execute method or ToList() method to bind data to datasources/bindingsources If you use the Execute Method then remember that you can traverse through the ObjectResult<> collection (returned by Execute) only ONCE. In WPF use ObservableCollection to bind to data sources , for keeping track of changes and letting EF send updates to the DB automatically. Use Extension Methods to add logic to Entities. For e.g. create extension methods for the EntityObject class. Create a method in ObjectContext Partial class and pass the entity as a parameter, then call this method as desired from within each entity. ================================================================ DefiningQueries and Stored Procedures: For Custom Entities, one can use DefiningQuery or Stored Procedures. Thus the Custom Entity Collection will be populated using the DefiningQuery (of the EntitySet) or the Sproc. If you use Sproc to populate the EntityCollection then the query execution is immediate and this execution happens on the Server side and any filters applied will be applied in the Client App. 
If we use a DefiningQuery then these queries are composable, meaning the filters (if applied to the entityset) will all be sent together as a single query to the DB, returning only filtered results. If the sproc returns results that cannot be mapped to existing entity, then we first create the Entity/EntitySet in the CSDL using Designer, then create a dummy Entity/EntitySet using XML in the SSDL. When creating a EntitySet in the SSDL for this dummy entity, use a TSQL that does not return any results, but does return the relevant columns e.g. select ContactID, FirstName, LastName from dbo.Contact where 1=2 Also insure that the Entity created in the SSDL uses the SQL DataTypes and not .NET DataTypes. If you are unable to open the EDMX file in the designer then note the Errors ... they will give precise info on what is wrong. The Thrid option is to simply create a Native Query in the SSDL using <Function Name="PaymentsforContact" IsComposable="false">   <CommandText>SELECT ActivityId, Activity AS ActivityName, ImagePath, Category FROM dbo.Activities </CommandText></FuncTion> Then map this Function to a existing Entity. This is a quick way to get a custom Entity which is regular Entity with renamed columns or additional columns (which are computed columns). The disadvantage to using this is that It will return all the rows from the Defining query and any filter (if defined) will be applied only at the Client side (after getting all the rows from DB). If you you DefiningQuery instead then we can use that as a Composable Query. The Fourth option (for mapping a READ stored proc results to a non-existent Entity) is to create a View in the Database which returns all the fields that the sproc also returns, then update the Model so that the model contains this View as a Entity. Then map the Read Sproc to this View Entity. The other option would be to simply create the View and remove the sproc altogether. ================================================================ To Execute a SProc that does not return a entity, use a EntityCommand to execute that proc. You cannot call a sproc FunctionImport that does not return Entities From Code, the only way is to use SSDL function calls using EntityCommand.  This changes with EntityFramework Version 4 where you can return Scalar Types, Complex Types, Entities or NonQuery ================================================================ UDF when created as a Function in SSDL, we need to set the Name & IsComposable properties for the Function element. IsComposable is always false for Sprocs, for UDF's set this to true. You cannot call UDF "Function" from within code since you cannot import a UDF Function into the CSDL Model (with Version 1 of EF). only stored procedures can be imported and then mapped to a entity ================================================================ Entity Framework requires properties that are involved in association mappings to be mapped in all of the function mappings for the entity (Insert, Update and Delete). Because Payment has an association to Reservation... hence we need to pass both the paymentId and reservationId to the Delete sproc even though just the paymentId is the PK on the Payment Table. ================================================================ When mapping insert, update and delete procs to a Entity, insure that all the three or none are mapped. 
Further if you have a base class and derived class in the CSDL, then you must map (ins, upd, del) sprocs to all parent and child entities in the inheritance relationship. Note that this limitation that base and derived entity methods must all must be mapped does not apply when you are mapping Read Stored Procedures.... ================================================================ You can write stored procedures SQL directly into the SSDL by creating a Function element in the SSDL and then once created, you can map this Function to a CSDL Entity directly in the designer during Function Import ================================================================ You can do Entity Splitting such that One Entity maps to multiple tables in the DB. For e.g. the Customer Entity currently derives from Contact Entity...in addition it also references the ContactPersonalInfo Entity. One can copy all properties from the ContactPersonalInfo Entity into the Customer Entity and then Delete the CustomerPersonalInfo entity, finall one needs to map the copied properties to the ContactPersonalInfo Table in Table Mapping (by adding another table (ContactPersonalInfo) to the Table Mapping... this is called Entity Splitting. Thus now when you insert a Customer record, it will automatically create SQL to insert records into the Contact, Customers and ContactPersonalInfo tables even though you have a Single Entity called Customer in the CSDL =================================================================== There is Table by Type Inheritance where another EDM Entity can derive from another EDM entity and absorb the inherted entities properties, for example in the Break Away Geek Adventures EDM, the Customer entity derives (inherits) from the Contact Entity and absorbs all the properties of Contact entity. Thus when you create a Customer Entity in Code and then call context.SaveChanges the Object Context will first create the TSQL to insert into the Contact Table followed by a TSQL to insert into the Customer table =================================================================== Then there is the Table per Hierarchy Inheritance..... where different types are created based on a condition (similar applying a condition to filter a Entity to contain filtered records)... the diference being that the filter condition populates a new Entity Type (derived from the base Entity). In the BreakAway sample the example is Lodging Entity which is a Abstract Entity and Then Resort and NonResort Entities which derive from Lodging Entity and records are filtered based on the value of the Resort Boolean field =================================================================== Then there is Table per Concrete Type Hierarchy where we create a concrete Entity for each table in the database. In the BreakAway sample there is a entity for the Reservation table and another Entity for the OldReservation table even though both the table contain the same number of fields. 
The OldReservation Entity can then inherit from the Reservation Entity and configure the OldReservation Entity to remove all Scalar Properties from the Entity (since it inherits the properties from Reservation and filters based on ReservationDate field) =================================================================== Complex Types (Complex Properties) Entities in EF can also contain Complex Properties (in addition to Scalar Properties) and these Complex Properties reference a ComplexType (not a EntityType) DropdownList, ListBox, RadioButtonList, CheckboxList, Bulletedlist are examples of List server controls (not data bound controls) these controls cannot use Complex properties during databinding, they need Scalar Properties. So if a Entity contains Complex properties and you need to bind those to list server controls then use projections to return Scalar properties and bind them to the control (the disadvantage is that projected collections are not tracked by the Object Context and hence cannot persist changes to the projected collections bound to controls) ObjectDataSource and EntityDataSource do account for Complex properties and one can bind entities with Complex Properties to Data Source controls and they will be tracked for changes... with no additional plumbing needed to persist changes to these collections bound to controls So DataBound controls like GridView, FormView need to use EntityDataSource or ObjectDataSource as a datasource for entities that contain Complex properties so that changes to the datasource done using the GridView can be persisted to the DB (enabling the controls for updates)....if you cannot use the EntityDataSource you need to flatten the ComplexType Properties using projections With EF Version 4 ComplexTypes are supported by the Designer and can add/remove/compose Complex Types directly using the Designer =================================================================== Conditional Mapping ... is like Table per Hierarchy Inheritance where Entities inherit from a base class and then used conditions to populate the EntitySet (called conditional Mapping). Conditional Mapping has limitations since you can only use =, is null and IS NOT NULL Conditions to do conditional mapping. If you need more operators for filtering/mapping conditionally then use QueryView(or possibly Defining Query) to create a readonly entity. QueryView are readonly by default... the EntitySet created by the QueryView is enabled for change tracking by the ObjectContext, however the ObjectContext cannot create insert/update/delete TSQL statements for these Entities when SaveChanges is called since it is QueryView. One way to get around this limitation is to map stored procedures for the insert/update/delete operations in the Designer. =================================================================== Difference between QueryView and Defining Query : QueryView is defined in the (MSL) Mapping File/section of the EDM XML, whereas the DefiningQuery is defined in the store schema (SSDL). QueryView is written using Entity SQL and is this database agnostic and can be used against any database/Data Layer. DefiningQuery is written using Database Lanaguage i.e. TSQL or PSQL thus you have more control =================================================================== Performance: Lazy loading is deferred loading done automatically. lazy loading is supported with EF version4 and is on by default. 
If you need to turn it off, use context.ContextOptions.LazyLoadingEnabled = false. To improve performance, consider pre-compiling the ObjectQuery using the CompiledQuery.Compile method.
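    A minimal sketch of those last two points. BAEntities (an ObjectContext) and its Contacts set are assumed names modeled on the BreakAway-style samples above, only for illustration:

    // Illustrative only; BAEntities and Contact are assumed types from a BreakAway-style model.
    using System;
    using System.Data.Objects;   // ObjectContext, CompiledQuery
    using System.Linq;

    static class ContactQueries
    {
        // Compiled once, then reused: EF skips re-translating the LINQ expression to SQL per call.
        static readonly Func<BAEntities, string, IQueryable<Contact>> byLastName =
            CompiledQuery.Compile((BAEntities ctx, string lastName) =>
                ctx.Contacts.Where(c => c.LastName == lastName));

        public static void Run()
        {
            using (var ctx = new BAEntities())
            {
                // Lazy loading is on by default in EF version 4; turn it off for explicit control.
                ctx.ContextOptions.LazyLoadingEnabled = false;

                foreach (var contact in byLastName(ctx, "Xu"))
                    Console.WriteLine(contact.FirstName);
            }
        }
    }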

    Read the article

  • Start a Mapping or Process Flow from OWB Browser

    - by Dong Ruirong
    Basically, we start a Mapping or Process Flow from Oracle Warehouse Builder (OWB) Design Client. But actually we can also start a Mapping or Process Flow from OWB Browser. This paper will introduce the Start Report first and then introduce how to start/rerun a Mapping or Process Flow from OWB Browser. Start Report Start Report is used to start an execution of a Mapping or Process Flow. So there are two kinds of Start Report: Mapping Start Report (See Figure 1) and Process Flow Start Report (See Figure 2). Start Report shows the Mapping or Process Flow identification properties, including latest deployment and latest execution, lists all execution parameters for the Mapping or Process Flow, which were specified by the latest deployment, and assigns parameter default values from the latest deployment specification. You can do a couple of things from Start Report: Sort execution parameters on name, category. Table 1 lists all parameters of a Mapping. Table 2 lists all parameters of a Process Flow. Change values of any input parameter where permitted. For some parameters, selection lists are provided. For example, Mapping’s parameter Audit Level has a selection list. Reset all parameter settings to their default values. Apply basic validation to parameter values before starting an execution. Start the Mapping or Process Flow, which means it is executed immediately. Navigate to Deployment Report for latest deployment details of the Mapping or Process Flow. Navigate to Execution Job Report for latest execution of current Mapping or Process Flow Link to on-link help Warehouse Report Page, Deployment Report, Execution Report, Execution Schedule Report and Execution Summary Report. Figure 1 Mapping Start Report Table 1 Execution Parameters and default values for a Mapping Category Name Mode Input Value System Audit Level In Error Details System Bulk Size In 1000 System Commit Frequency In 1000 System EXECUTE_RESUME_TASK In FALSE System FORCE_RESUME_OPTION In FALSE System Max No of Errors In 50 System NUMBER_OF_TIMES_TO_RETRY In 2 System Operating Mode In Set Based Fail Over to Row Based System PARALLEL_LEVEL In 0 System Procedure Name In main System Purge Group In WB Figure 2 Process Flow Start Report Table 2 Execution Parameters and default values for a Process Flow Category Name Mode Input Value System EVAL_LOCATION In   System Item Key In-Out   System Item Type In PFPKG_1 Start a Mapping or Process Flow To navigate to Start Report, it’s better to login OWB Browser with Control Center option; if not, after logging in OWB Browser, go to Control Center first. Then you can follow the ways introduced in this section to navigate to Start Report. One more thing you need to pay attention to is that you are not allowed to deploy any Mappings and Process Flows from OWB Browser as it’s not supported. So it’s necessary to deploy the Mappings and Process Flows first before starting them from OWB Browser. If you have deployed a Mapping or Process Flow but have not started it, please navigate from Object Summary Report or Deployment Schedule Report to Start Report. 1. Navigating from Object Summary Report to Start Report Open the Object Summary Report to see all deployed Mappings and Process Flows. Click the Mapping Name or Process Flow Name link to see its Deployment Report. Select the Start link in the Available Reports tab for the given Mapping or Process Flow to display a Start Report for the Mapping or Process Flow. The execution parameters have the default deployment-time settings. 
Change any of the input parameter values as required. Click Start Execution button to execute the Mapping or Process Flow. 2. Navigating from Deployment Schedule Report to Start Report Open the Deployment Schedule Report to see deployment details of Mapping and Process Flow. Expand the project trees to find the deployed Mappings and Process Flows. Click the Mapping Name or Process Flow Name link to see its Deployment Report. Select the Start link in the Available Reports tab for the given Mapping or Process Flow to display a Start Report for the Mapping or Process Flow. The execution parameters have the default deployment-time settings. Change any of the input parameter values as required. Click Start Execution button to execute the Mapping or Process Flow. Re-run a Mapping or Process Flow If you have executed a Mapping or Process Flow, you can navigate from Object Summary Report, Deployment Schedule Report, Execution Summary Report or Execution Schedule Report to Start Report. 1. Navigating from the Execution Summary Report to Start Report Open the Execution Summary Report to see all execution jobs including Mapping jobs and Process Flow jobs. Click on the Mapping Name or Process Flow Name to see its Execution Report. Select the Start link in the Available Reports tab for the given Mapping or Process Flow to display a Start Report for the Mapping or Process Flow. The execution parameters have the default deployment-time settings. Change any of the input parameter values as required. Click Start Execution button to execute the Mapping or Process Flow. 2. Navigating from the Execution Schedule Report to Start Report Open the Execution Schedule Report to see list of all executions of Mapping and Process Flow. Click on the Mapping Name or Process Flow Name to see its Execution Report. Select the Start link in the Available Reports tab for the given Mapping or Process Flow to display a Start Report for the Mapping or Process Flow. The execution parameters have the default deployment-time settings. Change any of the input parameter values as required. Click Start Execution button to execute the Mapping or Process Flow. If the execution of a Mapping or Process Flow is successful, you will see this message from the Start Report: Start Execution request successful. (See Figure 3) Figure 3 Execution Result You can also confirm the execution of the Mapping or Process Flow by referring to Execution Report of the current Mapping or Process Flow by clicking the link in the Available Reports tab for the given Mapping or Process Flow. One new record of execution job details is added to Execution Report of the Mapping or Process Flow which shows the details of the execution such as Start Time, Elapsed Time, Status, the number of records selected, inserted, updated, deleted etc.

    Read the article

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspxMicrosoft just released a bunch of new features for Azure on 22nd and one of them I was interested in most is DocumentDB, a document NoSQL database service on the cloud.   Quick Look at DocumentDB We can try DocumentDB from the new azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In resource group section we can select which region our DocumentDB will be located. Same as other azure services select the same location with your consumers of the DocumentDB, for example the website, web services, etc.. After several minutes the DocumentDB will be ready. Click the KEYS button we can find the URI and primary key, which will be used when connecting. Now let's open Visual Studio and try to use the DocumentDB we had just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelase" in NuGet Package Manager window since this library was not yet released. Next we will create a new database and document collection under our DocumentDB account. The code below created an instance of DocumentClient with the URI and primary key we just copied from azure portal, and create a database and collection. And it also prints the document and collection link string which will be used later to insert and query documents. 1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7: Run(client).Wait(); 8:  9: Console.WriteLine("done"); 10: Console.ReadKey(); 11: } 12:  13: static async Task Run(DocumentClient client) 14: { 15:  16: var database = new Database() { Id = "testdb" }; 17: database = await client.CreateDatabaseAsync(database); 18: Console.WriteLine("database link = {0}", database.SelfLink); 19:  20: var collection = new DocumentCollection() { Id = "testcol" }; 21: collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection); 22: Console.WriteLine("collection link = {0}", collection.SelfLink); 23: } Below is the result from the console window. We need to copy the collection link string for future usage. Now if we back to the portal we will find a database was listed with the name we specified in the code. Next we will insert a document into the database and collection we had just created. In the code below we pasted the collection link which copied in previous step, create a dynamic object with several properties defined. As you can see we can add some normal properties contains string, integer, we can also add complex property for example an array, a dictionary and an object reference, unless they can be serialized to JSON. 
1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7:  8: // collection link pasted from the result in previous demo 9: var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/"; 10:  11: // document we are going to insert to database 12: dynamic doc = new ExpandoObject(); 13: doc.firstName = "Shaun"; 14: doc.lastName = "Xu"; 15: doc.roles = new string[] { "developer", "trainer", "presenter", "father" }; 16:  17: // insert the docuemnt 18: InsertADoc(client, collectionLink, doc).Wait(); 19:  20: Console.WriteLine("done"); 21: Console.ReadKey(); 22: } the insert code will be very simple as below, just provide the collection link and the object we are going to insert. 1: static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc) 2: { 3: var document = await client.CreateDocumentAsync(collectionLink, doc); 4: Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented)); 5: } Below is the result after the object had been inserted. Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us to retrieve all documents in it. 1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7:  8: var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/"; 9:  10: SelectDocs(client, collectionLink); 11:  12: Console.WriteLine("done"); 13: Console.ReadKey(); 14: } 15:  16: static void SelectDocs(DocumentClient client, string collectionLink) 17: { 18: var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList(); 19: foreach(var doc in docs) 20: { 21: Console.WriteLine(doc); 22: } 23: } Since there's only one document in my collection below is the result when I executed the code. As you can see all properties, includes the array was retrieve at the same time. DocumentDB also attached some properties we didn't specified such as "_rid", "_ts", "_self" etc., which is controlled by the service.   DocumentDB Benefit DocumentDB is a document NoSQL database service. Different from the traditional database, document database is truly schema-free. In a short nut, you can save anything in the same database and collection if it could be serialized to JSON. We you query the document database, all sub documents will be retrieved at the same time. This means you don't need to join other tables when using a traditional database. Document database is very useful when we build some high performance system with hierarchical data structure. For example, assuming we need to build a blog system, there will be many blog posts and each of them contains the content and comments. The comment can be commented as well. If we were using traditional database, let's say SQL Server, the database schema might be defined as below. When we need to display a post we need to load the post content from the Posts table, as well as the comments from the Comments table. We also need to build the comment tree based on the CommentID field. But if were using DocumentDB, what we need to do is to save the post as a document with a list contains all comments. 
Under a comment all sub comments will be a list in it. When we display this post we just need to to query the post document, the content and all comments will be loaded in proper structure. 1: { 2: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 3: "title": "xxxxx", 4: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 5: "postedOn": "08/25/2014 13:55", 6: "comments": 7: [ 8: { 9: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 10: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 11: "commentedOn": "08/25/2014 14:00", 12: "commentedBy": "xxx" 13: }, 14: { 15: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 16: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 17: "commentedOn": "08/25/2014 14:10", 18: "commentedBy": "xxx", 19: "comments": 20: [ 21: { 22: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 23: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 24: "commentedOn": "08/25/2014 14:18", 25: "commentedBy": "xxx", 26: "comments": 27: [ 28: { 29: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 30: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 31: "commentedOn": "08/25/2014 18:22", 32: "commentedBy": "xxx", 33: } 34: ] 35: }, 36: { 37: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 38: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 39: "commentedOn": "08/25/2014 15:02", 40: "commentedBy": "xxx", 41: } 42: ] 43: }, 44: { 45: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 46: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 47: "commentedOn": "08/25/2014 14:30", 48: "commentedBy": "xxx" 49: } 50: ] 51: }   DocumentDB vs. Table Storage DocumentDB and Table Storage are all NoSQL service in Microsoft Azure. One common question is "when we should use DocumentDB rather than Table Storage". Here are some ideas from me and some MVPs. First of all, they are different kind of NoSQL database. DocumentDB is a document database while table storage is a key-value database. Second, table storage is cheaper. DocumentDB supports scale out from one capacity unit to 5 in preview period and each capacity unit provides 10GB local SSD storage. The price is $0.73/day includes 50% discount. For storage service the highest price is $0.061/GB, which is almost 10% of DocumentDB. Third, table storage provides local-replication, geo-replication, read access geo-replication while DocumentDB doesn't support. Fourth, there is local emulator for table storage but none for DocumentDB. We have to connect to the DocumentDB on cloud when developing locally. But, DocumentDB supports some cool features that table storage doesn't have. It supports store procedure, trigger and user-defined-function. It supports rich indexing while table storage only supports indexing against partition key and row key. It supports transaction, table storage supports as well but restricted with Entity Group Transaction scope. And the last, table storage is GA but DocumentDB is still in preview.   Summary In this post I have a quick demonstration and introduction about the new DocumentDB service in Azure. It's very easy to interact through .NET and it also support REST API, Node.js SDK and Python SDK. Then I explained the concept and benefit of  using document database, then compared with table storage.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
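    Following on from the query sample above, here is a small sketch (not from the original post) of filtering documents instead of pulling back the whole collection. It assumes the same client, usings and collection link as the earlier samples, and the SQL-string overload of CreateDocumentQuery from the preview SDK; the firstName property matches the document inserted earlier. 1: static void SelectByFirstName(DocumentClient client, string collectionLink) 2: { 3: // 'root' is just an alias for the collection in DocumentDB's SQL grammar 4: var docs = client.CreateDocumentQuery(collectionLink + "docs/", 5: "SELECT * FROM root r WHERE r.firstName = 'Shaun'").ToList(); 6: foreach (var doc in docs) 7: { 8: Console.WriteLine(doc); 9: } 10: } The filter is evaluated by the service, so only matching documents come back over the wire.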

    Read the article

  • CodePlex Daily Summary for Tuesday, September 18, 2012

    CodePlex Daily Summary for Tuesday, September 18, 2012Popular ReleasesfastJSON: v2.0.5: 2.0.5 - fixed number parsing for invariant format - added a test for German locale number testing (,. problems)????????API for .Net SDK: SDK for .Net ??? Release 4: 2012?9?17??? ?????,???????????????。 ?????Release 3??????,???????,???,??? ??????????????????SDK,????????。 ??,??????? That's all.VidCoder: 1.4.0 Beta: First Beta release! Catches up to HandBrake nightlies with SVN 4937. Added PGS (Blu-ray) subtitle support. Additional framerates available: 30, 50, 59.94, 60 Additional sample rates available: 8, 11.025, 12 and 16 kHz Additional higher bitrates available for audio. Same as Source Constant Framerate available. Added Apple TV 3 preset. Added new Bob deinterlacing option. Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will keep running and continue pro...DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.01: Stabilization release fixed this issues: Links not worked on FF, Chrome and Safari Modified packaging with own manifest file for install and source package. Moved the user Image on the Login to the left side. Moved h2 font-size to 24px. Note : This release Comes w/o source package about we still work an a solution. Who Needs the Visual Studio source files please go to source and download it from there. Known 16 CSS issues that related to the skin.css. All others are DNN default o...Visual Studio Icon Patcher: Version 1.5.1: This fixes a bug in the 1.5 release where it would crash when no language packs were installed for VS2010.sheetengine - Isometric HTML5 JavaScript Display Engine: sheetengine v1.1.0: This release of sheetengine introduces major drawing optimizations. A background canvas is created with the full drawn scenery onto which only the changed parts are redrawn. For example a moving object will cause only its bounding box to be redrawn instead of the full scene. This background canvas is copied to the main canvas in each iteration. For this reason the size of the bounding box of every object needs to be defined and also the width and height of the background canvas. The example...VFPX: Desktop Alerts 1.0.2: This update for the Desktop Alerts contains changes to behavior for setting custom sounds for alerts. I have removed ALERTWAV.TXT from the project, and also removed DA_DEFAULTSOUND from the VFPALERT.H file. The AlertManager class and Alert class both have a "default" cSound of ADDBS(JUSTPATH(_VFP.ServerName))+"alert.wav" --- so, as long as you distribute a sound file with the file name "alert.wav" along with the EXE, that file will be used. You can set your own sound file globally by setti...MCEBuddy 2.x: MCEBuddy 2.2.15: Changelog for 2.2.15 (32bit and 64bit) 1. Added support for %originalfilepath% to get the source file full path. Used for custom commands only. 2. Added support for better parsing of Media Portal XML files to extract ShowName and Episode Name and download additional details from TVDB (like Season No, Episode No etc). 3. Added support for TVDB seriesID in metadata 4. 
Added support for eMail non blocking UI testCrashReporter.NET : Exception reporting library for C# and VB.NET: CrashReporter.NET 1.2: *Added html mail format which shows hierarchical exception report for better understanding.PDF Viewer Web part: PDF Viewer Web Part: PDF Viewer Web PartMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.67: Fix issue #18629 - incorrectly handling null characters in string literals and not throwing an error when outside string literals. update for Issue #18600 - forgot to make the ///#DEBUG= directive also set a known-global for the given debug namespace. removed the kill-switch for disregarding preprocessor define-comments (///#IF and the like) and created a separate CodeSettings.IgnorePreprocessorDefines property for those who really need to turn that off. Some people had been setting -kil...MPC-BE: Media Player Classic BE 1.0.1.0 build 1122: MPC-BE is a free and open source audio and video player for Windows. MPC-BE is based on the original "Media Player Classic" project (Gabest) and "Media Player Classic Home Cinema" project (Casimir666), contains additional features and bug fixes. Supported Operating Systems: Windows XP SP2, Vista, 7 32bit/64bit System Requirements: An SSE capable CPU The latest DirectX 9.0c runtime (June 2010). Install it regardless of the operating system, they all need it. Web installer: http://www.micro...Preactor Object Model: Visual Studio Template .NET 3.5: Visual Studio Template with all the necessary files to get started with POM. You will still need to Get the Preactor.ObjectModel and Preactor.ObjectModleExtensions libraries from Nuget though. You will also need to sign with assembly with a strong name key.Lakana - WPF Framework: Lakana V2: Lakana V2 contains : - Lakana WPF Forms (with sample project) - Lakana WPF Navigation (with sample project)myCollections: Version 2.3.0.0: New in this version : Added TheGamesDB.net API for games and nds Added Fast search options Added order by Artist/Album for music Fixed several provider Performance improvement New Splash Screen BugFixingMicrosoft SQL Server Product Samples: Database: OData QueryFeed workflow activity: The OData QueryFeed sample activity shows how to create a workflow activity that consumes an OData resource, and renders entity properties in a Microsoft Excel 2010 worksheet or Microsoft Word 2010 document. Using the sample QueryFeed activity, you can consume any OData resource. The sample activity uses LINQ to project OData metadata into activity designer expression items. By setting activity expressions, a fully qualified OData query string is constructed consisting of Resource, Filter, Or...F# 3.0 Sample Pack: FSharp 3.0 Sample Pack for Visual Studio 2012 RTM: F# 3.0 Sample Pack for Visual Studio 2012 RTMANPR MX: ANPR_MX Release 1: ANPR MX Release 1 Features: Correctly detects plate area for the average North American plate. (It won't work for the "European" plate size.) Provides potential values for the recognized plate. Allows images 800x600 and below (.jpg / .png). The example requires the VC 10 runtime & .NET 4 Framework to be already installed. The Source code project was made on Visual Studio 2010.Cocktail: Cocktail v1.0.1: PrerequisitesVisual Studio 2010 with SP1 (any edition but Express) Optional: Silverlight 4 or 5 Note: Install Silverlight 4 Tools and then the Silverlight 4 Toolkit. 
Likewise for Silverlight 5 Tools and the Silverlight 5 Toolkit DevForce Express 6.1.8.1 Included in the Cocktail download, DevForce Express requires registration) Important: Install DevForce after all other components. Download contentsDebug and release assemblies API documentation Source code License.txt Re...weber: weber v0.1: first release, creates a basic browser shell and allows user to navigate to web sites.New Projects.NET Code Editor & Compiler Component: .Net compiler component with integrated advanced text box, VisualStudio like highlightning, ability to intercept and show StandardOutput strings.NET Plugin Manager: Provides agnostic functionality for tiered plugin loading, unloading, and plugin collection management.Amazon Control Panel v2: Amazon Control Panel is a application that lets you control you Amazon Seller Central account using the Amazon MWS (Merchant Web Service) API.AutoSPSourceBuilder: AutoSPSourceBuilder: a utility for automatically building a SharePoint 2010 or 2013 install source including service packs, language packs & cumulative updates.CAOS: RBAC acess controllChat Forum: An Internet  forum,  or message  board,  is  an online discussion  site conversations  in  the  form  of  posted  messages.CRM 2011 - Many-To-Many Relationship Entity View: This Silverlight Web Resource for CRM 2011 will allow user to see N:N relationship entity data from single place.dardasim: dardasim gil and lior Tel Cabir DolphinsDBAManage: ???????ERP??,????!DimDate Generator: A SSIS project for generation a data dimansion table and data.DNL: eine grße .net bibliothek für entwicklerDouban FM for Metro: A music radio client for http://douban.fm running on Windows 8 / WinRTExtended WPF Control: Extended WPF Control for research and learning.FizzBuzzDaveC: Implements the classic FizzBuzz programmine exercise.HamStart: Nothing for now...Infopath XSN Modifier: A tool for editing the dataconnections of Infopath.KH Picture Resizer: Picture Resizer ermöglicht es Bilder per Drag and Dop zu verkleinern. Das Program wurde in C# geschrieben und nutzt Windows Forms.Korean String Extension for .NET: ?? ??? ??? ????? ???? string??? ??? ????? Extension library for "string" class that enhances "Hangul Jamo system" features Lucky Loot - Tattoo Shop Management Application: Lucky Loot - Tattoo Shop Management Application Por: Eric Gabriel Rodrigues Castoldi Objetivo: Trabalho de Conclusão de Curso de Sistemas de InformaçãoMagnOS - C# Cosmos Operating System: MagnOS is an Open Source operating system, made to learn how to make operating systems with Cosmos.Móa mày: Project m?iOData Samples: A collection of samples demonstrating solutions and functionality in WCF Data Services, ODataLib and EdmLib.Online Image Editor: Online Photo CanvasOptimuss Administración: La mejor aplicación de Gestión y Control EscolarOptimuss Obelix: La mejor aplicación de Gestión y Control EscolarPersonal Website: My personal websitePlanisoft: Proyecto de Planilla para clinica los fresnosPROYECTODT: ..................................................................................................................PtLibrary: PtLibrary stands for Peter Thönell's Delphi library. PtSettings and PtSettingsGUI make the management and use of settings extremely easy and powerful.RTS WebServer: A small lightweight, modern and fast webserver (template). 
with in the feature the newest technologies like SPDY and websocketsStandards: Standards is an Intranet application (using Windows authentication) designed to document and manage company standards. It is written in C#/MVC 4.Truttle OS: This is an OS I made with CosmosWfp System: zdgdsfgdsfgzpo: projekt na zaliczenie zpo???: ???

    Read the article
