Search Results

Search found 14213 results on 569 pages for 'distributed programming'.

  • MySQL Cluster 7.3: On-Demand Webinar and Q&A Available

    - by Mat Keep
    The on-demand webinar for the MySQL Cluster 7.3 Development Release is now available. You can learn more about the design and implementation of the new MySQL Cluster 7.3 features, and how to get started with them, from the comfort and convenience of your own device, including:
    - Foreign Key constraints in MySQL Cluster
    - Node.js NoSQL API
    - Auto-installation of higher-performance distributed clusters
    We received some great questions over the course of the webinar, and I wanted to share those for the benefit of a broader audience.
    Q. What Foreign Key actions are supported?
    A. The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION and SET NULL.
    Q. Where are Foreign Keys implemented, i.e. data nodes or SQL nodes?
    A. They are implemented in the data nodes, and can therefore be enforced for both the SQL and NoSQL APIs.
    Q. Are they compatible with the InnoDB Foreign Key implementation?
    A. Yes, with the following exceptions:
    - InnoDB doesn’t support “No Action” constraints; MySQL Cluster does.
    - You can choose to suspend FK constraint enforcement with InnoDB using the FOREIGN_KEY_CHECKS parameter; at the moment, MySQL Cluster ignores that parameter.
    - You cannot set up FKs between 2 tables where one is stored using MySQL Cluster and the other InnoDB.
    - You cannot change primary keys through the NDB API, which means that the MySQL Server actually has to simulate such operations by deleting and re-adding the row. If the PK in the parent table has a FK constraint on it, this causes non-ideal behaviour. With Restrict or No Action constraints, the change will result in an error. With Cascaded constraints, you’d want the rows in the child table to be updated with the new FK value but, because the implicit delete of the row from the parent table would remove the associated rows from the child table, the subsequent implicit insert into the parent wouldn’t reinstate the child rows. For this reason, an attempt to add an ON UPDATE CASCADE where the parent column is a primary key will be rejected.
    Q. Does adding or dropping Foreign Keys cause downtime due to a schema change?
    A. No, this is an online operation. MySQL Cluster supports a number of online schema changes, e.g. adding and dropping indexes, adding columns, etc.
    Q. Where can I see an example of Node.js with MySQL Cluster?
    A. Check out the tutorial and download the code from GitHub.
    Q. Can I use the auto-installer to support remote deployments? How about setting up MySQL Cluster 7.2?
    A. Yes to both!
    Q. Can I get a demo?
    A. Check out the tutorial. You can download the code from http://labs.mysql.com/ - go to the Select Build drop-down box.
    Q. What is the minimum network speed required for a geo-distributed cluster with synchronous replication?
    A. If you're splitting your cluster between sites, then we recommend a network latency of 20ms or less. Alternatively, use MySQL asynchronous replication, where the latency of your WAN doesn't impact the latency of your reads/writes.
    Q. Where can one learn more about the PayPal project with MySQL Cluster?
    A. Take a look at the following - you'll find press coverage, a video and slides from their keynote presentation.
    So, if you want to learn more, listen to the new MySQL Cluster 7.3 on-demand webinar. MySQL Cluster 7.3 is still in the development phase, so it would be great to get your feedback on these new features, and the things you want to see!
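    To make the Foreign Key discussion above concrete, here is a minimal sketch of creating a parent/child pair of NDB tables with an ON DELETE CASCADE action from C#. It assumes MySQL Connector/Net (MySql.Data) and a reachable SQL node; the host, credentials and table names are hypothetical and not taken from the webinar.

    using System;
    using MySql.Data.MySqlClient;

    class ForeignKeyDemo
    {
        static void Main()
        {
            // Hypothetical connection to a SQL node of the cluster.
            var connStr = "Server=sqlnode1;Database=test;Uid=demo;Pwd=demo;";
            using (var conn = new MySqlConnection(connStr))
            {
                conn.Open();

                // Both tables are stored in the data nodes (ENGINE=NDB), so the
                // referential action is enforced there, not in the MySQL Server.
                new MySqlCommand(
                    "CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=NDB",
                    conn).ExecuteNonQuery();

                new MySqlCommand(
                    "CREATE TABLE child (" +
                    "  id INT PRIMARY KEY," +
                    "  parent_id INT," +
                    "  FOREIGN KEY (parent_id) REFERENCES parent (id)" +
                    "    ON DELETE CASCADE ON UPDATE RESTRICT" +
                    ") ENGINE=NDB",
                    conn).ExecuteNonQuery();

                Console.WriteLine("Deleting a parent row now cascades to its child rows.");
            }
        }
    }

    Because the constraint lives in the data nodes, the same cascade applies whether a delete arrives through SQL or through one of the NoSQL APIs.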

    Read the article

  • Orchestrating the Virtual Enterprise, Part II

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's CSO & Vice President, SCM Product Strategy
    Almost everyone has ordered from Amazon.com at one time or another. Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise whose operations span thousands of trading partners, Amazon requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem, as no product solution was available at the time. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).
    Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it can improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, it turned to Oracle Fusion DOO to bring these entities together.
    Disturbing the Peace with Acquisitions
    Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business, like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation.
    The Flip Side of Fulfillment
    Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how do you manage complex supply in a flexible way when there are multiple trading parties involved? How do you manage the supply to suppliers? How do you manage critical components that need to merge in a tier-two or tier-three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.
    The Clear Front Runner
    No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.
    Jon Chorley, Chief Sustainability Officer & Vice President, SCM Product Strategy, Oracle Corporation

    Read the article

  • Blueprints for Oracle NoSQL Database

    - by dan.mcclary
    I think that some of the most interesting analytic problems are graph problems.  I'm always interested in new ways to store and access graphs.  As such, I really like the work being done by Tinkerpop to create Open Source Software to make property graphs more accessible over a wide variety of datastores.  Since key-value stores like Oracle NoSQL Database are well-suited to storing property graphs, I decided to extend the Blueprints API to work with it.  Below I'll discuss some of the implementation details, but you can check out the finished product here: http://github.com/dwmclary/blueprints-oracle-nosqldb.  What's in a Property Graph?  In the most general sense, a graph is just a collection of vertices and edges.  Vertices and edges can have properties: weights, names, or any number of other traits.  In an undirected graph, edges connect vertices without direction.  A directed graph specifies that all edges have a head and a tail --- a direction.  A multi-graph allows multiple edges to connect two vertices.  A "property graph" encompasses all of these traits. Key-Value Stores for Property Graphs Key-Value stores like Oracle NoSQL Database tend to be ideal for implementing property graphs.  First, if any vertex or edge can have any number of traits, we can treat it as a hash map.  For example: Vertex["name"] = "Mary" Vertex["age"] = 28 Vertex["ID"] = 12345  and so on.  This is a natural key-value relationship: the key "name" maps to the value "Mary."  Moreover if we maintain two hash maps, one for vertex objects and one for edge objects, we've essentially captured the graph.  As such, any scalable key-value store is fertile ground for planting graphs. Oracle NoSQL Database as a Scalable Graph Database While Oracle NoSQL Database offers useful features like tunable consistency, what lends it to storing property graphs is the storage guarantees around its key structure.  Keys in Oracle NoSQL Database are divided into two parts: a major key and a minor key.  The storage guarantee is simple.  Major keys will be distributed across storage nodes, which could encompass a large number of servers.  However, all minor keys which are children of a given major key are guaranteed to be stored on the same storage node.  For example, the vertices: /Personnel/Vertex/1  and /Personnel/Vertex/2 May be stored on different servers, but /Personnel/Vertex/1-/name and  /Personnel/Vertex/1-/age will always be on the same server.  This means that we can structure our graph database such that retrieving all the properties for a vertex or edge requires I/O from only a single storage node.  Moreover, Oracle NoSQL Database provides a storeIterator which allows us to store a huge number of vertices and edges in a scalable fashion.  By storing the vertices and edges as major keys, we guarantee that they are distributed evenly across all storage nodes.  At the same time we can use a partial major key to iterate over all the vertices or edges (e.g. we search over /Personnel/Vertex to iterate over all vertices). Fork It! The Blueprints API and Oracle NoSQL Database present a great way to get started using a scalable key-value database to store and access graph data.  However, a graph store isn't useful without a good graph to work on.  I encourage you to fork or pull the repository, store some data, and try using Gremlin or any other language to explore.
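    As a loose illustration of the property-graph-over-key-value idea (not the Oracle NoSQL Database or Blueprints APIs themselves), the sketch below models vertex properties as minor keys hanging off a major key, using plain in-memory dictionaries:

    using System;
    using System.Collections.Generic;

    // Conceptual stand-in for a key-value store: major key -> (minor key -> value).
    // In Oracle NoSQL Database, all minor keys of one major key are guaranteed to
    // live on the same storage node, so reading a whole vertex is a single-node I/O.
    class PropertyGraphSketch
    {
        static readonly Dictionary<string, Dictionary<string, string>> store =
            new Dictionary<string, Dictionary<string, string>>();

        static void PutProperty(string majorKey, string minorKey, string value)
        {
            Dictionary<string, string> props;
            if (!store.TryGetValue(majorKey, out props))
            {
                props = new Dictionary<string, string>();
                store[majorKey] = props;
            }
            props[minorKey] = value;
        }

        static void Main()
        {
            PutProperty("/Personnel/Vertex/1", "name", "Mary");
            PutProperty("/Personnel/Vertex/1", "age", "28");
            PutProperty("/Personnel/Edge/12", "label", "knows");

            // All properties of vertex 1 come back from a single "storage node".
            foreach (var property in store["/Personnel/Vertex/1"])
                Console.WriteLine(property.Key + " = " + property.Value);
        }
    }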

    Read the article

  • My Dog, Cross-Channel Shopping, and Fusion SCM

    - by Kathryn Perry
    A guest post by Mark Carson, Director, Oracle Fusion Supply Chain Management I was walking my dog Max in an open space behind my house. As we tromped through the tall weeds I remembered it is tick season and that I should get Max some protection. While he sniffed merrily in the tick-infested brush, I started shopping in the middle of an open field on my phone. I thought it would be convenient to pick up the tick medicine from a pet store on the way home. Searching the pet store website I saw that they had the medicine, but there was no information on whether the store had any in stock and there were no options for shipping it to the store for pickup. I could return it, but not pick it up, which seemed kind of odd. I really didn't feel like making calls to the local stores to find out if they had it. Since the product is popular, I tried one of the large 'everything' stores. Browsing its website I could see that it could be shipped to me, shipped to the store for free, and that the store nearest to me had it in stock. Needless to say, this store became a better option.
    This experience is a small example of why retailers, distributors, and manufacturers have placed a high priority on enabling 'cross-channel commerce.' Shoppers like you and me expect to be able to search, compare, buy and return products on-line and over the phone using a variety of devices including PDAs, tablets and in-store kiosks. The pet store lost my business because its web channel had limited information about its stores.
    I have spoken with many customers and prospects about cross-channel commerce. They all realize the business implications and urgency behind cross-channel commerce but recognize there are challenges to enable it. New and existing applications must be integrated together globally through a consistent cross-channel business process. Integration is required between applications that provide the initial shopping experience and delivery applications associated with warehouses, stores, and partners. The enablement must be accomplished in a flexible way to react to fast-changing product portfolios and new acquisitions, while at the same time minimizing costs through reuse of existing systems. Meanwhile, the business must continue to grow and decision makers need to balance new capability with peak seasons.
    The challenges above are not unique to retail. Any customer in any industry who has multiple points for capturing orders and multiple points for fulfilling orders will face these challenges. With this in mind, we had a unique opportunity in Fusion SCM to re-think how to build a set of modular and flexible applications in the order management space that would make these challenges easier to conquer. The results are Fusion Distributed Order Orchestration and Global Order Promising. These applications can help companies, such as the pet store, enable true cross-channel commerce. The apps provide highly adaptable and flexible business processes to automate order orchestration across multiple cross-channel systems. They also show a global view of supply across warehouses, stores, and partners for real-time availability and more accurate order promising. Additional capability includes a standards-based integration framework for seamless execution and the ability to reuse existing systems for faster and lower cost implementations.
    OK, that was a mouthful of features and benefits. As Max waited to cross the street (he can do basic math too), I wondered if he could relate.
He does not care about leash laws, pick-up courtesy, where he can/can't walk, what time of day it is, or even ticks. He does not care about how all these things could make walking complicated. He just wants to walk. Similarly, customers just want to shop and companies just want to make it easier to sell and deliver. You can learn more about Distributed Order Orchestration and Global Order Promising in cross-channel here.

    Read the article

  • My Red Gate Experience

    - by Colin Rothwell
    I’m Colin, and I’ve been an intern working with Mike in publishing on Simple-Talk and SQLServerCentral for the past ten weeks. I’ve mostly been working “behind the scenes”, making improvements to the spam filtering, along with various other small tweaks. When I arrived at Red Gate, one of the first things Mike asked me was what I wanted to get out of the internship. It wasn’t a question I’d given a great deal of thought to, but my immediate response was the same as almost anybody’s: to support my growing family. Well, ok, not quite that, but money was certainly a motivator, along with simply making sure that I didn’t get bored over the summer. Three months is a long time to fill, and many of my friends end up getting bored, or worse, knitting obsessively. With the arrogance which seems fairly common among Cambridge people, I wasn’t expecting to really learn much here! In my mind, the part of the year where I am at Uni is the part where I learn things, whilst Red Gate would be an opportunity to apply what I’d learnt. Thankfully, the opposite is true: I’ve learnt a lot during my time here, and there has been a definite positive impact on the way I write code. The first thing I’ve really learnt is that test-driven development is, in general, a sensible way of working. Before coming, I didn’t really get it: how could you test something you hadn’t yet written? It didn’t make sense! My problem was seeing a test as having to test all the behaviour of a given function. Writing tests which test the bare minimum possible and building them up is a really good way of crystallising the direction the code needs to grow in, and it ensures you never attempt to write too much code at a time. One really good experience of this was early on in my internship when Mike and I were working on the query used to list active authors: I’d written something which I thought would do the trick, but by starting again using TDD we grew something which revealed that there were several subtle mistakes in the query I’d written. I’ve also been awakened to the value of pair programming. Whilst I could sort of see the point before coming, I also thought that it was impossible that two people would ever get more done at the same computer than if they were working separately. I still think that this is true for projects with pieces that developers can easily work on independently, and with developers who both know the codebase, but I’ve found that pair programming can be really good for learning a code base, and for building up small projects to the point where you can start working on separate components, as well as for solving particularly difficult problems. Later on in my internship, for my down tools week project, I was working on adding Python support to Glimpse. Another intern and I pair programmed the entire project, using ping pong pair programming as much as possible. One bonus that this brought which I wasn’t expecting was that I found myself less prone to distraction: with someone else peering over my shoulder, I didn’t have the ever-present temptation to open gmail, or facebook, or yammer, or twitter, or hacker news, or reddit, and so on, and so forth. I’m quite proud of this project: I think it’s some of the best code I’ve written. I’ve also been really won over to the value of descriptive variable names. In my pre-Red Gate life, as a lone-ranger style cowboy programmer, I’d developed a tendency towards laziness in variable names, sometimes abbreviating or, worse, using acronyms.
    I’ve swiftly realised that this is a bad idea when working with a team: saving a few keystrokes is inevitably not worth it when it comes to reading code again in the future. Longer names also mean you can do away with a majority of comments. I appreciate that if you’ve come up with an O(n*log n) algorithm for something which seemed O(n^2), you probably want to explain how it works, but explaining what a variable name means is a big no-no: it’s so very easy to change the behaviour of the code whilst forgetting about the comments. Whilst at Red Gate, I took the opportunity to attend a code retreat, which really helped me to solidify all the things I’d learnt. To be completely free of any existing code base really lets you focus on best practices and think about how you write code. If you get a chance to go on a similar event, I’d highly recommend it! Cycling to Red Gate, I’ve also become much better at fitting inner tubes: if you’re struggling to get the tube out, or re-fit the tire, letting a bit of air out usually helps. I’ve also become quite a bit better at foosball and will miss having a foosball table! I’d like to finish off by saying thank you to everyone at Red Gate for having me. I’ve really enjoyed working with, and learning from, the team that brings you this web site. If you meet any of them, buy them a drink!
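    As a tiny illustration of the "smallest possible test first" idea described above, here is a hedged C# sketch in NUnit style; the class, behaviour and names are hypothetical, not Red Gate's actual code.

    using NUnit.Framework;

    // Grow behaviour one minimal test at a time: make this pass, then add the next test.
    [TestFixture]
    public class ActiveAuthorFilterTests
    {
        [Test]
        public void NoAuthors_GivesEmptyResult()
        {
            Assert.IsEmpty(ActiveAuthorFilter.Filter(new string[0]));
        }

        // The next test (e.g. "a single recently published author is reported as active")
        // is only written once this one is green.
    }

    public static class ActiveAuthorFilter
    {
        // Just enough code to satisfy the current test.
        public static string[] Filter(string[] authors)
        {
            return new string[0];
        }
    }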

    Read the article

  • New R Interface to Oracle Data Mining Available for Download

    - by charlie.berger
    The R Interface to Oracle Data Mining (R-ODM) allows R users to access the power of Oracle Data Mining's in-database functions using the familiar R syntax. R-ODM provides a powerful environment for prototyping data analysis and data mining methodologies. R-ODM is especially useful for:
    - Quick prototyping of vertical or domain-based applications where the Oracle Database supports the application
    - Scripting of "production" data mining methodologies
    - Customizing graphics of ODM data mining results (examples: classification, regression, anomaly detection)
    The R-ODM interface allows R users to mine data using Oracle Data Mining from the R programming environment. It consists of a set of function wrappers written in source R language that pass data and parameters from the R environment to the Oracle RDBMS enterprise edition as standard user PL/SQL queries via an ODBC interface. The R-ODM interface code is a thin layer of logic and SQL that calls through an ODBC interface. R-ODM does not use or expose any Oracle product code, as it is completely an external interface and not part of any Oracle product. R-ODM is similar to the example scripts (e.g., the PL/SQL demo code) that illustrate the use of Oracle Data Mining, for example, how to create Data Mining models, pass arguments, retrieve results, etc. R-ODM is packaged as a standard R source package and is distributed freely as part of the R environment's Comprehensive R Archive Network (CRAN). For information about the R environment, R packages and CRAN, see www.r-project.org. R-ODM is particularly intended for data analysts and statisticians familiar with R but not necessarily familiar with the Oracle database environment or PL/SQL. It is a convenient environment in which to rapidly experiment with and prototype Data Mining models and applications. Data Mining models prototyped in the R environment can easily be deployed in their final form in the database environment, just like any other standard Oracle Data Mining model.
    What is R?
    R is a system for statistical computation and graphics. It consists of a language plus a run-time environment with graphics, a debugger, access to certain system functions, and the ability to run programs stored in script files. The design of R has been heavily influenced by two existing languages: Becker, Chambers & Wilks' S and Sussman's Scheme. Whereas the resulting language is very similar in appearance to S, the underlying implementation and semantics are derived from Scheme. R was initially written by Ross Ihaka and Robert Gentleman at the Department of Statistics of the University of Auckland in Auckland, New Zealand. Since mid-1997 there has been a core group (the "R Core Team") who can modify the R source code archive. Besides this core group, many R users have contributed application code, as represented in the nearly 1,500 publicly available packages in the CRAN archive (which has shown exponential growth since 2001; R News Volume 8/2, October 2008). Today the R community is a vibrant and growing group of tens of thousands of users worldwide. It is free software distributed under a GNU-style copyleft, and an official part of the GNU project ("GNU S").
    Resources: R website / CRAN, R-ODM

    Read the article

  • The Joy Of Hex

    - by Jim Giercyk
    While working on a mainframe integration project, it occurred to me that some basic computer concepts are slipping into obscurity. For example, just about anyone can tell you that a 64-bit processor is faster than a 32-bit processor. A grade school child could tell you that a computer “speaks” in ‘1’s and ‘0’s. Some people can even tell you that there are 8 bits in a byte. However, I have found that even the most seasoned developers often can’t explain the theory behind those statements. That is not a knock on programmers; in the age of IntelliSense, what reason do we have to work with data at the bit level? Many computer theory classes treat bit-level programming as a thing of the past, no longer necessary now that storage space is plentiful. The trouble with that mindset is that the world is full of legacy systems that run programs written in the 1970s. Today our jobs require us to extract data from those systems, regardless of the format, and that often involves low-level programming. Because it seems knowledge of the low-level concepts is waning in recent times, I thought a review would be in order.
    CHARACTER: See Spot Run
    HEX: 53 65 65 20 53 70 6F 74 20 52 75 6E
    DECIMAL: 83 101 101 32 83 112 111 116 32 82 117 110
    BINARY: 01010011 01100101 01100101 00100000 01010011 01110000 01101111 01110100 00100000 01010010 01110101 01101110
    In this example, I have broken down the words “See Spot Run” to a level computers can understand – machine language.
    CHARACTER: The character level is what is rendered by the computer. A “Character Set” or “Code Page” contains 256 characters, both printable and unprintable. Each character represents 1 BYTE of data. For example, the character string “See Spot Run” is 12 bytes long, exclusive of the quotation marks. Remember, a SPACE is an unprintable character, but it still requires a byte. In the example I have used the default Windows character set, ASCII, which you can see here: http://www.asciitable.com/
    HEX: Hex is short for hexadecimal, or Base 16. Humans are comfortable thinking in base ten, perhaps because they have 10 fingers and 10 toes; fingers and toes are called digits, so it’s not much of a stretch. Hex suits computers because each hex digit represents exactly four bits, with numeric values ranging from zero to fifteen, or 0 – F. Each decimal place has a possible 16 values as opposed to a possible 10 values in base 10. Therefore, the number 10 in Hex is equal to the number 16 in Decimal.
    DECIMAL: The Decimal conversion is strictly for us humans to use for calculations and conversions. It is much easier for us humans to calculate that [30 – 10 = 20] in decimal than it is for us to calculate [1E – A = 14] in Hex. In the old days, an error in a program could be found by determining the displacement from the entry point of a module. Since those values were dumped from the computer’s head, they were in hex. A programmer needed to convert them to decimal, do the equation and convert back to hex. This gets into relative and absolute addressing, a topic for another day.
    BINARY: Binary, or machine code, is where any value can be expressed in 1s and 0s. It is really Base 2, because each decimal place can have a possibility of only 2 characters, a 1 or a 0. In Binary, the number 10 is equal to the number 2 in decimal. Why only 1s and 0s? Very simply, computers are made up of lots and lots of transistors which at any given moment can be ON ( 1 ) or OFF ( 0 ).
    Each transistor is a bit, and the order in which the transistors fire (or do not fire) is what distinguishes one value from another in the computer’s head (or CPU). Consider 32-bit vs 64-bit processing: a 64-bit processor has the capability to read 64 transistors at a time. A 32-bit processor can only read half as many at a time, so in theory the 64-bit processor should be much faster. There are many more factors involved in CPU performance, but that is the fundamental difference.
    DECIMAL   HEX   BINARY
    0         0     0000
    1         1     0001
    2         2     0010
    3         3     0011
    4         4     0100
    5         5     0101
    6         6     0110
    7         7     0111
    8         8     1000
    9         9     1001
    10        A     1010
    11        B     1011
    12        C     1100
    13        D     1101
    14        E     1110
    15        F     1111
    Remember that each character is a BYTE, there are 2 HEX characters in a byte (called nibbles) and 8 BITS in a byte. I hope you enjoyed reading about the theory of data processing. This is just a high-level explanation, and there is much more to be learned. It is safe to say that, no matter how advanced our programming languages and visual studios become, they are nothing more than a way to interpret bits and bytes. There is nothing like the joy of hex to get the mind racing.
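    If you want to play with this yourself, here is a small C# sketch (mine, not from the article) that reproduces the HEX / DECIMAL / BINARY breakdown of "See Spot Run" using the framework's base-conversion helpers:

    using System;
    using System.Text;

    class JoyOfHex
    {
        static void Main()
        {
            byte[] bytes = Encoding.ASCII.GetBytes("See Spot Run");

            var hex = new StringBuilder();
            var dec = new StringBuilder();
            var bin = new StringBuilder();

            foreach (byte b in bytes)
            {
                hex.Append(b.ToString("X2")).Append(' ');                           // base 16
                dec.Append(b).Append(' ');                                          // base 10
                bin.Append(Convert.ToString(b, 2).PadLeft(8, '0')).Append(' ');     // base 2
            }

            Console.WriteLine("HEX:     " + hex);
            Console.WriteLine("DECIMAL: " + dec);
            Console.WriteLine("BINARY:  " + bin);
        }
    }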

    Read the article

  • Application Logging needs work

    Application Logging
    Application logging is the act of logging events that occur within an application, much like a court reporter documents what happens in a court case. Application logs can be useful for several reasons, but the most common use for logs is to recreate the steps that led to an application error so its root cause can be found. Other uses include the detection of fraud, verification of user activity, and providing audits of user/data interactions. "Logs can contain different kinds of data. The selection of the data used is normally affected by the motivation leading to the logging." (OWASP, 2009) OWASP also states that logs should include applicable debugging information such as the event date and time, the responsible process, and a description of the event.
    "There are many reasons why a logging system is a necessary part of delivering a distributed application. One of the most important is the ability to track exactly how many users are using the application during different time periods." (Hatton, 2000) Hatton also states that application logging helps system designers determine whether parts of an application aren't being used as designed. He implies that low usage can indicate which aspects of a system users do or do not like, enabling application designers to find out why users dislike those aspects so that changes can be made to increase the application's usefulness and effectiveness.
    "Logging memory usage can also assist you in tuning up the internals of your application. If you're experiencing a randomly occurring problem, being able to match activities performed with the memory status at the time may enable you to discover the cause of the problem. It also gives you a good indication of the health of the distributed server machine at the time any activity is performed." (Hatton, 2000)
    Commonly Logged Application Events (Defined by OWASP)
    - Access of Data
    - Creation of Data
    - Modification of Data in any form
    - Administrative Functions
    - Configuration Changes
    - Debugging Information (Application Events)
    - Authorization Attempts
    - Data Deletion
    - Network Communication
    - Authentication Events
    - Errors/Exceptions
    Application Error Logging
    The functionality associated with application error logging is actually the combination of proper error handling and application logging. If we look back at Figure 4 and Figure 5, these code examples allow developers to handle various types of errors that occur within the life cycle of an application's execution. Application logging can be applied within the Catch section of the try/catch statement, allowing errors to be logged when they occur. By placing the logging within the Catch section, specific error details can be accessed that help identify the source of the error, the path to the error, what caused the error, and the definition of the error that occurred. This can then be logged and reviewed at a later date in order to recreate the error based on data found in the application log. By allowing applications to log errors, developers and IT staff can use those logs to recreate errors that are encountered by end-users or other dependent systems.
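    As a concrete illustration of the pattern described above, here is a minimal C# sketch that logs from inside the Catch section; the file paths and the LogError helper are hypothetical, and a real application would more likely use a logging library or the event log.

    using System;
    using System.IO;

    class ErrorLoggingExample
    {
        static void Main()
        {
            try
            {
                // Application work that may fail.
                string text = File.ReadAllText(@"C:\data\input.txt");
                Console.WriteLine(text.Length);
            }
            catch (Exception ex)
            {
                // Capture the details needed to recreate the failure later:
                // timestamp, responsible process, and a description of the event.
                LogError(ex);
                throw; // or handle/translate the error as appropriate
            }
        }

        static void LogError(Exception ex)
        {
            string entry = string.Format("{0:u}\t{1}\t{2}\t{3}",
                DateTime.UtcNow,
                AppDomain.CurrentDomain.FriendlyName,
                ex.GetType().Name,
                ex.Message);

            // Hypothetical destination; assumes C:\logs already exists.
            File.AppendAllText(@"C:\logs\application.log", entry + Environment.NewLine);
        }
    }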

    Read the article

  • StreamInsight 2.1 Released

    - by Roman Schindlauer
    The wait is over—we are pleased to announce the release of StreamInsight 2.1. Since the release of version 1.2 we have heard your feedback and suggestions, and based on that we have come up with a whole new set of features. Here are some of the highlights:
    - A New Programming Model – A clearer and more consistent object model, eliminating the need for complex input and output adapters (though they are still completely supported). This new model allows you to provision, name, and manage data sources and sinks in the StreamInsight server.
    - Tight integration with the Reactive Framework (Rx) – You can write reactive queries hosted inside StreamInsight as well as compose temporal queries on reactive objects.
    - High Availability – Check-pointing over temporal streams and multiple processes with shared computation.
    Here is how simple coding can be with the 2.1 Programming Model:
    class Program
    {
        static void Main(string[] args)
        {
            using (Server server = Server.Create("Default"))
            {
                // Create an app
                Application app = server.CreateApplication("app");

                // Define a simple observable which generates an integer every second
                var source = app.DefineObservable(() =>
                    Observable.Interval(TimeSpan.FromSeconds(1)));

                // Define a sink.
                var sink = app.DefineObserver(() =>
                    Observer.Create<long>(x => Console.WriteLine(x)));

                // Define a query to filter the events
                var query = from e in source
                            where e % 2 == 0
                            select e;

                // Bind the query to the sink and create a runnable process
                using (IDisposable proc = query.Bind(sink).Run("MyProcess"))
                {
                    Console.WriteLine("Press a key to dispose the process...");
                    Console.ReadKey();
                }
            }
        }
    }
    That’s how easily you can define a source and a sink, compose a query over them, and run it. Note that we did not replace the existing APIs; they co-exist with the new surface. Stay tuned, you will see a series of articles coming out over the next few weeks about the new features and how to use them. Come and grab it from our download center page and let us know what you think! You can find the updated MSDN documentation here, and we would appreciate if you could provide feedback on the docs as well—best via email to [email protected]. Moreover, we updated our samples to demonstrate the new programming surface.
    Regards,
    The StreamInsight Team

    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you. Just hit the '+' sign drop-down for access. Topics include:
    - Domain object caching
    - Service response caching
    - Session state caching
    - JSR-107
    - HotCache and more!
    Further, we had several interesting questions asked by our audience, and we thought we'd share a sampling of those here for you - just in case you had the same queries yourself. Enjoy!
    Q. What is the largest Coherence deployment out there?
    A. We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed.
    Q. How is Coherence different from a NoSQL database like MongoDB?
    A. Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has primarily a key-value data model but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is faster with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.
    Q. Can I use Coherence for local caching?
    A. Yes, and you get additional features beyond just a java.util.Map: you get expiration capabilities, size-limitation capabilities, eventing capabilities, etc.
    Q. Are there APIs available for GoldenGate HotCache?
    A. It's mostly a black box. You configure it, and it just puts objects into your caches. However you can treat it as a glass box, and use Coherence event interceptors to enhance its behavior - and there are use cases for that.
    Q. Are Coherence caches updated transactionally?
    A. Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA / XA distributed transactions, Coherence caches can participate as resources. But nobody does that because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA / XA distributed transaction semantics.

    Read the article

  • Mercurial fails while committing/updating/etc. using Mercurial+TrueCrypt+Mac

    - by lukewar
    While trying to work with Mercurial on a project located on a TrueCrypt partition, I always get an error as follows:
    ** unknown exception encountered, details follow
    ** report bug details to http://mercurial.selenic.com/bts/
    ** or [email protected]
    ** Mercurial Distributed SCM (version 1.5.2+20100502)
    ** Extensions loaded:
    Traceback (most recent call last):
      File "/usr/local/bin/hg", line 27, in
        mercurial.dispatch.run()
      File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 16, in run
        sys.exit(dispatch(sys.argv[1:]))
      File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 30, in dispatch
        return _runcatch(u, args)
      File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 50, in _runcatch
        return _dispatch(ui, args)
      File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 470, in _dispatch
        return runcommand(lui, repo, cmd, fullargs, ui, options, d)
      File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 340, in runcommand
        ret = _runcommand(ui, options, cmd, d)
      File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 521, in _runcommand
        return checkargs()
      File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 475, in checkargs
        return cmdfunc()
      File "/Library/Python/2.6/site-packages/mercurial/dispatch.py", line 469, in
        d = lambda: util.checksignature(func)(ui, *args, **cmdoptions)
      File "/Library/Python/2.6/site-packages/mercurial/util.py", line 401, in check
        return func(*args, **kwargs)
      File "/Library/Python/2.6/site-packages/mercurial/commands.py", line 3332, in update
        return hg.update(repo, rev)
      File "/Library/Python/2.6/site-packages/mercurial/hg.py", line 362, in update
        stats = _merge.update(repo, node, False, False, None)
      File "/Library/Python/2.6/site-packages/mercurial/merge.py", line 495, in update
        _checkunknown(wc, p2)
      File "/Library/Python/2.6/site-packages/mercurial/merge.py", line 77, in _checkunknown
        for f in wctx.unknown():
      File "/Library/Python/2.6/site-packages/mercurial/context.py", line 660, in unknown
        return self._status[4]
      File "/Library/Python/2.6/site-packages/mercurial/util.py", line 156, in get
        result = self.func(obj)
      File "/Library/Python/2.6/site-packages/mercurial/context.py", line 622, in _status
        return self._repo.status(unknown=True)
      File "/Library/Python/2.6/site-packages/mercurial/localrepo.py", line 1023, in status
        if (f not in ctx1 or ctx2.flags(f) != ctx1.flags(f)
      File "/Library/Python/2.6/site-packages/mercurial/context.py", line 694, in flags
        flag = findflag(self._parents[0])
      File "/Library/Python/2.6/site-packages/mercurial/context.py", line 690, in findflag
        return ff(path)
      File "/Library/Python/2.6/site-packages/mercurial/dirstate.py", line 145, in f
        if 'x' in fallback(x):
    TypeError: argument of type 'NoneType' is not iterable
    It is worth mentioning that Mercurial works perfectly if the project is not located on a TrueCrypt partition.
    Configuration:
    MacOS X 10.6.3
    Mercurial Distributed SCM (version 1.5.2+20100502)
    Python 2.6.5
    Would any of you generous people be able to help me? :)

    Read the article

  • TransactionScope won't work with DB2 provider

    - by Florin
    Hi Everyone, I've been trying to use TransactionScope with a DB2 database (using the DB2 .NET provider v9.0.0.2 and C# 2.0), which SHOULD be supported according to IBM. I have tried all the advice I could find on the IBM forums (such as here) to no avail. I have enabled XA transactions on my XP SP2 machine, and also tried from a Win 2003 Server machine, but I consistently get the infamous error:
    ERROR [58005] [IBM][DB2/NT] SQL0998N Error occurred during transaction or heuristic processing. Reason Code = "16". Subcode = "2-80004005". SQLSTATE=58005
    The Windows event log says:
    The XA Transaction Manager attempted to load the XA resource manager DLL. The call to LOADLIBRARY for the XA resource manager DLL failed: DLL=C:\APPS\IBM\DB2v95fp2\SQLLIB\BIN\DB2APP.DLL File=d:\comxp_sp2\com\com1x\dtc\dtc\xatm\src\xarmconn.cpp Line=2467.
    I also granted the NETWORK SERVICE user full rights to the folder and DLL. Here's the MSDTC startup message:
    MS DTC started with the following settings: Security Configuration (OFF = 0 and ON = 1): Network Administration of Transactions = 0, Network Clients = 0, Inbound Distributed Transactions using Native MSDTC Protocol = 0, Outbound Distributed Transactions using Native MSDTC Protocol = 0, Transaction Internet Protocol (TIP) = 0, XA Transactions = 1
    Any help would be much appreciated! Thanks, Florin
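    For context, this is a minimal sketch of the usage pattern that triggers the XA enlistment (and, on the machines above, the SQL0998N error). It assumes the IBM DB2 .NET provider (IBM.Data.DB2) and the standard SAMPLE database; the connection string is hypothetical.

    using System;
    using System.Transactions;
    using IBM.Data.DB2;

    class TransactionScopeDemo
    {
        static void Main()
        {
            // Promotes to a distributed (XA) transaction once the DB2 connection enlists.
            using (var scope = new TransactionScope())
            {
                using (var conn = new DB2Connection("Database=SAMPLE;UID=db2admin;PWD=secret;"))
                {
                    conn.Open(); // enlistment in the ambient transaction happens here

                    using (var cmd = new DB2Command("UPDATE staff SET comm = comm + 10 WHERE id = 10", conn))
                    {
                        cmd.ExecuteNonQuery();
                    }
                }

                scope.Complete(); // commit via MSDTC / the XA resource manager
                Console.WriteLine("Committed.");
            }
        }
    }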

    Read the article

  • What makes merging in DVCS easy?

    - by afriza
    I read at Joel on Software:
    "With distributed version control, the distributed part is actually not the most interesting part. The interesting part is that these systems think in terms of changes, not in terms of versions."
    and at HgInit:
    "When we have to merge, Subversion tries to look at both revisions—my modified code, and your modified code—and it tries to guess how to smash them together in one big unholy mess. It usually fails, producing pages and pages of “merge conflicts” that aren’t really conflicts, simply places where Subversion failed to figure out what we did. By contrast, while we were working separately in Mercurial, Mercurial was busy keeping a series of changesets. And so, when we want to merge our code together, Mercurial actually has a whole lot more information: it knows what each of us changed and can reapply those changes, rather than just looking at the final product and trying to guess how to put it together."
    By looking at SVN's repository folder, I have the impression that Subversion is maintaining each revision as a changeset. And from what I know, Hg is using both changesets and snapshots while Git is purely using snapshots to store the data. If my assumption is correct, then there must be other things that make merging in DVCS easy. What are those?

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure!! I was wondering if there was a definitive list somewhere regarding what is and is not supported by SQL Azure in comparison to SQL Server 2008? I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html
    - Common Language Runtime (CLR)
    - Database file placement
    - Database mirroring
    - Distributed queries
    - Distributed transactions
    - Filegroup management
    - Global temporary tables
    - Spatial data and indexes
    - SQL Server configuration options
    - SQL Server Service Broker
    - System tables
    - Trace Flags
    which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx
    I've noticed from my own testing that the following also seem to have issues when migrating from SQL Server 2008 to Azure:
    - XML types (the MSDN page does mention large custom types - I guess it may include this?? even if the data schema is really small?)
    - Multi-part views
    I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • Choosing between .NET Service Bus Queues vs Azure Queue Service

    - by ChrisV
    Just a quick question regarding an Azure application. If I have a number of Web and Worker roles that need to communicate, the documentation says to use the Azure Queue Service. However, I've just read that the new .NET Service Bus now also offers queues. These look to be more powerful as they appear to offer a much more detailed API. Whilst the .NSB looks more interesting, it has a couple of issues that make me wary of using it in a distributed application (for example, queue expiration... if I cannot guarantee that a queue will be renewed on time, I may lose it all!). Has anyone had any experience using either of these two technologies, and could you give any advice on when to choose one over the other? I suspect that whilst the Service Bus looks more powerful, as my use case is really just enabling Web/Worker roles to communicate between each other, the Azure Queue Service is what I'm after. But I'm just really looking for confirmation of that before programming myself into a corner :-) Thanks in advance.
    UPDATE: Have read up about the two systems over the break. It definitely looks like the .NET Service Bus is more specifically designed for integrating systems rather than providing a general-purpose reliable messaging system. Azure Queues are distributed, and so reliable and scalable in a way that .NSB queues are not, and therefore more suitable for code hosted within Azure itself. Thanks for the responses.
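    For reference, here is a minimal sketch of Web/Worker role communication over the Azure Queue Service, written against the Windows Azure StorageClient library of that era; the queue name, message payload and use of development storage are hypothetical.

    using System;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class QueueExample
    {
        static void Main()
        {
            // Development storage for illustration; a real role reads the account from configuration.
            CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
            CloudQueueClient client = account.CreateCloudQueueClient();

            CloudQueue queue = client.GetQueueReference("tasks");
            queue.CreateIfNotExist();

            // Web role side: enqueue work for the worker role.
            queue.AddMessage(new CloudQueueMessage("resize-image:42"));

            // Worker role side: poll, process, then delete the message.
            CloudQueueMessage msg = queue.GetMessage();
            if (msg != null)
            {
                Console.WriteLine("Processing " + msg.AsString);
                queue.DeleteMessage(msg);
            }
        }
    }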

    Read the article

  • Is Git ready to be recommended to my boss?

    - by Mike Weller
    I want to recommend Git to my boss as a new source control system, since we're stuck in the 90s with VSS (ouch), but are the tools and 3rd party support good enough yet? Specifically I'm talking about GUI front-ends similar to TortoiseSVN, decent visual diff/merge support, as well as stuff like email commit notifications and general support from 3rd parties like IDEs and build systems. Even though this will be used by programmers, we really need this kind of stuff in our team. I don't want to leave everyone stuck with a new tool, and even a new source control paradigm (distributed), with nothing but a command-line app and some online tutorials. This would be a step backwards. So what do you think... is Git ready? What decent tools exist for Git, and what third-party development apps support it?
    EDIT: My original question was pretty vague so I'm updating it to specifically ask for a list of available tools and 3rd party support for Git. Maybe we can get a community wiki post with a list of stuff. I also do not consider 'use Subversion' to be an adequate answer. There are other reasons to use a distributed source control system other than offline editing - private and cheap branches being one of them.

    Read the article

  • Extract 2 numbers preceded by two different strings in a paragraph using a TCL Regular Expression

    - by Madhu
    Hi, I need to extract two different numbers preceded by two different strings: the Employee Id -- Employee16 (I need 16) -- and the Employee links -- Employee links: 2 (I need 2). The source string looks like the following: Employee16, Employee name is QueenRose Working for 46w0d Billing is Distributed 65537 assigned tasks, 0 reordered, 0 unassigned 0 discarded, 0 lost received, 5/255 load received sequence unavailable, 0xC2E7 sent sequence Employee links: 2 active, 0 inactive (max not set, min not set) Dt3/5/10:0, since 46w0d, no tasks pending Dt3/5/10:10, since 21w0d, no tasks rcvd Employee is currently working in Hardware section. Employee19, Employee name is Edward11 Working for 48w4d Billing is Distributed 206801498 assigned tasks, 0 reordered, 0 unassigned 655372 discarded, 0 lost received, 9/255 load received sequence unavailable, 0x23CA sent sequence Employee links: 7 active, 0 inactive (max not set, min not set) Dt3/5/10:0, since 47w2d, tasks pending Dt3/5/10:10, since 28w6d, no tasks pending Dt3/5/10:11, since 18w4d, no tasks pending Dt3/5/10:12, since 18w4d, no tasks pending Dt3/5/10:13, since 18w4d, no tasks pending Dt3/5/10:14, since 18w4d, no tasks pending Dt3/5/10:15, since 7w2d, no tasks pending Employee is currently working in Hardware sectione. Employee6 (inactive) Employee links: 2 Dt3/5/10:0 (inactive) Dt3/5/10:10 (inactive) Employee7 (inactive) Employee links: 2 Dt3/5/10:0 (inactive) Dt3/5/10:10 (inactive) I tried the following: Multilink(\d+)[^\n\r]*[^M]*Member links:\s+(\d+) but it is not listing all the Ids and links. Can anybody help me get this? Thanks in advance, Madhu.

    Read the article

  • How to generate a number in an arbitrary range using random()={0..1}, preserving uniformity and density?

    - by psihodelia
    Generate a random number in the range [x..y] where x and y are any arbitrary floating point numbers. Use the function random(), which returns a random floating point number in the range [0..1] drawn from P uniformly distributed numbers (call it "density"). The uniform distribution must be preserved and P must be scaled as well. I think there is no easy solution for such a problem. To simplify it a bit, I ask you how to generate a number in the interval [-0.5 .. 0.5], then in [0 .. 2], then in [-2 .. 0], preserving uniformity and density? Thus, for [0 .. 2] it must generate a random number from P*2 uniformly distributed numbers. The obvious simple solution random() * (x - y) + y will not generate all possible numbers because of the lower density for all abs(x-y)>1.0 cases. Many possible values will be missed. Remember that random() returns only a number from P possible numbers. Then, if you multiply such a number by Q, it will give you only one of P possible values, scaled by Q, but you have to scale the density P by Q as well.
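    For reference, here is a small C# sketch of the naive linear mapping the question mentions; it is the approach being criticised, shown only to make the density problem concrete (the count of distinct outputs never exceeds the P values the underlying generator can produce).

    using System;

    class RangeMapping
    {
        // Stretch/shift a [0..1] random value into [min, max]. If the generator can only
        // return P distinct values, this still yields only P distinct outputs spread over
        // a wider interval, so the per-unit-length density drops when |max - min| > 1.
        static double NextInRange(Random rng, double min, double max)
        {
            return min + rng.NextDouble() * (max - min);
        }

        static void Main()
        {
            var rng = new Random();
            Console.WriteLine(NextInRange(rng, -0.5, 0.5)); // same density as [0..1]
            Console.WriteLine(NextInRange(rng, 0.0, 2.0));  // half the density per unit length
            Console.WriteLine(NextInRange(rng, -2.0, 0.0));
        }
    }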

    Read the article

  • Velocity CTP: can we 'search' for objects?

    - by Stato Machino
    It appears that 'tags' allow us to associate a 'search term' with the objects placed into the Velocity cache space. However, these can only be queried within a 'region'. Further, regions somehow limit the locality of objects in the cache to a single server (or maybe something kinda like that). So this appears to make it hard to perform any operation for which the unique Id of the cached item is not persisted or continuously available to the application that stores and retrieves objects to and from the cache. In any case, I can't see an easy way to 'cleanse' the cache of objects, or to find objects across the entire cache that may share some prefix, postfix or infix values in the cache key, so that I can clear out objects repeatedly created in unit tests, for example. And I am unsure about the consequences of regions being associated with single-server cache locations. So I would appreciate any help with the following questions:
    1. What is the difference between a 'distributed cache' (called a 'partitioned' cache??) when using regions, and a 'local cache'?
    1.a. In particular, are the region-oriented values in a distributed cache visible through a cache factory that is configured to 'see' the entire cache space?
    2. Are the operations of creating and removing 'regions' efficient enough that it would be reasonable to create a region and a group of tags for each bundle of objects that need to be cached?
    2.a. Or does this just push the problem of scoping the 'search for objects' up the chain, because the ability of the DataCache object to query down through regions and tags is as limited as querying for the cache keys of objects themselves?
    Thanks, Stato
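    For anyone trying this out, here is a rough sketch of the tag-within-region pattern the question is about. It is written against the DataCache API roughly as it shipped later in AppFabric Caching (the namespace and some method names differed slightly across the Velocity CTPs), and the cache, region and tag names are hypothetical.

    using System.Collections.Generic;
    using Microsoft.ApplicationServer.Caching; // Microsoft.Data.Caching in the Velocity CTPs

    class TaggedCacheSketch
    {
        static void Main()
        {
            // Hypothetical configuration; a real app configures cache hosts in app.config or code.
            var factory = new DataCacheFactory();
            DataCache cache = factory.GetCache("default");

            // Tags only work inside a region, and a region lives on a single cache host.
            cache.CreateRegion("unittest-objects");

            var tags = new List<DataCacheTag> { new DataCacheTag("unit-test") };
            cache.Put("customer:42", "test customer payload", tags, "unittest-objects");

            // "Search" is scoped to the region: fetch (and remove) everything tagged for tests.
            foreach (KeyValuePair<string, object> item in
                     cache.GetObjectsByAnyTag(tags, "unittest-objects"))
            {
                cache.Remove(item.Key, "unittest-objects");
            }

            // Clearing everything a test run created can also be done by removing the whole region.
            cache.RemoveRegion("unittest-objects");
        }
    }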

    Read the article

  • Best practices to deal with "slightly different" branches of source code

    - by jedi_coder
    This question is tool-agnostic rather than related to a certain version control program. Assume there is a source code tree under a certain distributed version control. Let's call it A. At some point somebody else clones it and gets their own copy. Let's call it B. I'll call A and B branches, even if some version control tools have different definitions for branches (some might call A and B repositories). Let's assume that branch A is the "main" branch. In the context of distributed version control this only means that branch A is modified much more actively and the owner of branch B periodically syncs (pulls) new updates from branch A. Let's consider that a certain source file in branch B contains a class (again, this is also language-agnostic). The owner of branch B considers that some class methods belong together and groups them by moving them around inside the class body. Functionally nothing has changed - this is a very trivial refactoring of the code. But the change gets reflected in diffs. Now, assuming that this change from branch B will never get merged into branch A, the owner of branch B will always get this difference when pulling from branch A and merging into his own workspace. Even if there's only one such trivial change, the owner of branch B needs to resolve conflicts every time he pulls from branch A. As long as branches A and B are modified independently, more and more conflicts like this appear. What is the workaround for this situation? Which workflow should the owner of branch B follow to minimize the effort of periodically syncing with branch A?

    Read the article

  • Flow-Design Cheat Sheet – Part I, Notation

    - by Ralf Westphal
    You want to avoid the pitfalls of object oriented design? Then this is the right place to start. Use Flow-Oriented Analysis (FOA) and –Design (FOD or just FD for Flow-Design) to understand a problem domain and design a software solution. Flow-Orientation as described here is related to Flow-Based Programming, Event-Based Programming, Business Process Modelling, and even Event-Driven Architectures. But even though “thinking in flows” is not new, I found it helpful to deviate from those precursors for several reasons. Some aim at too big systems for the average programmer, some are concerned with only asynchronous processing, some are even not very much concerned with programming at all. What I was looking for was a design method to help in software projects of any size, be they large or tiny, involing synchronous or asynchronous processing, being local or distributed, running on the web or on the desktop or on a smartphone. That´s why I took ideas from all of the above sources and some additional and came up with Event-Based Components which later got repositioned and renamed to Flow-Design. In the meantime this has generated some discussion (in the German developer community) and several teams have started to work with Flow-Design. Also I´ve conducted quite some trainings using Flow-Orientation for design. The results are very promising. Developers find it much easier to design software using Flow-Orientation than OOAD-based object orientation. Since Flow-Orientation is moving fast and is not covered completely by a single source like a book, demand has increased for at least an overview of the current state of its notation. This page is trying to answer this demand by briefly introducing/describing every notational element as well as their translation into C# source code. Take this as a cheat sheet to put next to your whiteboard when designing software. However, please do not expect any explanation as to the reasons behind Flow-Design elements. Details on why Flow-Design at all and why in this specific way you´ll find in the literature covering the topic. Here´s a resource page on Flow-Design/Event-Based Components, if you´re able to read German. Notation Connected Functional Units The basic element of any FOD are functional units (FU): Think of FUs as some kind of software code block processing data. For the moment forget about classes, methods, “components”, assemblies or whatever. See a FU as an abstract piece of code. Software then consists of just collaborating FUs. I´m using circles/ellipses to draw FUs. But if you like, use rectangles. Whatever suites your whiteboard needs best.   The purpose of FUs is to process input and produce output. FUs are transformational. However, FUs are not called and do not call other FUs. There is no dependency between FUs. Data just flows into a FU (input) and out of it (output). From where and where to is of no concern to a FU.   This way FUs can be concatenated in arbitrary ways:   Each FU can accept input from many sources and produce output for many sinks:   Flows Connected FUs form a flow with a start and an end. Data is entering a flow at a source, and it´s leaving it through a sink. Think of sources and sinks as special FUs which conntect wires to the environment of a network of FUs.   Wiring Details Data is flowing into/out of FUs through wires. This is to allude to electrical engineering which since long has been working with composable parts. Wires are attached to FUs usings pins. 
They are the entry/exit points for the data flowing along the wires. Input-/output pins currently need not be drawn explicitly. This is to keep designing on a whiteboard simple and quick.   Data flowing is of some type, so wires have a type attached to them. And pins have names. If there is only one input pin and output pin on a FU, though, you don´t need to mention them. The default is Process for a single input pin, and Result for a single output pin. But you´re free to give even single pins different names.   There is a shortcut in use to address a certain pin on a destination FU:   The type of the wire is put in parantheses for two reasons. 1. This way a “no-type” wire can be easily denoted, 2. this is a natural way to describe tuples of data.   To describe how much data is flowing, a star can be put next to the wire type:   Nesting – Boards and Parts If more than 5 to 10 FUs need to be put in a flow a FD starts to become hard to understand. To keep diagrams clutter free they can be nested. You can turn any FU into a flow: This leads to Flow-Designs with different levels of abstraction. A in the above illustration is a high level functional unit, A.1 and A.2 are lower level functional units. One of the purposes of Flow-Design is to be able to describe systems on different levels of abstraction and thus make it easier to understand them. Humans use abstraction/decomposition to get a grip on complexity. Flow-Design strives to support this and make levels of abstraction first class citizens for programming. You can read the above illustration like this: Functional units A.1 and A.2 detail what A is supposed to do. The whole of A´s responsibility is decomposed into smaller responsibilities A.1 and A.2. FU A thus does not do anything itself anymore! All A is responsible for is actually accomplished by the collaboration between A.1 and A.2. Since A now is not doing anything anymore except containing A.1 and A.2 functional units are devided into two categories: boards and parts. Boards are just containing other functional units; their sole responsibility is to wire them up. A is a board. Boards thus depend on the functional units nested within them. This dependency is not of a functional nature, though. Boards are not dependent on services provided by nested functional units. They are just concerned with their interface to be able to plug them together. Parts are the workhorses of flows. They contain the real domain logic. They actually transform input into output. However, they do not depend on other functional units. Please note the usage of source and sink in boards. They correspond to input-pins and output-pins of the board.   Implicit Dependencies Nesting functional units leads to a dependency tree. Boards depend on nested functional units, they are the inner nodes of the tree. Parts are independent, they are the leafs: Even though dependencies are the bane of software development, Flow-Design does not usually draw these dependencies. They are implicitly created by visually nesting functional units. And they are harmless. Boards are so simple in their functionality, they are little affected by changes in functional units they are depending on. But functional units are implicitly dependent on more than nested functional units. They are also dependent on the data types of the wires attached to them: This is also natural and thus does not need to be made explicit. And it pertains mainly to parts being dependent. 
Since boards don´t do anything with regard to a problem domain, they don´t care much about data types. Their infrastructural purpose just needs types of input/output-pins to match.   Explicit Dependencies You could say, Flow-Orientation is about tackling complexity at its root cause: that´s dependencies. “Natural” dependencies are depicted naturally, i.e. implicitly. And whereever possible dependencies are not even created. Functional units don´t know their collaborators within a flow. This is core to Flow-Orientation. That makes for high composability of functional units. A part is as independent of other functional units as a motor is from the rest of the car. And a board is as dependend on nested functional units as a motor is on a spark plug or a crank shaft. With Flow-Design software development moves closer to how hardware is constructed. Implicit dependencies are not enough, though. Sometimes explicit dependencies make designs easier – as counterintuitive this might sound. So FD notation needs a ways to denote explicit dependencies: Data flows along wires. But data does not flow along dependency relations. Instead dependency relations represent service calls. Functional unit C is depending on/calling services on functional unit S. If you want to be more specific, name the services next to the dependency relation: Although you should try to stay clear of explicit dependencies, they are fundamentally ok. See them as a way to add another dimension to a flow. Usually the functionality of the independent FU (“Customer repository” above) is orthogonal to the domain of the flow it is referenced by. If you like emphasize this by using different shapes for dependent and independent FUs like above. Such dependencies can be used to link in resources like databases or shared in-memory state. FUs can not only produce output but also can have side effects. A common pattern for using such explizit dependencies is to hook a GUI into a flow as the source and/or the sink of data: Which can be shortened to: Treat FUs others depend on as boards (with a special non-FD API the dependent part is connected to), but do not embed them in a flow in the diagram they are depended upon.   Attributes of Functional Units Creation and usage of functional units can be modified with attributes. So far the following have shown to be helpful: Singleton: FUs are by default multitons. FUs in the same of different flows with the same name refer to the same functionality, but to different instances. Think of functional units as objects that get instanciated anew whereever they appear in a design. Sometimes though it´s helpful to reuse the same instance of a functional unit; this is always due to valuable state it holds. Signify this by annotating the FU with a “(S)”. Multiton: FUs on which others depend are singletons by default. This is, because they usually are introduced where shared state comes into play. If you want to change them to be a singletons mark them with a “(M)”. Configurable: Some parts need to be configured before the can do they work in a flow. Annotate them with a “(C)” to have them initialized before any data items to be processed by them arrive. Do not assume any order in which FUs are configured. How such configuration is happening is an implementation detail. Entry point: In each design there needs to be a single part where “it all starts”. That´s the entry point for all processing. It´s like Program.Main() in C# programs. Mark the entry point part with an “(E)”. 
Quite often this will be the GUI part. How the entry point is started is an implementation detail. Just consider it the first FU to start do its job.   Patterns / Standard Parts If more than a single wire is attached to an output-pin that´s called a split (or fork). The same data is flowing on all of the wires. Remember: Flow-Designs are synchronous by default. So a split does not mean data is processed in parallel afterwards. Processing still happens synchronously and thus one branch after another. Do not assume any specific order of the processing on the different branches after the split.   It is common to do a split and let only parts of the original data flow on through the branches. This effectively means a map is needed after a split. This map can be implicit or explicit.   Although FUs can have multiple input-pins it is preferrable in most cases to combine input data from different branches using an explicit join: The default output of a join is a tuple of its input values. The default behavior of a join is to output a value whenever a new input is received. However, to produce its first output a join needs an input for all its input-pins. Other join behaviors can be: reset all inputs after an output only produce output if data arrives on certain input-pins
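
    As a companion to the notation, here is a minimal sketch of how functional units are commonly translated into C# in the Event-Based Components style the article refers to (all names are illustrative): input pins become methods, output pins become events, parts hold the domain logic, and a board does nothing but wire its nested parts together.

        using System;

        // A part: pure domain logic, no knowledge of its collaborators.
        class Normalize
        {
            public void Process(string input)            // input pin "Process"
                => Result?.Invoke(input.Trim().ToUpperInvariant());

            public event Action<string> Result;          // output pin "Result"
        }

        // Another part: a sink with a side effect.
        class Print
        {
            public void Process(string text) => Console.WriteLine(text);
        }

        // A board: its only responsibility is wiring nested functional units.
        class A
        {
            private readonly Normalize a1 = new Normalize();
            private readonly Print a2 = new Print();

            public A() => a1.Result += a2.Process;       // the wire between A.1 and A.2

            public void Process(string input) => a1.Process(input);   // board's input pin
        }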

    Read the article

  • The Benefits of Smart Grid Business Software

    - by Sylvie MacKenzie, PMP
    Smart Grid Background What Are Smart Grids?Smart Grids use computer hardware and software, sensors, controls, and telecommunications equipment and services to: Link customers to information that helps them manage consumption and use electricity wisely. Enable customers to respond to utility notices in ways that help minimize the duration of overloads, bottlenecks, and outages. Provide utilities with information that helps them improve performance and control costs. What Is Driving Smart Grid Development? Environmental ImpactSmart Grid development is picking up speed because of the widespread interest in reducing the negative impact that energy use has on the environment. Smart Grids use technology to drive efficiencies in transmission, distribution, and consumption. As a result, utilities can serve customers’ power needs with fewer generating plants, fewer transmission and distribution assets,and lower overall generation. With the possible exception of wind farm sprawl, landscape preservation is one obvious benefit. And because most generation today results in greenhouse gas emissions, Smart Grids reduce air pollution and the potential for global climate change.Smart Grids also more easily accommodate the technical difficulties of integrating intermittent renewable resources like wind and solar into the grid, providing further greenhouse gas reductions. CostsThe ability to defer the cost of plant and grid expansion is a major benefit to both utilities and customers. Utilities do not need to use as many internal resources for traditional infrastructure project planning and management. Large T&D infrastructure expansion costs are not passed on to customers.Smart Grids will not eliminate capital expansion, of course. Transmission corridors to connect renewable generation with customers will require major near-term expenditures. Additionally, in the future, electricity to satisfy the needs of population growth and additional applications will exceed the capacity reductions available through the Smart Grid. At that point, expansion will resume—but with greater overall T&D efficiency based on demand response, load control, and many other Smart Grid technologies and business processes. Energy efficiency is a second area of Smart Grid cost saving of particular relevance to customers. The timely and detailed information Smart Grids provide encourages customers to limit waste, adopt energy-efficient building codes and standards, and invest in energy efficient appliances. Efficiency may or may not lower customer bills because customer efficiency savings may be offset by higher costs in generation fuels or carbon taxes. It is clear, however, that bills will be lower with efficiency than without it. Utility Operations Smart Grids can serve as the central focus of utility initiatives to improve business processes. Many utilities have long “wish lists” of projects and applications they would like to fund in order to improve customer service or ease staff’s burden of repetitious work, but they have difficulty cost-justifying the changes, especially in the short term. Adding Smart Grid benefits to the cost/benefit analysis frequently tips the scales in favor of the change and can also significantly reduce payback periods.Mobile workforce applications and asset management applications work together to deploy assets and then to maintain, repair, and replace them. Many additional benefits result—for instance, increased productivity and fuel savings from better routing. 
Similarly, customer portals that provide customers with near-real-time information can also encourage online payments, thus lowering billing costs. Utilities can and should include these cost and service improvements in the list of Smart Grid benefits. What Is Smart Grid Business Software? Smart Grid business software gathers data from a Smart Grid and uses it improve a utility’s business processes. Smart Grid business software also helps utilities provide relevant information to customers who can then use it to reduce their own consumption and improve their environmental profiles. Smart Grid Business Software Minimizes the Impact of Peak Demand Utilities must size their assets to accommodate their highest peak demand. The higher the peak rises above base demand: The more assets a utility must build that are used only for brief periods—an inefficient use of capital. The higher the utility’s risk profile rises given the uncertainties surrounding the time needed for permitting, building, and recouping costs. The higher the costs for utilities to purchase supply, because generators can charge more for contracts and spot supply during high-demand periods. Smart Grids enable a variety of programs that reduce peak demand, including: Time-of-use pricing and critical peak pricing—programs that charge customers more when they consume electricity during peak periods. Pilot projects indicate that these programs are successful in flattening peaks, thus ensuring better use of existing T&D and generation assets. Direct load control, which lets utilities reduce or eliminate electricity flow to customer equipment (such as air conditioners). Contracts govern the terms and conditions of these turn-offs. Indirect load control, which signals customers to reduce the use of on-premises equipment for contractually agreed-on time periods. Smart Grid business software enables utilities to impose penalties on customers who do not comply with their contracts. Smart Grids also help utilities manage peaks with existing assets by enabling: Real-time asset monitoring and control. In this application, advanced sensors safely enable dynamic capacity load limits, ensuring that all grid assets can be used to their maximum capacity during peak demand periods. Real-time asset monitoring and control applications also detect the location of excessive losses and pinpoint need for mitigation and asset replacements. As a result, utilities reduce outage risk and guard against excess capacity or “over-build”. Better peak demand analysis. As a result: Distribution planners can better size equipment (e.g. transformers) to avoid over-building. Operations engineers can identify and resolve bottlenecks and other inefficiencies that may cause or exacerbate peaks. As above, the result is a reduction in the tendency to over-build. Supply managers can more closely match procurement with delivery. As a result, they can fine-tune supply portfolios, reducing the tendency to over-contract for peak supply and reducing the need to resort to spot market purchases during high peaks. Smart Grids can help lower the cost of remaining peaks by: Standardizing interconnections for new distributed resources (such as electricity storage devices). Placing the interconnections where needed to support anticipated grid congestion. Smart Grid Business Software Lowers the Cost of Field Services By processing Smart Grid data through their business software, utilities can reduce such field costs as: Vegetation management. 
Smart Grids can pinpoint momentary interruptions and tree-caused outages. Spatial mash-up tools leverage GIS models of tree growth for targeted vegetation management. This reduces the cost of unnecessary tree trimming. Service vehicle fuel. Many utility service calls are “false alarms.” Checking meter status before dispatching crews prevents many unnecessary “truck rolls.” Similarly, crews use far less fuel when Smart Grid sensors can pinpoint a problem and mobile workforce applications can then route them directly to it. Smart Grid Business Software Ensures Regulatory Compliance Smart Grids can ensure compliance with private contracts and with regional, national, or international requirements by: Monitoring fulfillment of contract terms. Utilities can use one-hour interval meters to ensure that interruptible (“non-core”) customers actually reduce or eliminate deliveries as required. They can use the information to levy fines against contract violators. Monitoring regulations imposed on customers, such as maximum use during specific time periods. Using accurate time-stamped event history derived from intelligent devices distributed throughout the smart grid to monitor and report reliability statistics and risk compliance. Automating business processes and activities that ensure compliance with security and reliability measures (e.g. NERC-CIP 2-9). Grid Business Software Strengthens Utilities’ Connection to Customers While Reducing Customer Service Costs During outages, Smart Grid business software can: Identify outages more quickly. Software uses sensors to pinpoint outages and nested outage locations. They also permit utilities to ensure outage resolution at every meter location. Size outages more accurately, permitting utilities to dispatch crews that have the skills needed, in appropriate numbers. Provide updates on outage location and expected duration. This information helps call centers inform customers about the timing of service restoration. Smart Grids also facilitates display of outage maps for customer and public-service use. Smart Grids can significantly reduce the cost to: Connect and disconnect customers. Meters capable of remote disconnect can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property or disconnect customers for nonpayment. Resolve reports of voltage fluctuation. Smart Grids gather and report voltage and power quality data from meters and grid sensors, enabling utilities to pinpoint reported problems or resolve them before customers complain. Detect and resolve non-technical losses (e.g. theft). Smart Grids can identify illegal attempts to reconnect meters or to use electricity in supposedly vacant premises. They can also detect theft by comparing flows through delivery assets with billed consumption. Smart Grids also facilitate outreach to customers. By monitoring and analyzing consumption over time, utilities can: Identify customers with unusually high usage and contact them before they receive a bill. They can also suggest conservation techniques that might help to limit consumption. This can head off “high bill” complaints to the contact center. Note that such “high usage” or “additional charges apply because you are out of range” notices—frequently via text messaging—are already common among mobile phone providers. Help customers identify appropriate bill payment alternatives (budget billing, prepayment, etc.). 
Help customers find and reduce causes of over-consumption. There’s no waiting for bills in the mail before they even understand there is a problem. Utilities benefit not just through improved customer relations but also through limiting the size of bills from customers who might struggle to pay them. Where permitted, Smart Grids can open the doors to such new utility service offerings as: Monitoring properties. Landlords reduce costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents. Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart Grids permit utilities to alert owners or managers to a need for maintenance or replacement. Facilitating home and small-business networks. Smart Grids can provide a gateway to equipment networks that automate control or let owners access equipment remotely. They also facilitate net metering, offering some utilities a path toward involvement in small-scale solar or wind generation. Prepayment plans that do not need special meters. Smart Grid Business Software Helps Customers Control Energy Costs There is no end to the ways Smart Grids help both small and large customers control energy costs. For instance: Multi-premises customers appreciate having all meters read on the same day so that they can more easily compare consumption at various sites. Customers in competitive regions can match their consumption profile (detailed via Smart Grid data) with specific offerings from competitive suppliers. Customers seeing inexplicable consumption patterns and power quality problems may investigate further. The result can be discovery of electrical problems that can be resolved through rewiring or maintenance—before more serious fires or accidents happen. Smart Grid Business Software Facilitates Use of Renewables Generation from wind and solar resources is a popular alternative to fossil fuel generation, which emits greenhouse gases. Wind and solar generation may also increase energy security in regions that currently import fossil fuel for use in generation. Utilities face many technical issues as they attempt to integrate intermittent resource generation into traditional grids, which traditionally handle only fully dispatchable generation. Smart Grid business software helps solves many of these issues by: Detecting sudden drops in production from renewables-generated electricity (wind and solar) and automatically triggering electricity storage and smart appliance response to compensate as needed. Supporting industry-standard distributed generation interconnection processes to reduce interconnection costs and avoid adding renewable supplies to locations already subject to grid congestion. Facilitating modeling and monitoring of locally generated supply from renewables and thus helping to maximize their use. Increasing the efficiency of “net metering” (through which utilities can use electricity generated by customers) by: Providing data for analysis. Integrating the production and consumption aspects of customer accounts. During non-peak periods, such techniques enable utilities to increase the percent of renewable generation in their supply mix. During peak periods, Smart Grid business software controls circuit reconfiguration to maximize available capacity. Conclusion Utility missions are changing. Yesterday, they focused on delivery of reasonably priced energy and water. 
Tomorrow, their missions will expand to encompass sustainable use and environmental improvement.Smart Grids are key to helping utilities achieve this expanded mission. But they come at a relatively high price. Utilities will need to invest heavily in new hardware, software, business process development, and staff training. Customer investments in home area networks and smart appliances will be large. Learning to change the energy and water consumption habits of a lifetime could ultimately prove even more formidable tasks.Smart Grid business software can ease the cost and difficulties inherent in a needed transition to a more flexible, reliable, responsive electricity grid. Justifying its implementation, however, requires a full understanding of the benefits it brings—benefits that can ultimately help customers, utilities, communities, and the world address global issues like energy security and climate change while minimizing costs and maximizing customer convenience. This white paper is available for download here. For further information about Oracle's Primavera Solutions for Utilities, please read our Utilities e-book.

    Read the article

  • CodePlex Daily Summary for Wednesday, November 02, 2011

    CodePlex Daily Summary for Wednesday, November 02, 2011Popular ReleasesWYGIWYS Driver: 1.1.2.0 release: Win32, x86, kernel mode driver.Reset IPv6: 1.0.0.0: First release. Only works for Spanish and English operating system editions.BExplorer (Better Explorer): Better Explorer 2.0.0.631 Alpha: Changelog: Added: Some new functions in ribbon Added: Possibility to choose displayed columns Added: Basic Search Fixed: Some bugs after navigation Fixed: Attempt to fix slow navigation and slow start Known issues: - BreadcrumbBar fails on some situations - Basic search not work quite well in some situations Please if anyone find bugs be kind and report them at the Issue Tracker! Thanks!DotNetNuke® Community Edition: 05.06.04: Major Highlights Fixed issue with upgrades on systems that had upgraded the Telerik library to 6.0.0 Fixed issue with Razor Host upgrade to 5.6.3 The logic for module administration checks contains incorrect logic in 1 place, opening the possibility of a user with edit permissions gaining access to functionality they should not have through a particularly crafted url Security FixesBrowsers support the ability to remember common strings such as usernames/addresses etc. Code was adde...Terminals: Version 2.0 - Beta 2 Release: The team has finally put the nail into the official release date for version 2.0. As bugs are winding down on the 2.0 Roadmap we decided to push out another build - the first 2.0 Beta build. Please take time to use and abuse this release. We left logging in place, and this is a debug build so be sure to submit your logs on each bug reported, and please do report all bugs! Check the source code page on the site, this beta includes all commits since (and including) the 90428 check-in back i...iTuner - The iTunes Companion: iTuner 1.4.4322: Added German (unverified, apologies if incorrect) Properly source invariant resources with correct resIDs Replaced obsolete lyric providers with working providers Fix Pseudolater to correctly morph every third char Fix null reference in CatalogBaseTrack Folder Changes: Track Folder Changes 1.0: Track Folder Changes 1.0 (binary)Windows Workflow Foundation on Codeplex: Microsoft.Activities v1.8.8: Microsoft.Activities Overview How do I install Microsoft.Activities? Updates in this release9318Technical Analysis Engine for .NET: Technical Analysis Engine 1.25: What's new in the 1.25 release (2011-11-01): - Added Williams %R indicator - Added Moving Average Envelopes indicatorBF3Rcon.NET: BF3Rcon 3.0: This release is targeted for RCON documentation based on R3. Everything should be beta stable, but it's alpha because I haven't been able to fully test it. When a stable release is ready, a proper changelog will be kept. Important Edit: There's one method that will keep this from working in Mono. GeneratePasswordHash uses void HashAlgorithm.Dispose(), which isn't in Mono. This will have to be changed to Clear() in the next release. If anyone needs a Mono version of this immediately, you can...BoxWorld: BoxWorld_2011.10.30: BoxWorld - 8.0.1110.30 This is the initial release of BoxWorld. I'd recommend downloading the installer as it contains the compiled code and everything all nicely contained. 
By default, you end up with this directory structure: C:\Program Files\ViperWorks\BoxWorld C:\Program Files\ViperWorks\BoxWorld\Data C:\Program Files\ViperWorks\BoxWorld\Interface C:\Program Files\ViperWorks\BoxWorld\Source In the root you have the compiled EXE files, one for the main release, one for the LITE release ...VidCoder: 1.2.1: Fixed a couple regressions: video encoder was blank in queue and crashes with the High Profile preset when opening the Settings window. Fixed problem with auto-update introduced in 1.2.0. If you have 1.2.0 you will need to update manually to get this.Koober: Koober - The Ebook Creator 0.2: The official release of Koober as Open source. Koober is a ebook creator for Windows, and Koob Reader is the reader.patterns & practices: Enterprise Library Contrib: Enterprise Library Contrib - 5.0 (Oct 2011): This release of Enterprise Library Contrib is based on the Microsoft patterns & practices Enterprise Library 5.0 core and contains the following: Common extensionsTypeConfigurationElement<T> - A Polymorphic Configuration Element without having to be part of a PolymorphicConfigurationElementCollection. AnonymousConfigurationElement - A Configuration element that can be uniquely identified without having to define its name explicitly. Data Access Application Block extensionsMySql Provider - ...Network Monitor Open Source Parsers: Network Monitor Parsers 3.4.2748: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, NetowrkMonitor_Parsers.msi continues to improve quality and fix bugs. It has included the fo...Media Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Movies Fixed: Fanart and poster scraping issues TV Shows (Re)Added: Rebuild single show Fixed: Issue when shows are moved from original location Ability to handle " for actor nicknames Crash when episode name contains "<" (does not scrape yet) Clears fanart when switch...patterns & practices - Unity: Unity 3.0 for .NET4.5 Preview: The Unity 3.0.1026.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. The major changes include: Unity projects updated to target .NET 4.5. Dynamic build plans modified to use compiled lambda expressions instead of Reflection.Emit Converting reflection to use the new TypeInfo for reflection. Projects updated to work with the Microsoft Visual Studio 2011 Preview Notes/Known Issues: The Microsoft.Practices.Unity.UnityServiceLocator class cannot be use...Managed Extensibility Framework: MEF 2 Preview 4: Detailed information on this release is available on the BCL team blog.AcDown????? - Anime&Comic Downloader: AcDown????? v3.6: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? 
??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6?? ??“????”...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.33: Add JSParser.ParseExpression method to parse JavaScript expressions rather than source-elements. Add -strict switch (CodeSettings.StrictMode) to force input code to ECMA5 Strict-mode (extra error-checking, "use strict" at top). Fixed bug when MinifyCode setting was set to false but RemoveUnneededCode was left it's default value of true.New ProjectsAlgorithms: algorithms, ACM C++ programs, Electronic design automation algorithmsAll-In-One: Ein Programm - Alle Funktionen Einfach Komponente erstellen - und schon gibt es eine Funktion mehr!Bar Code File Copy: Bar Code File Copy is a mobile app that allows user to initiate file transfer from a PC to another with the help of barcodes and Windows Phone 7.BlekBoox: Google Books API CF-Soft: this is my workspace to study C# programming.CodePaste.NET Tray Application: A simple tray application to create snippets on CodePaste.NET. Right click the app and select New Snippet. A snippet will be created, the contents of your clipboard will be pasted and your clipboard will be replaced with the URL after the code snippet is successfully posted.Common libs of fanthos: Fanthos.Libs.Controls provides panel and button with special draw and special events. Fanthos.Libs.Network provides class that you can use the method likes Async WCF in dotNet 2.0.Database Filesystem: The DBFS project implements a filesystem paradigm to the storage of objects in a database. The primary development language is Visual Basic .NET.Destructible Texture2D: The project shows how to create a destructible texture2D in XNA. This project is developed using the XNA framework and C# programming languageEject and Close CD Tray: For some reason i cannot seem to find any command line program to eject or close the cd tray. So i've created my own version and sharing it for everyone to use. It is an extremely simplistic program with the sole aim of ejecting or closing the cd tray and nothing else.Excel add-in for pricing options: Price options using various distributions.FDFToolkit .NET: FDFToolkit .NET makes it easier to read, write, create, populate, merge FDF, XDP, XFDF, and XML data with Acrobat and LiveCycle PDF forms. FDFToolkit.net utilizes iTextSharp 4.x technologies, under MPL license.Fiction Catalog: A catalog project designed to store information about fictional literature.Gangbee Software: This is customized version of Nopcommerce 1.9Guess Number: This simple program is letting you to guess number from 1 to 20. H? th?ng Khách s?n: H? th?ng qu?n lí khách s?nHelloTipi Photo Downloader: Permet de télécharger les photos en qualité original de votre site de famille [url:HelloTipi|http://www.hellotipi.com].Image Tagger & Resizer: Resize, and text in the lower right of picture with i.e. copyright information.Kinect Sensor Controller: A project using kinect sensor to control objectsKJFramework.Dynamic ??????: KJFramework.Dynamic ???????? KJFramework????????????????????,?????????: KJFramework.Dynamic(??????). ???????????????????????,???????????????,?????????, ????????????????????,????,???????????????,??????????(??,???????)。 ????,?????????????????????(Component), ??????DynamicDomainComponent??。 ??,????????????,???????????DynamicDomainService??。 ??????,???????????? 
???????DynamicDomainService。 ??DynamicDomainService??,????????????????。 ?????, ????????????(Tunnel)???,??????(Tunnel)...Lan ETS Player Registration Software: This is the custom software that the Lan ETS staff uses to register people when they arrive at the LAN party.LINQ to Calil: LINQ to Calil ????? (http://calil.jp/) ???????? API ? LINQ ????????。???????????? LINQ ???????????????。.NET Framework 4 ???????、Silverlight ??????????。 LINQ to Calil ??????????????????????????????????????。Login Form in VB: <project name> makes it easier for <target user group> to <activity>. You'll no longer have to <activity>. It's developed in <programming language>.MbpGpsTester: MbpGpsTester is a gps test tool.Mi proyecto: Mi proyectoMy NuGET Packages: This project contains a custom program that allows you to quickly update NuGET packages from multiple sources and package feeds. It also contains set-up of some packages that are used by HydroDesktop.MyyoutubeforWp7: MyyoutubeforWp7 is a youtube application where users can search and view the videoNERO'S TOOLS: NEROS TOOLS will be an area of programming tools and resources for the automation control industry and the everyday usage of major control systems software.NetWebScript: NetWebScript is a IL to JavaScript compiler. It allow to write complex web application in C# or any other .Net language. It provide Visual Studio integration to debug into orginial sources. It allow to share code with server.Northern Lights WP7 Toolkit: Northern Lights WP7 Toolkit contains some common tools for WP7 Developers.SccmBoundaryHealthTool: .NET application that will search the AD forest for SCCM boundary configurations and report the following: * Overlapping boundaries, including those inter-boundary type, e.g. IP subnet overlaps with AD site. * Unassigned AD sites * Orphaned AD sites in SCCMSeven: Seven is a set of tools for programs development using the Oberon 07 language. SOAP Test Application (with EWS Tools): Application that allows testing of SOAP implementations, in particular Exchange Web Services.SocialCastDotNet: C# library for accessing the SocialCast APISosyal: Sosyal as.sqlce_perfDemo: just a bit of code to demonstrate sqlce insert performance on the desktop. I'm using sqlce on the desktop to persist data as its the fastest thing I can find. Insert perfromance seems to top out at around 65,000records/sec on my laptop but it also seems to limited at this point by my hard drive speeds. The sample code should be able to accept a dataTable, dataReader or dataset and persit to a sqlce database. Also in the code is a small wrapper on the sqlce connectionstring (localconne...SqlMon for OracleClient: This is a project to doing a tool just like Quest SQL Monitor that capture sql sentence in client side.So it's so cool that juect join in and user it. It's developed in C# and Visual Studio 2010 and bases on the EasyHook that is a powerfull .NET hook library.It injects in oci.dll and hook the "OCIStmtPrepare" function which excute before sql excuting,through the parameter "stms" we can get the sqltext and display itStackOverflow Offline Reader using a pdf reader: Using one or more keywords and number of questions to download, the app creates and displays a single pdf file of the questions and answers pertaining to the keyword(s). This enables the user to read questions in an offline manner or to read in a top/down fashion. 
std::streambuf wrapper for COM IStream: This provides a subclass of std::streambuf that wraps a COM IStream, so you can use an IStream with any C++ code that uses iostreams or the STL algorithms with a streambuf_iterator. It does no internal buffering whatsoever (I'd be happy to change this if someone needs it).step out ring signatures .net: Simple program to demonstrate how srs works in real world of .net.System Center Service Manager 2010 SP1 WCF and Web Service: Simple WCF Service and Web Service for SCSM 2010 SP1.System32 - SDK de componentes y acceso a datos para .NET: System32 - SDK de componentes y acceso a datos para .NET Taha Mail: A Java Web Based MailTest TDS: Test TFS - 1Track Folder Changes: Displays in real time a tree with the list of created/deleted/changed files in a specific directory and its subdirectoriesumbTrafficCentral: A plugin for Umbraco that adds robust and scalable handling of redirects, storing the data in the media library. WebDAV Test Application: Application to test WebDAV functionality.XnaXaml: XnaXaml allows you to add a XAML file to your Content Pipeline in XNA and edit straight from Visual Studio. It then takes the XAML file and parses through it to create mock objects that can be used to render controls to the screen in XNA. These mock objects can also be used to pass data to any GUI system you choose.???-?????????? ?? ????????????? ????????? ??????????? ??????? ???: Under construction

    Read the article

  • CodePlex Daily Summary for Wednesday, June 19, 2013

    CodePlex Daily Summary for Wednesday, June 19, 2013Popular ReleasesSQL Compact Query Analyzer: 1.0.1.25676: Beta build of SQL Compact Query Analyzer Bug fixes: - Resolved issue where the application crashes when loading a database that contains tables without a primary key Features: - Displays database information (database version, filename, size, creation date) - Displays schema summary (number of tables, columns, primary keys, identity fields, nullable fields) - Displays the information schema views - Displays column information (database type, clr type, max length, allows null, etc) - Support...xFunc: xFunc (2.2.2.0): Added: hex, bin, oct numbers; gcd (greatest common factor) and lcm (least common multiple); Fixed: the "undef" function;Prism Model Factory Extensions: 1.0.0.0: Initial releaseCODE Framework: 4.0.30618.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.18): XrmToolbox improvement Use new connection controls (use of Microsoft.Xrm.Client.dll) New display capabilities for tools (size, image and colors) Added prerequisites check Added Most Used Tools feature Tools improvementNew toolSolution Transfer Tool (v1.0.0.0) developed by DamSim Updated toolView Layout Replicator (v1.2013.6.17) Double click on source view to display its layoutXml All tools list Access Checker (v1.2013.6.17) Attribute Bulk Updater (v1.2013.6.18) FetchXml Tester (v1.2013.6.1...Media Companion: Media Companion MC3.570b: New* Movie - using XBMC TMDB - now renames movies if option selected. * Movie - using Xbmc Tmdb - Actor images saved from TMDb if option selected. Fixed* Movie - Checks for poster.jpg against missing poster filter * Movie - Fixed continual scraping of vob movie file (not DVD structure) * Both - Correctly display audio channels * Both - Correctly populate audio info in nfo's if multiple audio tracks. * Both - added icons and checked for DTS ES and Dolby TrueHD audio tracks. * Both - Stream d...LINQ Extensions Library: 1.0.4.2: New to release 1.0.4.2 Custom sorting extensions that perform up to 50% better than LINQ OrderBy, ThenBy extensions... Extensions allow for fine tuning of the sort by controlling the algorithm each sort uses.ExtJS based ASP.NET Controls: FineUI v3.3.0: ??FineUI ?? ExtJS ??? ASP.NET ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License v2.0 ?:ExtJS ?? GPL v3 ?????(http://www.sencha.com/license)。 ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ FineUI???? ExtJS ?????????,???? ExtJS ?。 ????? FineUI ? ExtJS ?:http://fineui.com/bbs/forum.php?mod=viewthrea...CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.0 RC2: This is the second release candidate of CMake Tools for Visual Studio 1.0, which contains the following bug fixes and enhancements: The CMake Editor is now available for file extension mapping and in the Open With dialog box. A bug that caused CMake IntelliSense to appear when editing ordinary text files has been fixed. Commenting characters are now properly placed for large blocks of code. 
Smart indentation now behaves correctly when the cursor is not at the end of the line. Function...BarbaTunnel: BarbaTunnel 8.0: Check Version History for more information about this release.ExpressProfiler: ExpressProfiler v1.5: [+] added Start time, End time event columns [+] added SP:StmtStarting, SP:StmtCompleted events [*] fixed bug with Audit:Logout eventpatterns & practices: Data Access Guidance: Data Access Guidance Drop4 2013.06.17: Drop 4Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.94: add dstLine and dstCol attributes to the -Analyze output in XML mode. un-combine leftover comma-separates expression statements after optimizations are complete so downstream tools don't stack-overflow on really deep comma trees. add support for using a single source map generator instance with multiple runs of MinifyJavaScript, assuming that the results are concatenated to the same output file.Kooboo CMS: Kooboo CMS 4.1.1: The stable release of Kooboo CMS 4.1.0 with fixed the following issues: https://github.com/Kooboo/CMS/issues/1 https://github.com/Kooboo/CMS/issues/11 https://github.com/Kooboo/CMS/issues/13 https://github.com/Kooboo/CMS/issues/15 https://github.com/Kooboo/CMS/issues/19 https://github.com/Kooboo/CMS/issues/20 https://github.com/Kooboo/CMS/issues/24 https://github.com/Kooboo/CMS/issues/43 https://github.com/Kooboo/CMS/issues/45 https://github.com/Kooboo/CMS/issues/46 https://github....VidCoder: 1.5.0 Beta: The betas have started up again! If you were previously on the beta track you will need to install this to get back on it. That's because you can now run both the Beta and Stable version of VidCoder side-by-side! Note that the OpenCL and Intel QuickSync changes being tested by HandBrake are not in the betas yet. They will appear when HandBrake integrates them into the main branch. Updated HandBrake core to SVN 5590. This adds a new FDK AAC encoder. The FAAC encoder has been removed and now...Employee Info Starter Kit: v6.0 - ASP.NET MVC Edition: Release Home - Getting Started - Hands on Coding Walkthrough – Technology Stack - Design & Architecture EISK v6.0 – ASP.NET MVC edition bundles most of the greatest and successful platforms, frameworks and technologies together, to enable web developers to learn and build manageable and high performance web applications with rich user experience effectively and quickly. User End SpecificationsCreating a new employee record Read existing employee records Update an existing employee reco...OLAP PivotTable Extensions: Release 0.8.1: Use the 32-bit download for... Excel 2007 Excel 2010 32-bit (even Excel 2010 32-bit on a 64-bit operating system) Excel 2013 32-bit (even Excel 2013 32-bit on a 64-bit operating system) Use the 64-bit download for... Excel 2010 64-bit Excel 2013 64-bit Just download and run the EXE. There is no need to uninstall the previous release. If you have problems getting the add-in to work, see the Troubleshooting Installation wiki page. The new features in this release are: View #VALUE! Err...WPF Application Framework (WAF): WPF Application Framework (WAF) 3.0.0.440: Version: 3.0.0.440 (Release Candidate): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Please build the whole solution before you start one of the sample applications. 
Requirements .NET Framework 4.5 (The package contains a solution file for Visual Studio 2012) Changelog Legend: [B] Breaking change; [O] Marked member as obsolete Samples: Use ValueConverters via StaticResource instead of x:Static. Other Downloads Downloads OverviewBlackJumboDog: Ver5.9.1: 2013.06.13 Ver5.9.1 (1) Web??????SSI?#include???、CGI?????????????????????? (2) ???????????????????????????Free language translator and file converter: Free Language Translator 3.3: some bug fixes and a new link to video tutorials on Youtube.New ProjectsA model oriented programming language and IDE for model driven development: Mo+ Solution Builder is a model oriented programming language and integrated development environment for effective model driven development.ActiveWindowsStoreAppFromWin32: This is a dome of how to active Windows Store App From MFC and pass value to itAjax Jquery ASP.NET Control Toolkit: User Controls to facilitate ASP.NET integration to Ajax and JQuery components.Animations: empresaAricie - Distributed Caching Provider: Aricie - Distributed Caching Provider is a DotNetNuke module and a DNN caching provider, which leverages distributed caching technologies (AppFabric, Memcached)Background Action Pool<TParam, TReturn> (.Net Task Pool): Provides a generic typed Pool for running Tasks/Funcs in an extra thread with return value and various parameters for every task Bordecal.ImportsSorter: Visual Studio add-in for sorting imports/using blocksCodagenic Bootstrap Theme: Codagenic Ecommerce 3 ThemeeBook: eBook is an engine designed to perform text mining on TEI based document.Microsoft CRM 2011 RESTful Grid: MIcrosoft CRM 2011 RESTful Grid CRUD (Create, Read, Update, Destroy) data operations.MyHandsCursor 1.0: MyHandsCursor is a demo application that uses the Kinect for Windows SDK 1.7 .mylib2: mylib2nodejs chat server: nodejs websocket jquery chat serverPiGraph: Ph?n m?n h? tr? kh?o sát hàm s? - Pixels Ð?i H?c Khoa H?c T? Nhiên TP.HCM - Khoa Công Ngh? Thông TinPurpleKnife: The Swiss Army Knife for Extreme Networks XOS switches.PVMapper Desktop: PVMapper Open Source TeamSSQForecast: TESTStored Procedure Pager: Stored Procedure Pager is an open source asp.net/mvc user pager control.The most important feature is support for Stored Procedure.Tetris Online: A 2-player Tetris game you play against another person on the internet.ubdblib: A library providing multipolar model parameterization of electron density using ubdb aspherical atom data bank.Umbraco Multi User Picker: Multi User Picker is an Umbraco datatype for selecting one or more users.UnLinkAlbumTracks: Unlinks tracks in Windows Media Player, allowing them to have their album info updated independently.Vis på Google Maps hvor medlemmerne bor: This project is only relevant for people living in Denmark Ud fra en CSV-fil genererer dette lille program en KML fil indeholdende et koordinatsæt for hvor adresse i CSV-filen. Dette kan f.eks. bruges til at vise hvor medlemmerne af en forening bor.

    Read the article

< Previous Page | 268 269 270 271 272 273 274 275 276 277 278 279  | Next Page >