Search Results

Search found 653 results on 27 pages for 'overkill'.

Page 24/27

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long had to context-switch between two IDEs: Visual Studio for application code development and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little overkill. This is where SQL Connect comes in: an add-in to Visual Studio that provides a connected development experience for the SQL Server developer. Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints about Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses. If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution as follows: Then choose your existing development database and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control) and create a working folder for it (here I'm using TortoiseSVN). Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto-loaded) and select Import SQL Source Control project. Locate your working folder and click Import. This creates a Red Gate database project under your solution. From here you can modify your development database, and manage your changes in source control. To associate your development database with the project, right-click on the project node, select Properties, set the database and Save. Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer, and double-click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and Visual Studio project in sync is as easy as clicking on a button. Once you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio. Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up, as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build". More discussion of SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio. The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio- A 28-day free trial of SQL Connect is available from the Red Gate website.

    Read the article

  • Employee Info Starter Kit: Project Mission

    - by Mohammad Ashraful Alam
    Employee Info Starter Kit is an open source ASP.NET project template intended to address different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table, ‘Employee’, it illustrates how to use Microsoft ASP.NET 4.0, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is highly influenced by the ‘Pareto Principle’, or 80-20 rule: it aims to enable a web developer to gain 80% of the productivity with 20% of the effort, in terms of learning curve and production. User Stories: the user-facing functionality of this starter kit is simple and straightforward, focused on performing CRUD operations on employee records: creating a new employee record, reading an existing employee record, updating an existing employee record, and deleting existing employee records. Key Technology Areas: ASP.NET 4.0, Entity Framework 4.0, T-4 templates, Visual Studio 2010. Architectural Objective: there is no universal architecture that can be considered the best for all sorts of applications. Based on requirements, constraints and environment, application architecture can differ from one project to another, and trade-off factors are an important consideration when deciding on a particular architectural solution. Employee Info Starter Kit is highly influenced by the ‘Pareto Principle’, or 80-20 rule, targeting 80% of the productivity for 20% of the effort with respect to learning curve and production. “Productivity” as an architectural objective also involves other trade-off factors, such as testability, flexibility and performance. Fortunately, Microsoft .NET Framework 4.0 and Visual Studio 2010 include many great features that have been used cleverly in this project to keep these trade-offs to a minimum. Why is Employee Info Starter Kit not a framework? Application frameworks are great for productivity, and some are unavoidable these days. However, relying on too many frameworks can be overkill for a project, as frameworks are typically designed to serve a wide range of uses and are less customizable or editable. On the other hand, implementation patterns can be useful for developers, because they let them adjust the application on demand. Employee Info Starter Kit provides hundreds of “connected” snippets and implementation patterns that demonstrate solutions to problems found in real production environments. It also includes Visual Studio T-4 templates that generate thousands of lines of repetitive data access and business logic layer code in literally a few seconds, on the fly; the generated code is fully mock-testable thanks to language support for partial methods and the latest support for mock testing in Entity Framework. Why is Employee Info Starter Kit different from other open-source web applications? Software development is one of the most rapidly growing industries around the globe, and its technology is updated very frequently to meet ever greater challenges. There are literally thousands of community web sites, blogs and forums dedicated to helping developers adopt new technologies.
While some are really good for learning new technologies quickly, in most cases they are either too “simple and brief” to be used in real-world scenarios, or too “complex and detailed”: typically focused on achieving a product goal (such as a CMS or e-commerce site) from an "end user" perspective, with a long learning curve for the corresponding technology. Employee Info Starter Kit, as a web project, is "developer" oriented and takes a hybrid, “simple and detailed” approach: a simple domain is used to intentionally illustrate most of the architectural and implementation challenges faced by web application developers, so that anyone can dive deep into the corresponding new technology or concept quickly. Roadmap: since its first release in 2008 in the MSDN Code Gallery, Employee Info Starter Kit has gained huge popularity in the ASP.NET community, with 150,000+ downloads. Encouraged by this great response, we have a strong commitment to the community to keep supporting it on the latest technologies. Currently hosted on CodePlex, this community-driven project is planned to have a wide range of individual editions, each focused on a selected application architecture, framework or platform, such as ASP.NET WebForms, ASP.NET Dynamic Data, ASP.NET MVC, jQuery Ajax (RIA), Silverlight (RIA), Azure Service Platform (Cloud) and Visual Studio automated testing. See here for the full list of current and future editions.
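    To give a flavour of the kind of “connected” CRUD code the starter kit is organized around, here is a minimal Entity Framework 4.0 sketch against a single Employee table. The EmployeeInfoEntities context name and the Employee property names are hypothetical stand-ins used for illustration, not the actual types generated by the project.

        using System;
        using System.Linq;

        class EmployeeCrudSketch
        {
            // EmployeeInfoEntities and Employee stand in for the ObjectContext and
            // entity class that would be generated from the data model.
            static void Main()
            {
                // Create: add a new employee record and push it to the database.
                using (var db = new EmployeeInfoEntities())
                {
                    var emp = new Employee { FirstName = "Jane", LastName = "Doe" };
                    db.Employees.AddObject(emp);
                    db.SaveChanges();
                }

                // Read, update and delete against the same table.
                using (var db = new EmployeeInfoEntities())
                {
                    var emp = db.Employees.First(e => e.LastName == "Doe"); // read
                    emp.FirstName = "Janet";                                // update the tracked entity
                    db.SaveChanges();

                    db.Employees.DeleteObject(emp);                         // delete
                    db.SaveChanges();
                }
            }
        }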

    Read the article

  • When to use Aspect Oriented Architecture (AOA/AOD)

    When is it appropriate to use aspect oriented architecture? I think the only honest answer to this question is that it depends on the context in which the question is being asked. There really are no hard and fast rules regarding the selection of architectural models for a project, because each model has benefits and drawbacks. Every system is built with unique requirements and constraints, and that context dictates when to use one type of architecture over another, or in conjunction with others. To me, aspect oriented architecture models should be a sub-phase in the architectural modeling and design process, especially when creating enterprise-level models. Personally, I like to use this approach to create a base architectural model that is defined by non-functional requirements and system quality attributes. This general model can then be used as a starting point for additional models, because it targets all of the key quality attributes the business requires of the system. Aspect oriented architecture is a method for modeling the non-functional requirements and quality attributes of a system, known as aspects. These models do not deal directly with specific functionality, but they do categorize the functionality of the system. This approach allows a system to be created with a strong emphasis on separating system concerns into individual components. These cross-cutting components let a system be built with compartmentalization in regards to non-functional requirements or quality attributes. This allows for a reduction in code, because each component maintains one aspect of the system that can be called by other aspects. It also allows for a much cleaner and smaller code base during the implementation and support of a system. Additionally, by enabling developers to build systems based on aspect-oriented design, projects will be completed faster and will be more reliable, because existing components can be shared across a system and the time needed to create and test the functionality is reduced. Example of an effective use of Aspect Oriented Architecture: In my experience, aspect oriented architecture can be very effective with large or more complex systems. Typically, these types of systems have a large number of concerns, so the act of defining them is very beneficial for reducing the system’s complexity, because components can be developed to address each concern while exposing functionality to the other system components. The benefit of using the aspect oriented approach as the starting point for a system is that it promotes communication between IT and the business, because the aspect oriented models are focused on quality attributes, so not much technical understanding is needed to understand them. An example of this is developing a new intranet website, with common intranet concerns such as error handling, security, logging, notifications and database connectivity (a small sketch of one such concern follows this entry). Example of a less effective use of Aspect Oriented Architecture: Again in my experience, aspect oriented architecture is not as effective with small or less complex systems. There is no need to model concerns for a system that has a limited number of them, because the added overhead would not be justified by the actual benefits of creating the aspect oriented architecture model. Furthermore, these types of projects typically have a reduced time schedule and a limited budget.
The creation of the aspect oriented models would increase the overhead of a project and thus increase the time needed to implement the system. An example of this is a small application that polls a network share for new files and then FTPs them to a new location. The two primary concerns for this project are to monitor a network drive and to FTP files to a new location. There is no need to create an aspect model for this system because there will never be a need to share functionality between these concerns. To add to my point, this system is so small that it could be created with just a few classes, so the added layer of componentizing the concerns would be complete overkill for this situation. References: Brichau, Johan; D'Hondt, Theo. (2006) Aspect-Oriented Software Development (AOSD) - An Introduction. Retrieved from: http://www.info.ucl.ac.be/~jbrichau/courses/introductionToAOSD.pdf
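    As a concrete illustration of the kind of cross-cutting component described above, here is a minimal C# sketch in which a logging concern is factored into its own reusable component instead of being repeated inside each piece of business functionality. The ILogger and FileTransferService names are my own and are not taken from the article.

        using System;

        // The cross-cutting concern (logging) packaged as its own component.
        public interface ILogger
        {
            void Info(string message);
            void Error(string message, Exception ex);
        }

        public class ConsoleLogger : ILogger
        {
            public void Info(string message) { Console.WriteLine("INFO  " + message); }
            public void Error(string message, Exception ex) { Console.WriteLine("ERROR " + message + ": " + ex.Message); }
        }

        // A business component that consumes the logging aspect rather than re-implementing it.
        public class FileTransferService
        {
            private readonly ILogger _log;

            public FileTransferService(ILogger log) { _log = log; }

            public void Transfer(string source, string destination)
            {
                _log.Info("Transferring " + source + " to " + destination);
                try
                {
                    // ... the actual copy/FTP work would go here ...
                    _log.Info("Transfer complete");
                }
                catch (Exception ex)
                {
                    _log.Error("Transfer failed", ex);
                    throw;
                }
            }
        }

    Error handling, security or notifications could be separated out in the same way, which is what keeps the shared functionality out of each individual business component.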

    Read the article

  • The Power to Control Power

    - by speakjava
    I'm currently working on a number of projects using embedded Java on the Raspberry Pi and Beagle Board. These are nice and small, so don't take up much room on my desk, as you can see in this picture. As you can also see, I have power and network connections emerging from under my desk. One of the (admittedly very minor) drawbacks of these systems is that they have no on/off switch. Instead you insert or remove the power connector (USB for the RasPi, a barrel connector for the Beagle). For the Beagle Board this can potentially be an issue; with the micro-SD card located right next to the connector, it has been known for people to eject the card when trying to power off the board, which can be quite serious for the hardware. The alternative is obviously to leave the boards plugged in and then disconnect the power from the outlet. Simple enough, but a picture of underneath my desk shows that this is not the ideal situation either. This made me think that it would be great if I could have some way of controlling a mains voltage outlet using a remote switch or, even better, from software via a USB connector. A search revealed not much that fit my requirements, and anything that was close seemed very expensive. Obviously the only way to solve this was to build my own. Here's my solution. I decided my system would support both control mechanisms (remote physical switch and USB computer control) and be modular in its design for optimum flexibility. I did a bit of searching and found a company in Hong Kong that were offering solid state relays for 99p plus shipping (£2.99, but still made the total price very reasonable). These would handle up to 380V AC on the output side, so more than capable of coping with the UK 240V supply. The other great thing was that, being solid state, the input would work with a range of 3-32V and required a very low current of 7.5mA at 12V. For the USB control an Arduino board seemed the obvious low-cost and simple choice. Given the current requirements of the relay, the Arduino would not require an additional power supply and could be powered just from the USB. Having secured the relays, I popped down to Homebase for a couple of 13A sockets, RS for a box and an Arduino, and Maplin for a toggle switch. The circuit is pretty straightforward, as shown in the diagram (only one output is shown to make it as simple as possible). Originally I used a 2-pole toggle switch to select the remote switch or USB control by switching the negative connections of the low voltage side. Unfortunately, the resistance between the digital pins of the Arduino board was not high enough, so when using one of the remote switches it would turn on both of the outlets. I changed to a 4-pole switch and isolated both positive and negative connections. IMPORTANT NOTE: If you want to follow my design, please be aware that it requires working with mains voltages. If you are at all concerned with your ability to do this, please consult a qualified electrician to help you. It was a tight fit, especially getting the Arduino in, but in the end it all worked. The completed box is shown in the photos. The remote switch was pretty simple, just requiring the squeezing of two rocker switches and a 9V battery into the small RS-supplied box. I repurposed a standard stereo cable with phono plugs to connect the switch box to the mains outlets. I chopped off one set of plugs and wired it to the rocker switches.
The photo shows the RasPi and the Beagle board now controllable from the switch box on the desk. I've tested the Arduino side of things and this works fine.  Next I need to write some software to provide an interface for control of the outlets.  I'm thinking a JavaFX GUI would be in keeping with the total overkill style of this project.

    Read the article

  • RewriteRule not working for certain URLs

    - by keiki
    There are a few domains pointing towards the same server, and of course I need them all to redirect to only one of them. Redirects work, but only for certain URLs. What works: http://www.domain.com, http://domain.com, domain.com/index.html, domain.com/index.php, domain.com/nonExistentDirectory, and if I click in the menu the following URLs are also redirected correctly: domain.com/foo/bar, domain.com/foo/bar.html or .php or other extension. What doesn't work: domain.com/existentDirectory, domain.com/foo/bar (if I type the URL in the address bar). If anyone has the time, skill and will to tell me where the mistake is, I'll be deeply grateful. Here's my .htaccess file:

        AddHandler x-httpd-php .html .htm
        <ifModule mod_gzip.c>
        mod_gzip_on Yes
        mod_gzip_dechunk Yes
        mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
        mod_gzip_item_include handler ^cgi-script$
        mod_gzip_item_include mime ^text/.*
        mod_gzip_item_include mime ^application/x-javascript.*
        mod_gzip_item_exclude mime ^image/.*
        mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>
        <ifModule mod_expires.c>
        ExpiresActive On
        ExpiresDefault "access plus 1 seconds"
        ExpiresByType text/html "access plus 1 seconds"
        ExpiresByType image/gif "access plus 2592000 seconds"
        ExpiresByType image/jpeg "access plus 2592000 seconds"
        ExpiresByType image/png "access plus 2592000 seconds"
        ExpiresByType text/css "access plus 2592000 seconds"
        ExpiresByType text/javascript "access plus 216000 seconds"
        ExpiresByType application/x-javascript "access plus 216000 seconds"
        </ifModule>
        <ifModule mod_headers.c>
        <filesMatch "\\.(ico|pdf|flv|jpg|jpeg|png|gif|swf)$">
        Header set Cache-Control "max-age=2592000, public"
        </filesMatch>
        <filesMatch "\\.(css)$">
        Header set Cache-Control "max-age=2592000, public"
        </filesMatch>
        <filesMatch "\\.(js)$">
        Header set Cache-Control "max-age=216000, private"
        </filesMatch>
        <filesMatch "\\.(xml|txt)$">
        Header set Cache-Control "max-age=216000, public, must-revalidate"
        </filesMatch>
        <filesMatch "\\.(html|htm|php)$">
        Header set Cache-Control "max-age=1, private, must-revalidate"
        </filesMatch>
        </ifModule>
        <ifModule mod_headers.c>
        Header unset ETag
        </ifModule>
        FileETag None
        <ifModule mod_headers.c>
        Header unset Last-Modified
        </ifModule>
        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
        RewriteCond %{HTTP_HOST} ^foo\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]
        RewriteCond %{HTTP_HOST} ^foo1\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo1\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]
        RewriteCond %{HTTP_HOST} ^foo2\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo2\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]
        RewriteCond %{HTTP_HOST} ^foo3\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo3\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]
        RewriteCond %{HTTP_HOST} ^foo8\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo8\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]

    Thinking that the above version was overkill, I've also tried to redirect all requests for domains other than the main one to it, like this:

        RewriteCond %{HTTP_HOST} !^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://domain.com [L,R=301]

    Is it also wrong? Because it doesn't work either! P.S.
@Sodved I've tried that and it doesn't help (I comment here because I can't seem to be able to comment your answer.) Removing the following piece of code didn't solve the issue either, so the problem must be somewhere else: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress New details: using this tool for checking the redirects I got the following results for the URLs that are not redirected: Checked link: http://domain.com/aDirectory/ Type of link: direct link (note the trailing slash above) and: Checked link: http://domain.com/aDirectory Type of redirect: 301 Moved Permanently Redirected to: http://domain.com/aDirectory/ (no trailing slash here) I hope/suspect I'm getting closer to the cause of this behavior.

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide more streamlined insight into individual records, or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services. Example - Order History Constancy: Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see. Creating the Queries: Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high-performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.

        SELECT CustomerID, CompanyName,
          ( SELECT COUNT(OrderID) FROM Orders
            WHERE Orders.CustomerID = Customers.CustomerID ) AS Orders,
          DATEDIFF(mm,
            ( SELECT Max(OrderDate) FROM Orders
              WHERE Orders.CustomerID = Customers.CustomerID ), getdate() ) AS MonthsSinceLastOrder
        FROM Customers

    Creating Views: To turn this or any query into a view, just put CREATE VIEW <ViewName> AS before it. If you want to change it later, use ALTER VIEW <ViewName> AS. Creating Computed Columns: If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification, or check out this article here. Advanced Scoring and Ranking: One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow-up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks: we extract the count of unique months and then divide by the number of months since the initial order. The query will give you the following information, which can be used to help sales and marketing know where to focus: the number of orders, the first order date, the last order date, and the percentage of months in which an order was placed since the first order. You could sort by this percentage to know where to start calling, or to find patterns describing your best customers.
        SELECT CustomerID,
          ( SELECT COUNT(OrderID) FROM Orders
            WHERE Orders.CustomerID = Customers.CustomerID ) AS Orders,
          ( SELECT Max(OrderDate) FROM Orders
            WHERE Orders.CustomerID = Customers.CustomerID ) AS LastOrder,
          ( SELECT Min(OrderDate) FROM Orders
            WHERE Orders.CustomerID = Customers.CustomerID ) AS FirstOrder,
          DATEDIFF(mm,
            ( SELECT Min(OrderDate) FROM Orders
              WHERE Orders.CustomerID = Customers.CustomerID ), getdate() ) AS MonthsSinceFirstOrder,
          100 * ( SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate) + DATEPART(mm, OrderDate)) FROM Orders
                  WHERE Orders.CustomerID = Customers.CustomerID )
              / DATEDIFF(mm,
                  ( SELECT Min(OrderDate) FROM Orders
                    WHERE Orders.CustomerID = Customers.CustomerID ), getdate() ) AS OrderPercent
        FROM Customers

    Read the article

  • Experimenting with other search engines

    - by Bill Graziano
    I’ve been a Google user so long I can hardly remember what I used before it.  Alta Vista maybe?  Or Yahoo.  I’ve tried Bing off and on but it never really stuck.  I probably care more about search engines than your average user because of their impact on SQLTeam.com.  Lately I’ve been trying two other search engines and actually switched to one of them. I’ve played with Blekko a little in the past.  They have some interesting ways to “slice up” your results.  For example, searching on “SQL Server /blogs /date” should just search all the recently updated blogs.  Those two extra words on the search are slashtags.  The full list of slashtags runs from /forums to just see forums to /twitter to /nikon to /reviews and on and on and on.  I laughed when I saw they had slashtags for both liberal and conservative.  I’d hate to find any search results that don’t match my existing worldview :)  You can also create your own slashtags.  I created a mini-search engine for the SQL Server blogs that I read.  You can search it for “backup” at http://blekko.com/ws/backup+/billgraziano/sql-sites.  I uploaded my OPML and it limited the search to just those sites.  It seems like the site is focusing more on curating results and less on algorithms.  This is an interesting site for power searchers.  There are some great ways to curate results using slashtags.  For 99% of my searches (type words, click on one of the first few links) slashtags are overkill.  They do have some good information on page and site ranking though, so I’ll probably spend some time looking through that. Blekko recently got my attention again when they said they were banning “content farms” - and that includes eHow and experts-exchange.  I always feel used when I click on a link to EE and find myself scrolling all the way to the bottom to see if I can find the answer.  Sometimes it’s there but sometimes it tells me I need to pay first.  I’ve longed for a way to always exclude certain sites.  Blekko might be taking a hammer to a problem that needs a scalpel but it’s an interesting choice.  (And some of the comments in the TechCrunch link are interesting if you’re a search nerd.) DuckDuckGo is an odd name for a search engine.  Their big hook is that they don’t have search history.  If you wade through your Google account you can probably find the page where it stores your search history.  It was pretty enlightening to find mine.  It was easy to disable but that got me started looking at other search engines.  DDG (or DukGo) just feels like Google used to in the old days.  The results are good enough and the site is fast. Searches will return a snippet from Wikipedia or another site (like StackOverflow) at the top.  I think the idea is to answer the question without needing to visit the site.  I’m not sure that’s a good thing for SQLTeam.com. The only thing I really miss is image search.  You can add a “!i” at the end of any search and it will search the images on Bing.  Bing doesn’t have a great image search but it works for most of what I need.  They call these exclamation marks “!bangs” and they are kinda, sorta like slashtags.  I’ve been using DuckDuckGo now for a few weeks and I’m pretty happy with it.  I use Chrome for my browser and it was an easy switch to make.  It’s still a little surprising seeing my search results come up in a different format.  I’m starting to get used to it though.

    Read the article

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all, let me explain what I would like to do: I already have a long-running webapp developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text+pics). The app is running in production on Windows Server 2003 and has a dedicated database server (SQL Server 2008), also running Windows Server 2003. I myself am a quite experienced web developer, but I have never actually programmed anything non-web (or at least nothing serious). I plan on adding functionality to the webapp for which I would need a jpg snapshot of each page of the PDF. Creating these "thumbnails" isn't a big deal as such; I already do it inside my webapp using ghostscript. I've only done it on 1-page documents for now though, and the new functionality will need to process bigger documents. In order for this process to be transparent for the admins as well as the end users, I would like to implement some kind of queue to delay the processing of the thumbnails. Again, no problem creating the queue: it will consist of records in a table, with enough info to find the PDF document again. Then I will need to process this queue, and that's where my questions start. Obviously the best solution to process it isn't an ASP script or the like, so I will have to get out of my familiar environment. No problem, but I have no idea which direction to go. Therefore, a few questions: What should I develop? I presumably need something that is on "standby" on the server, runs when needed, then returns to an idle state until further notice. Should I be looking into a Windows service? Is there another more appropriate type of project? Depending on the first answer, what should the approach be? Should I somehow have SQL Server "tell" the program/service/... to process the queue, or should I have that program/service/... periodically check the state of the queue and process new items? In both cases, which functionality can I use? We're not talking about hundreds of PDFs a day (max 50 maybe), so I can totally afford to process the queue 1 item at a time. Can you confirm I don't have to look much further into threads and the like? (I found a lot of answers talking about threads in queue processing, but it looks like overkill for my needs.) Maybe linked to the previous question: what about concurrent calls to the program, whatever it is? Let's suppose it is currently running, and a new record comes into the queue: what should the behaviour be? I don't need very detailed answers and would already be happy with answers like "You can do the processing with a service, and yes it's possible to have SQL Server on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles and such, but I can hardly afford to spend too much time learning stuff only to realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of matters I need to investigate. Thanks for reading; I hope I'll find some helping hands on here :-)
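    To illustrate the "periodically check the queue" option the poster describes, here is a minimal C# sketch of a polling worker that could be hosted in a Windows service (OnStart would create it, OnStop would dispose the timer). The connection string, the ThumbnailQueue table and the GenerateThumbnails call are hypothetical placeholders, not taken from the question.

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        public class ThumbnailQueueWorker
        {
            private readonly Timer _timer;
            private int _busy; // prevents overlapping passes if one run takes longer than the interval

            public ThumbnailQueueWorker()
            {
                // Poll the queue once a minute; at ~50 PDFs a day this is more than enough.
                _timer = new Timer(_ => ProcessQueue(), null, TimeSpan.Zero, TimeSpan.FromSeconds(60));
            }

            private void ProcessQueue()
            {
                if (Interlocked.Exchange(ref _busy, 1) == 1) return; // a previous pass is still running
                try
                {
                    using (var cn = new SqlConnection("Data Source=DBSERVER;Initial Catalog=WebApp;Integrated Security=SSPI"))
                    {
                        cn.Open();
                        // Hypothetical queue table: ThumbnailQueue(Id, PdfPath, Processed)
                        var select = new SqlCommand(
                            "SELECT TOP 1 Id, PdfPath FROM ThumbnailQueue WHERE Processed = 0 ORDER BY Id", cn);
                        int id;
                        string pdfPath;
                        using (var rdr = select.ExecuteReader())
                        {
                            if (!rdr.Read()) return; // nothing queued this pass
                            id = rdr.GetInt32(0);
                            pdfPath = rdr.GetString(1);
                        }

                        GenerateThumbnails(pdfPath); // e.g. shell out to ghostscript, one jpg per page

                        var update = new SqlCommand("UPDATE ThumbnailQueue SET Processed = 1 WHERE Id = @id", cn);
                        update.Parameters.AddWithValue("@id", id);
                        update.ExecuteNonQuery();
                    }
                }
                finally
                {
                    Interlocked.Exchange(ref _busy, 0);
                }
            }

            private void GenerateThumbnails(string pdfPath)
            {
                // Placeholder for the ghostscript call the poster already uses for single pages.
            }
        }

    Processing a single item per pass, as the poster suggests, also keeps concurrency simple: the flag above just skips a pass if the previous one is still busy.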

    Read the article

  • Bidirectional real-time sync of large file tree between two distant linux servers

    - by dlo
    By large file tree I mean about 200k files, and growing all the time. A relatively small number of files are being changed in any given hour though. By bidirectional I mean that changes may occur on either server and need to be pushed to the other, so rsync doesn't seem appropriate. By distant I mean that the servers are both in data centers, but geographically remote from each other. Currently there are only 2 servers, but that may expand over time. By real-time, it's ok for there to be a little latency between syncing, but running a cron every 1-2 minutes doesn't seem right, since a very small fraction of files may change in any given hour, let alone minute. EDIT: This is running on VPS's so I might be limited on the kinds of kernel-level stuff I can do. Also, the VPS's are not resource-rich, so I'd shy away from solutions that require lots of ram (like Gluster?). What's the best / most "accepted" approach to get this done? This seems like it would be a common need, but I haven't been able to find a generally accepted approach yet, which was surprising. (I'm seeking the safety of the masses. :) I've come across lsyncd to trigger a sync at the filesystem change level. That seems clever though not super common, and I'm a bit confused by the various lsyncd approaches. There's just using lsyncd with rsync, but it seems this could be fragile for bidirectionality since rsync doesn't have a notion of memory (eg- to know whether a deleted file on A should be deleted on B or whether it's a new file on B that should be copied to A). lipsync appears to be just a lsyncd+rsync implementation, right? Then there's using lsyncd with csync2, like this: http://www.axivo.com/community/threads/lightning-fast-synchronization-with-csync2-and-lsyncd.121/ ... I'm leaning towards this approach, but csync2 is a little quirky, though I did do a successful test of it. I'm mostly concerned that I haven't been able to find a lot of community confirmation of this method. People on here seem to like Unison a lot, but it seems that it is no longer under active development and it's not clear that it has an automatic trigger like lsyncd. I've seen Gluster mentioned, but maybe overkill for what I need? UPDATE: fyi- I ended up going with the original solution I mentioned: lsyncd+csync2. It seems to work quite well, and I like the architectural approach of having the servers be very loosely joined, so that each server can operate indefinitely on its own regardless of the link quality between them.

    Read the article

  • Multi-Role Domain Controllers for Small Offices (< 50 clients)

    - by kce
    Warning: I'm a Linux/*NIX admin so this is all new to me. I understand that it's not considered a good idea to have only a single domain controller, and that it is also probably a good idea for a domain controller to only do AD/DHCP/DNS (Here). We have two offices, location A with 30 users and location B with 10 users. Our two offices are separated by a WAN that is not particularly robust, so I have been instructed that we need to have standalone services in each office. This means that according to "best practices" we will need to build a domain controller and a separate file server in each office. Again, I am not knowledgeable in the ways of Windows, but this seems a little unnecessary for an organization of 40 users. People have commented that I could "get away with" running file services on the domain controller as long as the "load is light". That just seems to generate more questions than it answers. What constitutes light load? What are the potential consequences of mixing these roles? Ideally I would prefer to only have one physical machine at each location. The one in location A (the location with IT staff) can act as the primary domain controller and the one in the smaller office can act as the backup domain controller. If either domain controller fails we can still use the other one for authentication (albeit with some latency), and if the WAN connection fails each office still has access to their respective "local" domain controller. If the file services are ALSO run on each server (and synchronized with something like DFS), a similar arrangement in terms of redundancy can be had without having to purchase, build and install two additional separate servers. It's not that I'm averse to that (well, any more averse than I am to the whole thing to begin with), but to my simple mind it just seems, well, a bit overkill. I can definitely see the benefits of functional separation when we're talking larger organizations, but I need to consider the additional overhead too. None of this excludes having a DRP setup for the domain controller/s. I assume you can lose two domain controllers just as easily as one.

    Read the article

  • In search of a network file system with extended caching to speed up file access

    - by Brecht Machiels
    I'm running a small home server that stores my documents. The disks in this server are in a RAID 1 configuration (using Linux md) and it's also periodically backed up to an external hard drive to make sure I don't lose them. However, I'm always accessing the files from other computers on the home network using an SMB share, and this results in a considerable speed penalty (especially when connected over WLAN). This is quite annoying when editing large files, such as digital camera RAWs, for example. I've been looking for a solution to this problem. It would have to offer some kind of local caching to speed up the file access. The client would preferably not keep a copy of all data on the server, as it consists of a very large collection of photographs, most of which I will not access frequently. Instead, it should only cache the accessed files and sync the changes back in the background. Ideally, it would also do some smart read-ahead (cache the files that are in the same directory as the currently opened file, for example), but I suppose that's asking a bit much. Synchronization should be automatic (on file change). Conflicting file changes (at the same time on different clients) are unlikely to happen in my use case, but I would prefer that they are handled properly (notification to the user). I've come across the following options so far. Something similar to Dropbox: iFolder seems to be the only thing that comes close, but its reputation (stability) and requirements put me off. A distributed file system such as OpenAFS: I'm not sure this will speed up file access, and it is probably overkill for what I need. Maybe NFS or even Samba offer these possibilities. I also read a bit about Windows' Offline Files, but its operation seems limited (at least on Windows XP). As this is just for personal use, I'm not willing to spend a lot of money. A free solution would be preferred. Also, the server needs to run on Linux, and I need a client for at least Windows.

    Read the article

  • How can I prevent an unintentional DDOS running ColdFusion 8 with IIS 6?

    - by Eric Belair
    We had an interesting outage today on one of our client's websites. Out of nowhere, the website was inaccessible. The website runs by itself on a dedicated physical Windows 2000 server (probably overkill, I know, but that's a discussion for a different day). After restarting IIS and the ColdFusion Application Service, the problem came back several times. My initial thought was that it was a DNS issue, which happens occasionally - the last time it happened was after Hurricane Sandy when our ISP was out, and we had to make some network config changes. But it was not a DNS issue. My second thought was that it was a DDOS attack, but there's very little reason anyone would want to take this site down. When we called our ISP, the operator on the other end noted that traffic was spiking significantly. As it turned out, the client had unintentionally caused a DDOS on the website after they FTPed a very large video file and then mass-emailed a link to it. Hundreds of people clicked the link and brought the site to its knees. I am primarily a website programmer, but I have to contribute to server administration at times. Sadly, I'm the resident ColdFusion and IIS expert, but I don't have a lot of experience with this issue. What are some basic steps that I can take to prevent this from happening in the future, since we cannot always control what files the client posts to the website? Here are some ideas I had, but I'm unsure of the impact: (1) Limit the number of connections in IIS. (2) Put media files on a separate server (like an Amazon site, etc.). (3) File requests of this type currently sit behind a server-side script (i.e. /www.site.com/viewFile.cfm?fileId=1424545, where the fileId references a file off the webroot) that logs requests and pushes the file to the browser using CFCONTENT. I could edit this script to reject requests when they exceed a certain amount in a given time-frame (i.e. a 5MB file can be accessed globally 10 times in an hour). This may cause some users frustration, but if hundreds of users are attempting to view the file, the site is going to crash anyway, as it did today, which is way more frustrating, since there is no "pretty" message explaining why they can't get to the file. I'm open to any suggestions, as I'm continuing my research to report to the CTO with the best options, so that we can put a solution into effect. Thank you.
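    Purely to illustrate the time-window check proposed in idea (3) above, here is a rough C# sketch of the logic (the real change would be made in the existing viewFile.cfm script, in CFML); the window length, threshold and type names are hypothetical.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;

        public static class DownloadThrottle
        {
            // fileId -> timestamps of recent requests for that file
            private static readonly ConcurrentDictionary<int, Queue<DateTime>> Requests =
                new ConcurrentDictionary<int, Queue<DateTime>>();

            private static readonly TimeSpan Window = TimeSpan.FromHours(1); // assumed window
            private const int MaxRequestsPerWindow = 10;                     // assumed threshold

            // Returns true if the request should be served, false if the caller should
            // show a friendly "try again later" page instead of streaming the file.
            public static bool Allow(int fileId)
            {
                var queue = Requests.GetOrAdd(fileId, _ => new Queue<DateTime>());
                lock (queue)
                {
                    var now = DateTime.UtcNow;
                    while (queue.Count > 0 && now - queue.Peek() > Window)
                        queue.Dequeue(); // drop timestamps that have aged out of the window

                    if (queue.Count >= MaxRequestsPerWindow)
                        return false; // over the limit for this file

                    queue.Enqueue(now);
                    return true;
                }
            }
        }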

    Read the article

  • How to reset mysql's replication settings completely, without reinstalling it?

    - by user38060
    I set up mysql replication by adding references to binlogs, relay logs etc in my.cnf restarted mysql, it worked. I wanted to change it so I deleted all binlog related files including log-bin.index, removed binlog statements from my.cnf restarted server, works set master to '', purge master logs since now(), reset slave, stop slave, stop master. now, to set up replication again, I added binlog statements to the server. But then I hit this problem when restarting with: sudo mysqld (the only way to see mysql's startup errors) I get this error: /usr/sbin/mysqld: File '/etc/mysql/var/log-bin.index' not found (Errcode: 13) Because indeed, this file does not exist! (I deleted it, while trying to set up a new replication system) Hmm, if I change the config line to: log-bin-index = log-bin.index I get a different error: [ERROR] Can't generate a unique log-filename /etc/mysql/var/bin.(1-999) [ERROR] MSYQL_BIN_LOG::open failed to generate new file name. [ERROR] Aborting The first time I set up replication on this system, I didn't need to create this file. I did the same thing - added references to a previously non-existing file, and mysql created it. Same with relay logs, etc. I don't know why mysql insists on trying to read the old folder. Should I just reinstall the whole package again? That seems like overkill. my my.cnf: [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = IP key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP table_cache = 64 sort_buffer =64K net_buffer_length =2K query_cache_limit = 1M query_cache_size = 16M slow_query_log_file = /etc/mysql/var/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes expire_logs_days = 10 max_binlog_size = 100M server-id = 3 log-bin = /etc/mysql/var/bin.log log-slave-updates log-bin-index = /etc/mysql/var/log-bin.index log-error = /etc/mysql/var/error.log relay-log = /etc/mysql/var/relay.log relay-log-info-file = /etc/mysql/var/relay-log.info relay-log-index = /etc/mysql/var/relay-log.index auto_increment_increment = 10 auto_increment_offset = 3 master-host = HOST master-user = USER master-password=PWD replicate-do-db = DBNAME collation_server=utf8_unicode_ci character_set_server=utf8 skip-character-set-client-handshake [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash [myisamchk] key_buffer_size = 16M sort_buffer_size = 8M [mysqlhotcopy] interactive-timeout !includedir /etc/mysql/conf.d/ Update: Changing all the /etc/mysql/var/xxx paths in binlog & relay log statements to local has somehow solved the problem. I thought it was apparmor causing it at first, but when I added /etc/mysql/* rw, to apparmor's config and restarted it, it still couldn't read the full path.

    Read the article

  • Forwarding RDP via a Linux machine using iptables: Not working

    - by Nimmy Lebby
    I have a Linux machine and a Windows machine behind a router that implements NAT (the diagram might be overkill, but was fun to make): I am forwarding RDP port (3389) on the router to the Linux machine because I want to audit RDP connections. For the Linux machine to forward RDP traffic, I wrote these iptables rules: iptables -t nat -A PREROUTING -p tcp --dport 3389 -j DNAT --to-destination win-box iptables -A FORWARD -p tcp --dport 3389 -j ACCEPT The port is listening on the Windows machine: C:\Users\nimmy>netstat -a Active Connections Proto Local Address Foreign Address State (..snip..) TCP 0.0.0.0:3389 WIN-BOX:0 LISTENING (..snip..) And the port is forwarding on the Linux machine: # tcpdump port 3389 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 01:33:11.451663 IP shieldsup.grc.com.56387 > linux-box.myapt.lan.ms-wbt-server: Flags [S], seq 94663035, win 8192, options [mss 1460], length 0 01:33:11.451846 IP shieldsup.grc.com.56387 > win-box.myapt.lan.ms-wbt-server: Flags [S], seq 94663035, win 8192, options [mss 1460], length 0 However, I am not getting any successful RDP connections from the outside. The port is not even responding: C:\Users\outside-nimmy>telnet example.com 3389 Connecting To example.com...Could not open connection to the host, on port 3389: Connect failed Any ideas? Update Per @Zhiqiang Ma, I looked at nf_conntrack proc file during a connection attempt and this is what I see (192.168.3.1 = linux-box, 192.168.3.5 = win-box): # cat /proc/net/nf_conntrack | grep 3389 ipv4 2 tcp 6 118 SYN_SENT src=4.79.142.206 dst=192.168.3.1 sport=43142 dport=3389 packets=6 bytes=264 [UNREPLIED] src=192.168.3.5 dst=4.79.142.206 sport=3389 dport=43142 packets=0 bytes=0 mark=0 secmark=0 zone=0 use=2 2nd update Got tcpdump on the router and it seems that win-box is sending an RST packet: 21:20:24.767792 IP shieldsup.grc.com.45349 > linux-box.myapt.lan.3389: S 19088743:19088743(0) win 8192 <mss 1460> 21:20:24.768038 IP shieldsup.grc.com.45349 > win-box.myapt.lan.3389: S 19088743:19088743(0) win 8192 <mss 1460> 21:20:24.770674 IP win-box.myapt.lan.3389 > shieldsup.grc.com.45349: R 721745706:721745706(0) ack 755785049 win 0 Why would Windows be doing this?

    Read the article

  • Creating a network link between 2 very close buildings

    - by Daniel Johnson
    I have a charity who have two adjacent medium-sized modern detached houses (in the UK): the buildings stand next to each other and are less than 5 metres apart. They have DSL connected to a single computer in one of the buildings. They want to add a network with wireless, and want it to work across both buildings. Being a charity, they need to keep costs down. The network would be used for sharing Word documents, e-mail, browsing and Skype calls. My initial thoughts were to connect the buildings with fibre. So: Option 1: Use fibre between the buildings. Sufficient cable and two TP-LINK MC100CM Fast Ethernet Media Converters. Cost ~£80.00. But there is the extra cost and hassle of running the cable down and up the external walls, lifting and relaying paving, and burying it underground. Never having fitted fibre, I'm also a little worried about going up the wall and then bending the cable at 90 degrees to go through the wall and into the building. Option 2: Use two TP-Link TL-WA7510N High Powered Outdoor 5GHz 15dBi wireless antennas to connect the buildings. There is a clear line of sight at first-floor level. Cost ~£100. And much easier to fit than fibre! Is using the TL-WA7510Ns overkill? Is there something more suitable? I had hoped to use some Netgear stuff, e.g. two DGN2200, one in each house, and also use them to provide the wireless link between the buildings. However, in bridge mode wireless client association is not available, and repeater mode with client association only supports WEP security, which isn't strong enough. Is there something similar that would be up to the job? Option 3: Connect the buildings with UTP cable. My concerns here are the risk of electric shock due to a difference of potential between the buildings (or are they so close that this shouldn't be an issue?) and protection from lightning strikes. Is fitting lightning arrestors expensive? And what can be done to mitigate the risk of shock? This all falls outside my area of expertise, so I would really appreciate some advice.

    Read the article

  • System won't boot: Gigabyte HD 7790 1GB OC GPU issue or Corsair VS550 PSU issue?

    - by MGOwen
    Installed a new GPU, and the PC won't boot. Turn it on and: there is no monitor signal at all (tried HDMI and VGA via DVI, on 2 working monitors); the CPU and GPU fans DO spin, but there are no system beeps and no sounds from the drives (they might make a small noise in the first second or so, but there's definitely no OS loading or anything like that); if I hit the "power off" button it turns off immediately (no holding it down for 3 seconds like usual). If I put my old HD 5670 GPU back in, everything works fine. But (plot twist!) the card is not totally dead. My friend put it in his PC, and it works fine (he even played a game for 15 minutes, no issues). He has a Corsair TX850 850W and a Gigabyte MB. So my main theory is: the GPU isn't getting enough power from the PSU. But is it: Bad PSU? Seems unlikely, since it works fine with the other GPU. Also, the PSU is brand new and 550W (single 42A/504W 12V rail). Overkill for this GPU. Corsair is a decent brand, but maybe just mine is faulty? Bad GPU? Could it be drawing more power than it should be, somehow, or something? Supposedly the HD 7790 needs only 21A/75W on the 12V rail, though this one is factory overclocked a bit... but should that triple the power requirement? Something else? Could there be a motherboard incompatibility somehow? Both MB and GPU are less than a year old and PCI Express 3.0 x16. Things I've tried: re-seating the video card; testing the PC with the old GPU (works fine, same PCIe slot); checking AMD's stated amp/watt requirements of a 7790 against my PSU (see above): my PSU can output twice the amps (single rail) and 5x the wattage a 7790 needs. Here are the full specs: Gigabyte HD 7790 1GB OC GPU, Corsair VS550 550W PSU, 4GB RAM, AsRock H61M U3S3 motherboard, i3-2100, 500GB SATA HDD, (2007-ish) blu-ray drive (new), PCI 802.11g card. Edit: a motherboard BIOS update seems to have fixed it. (If anyone has the same problem and it doesn't work, comment here.)

    Read the article

  • Independence Day for Software Components &ndash; Loosening Coupling by Reducing Connascence

    - by Brian Schroer
    Today is Independence Day in the USA, which got me thinking about loosely-coupled “independent” software components. I was reminded of a video I bookmarked quite a while ago of Jim Weirich’s “Grand Unified Theory of Software Design” talk at MountainWest RubyConf 2009. I finally watched that video this morning. I highly recommend it. In the video, Jim talks about software connascence. The dictionary definition of connascence (con-NAY-sense) is: 1. The common birth of two or more at the same time. 2. That which is born or produced with another. 3. The act of growing together. The brief Wikipedia page about Connascent Software Components says that: Two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems. The term was introduced to the software world in Meilir Page-Jones’ 1996 book “What Every Programmer Should Know About Object-Oriented Design”. The middle third of that book is the author’s proposed graphical notation for describing OO designs. UML became the standard about a year later, so a revised version of the book was published in 1999 as “Fundamentals of Object-Oriented Design in UML”. Weirich says that the third part of the book, in which Page-Jones introduces the concept of connascence, “is worth the price of the entire book”. (The price of the entire book, by the way, is not much – I just bought a used copy on Amazon for $1.36, so that was a pretty low-risk investment. I’m looking forward to getting the book and learning about connascence from the original source.) Meanwhile, here’s my summary of Weirich’s summary of Page-Jones’ writings about connascence: The stronger the form of connascence, the more difficult and costly it is to change the elements in the relationship. Some of the connascence types, ordered from weak to strong, are: Connascence of Name. Connascence of name is when multiple components must agree on the name of an entity. If you change the name of a method or property, then you need to change all references to that method or property. Duh. Connascence of name is unavoidable, assuming your objects are actually used. My main takeaway about connascence of name is that it emphasizes the importance of giving things good names so you don’t need to go changing them later. Connascence of Type. Connascence of type is when multiple components must agree on the type of an entity. I assume this is more of a problem for languages without compilers (especially when used in apps without tests). I know it’s an issue with evil JavaScript type coercion. Connascence of Meaning. Connascence of meaning is when multiple components must agree on the meaning of particular values, e.g. that “1” means a normal customer and “2” means a preferred customer. The solution to this is to use constants or enums instead of “magic” strings or numbers, which reduces the coupling by changing the connascence form from “meaning” to “name”. Connascence of Position. Connascence of position is when multiple components must agree on the order of values. This refers to methods with multiple parameters, e.g.: eMailer.Send("[email protected]", "[email protected]", "Your order is complete", "Order completion notification"); The more parameters there are, the stronger the connascence of position is between the component and its callers.
In the example above, it’s not immediately clear when reading the code which email addresses are sender and receiver, and which of the final two strings are subject vs. body. Connascence of position could be improved to connascence of type by replacing the parameter list with a struct or class. This “introduce parameter object” refactoring might be overkill for a method with 2 parameters, but would definitely be an improvement for a method with 10 parameters. This points out two “rules” of connascence: The Rule of Degree: The acceptability of connascence is related to the degree of its occurrence. The Rule of Locality: Stronger forms of connascence are more acceptable if the elements involved are closely related. For example, positional arguments in private methods are less problematic than in public methods. Connascence of Algorithm. Connascence of algorithm is when multiple components must agree on a particular algorithm. Be DRY – Don’t Repeat Yourself. If you have “cloned” code in multiple locations, refactor it into a common function. Those are the “static” forms of connascence. There are also “dynamic” forms, including… Connascence of Execution. Connascence of execution is when the order of execution of multiple components is important. Consumers of your class shouldn’t have to know that they have to call an .Initialize method before it’s safe to call a .DoSomething method. Connascence of Timing. Connascence of timing is when the timing of the execution of multiple components is important. I’ll have to read up on this one when I get the book, but I assume it’s largely about threading. Connascence of Identity. Connascence of identity is when multiple components must reference the same entity. The example Weirich gives is when you have two instances of the “Bob” Employee class and you call the .RaiseSalary method on one and then the .Pay method on the other: does the payment use the updated salary? Again, this is my summary of a summary, so please be forgiving if I misunderstood anything. Once I get/read the book, I’ll make corrections if necessary and share any other useful information I might learn. See Also: Gregory Brown: Ruby Best Practices Issue #24: Connascence as a Software Design Metric (That link is failing at the time I write this, so I had to go to the Google cache of the page.)
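    To make the “introduce parameter object” step concrete, here is a small C# sketch of that refactoring applied to the Send example above; the EmailMessage type and its property names are my own illustration, not Page-Jones’ or Weirich’s.

        // Before: connascence of position (callers must remember the argument order).
        //   eMailer.Send(fromAddress, toAddress, body, subject);

        // After: a parameter object reduces it to connascence of name (and type).
        public class EmailMessage
        {
            public string From { get; set; }
            public string To { get; set; }
            public string Subject { get; set; }
            public string Body { get; set; }
        }

        public class EMailer
        {
            public void Send(EmailMessage message)
            {
                // ... hand the message off to the mail infrastructure ...
            }
        }

        // Usage: every value is named at the call site, so the order no longer matters.
        //   eMailer.Send(new EmailMessage
        //   {
        //       From = "sender address",
        //       To = "recipient address",
        //       Subject = "Order completion notification",
        //       Body = "Your order is complete"
        //   });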

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson: Search, so you may want to go read that first. They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog. But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet, install your modeling software on their desktops, and take the hours or days required to train them up on how to use the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler. Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this: [screenshot: a table in Oracle SQL Developer Data Modeler] What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill or maybe the column names themselves aren’t up to snuff. What I am going to do is now search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS. [screenshot: search, filter, then Report] With the filter set to ‘Columns,’ if I do a report I’ll only be getting the columns that resolve to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button: [screenshot: XLS or XLSX, either format is just fine] I want to decide how the Column data is exported to Excel though, so I’m going to create a report template that I can use going forward. So click the ‘Manage’ button and set up a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated, and get to work. [screenshot: now let the Excel junkies do their stuff] Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude: [screenshot: be kind, please rew…use comments] Save the file, email it back to your modeler. Update the model from Excel: that’s right, it’s a right mouse click from your model in the tree. If everything goes right, you’ll see a nice confirmation message: [screenshot: it’s alive!]
Another to-do item on tap – making this dialog more informative. We’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now. Voila! Why are we doing this again? The goal is to reduce the number of round-trips between the modeler and the business process owner. The latter is used to working with Excel – why not allow them to mark up their changes in the tool they already know? This is an early adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?

    Read the article

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation control has been present in Hadoop since 0.20, a lot of people haven't discovered or utilized it in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things: organizations are still determining what their dataflow and analysis workloads will comprise; small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options; and the pluggable scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation. However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices: the FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis; the CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis; or writing your own implementation of the abstract class org.apache.hadoop.mapred.job.TaskScheduler, which is an option but usually overkill. If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can do user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require. To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf: <property> <name>mapred.jobtracker.taskScheduler</name> <value>org.apache.hadoop.mapred.FairScheduler</value> </property> <property> <name>mapred.fairscheduler.allocation.file</name> <value>/path/to/allocations.xml</value> </property> <property> <name>mapred.fairscheduler.poolnameproperty</name> <value>pool.name</value> </property> <property> <name>pool.name</name> <value>${user.name}</value> </property> What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this: <?xml version="1.0"?> <allocations> <pool name="dan"> <minMaps>5</minMaps> <minReduces>5</minReduces> <maxMaps>25</maxMaps> <maxReduces>25</maxReduces> <minSharePreemptionTimeout>300</minSharePreemptionTimeout> </pool> <user name="dan"> <maxRunningJobs>6</maxRunningJobs> </user> <userMaxJobsDefault>3</userMaxJobsDefault> <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout> </allocations> In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the number of running jobs. Now, if I run hive or pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker.
There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.

    Read the article

  • RTS Voxel Engine using LWJGL - Textures glitching

    - by Dieter Hubau
    I'm currently working on an RTS game engine using voxels. I have implemented a basic chunk manager using an Octree of Octrees which contains my voxels (simple square blocks, as in Minecraft). I'm using a Voronoi-based terrain generation to get a simplistic yet relatively realistic heightmap. I have no problem showing a 256*256*256 grid of voxels with a decent framerate (250), because of frustum culling, face culling and only rendering visible blocks. For example, in a random voxel grid of 256*256*256 I generally only render 100k-120k faces, not counting frustum culling. Frustum culling is only called every 100ms, since calling it every frame seemed a bit overkill. Now I have reached the stage of texturing and I'm experiencing some problems: Some experienced people might already see the problem, but if we zoom in, you can see the glitches more clearly: All the seams between my blocks are glitching and kind of 'overlapping' or something. It's much more visible when you're moving around. I'm using a single, simple texture map to draw on my cubes, where each texture is 16*16 pixels big: I have added black edges around the textures to get a kind of cellshaded look, I think it's cool. The texture map has 256 textures of each 16*16 pixels, meaning the total size of my texture map is 256*256 pixels. The code to update the ChunkManager: public void update(ChunkManager chunkManager) { for (Octree<Cube> chunk : chunks) { if (chunk.getId() < 0) { // generate an id for the chunk to be able to call it later chunk.setId(glGenLists(1)); } glNewList(chunk.getId(), GL_COMPILE); glBegin(GL_QUADS); faces += renderChunk(chunk); glEnd(); glEndList(); } } Where my renderChunk method is: private int renderChunk(Octree<Cube> node) { // keep track of the number of visible faces in this chunk int faces = 0; if (!node.isEmpty()) { if (node.isLeaf()) { faces += renderItem(node); } List<Octree<Cube>> children = node.getChildren(); if (children != null && !children.isEmpty()) { for (Octree<Cube> child : children) { faces += renderChunk(child); } } return faces; } Where my renderItem method is the following: private int renderItem(Octree<Cube> node) { Cube cube = node.getItem(-1, -1, -1); int faces = 0; float x = node.getPosition().x; float y = node.getPosition().y; float z = node.getPosition().z; float size = cube.getSize(); Vector3f point1 = new Vector3f(-size + x, -size + y, size + z); Vector3f point2 = new Vector3f(-size + x, size + y, size + z); Vector3f point3 = new Vector3f(size + x, size + y, size + z); Vector3f point4 = new Vector3f(size + x, -size + y, size + z); Vector3f point5 = new Vector3f(-size + x, -size + y, -size + z); Vector3f point6 = new Vector3f(-size + x, size + y, -size + z); Vector3f point7 = new Vector3f(size + x, size + y, -size + z); Vector3f point8 = new Vector3f(size + x, -size + y, -size + z); TextureCoordinates tc = textureManager.getTextureCoordinates(cube.getCubeType()); // front face if (cube.isVisible(CubeSide.FRONT)) { faces++; glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point1.x, point1.y, point1.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point4.x, point4.y, point4.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point3.x, point3.y, point3.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point2.x, point2.y, point2.z); } // back face if (cube.isVisible(CubeSide.BACK)) { faces++; 
glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point5.x, point5.y, point5.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point6.x, point6.y, point6.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point7.x, point7.y, point7.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point8.x, point8.y, point8.z); } // left face if (cube.isVisible(CubeSide.SIDE_LEFT)) { faces++; glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point5.x, point5.y, point5.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v]); glVertex3f(point1.x, point1.y, point1.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u + 1], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point2.x, point2.y, point2.z); glTexCoord2f(TEXTURE_U_COORDINATES[tc.u], TEXTURE_V_COORDINATES[tc.v + 1]); glVertex3f(point6.x, point6.y, point6.z); } // ETC ETC return faces; } When all this is done, I simply render my lists every frame, like this: public void render(ChunkManager chunkManager) { glBindTexture(GL_TEXTURE_2D, textureManager.getCubeTextureId()); // load all chunks from the tree List<Octree<Cube>> chunks = chunkManager.getTree().getAllItems(); for (Octree<Cube> chunk : chunks) { if (frustum.cubeInFrustum(chunk.getPosition(), chunk.getSize() / 2)) { glCallList(chunk.getId()); } } } I don't know if anyone is willing to go through all of this code or maybe you can spot the problem right away, but that is basically the problem, and I can't find a solution :-) Thanks for reading and any help is appreciated!

    Read the article

  • Dynamically switching the theme in Orchard

    - by Bertrand Le Roy
    It may sound a little puzzling at first, but in Orchard CMS, more than one theme can be active at any given time. The reason for that is that we have an extensibility point that allows a module (or a theme) to participate in the choice of the theme to use, for each request. The motivation for building the theme engine this way was to enable developers to switch themes based on arbitrary criteria, such as user preferences or the user agent (if you want to serve a mobile theme for phones for example). The choice is made between the active themes, which is why there is a difference between the default theme and the active themes. In order to have a say in the choice of the theme, all you have to do is implement IThemeSelector. That interface is quite simple as it only has one method, GetTheme, that takes the current RequestContext and returns a ThemeSelectorResult or null if the implementation of the interface does not want to participate in the current request (we'll see an example in a moment). ThemeSelectorResult itself is just a ThemeName string property and an integer Priority. We're using a priority so that an arbitrary number of implementations of IThemeSelector can contribute to the choice of a theme. If you look for existing implementations of the interface in Orchard, you'll find four: AdminThemeSelector: selects the TheAdmin theme with a very high priority (100) if the current request is for a page that is part of the admin. Otherwise, null is returned, which enables other implementations to choose the theme. PreviewThemeSelector: selects the preview theme if there is one, with a high priority (90), and null otherwise. This enables administrators to view the site under a different theme while everybody else continues to see the current default theme. SiteThemeSelector: this is the implementation that is doing what you expect most of the time, which is to get the current theme from site settings and set it with a priority of –5. SafeModeThemeSelector: this is the fallback implementation, which should almost never win. It sets the theme as the safe mode theme, which has no style and just uses the default templates for everything. The priority is very low (-100). While this extensibility mechanism is great to have, I wanted to bring that level of choice into the hands of the site administrator rather than just developers. In order to achieve that, I built the Vandelay Theme Picker module. The module provides administration UI to create rules for theme selection. It provides its own extensibility point (the IThemeSelectionRule interface) and one implementation of a rule: UserAgentThemeSelectorRule. This rule gets the current user agent from the context and tries to match it with a regular expression that the administrator can configure in the admin UI. You can for example configure a rule with a regular expression that matches IE6 and serve a different subtheme where the stylesheet has been tweaked for such an antique browser. Another possible configuration is to detect mobile devices from their agent string and serve the mobile theme. All those operations can be done with this module entirely from the admin UI, without writing a line of code. The module also offers the administrator the opportunity to inject a link into the front-end in a specific zone and with a specific position that enables the user to switch to the default theme if he wishes to. This is especially useful for sites that use a mobile theme but still want to allow users to use the full desktop site. 
While the module is nice and flexible, it may be overkill. On my own personal blog, I have only two active themes: the desktop theme and the mobile theme. I'm fine with going into code to change the criteria on which to switch the theme, so I'm not using my own Theme Picker module. Instead, I made the mobile theme a theme with code (in other words there is a csproj file in the theme). The project includes a single C# file, my MobileThemeSelector for which the code is the following: public class MobileThemeSelector : IThemeSelector { private static readonly Regex _Msie678 = new Regex(@"^Mozilla\/4\.0 \(compatible; MSIE [678]" + @"\.0; Windows NT \d\.\d(.*)\)$", RegexOptions.IgnoreCase); private ThemeSelectorResult _requestCache; private bool _requestCached; public ThemeSelectorResult GetTheme(RequestContext context) { if (_requestCached) return _requestCache; _requestCached = true; var userAgent = context.HttpContext.Request.UserAgent; if (userAgent.IndexOf("phone", StringComparison.OrdinalIgnoreCase) != -1 || _Msie678.IsMatch(userAgent) || userAgent.IndexOf("windows live writer", StringComparison.OrdinalIgnoreCase) != -1) { _requestCache = new ThemeSelectorResult { Priority = 10, ThemeName = "VuLuMobile" }; } return _requestCache; } } The theme selector selects the current theme for Internet Explorer versions 6 to 8, for phones, and for Windows Live Writer (so that the theme that is used when I write posts is as simple as possible). What's interesting here is that it's the theme that selects itself here, based on its own criteria. This should give you a good panorama of what's possible in terms of dynamic theme selection in Orchard. I hope you find some fun uses for it. As usual, I can't wait to see what you're going to come up with…
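    Since the post spells out the IThemeSelector contract in full, here is one more hedged sketch of the kind of selector you could drop in: one that honours a query string parameter so you can preview any active theme while testing. The "theme" parameter name and the priority value are arbitrary choices of mine, not part of Orchard or of the original post, and namespaces are omitted.

        // Hypothetical selector: append ?theme=SomeActiveTheme to a URL to preview that theme.
        public class QueryStringThemeSelector : IThemeSelector
        {
            public ThemeSelectorResult GetTheme(RequestContext context)
            {
                var requested = context.HttpContext.Request.QueryString["theme"];
                if (string.IsNullOrEmpty(requested))
                    return null; // stay out of the way and let the other selectors decide

                return new ThemeSelectorResult
                {
                    // Above SiteThemeSelector (-5), below AdminThemeSelector (100).
                    Priority = 50,
                    ThemeName = requested
                };
            }
        }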

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below. I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out of memory errors. An abridged description of the problem domain is as follows: The application takes in a “dataset” that consists of numerous text files containing related data. An individual text file within the dataset usually contains approx 20 “headers” that contain metadata about the data it contains. It also contains a large tab-delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is highly variable, from 2 to 256+ columns. The original application was written to allow users to load a dataset, map certain columns of each of the files (basically indicating key information on the files to show how they are related) and identify a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset. The memory problem: Given that the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from files into memory and then perform the following: perform all the validation whilst the data was in memory; relate it using an object model; start a DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns), then apply the Id of the newly written row to all related data; once all related data has been updated with the key information to which it relates, write these records using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows often run into the tens of thousands. Once all the data is written without error (which can take several minutes for large datasets), commit the transaction. The problem now comes from the fact we are now receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We’ve started seeing datasets of around 100 megs coming in and I expect it is only going to get bigger from here on in. With files of this size, data can’t even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line, and am not exactly decided on how to handle the import and transactions. Potential improvements: I’ve wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. This would certainly increase the storage required though, especially in an EAV design. Do you think this is a reasonable thing to try, or should I simply persist with identity fields (natural keys can’t be trusted to be unique across all submitters)?
Another potential improvement is the use of staging tables to get data into the database, only performing the transaction to copy data from the staging area to the actual destination tables. Questions: For systems like this that import large quantities of data, how do you go about keeping transactions small? I’ve kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution? The tab-delimited data section is read into a DataTable to be viewed in a grid. I don’t need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight? Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above? Thanks for your kind attention.
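    For illustration, a minimal C# sketch of the line-by-line approach described in the question: stream each file so that only the current row (plus a bounded batch) is held in memory, validating as you go. ValidateRow and WriteBatch are hypothetical stand-ins for the application's own validation and SqlBulkCopy logic, and the batch size of 5000 is an arbitrary choice.

        using System.Collections.Generic;
        using System.IO;

        static class StreamingImportSketch
        {
            public static void ImportFile(string filePath)
            {
                var batch = new List<string[]>();
                using (var reader = new StreamReader(filePath))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        var fields = line.Split('\t');
                        ValidateRow(fields);          // enforce per-row rules as you go
                        batch.Add(fields);
                        if (batch.Count == 5000)      // flush in bounded chunks
                        {
                            WriteBatch(batch);        // e.g. SqlBulkCopy into a staging table
                            batch.Clear();
                        }
                    }
                }
                if (batch.Count > 0) WriteBatch(batch);
            }

            static void ValidateRow(string[] fields) { /* dataset rules go here */ }
            static void WriteBatch(List<string[]> batch) { /* bulk copy to staging goes here */ }
        }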

    Read the article

  • Silverlight Reporting Application Part 3.5 - Prism Background and WCF RIA [Series Intermission]

    Taking a step back before I dive into the details and full-on coding fun, I wanted to once again respond to a comment on my last post to clear up some things in regard to how I'm setting up my project and some of the choices I've made. Aka, thanks Ben. :) Prism Project Setup For starters, I'm not the ideal use case for a Prism application. In most cases where you've got a one-man team, Prism can be overkill, as it is more intended for large teams who are geographically dispersed or for applications on a larger scale than my Recruiting application, where you'll greatly benefit from modularity, delayed loading of xaps, etc. What Prism offers, though, is a manner for handling UI, commands, and events with the idea that, through a modular approach in which no parts really need to know about one another, I can update this application bit by bit as hiring needs change or requirements differ between offices without having to worry that changing something in the Jobs module will break something in, say, the Scheduling module. All that being said, here's a look at how our project breakdown for Recruit (MVVM/Prism implementation) looks: This could be a little misleading though, as each of those modules is actually another project in the overall Recruit solution. As far as what the projects actually are, that looks a bit like this: Recruiting Solution Recruit (Shell up there) - Main Silverlight Application .Web - Default .Web application to host the Silverlight app Infrastructure - Silverlight Class Library Project Modules - Silverlight Class Library Projects Infrastructure & Modules The Infrastructure project is probably something you'll see to some degree in any composite application. In this application, it is going to contain custom commands (you'll see the joy of these in a post or two down the road), events, helper classes, and any custom classes I need to share between different modules. Think of this as a handy little crossroad between any parts of your application. Modules on the other hand are the bread and butter of this application. Besides the shell, which holds the UI skeleton, and the infrastructure, which holds all those shared goodies, the modules are self-contained bundles of functionality to handle different concerns. In my scenario, I need a way to look up and edit Jobs, Applicants, and Schedule interviews, a Notification module to handle telling the user when different things are happening (i.e., loading from database), and a Menu to control interaction and moving between different views. All modules are going to follow this pattern: The module class will implement IModule and handle initialization and loading the correct view into the correct region, whereas the Views and ViewModels folders will contain paired Silverlight user controls and ViewModel class backings. WCF RIA Services Since we've got all the projects in a single solution, we did not have to go the route of creating a WCF RIA Services Class Library. Every module has its WCF RIA link back to the main .Web project, so the single Linq-2-SQL (yes, I said Linq-2-SQL, but I'll soon be switching to OpenAccess due to the new visual designer) context I'm using there works nicely with the scope of my project. If I were going for completely separating this project out and doing different, dynamically loaded elements, I'd probably go for the separate class library. Hope that clears that up.
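    For anyone who hasn't worked with Prism before, here is a minimal hedged sketch of what that module pattern tends to look like in code. The module and view names are illustrative only and are not taken from the Recruit solution; "MainRegion" is simply the shell region the view gets loaded into, and the exact namespaces vary by Prism version (in Prism 4 for Silverlight, IModule and IRegionManager live in Microsoft.Practices.Prism.Modularity and Microsoft.Practices.Prism.Regions).

        // Placeholder for the module's main view (normally a XAML UserControl in the Views folder).
        public class JobsView : System.Windows.Controls.UserControl { }

        // Hypothetical Prism module: on initialization, register the module's main view into a shell region.
        public class JobsModule : IModule
        {
            private readonly IRegionManager _regionManager;

            public JobsModule(IRegionManager regionManager)
            {
                _regionManager = regionManager; // injected by the container (e.g. Unity)
            }

            public void Initialize()
            {
                // Load this module's view into the shell's content region ("MainRegion" is the
                // ContentControl the post describes as the main navigation window).
                _regionManager.RegisterViewWithRegion("MainRegion", typeof(JobsView));
            }
        }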
In the future though, I will be using that in a project that I've got in the "when I've got enough time to work on this" pipeline, so we'll get into that eventually, and hopefully when WCF RIA is in full release! Why Not Use the Silverlight Navigation/Business Template? The short answer: I'm a creature of habit, and having used Silverlight for a few years now, I'm used to doing lots of things manually. :) Plus, starting with a blank slate of a project I'm able to set up things exactly as I want them to be. In this case, rather than the navigation frame we would see in one of the templates, the MainRegion/ContentControl is working as our main navigation window. In many cases I will use the Silverlight navigation template to start things off, however in this case I did not need those features so I opted out of using it. Next time, when I actually hit post #4, we're going to get into the modules and start getting functionality into this application. Next week is also release week for the Q1 2010 release, so be sure to check out our annual Webinar Week (I might be biased, but Wednesday is my favorite out of the group).

    Read the article

  • WIF-less claim extraction from ACS: SWT

    - by Elton Stoneman
    WIF with SAML is solid and flexible, but unless you need the power, it can be overkill for simple claim assertion, and in the REST world WIF doesn’t have support for the latest token formats.  Simple Web Token (SWT) may not be around forever, but while it's here it's a nice easy format which you can manipulate in .NET without having to go down the WIF route. Assuming you have set up a Relying Party in ACS, specifying SWT as the token format: When ACS redirects to your login page, it will POST the SWT in the first form variable. It comes through in the BinarySecurityToken element of a RequestSecurityTokenResponse XML payload , the SWT type is specified with a TokenType of http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0 : <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">   <t:Lifetime>     <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:31:18.655Z</wsu:Created>     <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:11:18.655Z</wsu:Expires>   </t:Lifetime>   <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">     <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">       <Address>http://localhost/x.y.z</Address>     </EndpointReference>   </wsp:AppliesTo>   <t:RequestedSecurityToken>     <wsse:BinarySecurityToken wsu:Id="uuid:fc8d3332-d501-4bb0-84ba-d31aa95a1a6c" ValueType="http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> [ base64string ] </wsse:BinarySecurityToken>   </t:RequestedSecurityToken>   <t:TokenType>http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0</t:TokenType>   <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>   <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType> </t:RequestSecurityTokenResponse> Reading the SWT is as simple as base-64 decoding, then URL-decoding the element value:     var wrappedToken = XDocument.Parse(HttpContext.Current.Request.Form[1]);     var binaryToken = wrappedToken.Root.Descendants("{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}BinarySecurityToken").First();     var tokenBytes = Convert.FromBase64String(binaryToken.Value);     var token = Encoding.UTF8.GetString(tokenBytes);     var tokenType = wrappedToken.Root.Descendants("{http://schemas.xmlsoap.org/ws/2005/02/trust}TokenType").First().Value; The decoded token contains the claims as key/value pairs, along with the issuer, audience (ACS realm), expiry date and an HMAC hash, which are in query string format. 
Separate them on the ampersand, and you can write out the claim values in your logged-in page:     var decoded = HttpUtility.UrlDecode(token);     foreach (var part in decoded.Split('&'))     {         Response.Write("<pre>" + part + "</pre><br/>");     } - which will produce something like this: http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant=2012-08-31T06:57:01.855Z http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod=http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname=XYZ http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected] http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected] http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider=http://fs.svc.xyz.com/adfs/services/trust Audience=http://localhost/x.y.z ExpiresOn=1346402225 Issuer=https://x-y-z.accesscontrol.windows.net/ HMACSHA256=oDCeEDDAWEC8x+yBnTaCLnzp4L6jI0Z/xNK95PdZTts= The HMAC hash lets you validate the token to ensure it hasn’t been tampered with. You'll need the token signing key from ACS, then you can re-sign the token and compare hashes. There's a full implementation of an SWT parser and validator here: How To Request SWT Token From ACS And How To Validate It At The REST WCF Service Hosted In Windows Azure, and a cut-down claim inspector on my github code gallery: ACS Claim Inspector. Interestingly, ACS lets you have a value for your logged-in page which has no relation to the realm for authentication, so you can put this code into a generic claim inspector page, and set that to be your logged-in page for any relying party where you want to check what's being sent through. Particularly handy with ADFS, when you're modifying the claims provided, and want to quickly see the results.
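    The validation step the post describes (re-signing the token with the ACS key and comparing hashes) can be sketched in a few lines. This is a hedged outline rather than the full implementation linked above: the method name is mine, token is the raw SWT string decoded from the BinarySecurityToken element (before URL-decoding), and signingKeyBase64 is the token signing key copied from the ACS portal.

        // Minimal sketch: recompute the HMAC over everything before the HMACSHA256 parameter
        // and compare it with the signature carried in the token.
        // Assumes using System; System.Text; System.Security.Cryptography; System.Web.
        static bool IsSwtSignatureValid(string token, string signingKeyBase64)
        {
            const string hmacParameter = "&HMACSHA256=";
            var index = token.LastIndexOf(hmacParameter, StringComparison.Ordinal);
            if (index < 0) return false;

            var signedContent = token.Substring(0, index);
            var declaredSignature = HttpUtility.UrlDecode(token.Substring(index + hmacParameter.Length));

            using (var hmac = new HMACSHA256(Convert.FromBase64String(signingKeyBase64)))
            {
                var computed = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(signedContent)));
                return computed == declaredSignature;
            }
        }

    Even with a matching signature, the ExpiresOn value (seconds since the Unix epoch) should still be checked against the current time, and the Issuer and Audience values against your expected ACS namespace and realm.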

    Read the article

  • Easiest, most fun way to program 2D games? Flash? XNA? Some other engine?

    - by Maxi
    Hi, this is a post detailing my search for the most enjoyable way for a hobbyist game programmer to sweeten his free time with making a game. My requirements: I looked at Flash first, I made a couple of small games but I'm doubtful of the performance. I would like to make a fairly large strategy game, with several hundred units fighting simultaneously, explosions and animations included. Also zoomable maps. I saw that Adobe has a new 3D API for Flash, but I don't know if that improves 2D performance aswell, I couldn't find anything related to that question on their MAX10 sessions. Would you say that Flash is a good technology for making large 2D games easily? I really like Actionscript, and I love how easy everything is in Flash. There are several engines available which make it even easier. I just do this for fun, and it would be even better if there were proper animation/particle editors available and if the engine I were to use, would be available for multiple platforms. (so more people can play my game once finished). I'd like to have it available on many mobile platforms aswell. (because I love touch input for some reason) I do know the XNA framework pretty well, but there are no good engines available for it, and it will only run on Windows, which is a huge turn off. Even bigger is, that you need to install the XNA redistributable each time you want to give the game to someone. If I use XNA, I would have to make all the tools myself, and I'd probably have to make them with WPF. (I'd love to make tools with Adobe AIR, but unfortunately the API's for image manipulation etc. are far worse in Flash, than they are in XNA/WPF.) Now, I'm aware that I could make my own engine that supports each of those platforms, but quite frankly, that would be too much work plowing through APIs. After all, I want to make a game, not an engine. So the question becomes: Is there maybe a cross platform (free or free to develop?) engine available that I could use for 2D development? I prefer: C#, Actionscript. I don't mind using c++ if the toolset is above average, but I highly doubt that there is something out there like that. Please prove me wrong :) So summary: I'd like to use Flash, but I don't know if it scales well enough. I'm not a scripter, I want some real APIs that I can work with inside a proper IDE. Just for information, I looked at several alternatives, I'm actually looking for a long time already. You'd help me a lot to make a decision finally. Feature-wise the Flatredball engine would be ideal. But I tried their tools, and quite frankly, they are horrible. Absolutely unusable, I'd need to make my own for sure. I didn't look at their API, but if their tools are so bad, I'm not inclined to look further. Unity3D. This one is quite nice, but I really don't need 3D, and it is quite ...a lot of work to learn. I also don't like that it is so expensive to use for different platforms and that I can only code for it through scripting. You have to buy each platform separately. The editor usability is average, the product overall is good enough for most purposes, but learning it myself would be overkill. Shiva 3D. It looks good enough, but again: I don't really need 3D. The editor usability is a little worse than Unity3D in my opinion and it wasn't clear to me how to start programming. I think it requires C++ for coding, so that's a negative too. I want to have fun, and c# is fun ;) SDL. Quite frankly, I'd still need to port to all those different SDL implementations. 
And I don't like OpenGL style programming, it's just plain ugly. And it needs c++, I know that there might be some wrappers available, but I don't like to use wrappers, because... Irrlicht. A lot of features, but support seems to be low and it is aimed at enthusiasts. C# bindings get dropped repeatedly. I'm not an engine enthusiast, I just want to make a game. I don't see this happening with Irrlicht. Ogre3D. Way too much work, it's just a graphics engine. Also no multiple platform support and c++. Torque2D. Costs something to use, and I didn't hear a lot of good things about support and documentation. Also costs extra for each platform.

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27  | Next Page >