Search Results

Search found 16639 results on 666 pages for 'task engine'.


  • What are my choices for server side sandboxed scripting?

    - by alfa64
    I'm building a public website where users share data and scripts that run over that data. The scripts are run server-side in some sort of sandbox, without other interaction, in this cycle: my Perl program reads a user-made script from a database, adds the data to be processed into the script (i.e. a JSON document), then calls the interpreter; the interpreter returns the response (a JSON document or plain text), and I save it to the database with my Perl script. The script should be able to have some access to built-in functions added to the scripting language by myself, but nothing more. So I've stumbled upon node.js as a JavaScript interpreter, and an hour or so ago on Google's V8 (does V8 make sense for this kind of thing?). CoffeeScript also came to my mind, since it looks nice and it's still JavaScript. I think JavaScript is widespread enough and more "sandboxable", since it doesn't have OS calls or anything remotely insecure (I think). By the way, I'm writing the system in Perl, with PHP for the front end. To improve the question: I'm choosing JavaScript because I think it is secure and simple enough to implement with node.js, but what other alternatives are there for achieving this kind of task? Lua? Python? I just can't find information on how to run a sandboxed interpreter in a proper way.
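    A minimal sketch of the node.js route the asker is leaning toward, using Node's built-in vm module. The input/output convention and the function names here are illustrative assumptions, and note that vm isolates globals but is not a hard security boundary on its own; real deployments add OS-level sandboxing (containers, rlimits) on top:

      // Run an untrusted user script against a whitelisted sandbox (TypeScript).
      import vm from "node:vm";

      function runUserScript(source: string, inputDoc: unknown): unknown {
        // Expose only the built-ins we choose; nothing else leaks in.
        const sandbox = {
          input: inputDoc,          // the JSON document to process
          output: null as unknown,  // the script writes its result here
          log: (msg: string) => console.log("[user script] " + msg),
        };
        const context = vm.createContext(sandbox);
        // The timeout guards against runaway loops in user code.
        vm.runInContext(source, context, { timeout: 1000 });
        return sandbox.output;
      }

      // Example: a user script that doubles every number in the input array.
      console.log(runUserScript("output = input.map(n => n * 2);", [1, 2, 3])); // [ 2, 4, 6 ]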

    Read the article

  • Oracle NoSQL Database: Cleaner Performance

    - by Charles Lamb
    In an earlier post I noted that Berkeley DB Java Edition cleaner performance had improved significantly in release 5.x. From an Oracle NoSQL Database point of view, this is important because Berkeley DB Java Edition is the core storage engine for Oracle NoSQL Database. Many contemporary NoSQL databases utilize log-based (i.e. append-only) storage systems, and it is well understood that these architectures also require a "cleaning" or "compaction" mechanism (effectively a garbage collector) to free up unused space. Ten years ago, when we set out to write a new Berkeley DB storage architecture for the BDB Java Edition ("JE"), we knew that the corresponding compaction mechanism would take years to perfect. "Cleaning", or GC, is a hard problem to solve, and it has taken all of those years of experience, bug fixes, tuning exercises, user deployment, and user feedback to bring it to the mature point it is at today. Reports like Vinoth Chandar's, where he observes a 20x improvement, validate the maturity of JE's cleaner. Cleaner performance has a direct impact on predictability and throughput in Oracle NoSQL Database. A cleaner that is too aggressive will consume too many resources and negatively affect system throughput. A cleaner that is not aggressive enough will allow the disk storage to become inefficient over time. It has to work well out of the box, and it needs to be configurable so that customers can tune it for their specific workloads and requirements. The JE cleaner has been field tested in production for many years managing instances with hundreds of GBs to TBs of data. The maturity of the cleaner and the entire underlying JE storage system is one of the key advantages that Oracle NoSQL Database brings to the table -- we haven't had to reinvent the wheel.
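    For readers new to log-structured storage, here is a deliberately generic sketch in TypeScript of what a cleaner does: copy the still-live records out of the oldest log segment to the tail, then reclaim the segment. This is an illustration only, not JE's actual cleaner, which adds utilization tracking, checkpointing, and configurable aggressiveness:

      // Generic append-only log with a naive cleaner (illustrative only).
      type LogRecord = { key: string; value: string };

      class AppendOnlyLog {
        segments: LogRecord[][] = [[]];
        latest = new Map<string, string>(); // key -> current value

        put(key: string, value: string): void {
          // Writes always go to the tail segment; older versions become garbage.
          this.segments[this.segments.length - 1].push({ key, value });
          this.latest.set(key, value);
        }

        cleanOldestSegment(): void {
          // Rewrite live records at the tail, then drop the whole segment.
          const oldest = this.segments.shift();
          if (this.segments.length === 0) this.segments.push([]);
          if (!oldest) return;
          for (const rec of oldest) {
            if (this.latest.get(rec.key) === rec.value) {
              this.put(rec.key, rec.value); // still live: carry it forward
            }
          }
        }
      }

    Calling cleanOldestSegment too often burns I/O rewriting live data; calling it too rarely leaves dead records on disk, which is exactly the aggressiveness trade-off described above.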

    Read the article

  • How is the impact of a requirement change on existing code determined?

    - by MainMa
    Hi, how do companies working on large projects evaluate the impact of a single modification on an existing code base? Since my question is probably not very clear, here's an example: let's take a sample business application which deals with tasks. In the database, each task has a state, 0 being "Pending", ..., 5 being "Finished". A new requirement adds a new state, between the 2nd and 3rd ones. It means that: a constraint on the values 1 - 5 in the database must be changed; the business layer and code contracts must be changed to add the new state; the data access layer must be changed to take into account that, for example, the state StateReady is now 6 instead of 5, etc.; the application must implement the new state visually, add new controls for it, new localized strings for tool-tips, etc. When an application was written recently by one developer, it's more or less easy to predict every change to do. On the other hand, when an application has been written over years by many people, no single person can anticipate every change immediately, without any investigation. Since this situation (such changes in requirements) is very frequent, I imagine there are already some clever techniques and ways to predict the impact. Are there any? Do you know any books which deal with this subject? Note: my question is not related to the How do you deal with changing requirements? question. In fact, I'm not interested in evaluating the cost of a change, but rather in the way to predict the parts of an application which will be affected by the change. What those changes will be and how difficult they really are doesn't matter for my question.
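    To make the renumbering problem in the example concrete, here is a small TypeScript sketch (the state names are hypothetical, not from the question's application) showing why implicitly numbered states shift when one is inserted, and how pinning explicit values keeps stored integers stable:

      // Implicit numbering: inserting NewState shifts everything after it,
      // so Finished silently changes value and stored rows go stale.
      enum StateBefore { Pending, Assigned, InProgress, Review, Finished } // Finished = 4
      enum StateAfter { Pending, Assigned, NewState, InProgress, Review, Finished } // Finished = 5

      // Explicit values decouple stored integers from declaration order; a new
      // state gets a fresh number and existing database rows stay valid.
      enum TaskState {
        Pending = 0,
        Assigned = 1,
        InProgress = 2,
        Review = 3,
        Finished = 4,
        NewState = 5, // logically "between" other states, but numbered at the end
      }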

    Read the article

  • How to make Google recognize language for a multilingual website?

    - by Julien Fouilhé
    A few weeks ago, I implemented translation functionality for the website of my company. The website is now available in French and English, and I looked on the internet for the best way to do this without losing any ranking and while keeping our pages on Google. Here is what I did: I set a response header: Content-Language:en and Content-Language:fr. My URLs are formatted as http://www.website.com/en/... and http://www.website.com/fr/... My html tag is set with a lang attribute: <html lang="en"> and <html lang="fr">. There is a <link rel="alternate" hreflang="en" href="EnglishPageUrl"> on French pages and a <link rel="alternate" hreflang="fr" href="FrenchPageUrl"> on English pages. But Google keeps returning some English pages when I'm doing a search on the French engine, given that the website was at first only available in English. Is that normal? Do I still have to wait? It has been almost one month now; I thought it would be okay by then... Thank you.

    Read the article

  • Database Driven Web Application, C# Front-End and F# Back-End meaning

    - by user1473053
    Hi, I am an intern working with ASP.NET. My current task is to make a website which will incorporate some jQuery viewing features. This project, it seems to me, will primarily deal with reading data from a database and making graphs out of the data. This will require me to make custom queries based on whatever the client is looking at. I think it is going to be what this guy calls an Ad Hoc Query tool. My plan is to make it a database-driven website, so I can utilize the jQuery dynamic viewing capabilities. I stumbled upon the functional programming paradigm and found F#. I read that because of its functional programming paradigm, it is a good language for asynchronous functions. I read about how you can use this with LINQ to SQL and how easy it is to make queries without actually embedding the query language. I understand the concept of the MVC design pattern, but I don't understand what they mean about C# being the front-end and F# being the back-end. Can someone clarify this for me? Also, what are your thoughts about doing this project in this way? Any comments and thoughts are greatly appreciated. I feel as if learning F# will be a great learning experience for me. My guess is that the F# back-end is the part that controls the calls to the database, so F# is possibly the model part of the design pattern and C# is the controller, and HTML, JavaScript and jQuery will be my view. Could someone clarify?

    Read the article

  • The Home Stretch: NetBeans IDE 7.1 Release Candidate

    - by TinuA
    The first release candidate build of NetBeans IDE 7.1 is live and available for download, which means the big release (GA) is expected any day now. NetBeans IDE 7.1 delivers support for JavaFX 2.0, enabling the full compile, debug and profile development cycle for JavaFX 2.0 applications and keeping developers in sync with the latest from the Java platform. Beyond JavaFX support, 7.1 also provides significant Swing GUI Builder enhancements, CSS3 support, and visual debugging tools for JavaFX and Swing user interfaces. And Git, a much-anticipated feature, has been integrated into the IDE. "The entire NetBeans team is tremendously excited about this release, which provides developers with more state-of-the-art tools for building front-end clients," says NetBeans Engineering Director John Jullion-Ceccarelli. "Whether you are doing JavaFX, HTML5, Swing, or JSF, NetBeans 7.1 will let you quickly and easily develop great-looking and full-featured clients for your Java or PHP-based applications." But there's one more task to check off before the general availability: the NetBeans team has launched a Community Acceptance Survey to get user feedback about the release candidate. Download the RC build, test it, and take the survey to let the team know if NetBeans IDE 7.1 is ready for its debut!

    Read the article

  • Advice on choosing a book to read

    - by Kioshiki
    I would like to ask for some recommendations on useful books to read. Initially I had intended to post quite a long description of my current issue and ask for advice, but I realised that I didn't have a clear idea of what I wanted to ask. One thing that is clear to me is that my knowledge in various areas needs improving, and reading is one method of doing that, though choosing the right book seems like a task in itself when there are so many books out there. I am a programmer, but I also deal with analysis, design and testing, so I am not sure what type of book to read. One option might be to work through two books at the same time: I had thought maybe one about design or practices, and another with a more technical focus. Recently I came across one book that I thought might be useful to read: http://xunitpatterns.com/index.html It seems like an interesting book, but the comments I read on amazon.co.uk suggest that the book is probably longer than it needs to be. Has anyone read it who can comment on this? Another book that I already own and would probably be a good one to finish reading is this: http://www.amazon.co.uk/Code-Complete-Practical-Handbook-Construction/dp/0735619670/ref=sr_1_1?ie=UTF8&qid=1309438553&sr=8-1 Has anyone else read this who can comment on its usefulness? Beyond these two I currently have no clear idea of what to read. I have thought about reading a book related to OO design or the GoF design patterns, but I wonder if I am worrying too much about process and practices and not focusing on the actual work. I would be very grateful for any suggestions or comments. Many thanks, Kioshiki

    Read the article

  • Best Method of function parameter validation

    - by Aglystas
    I've been dabbling with the idea of creating my own CMS for the experience, and because it would be fun to run my website off my own code base. One of the decisions I keep coming back to is how best to validate incoming parameters for functions. This is mostly in reference to simple data types, since object validation would be quite a bit more complex. At first I debated creating a naming convention that would contain information about what the parameters should be (int, string, bool, etc.); then I also figured I could create options to validate against. But then in every function I still need to run some sort of parameter validation that parses the parameter name to determine what the value can be, then validates against it. Granted, this would be handled by passing the list of parameters to a function, but that still needs to happen, and one of my goals is to remove the parameter validation from the function itself, so that you only have the actual function code that accomplishes the intended task, without the additional code for validation. Is there any good way of handling this? Or is it so low-level that parameter validation is typically just done at the start of the function call anyway, so I should stick with doing that?
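    One hedged way to get validation out of the function bodies, sketched in TypeScript with invented names rather than any particular CMS's API: declare each function's parameter checks once and wrap the function, so the body holds only the real work:

      // Declarative per-parameter validation via a wrapper (illustrative sketch).
      type Check = (v: unknown) => boolean;

      const isInt: Check = (v) => typeof v === "number" && Number.isInteger(v);
      const isString: Check = (v) => typeof v === "string";

      function withValidation<A extends unknown[], R>(
        checks: Check[],
        fn: (...args: A) => R
      ): (...args: A) => R {
        return (...args: A): R => {
          checks.forEach((check, i) => {
            if (!check(args[i])) throw new TypeError("argument " + i + " failed validation");
          });
          return fn(...args);
        };
      }

      // The wrapped function body now contains only the intended task:
      const repeat = withValidation([isString, isInt], (s: string, n: number) =>
        s.repeat(n)
      );
      console.log(repeat("ab", 3)); // "ababab"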

    Read the article

  • Java Magazine: Growing on Open

    - by Tori Wieldt
    The November/December issue of Java Magazine is now out, with several great Java stories, including:

    Growing on Open: AgroSense provides an all-Java open source platform for sustainable farming and precision agriculture.
    An Engine for Big Data: Hadoop uses Java for large-scale analytics.
    JavaFX in Spring: Stephen Chin shows you why to use the Spring framework on the client.
    JCP Executive Q&A: Mike Milinkovich: The Eclipse Foundation's executive director assesses the state of Java and the JCP.
    Exploring Lambda Expressions for the Java Language and the JVM: Ben Evans, Martijn Verburg, and Trisha Gee help you get ready for lambda expressions in Java SE 8.
    Get Started with Java SE for Embedded Devices on Raspberry Pi: We walk you through getting Linux and Java SE for Embedded Devices to run on the Raspberry Pi in less than an hour.
    Java Nation: Get the news from JavaOne 2012 in San Francisco.

    Java Magazine is a bi-monthly online publication. It includes technical articles on the Java language and platform; Java innovations and innovators; JUG and JCP news; Java events; links to online Java communities; and videos and multimedia demos. Subscriptions are free. Do you have feedback about Java Magazine? Send a tweet to @oraclejavamag.

    Read the article

  • Which approach is the most maintainable?

    - by 2rs2ts
    When creating a product which will inherently suffer from regression due to OS updates, which of these is the preferable approach for reducing maintenance cost and the likelihood of needing refactoring, considering the task of interpreting system state and settings for a lay user?

    1. Delegate the responsibility of interpreting the results of inspecting the system to the modules which perform these tasks, or
    2. Separate the concerns of interpretation and inspection into two modules?

    The first obviously creates a blob in which a lot of code would be verbose, redundant, and hard to grok; the second creates a strong coupling in which the interpretation module essentially has to know what to expect from the inspection routines, and will have to adapt to changes in the OS just as much as the inspection code does. I would normally choose the second option for the separation of concerns, foreseeing the possibility that inspection routines could be re-used. But a developer updating the product to deal with a new OS feature would have to not only write an inspection routine but also write an interpretation routine and link the two correctly, and it gets worse for a developer who has to change which inspection routines are used to get a certain system setting, or, worse yet, has to fix an inspection routine which broke after an OS patch (see the sketch below). I wonder: is it better to have to patch one package a lot, or two packages, each somewhat less so?
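    To make the second option concrete, here is a tiny TypeScript sketch (all names hypothetical) with inspection and interpretation as separate modules; the switch in the interpreter is exactly the coupling the question worries about:

      // Inspection module: knows how to read the system, nothing about wording.
      interface InspectionResult {
        settingName: string;
        rawValue: string; // whatever the OS reported
      }

      function inspectFirewall(): InspectionResult {
        return { settingName: "firewall", rawValue: "1" }; // stub for illustration
      }

      // Interpretation module: turns raw results into lay-user language. It must
      // know what each inspection routine emits, so an OS change that alters the
      // raw values forces edits here as well as in the inspection code.
      function interpret(result: InspectionResult): string {
        switch (result.settingName) {
          case "firewall":
            return result.rawValue === "1" ? "Your firewall is on." : "Your firewall is off.";
          default:
            return "No explanation available for " + result.settingName + ".";
        }
      }

      console.log(interpret(inspectFirewall())); // "Your firewall is on."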

    Read the article

  • How can I stop a process from moving to the background?

    - by Alex
    I have a machine running Ubuntu server version 12.04.3 LTS. On it, I'm attempting to run a node.js server that needs to stay up and running at all times. I'm running into an issue, however, where periodically I see this happen: [1]+ Stopped sudo node server.js When this happens, I have to manually bring it back with fg, which works fine, at least until it stops again. As far as I can tell, it isn't functioning properly while stopped, since I get no log files in those windows of time. So my question is this: Is there a way to prevent it from being stopped like that? I'm running it in a tmux window, if that changes anything. Also, to address the question before it gets asked: I'm running it as sudo due to some ecryptfs issues I've been having. I was originally running it in my home directory, but when it was left alive for too long things would get out of sync and the file writes it has to do would just stop working. To mitigate that, I moved it out of my home directory, but its new location requires me to use sudo permissions for everything to work correctly. Hopefully that isn't related to the whole background task thing. (sudo and tmux tags included in case one or both turn out to actually be relevant to the solution.)

    Read the article

  • I've inherited 200K lines of spaghetti code -- what now?

    - by kmote
    I hope this isn't too general of a question; I could really use some seasoned advice. I am newly employed as the sole "SW Engineer" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: G2 -- think Pascal with graphics). The program itself is a physical model of a complex chemical processing plant; the team that wrote it have incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of non-existent configuration management. Their maintenance efforts are also greatly hampered by the vast accumulation of undocumented "sludge" in the code itself. I will spare you the "politics" of the situation (there's always politics!), but suffice it to say, there is no consensus of opinion about what is needed for the path ahead. They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin. Initially, I'm inclined to tutor them in some of the central concepts of The Pragmatic Programmer, or Fowler's Refactoring ("Code Smells", etc.). I also hope to introduce a number of Agile methodologies. But ultimately, to be effective, I think I'm going to need to home in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most "bang for the buck"? So that's my question: What would you include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)?

    Read the article

  • ODBC in SSIS 2012

    - by jamiet
    In August 2011 the SQL Server client team published a blog post entitled Microsoft is Aligning with ODBC for Native Relational Data Access, in which they basically said "OLE DB is the past, ODBC is the future. Deal with it." From that blog post: "We encourage you to adopt ODBC in the development of your new and future versions of your application. You don't need to change your existing applications using OLE DB, as they will continue to be supported on Denali throughout its lifecycle. While this gives you a large window of opportunity for changing your applications before the deprecation goes into effect, you may want to consider migrating those applications to ODBC as a part of your future roadmap." I recently undertook a project using SSIS 2012 and heeded that advice by opting to use ODBC Connection Managers rather than OLE DB Connection Managers. Unfortunately, my finding was that the ODBC Connection Manager is not yet ready for primetime use in SSIS 2012. The main issue I found was that you can't populate an Object variable with a recordset when using an Execute SQL Task connecting to an ODBC data source; any attempt to do so will result in an error: "Disconnected recordsets are not available from ODBC connections." I have filed a bug on Connect at ODBC Connection Manager does not have same functionality as OLE DB. For this reason I strongly recommend that you don't make the move to ODBC Connection Managers in SSIS just yet; best to wait for the next version of SSIS before doing that. I found another couple of issues with the ODBC Connection Manager that are worth keeping in mind:

    It doesn't recognise System Data Source Names (DSNs), only User DSNs (bug filed at ODBC System DSNs are not available in the ODBC Connection Manager). UPDATE: According to a comment on that Connect item, this may only be a problem on 64-bit.
    In the OLE DB Connection Manager parameter ordinals are 0-based; in the ODBC Connection Manager they are 1-based (oh, I just can't wait for the upgrade mess that ensues from this one!).

    You have been warned!
    @jamiet

    Read the article

  • Offset Forward vector of object based on Rotation

    - by Taylor
    I'm using the Bullet 3D physics engine in an iOS application running OpenGL ES 1.1. Currently I'm accepting info from the gyroscope to allow the user to "look around" a 3D world that follows a bouncing ball (note: it only takes in the yaw, to look around 360 degrees). I'm also accepting information from the accelerometer, based on the tilt, to push the ball. As of right now, to move forward the user tilts the device forward (using the accelerometer); to move to the right, the user tilts the device to the right, and so on. The forward vector is currently along the ball's local Z-axis. The problem is that I want to change the ball's bounce direction based on where the user has changed the view; as it is, if I change the view, the ball still bounces in the fixed direction. I want to change the forward-facing direction so that when a user changes the view (say, rotating the device to look at the right of the world), tilting the device forward will result in a forward force in that direction. Basically, I want the forward vector to take the rotation into consideration. Sorry if I didn't explain the issue well enough; it's kind of confusing to write down.
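    A minimal sketch of one common fix, assuming yaw is measured about the up (Y) axis and local forward is +Z (both assumptions; adjust to the engine's actual conventions): derive the forward vector from the current view yaw before applying the accelerometer force, shown here in TypeScript for brevity:

      type Vec3 = { x: number; y: number; z: number };

      // Rotate the local +Z forward axis by the current yaw about +Y (up).
      function forwardFromYaw(yawRadians: number): Vec3 {
        return { x: Math.sin(yawRadians), y: 0, z: Math.cos(yawRadians) };
      }

      // Scale the view-relative forward vector by the tilt to get the push force.
      function pushForce(yawRadians: number, tiltMagnitude: number): Vec3 {
        const f = forwardFromYaw(yawRadians);
        return { x: f.x * tiltMagnitude, y: 0, z: f.z * tiltMagnitude };
      }

      // Example: looking 90 degrees to the right, a forward tilt pushes along +X.
      console.log(pushForce(Math.PI / 2, 10)); // ~{ x: 10, y: 0, z: 0 }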

    Read the article

  • What is a Coding Dojo?

    - by huwyss
    Recently I found out that there is a thing called a "coding dojo". The point behind it is that software developers want a space to learn new stuff like processes, methods, coding details, languages, and whatnot, in an environment without stress. Just for fun. No competition. No results required. No deadlines. Some days ago I joined the Zurich coding dojo. We were three programmers with different backgrounds. We gave ourselves the task of developing a method that takes an input value and returns its prime factors. We did pair programming, and every few minutes we switched positions. We used test-driven development. The chosen programming language was Ruby. I hadn't really done TDD before. It was pretty interesting to see the algorithm develop following the test cases. We started with the first test, input=1, then developed the most simple productive program that passed this very first test. Then we added the next test, input=2, and implemented the productive code. We kept adding tests, and made sure all tests passed, until we had the general solution. When we improved the performance of our code we saw the value of the tests we had written before; of course, our first performance improvement broke several tests. It was a very interesting experience to see how other developers think and how they work. I will participate in the dojo again and can warmly recommend it to anyone. There are coding dojos all over the world. Have fun!
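    For reference, here is the dojo's exercise sketched in TypeScript rather than the Ruby used at the session, in roughly the shape the incremental tests (input=1, input=2, ...) drive you toward:

      // Return the prime factors of a positive integer by trial division.
      function primeFactors(n: number): number[] {
        const factors: number[] = [];
        let remainder = n;
        // Divide out each candidate factor as many times as it fits.
        for (let candidate = 2; candidate * candidate <= remainder; candidate++) {
          while (remainder % candidate === 0) {
            factors.push(candidate);
            remainder /= candidate;
          }
        }
        if (remainder > 1) factors.push(remainder); // whatever is left is prime
        return factors;
      }

      console.log(primeFactors(1));  // []
      console.log(primeFactors(2));  // [2]
      console.log(primeFactors(60)); // [2, 2, 3, 5]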

    Read the article

  • Entity system in Lua, communication with C++ and level editor. Need advice.

    - by Notbad
    Hi! I know this is a really difficult subject. I have been reading a lot these days about entity systems, etc., and now I'm ready to ask some questions (if you don't mind answering them), because I'm really confused. First of all, I have a basic 2D editor written in Qt, and I'm in the process of adding entity editing. I want the editor to be able to receive RTTI information from entities to change properties, and to create some logic by linking published events to published actions (e.g. a level's "activate" event triggers a door's "open" action), etc. Because of all this, I guess my entity system should be written in a scripting language, in my case Lua. On the other hand, I want to use a component-based design for my entities, and here my questions start: 1) Should I define my components in C++? If I define them in C++, won't I lose all the RTTI information I want for my editor? On the other hand, I use Box2D for physics; if I define all my components in script, won't it be a lot of work to expose third-party libs to Lua? 2) Where should I place the message system for my game engine? Lua? C++? I'm tempted to have C++ objects behave as servers, offering services to the Lua business logic: things like the physics system, rendering system, input system, World class, etc. And for all the other things, Lua: creation/composition of entities based on components, game logic, etc. Could anyone give any insight on how to accomplish this, and which approach is better? A sketch of the event-linking idea follows. Thanks in advance, HexDump.
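    As a single-language illustration of the "engine services publish events, script logic subscribes" split being described (all names invented; in the real project the publishers would be the C++ side and the subscribers Lua), a minimal message bus could look like this, sketched in TypeScript:

      type Listener = (payload: unknown) => void;

      class EventBus {
        private listeners = new Map<string, Listener[]>();

        subscribe(event: string, fn: Listener): void {
          const list = this.listeners.get(event) ?? [];
          list.push(fn);
          this.listeners.set(event, list);
        }

        publish(event: string, payload: unknown): void {
          for (const fn of this.listeners.get(event) ?? []) fn(payload);
        }
      }

      // "Engine side" publishes events; "script side" wires logic, like linking
      // a level's activate event to a door's open action in the editor.
      const bus = new EventBus();
      bus.subscribe("level.activated", () => console.log("door opens"));
      bus.publish("level.activated", { levelId: 1 }); // -> "door opens"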

    Read the article

  • How to implement "bullet time" in a multiplayer game?

    - by Tom
    I have never seen such a feature before, but it should provide an interesting gameplay opportunity. So, in a multiplayer/real-time environment (imagine an FPS), how could I implement a slow-motion/bullet-time effect? Something like an illusion for the player that's currently slo-mo'ed: everybody else sees him moving in real time, but he sees everything slowed down. Update: a sidenote: keep in mind that an FPS game has to be balanced in order for it to be fun. So yes, this bullet-time feature has to be solid, giving a small advantage to the "player" while not taking away from other players. Plus, there is a possibility that two players could activate their bullet time at the same time. Furthermore: I'm going to implement this in the future no matter what it takes, and the idea is to build a whole new game engine for all this. If that opens new options, I'm more than interested in hearing the ideas. Meanwhile, my team and I are thinking about this too; once our theory is crafted, I'm going to share it here. Is this even possible? The question of "is this even possible" has been answered; now it's time to find the best solution. I'm keeping the "answer" open until something exceptionally good comes up, like a prototype theory with something close to working pseudo code.
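    One hedged starting point, purely a sketch and not a balanced design: keep the server authoritative at real time and let the bullet-time client scale its local simulation step, so the affected player perceives slow motion while everyone else simulates normally. All names and factors below are invented for illustration (TypeScript):

      interface Entity { position: number; velocity: number }

      // timeScale = 1 for normal players; e.g. 0.5 while bullet time is active.
      function step(entities: Entity[], dtSeconds: number, timeScale: number): void {
        const dt = dtSeconds * timeScale;
        for (const e of entities) e.position += e.velocity * dt;
      }

      // The slo-mo player perceives half-speed motion; others simulate normally.
      const world: Entity[] = [{ position: 0, velocity: 10 }];
      step(world, 0.016, 0.5); // bullet-time client view of one frame
      console.log(world[0].position); // 0.08 units instead of 0.16

    The hard part this sketch ignores is reconciliation: the server's authoritative snapshots still arrive at full speed, so the scaled client view has to be blended back to authoritative state without visible snapping.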

    Read the article

  • Advice on designing web application with a 40+ year lifetime

    - by user2708395
    Scenario

    Currently, I am a part of a health care project whose main requirement is to capture data with unknown attributes using user-generated forms created by health care providers. The second requirement is that data integrity is key and that the application will be used for 40+ years. We are currently migrating the client's data from the past 40 years from various sources (paper, Excel, Access, etc.) to the database. Future requirements are:

    Workflow management of forms
    Schedule management of forms
    Security/role-based management
    Reporting engine
    Mobile/tablet support

    Situation

    Only 6 months in, the current (contracted) architect/senior programmer has taken the "fast" approach and has designed a poor system. The database is not normalized, the code is coupled, the tiers have no dedicated purpose, and data is starting to go missing since he has designed some beans to perform "deletes" on the database. The code base is extremely bloated, and there are jobs just to synchronize data since the database is not normalized. His approach has been to rely on backup jobs to restore missing data, and he doesn't seem to believe in refactoring. Having presented my findings to the PM, the architect will be removed when his contract ends. I have been given the task of re-architecting this application. My team consists of me and one junior programmer; we have no other resources. We have been granted a 6-month requirement freeze in which we can focus on re-building this system. I suggested using a CMS like Drupal, but for policy reasons at the client's organization, the system must be built from scratch. This is the first time that I will be designing a system with a 40+ year lifespan. I have only worked on projects with 3-5 year lifespans, so this situation is very new, yet exciting.

    Questions

    What design considerations will make the system more "future-proof"? What experiences have you had in designing such systems, both failures and successes? What questions should be asked of the client/PM to make the system more "future-proof"?

    Read the article

  • Most suited technology for browser games?

    - by Tingle
    I was thinking about making a 2D MMO which I would, in the long run, support on various platforms: desktop, Mac, browser, Android and iOS. The server will be C++/Linux based, and the first client would run in the browser. I have done some research and found that WebGL and Flash 11 support hardware-accelerated rendering; I saw some other options, like normal HTML5 painting. So my question is: which technology should I use for such a project? My main goal would be that users have a hassle-free experience, using what their hardware can give them through hardware acceleration, and the client should work on the most basic out-of-the-box PC that any casual PC or Mac user has. Another criterion is that it should be developer-friendly. I've messed with WebGL a bit, for example, and that would require writing an engine from scratch, which is acceptable but not preferred. Also, in the case of the non-ActionScript options, which language is most preferred in terms of speed and flexibility? I'm not too fond of JavaScript due to the garbage collector, but I have learned to work around it. Thank you for your time.

    Read the article

  • Precise: Evolution laggy due to IMAP profile or due to some odd sync issue?

    - by Izzy
    I'm fighting with Evolution. Basically it's working fine, but it is very slow to react in certain situations. There is apparently some problem with syncing and IMAP.

    Helper questions: Could the move away from Bonobo have something to do with the slowdown? There might be some trouble with the new engine and "asynchronous actions". What to do about it? I want to get the previous "working mood" back. How can I speed this thing up?

    Different scenarios: When sending a mail, the composer window hangs there, inactive, for a couple of seconds, everything grayed out. Though there is a green check mark saying it's sent, I'm not sure a) why it's still blocking everything and b) whether I could simply close it without "breaking"/"losing" anything. In earlier versions, the composer window closed pretty fast, and one could see the message being stored in the local "outbox" until it was sent, and one could immediately continue with the next task. I prefer that behaviour over the current one. Switching between modules: coming from mail, switching to the address book takes a couple of seconds; same for switching to the calendar. I read about different "possible causes" and tried a few things: I only have 3 local address books, so no networking should be involved here. To make sure, I switched to offline mode and then tried to access the address book: no noticeable difference. I use 3 Google Calendars; switching to offline mode made a minor difference, but so minor that it could also be "imagination", since one might have expected a difference in this case. According to some reports, disabling the tasks should help; well, it didn't in my case, as I don't use them regularly (just two local items stored here).

    Read the article

  • StyleCop 4.7.36.0 is out!

    - by TATWORTH
    StyleCop 4.7.36.0 has been released at http://stylecop.codeplex.com/releases/view/79972. This is an update to coincide with the latest ReSharper. The full fix list is:

    4.7.36.0 (508dbac00ffc)
    =======================
    Fix for 7344. Don't throw 1126 inside default expressions.
    Fix for 7371. Compare Namespace parts using the CurrentCulture and not InvariantCulture.
    Fix for 7386. Don't throw casing violations for field names in languages that do not support case (like Chinese). Added new tests.
    Fix for 7380. Catch Exception caused by CRM Toolkit.
    Update ReSharper 7.0 dependency to 7.0.1 (7.0.1098.2760).
    Fix for 7358. Use the RuleId in the call to MSBuild Logging.
    Fix for 7348. Update suggestion text for constructors.
    Fix for 7364. Don't throw 1126 for New Array Expressions.
    Fix for 7372. Throw 1126 inside catch blocks wasn't working. Added new tests.
    Fix for 7369. Await is allowed to be inside parentheses. Added new tests.
    Fix tests.
    Correct styling issues.
    Fix for 7373. Typeparam violations were not being thrown in all cases. Added new tests.
    Fix for 7361. Rule 1120 was logging against the root element and so Suppressions wouldn't work. Fixed and added tests.
    Updating de-DE resources - from Michael Diermeier - thank you.
    Change for 7368. Add the violation count into the Task outputs.
    Fix for 7383. Fix for memory leak in plugins.
    Update environment to detect ReSharper 7.
    Fix for 7378. Null reference exception from command line run in message output.
    Update release history.

    Read the article

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using this to update a texture each frame. Everything is in place except for the part that actually gets the video. I have looked on Google and found that the video library world is very expansive (and geared towards video processing), and I am having trouble finding a suitable one. FFmpeg is very comprehensive, but it is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is the one based on the VLC player libraries; by virtue of using the same resources as VLC Player, it is known to be very capable. It also renders to blocks of memory, but the API (at least of the one on CodePlex) is more of a port of the C++ API than a managed wrapper. The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely:

    Renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily.
    Super simple: all that is needed is a way to load, jump to and render a frame programmatically. Ideally it would use the system's codecs and not require an assortment of plugins.
    Permissive license (LGPL or freer).
    .NET bindings at least; all the better if it is natively managed.

    Can anyone suggest a lightweight (.NET) library that can take a video file and spit out some frames into a byte[]?

    Read the article

  • Could these people get arrested? [closed]

    - by Vinicius Horta
    I have seen many of what are called 'private servers' for MMORPGs (multiplayer online games), which use stolen sources, modified executables, clients and servers. People launch their own server using a VPS or dedicated server and distribute the online service among players, disclaiming that it has educational purposes only and saying they are studying the game engine, while selling items to players and disclaiming the payments as 'donations', so it seems like they are getting donations to keep studying. We all know it's a commercial method. All of it is copyrighted material from enterprise ABCD (ABCD = fictional name; I'm not mentioning names). At their website they include the following: "Private Server XXXX does not allow/support any conection to any company/organization associated with the game XXXX. If you are anyway affiliated with enterprise XXXXXXX, or any other company/organization associated with the game XXXX, you may not view/open/read/execute/play/download any part of Private Server XXXX nor view Private Server XXXX website; if any company/organization requests you to investigate our website/server, you may not view Private SERVER XXXX or execute any action mentioned above. Any person caught disobeting this disclaimer will be punish to the fulleste extent of law." (sic) Can these guys get arrested? Does their disclaimer work? If I'm the owner of enterprise X and I know people stole my source and are using it, but they have such a disclaimer, am I not allowed to investigate them?

    Read the article

  • Hosting and scaling of a facebook application on cloud?

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but we are still not sure where to host it economically, with a good provision to scale in case the app goes viral. Some details about the app:

    i) It would be HTML based, like a website, using Django as the framework.
    ii) 100K is the number of expected pageviews in a day if the app goes viral.
    iii) The users will not generate any media content; only some database data will be generated by them.

    It would be great if someone with more experience could give guidance on the following points:

    A) Hosting on Google App Engine, Amazon EC2, or some other cloud like Rackspace: preferable points found in App Engine were ease of deployment, cost effectiveness and easy scaling. For EC2: full hold of the virtual machine, plus Amazon NoSQL and RDBMS database services in case we decide to use them.

    B) Does the backend technology affect monthly cost? E.g., would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments)

    C) Does something like Heroku, which provides additional services over Amazon EC2, prove to be better than raw cloud management?

    It is not that we are trying for premature scaling; we just want to have a good start so that we are ready to handle unpredicted growth and scale.

    Read the article

  • SEO and external sites that serve responsive images (like Re-SRC)

    - by Baumr
    Re-SRC is a tool that allows you to automatically serve responsive images for your website from their cloud servers. It delivers a new image file each time the browser window (viewport) is resized. To use it in your HTML when linking to an image, you would do the following: <img src="http://app.resrc.it//www.your-domain.com/img/img001.jpg"/> Some more background for SEO considerations: as an example, looking at their demo page's code, the src of the Arc de Triomphe photo, when the browser window is resized to tablet width, shows this particular file at its widest. It is found under the following URL: http://app4-uk.resrc.it/s=w560,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg If the viewport is increased to desktop width, then a smaller image is served in line with the design; see this URL: http://app4-uk.resrc.it/s=w320,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg If I change the viewport to be about half-way between those two, then the image's URL is: http://app4-uk.resrc.it/s=w240,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg In other words, I found that there is a separate file for every 10-pixel increment of the image width. Very cool for saving bandwidth on mobile devices and serving responsive/retina images on others, but... Here are two problems I see for SEO:

    1. The img on your site, part of your semantic markup, will not be hosted on your site at all, or even on a server you control. Any links to these images will pass on "link juice" to Re-SRC's site instead.
    2. You are serving a vast array of different image files to different people; some may link to one size, others to another. Then there's the question of what different search engine crawlers will see.

    Also: there seems to be no fallback option if their servers are down. Do you see any other concerns? Or, perhaps, do you not see these as concerns?

    Read the article
