Search Results

Search found 22301 results on 893 pages for 'software sources'.

  • Node.js vs PHP processing speed

    - by Cody Craven
    I've been looking into node.js recently and wanted to see a true comparison of processing speed for PHP vs Node.js. In most of the comparisons I had seen, Node trounced Apache/PHP setups handily. However, all of those tests were small "hello worlds" that would not accurately reflect any web page's markup, so I decided to create a basic HTML page with 10,000 hello-world paragraph elements. In these tests, Node with Cluster was beaten to a pulp by PHP on Nginx utilizing PHP-FPM. So I'm curious whether I am misusing Node somehow or whether Node really is just this bad at processing power. Note that my results were equivalent when outputting "Hello world\n" as text/plain instead of the HTML, but I only included the HTML case as it's closer to the use case I was investigating.

    My testing box:

      - Core i7-2600 Intel CPU (4 cores, 8 threads)
      - 8GB DDR3 RAM
      - Fedora 16 64-bit
      - Node.js v0.6.13
      - Nginx v1.0.13
      - PHP v5.3.10 (with PHP-FPM)

    My test scripts:

    Node.js script (adapted from the cluster example in Node.js' documentation at http://nodejs.org/docs/latest/api/cluster.html):

        var cluster = require('cluster');
        var http = require('http');
        var numCPUs = require('os').cpus().length;

        if (cluster.isMaster) {
            // Fork workers.
            for (var i = 0; i < numCPUs; i++) {
                cluster.fork();
            }
            cluster.on('death', function (worker) {
                console.log('worker ' + worker.pid + ' died');
            });
        } else {
            // Worker processes have an HTTP server.
            http.Server(function (req, res) {
                res.writeHead(200, {'Content-Type': 'text/html'});
                res.write('<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n');
                for (var i = 0; i < 10000; i++) {
                    res.write('<p>Hello world</p>\n');
                }
                res.end('</body>\n</html>');
            }).listen(80);
        }

    PHP script:

        <?php
        echo "<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n";
        for ($i = 0; $i < 10000; $i++) {
            echo "<p>Hello world</p>\n";
        }
        echo "</body>\n</html>";

    My results:

    Node.js:

        $ ab -n 500 -c 20 http://speedtest.dev/
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking speedtest.dev (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Finished 500 requests

        Server Software:
        Server Hostname:        speedtest.dev
        Server Port:            80

        Document Path:          /
        Document Length:        190070 bytes

        Concurrency Level:      20
        Time taken for tests:   14.603 seconds
        Complete requests:      500
        Failed requests:        0
        Write errors:           0
        Total transferred:      95066500 bytes
        HTML transferred:       95035000 bytes
        Requests per second:    34.24 [#/sec] (mean)
        Time per request:       584.123 [ms] (mean)
        Time per request:       29.206 [ms] (mean, across all concurrent requests)
        Transfer rate:          6357.45 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.2      0       2
        Processing:    94  547 405.4    424    2516
        Waiting:        0  331 399.3    216    2284
        Total:         95  547 405.4    424    2516

        Percentage of the requests served within a certain time (ms)
          50%    424
          66%    607
          75%    733
          80%    813
          90%   1084
          95%   1325
          98%   1843
          99%   2062
         100%   2516 (longest request)

    PHP/Nginx:

        $ ab -n 500 -c 20 http://speedtest.dev/test.php
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking speedtest.dev (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Finished 500 requests

        Server Software:        nginx/1.0.13
        Server Hostname:        speedtest.dev
        Server Port:            80

        Document Path:          /test.php
        Document Length:        190070 bytes

        Concurrency Level:      20
        Time taken for tests:   0.130 seconds
        Complete requests:      500
        Failed requests:        0
        Write errors:           0
        Total transferred:      95109000 bytes
        HTML transferred:       95035000 bytes
        Requests per second:    3849.11 [#/sec] (mean)
        Time per request:       5.196 [ms] (mean)
        Time per request:       0.260 [ms] (mean, across all concurrent requests)
        Transfer rate:          715010.65 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.2      0       1
        Processing:     3    5   0.7      5       7
        Waiting:        1    4   0.7      4       7
        Total:          3    5   0.7      5       7

        Percentage of the requests served within a certain time (ms)
          50%      5
          66%      5
          75%      5
          80%      6
          90%      6
          95%      6
          98%      6
          99%      6
         100%      7 (longest request)

    Additional details: Again, what I'm looking for is to find out whether I'm doing something wrong with Node.js or whether it really is just that slow compared to PHP on Nginx with FPM. I certainly think Node has a real niche that it could fit well; however, these test results (which I really hope I made a mistake with, as I like the idea of Node) lead me to believe that it is a horrible choice for even a modest processing load when compared to PHP (let alone the JVM or various other fast solutions). As a final note, I also tried running an Apache Bench test against Node with $ ab -n 20 -c 20 http://speedtest.dev/ and consistently received a total test time greater than 0.900 seconds.
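    One variable these tests leave unmeasured (an editorial observation, not something benchmarked in the post): the Node worker issues over ten thousand separate res.write() calls per request, each with its own overhead, while PHP's output is typically buffered and flushed in larger chunks by the SAPI. A minimal single-write variant of the worker body, as a sketch for comparison only:

        // Sketch (not part of the original benchmark): build the body once at
        // startup, so each request performs a single write instead of ~10,000.
        // If this closes the gap, per-write overhead, not raw processing
        // speed, dominated the Node numbers above.
        var body = '<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n';
        for (var i = 0; i < 10000; i++) {
            body += '<p>Hello world</p>\n';
        }
        body += '</body>\n</html>';

        http.Server(function (req, res) {
            res.writeHead(200, {
                'Content-Type': 'text/html',
                'Content-Length': Buffer.byteLength(body)
            });
            res.end(body);
        }).listen(80);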

  • Where has my parallel port gone? ioperm(888,1,1) returns -1.

    - by marcusw
    I have an old Dell Dimension 8200 running Gentoo which I use solely to control various things using the parallel port. After shutting it down a few weeks ago, I started it up again today and tried to access the parallel port like I usually do. Unfortunately, my code bombed out when it called ioperm(888,1,1) to grab the parallel port: the call returned an error code of -1. There have been no changes to the system, hardware or software: no updates, no tweaking, no dropping the case, no over-amping the data pins, nothing. The port and the software have been working fine for months with no changes, and were working fine when I shut the machine down last. Running my code with root privileges changes nothing. What is breaking this, and how can I fix it?
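    For reference, a minimal sketch of the usual ioperm() access pattern (assuming glibc on x86, LPT1's data register at 0x378 = 888, and root privileges). The useful extra here is perror(): errno distinguishes a permissions failure (EPERM) from a bad argument (EINVAL), which narrows down what changed:

        /* Sketch: gcc -O2 lpt.c (the -O matters historically, since outb() is
           an inline function in <sys/io.h>). Run as root. */
        #include <stdio.h>
        #include <sys/io.h>

        int main(void)
        {
            if (ioperm(0x378, 1, 1) == -1) {   /* request access to one port */
                perror("ioperm");              /* EPERM = no privilege,
                                                  EINVAL = bad port range */
                return 1;
            }
            outb(0xFF, 0x378);                 /* drive all data pins high */
            ioperm(0x378, 1, 0);               /* release the port */
            return 0;
        }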

  • Outlook 2003 Add-In Setup Project with COM DLL Deployment Problem

    - by Malkier
    Hi, I developed an Outlook 2003 add-in which uses the COM DLL Redemption. I created a Visual Studio 2008 setup project, added a custom action to run

        caspol.exe -machine -addgroup 1 -strong -hex [key] -noname -noversion FullTrust -n "Name" -description "desc"

    and moved the registry keys from under HKCU\Software to HKLM as described in http://msdn.microsoft.com/en-us/library/cc136646.aspx#AutoDeployVSTOse_InstallingtheAddinforAllUsers to ensure all-users compatibility. I included the Redemption.dll in the setup with vsdrfCOMSelfReg (vsdrfCOM threw an error). My problem is: when I install the setup on a test machine under an admin account, it runs fine under all users. However, when we use the company-wide software deployment, which runs under a system account, the setup executes but the add-in won't load. If I repair the installation with an admin account, it loads just fine again. Shouldn't a system account have the required permissions to install all of the components? What options do I have? Thanks for any suggestions.
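    For comparison when debugging, a sketch of the per-machine registration state the installer needs to leave behind (the ProgID "MyCompany.OutlookAddin" is a placeholder; LoadBehavior, FriendlyName, and Description are the documented Office add-in values). Writing under HKLM requires an elevated or system account:

        // Sketch: the HKLM keys an all-users Outlook add-in ends up with.
        using Microsoft.Win32;

        class AddinRegistrar
        {
            static void Main()
            {
                using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
                    @"Software\Microsoft\Office\Outlook\Addins\MyCompany.OutlookAddin"))
                {
                    key.SetValue("LoadBehavior", 3, RegistryValueKind.DWord); // load at startup
                    key.SetValue("FriendlyName", "My Add-in");
                    key.SetValue("Description", "Example add-in registration");
                }
            }
        }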

  • Selling an app to a company - How much to charge?

    - by Moshe
    I wrote an app targeting a particular clientele. A software company with a reputation among my target audience is willing to negotiate a price to either license or buy it. As a newcomer to the app store, I am not sure that I will successfully market it myself. What would be appropriate terms of a sale or license and what about pricing? I am looking for answers that draw from personal experience with software, although not necessarily apps. I've seen this post on SO, but it's a few years old and I assume that the app market has changed and stabilized somewhat. Thanks.

  • Creating an installer with WPF forms, packaged files and custom setup actions

    - by RodH257
    I'm trying to create a way of deploying a set of tools (which are add-ins to 3rd-party software) to my users. I would like the installer to do the following:

      1. The user enters a serial number
      2. DLLs in their directory structure are extracted to Program Files
      3. A file is copied to a location in ProgramData (this registers my add-ins with the 3rd-party application)
      4. Online activation for the software is performed

    Can anyone point me in the right direction for this? I had a look at deployment projects in Visual Studio, but I'm not sure if they are what I'm after. The main problem is they are ugly; I would like to have a nice WPF installer and a more custom experience. But I guess that can be traded off if it's going to make things easier. I was thinking I could just make my own C# project that extracts the files, but I have no idea how to package them up and extract them all as part of one download (like the MSI files that the deployment projects create). Can anyone point me in the right direction?
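    One possible answer to the packaging question (a sketch, not a recommendation): embed the payload files as assembly resources in the single WPF installer executable and stream them out at install time. The resource and target names here are placeholders:

        // Sketch: extract a file embedded as an assembly resource.
        using System.IO;
        using System.Reflection;

        static class Payload
        {
            public static void ExtractResource(string resourceName, string targetPath)
            {
                Assembly asm = Assembly.GetExecutingAssembly();
                // GetManifestResourceStream returns null if the name is wrong;
                // names are "DefaultNamespace.Folder.File.ext".
                using (Stream src = asm.GetManifestResourceStream(resourceName))
                using (FileStream dst = File.Create(targetPath))
                {
                    byte[] buffer = new byte[81920];
                    int read;
                    while ((read = src.Read(buffer, 0, buffer.Length)) > 0)
                        dst.Write(buffer, 0, read);
                }
            }
        }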

  • What architecture for implementing a rich-text editor?

    - by genesys
    Hi! Can someone give me some hints on what a clean implementation (design-wise) of a rich-text editor could look like, one that allows for things like setting fonts, setting character colors, and so on? And when and how are characters rendered? Are characters rendered only once, with the bitmap representation cached? Is there any article or book covering what software design would be appropriate for that? Background: we're working on text-editing software for a language that cannot be displayed with Unicode. Any hint is appreciated! Thanks!
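    One common shape for this, sketched below with invented names: the document is a sequence of styled runs rather than raw characters, and the renderer caches one bitmap per (character, font, size) so each glyph is rasterized once and blitted on every later paint. GDI+ stands in here for whatever rasterizer a custom, non-Unicode script would actually need:

        using System.Collections.Generic;
        using System.Drawing;

        class TextRun
        {
            public string Text;
            public string FontFamily;
            public float FontSize;
            public int ArgbColor;      // style travels with the run, not the char
        }

        class GlyphCache
        {
            private readonly Dictionary<string, Bitmap> cache =
                new Dictionary<string, Bitmap>();

            public Bitmap Get(char c, string fontFamily, float size)
            {
                string key = c + "|" + fontFamily + "|" + size;
                Bitmap glyph;
                if (!cache.TryGetValue(key, out glyph))
                {
                    glyph = Rasterize(c, fontFamily, size);  // pay the cost once
                    cache[key] = glyph;
                }
                return glyph;
            }

            private static Bitmap Rasterize(char c, string fontFamily, float size)
            {
                // Placeholder rasterizer; a custom script substitutes its own
                // shaping and glyph drawing here.
                Bitmap bmp = new Bitmap((int)(size * 2) + 1, (int)(size * 2) + 1);
                using (Graphics g = Graphics.FromImage(bmp))
                using (Font font = new Font(fontFamily, size))
                {
                    g.DrawString(c.ToString(), font, Brushes.Black, 0f, 0f);
                }
                return bmp;
            }
        }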

  • How to use Mesa3D on Mac OS X and Windows

    - by gutsblow
    Hello all, I need to use Mesa3D for a cross-platform application (Windows and Mac only) which uses only offline software rendering. The reason I want to use Mesa3D is that it has the same drawing calls as OpenGL, and they are really easy. Now I know that Apple itself has a software implementation (which I've heard is flaky), but I prefer using Mesa so that it's a lot easier for me to maintain the code on both platforms. On Windows I managed to compile three DLLs from the Mesa3D source, but I don't know what to do with them. On Mac OS X I am completely clueless. I would highly appreciate your help. Thank you once again very much!
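    Assuming the build included Mesa's off-screen driver (OSMesa), the usual way to drive it is the same C code on both platforms; only the build and link steps differ (link against the osmesa library produced by the Mesa build). A minimal sketch:

        /* Sketch: render off-screen into a plain RGBA buffer via OSMesa. */
        #include <GL/osmesa.h>
        #include <GL/gl.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const int width = 320, height = 240;
            unsigned char *buffer = malloc(width * height * 4);  /* RGBA */

            OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
            if (!ctx || !OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE,
                                           width, height)) {
                fprintf(stderr, "OSMesa init failed\n");
                return 1;
            }

            /* Ordinary OpenGL calls now render into 'buffer'. */
            glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            glFinish();

            /* 'buffer' holds the rendered pixels; write them out as needed. */
            OSMesaDestroyContext(ctx);
            free(buffer);
            return 0;
        }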

  • Tools to create maximum velocity in a .NET dev team

    - by Søren Spelling Lund
    If you were to self-fund a software project, which tools, frameworks, and components would you employ to ensure maximum productivity for the dev team and that the "real" problem is being worked on? What I'm looking for are low-friction tools which get the job done with a minimum of fuss. Tools I'd characterize as such are SVN/TortoiseSVN, ReSharper, and VS itself. I'm looking for frameworks which solve the problems inherent in all software projects, like ORM, logging, and UI frameworks/components. An example on the UI side would be ASP.NET MVC vs WebForms vs MonoRail.

  • Does the ASL comply with the MS-PL?

    - by John Simons
    I would like to redistribute a compiled version of the Yahoo! UI Library: YUI Compressor for .NET (http://yuicompressor.codeplex.com), which according to the web site is licensed under the MS-PL (http://yuicompressor.codeplex.com/license). The project I work on is released under the terms of the Apache Software Foundation License 2.0. The MS-PL says: "If you distribute any portion of the software in compiled or object code form, you may only do so under a license that complies with this license." The term "complies" is not very clear. Does the ASL comply with the MS-PL?

  • How do I convert PDF to HTML programmatically?

    - by SoaperGEM
    Are there any classes, COM objects, command-line utilities, or anything else that I can build an API around to convert a PDF to an HTML document? Obviously the conversion might be a little rough, since PDFs can contain a lot more than HTML can describe. I found a utility called pdftohtml on SourceForge, but quite honestly it does a horrible job with the conversion. I don't care if the software is free or commercial, but is there anything out there at all that I can incorporate with my own software to do this sort of conversion at least decently? I know Google has developed its own method of doing this, since you can click "View as HTML" on a PDF attached to an email in Gmail, but I was hoping there was something available to the public. Remember, PDF to HTML. I'm NOT worried about HTML to PDF.
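    Whichever converter ends up doing the work, one way to "incorporate" a command-line tool is to wrap it behind a small API. A sketch, with pdftohtml (and its -noframes option) shown only as the example binary to swap out:

        using System;
        using System.Diagnostics;

        static class PdfConverter
        {
            // Sketch: shell out to an external converter and surface its errors.
            public static void ConvertPdfToHtml(string pdfPath, string htmlPath)
            {
                ProcessStartInfo psi = new ProcessStartInfo();
                psi.FileName = "pdftohtml";   // example binary, not an endorsement
                psi.Arguments = "-noframes \"" + pdfPath + "\" \"" + htmlPath + "\"";
                psi.UseShellExecute = false;
                psi.RedirectStandardError = true;
                psi.CreateNoWindow = true;

                using (Process p = Process.Start(psi))
                {
                    string err = p.StandardError.ReadToEnd();
                    p.WaitForExit();
                    if (p.ExitCode != 0)
                        throw new InvalidOperationException("Conversion failed: " + err);
                }
            }
        }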

  • How would I best address this object type hierarchy? Some kind of enum hierarchy?

    - by FerretallicA
    I'm curious as to any solutions out there for addressing object hierarchies in an ORM approach (in this instance, using Entity Framework 4). I'm working through some docs on EF4 and trying to apply it to a simple inventory tracking program. The possible types for inventory to fall into are as follows:

    Inventory item types:

      Hardware
        PC
          Desktop
          Server
          Laptop
        Accessory
          Input (keyboards, scanners, etc.)
          Output (monitors, printers, etc.)
          Storage (USB sticks, tape drives, etc.)
          Communication (network cards, routers, etc.)
      Software

    What recommendations are there for handling enums in a situation like this? Are enums even the solution? I don't really want to have a ridiculously normalized database for such a relatively simple experiment (e.g. tables for InventoryType, InventorySubtype, InventoryTypeToSubtype, etc.). I don't really want to over-complicate my data model by giving each subtype its own inherited type even though no additional properties or methods are included (except PC types, which would ideally have associated accessories and software, but that's probably out of scope here). It feels like there should be a really simple, elegant solution to this, but I can't put my finger on it. Any assistance or input appreciated!
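    One lightweight middle ground between three join tables and an inheritance tree of empty subclasses (all names invented for illustration): flat enums plus a static parent map, so the hierarchy lives in one place in code and persists as plain integers:

        using System.Collections.Generic;

        enum ItemCategory { PC, Accessory, Software }

        enum ItemSubtype
        {
            Desktop, Server, Laptop,                    // PC
            Input, Output, Storage, Communication,      // Accessory
            None                                        // Software has no subtype
        }

        static class InventoryTaxonomy
        {
            public static readonly Dictionary<ItemSubtype, ItemCategory> Parent =
                new Dictionary<ItemSubtype, ItemCategory>
                {
                    { ItemSubtype.Desktop,       ItemCategory.PC },
                    { ItemSubtype.Server,        ItemCategory.PC },
                    { ItemSubtype.Laptop,        ItemCategory.PC },
                    { ItemSubtype.Input,         ItemCategory.Accessory },
                    { ItemSubtype.Output,        ItemCategory.Accessory },
                    { ItemSubtype.Storage,       ItemCategory.Accessory },
                    { ItemSubtype.Communication, ItemCategory.Accessory },
                    { ItemSubtype.None,          ItemCategory.Software },
                };

            // Hardware vs. Software falls out of the category.
            public static bool IsHardware(ItemCategory c)
            {
                return c != ItemCategory.Software;
            }
        }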

  • How do programmers work together on a project?

    - by Laith J
    Hello, I've always programmed alone, I'm still a student so I never programmed with anyone else, I haven't even used a version control system before. I'm working on a project now that requires knowledge of how programmers work together on a piece of software in a company. How is the software compiled? Is it from the version control system? Is it by individual programmers? Is it periodic? Is it when someone decides to build or something? Are there any tests that are done to make sure it "works"? Anything will do. Thanks.

  • When to use MVP in a Windows Forms .NET application?

    - by Janalopa
    I am familiar with MVC/MVP, though my question is simple. I'm about to write a simple instant messaging application where the engine and communication part is an open API, so my software will have about 3 forms: a splash screen with login details, the options form, and a main form with all the functionality, like: friends list, send message, received messages (tabbed), search user, etc. From a UI perspective, it's important for the GUI to be in one form in my application. So my question is: for the only complicated form that I'm going to have, is it necessary to implement the MVP design pattern, or is it better in this case to just go straight ahead and put all the logic in one place? THANKS, Janalopa!
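    For scale, a passive-view MVP sketch of the one complicated form (all names invented for illustration): the form implements IMainView and stays dumb, while the presenter holds the logic and talks to the messaging API, so the single-window requirement costs nothing in testability:

        using System;
        using System.Collections.Generic;

        public interface IChatService              // wrapper over the open API
        {
            void Send(string to, string text);
            event Action<string, string> MessageReceived;   // from, text
        }

        public interface IMainView
        {
            string SelectedFriend { get; }
            string OutgoingText { get; }
            void ShowFriends(IList<string> friends);
            void AppendMessage(string from, string text);
            event EventHandler SendClicked;
        }

        public class MainPresenter
        {
            private readonly IMainView view;
            private readonly IChatService chat;

            public MainPresenter(IMainView view, IChatService chat)
            {
                this.view = view;
                this.chat = chat;
                // All decisions live here; the form only raises events.
                view.SendClicked += delegate
                {
                    chat.Send(view.SelectedFriend, view.OutgoingText);
                };
                chat.MessageReceived += view.AppendMessage;
            }
        }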

  • Combining GPL with MPL and BSD

    - by thr
    I have a software project I want to release under GPLv3. It uses two pieces of code that other parties have developed: one is the DLR by Microsoft, which is under the Microsoft Public License, and the other piece of code is under the New BSD License. The BSD-licensed code is compiled into the same binary as my code (but none of it is changed). The Ms-PL-licensed code is compiled into another assembly next to my code and linked at runtime (and none of it is changed whatsoever). Can I release my software under GPLv3 without any legal problems?

  • Why doesn't Maven's mvn clean ever work the first time?

    - by hoffmandirt
    Nine times out of ten when I run mvn clean on my projects I experience a build error. I have to execute mvn clean multiple times until the build error goes away. Does anyone else experience this? Is there any way to fix this within Maven? If not, how do you get around it? I wrote a bat file that deletes the target folders, and that works well, but it's not practical when you are working on multiple projects. I am using Maven 2.2.1.

        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to delete directory: C:\Documents and Settings\user\My Documents\software-development\a\b\c\application-domain\target.
               Reason: Unable to delete directory C:\Documents and Settings\user\My Documents\software-development\a\b\c\application-domain\target\classes\com\a\b
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 6 seconds
        [INFO] Finished at: Fri Oct 23 15:22:48 EDT 2009
        [INFO] Final Memory: 11M/254M
        [INFO] ------------------------------------------------------------------------

  • Autotesting a network interface

    - by Machado
    Hi All, I'm developing a software component responsible for testing whether a network interface has connectivity with the internet. Think of it as the same test the Xbox 360 does to inform the user whether it's connected to the Live network (just as an example). So far I figured the autotest would run like this:

      1) Test the physical network interface (the cable is connected, has up/downlink, etc...)
      2) Test the logical network (has IP address, has DNS, etc...)
      3) Connect to the internet (can access google, for example)
      4) ???
      5) Profit! (just kidding...)

    My question relates to step 3: how can I detect, correctly, whether my software has a connection to the internet? Is there any fixed IP address to ping? The problem is that I don't want to rely solely on google.com (or any other well-known address), as those can change over time, and my component will be embedded in a mobile device that is not easy to update. Any suggestions?
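    One sketch of step 3 that avoids a single hard-coded host: probe several well-known endpoints and require only one success. The host names below are placeholders; on a device that is hard to update, the list would ideally be remotely configurable rather than baked in:

        using System;
        using System.Net;

        static class ConnectivityTest
        {
            public static bool HasInternetConnectivity()
            {
                // Placeholder probe list; any single success is enough.
                string[] probes = { "www.google.com", "www.bing.com", "www.example.org" };
                foreach (string host in probes)
                {
                    try
                    {
                        HttpWebRequest req =
                            (HttpWebRequest)WebRequest.Create("http://" + host + "/");
                        req.Method = "HEAD";       // headers only, no body transfer
                        req.Timeout = 3000;        // fail fast per host
                        using (req.GetResponse())
                            return true;
                    }
                    catch (WebException)
                    {
                        // This host failed or changed; try the next one.
                    }
                }
                return false;
            }
        }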

  • Using jQuery autocomplete with a carousel for preview and selection

    - by dave
    Hi, I've successfully used jQuery autocomplete to display a list of matching images based on user input. The user experience isn't great, though, due to the number of potential matches, even with fairly prescriptive input. I've found this example from Nokia, http://www.nokia.co.uk/support/download-software/device-software-update (I know it's written in Flash), which would provide the ideal interface for what I'm trying to achieve. Does anyone have any pointers for doing this using jQuery autocomplete as a starting point? Or better still, does anyone know of an existing JavaScript library that provides this functionality? I'm using the latest release of jQuery, if that matters. Thanks, Dave.

  • What next in the career map for a Lead QA Engineer

    - by chandran
    I am a Lead QA Engineer in a software company and at a stage in my career wherein I need to plan my next move.

    Option 1: The very obvious move would be to stay as a QA Lead and eventually become a QA Manager. But I don't see very good prospects/future after that. Or am I wrong?

    Option 2: I love programming/coding, though I haven't spent a whole lot of time on that, so a direct move to becoming a Software Developer is not possible. Will moving to Test Automation eventually lead me to development? Even so, am I looking at a step down in pay and career level?

    Option 3: Moving to Product Management. Is this even possible, and if so, what would be the best approach?

    Appreciate all your responses in advance. Thanks.

  • iPad, closed environment, and threat to privacy

    - by Akshay Bhat
    I have an unusual question about the iPad. Since the iPad environment is closed and does not allow installation of diagnostic and security-related programs, how can we be sure that any of the software installed on an iPad is not infringing upon our privacy by doing things such as sending information back home? We can't install a packet tracer or any other software to check for attacks on privacy. Also, given Apple's poor track record (the Safari browser was broken in one day), I don't think trusting Apple alone would be a good idea. This might not seem to be a big issue, but for business users it would be a significant concern.

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management
    by Joe Golemba, Vice President, Product Management, Oracle WebCenter

    As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information.

    The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this; information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of far-flung data is no simple job, but the alternative (wasted effort or even regulatory compliance issues) is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive.

    Unstructured data matters

    Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, and unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts.

    Historically, content management solutions that had 21 CFR Part 11 capabilities (electronic records and signatures) were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency.

    Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee's typical business activity. Administrative safeguards, such as content de-duplication, can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization.

    Putting it in context

    Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search. Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationship between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee's day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement.

    Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing.

    Security and compliance

    Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners.

    Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide, with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures, is becoming increasingly critical for life sciences companies.

    Making the move

    Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools which can dramatically speed and reduce the cost of the data migration process through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process.

    The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years. The rapid adoption of enterprise content management, both in operational processes as well as in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and increase time to market, to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information is increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable.

    More Information

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

  • FFmpeg runs from the command line but not from PHP

    - by Freeman
    I am using an ffmpeg build for Windows to make video thumbnails. The command works well on the command line but not from PHP's exec method. I am using PHP 5.2.11. Here is the command:

        "E:/Documents and Settings/x/WINDOWS/ffmpeg" -itsoffset -4 -v "E:/Program Files/Apache Software Foundation/Apache2.2/htdocs/bs/files/videogal/c08c3d20eeb9083ed033577bd154cba6.flv" -vcodec mjpeg -vframes 1 -an -f rawvideo -s 320x240 "E:/Program Files/Apache Software Foundation/Apache2.2/htdocs/bs/files/gallery/8ff43b72b932d2a34e7a6733672ad4d6.jpg" 2>&1

    Can somebody help? I checked the permissions and they seem fine. GD is installed.
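    Two things worth checking, sketched below with placeholder paths: ffmpeg introduces its input file with -i, whereas the command above shows -v (ffmpeg's verbosity flag), possibly a transcription slip; and space-laden Windows paths survive better when each argument goes through escapeshellarg(). Capturing the exit code also makes a failure under the Apache user visible instead of silent:

        <?php
        // Sketch: all paths are placeholders; adjust to the real locations.
        $ffmpeg = 'E:/path/to/ffmpeg.exe';
        $input  = 'E:/path/to/video.flv';
        $output = 'E:/path/to/thumb.jpg';

        // escapeshellarg() quotes each path safely for the shell.
        $cmd = escapeshellarg($ffmpeg)
             . ' -itsoffset -4 -i ' . escapeshellarg($input)
             . ' -vcodec mjpeg -vframes 1 -an -f rawvideo -s 320x240 '
             . escapeshellarg($output) . ' 2>&1';

        exec($cmd, $lines, $code);
        if ($code !== 0) {
            // Surface ffmpeg's own diagnostics instead of failing silently.
            echo "ffmpeg exited with $code:\n" . implode("\n", $lines);
        }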

  • Disabling Task Manager using C# on an OS-hardened machine

    - by srk
    I am using the below code to disable the Task Manager for a kiosk application, and it works perfectly:

        public void DisableTaskManager()
        {
            RegistryKey regkey;
            string keyValueInt = "1";
            string subKey = "Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\System";

            try
            {
                regkey = Registry.CurrentUser.CreateSubKey(subKey);
                regkey.SetValue("DisableTaskMgr", keyValueInt);
                regkey.Close();
            }
            catch (Exception ex)
            {
                MessageBox.Show("DisableTaskManager" + ex.ToString());
            }
        }

    But when I run this on an OS-hardened machine I get the following error:

        DisableTaskManagerSystem.UnauthorizedAccessException: Access to the registry key
        'HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System' is denied.
           at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str)

    How can I overcome this? I need to do this for a kiosk application.
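    A diagnostic sketch (not a fix): on the hardened machine, dump the ACL of the Policies\System key to confirm whether the hardening applies a deny or read-only rule for the kiosk user, which is what the UnauthorizedAccessException suggests:

        using System;
        using Microsoft.Win32;
        using System.Security.AccessControl;
        using System.Security.Principal;

        class PolicyKeyAcl
        {
            static void Main()
            {
                // Open read-only: writing is what fails; reading usually still works.
                RegistryKey key = Registry.CurrentUser.OpenSubKey(
                    @"Software\Microsoft\Windows\CurrentVersion\Policies\System", false);
                if (key == null)
                {
                    Console.WriteLine("Key does not exist for this user.");
                    return;
                }
                RegistrySecurity sec = key.GetAccessControl(AccessControlSections.Access);
                foreach (RegistryAccessRule rule in
                         sec.GetAccessRules(true, true, typeof(NTAccount)))
                {
                    Console.WriteLine("{0,-30} {1,-6} {2}",
                        rule.IdentityReference, rule.AccessControlType, rule.RegistryRights);
                }
            }
        }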
