Search Results

Search found 26434 results on 1058 pages for 'folder options'.

Page 59/1058 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • how to iterate through multiple select options with jquery

    - by amir
    I was just wondering if it's possible to go through multiple select boxes and get their values and text (if one is selected, get its value and text; if two are selected, get both of their values and text, and so on). I have 15 select boxes on one page. Any help would be appreciated.

        <form>
          <select class="select" name="select3" id="select3">
            <option value="0">0</option>
            <option value="1.99">1</option>
            <option value="1.99">2</option>
            <option value="1.99">3</option>
            <option value="1.99">4</option>
            <option value="1.99">5</option>
            <option value="1.99">6</option>
            <option value="1.99">7</option>
            <option value="1.99">8</option>
          </select>
        </form>
        <form>
          <select class="select" name="select" id="select">
            <option value="0">0</option>
            <option value="1.99">1</option>
            <option value="1.99">2</option>
            <option value="1.99">3</option>
            <option value="1.99">4</option>
            <option value="1.99">5</option>
            <option value="1.99">6</option>
            <option value="1.99">7</option>
            <option value="1.99">8</option>
          </select>
        </form>

    All the select boxes have the same class. Thanks.
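
    A minimal sketch of one way to do this with jQuery, assuming every box carries the shared "select" class as in the markup above:

        // Loop over every select box on the page and read its current choice.
        $('.select').each(function () {
            var $box  = $(this);
            var value = $box.val();                            // value of the selected option
            var text  = $box.find('option:selected').text();   // its visible text
            if (value !== '0') {                               // treat "0" as "nothing chosen"
                console.log($box.attr('id') + ': value=' + value + ', text=' + text);
            }
        });

    The same loop can push the value/text pairs into an array instead of logging them, if they need to be summed or submitted later.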

    Read the article

  • How to persist options selected in AlertDialog spawned from ItemizedOverlay onTap method

    - by ahsteele
    In the description of how to add a list of options to an AlertDialog, the official Android documentation alludes to saving a user's preferences with one of the "data storage techniques." The examples assume the AlertDialog has been spawned within an Activity class. In my case I've created a class that extends ItemizedOverlay. This class overrides the onTap method and uses an AlertDialog to prompt the user to make a multi-choice selection. I would like to capture and persist the selections for each OverlayItem they tap on. The code below is the onTap method I've written. It functions as written but doesn't yet do what I'd hoped: I'd like to capture and persist each selection made by the user so it can be used later. How do I do that? Is using an AlertDialog in this manner a good idea? Are there better options?

        protected boolean onTap(int index) {
            OverlayItem item = _overlays.get(index);
            final CharSequence[] items = { "WiFi", "BlueTooth" };
            final boolean[] checked = { false, false };

            AlertDialog.Builder builder = new AlertDialog.Builder(_context);
            builder.setTitle(item.getTitle());
            builder.setMultiChoiceItems(items, checked, new DialogInterface.OnMultiChoiceClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int item, boolean isChecked) {
                    // for now just show that the user touched an option
                    Toast.makeText(_context, items[item], Toast.LENGTH_SHORT).show();
                }
            });
            builder.setPositiveButton("Okay", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int id) {
                    // should I be examining what was checked here?
                    dialog.dismiss();
                }
            });
            builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int id) {
                    dialog.cancel();
                }
            });
            AlertDialog alert = builder.create();
            alert.show();
            return true;
        }

    Read the article

  • plupload with webpy.

    - by markus
    Hi, I have a problem. I want to upload a file with plupload using the HTML5 runtime. This is my HTML/JS code:

        jQuery(function(){
            jQuery("#uploader").pluploadQueue({
                // General settings
                runtimes : 'html5',
                name : 'file',
                url : 'http://server.name/addContent',
                max_file_size : '${maxSize}$_("GB")',
            });
            jQuery('#form_upload_file').submit(function(e) {
                var uploader = jQuery('#uploader').pluploadQueue();
                // Validate number of uploaded files
                if (uploader.total.uploaded == 0) {
                    // Files in queue, upload them first
                    if (uploader.files.length > 0) {
                        // When all files are uploaded, submit the form
                        uploader.bind('UploadProgress', function() {
                            if (uploader.total.uploaded == uploader.files.length)
                                jQuery('#form_upload_file').submit();
                        });
                        uploader.start();
                    } else
                        alert('You must at least upload one file.');
                    e.preventDefault();
                }
            });
        });

        <form id="form_upload_file" action="#" method="POST">
            <div id="uploader"></div>
            <input type="hidden" name="token" value="token" />
            <input type="hidden" name="idUser" value="$idUser" />
        </form>

    So, when I click the button to upload (the submit() method is not called), the browser sends an OPTIONS HTTP request to my server, and I don't know what I must do to save the file. This is my web.py code:

        def OPTIONS(self):
            web.header('Content-type', 'text/plain: charset=utf-8')
            web.header('Cache-Control', 'no-store, no-cache, must-revalidate')
            web.header('Cache-Control', 'post-check=0, pre-check=0', False)
            web.header('Pragma', 'no-cache')

        def POST(self):
            input = web.input(_unicode=False, file={})  # retrieve the inputs
            self.copy(input.file.file)

    etc. Any idea? Thanks.

    Read the article

  • Directly call distutils' or setuptools' setup() function with command name/options, without parsing

    - by Ryan B. Lynch
    I'd like to call Python's distutils' or setuptools' setup() function in a slightly unconventional way, but I'm not sure whether distutils is meant for this kind of usage. As an example, let's say I currently have a 'setup.py' file which looks like this (lifted verbatim from the distutils docs--the setuptools usage is almost identical):

        from distutils.core import setup

        setup(name='Distutils',
              version='1.0',
              description='Python Distribution Utilities',
              author='Greg Ward',
              author_email='[email protected]',
              url='http://www.python.org/sigs/distutils-sig/',
              packages=['distutils', 'distutils.command'],
             )

    Normally, to build just the .spec file for an RPM of this module, I could run python setup.py bdist_rpm --spec-only, which parses the command line and calls the 'bdist_rpm' code to handle the RPM-specific stuff. The .spec file ends up in './dist'.

    How can I change my setup() invocation so that it runs the 'bdist_rpm' command with the '--spec-only' option, WITHOUT parsing command-line parameters? Can I pass the command name and options as parameters to setup()? Or can I manually construct a command line, and pass that as a parameter, instead?

    NOTE: I already know that I could call the script in a separate process, with an actual command line, using os.system() or the subprocess module or something similar. I'm trying to avoid using any kind of external command invocations. I'm looking specifically for a solution that runs setup() in the current interpreter.

    For background, I'm converting some release-management shell scripts into a single Python program. One of the tasks is running 'setup.py' to generate a .spec file for further pre-release testing. Running 'setup.py' as an external command, with its own command line options, seems like an awkward method, and it complicates the rest of the program. I feel like there may be a more Pythonic way.

    Read the article

  • How to create a log file in the folder which will be created at run time

    - by swati
    Hello everyone, I'm new to the Apache logger. I am using Apache log4j for my application, with the following configuration file:

        # configure the root logger
        log4j.rootLogger=INFO, STDOUT, DAILY

        # configure the console appender
        log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
        log4j.appender.STDOUT.Target=System.out
        log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
        log4j.appender.STDOUT.layout.conversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} [%p] %c:%L - %m%n

        # configure the daily rolling file appender
        log4j.appender.DAILY=org.apache.log4j.DailyRollingFileAppender
        log4j.appender.DAILY.File=log4jtest.log
        log4j.appender.DAILY.DatePattern='.'yyyy-MM-dd-HH-mm
        log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
        log4j.appender.DAILY.layout.conversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} [%p] %c:%L - %m%n

    So when my application runs it creates a folder called somename_2010-04-09-23-09, and my log file has to be created inside this somename_2010-04-09-23-09 folder (which is created at run time). Is there any way to do that? Is there any way to specify in the configuration file that the log file should be created at run time inside the somename_2010-04-09-23-03 folder? I would really appreciate it if someone could answer my questions. Thanks, Swati

    Read the article

  • Passing options in autospec with Cucumber in Ruby on Rails Development

    - by TK
    I always run autospec to run features and RSpec at the same time, but running all the features is often time-consuming on my local computer, and I would run every feature before committing code. I would like to pass arguments to the autospec command, but autospec obviously doesn't accept arguments directly. Here's the output of autospec -h:

        autotest [options]
        options:
        -h -help   You're looking at it.
        -v         Be verbose. Prints files that autotest doesn't know how to map to tests.
        -q         Be more quiet.
        -f         Fast start. Doesn't initially run tests at start.

    I do have a cucumber.yml in the config directory, and I also have rerun.txt in the Rails root directory. cucumber -h gives me a lot of information about arguments. How can I run autospec against features that are tagged as @wip? I think I can make use of config/cucumber.yml, which holds the profile definitions: I can run cucumber -p wip to run only @wip-tagged features, but I'd like to do this with autospec. I would appreciate any tips for working with many spec and feature files.

    Read the article

  • Windows Server - share files without access for administrator

    - by Pawel
    We have an MS Windows Server 2008 R2 based server that is administrated by our IT department. We would like to achieve two things simultaneously:

    1. A folder on the server, containing several thousand files (new files added frequently), that is accessible to some Active Directory users (e.g. the board of directors) but is not accessible to IT department employees.
    2. IT department employees still maintain the rights to administrate the server, including installing new software and services.

    We already checked some solutions:

    - Using NTFS access rights. Unfortunately IT (members of the "Administrators" group) can set themselves as new owners of the files and change the permissions so that they gain access to the files.
    - Enabling EFS. Unfortunately, even if you do not allow IT to access files, they can still disable EFS completely because they have administrative rights. Moreover, as far as I know you have to manually add permissions for all users but the owner for each new file - very inconvenient.
    - Creating a new role for the IT department that has all the privileges apart from taking ownership of files. Unfortunately, if you're not a member of the Administrators group, you cannot install new software, no matter what privileges you add to the role.
    - TrueCrypt - nice free encryption software, but with poor sharing capabilities. You can either mount an encryption container on the server (and then IT has access to its contents) or mount it locally, but then only one user can mount it for writing.
    - AxCrypt - free encryption software that enables file-by-file encryption on the server. There are some disadvantages though: you have to manually encrypt each new file added, the files have their extensions changed, and you can only set one password for all files (so all users have to know this one password).

    Any other ideas? Our budget is limited, so enterprise-class software from Symantec or PGP would probably not be an option.

    Read the article

  • What are my options for selling software independently on Windows?

    - by technomalogical
    I am looking to port a tool from the Mac App Store over to Windows, the platform where I spend most of my time these days. I've spoken with the author of the original app and we've begun talking about licensing options should I decide to sell the application, and it seems like it would be feasible. I've never sold software independently, let alone on Windows. As far as I know, there is not (yet) an equivalent app store for Windows (maybe one is coming with Windows 8). Assuming my product was done today and I was ready to go to market, what options do I have for selling software for Windows as an independent developer or Micro-ISV? I know I can sell it through my own website and accept PayPal, but are there options that will offer more visibility, similar to that of the Apple app stores? Any options to avoid?

    Read the article

  • Windows 7 shows a drive as full in summary but files shown on drive are very small

    - by Rob
    I have a drive partitioned so it is seen by Windows as two drives: C:\ and D:\. Windows 7 shows D:\ as full in the graphical summary of all the drives in 'My Computer' - the bar graph indicates that nearly all of the drive's 108 GB capacity is used. So I go into the D:\ drive to look at the files and see several folders. I select them all and use right-click > Properties to count their size, expecting the value to be about the same as what Windows reports in the summary, i.e. nearly 108 GB. But the Properties window shows the files are very small - KBs and MBs, nowhere near 108 GB. One of the folders is a backup, but its size is very small. I've checked the folder options to show all system files and hidden files too, and counted these in the Properties. Something invisible is holding the space. What is happening here? I'm afraid to delete anything in case it removes valuable backups. Have I got huge backups here? Why can't I see them? How do I see them?

    Read the article

  • Roaming Profiles & Redirected Folders - storage consumption? offline files and caching?

    - by Ben Swinburne
    I understand the concepts of both roaming profiles and folder redirection and have used both separately before. I am about to set up a network from scratch and would ideally like to use both, primarily for the following reasons:

    - Roaming profiles allow users to log on to any machine and have their profile.
    - Redirected folders allow users to have their My Documents, Desktop, etc. backed up without the need to log off at the end of the day. The servers can run their backups overnight and there are no missing files due to the user not logging off.
    - Folder redirection largely alleviates the slow log-in times caused by large profiles.

    My question is: if some of the folders are redirected and therefore not part of the roaming profile, what happens on machines which truly roam (i.e. laptops)? If there are offline files or a cache, does this mean that the problem whereby a user has to log off comes back? By having them both enabled, is there any duplication, i.e. if I have a users$ share and a profiles$ share, would I have Desktop twice, for example?

    Read the article

  • access settings from whole jquery component

    - by Pacuraru Daniel
    I am trying to develop a jQuery component for dialog modals and I don't know how to access the settings from all of the component's functions. I need to access settings.zIndex from the open function and it doesn't seem to work.

        (function($) {
            var methods = {
                init: function(options) {
                    var defaults = {
                        bgClass: "fancy-dialog-bg",
                        bgShow: null,
                        zIndex: 100,
                        show: null
                    };
                    var settings = $.extend(defaults, options);
                    return this.each(function() {
                        var obj = $(this).hide()
                            .css("position", "fixed")
                            .css("z-index", settings.zIndex)
                            .css("left", "300px")
                            .css("top", "200px");
                    });
                },
                open: function() {
                    // alert(settings.zIndex); not working
                    var tes = $("<div></div>")
                        .css("backgroundColor", "#f00")
                        .css("position", "fixed")
                        .css("z-index", "99")
                        .css("width", "50%")
                        .css("height", "100%")
                        .css("left", "0")
                        .css("top", "0");
                    $('body').append(tes);
                    var obj = $(this);
                    obj.show();
                },
                close: function() {
                    var obj = $(this);
                    $("#fancy-dialog-bg-" + obj.attr('id')).remove();
                    obj.hide();
                }
            };

            $.fn.fancyDialog = function(method) {
                if (methods[method]) {
                    return methods[method].apply(this, Array.prototype.slice.call(arguments, 1));
                } else if (typeof method === 'object' || !method) {
                    return methods.init.apply(this, arguments);
                } else {
                    $.error('Method ' + method + ' does not exist.');
                }
            };
        })(jQuery);
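
    One common way around this - only a sketch, assuming the plugin skeleton above stays otherwise the same (the 'fancyDialog' data key is just a name chosen for the example) - is to store the merged settings on the element with .data() in init and read them back in the other methods:

        (function($) {
            var methods = {
                init: function(options) {
                    var settings = $.extend({ bgClass: "fancy-dialog-bg", zIndex: 100 }, options);
                    return this.each(function() {
                        // Stash the merged settings on the element itself so every
                        // other method can retrieve them later.
                        $(this).data('fancyDialog', settings)
                               .hide()
                               .css({ position: "fixed", zIndex: settings.zIndex, left: "300px", top: "200px" });
                    });
                },
                open: function() {
                    return this.each(function() {
                        var settings = $(this).data('fancyDialog') || {};
                        // settings.zIndex is now available here.
                        $("<div/>", { "class": settings.bgClass })
                            .css({ position: "fixed", zIndex: settings.zIndex - 1,
                                   width: "100%", height: "100%", left: 0, top: 0 })
                            .appendTo("body");
                        $(this).show();
                    });
                }
            };

            $.fn.fancyDialog = function(method) {
                if (methods[method]) {
                    return methods[method].apply(this, Array.prototype.slice.call(arguments, 1));
                }
                return methods.init.apply(this, arguments);
            };
        })(jQuery);

    Usage would then look like $('#dialog').fancyDialog({ zIndex: 250 }); followed by $('#dialog').fancyDialog('open');, with each dialog remembering its own options.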

    Read the article

  • Where should an application's default folder live?

    - by HotOil
    Hi: I'm creating a little app that configures a connected device and then saves the config information in a file. The filename cannot be chosen by the user, but its location can be. Where is the best place for the app's default save-to folder? I have seen examples out there where it is the "MyDocuments" location (e.g. Visual Studio does this). I have also seen a folder created right at the top of the C:\ drive; I find that to be a little obnoxious, personally. It could be in Program Files\[Manufacturer] or Program Files\[Product Name], or wherever the app was installed. I have used this location in the past; I dislike it because Windows Explorer does not allow a user to browse there very easily ('browsability'). Going with this last notion that 'browsability' is a factor, I suppose MyDocuments is the best choice. Is this the most common, most widely accepted practice? I think historically we have chosen the install folder because that co-locates the data with the device-management utilities, but I would really like to get away from that. I don't want the user to have to go pawing through system files to find his/her data, especially if that person is not too Windows-savvy. Also, I am using the .NET WinForms FolderBrowserDialog, and the Environment.SpecialFolder enum isn't helpful in setting up the dialog to point into the Program Files folder. Thanks for your input! Suz.

    Read the article

  • Can the "Documents" standard folder be rescued and how?

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job. So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess. My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en-masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary?

    Read the article

  • PHP JQuery: Where to specify uploadify destination folder

    - by Eamonn
    I have an uploadify script running with a basic setup. It works fine when I hard-code the destination folder for the images into uploadify.php - now I want to make that folder dynamic. How do I do this? I have a PHP variable $uploadify_path which contains the path to the folder I want. I have switched out my hard-coded $targetPath = path/to/directory for $targetPath = $uploadify_path in both uploadify.php and check_exists.php, but it does not work: the file upload animation runs and says it is complete, yet the directory remains empty. The file is not hiding out somewhere else either. I see there is an option in the JavaScript to specify a folder. I tried this also, but to no avail. If anyone could educate me on how to pass this variable destination to uploadify, I'd be very grateful. I include my current code for checking (basically the default):

    The JavaScript:

        <script type="text/javascript">
        $(function() {
            $('#file_upload').uploadify({
                'swf'      : 'uploadify/uploadify.swf',
                'uploader' : 'uploadify/uploadify.php',
                // Put your options here
            });
        });
        </script>

    uploadify.php:

        $targetPath = $_SERVER['DOCUMENT_ROOT'] . $uploadify_path;  // Relative to the root

        if (!empty($_FILES)) {
            $tempFile   = $_FILES['Filedata']['tmp_name'];
            $targetFile = $targetPath . $_FILES['Filedata']['name'];

            // Validate the file type
            $fileTypes = array('jpg','jpeg','gif','png');  // File extensions
            $fileParts = pathinfo($_FILES['Filedata']['name']);

            if (in_array($fileParts['extension'], $fileTypes)) {
                move_uploaded_file($tempFile, $targetFile);
                echo '1';
            } else {
                echo 'Invalid file type.';
            }
        }
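
    One minimal sketch of making the destination dynamic, assuming the folder can safely be exposed as a request parameter (the 'folder' parameter name and the example path are made up for illustration): pass the value on the uploader URL from the page script, since the upload itself is a separate HTTP request and does not see the PHP variables of the page that rendered the form.

        <script type="text/javascript">
        $(function() {
            // Hypothetical value; in practice the page would echo $uploadify_path here.
            var uploadFolder = '/uploads/gallery/';

            $('#file_upload').uploadify({
                'swf'      : 'uploadify/uploadify.swf',
                // Append the folder as a query-string parameter on the upload endpoint.
                'uploader' : 'uploadify/uploadify.php?folder=' + encodeURIComponent(uploadFolder)
            });
        });
        </script>

    uploadify.php could then build $targetPath from $_GET['folder'] (validated against a whitelist of allowed directories) instead of relying on $uploadify_path being in scope during the upload request.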

    Read the article

  • Update Options on Existing jQuery Object

    - by Vince Kronlein
    I'm using Bootstrap and DataTables in my app and I have a default initializer for tables based on class: I can just add the class data-table to a table and it gets instantiated with the default values I want. I'd like to know how to change or update specific options for a specific table.

        if ($.fn.dataTable) {
            $('.data-table').dataTable({
                sDom: "R<'row'<'span6'l><'span6'f>r>t<'row'<'span6'i><'span6'p>>",
                sPaginationType: "bootstrap",
                oLanguage: {
                    "sLengthMenu": "_MENU_ &nbsp; records per page"
                },
                aoColumnDefs: [
                    { "bSortable": false, "aTargets": [ 0 ] }
                ]
            });
        }

    All my data tables have a checkbox in the first column, so the above removal of sorting works for all of them. But I'd like to be able to update aoColumnDefs on a table-by-table basis so I can add other columns that I don't want sorted. So let's say I have a table $('#member-list') - how do I access this object and update its DataTables options in jQuery? I can't find any reference or help anywhere. Thanks a lot! -V
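
    One sketch of how this could be handled, assuming the tables can be initialized from a shared defaults object rather than reconfigured afterwards (the column indexes 3 and 4 below are placeholders): keep the common settings in one place and merge per-table overrides with $.extend().

        // Shared defaults used by every .data-table on the page.
        var dataTableDefaults = {
            sDom: "R<'row'<'span6'l><'span6'f>r>t<'row'<'span6'i><'span6'p>>",
            sPaginationType: "bootstrap",
            oLanguage: { "sLengthMenu": "_MENU_ &nbsp; records per page" },
            aoColumnDefs: [ { "bSortable": false, "aTargets": [ 0 ] } ]
        };

        if ($.fn.dataTable) {
            // Generic tables get the defaults as-is...
            $('.data-table').not('#member-list').dataTable(dataTableDefaults);

            // ...while #member-list also disables sorting on two extra columns.
            $('#member-list').dataTable($.extend(true, {}, dataTableDefaults, {
                aoColumnDefs: [ { "bSortable": false, "aTargets": [ 0, 3, 4 ] } ]
            }));
        }

    The deep $.extend(true, {}, ...) keeps the shared defaults untouched, so each table only states what differs from the house style.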

    Read the article

  • Disable MSBuild output of "Processing /ORDER options..."

    - by Jippers
    The output file from our project build has gone from 6 MB to over 75 MB of text. Diff'ing the last good build and the first build where it blew up, there's a section like this in the latest output file:

        Processing /ORDER options
        External code objects not listed in the /ORDER file:
          ?onCallDisconnected@CallStateConnected@CallImpl@space@@UAEXV?$shared_ptr@VCallImpl@space@@@boost@@V?$shared_ptr@VGenericCall@space@@@5@K@Z ; framework.lib(CallStates.obj)
          ??_DBoolSetting@space@@QAEXXZ ; framework.lib(SettingValueImpl.obj)
          ...... continues for ~50MB
          ??$?0U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@?$allocator@U_Node@?$_Tree_nod@V?$_Tmap_traits@V?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@JU?$less@V?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@@2@V?$allocator@U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@2@$0A@@std@@@std@@@std@@QAE@ABV?$allocator@U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@1@@Z ; CallStatistics.obj
        Finished processing /ORDER options

    I'm not sure how this got in there, but does anyone know how to turn it off?

    Read the article

  • [javascript] Populating jsTree based on XML data uploaded to server folder

    - by PFM
    tl;dr: How can I populate jsTree based on a folder location instead of an exact XML URL?

    I'm looking for a little direction on this project. Currently I am trying to copy the file structures of hard drives as XML files and recreate them using jsTree on the web server, for a completely independent version of the file structure. I have a Python script that outputs XML files formed for jsTree and automatically uploads them to a folder on the server. The problem is that now I am a little lost, because I have to manually enter each XML file into the jsTree code for it to display, so I have multiple entries like this:

        $("#tree1").jstree({
            "plugins" : [ "themes", "xml_data", "ui", "search", "types" ],
            "xml_data" : {
                "ajax" : { "url" : "./XML_DATA/DRIVE1.xml" },
                "xsl" : "nest"
            },

    I see in the documentation that instead of being populated by the direct file, the folders are populated by "server.php", but nowhere in the PHP code does it point to any directories or files. After considering the problem I thought of a few solutions and could use some advice on them:

    1. Should I be trying to write PHP code that automatically looks through my XML_DATA folder and loads each XML file?
    2. Should I just upload all the XML to MySQL and populate my tree based on that?
    3. Should the JavaScript be the code looking through the server's folder for XML files?

    All the XML is formed the same way, but the number of XML files on the server will increase and they will have to be refreshed as well, as they will be overwritten with changes. Any direction would be appreciated, thanks.
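
    A rough sketch of option 3, purely illustrative: it assumes a small server-side listing endpoint (here called ./XML_DATA/list.php, a made-up name) that returns a JSON array of the XML file names in the folder, and a placeholder #trees element on the page.

        // Fetch the list of XML files, then build one tree per file.
        $.getJSON('./XML_DATA/list.php', function (files) {
            $.each(files, function (i, fileName) {
                // Create a container div for this drive's tree.
                var $container = $('<div/>', { id: 'tree' + (i + 1) }).appendTo('#trees');

                $container.jstree({
                    "plugins"  : [ "themes", "xml_data", "ui", "search", "types" ],
                    "xml_data" : {
                        "ajax" : { "url" : "./XML_DATA/" + fileName },
                        "xsl"  : "nest"
                    }
                });
            });
        });

    The listing endpoint itself only needs to scan the folder for *.xml and echo the names as JSON; refreshing a tree after its XML file is overwritten is then a matter of emptying its container and initializing it again.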

    Read the article

  • Simple Oracle File repository with folder hierarchy

    - by Ope
    I have an application that stores a large number of files (XML and binary) in folder hierarchies. Currently the main method is storing them in the file system or using a legacy CMS, which we want to get rid of. The CMS supports Oracle, and a customer wants to keep the files in Oracle because of enterprise policies (backup etc.). The question is: is there a simple implementation of a file repository with a folder hierarchy for Oracle? What I am looking for is a small .NET component or example code (PL/SQL and/or .NET) that would have the following methods:

    - Create, Delete, Exists folder
    - CRUD file
    - Move and potentially Copy file or directory
    - Access to files and folders with paths like "/root/folder1/folder2/file.xml"
    - Ability to get all the files and folders in a folder, and potentially also the entire directory tree
    - Tree traversal; getting the parent, all children etc. needs to be fast

    I need the implementation in .NET, but if it was just the stored procedures, I could create the .NET calling code. I have pointers to generic articles for creating hierarchies in a DB, so if I need to do it from scratch, I know where to start. What I am asking here is: is there already an implementation that I could take without doing this from scratch? It seems like such a generic requirement... If the answer is a CMS, document management system or such, it should be open source or at least quite cheap (some hundreds per server) and it should be possible to deploy it XCopy-style - hopefully only a couple of DLLs. I do not need - or want - a full-featured big CMS with dozens of DLLs, and especially not an MSI installation. I have tried to google this, but the words "repository", "CMS", "file hierarchy" etc. give so many answers that the searches are pretty much useless. Thanks, OPe

    Read the article

  • Visual Studio Folder Structure

    - by nick
    I am not sure how this works. I am using Visual Studio 2008 and I created a Class Library (say its name is Test). I also selected the option to create a folder for the solution. Following is the directory structure I get:

        Test
            Test
                bin
                    Debug
                obj
                    Debug
                Properties
                    AssemblyInfo.cs
                Test.cs
                Test.csproj
            Test.sln
            Test.suo

    This is the default and I have no problems running my code this way. My query is that I see other solutions (class libraries) created in Subversion by others before that have a different structure:

        Test
            .svn
            lib
                <<Reference 1>>
                <<Reference 2>>
                ....
                <<Reference N>>
            src
                bin
                    Debug
                obj
                    Debug
                Properties
                    AssemblyInfo.cs
                Test.cs
                Test.csproj
                Test.sln
                Test.suo

    My query is how to create this structure? All the references to other projects are maintained in the lib folder and the source code is maintained in the src folder. This is not what is happening with me. When I open the solution in Visual Studio, I cannot see any such folder like lib or src; it shows the same way as mine. Kindly help, and forgive me for being so elaborate. Thanks

    Read the article

  • Javascript drop down menu calculation

    - by Janis Yee
    I'm having a bit of an issue with my code. I'm trying to do a calculation from a drop-down menu and then put the result into a textbox with onChange. I've been at it for days trying to figure it out and Googling ways to code the function. Can anyone please help or give me advice on how to approach this?

        function numGuest() {
            var a = document.getElementById("guests");
            if (a.options[a.selectedIndex].value == "0") {
                registration.banq.value = "0";
            } else if (a.options[a.selectedIndex].value == "1") {
                registration.banq.value = "30";
            }
        }

        <select id="guests" name="guests">
            <option value="0">0</option>
            <option value="1">1</option>
            <option>2</option>
            <option>3</option>
            <option>4</option>
            <option>5</option>
        </select>
        <input type="text" id="banq" name="banq" onChange="numGuest()" disabled />
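
    A small sketch of one way this is usually wired up, assuming the intent is a total of 30 per guest (that per-guest price is an assumption based on the "30" above): the handler needs to run when the drop-down changes, so it goes on the select; putting onChange on the disabled, script-filled textbox means it never fires, because assigning a value from script does not trigger that event.

        <select id="guests" name="guests" onchange="numGuest()">
            <option value="0">0</option>
            <option value="1">1</option>
            <option value="2">2</option>
            <option value="3">3</option>
            <option value="4">4</option>
            <option value="5">5</option>
        </select>
        <input type="text" id="banq" name="banq" readonly />

        <script type="text/javascript">
        function numGuest() {
            var guests = document.getElementById("guests");
            var count  = parseInt(guests.options[guests.selectedIndex].value, 10);
            // 30 per guest; write the total into the textbox.
            document.getElementById("banq").value = count * 30;
        }
        </script>

    Using readonly instead of disabled keeps the user from typing into the box while still letting the value be submitted with the form, which a disabled input would not be.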

    Read the article

  • How to setup linux permissions the WWW folder?

    - by Xeoncross
    Updated summary: the /var/www directory is owned by root:root, which means that no one else can use it and it's effectively useless. Since we all want a web server that actually works (and no one should be logging in as "root"), we need to fix this. Only two entities need access:

    1. PHP/Perl/Ruby/Python all need access to the folders and files, since they create many of them (i.e. /uploads/). These scripting languages should be running under nginx or apache (or even some other thing like FastCGI for PHP).
    2. The developers.

    How do they get access? I know that someone, somewhere has done this before. With however many billions of websites out there, you would think there would be more information on this topic. I know that 777 is full read/write/execute permission for owner/group/other, so that doesn't seem to be needed, as it leaves random users full permissions. What permissions need to be used on /var/www so that:

    - source control like git or svn,
    - users in a group like "websites" (or even added to "www-data"),
    - servers like apache or lighttpd,
    - and PHP/Perl/Ruby

    can all read, create, and run files (and directories) there? If I'm correct, Ruby and PHP scripts are not "executed" directly - they are passed to an interpreter - so there is no need for execute permission on files in /var/www...? Therefore, it seems like the correct permission would be chmod -R 1660, which would make all files shareable by these four entities, make all files non-executable by mistake, block everyone else from the directory entirely, and set the permission mode to "sticky" for all future files. Is this correct?

    Update: I just realized that files and directories might need different permissions - I was talking about files above, so I'm not sure what the directory permissions would need to be.

    Update 2: The folder structure of /var/www changes drastically, as one of the four entities above is always adding (and sometimes removing) folders and sub-folders many levels deep. They also create and remove files that the other three entities might need read/write access to. Therefore, the permissions need to do the four things above for both files and directories. Since none of them should need execute permission (see the question about Ruby/PHP above), I would assume that rw-rw-r-- permission would be all that is needed and completely safe, since these four entities are run by trusted personnel (see #2) and all other users on the system only have read access.

    Update 3: This is for personal development machines and private company servers. No random "web customers" like on a shared host.

    Update 4: This article by Slicehost seems to be the best at explaining what is needed to set up permissions for your www folder. However, I'm not sure what user or group apache/nginx with PHP or svn/git run as, and how to change them.

    Update 5: I have (I think) finally found a way to get this all to work (answer below). However, I don't know if this is the correct and SECURE way to do this. Therefore I have started a bounty. The person that has the best method of securing and managing the www directory wins.

    Read the article

  • Simulating remote environment?

    - by ropstah
    I'm building a .NET MVC application which will be deployed on a Windows 2003 server. The server has a folder @ c:\Website\Files which needs to be written to from the application. How do I cope with this in my development environment so that the MSI setup file, which I will compile, will work correctly when deployed? p.s. the folder is NOT located in a subdirectory of the application project

    Read the article
