Search Results

Search found 18079 results on 724 pages for 'compiler options'.

Page 68/724 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • See socket options on existing sockets created by other apps?

    - by nailer
    I'd like to test whether particular socket options have been set on an existing socket, i.e., pretty much everything you can see in:

        #!/usr/bin/env python
        '''See possible TCP socket options'''
        import socket

        sockettypelist = [x for x in dir(socket) if x.startswith('SO_')]
        sockettypelist.sort()
        for sockettype in sockettypelist:
            print sockettype

    Does anyone know how I can see the options on existing sockets, i.e., those created by other processes? Alas, nearly all the documentation I read on Python socket programming is about making new sockets.
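
    As background on why this is hard: querying an option requires a handle to the socket itself, so without kernel support you can only inspect sockets your own process owns. A minimal sketch of that per-descriptor query in C (the address family and the option queried are arbitrary choices for illustration):

        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void) {
            /* getsockopt() takes a file descriptor, which is why sockets
               created by other processes are out of reach this way. */
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int value = 0;
            socklen_t len = sizeof(value);
            if (getsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &value, &len) == 0)
                printf("SO_KEEPALIVE = %d\n", value);
            close(fd);
            return 0;
        }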

    Read the article

  • How do I remove unnecessary options from a select field in a Drupal form?

    - by thismax
    I'm using the better_exposed_filters module to create a set of exposed filters for a view. One of the filters is displayed as a select field, and I would like the field to display only options that are actually associated with content in the database. Currently, I am doing this using hook_form_alter(). For simplicity, in the following example the field is called 'foo' and the content type with that field is called 'bar':

        function my_module_form_alter(&$form, $form_state, $form_id) {
          // Get all the values of foo that matter.
          $resource = db_query('select distinct field_foo_value from {content_type_bar}');
          $foo = array();
          while ($row = db_fetch_object($resource)) {
            $foo[$row->field_foo_value] = $row->field_foo_value;
          }
          $form['foo']['#options'] = $foo;
        }

    This works great -- the form displays only the options I want to display. Unfortunately, the view doesn't actually display anything initially, and I also get the following error message: "An illegal choice has been detected. Please contact the site administrator." After I filter options with the form once, everything seems to work fine. Does anyone know how I can solve this problem? I'm open to an entirely different way of weeding out filter options, if need be, or to a way of figuring out how to address that error.

    Read the article

  • Current SPARC Architectures

    - by Darryl Gove
    Different generations of SPARC processors implement different architectures. The architecture that the compiler targets is controlled implicitly by the -xtarget flag and explicitly by the -xarch flag. If an application targets a recent architecture, then the compiler gets to play with all the instructions that the new architecture provides. The downside is that the application won't work on older processors that don't have the new instructions. So for developers there is a trade-off between performance and portability. The way we have solved this in the compiler is to assume a "generic" architecture, and we've made this the default behaviour of the compiler. The only flag that doesn't make this assumption is -fast, which tells the compiler to assume that the build machine is also the deployment machine, so the compiler can use all the instructions that the build machine provides. The -xtarget=generic flag tells the compiler explicitly to use this generic model. We work hard on making generic code work well across all processors, so in most cases this is a very good choice. It is also of interest to know which processors support the various architectures. The original post showed this as a Venn diagram; a textual description is as follows: The T1 and T2 processors, in addition to most other SPARC processors shipped in the last 10+ years, supported V9b, or sparcvis2. The SPARC64 processors from Fujitsu, used in the M-series machines, added support for the floating point multiply accumulate instruction in the sparcfmaf architecture. Support for this instruction also appeared in the T3; this is called sparcvis3. Later SPARC64 processors added the integer multiply accumulate instruction; this architecture is sparcima. Finally, the T4 includes support for both the integer and floating point multiply accumulate instructions in the sparc4 architecture. So the conclusion should be: floating point multiply accumulate is supported in both the T-series and M-series machines, so it should be a relatively safe bet to start using it. The T4 is a very good machine to deploy to because it supports all the current instruction sets.
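
    Since the takeaway above is that multiply accumulate is now a safe bet, it may help to see how it is reached from source. A minimal sketch in C, assuming a compiler that can map the C99 fmaf() function onto the hardware instruction when the target architecture provides it:

        /* fmaf() computes a*b + c as a single fused operation; on targets
           such as sparcvis3/sparc4 the compiler can emit the floating point
           multiply accumulate instruction for it directly.
           Build with the appropriate -xarch/-xtarget and link with -lm. */
        #include <math.h>
        #include <stdio.h>

        int main(void) {
            float result = fmaf(2.0f, 3.0f, 1.0f);  /* 2*3 + 1 = 7 */
            printf("%f\n", result);
            return 0;
        }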

    Read the article

  • how to properly free a char **table in C

    - by Samantha
    Hello, I need your advice on this piece of code: the table fields options[0], options[1], etc. don't seem to be freed correctly. Thanks for your answers.

        int main() {
            ....
            char **options;
            options = generate_fields(user_input);
            for (i = 0; i < sizeof(options) / sizeof(options[0]); i++) {
                free(options[i]);
                options[i] = NULL;
            }
            free(options);
        }

        char **generate_fields(char *) {
            char **options = malloc(256 * sizeof(char *));
            ...
            return options;
        }
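
    For context on why the loop above misbehaves: sizeof(options) is the size of the pointer, not of the allocated table, so sizeof(options) / sizeof(options[0]) is 1 and at most one entry is freed. A minimal corrected sketch, assuming the table size is the 256 from generate_fields() and that unused slots were NULL-initialized (e.g., allocated with calloc):

        #include <stdlib.h>

        #define NUM_FIELDS 256  /* must match the allocation in generate_fields() */

        void free_fields(char **options) {
            /* sizeof() cannot recover an array length from a pointer,
               so the count has to be tracked explicitly. */
            for (size_t i = 0; i < NUM_FIELDS; i++) {
                free(options[i]);   /* free(NULL) is a harmless no-op */
                options[i] = NULL;
            }
            free(options);
        }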

    Read the article

  • Difficulty with MooTools Class.extend

    - by Erin Drummond
    Consider the following code:

        var Widget = new Class({
            Implements: [Options],
            options: {
                "name": "BaseWidget"
            },
            initialize: function(options) {
                alert("Options are: " + JSON.stringify(options));         // alerts "Options are: undefined"
                this.setOptions(options);
                alert("My options are: " + JSON.stringify(this.options)); // alerts "My options are: { 'name' : 'BaseWidget' }"
            },
            getName: function() {
                return this.options.name;
            }
        });

        var LayoutWidget = Widget.extend({
            initialize: function() {
                this.parent({ "name": "Layout" });
            }
        });

        alert(new LayoutWidget().getName()); // alerts "BaseWidget"

    I am having difficulty determining why the argument passed in the "this.parent()" call in the "initialize" function of LayoutWidget comes through as "undefined" in the initialize function of Widget. I am using MooTools 1.2.2. Would somebody be able to point me in the right direction?

    Read the article

  • Recover data from physically damaged harddrive. What are my options?

    - by Michael Kniskern
    I was trying to replace the power supply in my desktop PC and ended up physically damaging the data connection from the hard drive to the motherboard. The plastic shelf for the copper prongs on the hard drive broke into the cable. (The original post included a picture of the damage.) I went to the Best Buy Geek Squad to discuss my options, and they said they would need to send it to the recovery center; it could cost anywhere between $250 and $1600 USD to recover the data from the hard drive. Is this reasonable for data recovery from a physically damaged hard drive? Are there any other options I can explore? I am going to talk to Data Doctors to see what my options are.

    Update: I took the HD to Data Doctors, and they told me that the SATA connection was broken, so they would need to replace the data connector and then copy the data to a brand new hard drive. So, with the initial analysis, the cost of replacement parts, and the data recovery fee, it came out to $865.00 USD. The technician specifically stated that if this were an older hard drive, they would just need to replace the data connector. But because there is information specific to the individual hard drive in the flash ROM, they need to transfer the data to a brand new hard drive.

    Read the article

  • 10GE network: Is it still deadly expensive? Any options?

    - by BarsMonster
    Hi! I am building a home cluster where I am going to have about 16 nodes which can live with 1G ports, but I really want to have 10GE on the file server & central node. It's all local, so no need for cables longer than 3-5m. And of course I want to spend as little money as possible (not going to spend more than the whole cluster costs). What are my options?

    1) The legacy solution is to take some 24-48 port 1GE switch and connect to the file/central nodes via 4-8 aggregated links. This will work, I guess, and the cost is very acceptable, but I am not sure if it's OK to use that many aggregated links. And of course it would be hard to double the bandwidth when needed...

    2) A switch with several 10GE uplink 'ports'. As far as I see, they all require modules which cost about $1000, so I will need 4 10G modules and 2 10GE cards... Smells like way more than $5000...

    3) Connect the file & central node via 2 10G cards directly, and put 4 quad-port 1GE NICs on the file server. I am saving on 2 10G modules and a switch; the file server will have to do packet routing, but it's still going to have a lot of CPU left.

    4) Any other options? InfiniBand?

    5) Do Myrinet adapters work fine? I guess there are no cheaper options?

    6) Hmm... Scrap the file server, put it all on the central node, and provide a dedicated 1GE port for each of the nodes... This is sad...

    Read the article

  • How "duplicated" Java code is optimized by the JVM JIT compiler?

    - by Renan Vinícius Mozone
    I'm in charge of maintaining a JSP-based application running on IBM WebSphere 6.1 (IBM J9 JVM). All JSP pages have a static include reference, and in this include file some static Java methods are declared. They are included in all JSP pages to offer "easy access" to those utility static methods. I know that this is a very bad way to work, and I'm working to change it. But, just out of curiosity, and to support my effort in changing this, I'm wondering how these "duplicated" static methods are optimized by the JVM JIT compiler. Are they optimized separately even though they have the exact same signature? Or does the JVM JIT compiler "see" that these methods are all identical and provide unified JIT'ed code?

    Read the article

  • GWT application throws an exception when run on Google Chrome with compiler output style set to 'OBF'

    - by Elifarley
    I'd like to know if you guys have faced the same problem I'm facing, and how you are dealing with it. Sometimes, a small and harmless change in a Java class produces strange errors at runtime. These errors only happen if BOTH conditions below are true: 1) the GWT JavaScript compiler output style is set to 'OBF', and 2) the application is run on Google Chrome. So, running the application on Firefox or IE always works. Running with the output style set to 'pretty' or 'detailed' always works, even on Google Chrome. Here's an example of an error message that I got:

        ((TypeError): Property 'top' of object [object DOMWindow] is not a function stack

    And here's what I have:

    - GWT 1.5.3
    - GXT 1.2.4
    - Google Chrome 4 and 5
    - Windows XP

    In order to get rid of this Heisenbug, I have to either deploy my application without obfuscation or endure a time-consuming trial-and-error process in which I re-implement the change in slightly different ways and re-run the application, until the GWT compiler is happy with my code. Would you have a better idea of how to avoid this?

    Read the article

  • How do I inhibit "note C6311" in Microsoft C compiler?

    - by piCookie
    In this maximally clipped source example, the manifest constant FOOBAR is being redefined. This is deliberate, and there is extra code in the live case to make use of each definition. The pragma was added to get rid of a warning message, but then a note appeared, and I don't seem to find a way to get rid of the note. I've been able to modify this particular source to #undef between the #defines, but I would like to know if there's a way to inhibit the note without requiring #undef, since there are multiple constants being handled the same way.

        #pragma warning( disable : 4005 ) // 'identifier' : macro redefinition
        #define FOOBAR FOO
        #define FOOBAR BAR

    The compiler banner and output are as follows:

        Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 12.00.8804 for 80x86
        Copyright (C) Microsoft Corp 1984-1998. All rights reserved.

        message.c
        message.c(3) : note C6311: message.c(2) : see previous definition of 'FOOBAR'
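
    For reference, a sketch of the #undef workaround the poster mentions, which avoids both warning C4005 and note C6311 because each definition is removed before the next one is introduced:

        #define FOOBAR FOO
        /* ... code using the first definition ... */
        #undef FOOBAR
        #define FOOBAR BAR
        /* ... code using the second definition ... */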

    Read the article

  • "Temporary object" warning - is it me or the compiler?

    - by Roddy
    The following snippet gives the warning:

        [C++ Warning] foo.cpp(70): W8030 Temporary used for parameter '_Val' in call
        to 'std::vector<Base *,std::allocator<Base *> >::push_back(Base * const &)'

    .. on the indicated line.

        class Base {
        };

        class Derived : public Base {
        public:
            Derived()  // << warning disappears if constructor is removed!
            {
            };
        };

        std::vector<Base*> list1;
        list1.push_back(new Base);
        list1.push_back(new Derived);  // << Warning on this line!

    The compiler is CodeGear C++Builder 2007. Oddly, if the constructor for Derived is deleted, the warning goes away... Is it me or the compiler?

    Read the article

  • Determine when using the VC90 compiler in VS2010 instead of VS2008?

    - by Dan
    Is there a (Microsoft-specific) CPP macro to determine when I'm using the VC9 compiler in Visual Studio 2010 as opposed to Visual Studio 2008? _MSC_VER returns the compiler version, so with the VS2010 multi-targeting feature, I'll get the same result as with VS2008. The reason for wanting to know the difference is that I created a new VS2010 project which contains code removed from a larger project. I just left the VS2008 stuff "as is", since we're moving away from VS2008 "soon" anyway and I didn't want to go through the hassle of creating a vcproj file along with the new vcxproj. For now, I've just defined my own macro to indicate whether the code is compiled into its own DLL or not; it works just fine, but it would be nice if there were something slightly more elegant.
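
    To make the limitation concrete, a minimal sketch assuming the standard _MSC_VER value of 1500 for VC9: the macro identifies the compiler toolset, not the IDE driving it, which is exactly why it cannot distinguish the two cases.

        /* _MSC_VER reflects the compiler, not the IDE: the VC9 toolset
           defines it as 1500 whether it is invoked by VS2008 or by
           VS2010's multi-targeting. */
        #if defined(_MSC_VER) && _MSC_VER == 1500
            /* Visual C++ 9.0 toolset; the host IDE is indistinguishable here */
        #endif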

    Read the article

  • Are macro definitions compatible between MIPS and Intel C compiler?

    - by Derek
    I seem to be having a problem with a macro that I have defined in a C program. I compile this software and run it successfully with the MIPS compiler. With icc it builds OK but throws a "Segmentation fault" error at runtime. I compiled both of these on 64-bit architectures (MIPS on SGI, with the -64 flag, and icc on an Intel platform). Is there some magic switch I need to use to make this work correctly on both systems? I turned on warnings for the Intel compiler, and EVERY one of the places in my program where a macro is invoked throws a warning, usually something along the lines of mismatched types on the macro's parameters (int to char *) or some such thing.
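
    Those int-to-char* warnings are worth taking seriously on 64-bit platforms. A minimal sketch of the failure mode they usually point at, using a hypothetical macro and helper (not the poster's actual code): when a macro mixes int and pointer arguments, a 64-bit pointer can be silently narrowed through a 32-bit int, which may appear to work on one platform and segfault on another.

        #include <stdio.h>

        /* Hypothetical macro: the call site gives no hint that the
           argument must be a pointer. */
        #define SHOW(x) show(x)

        /* With this prototype visible, the compiler must diagnose an int
           argument instead of letting a pointer be mangled at runtime. */
        static void show(const char *s) {
            printf("%s\n", s);
        }

        int main(void) {
            SHOW("hello");   /* correct use */
            /* SHOW(42);        now a compile-time diagnostic, not a
                                mysterious crash on one platform */
            return 0;
        }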

    Read the article

  • Compile Flex application without debug? Optimisation options for flex compiler?

    - by maoanz
    I have created a simple test application with the following code:

        var i : int;
        for (i = 0; i < 3000000; i++) {
            trace(i);
        }

    When I run the application, it's very slow to load, which means the trace is running. I checked the Flash Player by right-clicking; the debugger option is not enabled. So I wonder if there is a compiler option to exclude the trace calls. Otherwise, I have to remove all the traces in the program manually. Are there any other compiler options to optimize a Flex application as much as possible? Thanks

    Read the article

  • Will an optimizing compiler remove calls to a method whose result will be multiplied by zero?

    - by Tim R.
    Suppose you have a computationally expensive method, Compute(p), which returns some float, and another method, Falloff(p), which returns another float from zero to one. If you compute Falloff(p) * Compute(p), will Compute(p) still run when Falloff(p) returns zero? Or would you need to write a special case to prevent Compute(p) from running unnecessarily? Theoretically, an optimizing compiler could determine that omitting Compute when Falloff returns zero would have no effect on the program. However, this is hard to test: if you have Compute output some debug data to determine whether it is running, the compiler knows not to omit it because of that debug output, resulting in a sort of Schrödinger's cat situation. I know the safe solution to this problem is just to add the special case, but I'm curious.
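
    A sketch of the hand-written special case, with hypothetical Falloff()/Compute() declarations standing in for the poster's methods. Note that in general a compiler cannot perform this elision on floats even when it can see both functions: Compute(p) may have side effects, and 0.0f * x is not 0.0f when x is NaN or infinity, so skipping the call could change observable behaviour.

        float Falloff(float p);   /* assumed: returns a value in [0, 1] */
        float Compute(float p);   /* assumed: expensive to evaluate */

        float shade(float p) {
            float f = Falloff(p);
            /* Guard by hand: skip the expensive call when the factor
               is exactly zero. */
            return (f == 0.0f) ? 0.0f : f * Compute(p);
        }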

    Read the article

  • Why does the Scala compiler disallow overloaded methods with default arguments?

    - by soc
    While there might be valid cases where such method overloadings could become ambiguous, why does the compiler disallow code which is neither ambiguous at compile time nor at run time? Example:

        // This fails:
        def foo(a: String)(b: Int = 42) = a + b
        def foo(a: Int)   (b: Int = 42) = a + b

        // This fails, too. Even if there is no position in the argument list
        // where the types are the same.
        def foo(a: Int)   (b: Int = 42)       = a + b
        def foo(a: String)(b: String = "Foo") = a + b

        // This is OK:
        def foo(a: String)(b: Int)      = a + b
        def foo(a: Int)   (b: Int = 42) = a + b

        // Even this is OK.
        def foo(a: Int)(b: Int)            = a + b
        def foo(a: Int)(b: String = "Foo") = a + b

        val bar = foo(42)_ // This complains obviously ...

    Are there any reasons why these restrictions can't be loosened a bit? Especially when converting heavily overloaded Java code to Scala, default arguments are very important, and it isn't nice to find out, after replacing plenty of Java methods with one Scala method, that the spec/compiler imposes arbitrary restrictions.

    Read the article

  • How do I solve MSSQL 2008 install error, "The MOF compiler could not connect with the WMI server"?

    - by nbolton
    Possibly related to: SQL Server 2008 Install fails error reading etwcls.mof

    After manually removing MSSQL 2008 from my system (uninstall failed to remove two instances), I receive the following error when trying to re-install:

        The MOF compiler could not connect with the WMI server. This is either because
        of a semantic error such as an incompatibility with the existing WMI repository
        or an actual error such as the failure of the WMI server to start.

    It seems that mofcomp is failing with one of the .mof files, but I'm not sure which, or why. Digging through the Connect article gave some indications, but no solution. I've run winmgmt /salvagerepository, which returns "WMI repository is consistent". Currently, I'm unable to install MSSQL 2008. Please help!

    Read the article

  • Sound issues after trying everything

    - by Lerp
    I cannot get my sound working properly no matter what I do. It's very annoying, as it's the only thing preventing me from making Ubuntu my main OS. At the moment my sound always plays through both my speakers and my headphones regardless of settings, except that the sound through the headphones is crackly. It is also a bit quiet, even though everything is maxed. I've managed to improve the situation to a point where the sound out of my speakers is perfect, but I have none at all from my headphones. I do have two connectors listed in the sound settings, but regardless of which one is selected it always plays through the speakers. I think this might have something to do with the fact that my speakers are plugged into the front of my computer (typically the headphone jack) and my headphones are plugged into the back, but when I try disconnecting the speakers from the front there is still no sound from the headphones. I fixed the speaker sound by going through the sound settings, making sure they were all set to 100%, then rebooting.

    Things I have tried:

    - Maxing everything and unmuting everything in alsamixer.
    - Uninstalling pulseaudio.
    - Making gstreamer use only alsa via gstreamer-properties. This worked with the sound test button, including independent sound between headphones and speakers, but when I reset the computer it no longer worked. So I tried setting it manually in gconf-editor, which didn't work either.
    - Reinstalling alsa and pulseaudio.
    - Setting the model in /etc/modprobe.d/alsa-base.conf to 6stack and to 6stack-dig; neither worked.
    - Upgrading to 12.10.

    Here's some command output to help you diagnose my problem.

    aplay -l:

        **** List of PLAYBACK Hardware Devices ****
        card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
        card 0: Intel [HDA Intel], device 1: AD198x Digital [AD198x Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: Intel [HDA Intel], device 2: AD198x Headphone [AD198x Headphone]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    sudo lshw -C sound:

        *-multimedia
             description: Audio device
             product: 82801JI (ICH10 Family) HD Audio Controller
             vendor: Intel Corporation
             physical id: 1b
             bus info: pci@0000:00:1b.0
             version: 00
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: driver=snd_hda_intel latency=0
             resources: irq:70 memory:f7ff8000-f7ffbfff

    cat /proc/asound/card*/codec* | grep "Codec":

        Codec: Analog Devices AD1989B

    cat /etc/modprobe.d/alsa-base.conf:

        # autoloader aliases
        install sound-slot-0 /sbin/modprobe snd-card-0
        install sound-slot-1 /sbin/modprobe snd-card-1
        install sound-slot-2 /sbin/modprobe snd-card-2
        install sound-slot-3 /sbin/modprobe snd-card-3
        install sound-slot-4 /sbin/modprobe snd-card-4
        install sound-slot-5 /sbin/modprobe snd-card-5
        install sound-slot-6 /sbin/modprobe snd-card-6
        install sound-slot-7 /sbin/modprobe snd-card-7

        # Cause optional modules to be loaded above generic modules
        install snd /sbin/modprobe --ignore-install snd $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-ioctl32 ; /sbin/modprobe --quiet --use-blacklist snd-seq ; }
        #
        # Workaround at bug #499695 (reverted in Ubuntu see LP #319505)
        install snd-pcm /sbin/modprobe --ignore-install snd-pcm $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-pcm-oss ; : ; }
        install snd-mixer /sbin/modprobe --ignore-install snd-mixer $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-mixer-oss ; : ; }
        install snd-seq /sbin/modprobe --ignore-install snd-seq $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; /sbin/modprobe --quiet --use-blacklist snd-seq-oss ; : ; }
        #
        install snd-rawmidi /sbin/modprobe --ignore-install snd-rawmidi $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; : ; }

        # Cause optional modules to be loaded above sound card driver modules
        install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-emu10k1-synth ; }
        install snd-via82xx /sbin/modprobe --ignore-install snd-via82xx $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq ; }

        # Load saa7134-alsa instead of saa7134 (which gets dragged in by it anyway)
        install saa7134 /sbin/modprobe --ignore-install saa7134 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist saa7134-alsa ; : ; }

        # Prevent abnormal drivers from grabbing index 0
        options bt87x index=-2
        options cx88_alsa index=-2
        options saa7134-alsa index=-2
        options snd-atiixp-modem index=-2
        options snd-intel8x0m index=-2
        options snd-via82xx-modem index=-2
        options snd-usb-audio index=-2
        options snd-usb-caiaq index=-2
        options snd-usb-ua101 index=-2
        options snd-usb-us122l index=-2
        options snd-usb-usx2y index=-2

        # Ubuntu #62691, enable MPU for snd-cmipci
        options snd-cmipci mpu_port=0x330 fm_port=0x388

        # Keep snd-pcsp from being loaded as first soundcard
        options snd-pcsp index=-2

        # Keep snd-usb-audio from beeing loaded as first soundcard
        options snd-usb-audio index=-2

        options snd-hda-intel model=6stack

    Read the article

  • Issues with configuration of Apache and mod_auth_sspi

    - by TekiusFanatikus
    I've been able to get this working using XAMPP with Apache 2.0.55 and XAMPP Apache 2.2.14 without any problems. However, when I attempt to configure our intranet server (Apache 2.0.59), I don't get the same results. On XAMPP, the result is that the following variables contain the desired information: $_SERVER["REMOTE_USER"] and $_SERVER["PHP_AUTH_USER"]. On the intranet server, they are blank; I'm expecting "domain/user_name".

    Conf file stuff:

        <Directory "/xxx/xampp/htdocs/">
            #
            # Possible values for the Options directive are "None", "All",
            # or any combination of:
            #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
            #
            # Note that "MultiViews" must be named *explicitly* --- "Options All"
            # doesn't give it to you.
            #
            # The Options directive is both complicated and important. Please see
            # http://httpd.apache.org/docs/2.2/mod/core.html#options
            # for more information.
            #
            #Options Indexes FollowSymLinks Includes ExecCGI
            Options Indexes FollowSymLinks

            #
            # AllowOverride controls what directives may be placed in .htaccess files.
            # It can be "All", "None", or any combination of the keywords:
            #   Options FileInfo AuthConfig Limit
            #
            #AllowOverride All
            AllowOverride None

            #
            # Controls who can get stuff from this server.
            #
            #Order allow,deny
            #Allow from all
            Order allow,deny
            Allow from all

            #NT Domain Login
            AuthName "Intranet"
            AuthType SSPI
            SSPIAuth On
            SSPIAuthoritative On
            SSPIDomain "xxxx"
            SSPIOfferBasic Off
            SSPIPerRequestAuth On
            SSPIOmitDomain Off   # keep domain name in userid string
            SSPIUsernameCase lower
            Require valid-user
        </Directory>

    I would like to note that I've modified the paths to reflect the intranet environment. I'm using the following module: http://sourceforge.net/projects/mod-auth-sspi/

    Once the module is installed and the conf file is modified, the intranet environment's server scope isn't populated with the expected variables.

    Edit #1

        <Directory "/path_here">
            # (stock Options/AllowOverride/Order comment block unchanged from above)
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all

            #NT Domain Login
            AuthName "Intranet"
            AuthType SSPI
            SSPIAuth On
            SSPIAuthoritative On
            SSPIDomain "domain_here"
            SSPIOfferBasic On
            SSPIPerRequestAuth On
            SSPIOmitDomain Off   # keep domain name in userid string
            SSPIUsernameCase lower
            Require valid-user
        </Directory>

    Read the article

  • Knowledge Pathways Designer - Recommended Settings

    - by ted.henson
    The General page of the Options dialog box contains the application preferences for Knowledge Pathways Designer. It is recommended that you leave certain settings as they are, unless you have a specific reason for changing them. The following are a few of the settings on the General page, with an explanation of the recommended setting, in the order they appear on the page:

    Allow version 2.0 style links: This option should remain disabled unless you were using content that was created using version 2.0 of Knowledge Pathways and you want the same linking functionality that existed in version 2.0. This feature enables you to reuse parts of titles that contain no AUs. However, keep in mind that this type of link is not a true link, but a cross between a copy and a link. To create a 2.0 style link, you drag and drop sections between titles. You can only create 2.0 style links to sections that belong to the Title AU. When creating a version 2.0 style link, your mouse pointer will change to indicate a 2.0 link is being created.

    Confirm deletion of outline items and Confirm deletion of titles: It is recommended that these options remain enabled to avoid deleting something by accident.

    Display tracking data loss warning when opening a published title: It is recommended that this option be enabled so you will receive the warning message when you open the development copy of a title, reminding you of the implications of your changes.

    Copy files when converting a Section to an Assignable Unit: This option should remain enabled unless you have a specific reason for not copying the files. If this is disabled, you will (in effect) lose your content files upon converting, because they will not be copied to the new AU directory on the content root. In this case, you would need to use Windows Explorer to copy your files manually.

    Working with Spelling Options

    All of the spelling options are enabled by default. Your design team can review these options to determine if you want to make changes, depending upon your specific needs.

    Understanding Dictionary Options

    You should leave the dictionary options as they are, unless you have a specific reason for changing them. While you can delete the user (customizable) dictionary, doing so is not recommended.

    Setting Check In/Check Out Options

    The ability to check in and check out titles and AUs will impact the efficiency of your design team. Decide what your check in and check out processes are before you start developing titles. The Check In/Check Out page of the Options dialog box contains two options that affect what happens when you open a title using the Open Title dialog box. Both of these options are enabled by default and are described below:

    Check Out for editing enabled: This option ensures that the Check Out for editing option will be selected when you open the development copy of a title from the Open Title dialog box. If this option is disabled, you must select the Check Out for editing option every time you want to check out a title for editing.

    Attempt to Check Out for entire branch: When this option is enabled, Designer checks out the selected title and all AUs and sections that are part of that title, provided they are available for check out. If this option is disabled, you will only check out the Title AU and anything that belongs to that Title AU (e.g., sections, questions, etc.), but not other AUs.

    The Check In/Check Out page of the Options dialog box also contains options that control what happens when you close a title. You can choose one option in the Check In when Closing a Title area. The option selected is a matter of preference, and you should determine which option is most appropriate for your design team.

    Read the article

  • Apache Options -Indexes give me 404 instead of 403, why?

    - by netmano
    I have an Apache/2.2.21 (Debian) web server on which I disabled directory listing with Options -Indexes, but now I get a 404 error for a directory where I think I should get a 403. I have no idea why I get 404 rather than 403. What should I check? I have disabled the autoindex module; after that I got a 404 for every URL that requests a directory listing (e.g. www.somesite.com/dir). How can I get a 403 for this? (The dir does exist.) As a test, I also put Options -Index at the end of the main config file (apache2.conf).

    Read the article

  • How to easily use Windows 7 search advanced options?

    - by Scott Evernden
    Is there an alternative to trying to remember all the advanced search options, like an actual GUI, as we had for Windows XP? As powerful as Windows Search apparently is, I cannot possibly remember all the options available. How is a mere mortal like my Dad supposed to understand and retain all this? I get the shakes every time I need to find something on Win 7. Anyone have some relief?

    Part 2: Why does it re-run a search if I add a column and try to sort on that?

    Read the article

  • Can I disable Pam Loginuid? Can I find out options used to configure kernel?

    - by dunxd
    I am getting a lot of the following types of error in my secure log on a CentOS 5.4 server:

        crond[10445]: pam_loginuid(crond:session): set_loginuid failed opening loginuid
        sshd[10473]: pam_loginuid(sshd:session): set_loginuid failed opening loginuid

    I've seen discussion of this being caused when using a non-standard kernel without the correct CONFIG_AUDIT and CONFIG_AUDITSYSCALL options set. Where this is the case, it is advised to comment out some lines in the pam.d config files. I am running a Virtual Private Server where I need to use the kernel provided by the supplier. Is there a way to find out what options they used to configure the kernel? I want to verify if the above is the cause. If this turns out not to be the cause, what are the risks of disabling pam_loginuid for crond and sshd?

    Read the article
